Dataset schema (field: type, observed range):
id: int64 (39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (length 0 to 30)
39,924,517
https://en.wikipedia.org/wiki/Malassezia%20caprae
Malassezia caprae is a fungus first isolated from goats, which can cause opportunistic infections in animals. Its type strain is MA383=CBS 10434. This species will not grow without lipid supplementation. It grows slowly and forms small colonies (average diameter ). In the lab, colonies will not grow at temperatures of 40 °C, differing from M. sympodialis-related species, such as M. dermatis and M. nana, which can grow at this temperature. Malassezia caprae cells are ellipsoidal to more or less spherical. References Further reading Basidiomycota Parasitic fungi Yeasts Fungi described in 2007 Fungus species
Malassezia caprae
[ "Biology" ]
142
[ "Yeasts", "Fungi", "Fungus species" ]
39,924,732
https://en.wikipedia.org/wiki/Sphingosine-1-phosphate%20receptor
The sphingosine-1-phosphate receptors are a class of G protein-coupled receptors that are targets of the lipid signalling molecule sphingosine-1-phosphate (S1P). They are divided into five subtypes: S1PR1, S1PR2, S1PR3, S1PR4 and S1PR5. Discovery In 1990, S1PR1 became the first member of the S1P receptor family to be cloned, from endothelial cells. Later, S1PR2 and S1PR3 were cloned from rat brain and a human genomic library, respectively. Finally, S1PR4 and S1PR5 were cloned from in vitro differentiated human dendritic cells and a rat cDNA library, respectively. Function The sphingosine-1-phosphate receptors regulate fundamental biological processes such as cell proliferation, angiogenesis, migration, cytoskeleton organization, endothelial cell chemotaxis, immune cell trafficking and mitogenesis. Sphingosine-1-phosphate receptors are also involved in immune modulation and are directly involved in suppression of innate immune responses from T cells. Subtypes Sphingosine-1-phosphate (S1P) receptors are divided into five subtypes: S1PR1, S1PR2, S1PR3, S1PR4 and S1PR5. They are expressed in a wide variety of tissues, with each subtype exhibiting a different cell specificity, although they are found at their highest density on leukocytes. S1PR1, S1PR2 and S1PR3 are expressed ubiquitously. The expression of S1PR4 and S1PR5 is less widespread: S1PR4 is confined to lymphoid and hematopoietic tissues, whereas S1PR5 is primarily located in the white matter of the central nervous system (CNS) and the spleen. G protein interactions and selective ligands Sphingosine-1-phosphate (S1P) is the endogenous agonist of all five subtypes. References G protein-coupled receptors
Sphingosine-1-phosphate receptor
[ "Chemistry" ]
437
[ "G protein-coupled receptors", "Signal transduction" ]
39,924,959
https://en.wikipedia.org/wiki/Environmental%20health%20ethics
Environmental health ethics is a field of study that combines environmental health policies and ethical consideration towards a mutually acceptable goal. Given the myriad of environmental issues facing society today, a sound ethical background can be applied in an attempt to reach a compromise between conflicting interests, like anthropocentrism, global stewardship, religious values, economic development, and public health. A small sample of the scientific disciplines involved in environmental health ethics includes: ecology, toxicology, epidemiology, and exposure biology. Ethical approaches Virtue theories Christian ethics Natural rights Kantianism John Stuart Mill's Utilitarianism Richard Brandt's Utilitarianism W. D. Ross's view Environmental health topics Environmental health embodies a wide range of topics with which there are many ethical issues. Many of these issues can be traced back to a moral obligation towards life forms and other units of biological organization, like ecosystems, and the nature of that obligation. Humanity's place within any given ecosystem must be weighed against the importance of the regional and global health of the environment as a whole. Human and animal rights, property use, and other freedoms can be combined with factors like social justice, equality, sustainability, and globalism to form ethical dilemmas. In response to difficulties with using moral theories to resolve ethical dilemmas, various approaches can be used. A case-by-case approach may be too slow when considering the volume of issues at present, so an alternative may be better suited to the task. Taking into account commonly accepted moral virtues can guide conduct and address conflicts between values, rules, and obligations. Most, if not all, of these generally held principles can be found in the ethical approaches listed above, an example of which may be 'respect human rights'. This Principle-Based Method for ethical decision making can be viewed below. State the question or problem. Gather the relevant information. Explore different opinions. Apply ethical principles to the different options. Resolve any conflicts among ethical principles. Take action. After settling on a methodology for analyzing different ethical situations, we can turn to a broad survey of some of the most relevant issues facing humanity today. Pest control Pesticides are used throughout the world in an attempt to control, repel, or kill pest species. Though many species of insect can be commonly identified throughout the world, some may harm human health and well-being while still providing a benefit to the overall environment of an area. One example of this phenomenon is bees, whose stings can cause serious allergic reactions in some individuals, though they also play a crucial role in the pollination of an ecosystem. A further example would be various species of bat, which, though they can transmit rabies, also help to control mosquito populations. Perhaps the biggest event in the history of pesticide use is the widespread use of DDT to control various pests, including mosquitoes and lice. Its long-term effects had not been sufficiently documented and thus it was assumed to be of low toxicity. Over time the widespread use of DDT began to have serious environmental and human health consequences.
Organisms further up the food chain showed significant amounts of DDT in their tissues, and this presence had adverse health effects, such as the weakening of predatory bird eggshells and fish kills. Adverse effects among humans included endocrine system disruption, which can lead to reproductive complications. Among the most disruptive pesticides are those dubbed persistent organic pollutants (POPs), which do not break down easily in the environment, or, if they do, break down into something equally harmful. Because POPs represent such a threat to organisms within an environment, especially those higher on a food chain, specific international legislation, the Stockholm Convention, banned the use of several of them. Some of these pollutants are DDT, aldrin, chlordane, dieldrin, endrin, heptachlor, hexachlorobenzene, and toxaphene. With these considerations in place, it falls to lawmakers to regulate the responsible use of pesticides, and ethics can provide a starting point for considering the best option. Extensive use of pesticides would improve life in the short term but be harmful in the long term, and completely banning their use would likewise be detrimental to overall environmental and human health. One encouraged strategy is Integrated Pest Management (IPM), in which pesticides are used responsibly to limit agricultural loss while being monitored for growing pest resistance and environmental toxicity. The Centers for Disease Control and Prevention (CDC) has also taken measures to educate clinicians and the public about relevant issues and the best ways to manage pesticide use. Genetic engineering, food, and nutrition Genetic engineering concerns the scientific alteration of plant and animal DNA in order to combat pests, disease, drought, and other factors which can harm the organism. Objections to genetically modified organisms (GMOs) include theological (playing God) and economic (GMOs can be costly) viewpoints. Genetic engineering of both plants and animals is subject to FDA regulation, which may include public labeling of the product or otherwise marking it as genetically modified. Food and nutrition also fall under the category of things regulated by the FDA; however, the ethics of this regulation are not always clear. The health consequences of unsafe food and of eating in excessive quantities are well documented, yet societies generally have no legislation against over-consumption. Ethical principles of utilitarianism and social justice conflict with individual freedom of choice in determining access to healthy, safe food. Pollution and waste Air, water, and solid waste pollution are environmental health issues which can adversely affect people, plants, and animals. From an ethical standpoint, many things about pollutants can be studied, like questions of disposal, storage, recycling, and responsibility. A few examples of air pollutants include particulate matter, sulfur dioxide, nitrogen oxides, carbon oxides, chlorofluorocarbons, and heavy metals (e.g. mercury). Perhaps the largest ethical debate concerning air pollution is how to balance economic development against the interests of public health, safety, and cleanliness. With both sides offering benefits and drawbacks, it can be difficult to establish an acceptable compromise. Legislation enacted to prevent widespread use of chlorofluorocarbons, which cause significant environmental damage, can be seen as one instance of economic development taking lower priority than public health.
Water pollution is another type of widespread contamination which has ethical implications in mitigating the source and balancing conflicting priorities. The two types of water contaminants are anthropogenic compounds (generally referred to as pollutants, such as disinfection products, metals, municipal and agricultural waste, and petroleum and coal hydrocarbons) and natural contaminants (such as microorganisms or chemicals like arsenic and nitrogen, which are naturally present in the soil). A common misconception is that chemicals leaking into the soil will be diluted over time and rendered harmless. This theory does not take into account persistent organic pollutants, which do not break down easily and sometimes break down into more harmful constituents. Most industrialized nations have legislation in place to protect the public from impure drinking water. The Safe Drinking Water Act of 1974 established maximum levels of pollutants in public drinking water; however, its power to regulate private sources of bottled water or wells is severely limited. An additional issue regarding water pollution is the relative scarcity of clean fresh water on the earth, an issue which acutely presents itself in areas prone to drought. Agriculture uses a great deal of water, so much so that shortages in drought-prone areas can significantly affect crop yield. The main ethical issue with water pollution is whether growth should be restricted in order to preserve public health. An additional issue is the regulation of private corporations, whose activities may put populations of citizens at risk of groundwater contamination. Solid waste pollution includes pollutants like agricultural waste, construction waste, electronic waste, hazardous waste, medical waste, and mining waste. The two prevailing strategies for solid waste management are prevention and treatment/disposal. Waste prevention is preferable, both economically and environmentally, as it does not necessitate costly removal and storage. Many of the same ethical issues described above manifest themselves in the handling and storage of solid waste, along with an additional social justice issue of exactly where the storage area for solid waste should be located. Chemical regulation Newly emerging materials, including carbon particles, nanotubes, and other products of nanotechnology, are very new technologies whose long-term effects have not been satisfactorily studied. This lack of research suggests that cautious use of these products is warranted, especially when short-term effects include harmful symptoms. In opposition to this caution stands the nanotechnology industry, which is growing very rapidly and may be able to alleviate many of the problems facing society today, such as through selective cancer treatment and solutions to the energy crisis. Perhaps the largest obstacle to testing is the sheer diversity of nanoparticles, of which the only unifying factor is their minuscule size. References Cranor C. 2011. Legally Poisoned: How the Law Puts Us at Risk from Toxicants. Cambridge, MA: Harvard University Press. Elliott KC. 2011. Is A Little Pollution Good for You? Incorporating Societal Values in Environmental Research. New York: Oxford University Press. Gardiner S, Caney S, Jamieson D, and Shue H (eds.). 2010. Climate Ethics: Essential Readings. New York: Oxford University Press. Shrader-Frechette KS. 2002. Environmental Justice: Creating Equity, Reclaiming Democracy. New York: Oxford University Press. Environmental ethics Environmental health
Environmental health ethics
[ "Environmental_science" ]
1,968
[ "Environmental ethics" ]
39,925,201
https://en.wikipedia.org/wiki/Hamburg%20Aviation
Hamburg Aviation, formerly the "Luftfahrtcluster Metropolregion Hamburg e.V." (Aviation Cluster Hamburg Metropolitan Region), is an association of aviation organizations in Hamburg, Germany. Its goal is to promote the aviation industry in the Hamburg Metropolitan Region. Hamburg Metropolitan Region Companies based in the Hamburg Metropolitan Region include the aircraft manufacturer Airbus and Lufthansa Technik. Hamburg Airport, which first opened in 1912, is one of the world's oldest airports still operating at its original location. There are over 300 specialist suppliers, including branches of Diehl Aerospace. As of 2012, the region had over 40,000 aviation employees, making it one of the largest sites for civil aviation in the world. Educational institutions Hamburg University of Applied Sciences (HAW Hamburg) Helmut Schmidt University / University of the German Federal Armed Forces Hamburg Hamburg University of Technology (TUHH) University of Hamburg Also based in Hamburg are the German Aerospace Center’s Institute of Aerospace Medicine and Institute of Air Transportation Systems. Crystal Cabin Award Hamburg is the host city of the annual Aircraft Interiors Expo, a trade show for the aircraft cabin industry. The Crystal Cabin Award was launched in 2007 to honour innovation in the field of cabin design. The prize is funded by sponsors from the aviation industry. Hamburg Aerospace Cluster In 2001, companies, universities and government bodies began collaborating, forming what became Hamburg Aviation. This developed into the "Luftfahrtcluster Metropolregion Hamburg e.V." association, with 15 founding members, officially established in 2011. Its mission statement is to promote the aviation industry in the Hamburg business cluster. Recognitions and projects Leading-Edge Cluster competition Center of Applied Aeronautical Research European Aerospace Cluster Partnership Faszination Technik Klub Founding members Commercial enterprises Airbus Lufthansa Technik AG Hamburg Airport Associations Hanse-Aerospace e.V. HECAS – Hanseatic Engineering & Consulting Association German Aerospace Industries Association (BDLI) Research facilities German Aerospace Center (DLR) Hamburg Centre of Aviation Training (HCAT) Center for Applied Aeronautical Research (ZAL) Universities Hamburg University of Applied Sciences (HAW Hamburg) Hamburg University of Technology (TUHH) Helmut Schmidt University (HSU) University of Hamburg Public sector HWF Hamburgische Gesellschaft für Wirtschaftsförderung mbH (Hamburg Business Development Corporation) Department of the Economy, Transport and Innovation (BWVI) See also Aviation Notes External links http://www.hamburg-aviation.de http://www.faszination-fuer-technik.de http://www.eacp-aero.eu https://web.archive.org/web/20130829020312/http://care-aero.eu/ http://www.crystal-cabin-award.com Consortia in Germany Engineering university associations and consortia Regional science Business organisations based in Germany Aeronautics organizations Economy of Hamburg Organisations based in Hamburg
Hamburg Aviation
[ "Engineering" ]
595
[ "Aeronautics organizations" ]
39,925,358
https://en.wikipedia.org/wiki/Photoactivatable%20probes
Photoactivatable probes, or caged probes, are cellular players (proteins, nucleic acids, small molecules) that can be triggered by a flash of light. They are used in biological research to study processes in cells. The basic principle is to bring a photoactivatable agent (e.g. a small molecule modified with a light-responsive group, or a protein tagged with an artificial photoreceptor protein) to cells, tissues or even living animals and specifically control its activity by illumination. Light is a well-suited external trigger for these types of experiments since it is non-invasive and does not influence normal cellular processes (though care has to be taken when using light in the ultraviolet part of the spectrum to avoid DNA damage). Furthermore, light offers high spatial and temporal control. Usually, the activation stimulus comes from a laser or a UV lamp and can be incorporated into the same microscope used for monitoring the effect. All these advantages have led to the development of a wide variety of different photoactivatable probes. Even though the light-induced activation step is usually irreversible, reversible changes can be induced in a number of photoswitches. History The first reported use of photoprotected analogues for biological studies was the synthesis and application of caged ATP by Joseph F. Hoffman in 1978 in his study of Na:K pumps. As of 2013, ATP is still the most commonly used caged compound. Hoffman was also the one to coin the term 'caged' for this type of modified molecule. This nomenclature persisted despite being scientifically a misnomer, since it suggests the idea of the molecule being held in a physical cage (as in a fullerene). However, scientists have tried to introduce the newer, more accurate term 'photoactivatable probes'. Both nomenclatures are currently in use. Major discoveries were made in the following years with caged neurotransmitters, such as glutamate, which is used to map functional neuronal circuits in mammalian brain slices. Small molecules are easier to modify with photocleavable groups than larger constructs such as proteins. Photoactivatable proteins were serendipitously discovered much later (in 2002), through the observation that the Kaede protein, when left on the bench exposed to sunlight, changed its fluorescence to a longer wavelength. Photoactivatable proteins Proteins which sense and react to light were originally isolated from photoreceptors in algae, corals and other marine organisms. The two most commonly used classes of photoactivatable proteins in scientific research, as of 2013, are photoactivatable fluorescent proteins and retinylidene proteins. Photoactivatable fluorescent proteins change to a longer emission wavelength upon illumination with UV light. In Kaede, this change is brought about by cleavage of the chromophore tripeptide His62-Tyr63-Gly64. This discovery paved the way for modern super-resolution microscopy techniques like PALM or STORM. Retinylidene proteins, such as channelrhodopsins or halorhodopsins, are light-sensitive cation and chloride channels, which open during illumination with blue and yellow light, respectively. This principle has been successfully employed to control the activity of neurons in living cells and even tissue, and gave rise to a whole new research field, optogenetics. Photoactivatable nucleic acids Nucleic acids play important roles as cellular information storage and gene regulation machinery.
In efforts to regulate this machinery with light, DNA and RNA have been modified with photocleavable groups at the backbone (in an approach called 'statistical backbone caging'; the protection groups react mainly with backbone phosphate groups). In the organism, modified nucleic acids are 'silent' and only upon irradiation with light can their activity be turned on. This approach finds use in developmental biology, where the chronology of gene activity is of particular interest. Caged nucleic acids enable researchers to very precisely turn on genes of interest during the development of whole organisms. Photoactivatable small molecules Small molecules are easily modified by chemical synthesis and therefore were among the first to be modified and used in biological studies. A wide variety of caged small molecules exist. Photoactivatable fluorophores Photochemical reactions can convert a nonemissive reactant into a fluorescent product. These reactions can be exploited in super-resolution microscopy to allow localization beyond the diffraction limit. Caged neurotransmitters The advantages of activating effectors with light (precise control, fast response, high specificity, no cross-reactions) are particularly attractive for neurotransmitters. Caged dopamine, serotonin, glycine and GABA have been synthesized and their effects on neuronal activity have been extensively studied. Caged ions Not only neurotransmitters, but also ions can be caged. Since calcium is a potent cellular second messenger, caged variants have been synthesized by employing the ion-trapping properties of EDTA. Light-induced cleavage of the EDTA backbone leads to a wave of free calcium inside the cell. Caged hormones Another class of molecules used for transmitting signals in the cell is hormones. Caged derivatives of estradiol were shown to induce gene expression upon uncaging. Other caged hormones have been used to study receptor-ligand interactions. Caged lipids Lipids have been shown to be involved in signaling. To dissect the roles that lipids have in certain pathways, it is advantageous to be able to increase the concentration of the signaling lipid in a very rapid manner. Therefore, many signaling lipids have also been protected with photoremovable protection groups and their effects on cellular signaling have been studied. Caged PI3P has been shown to induce endosomal fusion. Caged IP3 helped elucidate the effect of IP3 on action potentials, and caged diacylglycerol has been used to determine the influence of fatty acid chain length on PKC-dependent signaling. When studying protein-lipid interactions, another type of photoactivation has proved to provide many insights. Photolabile groups such as diazirines or benzophenones, which upon UV irradiation leave behind highly reactive intermediates (such as carbenes), can be used to crosslink the lipid of interest to its interacting proteins. This methodology is especially useful to verify existing and discover new protein-lipid interactions. See also Fluorescence microscope Optical microscope Photoactivatable fluorescent protein Photoactivated localization microscopy (PALM) Stochastic optical reconstruction microscopy (STORM) Super-resolution microscopy References External links Microbiology equipment
Photoactivatable probes
[ "Biology" ]
1,368
[ "Microbiology equipment" ]
39,925,767
https://en.wikipedia.org/wiki/Oral%20History%20Metadata%20Synchronizer
The Oral History Metadata Synchronizer (OHMS) is a web application designed to enhance online access to oral history interviews. OHMS was originally designed and created by the Louie B. Nunn Center for Oral History, University of Kentucky Libraries in 2008 for deployment through the Kentucky Digital Library. In 2011, the Louie B. Nunn Center for Oral History received a grant from the Institute for Museum and Library Services to make the system open source and free to use, with interoperability and sustainability as the primary goals. According to the Nunn Center, "The primary purpose for OHMS is to empower users to more effectively and efficiently discover information in an oral history interview online by connecting the user from a search result to the corresponding moment in an interview." OHMS is a two-part system which includes the web app (the back-end) and the viewer (user interface). The web application is used to prepare the oral history interview by embedding timecode into a transcript or creating a time-coded index of the interview, which is then viewable online in the OHMS Viewer, accessible from the archive's chosen content management system. "The programme enables researchers to search through an oral history recording using keywords, and to be taken to the exact moment that the keyword is spoken. It means researchers do not have to scroll through hours of tape or pages of transcript before finding the topic they are interested in." The original version of OHMS synchronized transcribed text with time code in the audio/video, as well as providing a user map/viewer that connected search results of a transcript to the corresponding moments in the audio or video. OHMS designer Doug Boyd writes, "OHMS inexpensively and efficiently encodes transcripts of interviews and then connects the transcripts to the corresponding moments in the audio or video interview." In 2011, the Nunn Center introduced the Interview Indexing Module, which allows indexing or annotation of an interview that corresponds to time code. In his article in the Chronicle of Higher Education, Brad Wolverton writes about Doug Boyd's work on OHMS: "Through his work as director of the Louie B. Nunn Center for Oral History he’s developed a method for indexing audio and video recordings, making it easy for researchers to call up precise words without having to listen to endless hours of tape." In the fall of 2011, the Institute for Museum and Library Services awarded the Nunn Center a National Leadership Grant of $195,853 to further develop OHMS for open source distribution. OHMS is currently being developed, and the grant initiative is working with project partners to implement the system beyond the University of Kentucky. OHMS Viewer Examples OHMS Viewer: synchronized transcript OHMS Viewer: interview index OHMS Viewer: synchronized transcript + interview index OHMS Viewer: bilingual (synchronized transcript + translation) Development Timeline According to the Nunn Center's OHMS Website: 2005: Louie B. Nunn Center for Oral History and UK Libraries' Digital Program begin digitization and online access to oral histories. Team: Eric Weig, Kathryn Lybarger, Kopana Terry 2008: Only 50 interviews uploaded using the manual system. UK Libraries hires Dr. Doug Boyd as Director of the Louie B. Nunn Center for Oral History. Boyd and Weig design the system and work with contract programmer Dr. Jack Schmidt to implement the initial version of OHMS. Team: Doug Boyd, Eric Weig, Jack Schmidt. 2009: UK Libraries' programmer Dr.
Michael Slone works with Boyd and Weig to further develop OHMS and prepare an initial functional specification. Team: Doug Boyd, Eric Weig, Michael Slone. 2011: Nunn Center outsources development of OHMS to Artifex Technology Consulting, which rewrites the code completely, adds the Indexing Module, and adds support for video. Team: Doug Boyd, Eric Weig, Michael Slone, Artifex Technology Consulting. 2011: Receives grant from the Institute for Museum and Library Services. 2012: Programmer James Howard hired to develop OHMS and prepare it for distribution. Team: Doug Boyd, Eric Weig, James Howard. 2013: OHMS application made accessible to grant partners. Team: Doug Boyd, Eric Weig, James Howard. 2014: OHMS application and viewer updates include YouTube compatibility and major upgrades to the Interview Manager. References External links Louie B. Nunn Center for Oral History OHMS: Oral History Metadata Synchronizer Nunn Center Blog Digital Omnium: Oral History, Archives and Digital Technology Oral history Online archives of the United States Free and open-source software Library science Metadata
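The synchronization idea at the heart of OHMS, connecting a transcript search hit to the corresponding moment in a recording, can be illustrated in a few lines of Python. This is a minimal sketch under assumed data structures, not the actual OHMS implementation (the real system stores its synchronization metadata in its own file format); the transcript contents and function name here are hypothetical.

```python
# Minimal sketch of timecode-synchronized transcript search.
# A synchronized transcript: (seconds_from_start, text_segment) pairs.
transcript = [
    (0.0,  "My name is Jane Doe and I grew up in Lexington."),
    (42.5, "The tobacco harvest was the center of our year."),
    (95.0, "During the flood of 1937 we moved to higher ground."),
]

def find_moments(transcript, keyword):
    """Return the timecodes of all segments containing the keyword."""
    keyword = keyword.lower()
    return [t for t, text in transcript if keyword in text.lower()]

# A search result jumps the player straight to the matching moment,
# instead of forcing the researcher to scroll through hours of tape.
for t in find_moments(transcript, "flood"):
    print(f"Match at {int(t // 60)}:{int(t % 60):02d}")  # prints "Match at 1:35"
```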
Oral History Metadata Synchronizer
[ "Technology" ]
935
[ "Metadata", "Data" ]
39,926,400
https://en.wikipedia.org/wiki/Space%20Physics%20Archive%20Search%20and%20Extract
The Space Physics Archive Search and Extract (SPASE) effort is an international consortium formed in 2001. Its mission is to define standards and services to enable the establishment and operation of discipline-specific Virtual Observatories. The main focus of the consortium is to define and maintain a standard data model to enable data sharing and interoperability within the Space and Solar Physics community. Another goal of the consortium is to facilitate data search and retrieval across the Space and Solar Physics data environment by providing conventions, tools and services to assist data providers, researchers and general users. The SPASE consortium also encourages collaboration between agencies and groups interested in sharing space and solar physics data. Membership Membership in the SPASE consortium is open to any individual or agency. The consortium meets by telecon every two weeks to discuss and vote on changes to the data model and conventions, and to endorse compliant tools and services. Face-to-face meetings typically occur in conjunction with related science meetings. There are many individuals who are members of the SPASE consortium, with a diverse range of science and agency interests. Participants in the SPASE consortium represent interested groups from NASA, NOAA, JAXA, ESA, CNES and a variety of research institutions. Systems that have adopted the SPASE standards or use SPASE-compliant services can display a "SPASE" logo on their pages. Projects Using SPASE CDPP is using the SPASE standard in its tools. IUGONET is using the SPASE standard in its tools. IMPEx has adapted the SPASE standard to describe simulation runs. References External links Space research Space organizations
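The article describes the SPASE data model only at a high level. Purely as an illustration, a resource description in the spirit of SPASE might carry fields like those below; the field names, identifier scheme, and URLs are simplified assumptions for the sketch, not the normative SPASE schema.

```python
# Illustrative only: a simplified, SPASE-flavored resource description.
# Field names, the identifier scheme, and URLs are assumptions for this
# sketch; consult the SPASE data model documentation for the real schema.
resource = {
    "ResourceID": "spase://EXAMPLE/NumericalData/DemoMission/Magnetometer/PT4S",
    "ResourceHeader": {
        "ResourceName": "Demo Mission fluxgate magnetometer, 4 s averages",
        "Description": "Magnetic field vectors in GSE coordinates.",
    },
    "AccessInformation": {
        "RepositoryID": "spase://EXAMPLE/Repository/DemoArchive",
        "AccessURL": "https://example.org/data/demo/mag/",
    },
}

# A registry of such records is what lets a Virtual Observatory answer
# cross-archive queries such as "all magnetometer data covering 2001-05-12".
print(resource["ResourceID"])
```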
Space Physics Archive Search and Extract
[ "Astronomy" ]
322
[ "Astronomy organizations", "Space organizations" ]
39,926,929
https://en.wikipedia.org/wiki/Marginal%20structural%20model
Marginal structural models are a class of statistical models used for causal inference in epidemiology. Such models handle the issue of time-dependent confounding in the evaluation of the efficacy of interventions by inverse probability weighting for receipt of treatment, allowing estimation of average causal effects. For instance, in the study of the effect of zidovudine on AIDS-related mortality, the CD4 lymphocyte count is used as a treatment indication, is influenced by treatment, and affects survival. Time-dependent confounders are typically highly prognostic of health outcomes and are used in dosing or indication decisions for certain therapies; examples include body weight or lab values such as alanine aminotransferase or bilirubin. The first marginal structural models were introduced in 2000. The works of James Robins, Babette Brumback, and Miguel Hernán provided an intuitive theory and easy-to-implement software which made them popular for the analysis of longitudinal data. References Statistical models Epidemiology Causal inference
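The inverse probability weighting idea above can be made concrete with a small simulation. This is a generic, single-time-point sketch with invented data and variable names, not the longitudinal estimator of Robins, Brumback, and Hernán; it only illustrates how weighting by the inverse of P(treatment | confounder) removes confounding.

```python
# Sketch of inverse probability of treatment weighting (IPTW) for a
# single time point, with simulated data. Marginal structural models
# generalize this to time-varying treatments and confounders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
L = rng.normal(size=n)                      # confounder (e.g. a standardized lab value)
p_treat = 1 / (1 + np.exp(-0.8 * L))        # treatment probability depends on confounder
A = rng.binomial(1, p_treat)                # treatment received
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)  # outcome; true causal effect of A is 1.0

# 1. Model treatment given the confounder (propensity score).
X = sm.add_constant(L)
ps = sm.Logit(A, X).fit(disp=0).predict(X)

# 2. Stabilized weights: P(A) / P(A | L).
pA = A.mean()
w = np.where(A == 1, pA / ps, (1 - pA) / (1 - ps))

# 3. A weighted regression of Y on A alone estimates the marginal
#    (average) causal effect, despite confounding by L.
msm = sm.WLS(Y, sm.add_constant(A), weights=w).fit()
print(msm.params)  # coefficient on A should be close to 1.0
```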
Marginal structural model
[ "Environmental_science" ]
214
[ "Epidemiology", "Environmental social science" ]
62,738,560
https://en.wikipedia.org/wiki/S%C3%BCdwall
The Mediterranean Wall, also known as the Southern Wall (Südwall in German), was an extensive system of coastal fortifications built by Nazi Germany during the Second World War, between 1943 and 1945. The project foresaw fortifications extending along the entire Mediterranean coast of southern France, from Cerbère to Menton, so as to prevent Allied landings in the South of France. This defensive line extended as far as Italy via the Ligurian Wall and complemented the Atlantic Wall. Structure The 19th Army of the Wehrmacht (Armeeoberkommando 19 (AOK 19)) defended 7 coastal defense sectors (Küstenverteidigungsabschnitte; KVA) covering the 864 km of the French Mediterranean coast from the Spanish border to the Italian border. AOK 19 included the following KVAs: 19 KVA A / 271. ID: Port-Vendres, Collioure, Cap Leucate... 19 KVA B / 277. ID: Port-la-Nouvelle, Narbonne-Plage... 19 KVA C / 271. ID: Cap d'Agde, Sète... 19 KVA D / 338. ID: La Camargue 19 KVA E / 244. ID: Marseille 19 KVA F / 242. ID: Toulon 19 KVA G / 148. ID: Cannes, Nice... At the time of the Allied landing in Provence, the defensive line consisted of about 500 defensive blocks, while about 200 were still under construction. See also Regelbau Atlantic Wall Bibliography External links Südwall – Bunkersite.com Coastal fortifications Military installations of the Wehrmacht World War II defensive lines
Südwall
[ "Engineering" ]
337
[ "World War II defensive lines", "Fortification lines" ]
62,744,331
https://en.wikipedia.org/wiki/Digital%20sustainability
The concept of digital sustainability describes the long-term oriented production and further development of digital artifacts and addresses the tragedy of the anticommons. Originating from the term sustainability, which has been predominantly used in connection with ecological topics, the concept of digital sustainability, following the definition of sustainable development in the Brundtland Report, refers to the conscious handling of resources in a way that their current creation and use do not impair the needs of future generations. Definition and distinction Digital resources are sustainably managed when their benefit to society is maximized, so that the digital needs of current and future generations are equally met. The societal benefit is maximized when the resources are accessible to the largest number of people and reusable with a minimum of technical, legal, and social restrictions. Digital resources are knowledge and cultural artifacts digitally represented as text, image, audio, video, or software. (Definition after Dapp) Digital sustainability distinguishes itself from the original definition of sustainability in that it deals exclusively with intangible goods, so-called knowledge goods. Such non-physical resources are non-rivalrous, so one person's use does not diminish their availability to others. Nevertheless, digital resources can be both excludable (so-called club goods) and non-excludable (so-called public goods). Through the protection of intellectual property, digital resources can be excluded from free use and further development (see also "Copyright"). Ten preconditions of digital sustainability In early 2017, a scientific publication appeared in Sustainability Science by Springer Publishing, and in July 2017 a related article appeared in German, describing ten preconditions of digital sustainability. The first four criteria concern the properties of the digital goods, the next five criteria the properties of the ecosystem, and the last criterion the impact on society. Concrete examples of digital sustainability include Wikipedia, Linux, and OpenStreetMap. The following ten preconditions of digital sustainability were presented with individual icons at DINAcon 2017. These are also published on Wikimedia Commons under the Creative Commons Zero license. Properties of the digital good Properties of the ecosystem Impact on Society Digital sustainability in academia Since 2004, the definition by Marcus Dapp has been further developed and taught in a lecture of the same name at ETH Zurich. The student organizations TheAlternative and SUBDiN (University of Basel) also describe this new sustainability approach in detail. The first historical text that explained the concept in writing was a competition entry for the anniversary publication "Essays 2030" of ETH Zurich, titled "ETH Zurich - A Pioneer in Digital Sustainability". A more recent contribution describes digital sustainability in the context of Open Data and Open-Source Software. Since 2014, the University of Bern has had the Research Center for Digital Sustainability. The center is led by Matthias Stürmer and employs around 20 staff members. The research center was established with start-up funding of CHF 80,000 from CH Open at the Institute of Information Systems. Since 2019, the research center has been located at the Institute of Computer Science.
The research center addresses issues related to open-source software, open data, linked data, open government, smart city, blockchain, smart contracts, and public procurement in research, teaching, and service provision. Open-source software and sustainability Based on the definition of sustainability, Thorsten Busch describes in the Open Source Yearbook 2008 the relationship between open-source software and the concept of sustainability. The extensive literature analysis addresses both the ecological aspects of information and communications technology and the societal influences of digital, intangible resources. The focus is on the problem of the digital divide, which, according to Busch, could be reduced, for example, by promoting open-source software. Busch uses the term "informational sustainability" coined by Volker Grassmuck for the same issue as the concept of digital sustainability described here. References External links Research Center for Digital Sustainability Computing and society Sustainability Archival science
Digital sustainability
[ "Technology" ]
802
[ "Computing and society" ]
62,745,033
https://en.wikipedia.org/wiki/Center%20for%20the%20Fundamental%20Laws%20of%20Nature
The Center for the Fundamental Laws of Nature is a research center at Harvard University that focuses on theoretical particle physics and cosmology. About The Center for the Fundamental Laws of Nature is the high-energy theory group in Harvard's Physics Department. , it had 12 faculty and affiliate faculty, 18 postdoctoral, and 19 graduate student members, in addition to multiple affiliates, visiting scholars, and staff. A number of prominent particle theorists have earned degrees or worked at Harvard, including Nobel Laureates David Politzer (PhD 1974), Sheldon Glashow (PhD 1959), David Gross, Steven Weinberg, and Julian Schwinger. Research Current areas of research listed include: Quantum gravity String theory Black holes Applications of AdS/CFT Physics beyond the standard model Dark matter Effective field theories References External links Official Website Theoretical physics institutes Harvard University
Center for the Fundamental Laws of Nature
[ "Physics" ]
170
[ "Theoretical physics", "Theoretical physics institutes", "Particle physics", "Theoretical physics stubs", "Particle physics stubs" ]
62,750,098
https://en.wikipedia.org/wiki/List%20of%20writing%20awards
This list of writing awards is an index to articles about notable awards for writing other than literary awards. It includes general writing awards, science writing awards, screenwriting awards and songwriting awards. General Science writing awards Screenwriting awards for film Screenwriting awards for television Songwriting awards See also Lists of awards List of journalism awards List of literary awards List of media awards References Science writing awards Writing
List of writing awards
[ "Technology" ]
77
[ "Science and technology awards", "Science writing awards" ]
62,750,132
https://en.wikipedia.org/wiki/Eman%20Al%20Yousuf
Eman Al Yousuf (Arabic: إيمان اليوسف) is an Emirati writer who was born in the United Arab Emirates in 1987. She has published three novels, "The Window Which Saw", "Guard of the Sun" and "The Resurrection of Others", and short stories including "A Bird in a Fish Tank" and "Many Faces of a Man". Her novel "Guard of the Sun", published in 2015, won the 2016 Emirates Novel Award and was translated into seven languages. She is the first Emirati woman to attend the prestigious International Writing Program at the University of Iowa. Eman is also a regular columnist in Emirati print media and the writer of the feminist short film "Ghafa". Biography Eman Al Yousuf is a chemical engineer and a certified coach in graphology who graduated from the American University of Sharjah. In 2017, she earned a diploma in Cultural Diplomacy from Berlin and recently obtained her master's degree in Knowledge Management from the American University in the Emirates. Al Yousuf has published three novels and four short stories. Her novel "Guard of the Sun", which was published in 2015, won the 2016 Emirates Novel Award. In 2015 she also published "Bread and Ink", a collection of literary interviews with female Emirati writers. She is known as the first Emirati ever to write a feminist short film, "Ghafa", which was directed by Aisha Alzaabi and was screened at the 2017 Dubai International Film Festival. She is the first Emirati who was chosen for the International Writing Program of the University of Iowa in the United States. Her text "The Teapot and I" was the UAE submission at the fifth Gulf Festival for Art and Literature and was staged as a play. Eman Al Yousuf has a monthly literary column in Emirates Culture Magazine called "Under the Ink" and a weekly column named "Woman of the Pen" in the Emirati newspaper "Al Ru'ya". Over the past few years, she has participated in many cultural events and festivals, such as the Emirates Airline Festival of Literature, and has represented the UAE abroad, including in Spain, Paris, Cairo, the U.S., and Berlin. Novels The Window Which Saw (Original title: Al Nafitha Allati Absarat), 2014 Guard of the Sun (Original title: Haris Al Shams) The Resurrection of Others (Original title: Qiamat Al Akhareen) Short stories Many Faces of a Man (Original title: Wijooh Insan), 2014 A Bird in a Fish Tank (Original title: Ta'er fi Haowth Alasamk), 2015 Eggs Sunny Side Up (Original title: Baidh Aoyoon) The Tea and I Short Film Ghafa, 2017 Awards 2016: She won the Emirates Novel Award. See also Taghreed Najjar Maria Dadouch Huda Hamad References Emirati writers Emirati women writers Emirati novelists Emirati women novelists Emirati chemical engineers Women chemical engineers 1987 births Living people American University of Sharjah alumni
Eman Al Yousuf
[ "Chemistry" ]
631
[ "Women chemical engineers", "Chemical engineers" ]
62,750,289
https://en.wikipedia.org/wiki/Tetramethylethylene
Tetramethylethylene is a hydrocarbon with the formula Me2C=CMe2 (Me = methyl). A colorless liquid, it is the simplest tetrasubstituted alkene. Synthesis It can be prepared by base-catalyzed isomerization of 2,3-dimethyl-1-butene. Another route involves the direct dimerization of propylene. It can also be produced by photolysis of tetramethylcyclobutane-1,3-dione. Reactions Tetramethylethylene forms metal-alkene complexes with low-valent metals and reacts with diborane to give the monoalkylborane known as thexylborane. Oxidation gives pinacol. It is a precursor to the herbicide fenpropathrin. References Alkenes
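For concreteness, the two synthesis routes mentioned above can be written as balanced schemes. These are standard textbook equations consistent with the text, not drawn from a cited source:

```latex
% Base-catalyzed isomerization of the terminal alkene 2,3-dimethyl-1-butene
% to the thermodynamically more stable tetrasubstituted alkene:
\mathrm{CH_2{=}C(CH_3)CH(CH_3)_2} \;\xrightarrow{\text{base}}\; \mathrm{(CH_3)_2C{=}C(CH_3)_2}

% Net dimerization of propylene (both sides are C6H12):
2\,\mathrm{CH_3CH{=}CH_2} \;\longrightarrow\; \mathrm{(CH_3)_2C{=}C(CH_3)_2}
```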
Tetramethylethylene
[ "Chemistry" ]
178
[ "Organic compounds", "Alkenes" ]
62,753,136
https://en.wikipedia.org/wiki/Grounding%20resistance%20tester
A grounding resistance tester, also called an earth tester, is a soil resistance measuring instrument. It is used for sizing and designing grounding grids. The first soil resistance measuring instrument was invented in the 1950s by Evershed & Vignoles, makers of Megger instruments, who produced the first insulation and earth resistance testers. One of the most used analog grounding testers in the USSR was the М416. Since the start of the 21st century, several companies have produced digital earth resistance meters and testers. The main purpose of the instrument is to determine the adequacy of the grounding of an electrical system. By the standard of the National Electrical Code, the resistance to ground should be less than 25 ohms for the installation to be reliably and efficiently grounded. Operating principle The meter generates an electrical current and supplies it to the measuring electrodes. The potential difference between the two electrodes gives the value of the soil resistance. Analog grounding resistance tester The analog grounding resistance tester is realized with four main blocks: a DC generator, a rectifier, a current coil, and a potential coil. The deflection of the pointer on the analog display depends on the ratio of the voltage across the potential coil to the current through the current coil. Digital grounding resistance tester The digital grounding resistance tester is realized with digital electronic blocks such as timers, voltage regulators, and a digital display. The ranges are changed with a multiturn trimpot. Main characteristics When measuring earth resistance with an instrument, it is important to know some of its basic characteristics in order to accurately measure the soil resistance and to properly size the grounding installation. Most important is the range of resistance the device measures; usually the instrument offers three or four measurement ranges. The soil moisture at which the appliance operates is another important parameter. If the instrument cannot operate at a certain humidity, then the measurement may differ significantly from the real value of soil resistance. Comparison of analog and digital grounding resistance testers Standardisation IEEE 81-2012 ГОСТ 22261-94 References Earth/ground clamp for measuring earth resistance of electrical installations Philippe Legros Electrical safety Product testing Measuring instruments
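The operating principle described above reduces to Ohm's law: a known test current is injected and the resulting potential difference is measured, giving R = V/I. The following toy sketch uses invented values (real testers use AC test currents and the fall-of-potential electrode arrangement) and checks the result against the NEC benchmark cited in the article:

```python
# Toy illustration of the measurement principle: inject a known test
# current, measure the potential difference, apply Ohm's law.
# Values are invented for the example.

NEC_LIMIT_OHMS = 25.0  # NEC benchmark cited in the article

def earth_resistance(test_current_a: float, measured_voltage_v: float) -> float:
    """R = V / I for the injected test current."""
    return measured_voltage_v / test_current_a

r = earth_resistance(test_current_a=0.02, measured_voltage_v=0.38)
print(f"Measured earth resistance: {r:.1f} ohm")  # 19.0 ohm
print("Within NEC benchmark" if r < NEC_LIMIT_OHMS else "Exceeds NEC benchmark")
```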
Grounding resistance tester
[ "Technology", "Engineering" ]
433
[ "Measuring instruments" ]
62,756,782
https://en.wikipedia.org/wiki/List%20of%20train-surfing%20injuries%20and%20deaths
This is a list of train-surfing injuries and deaths. Data of train-surfing injuries and deaths Train-surfing injuries and deaths See also Car surfing Elevator surfing List of graffiti and street-art injuries and deaths List of selfie-related injuries and deaths Rail suicide Skitching Train surfing References Train surfing
List of train-surfing injuries and deaths
[ "Technology" ]
71
[ "Railway accidents and incidents" ]
65,589,325
https://en.wikipedia.org/wiki/European%20e-commerce%20VAT%20directive
The European e-commerce VAT directive (2002/38/EC of 7 May 2002) is a European Union directive which regulates value added tax on sales to consumers in EU and EEA countries. A consequence of the directive is the Norwegian VAT On E-Commerce (VOEC) scheme, which was implemented in Norway starting in 2020. To avoid a customs clearance fee, foreign webshops selling goods to consumers in Norway need to register in the VOEC registry of the Norwegian Tax Administration. See also Import One-Stop Shop (IOSS), an EU-wide scheme with similarities to the VOEC References External links Official list of stores and marketplaces registered in the VOEC - The Norwegian Tax Administration Official information on VOEC - The Norwegian Tax Administration (English) Socioeconomics Tax E-commerce
European e-commerce VAT directive
[ "Technology" ]
167
[ "Information technology", "E-commerce" ]
65,591,646
https://en.wikipedia.org/wiki/Lists%20of%20space%20organizations
Lists of space organizations include: List of government space agencies List of non-profit space agencies List of private spaceflight companies List of space forces, units, and formations Space organizations
Lists of space organizations
[ "Astronomy" ]
37
[ "Astronomy organizations", "Space organizations" ]
65,592,360
https://en.wikipedia.org/wiki/Tuber%20borchii
Tuber borchii, known as the whitish truffle or bianchetto truffle, is a small, common species of edible truffle excellent for use in cuisine. Taxonomy It was given its name by Carlo Vittadini, from the Latin borchii (after von der Borch, the naturalist who described the species in 1780). Description It is from in size and rounded, with a tuber-like appearance, often bony or irregular. The exterior is slightly velvety and white at first, and when mature it has a smooth, reddish ochre surface with brown markings. It has a compact consistency and, when sectioned, the interior is whitish when young, becoming light ochre or pinkish brown with age. A series of white, twisted and branched veins run through the interior. Young specimens give off a pleasant smell, but when mature they give off an unpleasant smell reminiscent of kitchen gas. The flavor is described by some as garlicky, by others as similar to hazelnuts. Seen through a microscope, the spores have an ellipsoidal to rounded shape, measuring 35–55 × 25–40 microns, and are brown in colour. The asci are rather rounded and contain 1 to 4 spores. There are many similar species: Tuber asa (with spores shaped like lemons when immature), Tuber oligospermum and Tuber puberulum (which has rounder spores). Distribution and habitat Whitish truffles often grow in oak groves, and less frequently under conifers. Like most truffles, it bears fruit underground, although not very deep (occasionally, mature specimens reach the surface). It bears fruit from winter to early summer (from December to June), in coastal or low-lying areas (between 200 and 1,000 metres above sea level). It is highly adaptable to different environments: although it prefers calcareous sandy soils (typical of coastal areas), it also bears fruit in black truffle (Tuber melanosporum) habitat at higher altitudes. It grows well in soils with a pH of 7–8, as well as in subalkaline soils with a pH of 6–7, although occasionally it also grows in soils with a pH as low as 5.2. It is found throughout Europe: from Finland to Italy (Tuscany, Abruzzo, Romagna, Umbria, Marche, Molise and Sicily) and the Iberian Peninsula (Andalusia, Portugal, and Castile and León), and from Ireland and Great Britain to Hungary and Poland, and it is very common in France, including Corsica. Use and cultivation It is harvested from winter to spring (from mid-January to the end of April in Italy), unlike Tuber magnatum, which is collected in autumn and early winter. It sells for some 300–400 €/kg. Although it is not as sought after as Tuber magnatum or Tuber melanosporum, there are several reasons for its cultivation: it fruits early in new plantations (as early as 4 years in pine), is adaptable to different ecological niches, is not extremely specific to host plants, and lastly, it is very competitive with other ectomycorrhizal fungi (particularly in young plantations). Mycorrhiza Biotech of Gibsonville, North Carolina has been developing methods for commercial production of bianchetto truffles in North Carolina. Their CEO, Nancy Rosborough, reports that 2021 harvests from their plots were outstanding, producing as much as an estimated 200 pounds. References Iotti, M., Lancellotti, E., Hall, I. and Zambonelli, A., 2010. The ectomycorrhizal community in natural Tuber borchii grounds. FEMS Microbiology Ecology, 72: 250–260. Urbanelli, S., Sallicandro, P., De Vito, E., Bullini, L. and Biocca, E., 1998. Biochemical systematics of some species in the genus Tuber. Mycologia 90: 537–546. de Borch, Michel-Jean 1780.
Lettres sur les truffes du Piémont (French) Vittadini, Carlo., Monographia Tuberacearum (Latin) Truffles (fungi) Fungus species
Tuber borchii
[ "Biology" ]
889
[ "Fungi", "Fungus species" ]
65,593,195
https://en.wikipedia.org/wiki/Dysorgasmia
Dysorgasmia is the experience of a painful orgasm, usually in the abdomen. The condition may be experienced during or after orgasm, sometimes as late as several hours after the orgasm occurred. Both men and women can experience orgasmic pain. The term is sometimes used interchangeably with painful ejaculation when experienced by a man, but ejaculatory pain is only a subtype of male dysorgasmia as men can experience pain without ejaculating. The phenomenon is poorly understood and underresearched. Dysorgasmia can come as a side effect of surgical interventions such as prostatectomy. See also Dyspareunia, pain during sex References Orgasm Sexual dysfunctions Pain
Dysorgasmia
[ "Biology" ]
147
[ "Behavior", "Sexuality stubs", "Sexuality" ]
65,593,498
https://en.wikipedia.org/wiki/Vibroacoustic%20therapy
Vibroacoustic therapy (VAT) is a type of sound therapy that involves passing low-frequency sine wave vibrations into the body via a device with embedded speakers. This therapy was developed in Norway by Olav Skille in the 1980s. The Food and Drug Administration determined that vibroacoustic devices, such as the Next Wave® PhysioAcoustic therapeutic vibrator, are "substantially equivalent" to other therapeutic vibrators, which are "intended for various uses, such as relaxing muscles and relieving minor aches and pains"; thus, vibroacoustic devices (therapeutic vibrators) are "exempt from clinical investigations, Good Guidance Practices (GGPs), and premarket notification and approval procedures." Frequencies Vibroacoustic therapy uses low-frequency sinusoidal vibrations between 0 and 500 Hz, depending on the product's frequency response and capabilities. This is similar to the range of subwoofers or vibrating theater seating. Human mechanoreceptors, such as Pacinian corpuscles, can detect vibrations up to 1,000 Hz. Frequencies between 30 Hz and 120 Hz are generally considered to have a calming and relaxing effect, which is why they are often used in therapeutic contexts. 40 Hz, specifically, has been widely studied in vibroacoustic therapy and other fields due to its potential benefits, such as promoting relaxation and improving focus. In addition to sinusoidal waves, vibroacoustic music is specifically composed for vibroacoustic therapy. These compositions incorporate low-frequency musical instruments and advanced audio engineering techniques to create an immersive and enjoyable therapeutic experience. The combination of carefully engineered music and vibroacoustic technology enhances the physical and emotional benefits of the therapy. Devices Vibroacoustic devices come in a range of forms including beds, chairs, pillows, mats, wristbands, wearable backpacks, and simple DIY platforms. They generally function by playing sound files through transducers, bass shakers, or exciters, which then transfer the vibrations into the body. Some devices attempt to target very specific parts of the body, such as the wrist or the spine. Proposed mechanisms of action Pallesthesia, the ability to perceive vibration, plays a crucial role in vibroacoustic therapy. This form of therapy relies on the body's sensitivity to mechanical vibrations. By stimulating vibratory perception through therapeutic sound waves, vibroacoustic therapy aims to promote physical and emotional well-being. Another proposed mechanism of action for vibroacoustic therapy is brainwave entrainment. Entrainment suggests that brainwaves will synchronize with rhythms from sensory input. This further suggests that some brainwave frequencies are preferable to others in given situations. Current practice Vibroacoustic therapy is available at a number of spas, resorts, and clinics around the world, as well as from a number of professional and holistic practitioners. Related therapies Vibroacoustic therapy is closely related to Physio Acoustic Therapy (PAT), which was developed by Petri Lehikoinen in Finland. Both are examples of low frequency sound stimulation (LFSS). More broadly, they are subsets of Rhythmic Sensory Stimulation (RSS), which is being studied across a range of sensory modalities. Criticism The science behind vibroacoustic therapy has been questioned by multiple sources. Some sources refer to it as pseudoscience, and the TedX talk by prominent vibroacoustic researcher Lee Bartel has been tagged as falling outside of the TED talk guidelines.
Practitioners of VAT do agree that more research is needed, as VAT has been a largely clinical practice since its inception. Academic research published in peer-reviewed journals and meeting higher scientific standards is being pursued at the University of Toronto and other institutions to address these objections. References Music therapy Wave mechanics
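The stimulus at the heart of VAT is simply a low-frequency sine wave. As a rough illustration only (duration and amplitude are arbitrary choices, and this says nothing about therapeutic effect), the 40 Hz signal discussed above can be generated and written to a WAV file suitable for a transducer or bass shaker:

```python
# Generate a 40 Hz sine wave (the frequency most studied in VAT) and
# write it to a 16-bit mono WAV file. Duration and amplitude are
# arbitrary choices for the sketch.
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQ_HZ = 40.0        # within the 30-120 Hz "relaxing" band noted above
DURATION_S = 10
AMPLITUDE = 0.5       # fraction of full scale, leaving headroom

frames = b"".join(
    struct.pack("<h", int(AMPLITUDE * 32767 *
                          math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)))
    for n in range(SAMPLE_RATE * DURATION_S)
)

with wave.open("vat_40hz.wav", "wb") as f:
    f.setnchannels(1)           # mono
    f.setsampwidth(2)           # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(frames)
```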
Vibroacoustic therapy
[ "Physics" ]
807
[ "Waves", "Wave mechanics", "Physical phenomena", "Classical mechanics" ]
65,593,860
https://en.wikipedia.org/wiki/Z-Library
Z-Library (abbreviated as z-lib, formerly BookFinder) is a shadow library project for file-sharing access to scholarly journal articles, academic texts and general-interest books. It began as a mirror of Library Genesis, but has expanded dramatically. According to the website's own data released in February 2023, its collection comprised over 13.35 million books and over 84.8 million articles. Z-Library is particularly popular in emerging economies and among academics. In June 2020, Z-Library was visited by around 2.84 million users, of whom 14.76% were from the United States of America. According to the Alexa Traffic Rank service, Z-Library was ranked as the 8,182nd most active website in October 2021. The organization describes itself as "the world's largest e-book library", as well as "the world's largest scientific articles store", and operates as a non-profit organization sustained by donations. Besides sharing ebooks, Z-Library plans to expand its offerings to include physical paperback books at dedicated "Z-Points" around the globe as well. Z-Library and its activities are illegal in many jurisdictions. While website seizures reduced the accessibility of the content, it remains available on the dark web. The legal status of the project, as well as its potential impact on the publishing industry and authors' rights, is a matter of ongoing debate. Website The site is financed by user donations, which are collected twice a year (September and March) through fundraising. Over the years, various URLs and IP addresses have been used for Z-Library as domain names have been confiscated by various legal authorities. Functionality Not much is known about Z-Library in terms of its operation, management, and commercial status. Notably, Z-Library does not open its full database to the public. Despite that, its database, excluding books from Library Genesis, was mirrored by archivists in 2022. In an effort to prevent blacklisting of domains (oftentimes by internet providers at the DNS level in accordance with legal procedures), Z-Library used a homepage that did not contain any infringing content, but instead listed many working mirror domains for different regions. This did not help, as the domain "z-lib.org" was seized in 2022. In March 2019, the Z-Library team claimed to have servers in Finland, Germany, Luxembourg, Malaysia, Panama, Russia and the United States, and that the size of their database was over 220 TB. In August 2023, Z-Library announced the possible use of browser extensions to help mitigate challenges if the domain name has to change. Legal status Z-Library has cycled through domain names, some of which have been blocked by domain registry operators. Z-Library remains reachable via alternative domains, and is also accessible through the .onion-linked Tor network. United Kingdom In mid-2015, The Publishers Association, a UK organization, attempted to enact internet service provider-level blocks on Z-Library. In late 2015, publisher Elsevier filed a successful court request that ordered the registrar of bookfi.org to seize the site's internet domain. United States Some of Z-Library's domains, bookfi.org, booksc.org and b-ok.org, were included in the 2017 Office of the United States Trade Representative report on notorious markets. Z-Library's domains were temporarily blocked in 2021 after a DMCA notice issued by Harvard Business Publishing. The domain suspensions were lifted.
In October 2022, TikTok blocked hashtags related to Z-Library after the site gained popularity there and the Authors Guild submitted a complaint to the United States Trade Representative. On November 3, 2022, over 240 of Z-Library's domain names were seized by the United States Department of Justice and Federal Bureau of Investigation in response to a court order, and two Russian nationals associated with the project were arrested in Argentina on charges related to copyright infringement and money laundering. When the domains z-lib.org, b-ok.org, and 3lib.net were seized, their name servers were switched to ns1.seizedservers.com and ns2.seizedservers.com, which are commonly used in US law enforcement seizures. These name servers were later switched to Njalla, an anonymous hosting provider. The website remained active and accessible through the Tor and I2P networks before returning to the regular Internet on February 11, 2023, through private personal domains issued to each user. On November 16, 2022, U.S. Attorneys for the Eastern District of New York of the Department of Justice unsealed the indictment of two Russian nationals, Anton Napolsky and Valeriia Ermakova, who had been placed under house arrest in Argentina on November 3, 2022, pending an extradition hearing. They were charged with criminal copyright infringement, wire fraud and money laundering for operating the Z-Library website. The indictment pertains to alleged criminal activity taking place from 2018 to 2022, though the pair are suspected to have operated Z-Library for "over a decade". The arrests were accomplished by the FBI using data from Google and Amazon (among other sites), accessed with search warrants, which helped identify the founders of the website. The U.S. lawyers retained as the pair's official representatives requested a dismissal of the criminal indictment in June 2023. The two then escaped their house arrest in Argentina; the presiding judge issued an Interpol warrant for their arrest, and their whereabouts are unknown. The law enforcement efforts were formally assisted by The Publishers Association along with the Authors Guild, and reportedly, indirectly, by BREIN, a Dutch anti-piracy group. The Authors Guild issued a statement supporting the arrests, commenting that it was "one of the biggest breakthroughs in the fight against online criminal e-book piracy to date". The executive director for the Authors Alliance, a group dedicated to increasing access to literature, said "I certainly don't condone illegal behavior, but I think this seizure and press release highlight how broken our copyright system is". Some authors, like Alison Rumfitt, have also defended the project, arguing that it provides a valuable service by increasing access to knowledge and promoting education in underprivileged communities. Decreased access to Z-Library and its services has substantially impacted students and researchers at underfunded institutions who rely on its resources for their studies and work. In response to the law enforcement action, a group of anonymous archivists launched Anna's Archive, a free non-profit online shadow library search engine. The team claims to provide metadata access to Open Library materials, to serve as a backup of the Library Genesis and Z-Library shadow libraries, to present ISBN information, to host no copyrighted materials on its website, and to index only metadata that is already publicly available. Many other workarounds to the recent attempts to take down Z-Library have been reported.
Some of these purported alternative sites have taken up the top search results and submitted bogus DMCA takedown requests of their own, according to news reports. In May 2023, a new round of domain name seizures was carried out by U.S. authorities. In November 2023, dozens of domains were seized by authorities from the United States and Austria. In January 2024, additional Z-Library domains were reported to have been targeted by publishers. In May 2024, several domain name seizures were carried out by U.S. authorities, including the seizure of the site's email domain z-lib.se. India The website was banned in India in August 2022, following a court order from the Tis Hazari district court, after a complaint stating that the copyrights of ten books (on the topics of tax and corporate law) were being violated by Z-Library. Internet service providers in India were directed to block the site. The decision to block Z-Library and other shadow libraries has been criticized by some Indian authors, students, academics, and freedom-of-information activists. On November 5, 2022, the Hindu right-wing group Swadeshi Jagran Manch formally objected that the FBI's seizure of the Indian domain name 1lib.in (used by Z-Library), carried out merely on the order of a New York district judge with no jurisdiction in India, had violated India's sovereignty. France In September 2022, it was announced that the Syndicat national de l'édition (National Publishing Union) in France had succeeded in a legal challenge to Z-Library, having filed a complaint against about two hundred domains and mirror-site domains associated with it. The decision was made by the Tribunal Judiciaire de Paris, Paris's civil court; internet service providers in France were directed to block the domains. China The site is targeted and blocked by the Great Firewall. Fraudulent domains Some domains unaffiliated with Z-Library have attempted to impersonate the site, using similar domain names and an identical visual design. The purpose of these sites is to obtain usernames and passwords from users in order to try them on other services, including banking, and to profit economically from this. Some of the fraudulent domains are z-lib.io, z-lib.id, zlibrary.to, and z-lib.is. See also :Category:Shadow libraries Anna's Archive Library Genesis Sci-Hub Electronic Frontier Foundation Freedom of information #ICanHazPDF JSTOR Open Library References External links Book websites File sharing communities Intellectual property activism Internet censorship in India Internet censorship in the United States Internet properties established in 2009 Russian digital libraries Search engine software Shadow libraries Domain name seizures by United States Internet-related controversies
Z-Library
[ "Technology" ]
1,982
[ "File sharing communities", "Computing websites" ]
65,594,504
https://en.wikipedia.org/wiki/Lionel%20Salem
Lionel Salem (5 March 1937 – 29 June 2024) was a French theoretical chemist and a former research director at the French National Centre for Scientific Research (CNRS), who retired in 1999. He was a member of the International Academy of Quantum Molecular Science, which named him its annual award winner in 1975 for his work on photochemical processes and on chemical reaction mechanisms. He contributed to the theories of forces between molecules, of conjugated molecules, of organic reaction mechanisms and of heterogeneous catalysis. He developed the electronic theory of diradicals, as well as the concepts of diradical and zwitterionic states. In 1968, he described the energy change for the approach of two molecules as a function of the properties of their orbitals; this approach, pursued independently by Gilles Klopman, led to the Klopman–Salem equation and the theory of frontier orbitals. He was the author of several books on chemical subjects, including The Molecular Orbital Theory of Conjugated Systems (1966), The Organic Chemist's Book of Orbitals (with William L. Jorgensen, 1973), The Marvelous Molecule (1979), and Electrons in Chemical Reactions (1982). Salem died on 29 June 2024, at the age of 87. External links References Theoretical chemists 20th-century French chemists 1937 births 2024 deaths
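In one common textbook form (a sketch not taken from this article's sources; notation and sign conventions vary), the Klopman–Salem expression for the interaction energy of two approaching molecules reads as follows, where the q are orbital populations, S and β are overlap and resonance integrals, the Q are net atomic charges, ε is the local dielectric constant, the R are interatomic distances, the c are orbital coefficients and the E are orbital energies; the last term dominates when an occupied orbital of one molecule and an unoccupied orbital of the other are close in energy, which is the heart of frontier orbital theory:

```latex
% Klopman-Salem equation in one common textbook form (a sketch;
% notation and sign conventions vary between sources).
% Term 1: closed-shell (filled-filled) repulsion.
% Term 2: Coulombic interaction between charged atoms.
% Term 3: stabilising occupied-unoccupied (frontier) orbital interaction.
\Delta E =
  -\sum_{ab} (q_a + q_b)\, \beta_{ab} S_{ab}
  \;+\; \sum_{k<l} \frac{Q_k Q_l}{\varepsilon\, R_{kl}}
  \;+\; \Biggl( \sum_{r}^{\mathrm{occ}} \sum_{s}^{\mathrm{unocc}}
        - \sum_{s}^{\mathrm{occ}} \sum_{r}^{\mathrm{unocc}} \Biggr)
    \frac{2 \bigl( \sum_{ab} c_{ra} c_{sb}\, \beta_{ab} \bigr)^{2}}{E_r - E_s}
```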
Lionel Salem
[ "Chemistry" ]
279
[ "Quantum chemistry", "Theoretical chemistry", "Theoretical chemists", "Physical chemists" ]
65,594,715
https://en.wikipedia.org/wiki/CYP305%20family
Cytochrome P450, family 305, also known as CYP305, is an animal cytochrome P450 family found in insect genomes. The first gene identified in this family was CYP305A1 from Drosophila melanogaster (the fruit fly). References Animal genes 305 Protein families
CYP305 family
[ "Biology" ]
69
[ "Protein families", "Protein classification" ]
65,595,160
https://en.wikipedia.org/wiki/General%20relativity%20priority%20dispute
Albert Einstein's discovery of the gravitational field equations of general relativity and David Hilbert's almost simultaneous derivation of the theory using an elegant variational principle, during a period when the two corresponded frequently, has led to numerous historical analyses of their interaction. The analyses came to be called a priority dispute. Einstein and Hilbert The events of interest to historians of the dispute occurred in late 1915. At that time Albert Einstein, now perhaps the most famous modern scientist, had been working on gravitational theory since 1912. He had "developed and published much of the framework of general relativity, including the ideas that gravitational effects require a tensor theory, that these effects determine a non-Euclidean geometry, that this metric role of gravitation results in a redshift and in the bending of light passing near a massive body." While David Hilbert never became a celebrity, he was seen as a mathematician unequaled in his generation, with an especially wide impact on mathematics. When he met Einstein in the summer of 1915, Hilbert had started working on an axiomatic system for a unified field theory, combining the ideas of Gustav Mie on electromagnetism with Einstein's general relativity. As the historians referenced below recount, Einstein and Hilbert corresponded extensively throughout the fall of 1915, culminating in lectures by both men in late November that were later published. The historians debate the consequences of this friendly correspondence for the resulting publications. Undisputed facts The following facts are well established and referable: The proposal to describe gravity by means of a pseudo-Riemannian metric was first made by Einstein and Marcel Grossmann in the so-called Entwurf theory, published in 1913. Grossmann identified the contracted Riemann tensor as the key to the solution of the problem posed by Einstein. This was followed by several attempts by Einstein to find valid field equations for this theory of gravity. David Hilbert invited Einstein to the University of Göttingen for a week to give six two-hour lectures on general relativity, which he did in June–July 1915. Einstein stayed at Hilbert's house during this visit. Hilbert started working on a combined theory of gravity and electromagnetism, and Einstein and Hilbert exchanged correspondence until November 1915. Einstein gave four lectures on his theory on 4, 11, 18 and 25 November in Berlin, published as [Ein15a], [Ein15b], [Ein15c], [Ein15d]. 4 November: Einstein published non-covariant field equations and on 11 November returned to the field equations of the "Entwurf" papers, which he now made covariant by the assumption that the trace of the energy-momentum tensor was zero, as it was for electromagnetism. Einstein sent Hilbert proofs of his papers of 4 and 11 November. (Sauer 99, notes 63, 66) 15 November: Invitation issued for the 20 November meeting at the academy in Göttingen: "Hilbert legt vor in die Nachrichten: Grundgleichungen der Physik" ("Hilbert submits to the Nachrichten: Fundamental Equations of Physics"). (Sauer 99, note 73) 16 November: Hilbert spoke at the Göttingen Mathematical Society on "Grundgleichungen der Physik" ("Fundamental Equations of Physics") (Sauer 99, note 68). The talk was not published. 16 or 17 November: Hilbert sent Einstein some information about his talk of 16 November (letter lost). 18 November: Einstein replied to Hilbert's letter (received by Hilbert on 19 November), saying that as far as he (Einstein) could tell, Hilbert's system was equivalent to the one he (Einstein) had found in the preceding weeks. (Sauer 99, note 72)
Einstein also told Hilbert in this letter that he (Einstein) had "considered the only possible generally covariant field equations three years earlier", adding that "The difficulty was not to find generally covariant equations for the g_{μν}; this is easy with the help of the Riemann tensor. What was difficult instead was to recognize that these equations form a generalization, and that is, a simple and natural generalization of Newton's law" (A. Einstein to D. Hilbert, 18 November, Einstein Archives Call No. 13-093). Einstein also told Hilbert in that letter that he (Einstein) had calculated the correct perihelion advance for Mercury, using covariant field equations based on the assumption that the trace of the energy-momentum tensor vanished, as it did for electromagnetism. 18 November: Einstein presented the calculation of the perihelion advance to the Prussian Academy. 20 November: Hilbert lectured at the Göttingen Academy. The content of his presentation and of the proofs of the paper later published on the presentation are at the heart of the dispute among historians (see below). 25 November: In his last lecture, Einstein submitted the correct field equations. The published paper (Einstein 1915d) appeared on 2 December and did not mention Hilbert. Hilbert starts his paper by citing Einstein: "The vast problems posed by Einstein as well as his ingeniously conceived methods of solution, and the far-reaching ideas and formation of novel concepts by means of which Mie constructs his electrodynamics, have opened new paths for the investigation into the foundations of physics." Hilbert's paper took considerably longer to appear. He had galley proofs that were marked "December 6" by the printer in December 1915. Most of the galley proofs have been preserved, but about a quarter of a page is missing. The extant part of the proofs contains Hilbert's action, from which the field equations can be obtained by taking a variational derivative and using the contracted Bianchi identity derived in theorem III of Hilbert's paper, though this was not done in the extant proofs. Hilbert rewrote his paper for publication (in March 1916), changing the treatment of the energy theorem, dropping a non-covariant gauge condition on the coordinates to produce a covariant theory, and adding a new credit to Einstein for introducing the gravitational potentials into the theory of gravity. In the final paper, he said his differential equations seemed to agree with the "magnificent theory of general relativity established by Einstein in his later papers". Hilbert nominated Einstein for the third Bolyai prize in 1915 'for the high mathematical spirit behind all his achievements'. The 1916 paper was rewritten and republished in 1924 [Hil24], where Hilbert wrote: Einstein [...] kehrt schließlich in seinen letzten Publikationen geradewegs zu den Gleichungen meiner Theorie zurück. (Einstein [...] in his most recent publications, returns directly to the equations of my theory.) Historians on Hilbert's point of view Historians have discussed Hilbert's view of his interaction with Einstein. Walter Isaacson points out that Hilbert's publication on his derivation of the equations of general relativity included the text: "The differential equations of gravitation that result are, as it seems to me, in agreement with the magnificent theory of general relativity established by Einstein." Wuensch points out that Hilbert refers to the field equations of gravity as "meine Theorie" ("my theory") in his 6 February 1916 letter to Schwarzschild.
This, however, is not at issue, since no one disputes that Hilbert had his own "theory", which Einstein criticized as naive and overly ambitious. Hilbert's theory was based on the work of Mie combined with Einstein's principle of general covariance, but applied to matter and electromagnetism as well as gravity. Mehra and Bjerknes point out that Hilbert's 1924 version of the article contained the sentence "... und andererseits auch Einstein, obwohl wiederholt von abweichenden und unter sich verschiedenen Ansätzen ausgehend, kehrt schließlich in seinen letzten Publikationen geradenwegs zu den Gleichungen meiner Theorie zurück" - "Einstein [...] in his last publications ultimately returns directly to the equations of my theory.". These statements of course do not have any particular bearing on the matter at issue. No one disputes that Hilbert had "his" theory, which was a very ambitious attempt to combine gravity with a theory of matter and electromagnetism along the lines of Mie's theory, and that his equations for gravitation agreed with those that Einstein presented beginning in Einstein's 25 November paper (which Hilbert refers to as Einstein's later papers to distinguish them from previous theories of Einstein). None of this bears on the precise origin of the trace term in the Einstein field equations (a feature of the equations that, while theoretically significant, does not have any effect on the vacuum equations, from which all the empirical tests proposed by Einstein were derived). Sauer says "the independence of Einstein's discovery was never a point of dispute between Einstein and Hilbert ... Hilbert claimed priority for the introduction of the Riemann scalar into the action principle and the derivation of the field equations from it," (Sauer mentions a letter and a draft letter where Hilbert defends his priority for the action functional) "and Einstein admitted publicly that Hilbert (and Lorentz) had succeeded in giving the equations of general relativity a particularly lucid form by deriving them from a single variational principle". Sauer also stated, "And in a draft of a letter to Weyl, dated 22 April 1918, written after he had read the proofs of the first edition of Weyl's 'Raum-Zeit-Materie' Hilbert also objected to being slighted in Weyl's exposition. In this letter again 'in particular the use of the Riemannian curvature [scalar] in the Hamiltonian integral' ('insbesondere die Verwendung der Riemannschen Krümmung unter dem Hamiltonschen Integral') was claimed as one of his original contributions. SUB Cod. Ms. Hilbert 457/17." Did Einstein develop the field equations independently? While Hilbert's paper was submitted five days earlier than Einstein's, it only appeared in 1916, after Einstein's field equations paper had appeared in print. For this reason, there was no good reason to suspect plagiarism on either side. In 1978, an 18 November 1915 letter from Einstein to Hilbert resurfaced, in which Einstein thanked Hilbert for sending an explanation of Hilbert's work. This was not unexpected to most scholars, who were well aware of the correspondence between Hilbert and Einstein that November, and who continued to hold the view expressed by Albrecht Fölsing in his Einstein biography: In November, when Einstein was totally absorbed in his theory of gravitation, he essentially only corresponded with Hilbert, sending Hilbert his publications and, on November 18, thanking him for a draft of his article. 
Einstein must have received that article immediately before writing this letter. Could Einstein, casting his eye over Hilbert's paper, have discovered the term which was still lacking in his own equations, and thus 'nostrified' Hilbert? In the very next sentence, after asking the rhetorical question, Fölsing answers it with "This is not really probable...", and then goes on to explain in detail why [Einstein's] eventual derivation of the equations was a logical development of his earlier arguments—in which, despite all the mathematics, physical principles invariably predominated. His approach was thus quite different from Hilbert's, and Einstein's achievements can, therefore, surely be regarded as authentic. In their 1997 Science paper, Corry, Renn and Stachel quote the above passage and comment that "the arguments by which Einstein is exculpated are rather weak, turning on his slowness in fully grasping Hilbert's mathematics", and so they attempted to find more definitive evidence of the relationship between the work of Hilbert and Einstein, basing their work largely on a recently discovered pre-print of Hilbert's paper. A discussion of the controversy around this paper is given below. Those who contend that Einstein's paper was motivated by the information obtained from Hilbert have referred to the following sources: The correspondence between Hilbert and Einstein mentioned above. More recently, it became known that Einstein was also given notes of Hilbert's 16 November talk about his theory. Einstein's 18 November paper on the perihelion motion of Mercury, which still refers to the incomplete field equations of 4 and 11 November. (The perihelion motion depends only on the vacuum equations, which are unaffected by the trace term that was added to complete the field equations.) Reference to the final form of the equations appears only in a footnote added to the paper, indicating that Einstein had not known the final form of the equations on 18 November. This is not controversial, and is consistent with the well-known fact that Einstein did not complete the field equations (with the trace term) until 25 November. Letters of Hilbert, Einstein, and other scientists may be used in attempts to make guesses about the content of Hilbert's letter to Einstein, which is not preserved, or of Hilbert's lecture in Göttingen on 16 November. Those who contend that Einstein's work takes priority over Hilbert's, or that both authors worked independently, have used the following arguments: Hilbert modified his paper in December 1915, and the 18 November version sent to Einstein did not contain the final form of the field equations. The extant part of the printer proofs does not have the explicit field equations. This is the point of view defended by Corry, Renn, Stachel, and Sauer. Sauer (1999) and Todorov (2005) agree with Corry, Renn and Stachel that Hilbert's proofs show that Hilbert had originally presented a non-covariant theory, which was dropped from the revised paper. Corry et al. quote from the proofs: "Since our mathematical theorem ... can provide only ten essentially independent equations for the 14 potentials [...] and further, maintaining general covariance makes quite impossible more than ten essential independent equations [...] then, in order to keep the deterministic characteristic of the fundamental equations of physics [...] four further non-covariant equations ... [are] unavoidable." (proofs, pages 3 and 4. Corry et al.)
Hilbert derives these four extra equations and continues "these four differential equations [...] supplement the gravitational equations [...] to yield a system of 14 equations for the 14 potentials g_{μν}, q_s: the system of fundamental equations of physics". (proofs, page 7. Corry et al.). Hilbert's first theory (16 November lecture, 20 November lecture, 6 December proofs) was titled "The Fundamental Equations of Physics". In proposing non-covariant fundamental equations, based on the Ricci tensor but restricted in this way, Hilbert was following the causality requirement that Einstein and Grossmann had introduced in the Entwurf papers of 1913. One may attempt to reconstruct the way in which Einstein arrived at the field equations independently. This is, for instance, done in the paper by Logunov, Mestvirishvili and Petrov quoted below. Renn and Sauer investigate the notebook used by Einstein in 1912 and claim he was close to the correct theory at that time. Scholars This section cites notable publications where people have expressed a view on the issues outlined above. Albrecht Fölsing on the Hilbert-Einstein interaction (1993) From Fölsing's 1993 (English translation 1998) Einstein biography: "Hilbert, like all his other colleagues, acknowledged Einstein as the sole creator of relativity theory." Corry/Renn/Stachel and Friedwardt Winterberg (1997/2003) In 1997, Corry, Renn and Stachel published a three-page article in Science entitled "Belated Decision in the Hilbert-Einstein Priority Dispute", concluding that Hilbert had not anticipated Einstein's equations. Friedwardt Winterberg, a professor of physics at the University of Nevada, Reno, disputed these conclusions, observing that the galley proofs of Hilbert's articles had been tampered with - part of one page had been cut off. He goes on to argue that the removed part of the article contained the equations that Einstein later published, and he wrote that "the cut off part of the proofs suggests a crude attempt by someone to falsify the historical record". Science declined to publish this; it was printed in revised form in Zeitschrift für Naturforschung, with a dateline of 5 June 2003. Winterberg criticized Corry, Renn and Stachel for having omitted the fact that part of Hilbert's proofs was cut off. Winterberg wrote that the correct field equations are still present on the existing pages of the proofs in various equivalent forms. In this paper, Winterberg asserted that Einstein sought the help of Hilbert and Klein to find the correct field equation; he did not mention the research of Fölsing (1997) and Sauer (1999), according to which Hilbert invited Einstein to Göttingen to give a week of lectures on general relativity in June 1915 - though this does not necessarily contradict Winterberg, as Hilbert at the time was looking for physics problems to solve. A short reply to Winterberg's article can be found at ; the original long reply can be accessed via the Internet Archive at . In this reply, Winterberg's hypothesis is called "paranoid" and "speculative". Corry et al. offer the following alternative speculation: "it is possible that Hilbert himself cropped off the top of p. 7 to include it with the three sheets he sent Klein, in order that they not end in mid-sentence." As of September 2006, the Max Planck Institute of Berlin has replaced the short reply with a note saying that the Max Planck Society "distances itself from statements published on this website [...] concerning Prof.
Friedwart Winterberg" and stating that "the Max Planck Society will not take a position in [this] scientific dispute". Ivan Todorov, in a paper published on arXiv, says of the debate: Their [CRS's] attempt to support on this ground Einstein's accusation of "nostrification" goes much too far. A calm, non-confrontational reaction was soon provided by a thorough study of Hilbert's route to the "Foundations of Physics" (see also the relatively even-handed survey (Viz 01)). In the paper recommended by Todorov as calm and non-confrontational, Tilman Sauer concludes that the printer's proofs show conclusively that Einstein did not plagiarize Hilbert, stating that any possibility that Einstein took the clue for the final step toward his field equations from Hilbert's note [Nov 20, 1915] is now definitely precluded. Max Born's letters to David Hilbert, quoted in Wuensch, are cited by Todorov as evidence that Einstein's thinking towards general covariance was influenced by the competition with Hilbert. Todorov ends his paper by stating: Einstein and Hilbert had the moral strength and wisdom - after a month of intense competition, from which, in a final account, everybody (including science itself) profited - to avoid a lifelong priority dispute (something in which Leibniz and Newton failed). It would be a shame to subsequent generations of scientists and historians of science to try to undo their achievement. Anatoly Alexeevich Logunov on general relativity (2004) Anatoly Logunov (a former vice president of the Soviet Academy of Sciences and at the time the scientific advisor of the Institute for High Energy Physics) is the author of a book about Poincaré's relativity theory and coauthor, with Mestvirishvili and Petrov, of an article rejecting the conclusions of the Corry/Renn/Stachel paper. They discuss both Einstein's and Hilbert's papers, claiming that Einstein and Hilbert arrived at the correct field equations independently. Specifically, they conclude that: Their pathways were different but they led exactly to the same result. Nobody "nostrified" the other. So no "belated decision in the Einstein–Hilbert priority dispute", about which [Corry, Renn, and Stachel] wrote, can be taken. Moreover, the very Einstein–Hilbert dispute never took place. All is absolutely clear: both authors made everything to immortalize their names in the title of the gravitational field equations. But general relativity is Einstein's theory. Wuensch and Sommer (2005) Daniela Wuensch, a historian of science and a Hilbert and Kaluza expert, responded to Bjerknes, Winterberg and Logunov's criticisms of the Corry/Renn/Stachel paper in a book which appeared in 2005, wherein she defends the view that the cut to Hilbert's printer proofs was made in recent times. Moreover, she presents a theory about what might have been on the missing part of the proofs, based upon her knowledge of Hilbert's papers and lectures. She defends the view that knowledge of Hilbert's 16 November 1915 letter was crucial to Einstein's development of the field equations: Einstein arrived at the correct field equations only with Hilbert's help ("nach großer Anstrengung mit Hilfe Hilberts", "after great effort, with Hilbert's help"), but she nevertheless calls Einstein's reaction (his negative comments on Hilbert in the 26 November letter to Zangger) "understandable" ("Einsteins Reaktion ist verständlich") because Einstein had worked on the problem for a long time. According to her publisher, Klaus Sommer, Wuensch concludes that: This comprehensive study concludes with a historical interpretation.
It shows that while it is true that Hilbert must be seen as the one who first discovered the field equations, the general theory of relativity is indeed Einstein's achievement, whereas Hilbert developed a unified theory of gravitation and electromagnetism. In 2006, Wuensch was invited to give a talk at the annual meeting of the German Physics Society (Deutsche Physikalische Gesellschaft) about her views on the priority issue for the field equations. Wuensch's publisher, Klaus Sommer, in an article in Physik in unserer Zeit, supported Wuensch's view that Einstein arrived at some results not independently but from information in Hilbert's 16 November letter and from the notes of Hilbert's talk. While he does not call Einstein a plagiarist, Sommer speculates that Einstein's conciliatory 20 December letter was motivated by the fear that Hilbert might comment on Einstein's behaviour in the final version of his paper. Sommer claimed that a scandal caused by Hilbert could have done more damage to Einstein than any scandal before ("Ein Skandal Hilberts hätte ihm mehr geschadet als jeder andere zuvor"). David E. Rowe (2006) The contentions of Wuensch and Sommer have been strongly contested by the historian of mathematics and natural sciences David E. Rowe in a detailed review of Wuensch's book published in Historia Mathematica in 2006. Rowe argues that Wuensch's book offers nothing but tendentious, unsubstantiated, and in many cases highly implausible speculations. In popular works by famous physicists Wolfgang Pauli's Encyclopedia entry for the theory of relativity pointed out two reasons physicists did not consider Hilbert's derivation equivalent to Einstein's: 1) it required accepting the stationary-action principle as a physical axiom, and, more importantly, 2) it was based on Mie's unified field theory. In his 1999 article for Time magazine, which featured Einstein as Person of the Century, Stephen Hawking wrote: Kip Thorne concludes, in remarks based on Hilbert's 1924 paper, that Hilbert regarded the general theory of relativity as Einstein's: However, Kip Thorne also stated, "Remarkably, Einstein was not the first to discover the correct form of the law of warpage [. . . .] Recognition for the first discovery must go to Hilbert", based on "the things he had learned from Einstein's summer visit to Göttingen." This last point is also mentioned by Corry et al. Insignificance of the dispute As noted by the historians John Earman and Clark Glymour, "questions about the priority of discoveries are often among the least interesting and least important issues in the history of science." There was no real controversy between Einstein and Hilbert themselves: And: See also History of Lorentz transformations History of general relativity List of scientific priority disputes Multiple discovery Notes Citations References Works of physics (primary sources) [Ein05c]: Albert Einstein: Zur Elektrodynamik bewegter Körper, Annalen der Physik 17 (1905), 891–921. Received 30 June, published 26 September 1905. Reprinted with comments in [Sta89], pp.
276–306. English translation, with footnotes not present in the 1905 paper, available on the net. [Ein05d]: Albert Einstein: Ist die Trägheit eines Körpers von seinem Energiegehalt abhängig?, Annalen der Physik 18 (1905), 639–641. Reprinted with comments in [Sta89], Document 24. English translation available on the net. [Ein06]: Albert Einstein: Das Prinzip von der Erhaltung der Schwerpunktsbewegung und die Trägheit der Energie, Annalen der Physik 20 (1906), 627–633. Reprinted with comments in [Sta89], Document 35. [Ein15a]: Einstein, A. (1915) "Die Feldgleichungen der Gravitation", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 844–847. [Ein15b]: Einstein, A. (1915) "Zur allgemeinen Relativitätstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 778–786. [Ein15c]: Einstein, A. (1915) "Erklärung der Perihelbewegung des Merkur aus der allgemeinen Relativitätstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 799–801. [Ein15d]: Einstein, A. (1915) "Zur allgemeinen Relativitätstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 831–839. [Ein16]: Einstein, A. (1916) "Die Grundlage der allgemeinen Relativitätstheorie", Annalen der Physik, 49. [Hil24]: Hilbert, D., "Die Grundlagen der Physik", Mathematische Annalen, 92, 1924 - "meiner Theorie" quote on page 2 - online at Uni Göttingen - index of journal. [Lan05]: Langevin, P. (1905) "Sur l'origine des radiations et l'inertie électromagnétique", Journal de Physique Théorique et Appliquée, 4, pp. 165–183. [Lan14]: Langevin, P. (1914) "Le Physicien", in Henri Poincaré Librairie (Felix Alcan 1914), pp. 115–202. [Lor99]: Lorentz, H. A. (1899) "Simplified Theory of Electrical and Optical Phenomena in Moving Systems", Proc. Acad. Science Amsterdam, I, 427–443. [Lor04]: Lorentz, H. A. (1904) "Electromagnetic Phenomena in a System Moving with Any Velocity Less Than That of Light", Proc. Acad. Science Amsterdam, IV, 669–678. [Lor11]: Lorentz, H. A. (1911) Amsterdam Versl. XX, 87. [Lor14]:. [Pla07]: Planck, M. (1907) Berlin Sitz., 542. [Pla08]: Planck, M. (1908) Verh. d. Deutsch. Phys. Ges. X, p. 218, and Phys. ZS, IX, 828. [Poi89]: Poincaré, H. (1889) Théorie mathématique de la lumière, Carré & C. Naud, Paris. Partly reprinted in [Poi02], Ch. 12. [Poi97]: Poincaré, H. (1897) "The Relativity of Space", article in English translation. [Poi00]: . See also the English translation. [Poi02]: [Poi04]: English translation as The Principles of Mathematical Physics, in "The value of science" (1905a), Ch. 7–9. [Poi05]: [Poi06]: [Poi08]: [Poi13]: [Ein20]: Albert Einstein: "Ether and the Theory of Relativity", an address delivered on May 5, 1920, at the University of Leyden. [Sta89]: John Stachel (Ed.), The Collected Papers of Albert Einstein, volume 2, Princeton University Press, 1989. Further reading Nándor Balázs (1972) "The acceptability of physical theories: Poincaré versus Einstein", pages 21–34 in General Relativity: Papers in Honour of J.L. Synge, L. O'Raifeartaigh editor, Clarendon Press. Albert Einstein Theory of relativity Discovery and invention controversies
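For reference, the equations at the centre of the dispute can be summarised as follows (a sketch in modern notation, not taken from the sources above; sign conventions vary between sources and differ from the 1915 papers):

```latex
% Einstein's 11 November 1915 proposal, with the auxiliary assumption
% that the trace T of the energy-momentum tensor vanishes:
R_{\mu\nu} = -\kappa\, T_{\mu\nu}, \qquad T \equiv g^{\mu\nu} T_{\mu\nu} = 0
% The final 25 November 1915 field equations, with the trace term:
R_{\mu\nu} = -\kappa \left( T_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} T \right),
\quad\text{equivalently}\quad
R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R = -\kappa\, T_{\mu\nu}
% Hilbert's variational route: the gravitational part of the action is
% the Riemann curvature scalar R, and the field equations follow by
% variation with respect to the metric:
S = \int \left( R + L_{\text{matter}} \right) \sqrt{-g}\; \mathrm{d}^4 x
```

Note that the trace term has no effect on the vacuum equations (where T_{μν} = 0), which is why the perihelion calculation of 18 November was unaffected by its later addition.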
General relativity priority dispute
[ "Physics" ]
6,270
[ "Theory of relativity" ]
65,595,468
https://en.wikipedia.org/wiki/Ash%20Archive
The Ash Archive is a project founded in 2019 to restore ash trees to the landscape in England. English ash trees have experienced massive dieback since 2012 as a result of a fungal pathogen, Hymenoscyphus fraxineus. The archive contains over 3,000 trees, all of which were propagated from the shoots of trees that had demonstrated some resistance to the fungus. The archive was established with £1.9 million (about USD 2.5 million) in government funding, and followed a five-year project to identify ash trees that were resistant to the fungus. One of the final trees in the archive was planted in January 2020 by Nicola Spence, the Chief Plant Health Officer of the UK government. Spence said, "I'm delighted to acknowledge the successes of the Ash Archive project and welcome the International Year of Plant Health by planting an ash dieback-tolerant tree." The Ash Archive trees were planted by the Future Trees Trust at an unspecified location in the county of Hampshire. Propagated shoots came from trees in East Anglia. All the trees will be monitored for five years to identify those that are most resistant to the disease; these will form the basis of a future breeding program. References Environmental mitigation Reforestation
Ash Archive
[ "Chemistry", "Engineering" ]
249
[ "Environmental mitigation", "Environmental engineering" ]
65,596,601
https://en.wikipedia.org/wiki/JT-010
JT-010 is a chemical compound that acts as a potent, selective activator of the TRPA1 channel. It has been used to study the role of this receptor in the perception of pain, as well as its other actions, such as promoting the repair of dental tissue after damage. See also ASP-7663 PF-4840154 References Nitrogen mustards Thiazoles Amides Phenyl compounds Ethers Transient receptor potential channel agonists
JT-010
[ "Chemistry" ]
98
[ "Organic compounds", "Amides", "Functional groups", "Ethers" ]
65,596,698
https://en.wikipedia.org/wiki/AM-0902
AM-0902 is a drug which acts as a potent and selective antagonist for the TRPA1 receptor, and has analgesic and anti-inflammatory effects. References Purines Oxadiazoles 4-Chlorophenyl compounds Phenyl compounds
AM-0902
[ "Chemistry" ]
59
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
65,596,802
https://en.wikipedia.org/wiki/HC-030031
HC-030031 is a drug which acts as a potent and selective antagonist for the TRPA1 receptor, and has analgesic and anti-inflammatory effects. References Xanthines Acetamides
HC-030031
[ "Chemistry" ]
47
[ "Pharmacology", "Xanthines", "Medicinal chemistry stubs", "Alkaloids by chemical classification", "Pharmacology stubs" ]
65,597,539
https://en.wikipedia.org/wiki/Budget-balanced%20mechanism
In mechanism design, a branch of economics, a weakly-budget-balanced (WBB) mechanism is a mechanism in which the total payment made by the participants is at least 0. This means that the mechanism operator does not incur a deficit, i.e., does not have to subsidize the market. Weak budget balance is considered a necessary requirement for the economic feasibility of a mechanism. A strongly-budget-balanced (SBB) mechanism is a mechanism in which the total payment made by the participants is exactly 0. This means that all payments are made among the participants - the mechanism has neither a deficit nor a surplus. The term budget-balanced mechanism is sometimes used as a shorthand for WBB, and sometimes as a shorthand for SBB. Weak budget balance A simple example of a WBB mechanism is the Vickrey auction, in which the operator wants to sell an object to one of n potential buyers. Each potential buyer bids a value; the highest bidder wins the object and pays the second-highest bid. As all bids are positive, the total payment is trivially positive too. As an example of a non-WBB mechanism, consider its extension to a bilateral trade setting. Here, there is a buyer and a seller; the buyer has a value of b and the seller has a cost of s. Trade should occur if and only if b > s. The only truthful mechanism that implements this solution must charge a trading buyer the cost s and pay a trading seller the value b; but since b > s, this mechanism runs a deficit. In fact, the Myerson–Satterthwaite theorem says that every Pareto-efficient truthful mechanism must incur a deficit. McAfee developed a solution to this problem for a large market (with many potential buyers and sellers): McAfee's mechanism is WBB, truthful and almost Pareto-efficient - it performs all efficient deals except at most one. McAfee's mechanism has been extended to various settings, while keeping its WBB property. See double auction for more details. Strong budget balance In a strongly-budget-balanced (SBB) mechanism, all payments are made between the participants themselves. An advantage of SBB is that all the gain from trade remains in the market; thus, the long-term welfare of the traders is larger and their tendency to participate may be higher. McAfee's double-auction mechanism is WBB but not SBB - it may have a surplus, and this surplus may account for almost all the gain from trade. There is a simple SBB mechanism for bilateral trading: trade occurs iff b > s, and in this case the buyer pays (b+s)/2 to the seller. Since the payment goes directly from the buyer to the seller, the mechanism is SBB; however, it is not truthful, since the buyer can gain by bidding b' < b and the seller can gain by bidding s' > s. Recently, some truthful SBB mechanisms for double auction have been developed. Some of them have been generalized to multi-sided markets. See also Balanced budget - a budget in which revenues are equal to expenditures Government budget balance - a financial statement presenting the government's proposed revenues and spending for a financial year. Balanced budget amendment - a constitutional rule requiring that a government cannot spend more than its income. References Mechanism design Auction theory
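A minimal sketch (not from the article's sources) of the simple SBB bilateral-trade mechanism described above, illustrating both the strong-budget-balance property and the incentive to misreport:

```python
# Minimal sketch of the simple SBB bilateral-trade mechanism: trade
# occurs iff the buyer's bid exceeds the seller's bid, and the buyer
# pays the midpoint price directly to the seller.
def sbb_bilateral_trade(b: float, s: float):
    """Return (trade, buyer_pays, seller_receives) for bids b (buyer) and s (seller)."""
    if b > s:
        price = (b + s) / 2
        # buyer_pays == seller_receives, so total net payment is 0: SBB.
        return True, price, price
    return False, 0.0, 0.0

# The mechanism is not truthful: a buyer with true value 10 facing s = 4
# pays 7 when bidding truthfully, but only 5.5 when shading to b' = 7.
print(sbb_bilateral_trade(10, 4))  # (True, 7.0, 7.0)
print(sbb_bilateral_trade(7, 4))   # (True, 5.5, 5.5)
```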
Budget-balanced mechanism
[ "Mathematics" ]
708
[ "Game theory", "Mechanism design", "Auction theory" ]
65,601,334
https://en.wikipedia.org/wiki/Overcategory
In mathematics, specifically category theory, an overcategory (also called a slice category), as well as an undercategory (also called a coslice category), is a distinguished class of categories used in multiple contexts, such as with covering spaces (espace étalé). They were introduced as a mechanism for keeping track of data surrounding a fixed object X in some category C. There is a dual notion of undercategory, which is defined similarly. Definition Let C be a category and X a fixed object of C. The overcategory (also called a slice category) C/X is an associated category whose objects are pairs (A, f) where f : A → X is a morphism in C. Then, a morphism between objects (A, f) and (B, g) is given by a morphism h : A → B in the category C such that the triangle commutes, i.e. g ∘ h = f. There is a dual notion called the undercategory (also called a coslice category) X/C, whose objects are pairs (A, f) where f : X → A is a morphism in C. Then, morphisms between objects (A, f) and (B, g) in X/C are given by morphisms h : A → B in C such that the triangle commutes, i.e. h ∘ f = g. These two notions have generalizations in 2-category theory and higher category theory, with definitions either analogous or essentially the same. Properties Many categorical properties of C are inherited by the associated over- and undercategories for an object X. For example, if C has finite products and coproducts, it is immediate that the categories C/X and X/C have these properties, since the product and coproduct can be constructed in C, and through universal properties there exists a unique morphism either to or from X. In addition, this applies to limits and colimits as well. Examples Overcategories on a site Recall that a site is a categorical generalization of a topological space first introduced by Grothendieck. One of the canonical examples comes directly from topology: the category Open(X) whose objects are open subsets U of some topological space X, and whose morphisms are given by inclusion maps. Then, for a fixed open subset U, the overcategory Open(X)/U is canonically equivalent to the category Open(U) for the induced topology on U. This is because every object in Open(X)/U is an open subset contained in U. Category of algebras as an undercategory The category of commutative R-algebras is equivalent to the undercategory R/CRing for the category CRing of commutative rings. This is because the structure of an R-algebra on a commutative ring A is directly encoded by a ring morphism R → A. If we consider the opposite category, we obtain an overcategory of affine schemes over Spec(R). Overcategories of spaces Another common class of overcategories considered in the literature are overcategories of spaces, such as schemes, smooth manifolds, or topological spaces. These categories encode objects relative to a fixed object, such as the category Sch/S of schemes over a fixed base scheme S. Fiber products in these categories can be considered intersections, given that the objects are subobjects of the fixed object. See also Comma category References Category theory
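The two commuting triangles in the definition can be drawn explicitly; a minimal LaTeX sketch using the tikz-cd package (the overcategory C/X on the left, the undercategory X/C on the right):

```latex
% Commuting triangles defining a morphism h : (A, f) -> (B, g)
% in the overcategory C/X (left) and the undercategory X/C (right).
\documentclass{standalone}
\usepackage{tikz-cd}
\begin{document}
\begin{tikzcd}
A \arrow[rr, "h"] \arrow[dr, "f"'] & & B \arrow[dl, "g"] \\
& X &
\end{tikzcd}
\qquad
\begin{tikzcd}
& X \arrow[dl, "f"'] \arrow[dr, "g"] & \\
A \arrow[rr, "h"'] & & B
\end{tikzcd}
\end{document}
```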
Overcategory
[ "Mathematics" ]
610
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
65,601,895
https://en.wikipedia.org/wiki/Cynthia%20Hipwell
M. Cynthia Hipwell is an American nanotechnologist and tribologist who worked in the electronic storage and food and materials processing industries before becoming a professor of mechanical engineering at Texas A&M University, where she has been TEES Eminent Professor and is currently Oscar S. Wyatt, Jr. '45 Chair II Professor. Education and career Hipwell studied mechanical engineering as an undergraduate at Rice University, and completed her Ph.D. at the University of California, Berkeley. After completing her doctorate, she worked for electronic storage company Seagate Technology, and later for food and materials processing company Bühler, Inc., where she became Vice President of Engineering. She moved to Texas A&M University in 2017. Recognition Hipwell was elected a member of the National Academy of Engineering in 2016, "for leadership in the development of technologies to enable areal density increases in hard disk drives". She is also a member of the National Academy of Inventors and the Academy of Medicine, Engineering and Science of Texas. References External links The INVENT lab at Texas A&M, directed by Hipwell Year of birth missing (living people) Living people American mechanical engineers American women engineers Tribologists Rice University alumni University of California, Berkeley alumni Texas A&M University faculty Members of the United States National Academy of Engineering American women academics 21st-century American women
Cynthia Hipwell
[ "Materials_science" ]
275
[ "Tribology", "Tribologists" ]
65,602,236
https://en.wikipedia.org/wiki/RBC%20EXT8
RBC EXT8 is a globular cluster in the galaxy Messier 31, located about 27 kpc from the galaxy's center. Its spectral lines reveal an iron abundance about 800 times lower than that of the Sun. Its position is right ascension 00h 53m 14.53s, declination +41° 33′ 24″ (J2000 equinox), according to the Revised Bologna Catalogue. Its magnitude is 15.79, and it is about 15.5″ across. References Andromeda (constellation) Globular clusters Andromeda Galaxy
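On the standard logarithmic abundance scale, an iron level 800 times below solar corresponds to a metallicity of roughly [Fe/H] ≈ −2.9; a one-line check (a sketch, not a figure from the catalogue):

```python
# Convert "iron 800 times lower than in the Sun" to the logarithmic
# [Fe/H] metallicity scale, where 0 corresponds to the solar abundance.
import math

fe_fraction_of_solar = 1 / 800
fe_h = math.log10(fe_fraction_of_solar)
print(f"[Fe/H] = {fe_h:.2f}")  # -> [Fe/H] = -2.90
```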
RBC EXT8
[ "Astronomy" ]
115
[ "Andromeda (constellation)", "Constellations" ]
60,610,770
https://en.wikipedia.org/wiki/Timed%20comments
Timed comments are a feature offered by some audio and video players and websites where people can add comments associated with specific times in an audio or video file. These comments are then displayed in the player when that time is reached during playback. Timed comments differ from annotations, captions, and subtitles in an important respect: they can be added by viewers, not just video creators, and they include the identity of the person adding the comment. Examples SoundCloud, an audio distribution platform and music sharing website: timed comments can be added at a specific minute and second mark in a track, and are displayed when playback reaches that point. Users can see each other's comments. Viki, a video streaming website that hosts a number of television shows and movies from Korea, Japan, China, and Taiwan. Viki's timed commenting system is one of its distinguishing features. Viddler, a video platform used for training videos. References Streaming media systems
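A minimal sketch of how a player might store and surface timed comments (the field names here are illustrative and do not reflect any particular site's API):

```python
# Minimal sketch of timed comments: each comment is keyed to a playback
# position (in seconds) and surfaced when playback reaches that point.
# Field names are illustrative, not any particular platform's API.
from dataclasses import dataclass

@dataclass
class TimedComment:
    author: str       # commenter identity, shown alongside the comment
    at_seconds: int   # playback position the comment is attached to
    text: str

def comments_to_show(comments, window_start: int, window_end: int):
    """Return comments whose anchor falls in [window_start, window_end)."""
    return [c for c in comments if window_start <= c.at_seconds < window_end]

track = [
    TimedComment("listener1", 42, "love this drop"),
    TimedComment("listener2", 90, "key change here"),
]
# During playback, poll once per second (or per rendered frame):
print(comments_to_show(track, 42, 43))  # surfaces listener1's comment
```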
Timed comments
[ "Technology" ]
202
[ "Streaming media systems", "Telecommunications systems", "Computer systems" ]
60,611,549
https://en.wikipedia.org/wiki/Cremation%20in%20Japan
Cremation in Japan was originally practiced by monks seeking to emulate the cremation of the Buddha. Virtually all deceased are now cremated in Japan – as of 2012, it had the highest cremation rate in the world, at over 99.9%. The Meiji government attempted to ban the practice in the 19th century, but the ban was in effect for less than two years. Religion Cremation in Japan was originally practiced by monks inspired by the Buddha, who gave detailed instructions regarding his own cremation. It was therefore seen as a way of accruing spiritual merit and getting closer to Buddhahood. Cremation also exemplifies the Buddhist teaching of impermanence. Referred to as kasō, which translates to 'fire burial', it is only one of several options mentioned in Buddhist literature, the others being earth burial (dosō), water burial (suisō), open-air burial (fusō, or 'wind burial') and forest burial (rinsō). Water, wind and forest burial accrue the most merit, followed by cremation and then earth burial, which accrues the least merit as it does not offer the body for the benefit of wild plants and animals. Buddhist relics have been found in the ashes of spiritually meritorious individuals. Most Japanese Christians cremate their dead as well. The issue of limited burial space in Japan is felt particularly by Japanese Muslims, who do not cremate their dead. History There is evidence of cremation from the prehistoric Jōmon period. The first notable cremation in Japanese history was that of the Buddhist monk Dōshō in 700 AD. Cremation spread rapidly from this time throughout Japan. Excavations have revealed roughly 2,000 examples of cremation stretching from northern Kyūshū to northeastern Japan (Iwate), with the highest concentrations in the Kinai region of the ancient capitals, the Kantō region of eastern Japan, and northern Kyūshū. The cremation of Empress Jitō in 703 AD began an aristocratic tradition which remained generally unbroken until the full-body burial of Emperor Gokomyo in 1654. Even then, his burial was preceded by a symbolic burning. Such false cremations followed by discreet full-body burials became less common with Emperor Komei's full-body burial in 1867. Towards the end of the Heian period (794–1185), cremation in Japan became a distinctly Buddhist practice, and Buddhist temples came to own or maintain most crematoria. The cost of firewood largely limited cremation to the nobility until the Kamakura period (1185–1333), when it spread to the common people. During the Edo/Tokugawa period (1603–1868), in modern-day Akita prefecture, each household in a certain village would contribute two bundles of straw towards the cremation of a recently deceased member of the community. Cremation was especially common among Jōdo Shinshū or Shin Buddhists, whose founder Shinran encouraged cremation. Popularity amongst other schools of Buddhism varied. The compactness of the ashes resulting from cremation contributed to the rise of ancestral or family graves. Crematory workers were generally poor people called 'onboyaki', a term with negative connotations. Anti-cremation movement The notion of cremation as the greatest sin against filial piety originated in China, where it was used as a punishment during the Song dynasty (960–1279). Japanese Confucians constituted the majority of vocal opponents, claiming that the dead should be treated as if they were still living. This justified ignoring a parent's misguided wish to be cremated and giving them a full-body burial instead.
The Meiji government (1868–1912) sought to replace Buddhist influences on national culture with Shintoist influences. For instance, it used Shinto and Confucian texts to design a new kind of Shinto funeral in an effort to replace Buddhist funerals. Meiji officials continually stressed that cremation was a foreign, Indian practice, brought to Japan via Buddhism. In 1873, Tokyo police relocated crematoria beyond city limits, citing the smell as detrimental to public health. Shinto leaders argued that to approve the relocation of crematoria was to implicitly condone cremation, leading the Meiji government to ban the practice completely on 18 July 1873. Pro-cremation movement During the ban, mourners carried out false cremations by burning firewood atop graves. Advocates argued that cremation was not unfilial, as the compactness of the resulting ashes made it easier for people to fulfil the filial task of interring family members together in ancestral graves. The Meiji government was less lenient than the contemporary Qing government in China, which made an exception for those who had died far from home. Advocates also argued that burning bodies was better than letting them rot, citing European studies on the detrimental effect of decomposing bodies on public health, as well as the fact that cremation was being promoted in the West as a hygienic practice. The ban also affected the practice of full-body burial. The local government in Tokyo planned to use temple grounds for extra burial space, to accommodate the increased full-body burials under the ban. However, the Finance Ministry argued that urban graveyards were a waste of potentially profitable, taxable land. The Council of State decided to ban full-body burial within Tokyo city limits, making no exception for those wishing to be buried in ancestral graves, even those on temple or personal property. After less than two years in effect, the ban was repealed in May 1875. After the repeal In 1878, English traveller Isabella Bird visited a new Japanese crematory equipped with smokestacks, which minimised the impact of crematory smoke on the public. Her description of the facilities was disseminated by Western cremationists. In 1880, German cremationists requested to view the plans of another crematory modernised with ventilation systems and a lime filter. In 1884, the British government also requested to view the plans, and completed England's first crematory a year later. An 1897 law mandated the cremation of individuals who had died of communicable infections. A public crematorium was built in every Japanese municipality in the 1910s and 1920s. From the 1920s, firewood was gradually replaced with fossil fuels, which produced less smoke and odour, allowing cremation to happen during the daytime. Because families no longer had to travel back to the crematory the next day to collect the ashes, the Buddhist service traditionally held on the seventh day after death could, for convenience, be held on the same day as both the funeral and the cremation. After the Great Kanto Earthquake of 1923, the majority of those who had been killed in Tokyo were cremated by Buddhist priests at an old military clothing depot, one of twelve areas designated for the cremation of victims. Postcards circulated depicting piles of ashes and bone fragments on the ground beside piles of personal effects removed before cremation. Present day Cremation is now mandatory in most parts of Japan.
After death, 24 hours must pass before cremation can take place, unless the cause of death was a communicable infection. The ashes, which contain bone fragments (okotsu), can be pulverised into a fine powder for an additional cost. Local governments own and maintain most crematoria, and thus profit minimally from cremation costs. One public crematorium in Yokohama charged ¥12,000 for residents and ¥50,000 for visitors in 2016. A shortage of crematoria as Japan's population ages means that families can wait up to four days before the deceased can be cremated. Temporary mortuaries, commonly called 'hotels', are now available for families to store the deceased for around ¥9,000 a night. Some temples also offer this service. In the aftermath of the Great East Japan Earthquake of 2011, the bodies of 2,000 victims were temporarily buried due to fuel shortages in the affected area. Many were exhumed by relatives before the expected two years had passed, despite their semi-decomposed state, because mourning could not be complete without cremation. Cremation became more common than full-body burial in the 1930s, and became more common in all areas of Japan in the 1970s. As of 2010, Japan had a cremation rate of 99.94%. Cremation is less common in rural areas and in the Okinawan archipelago, where the bones of the decomposed body are exhumed, washed, and reburied (senkotsu). Since the 1990s, there has been the option to pulverise all remains to a fine powder and incorporate them into temoto kuyohin, or 'close-at-hand funerary items', such as memorial diamonds, crystals and ceramics. As of 2016, one third of cremations in Tokyo took place without a funeral. This cheaper and simpler option is called chokuso, or 'direct cremation'. In 2012, Emperor Akihito and Empress Michiko stated that they wish to be cremated due to concerns over limited space in the imperial graveyard. Process of cremation The family may give a monetary gift to the cremator in charge before cremation begins. A Buddhist priest chants a Buddhist scripture, called a sutra, as cremation begins. The chief mourner presses the button to ignite the furnace, or two chief mourners press it together. This action mimics the ignition of the 'death flower', a paper flower traditionally placed atop graves. Igniting the flower or furnace marks the chief mourner's relationship to the deceased as their primary heir and caretaker. Attendees wait in the crematorium as the body is cremated, which takes about 60 to 90 minutes. Lower temperatures of 500 to 600 °C are used than in Western cremation, in order to retain some bone as fragments. The remains are then placed on a metal tray and moved to the ash collecting room (shū-kotsu-shitsu). Some mourners choose to consume some of the ashes, either as they are or mixed with water. Transferring the ashes into a cinerary urn is traditionally done in male/female pairs, as a precaution against the contaminating nature of death and against accidentally dropping the bones. Mourners approach in order of their relationship to the deceased, and pass bone fragments from one pair of chopsticks to the other. Using chopsticks in this way outside of a bone-picking ceremony (kotsuage) is typically taboo. The chopsticks are longer than those used for eating, and one is wooden while the other is bamboo. Sometimes only the left hand is used, or the left hand is used initially before switching to the right hand. Children are not exempt from participation in kotsuage.
The bone fragments are transferred in order, from those of the feet to those of the skull, so that the deceased will be upright within the urn. It is often necessary for the cremator to break the skull so that it will fit into the urn. The second cervical vertebra is placed in the urn last by the closest relative. Called 'nodobotoke', or 'throat Buddha', it resembles a meditating Buddha. In Eastern Japan, all of the remains are transferred into the urn, whereas in Western Japan, only some of the remains are collected. Mourners often only transfer some of the remains, while crematorium staff finish the task. The urn is then sealed, and placed in a box which is covered with cloth. As for full-body burials, permits for cremation are issued by a city office. During the Heian period, the cremation site was marked by a 6-foot-tall fence, made of cypress bark or bamboo. Its rough construction deliberately distinguished it as a structure meant for the dead. It was also built without digging supports into the ground, to avoid angering the earth deities. At Emperor Go-Ichijō's cremation in 1036, and that of other royals and aristocrats around this time, a second fenced area was built within the first, increasing the religious and imperial sanctity of the inner space. A second fence also further protected the living and the dead from each other. Emperor Go-Ichijō's cremation pit contained straw mats, cloth, silk, and kindling. The various layers protected the dead from any angered earth deities. A ladle, a broom, and a bucket of water were placed at each corner. Pet cremation A Buddhist priest may chant sutras for a pet if it is cremated at a cemetery owned by a temple. Some opt for mass cremation, where the ashes are not collected by the owners but interred by the cemetery. Object cremation Some Buddhists believe that non-sentient objects also have the Buddha-nature, or the potential to attain Buddhahood. Some Japanese people thus express their gratitude towards certain significant material possessions by ceremonially cremating them. Commonly cremated objects include traditional Japanese tools such as needles, writing brushes, tea whisks, and paper umbrellas. Shoes, hairdressing scissors, hats, semiconductors, clocks, watches, and dolls have been cremated as well. One notable ceremony is the annual cremation of wooden chopsticks at Buddhist temples or Shinto shrines for Chopsticks Commemoration Day on the 4th of August. References Japan Death in Japan
Cremation in Japan
[ "Chemistry" ]
2,786
[ "Cremation", "Incineration" ]
60,611,843
https://en.wikipedia.org/wiki/Architecture%20of%20Bolivia
The architecture of Bolivia is closely related to its history, culture and religion. Bolivian architecture has been constantly changing and progressing over time. Subject to terrain and high altitudes, most of Bolivia's Pre-Columbian buildings were built for housing, mainly influenced by Bolivian indigenous culture. The arrival of Spanish settlers brought many European-style buildings, and the Spaniards began planning and building large cities. After Independence, the architectural style became Neoclassical and many churches and government buildings were built. In modern Bolivia, as in many countries, skyscrapers and post-modern buildings dominate, alongside distinctive styles of architecture built to attract tourists. Before the arrival of Spanish settlers, the architecture of the Tiwanaku Empire and the Inca Empire was the main representative of the architectural style of Pre-Columbian Bolivia. It reflects not only the culture of the respective empires, but also Bolivian indigenous culture. During the Spanish colonial period, when the Spanish colonists built large cities, they brought not only the Baroque style from Europe, but also new building materials and religions. At the same time, Bolivia's original architectural style combined with the Baroque style to produce a new style, known as the Andean Baroque style. In modern times, as in many countries, Bolivian architecture is dominated by modernism and postmodernism. To meet the needs of tourism and benefit from the unique geographical environment of Bolivia, some other styles of architecture have emerged, better showing the diversity of Bolivian architecture. History Tiwanaku empire (400–1000) Tiwanaku is located near the southern shore of Lake Titicaca at an altitude of 3,850 meters. Most of the ancient city sites were built from adobe, and although they are now covered by modern towns, representative stone buildings survive in protected archaeological areas. It was a prosperous and planned city between 400 AD and 900 AD. The city has impressive stone carvings and complex underground drainage systems that controlled the flow of rain. At the same time, there are many buildings related to religion and political structure, such as the Semi-underground Temple, the terraced platform mound Akapana, the Kalasasaya temple and the Palace of Putuni. One of the most spectacular monuments in Tiwanaku is Akapana. This is a terraced platform mound, initially a stack of seven superimposed platforms with a stone retaining wall of up to 18 meters, of which now only the lowest level and a small part of the rest are well preserved. According to investigations, it was clad in sandstone and surrounded by a well-preserved drainage system. Located to the north is the Kalasasaya temple, which is believed to have been used as an observatory. Its most representative features are the two huge sun gates cut from andesite, which are among the most important representatives of Tiwanaku art. There are niches at the sides of the door, and a well-designed bas-relief above it. The Semi-underground Temple also has beautiful carvings. Its walls are made up of 48 red sandstone columns with many carved stones set into them. It is clear that the city's former inhabitants had superb techniques for carving and polishing various stone materials, which, combined with their architectural techniques, make the architecture of the Tiwanaku empire highly recognizable and representative. 
The ruins of the ancient city of Tiwanaku were listed on the World Heritage List by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2000. Inca empire (1438–1471) Bolivia became part of the Inca Empire in the 13th century. Because the Inca Empire occupied Bolivia through invasion, many of the buildings of this period began to have military uses, many fortresses and defensive walls appeared, and the buildings of this period were more multifunctional. The archaeological site of Incallajta covers an area of 67 hectares and is one of the main Inca sites in Bolivia. Incallajta can be considered a complex similar to the Pocona fortress, with nearly forty buildings and a defensive wall; the large space that it covers is characteristic of Inca architecture. It is an Inca complex with habitational, defensive, military, religious and agricultural buildings, as well as towers for astronomy. The archaeological site of Incallajta was the largest and most important administrative centre in the region. In 2003, it was submitted to UNESCO's Tentative List under the Global Strategy. Spanish colonial period (1538–1825) With the arrival of Spanish settlers, almost all Bolivian architecture changed. They brought the Baroque style, new building materials and considerable wealth. With massive construction, the architectural landscape of Bolivia changed. For Indigenous people, the biggest change in housing was the introduction of adobe to replace the previous mud and clay mixture. Over time, the traditional European style of house with a courtyard and red tile roof became more and more popular. The more obvious change, however, was that Spanish settlers began to build large cities. These cities were usually centered around a cathedral and palace built in the "Andean Baroque style." This style is a combination of the Baroque style and the original architectural style of Bolivia, although in most cases it looks almost identical to the original. During the period when Bolivia was colonized by Spain, this style could be found in most cities in Bolivia, the most representative being the City of Potosí. The City of Potosí was the largest supplier of silver to Spain during the Spanish colonial period. It was even hailed as the world's largest industrial complex in the 16th century, with a major impact on the Spanish economy and the global economy. In the 17th century, about 160,000 Spanish colonists lived in the city. In other words, the city is a good example of Bolivian architecture in Spanish colonial times. It not only had the various industrial infrastructures required for mining, but also many buildings related to daily life. There is no doubt that the Spanish colonists established European-style churches and high-end residences. In this city, many buildings adopted the “Andean Baroque style”, which blends Indian style. Potosí also had a lasting impact on the architecture of central Bolivia and the Andes by spreading its architectural style. The City of Potosí was listed on the World Heritage List by UNESCO in 1987. Another representative city built by the Spanish settlers is the Historic City of Sucre. This is a city built in the south-central part of Bolivia in the first half of the 16th century. The various buildings in the city are well preserved, blending Latin American and various European architectural styles. Sucre's predecessor was La Plata, a silver town founded by the Spanish settlers in 1538. 
It was representative of the indigenous culture of Charcas. To commemorate the independence leader Antonio José de Sucre, it was designated as the first capital of Bolivia and renamed Sucre. This historic city was designed according to a simple urban plan, with a checkerboard pattern of streets, similar to other cities established by the Spanish settlers in the Americas during the same period. Since the wealth of the nearby City of Potosí supported the economy of La Plata, La Plata was the center of justice, religion and culture in the region from its establishment. In La Plata, the Audiencia of Charcas was the prototype of the current Supreme Court. As a cultural centre, La Plata had many universities and royal colleges, such as the University of Saint-Francois-Xavier, the Royal Academia Carolina, and San Isabel de Hungria Seminario. These buildings are representative of the fusion of European architectural styles with the local architecture of Latin America. In Sucre, the most representative architectural style is, of course, religious architecture, seen in a series of cathedrals and churches built from the 16th century onwards, such as San Lázaro, San Francisco, Santo Domingo and the Metropolitan Cathedral. Over the 250-year span of construction, the integration of the architectural styles of the two continents became more and more mature. One of the most important buildings in Bolivia, the Casa de la Libertad, was built in 1621 as part of the Convent of the Jesuits, and it was there that Bolivia's independence was declared. The 18th-century architecture best reflects the local style, similar to structures of the same period in the City of Potosí. Architecture in Sucre is a complete display of the architectural styles brought from Europe and their blend with local architectural styles and traditions in Bolivia, including the architectural traditions and styles of the Renaissance, Mudéjar, Gothic, Baroque and Neoclassical periods. The Historic City of Sucre was listed on the World Heritage List by UNESCO in 1991. After independence (1825–1982) After Bolivia gained independence in the early 19th century, its architectural style changed in general. The style became Neoclassical, much like that in Europe, but it also retained the characteristics of the early courtyard houses. During this time, a large number of new churches and government buildings were built in Bolivia. In the early 20th century, however, due to war, social unrest and economic depression, there was little development in Bolivia's architecture except for the necessary government buildings and churches. Modern Bolivia (1982–present) In the late 20th century and early 21st century, a large number of impressive buildings were built again in Bolivia. As in many countries around the world, modern Bolivian architecture is dominated by modernism and postmodernism. Skyscrapers have risen in many cities and gradually come to occupy the skyline. Some representative cities are San Jorge, La Paz, Santa Cruz and Cochabamba. The top 10 skyscrapers in Bolivia are almost all in La Paz and Santa Cruz. The tallest skyscraper in Bolivia is WTC TOYOSA Tower 1 in La Paz, which is 228 meters tall and used for offices. At the same time, with the development of science and technology, new techniques can be applied to buildings, especially to repair old or damaged ones. For example, 3D printing is a new technology used for repairing the ruins of Tiwanaku. 
Some parts of the Tiwanaku ruins were damaged through the destruction wrought by the Spanish colonists. One study therefore converted documentary records into 3D form, allowing researchers to quickly test combinations and try out different interpretations of earlier architectural styles. Other styles of architecture in Bolivia In Bolivia, there are other unique architectural styles arising from historical, cultural and geographical factors. Over 15 km from Uyuni, in southwestern Bolivia, lies the world's largest salt flat, Salar de Uyuni. In order to attract tourists and provide them with convenience and comfort, and in keeping with the local geographical environment, the world's first salt hotel, the Palacio de Sal, was built in 1998. This was an almost 100% salt hotel, built from solid salt bricks taken from Salar de Uyuni. The hotel incorporated salt into every detail of the design to better blend into the dramatic and spectacular environment, bringing visitors distinctive experiences and memories. The infrastructure and furniture in the hotel, such as tables, chairs and other furnishings, were also made of solid salt bricks from Salar de Uyuni. In public areas, carefully carved salt sculptures by Bolivian artists were equally eye-catching. The white environment is easily reminiscent of white minimalism; at the same time, to keep the salt environment from feeling too stark, deep red wood brought warm colors. Combining walls with alcoves, bedrooms with domed salt-brick ceilings and a cathedral-like vestibule, the hotel gave the impression of a style both modern and local Andean. Unfortunately, the hotel had to be demolished in 2002 due to environmental pollution caused by poor management. A new hotel was built on the eastern bank of Salar de Uyuni. Fortunately, the new hotel is still named Palacio de Sal, and its infrastructure and furniture are still made of solid salt bricks from Salar de Uyuni, but in order to comply with government standards, the sanitary system was redesigned. In the new hotel, as in the old one, guests are forbidden to lick the walls, in order to prevent the salt blocks from collapsing. Local architect Freddy Mamani designed an emerging architectural style known as Neo-Andean architecture. On the exteriors of his buildings, similar pigments are matched with each other for a colourful effect, while large glass panels are placed on the exterior walls. In a high-altitude city consisting of bare brick and monochrome, this clearly attracts attention. Neo-Andean architecture rejects the minimalist and Baroque styles preferred by Western traditional architects and marks the “decolonization of symbolic order”. For Mamani, architecture can promote Bolivian culture, showing and maintaining the roots and identity of Bolivians themselves. References Architectural history Bolivian art Bolivia
Architecture of Bolivia
[ "Engineering" ]
2,571
[ "Architectural history", "Architecture" ]
60,611,934
https://en.wikipedia.org/wiki/Daniel%20McKinsey
Daniel Nicholas McKinsey is an American experimental physicist. McKinsey is a leader in the field of direct searches for dark matter interactions, and serves as Co-Spokesperson of the Large Underground Xenon experiment and as an executive committee member of the LUX-ZEPLIN experiment. He serves as Director and Principal Investigator of the TESSERACT Project, and is also The Georgia Lee Distinguished Professor of Physics at the University of California, Berkeley. Biography Daniel N. McKinsey joined the University of California, Berkeley Physics Department faculty in July 2015. He received a B.S. in Physics with highest honors at the University of Michigan in 1995. His Ph.D. was awarded by Harvard University in 2002, with a thesis on the magnetic trapping, storage, and detection of ultracold neutrons in superfluid helium. His postdoctoral research was performed at Princeton University, and in 2003 he joined the Yale University physics department, where he was promoted to Full Professor in 2014. He was awarded a Packard Fellowship in Science and Engineering and an Alfred P. Sloan Research Fellowship, and served on the 2013-2014 Particle Physics Project Prioritization Panel (P5). Research interests McKinsey's research centers on non-accelerator particle physics, particle astrophysics, and low temperature physics. In particular, his work is on the development, construction, and operation of new detectors using liquefied noble gases, which are useful in looking for physics beyond the Standard Model. Applications include the search for dark matter interactions with ordinary matter, searches for neutrinoless double beta decay, and the measurement of the low energy solar neutrino flux. He is especially interested in the physics of the response of liquefied noble gases to particle interactions, the calibration of these detectors so as to understand their response, and the overall development of new experimental techniques for reaching sensitivity to extremely rare, low-energy particle interactions. Other interests include the use of liquid xenon for gamma-ray imaging, and the visualization of turbulence in superfluid helium. References Dark matter University of California, Berkeley faculty University of Michigan College of Literature, Science, and the Arts alumni Harvard University alumni Princeton University alumni Yale University faculty Alfred P. Sloan Prize winners American particle physicists American astrophysicists Year of birth missing (living people) Living people Fellows of the American Physical Society
Daniel McKinsey
[ "Physics", "Astronomy" ]
478
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Unsolved problems in physics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
60,612,651
https://en.wikipedia.org/wiki/Cannabis%20and%20sex
Human consumption of cannabis (commonly known as marijuana, pot or weed) is widely thought to enhance sexual pleasure, but there is limited scientific research on the relationship between cannabis and sex, in part due to U.S. drug policies that focus on prohibition. Effects of cannabis are difficult to study because sexual arousal and functioning are extremely complex and differ among individuals, and because cannabis affects people differently. Both men and women report greater sexual pleasure after having consumed cannabis, but there is no scientific evidence of the effects on the physiological components of the sexual response cycle when using the drug. Research As of 2010, research on the effects of cannabis on sex in humans is limited to self-report studies. This type of study has disadvantages because it requires people to accurately remember how much they consumed and its effects, leaving researchers unable to verify responses. In these studies, the majority of people who consumed cannabis before sex reported they experienced greater pleasure than those without it. Researchers believe this reported increase in sexual pleasure is likely a result of the drug's effects on the senses. In particular, it commonly makes users feel more relaxed. Some research says the amount of cannabis consumed affects one's sexual experience. In one study, 59% of users thought sexual pleasure was enhanced after smoking one joint, though only 39% thought consuming more than one joint provided any further enhancement; and large doses of cannabis have been used in India as a sexual depressant. It is not clear whether cannabis consumption affects the quality of orgasms; over half of male consumers, as well as a lower percentage of female consumers, reported it enhances their orgasms. In a small study published in 1979, 84 graduate students, the majority of whom were men and identified as "experienced smokers", believed cannabis increased the intensity of orgasms and should be considered an aphrodisiac. Some more-recent studies said orgasms are improved with cannabis use. Studies on the effects of cannabis consumption on sex have shown few other significant physical improvements. In 1979, Masters and Johnson completed a five-year-long study with a sample size of 800 men and 500 women aged between 18 and 30. In this study, men reported no improvements in maintaining erections or any increase in penile firmness. Women reported no increase of vaginal lubrication. A study in 2017 in the Journal of Sexual Medicine looked at data from the large, nationally representative National Survey of Family Growth and included more than 28,000 women and nearly 23,000 men. It reviewed survey responses on the frequency of cannabis consumption and intercourse in the four weeks prior to the survey. It found women who consumed cannabis daily had an average of 7.1 sexual encounters in the previous four weeks compared with 6 for those who never consumed it. Men who consumed cannabis daily reported having 6.9 sexual encounters on average compared with 5.6 in those men who never consumed it. There is evidence of the negative effects of cannabis use during sex. Some studies show a correlation between chronic cannabis use and reduced testosterone levels in men. It has been found that heavy use of cannabis decreases the sperm count of healthy men, though this reduction can be reversed. Habitual use of cannabis is also linked to decreased sexual performance while increasing sexual arousal. 
Psychotropic mechanism The effects of cannabis begin as a chemical process in the brain in which the neural communication network becomes altered. Tetrahydrocannabinol's (THC) chemical structure is similar to that of anandamide, which is responsible for sending chemical messages between neurons throughout the nervous system. The brain areas that are affected influence memory, pleasure, thinking, concentration, movement, coordination, and sensory and time perception. These areas include the amygdala, hippocampus, basal ganglia, and prefrontal cortex. Within those areas are cannabinoid receptors that make up a part of the endocannabinoid system. Such effects within the nervous system may vary among individuals. Cannabis influences experiences of sexual pleasure and memory in distinctive ways. Studies have observed differences between male and female neuropsychological functioning. While the results have shown few significant differences, only limited studies have been undertaken, with very small sample sizes. Sexual pleasure A study published in March 2019 observed women's use of cannabis prior to sex and its relationship to their sexual function, measuring satisfaction with drive, orgasm, lubrication, dyspareunia and overall sexual experience, as well as the effect of the frequency of cannabis use on satisfaction. The results of the study show women who used cannabis prior to having sex were more likely to have satisfactory orgasms and an increased sex drive. Women who frequently used cannabis had higher chances of satisfactory orgasms, regardless of whether they used it prior to sex. Such results show cannabis has a positive relationship to increased sexual satisfaction, and they can be considered in research to develop treatments for female sexual dysfunction. In another study, a large portion of participants reported having increased desire and sexual satisfaction while using cannabis before sex. In contrast, some reported the experience being worse than usual. Memory The orbitofrontal cortex and hippocampus help with the formation of new memories, and cannabinoid receptors are found in these areas. Thus, cannabis will affect abilities regarding memory and learning. While cannabis may be used to enhance sexual experience and satisfaction, it will also influence perception and sensation. Studies have observed the neurocognitive behaviour of individuals under the influence of cannabis and the relationship between cannabis and risky behaviour. Cannabis can have negative effects on learning and memory. Cannabis users show lower spans of attention, concentration, and abstract reasoning than non-users. Cannabis use impairs neurocognitive functioning, and the user may lose the ability to effectively recall or learn while intoxicated. This can hinder responses to the surrounding environment and decision-making, leading to an inability to remember details accurately or a distorted perception of time. Results across the neurocognitive domains, however, remain inconsistent when observing the neurocognitive behaviour of users. One study observed the risky behaviour of individuals who use cannabis. The findings of the study revealed that adolescents who use cannabis are more likely to voluntarily engage in unprotected sex repeatedly, while participants who never used cannabis or started to use cannabis after adolescence were less likely to have unprotected sex. Products There are a variety of cannabis-infused sex products, such as lubricants and massage oils containing CBD and THC. 
References External links Cannabis Human sexuality
Cannabis and sex
[ "Biology" ]
1,320
[ "Human sexuality", "Behavior", "Human behavior", "Sexuality" ]
60,612,980
https://en.wikipedia.org/wiki/Extreme%20users
Extreme Characters (also known as brink users and extreme users) is a methodology used within user-centered design in order to represent edge case users of a product, brand or user interface. Extreme Characters also fits under the umbrella of market segmentation within marketing, as it formulates design solutions for both average users and extreme, brink users. The concept of creating extreme users has been adopted heavily into user-centered design and human-centered computing, leading to its wide adoption within both online and offline applications, along with its place within marketing communications. Extreme characters is based on designing for the brink users of a product; this behaviour is cultivated through the collection of data from focus groups and interviews with specific users. From this, a clear goal, or user need, is formulated. By designing for a user need that is only reflected by a minority of the focus group, the use-case of extreme users is born. Through designing for this minority, design solutions emerge for both brink users and the average user. The concept of extreme characters has, however, come under scrutiny, leading to a critique of its place within user-centered design and marketing. This critique holds that the methodology leads designers and marketers away from the target market for a specific product or service, and that it does not ultimately portray real end users of that product or service. Through the use of extreme users, designers are able to characterise and formulate their product's needs to fit around the extreme users' differing contexts. By observing the workarounds these extreme users employ, the designer can form a product not only for the general population, but also for these brink users. Finally, the influence of extreme characters can be seen in the formation of landmark technologies. This is illustrated in J. Djajadiningrat's Interaction Relabelling and Extreme Characters, which references the creation of the first telephone by Alexander Graham Bell and the creation of the first closed-source email protocol in 1972 by Vint Cerf. The approach also plays a large role in the design of modern technology, as seen with the creation of the Nintendo Wii and the redesign of the Ford Focus. History The earliest study that coined the name 'extreme characters/extreme users' for this approach was J. Djajadiningrat's Interaction Relabelling and Extreme Characters. It is here that the concept of using brink users to explore unexplored use cases for a product or service was introduced. However, the use of extreme characters in the user experience of products or services can be dated back to the 19th century, with examples predating the technique's academic recognition. A 20th-century example is Vint Cerf's programming of a closed-source email protocol in 1972. Cerf noted that, through the capability for electronic letters within a closed network, he was able to talk to his deaf wife while they were both at work. An earlier landmark use of this design methodology, before its academic recognition in the aforementioned study by J. Djajadiningrat, was Alexander Graham Bell's 19th-century work with the deaf, culminating in his patent for the first telephone. 
Through designing for people with hearing difficulties, users who are not inherently the 'generic user', Bell's work culminated in one of the biggest telecommunication leaps of the modern age. J. Djajadiningrat's study The first academic recognition of extreme users can be seen in Interaction Relabelling and Extreme Characters: Methods for Exploring Aesthetic Interactions. By citing specific studies on the use-cases of personas, it shows how many studies had been using extreme characters within their product design without realising the approach's academic potential. J. Djajadiningrat depicts this in his assessment of Actors, Hairdos and Videotapes - Informance Design, which was already using extreme characters before the technique was coined. Its authors used the approach in order to steer the product design away from the characters within a specific target group, through their accidental use of three extreme personas: a drug dealer, the Pope and a woman with two husbands. Djajadiningrat then examined the development of each character in order to analyse the creation of specific products that suited their contexts. For the Pope, it was a product that gave status; for the drug dealer, it was a product that highlighted secrecy; and finally, for the polyandrous wife, it was a product built around a management system. Thereby, through J. Djajadiningrat's study, the first academic recognition of extreme users established the methodology's key advantage: it can expose emotions and traits that are not obvious in a standard user scenario. Moreover, J. Djajadiningrat argues that utilising this technique allows designers to make more humane and human-centric products and services. Benefits According to Martin Tomitsch, a lead user experience designer at the University of Sydney, the concept of "extreme characters" strengthens the design process as it builds in consideration for the users outside of the typical and generic user base of a product or service. It can thereby provide a compelling design solution, turning abstract data received through user testing into a complete design solution for the non-stereotypical type of user of a product or service. According to Tomitsch, it is often used in the early stages of design, as it allows the user experience designer to uncover new problem aspects of the design and, in turn, promote new elements and aspects of the design concept. Following this, representing the 'extreme users' through various other user experience methodologies, such as bodystorming or role-playing these types of characters, can promote a bigger picture of the character profile towards which the design team can ideate a product or service. As stated by the Harvard Business School, an exploration of extreme users can be ventured into through several approaches. Representing extreme characters through personas is one approach, as in Martin Tomitsch's study; however, benefits can also present themselves if the role of the extreme consumer is donned by the very designers. This can be seen with studies, such as the redesign of the Ford Focus, that showed the benefits of the methodology through the engineers physically becoming these extreme consumers. 
Acclaimed usages of the methodology August de los Reyes Global recognition, within the user experience community, of this methodology resounded with the adoption of the 'extreme user' approach by August de los Reyes, a lead designer for the Xbox One. After becoming wheelchair-bound, he created a design for the video game console that could be accessed by non-conventional users of the device, including individuals who are wheelchair-bound. Chris Messina and Twitter's hashtags Moreover, often cited as the "father of the hashtag", Chris Messina proposed, in a 2007 tweet, the concept of a metadata tag for specific events that could be generated by users on the social media platform Twitter, in order to bring order to constant streams of tweets. It would thereby also allow users to find similar messages and people who were also reacting and replying to a specific theme or event. Twitter's initial response to this proposition by Messina was reportedly "these things are for nerds". However, through extreme users of the Twitter platform, as seen with the fires that ravaged San Diego in 2007, the first "social trend" of a hashtag emerged. The hashtag "#sandiegofire" allowed users to easily get updates on the fire's location and track constant news coverage. Through this, the benefits were seen by the Twitter platform itself and the feature was quickly implemented. Using the hashtag has become prominent on the internet; as calculated in 2018, 85% of the top 50 websites (as ranked by traffic) use the hashtag as a way for users to group metadata and content under a specific tag. This adoption of the service proposed by Messina promoted the thinking of designing for use cases for individuals affected by events like the 2007 wildfires, and of how specific products will benefit them. The Nintendo Wii According to the Harvard University Business School, the Nintendo Wii game console stemmed from Nintendo researching the extreme users of the gaming market: people who hate playing games on gaming consoles. The research that followed produced an array of information about the struggles of gaming for the "non-gamer". This included that, in the gaming environment at the time before the launch of the Wii, consoles were too complicated and console controllers were too difficult to operate. In response, the Nintendo Wii was born. The controller used motion as its primary input, emulating real-life movement and eliminating the pain points that non-gamers had with traditional game controllers. Moreover, the system design used easier-to-understand graphics to avoid a convoluted user interface. The Wii was a revolution and, through Nintendo's analysis of extreme users, it became an instant hit. In all, the Nintendo Wii is the fifth best-selling game console ever put to market and has forced the competition in the gaming market to follow suit with its motion-controlled gestures that mimic real life. This is abundantly clear with the subsequent creations of the Xbox Kinect and the PlayStation Move. Both these products were made as a result of the design leap that the Wii took from its exploration of the extreme characters design methodology. The Ford Focus redesign The Ford team used the extreme user approach in their engineering of the Ford Focus. 
Through their approach of learning from an extreme demographic of the users of their car, the elderly with physical limitations, they were able to identify the troublesome tasks for this age bracket. By observing users in this demographic, it was clear that their physical constraints made it difficult for them to use different functionalities of the car, such as the seatbelt. By using special body suits that restricted movement of the legs, arms and neck, with similar constraints imposed on hearing and sight, the engineers became this extreme user. By the engineers becoming the user, they realised how different design decisions in the Focus created difficulty for users with these physical limitations. Thereby, through this method of design, the final redesign of the car focused on this accessibility factor. It allowed for new features that promoted accessibility not only for this demographic, but also provided important features for all consumers. Critique and criticism As argued in Prof. Luciano Gualberto's research into user modelling through personas and extreme characters, the use of an extreme scenario for a user can lead designers to situations that they can design for, but which, in reality, are viewed as redundant. In his research User Modelling with Personas, he brings to light the example of designing for an extreme user, a drug dealer, who could use a specific service to hide a 'secret agenda', in this case an organised crime syndicate. However, Gualberto argues that the typical, generic user will not need the same complexity of secrecy, which in turn produces the concept of "feature overload" - where the resources available clutter the real, usable data on the interface of the product or service. Gualberto's concluding critique of the methodology is that while the extreme user does lead to new design and user requirements, the approach can make it difficult to recognise the important user needs within the user experience process as a whole. Moreover, the method does not depict the real user need of the system, product or service and, in turn, does not justify much creation time during the design process. References Usability Design
Extreme users
[ "Engineering" ]
2,418
[ "Design" ]
60,613,037
https://en.wikipedia.org/wiki/OctaDist
OctaDist is a computer program for crystallography and inorganic chemistry. It is mainly used for computing distortion parameters of coordination complexes such as spin crossover complexes (SCO), magnetic metal complexes and metal–organic frameworks (MOF). The program is developed and maintained in an international collaboration between the members of the Computational Chemistry Research Unit at Thammasat University, the Functional Materials & Nanotechnology CoE at Walailak University and the Switchable Molecules and Materials group at the University of Bordeaux. OctaDist is written entirely in Python, with a graphical user interface built on the Tkinter toolkit. It is available for Windows, macOS, and Linux. It is free and open-source software distributed under the GNU General Public License (GPL) 3.0. Standard abilities The following are the main features of the latest version of OctaDist: Structural distortion analysis Determination of regular and irregular distorted octahedral molecular geometry Octahedral distortion parameters Volume of the octahedron Tilting distortion parameter for perovskite complexes Molecular graphics 3D modelling of complexes Display of the eight faces of the octahedron Atomic orthogonal projection and projection plane Twisting triangular faces Molecular superposition (overlay) Other utilities Scripting language Surface area of the faces of the octahedron Jahn–Teller distortion parameters Root-mean-square deviation of atomic positions Capabilities Simple and flexible processes of use Cross-platform for both 32-bit and 64-bit systems Graphical user interface (GUI) Command-line interface (CLI) User-friendly interactive scripting code User-adjustable program settings Handles large and complicated complexes Support for several outputs of computational chemistry software, including Gaussian, Q-Chem, ORCA, and NWChem See also List of quantum chemistry and solid-state physics software References External links OctaDist official website OctaDist at Github repository OctaDist PyPI package OctaDist at IUCr software archive Computational chemistry software Crystallography software Free science software 2019 software
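As an illustration of the quantities involved, the following is a minimal, self-contained NumPy sketch of three of the octahedral distortion parameters that programs like OctaDist report (zeta, delta and sigma), using their standard definitions from the spin-crossover literature: zeta is the sum of absolute deviations of the six metal-ligand bond lengths from their mean, delta is the mean squared relative deviation of those bond lengths, and sigma is the sum of |phi - 90°| over the twelve cis ligand-metal-ligand angles. The function name, the input layout and the heuristic of taking the twelve smallest angles as the cis set are illustrative assumptions; this sketch is not the OctaDist API itself.

import numpy as np

def octahedral_distortion(metal, ligands):
    """Return (zeta, delta, sigma) for a single octahedron.

    metal   : (3,) Cartesian coordinates of the metal centre (assumed layout)
    ligands : (6, 3) Cartesian coordinates of the six coordinating atoms
    """
    metal = np.asarray(metal, dtype=float)
    ligands = np.asarray(ligands, dtype=float)

    vecs = ligands - metal              # six metal-ligand bond vectors
    d = np.linalg.norm(vecs, axis=1)    # six bond lengths
    d_mean = d.mean()

    # zeta: sum of absolute deviations of the bond lengths from their mean
    zeta = np.abs(d - d_mean).sum()

    # delta: mean squared relative deviation of the bond lengths
    delta = np.mean(((d - d_mean) / d_mean) ** 2)

    # sigma: sum of |phi - 90| over the twelve cis angles. The 15 ligand
    # pairs give 12 cis (~90 deg) and 3 trans (~180 deg) angles, so the
    # 12 smallest are taken here as the cis set.
    angles = []
    for i in range(6):
        for j in range(i + 1, 6):
            cos_phi = np.dot(vecs[i], vecs[j]) / (d[i] * d[j])
            angles.append(np.degrees(np.arccos(np.clip(cos_phi, -1.0, 1.0))))
    sigma = sum(abs(phi - 90.0) for phi in sorted(angles)[:12])

    return zeta, delta, sigma

# A perfect octahedron gives zero for all three parameters:
print(octahedral_distortion([0, 0, 0],
                            [[2, 0, 0], [-2, 0, 0], [0, 2, 0],
                             [0, -2, 0], [0, 0, 2], [0, 0, -2]]))

The fourth commonly reported parameter, theta, additionally requires identifying the eight triangular faces of the octahedron and projecting the ligand positions onto them, which is one reason a dedicated program such as OctaDist is useful in practice.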
OctaDist
[ "Chemistry", "Materials_science" ]
407
[ "Computational chemistry software", "Chemistry software", "Crystallography", "Computational chemistry", "Crystallography software" ]
60,613,068
https://en.wikipedia.org/wiki/Confined%20environment%20psychology
Confined environment psychology is a refined subcategory of environmental psychology. Remaining in a confined environment over a prolonged period of time can have severe neurological impacts. Confinement can come in different forms, including confinement by location and by lack of, or limited, human interaction. The broad subcategory also includes the effects of social isolation on animals. Behavioural and neurological impacts of confined environments Solitary confinement and isolation can have severe psychological effects that are heavily dependent on the extent of isolation, particularly for prisoners. A study conducted by Stuart Grassian found that the behavioural effects of solitary confinement and isolation include agitation, hallucinations and restlessness. Solitary confinement and isolation can disrupt the function of neurotransmitter systems, which results in unusual behaviour. Mice experience behaviour similar to humans, including agitation and aggression, fear and hypersensitivity to unfamiliar objects that are viewed as a threat. Neurologically, chronic social isolation in mice activates a neuropeptide found in the central nervous system known as tachykinin. Tachykinin (encoded in mice by the Tac2 gene) is produced in the amygdala and hypothalamus of a mouse's brain. These regions of the mouse brain directly control the emotional and social behaviour of mice. Suppressing certain neurochemicals can have an adverse effect on the behaviour of mice. By location Various experiments have been conducted that physically confine human beings by location. Antarctica Isolation experiments conducted in Antarctica analysed the dynamics of human relationships during prolonged periods of time. Participants at Concordia Antarctic Research Station experienced different psychological outcomes. Although cognitive performance generally remained unchanged, inactive participants were demotivated and physically strained, whereas the active group were psychologically and physically stable. Given the psychological and physical isolation of residing in Antarctica, there were higher rates of self-diagnosed depression, lower blood pressure levels and lower adrenaline. The Zimbardo Experiment Another example of a physically confined environment is the Stanford prison experiment, also known as the Zimbardo experiment. The Zimbardo experiment was conducted in 1971 by Philip Zimbardo, an American psychologist and Stanford University professor. Zimbardo transformed the bottom level of the psychology building at Stanford University into a mock prison, in which randomly selected participants were assigned different prison roles, including prisoners and guards. Each participant was given an ID number and prison uniform to deprive them of their identity. The participants were given their assigned roles and soon started behaving as if they were legitimately a prisoner or a prison guard. Those assigned as prison guards began to adopt authoritative mannerisms and asserted their dominance over the prisoners by punishing them physically, often through push-ups. The psychological impact of being in a confined prison was demonstrated when Prisoner #8612 experienced uncontrollable rage within 36 hours of the experiment. NASA Mars One The NASA Mars One isolation mission examines the social and individual psychological conditions of living on Mars for 32 to 33 months. 
There has been extensive research conducted by Michigan State University Professor Steve Kozlowski regarding team cohesion in confined environments. Experiments have been performed around the Mauna Loa volcano in Hawaii and in Antarctica to mimic the conditions of Mars. The official NASA Mars One program is yet to be conducted, with the program aiming to "establish a permanent human settlement on Mars". Hospital patient isolation A study conducted in US hospitals examined changes in patient psychology during contact isolation due to infection with multi-drug resistant organisms (MDROs). Precautions to reduce the spread of MDROs had adverse psychological effects for patients, specifically an increase in the severity of mental health issues including depression and anxiety. The infections in these patient isolation studies were nosocomial, meaning the infection was acquired within the hospital. The scoring scales used for the studies included the Hamilton Anxiety/Depression Rating Scale (HAM-D) and the Self-Esteem Scale (SES). Larger studies need to be conducted for the results to be more accurate. Homogeneous environment psychology Living in a homogeneous country is a form of social isolation that can have several psychological effects on an individual. Saudi Arabia is an example of a culturally homogeneous country, where individuals form a collectivist society. With the whole population being of Muslim faith, an established way of thinking has developed, predominantly influenced by Islamic beliefs and values. Saudi Arabian people share the same taboo topics, due to the profound link between religion and government. Arabs typically share the mentality that life is "in the hands of Allah (God)", and a stigma has therefore formed against the practice of psychology in the Arab world. Being in a confined country bounded by religious beliefs and stern traditions has resulted in individuals being bound by narrow mentalities. With the government removing the ban on female drivers on 24 June 2018, the empowerment of women is being initiated, and culture, religion and government are being separated. Confined work environments Royal Navy personnel experiment Confined work environments can have significant psychological and physical effects on individuals. A 2010 study on Royal Navy personnel, conducted by the Institute of Naval Medicine in Hampshire, investigated the negative effects associated with occupational stress, specifically when working in submarines. Factors that can affect the work of personnel include the mood of the individual. According to the study, workers remain in a positive mood during and after the mission; however, they may experience a negative mood when reflecting on the remaining duration of the mission. A 'salutogenic effect' is presented, although this largely relied on consistent interaction with families for workers to be in a positive mood. High extraversion, interpersonal sensitivity, and emotional stability enhanced the mental resilience of confined workers. The results of the study stated that from 1999 to 2007, the proportion of submariners experiencing stress increased by 9 percentage points, from 31% in 1999 to 40% in 2007. Limited social interaction Social environment confinement produces similar effects to physically confined environments. 
The AARP study on social isolation by age, the Amish community, Oxana Malaya (the feral child), Amala and Kamala (the "wolf-like children") and mice/rat experiments are examples of psychological changes caused by limited social interaction. Social isolation by age A 2018 study conducted by the American Association of Retired Persons (AARP) deconstructs the effects of social isolation on adults over 40 years old (Anderson & Thayer, 2018). The survey conducted by the AARP, with a sample size of 1,300 adults aged over 40, found that two-thirds of the sample believed social isolation is triggered by a specific event; other contributing factors included a lack of community facilities for health and wellbeing, which can increase the severity of social isolation. The study debunked commonly believed myths about social isolation: 82% of the sample believed married people cannot be lonely since they have a partner around, but this myth was deemed false. The claim that prolonged periods of social isolation are equivalent to smoking 15 cigarettes per day was not commonly known, with 28% of the sample answering correctly (Anderson & Thayer, 2018). The Amish community The Amish community, primarily found in Ohio and Pennsylvania, are a religious group that follow a strict lifestyle and limit their social interactions to their own community. When an Amish person marries a non-Amish person, they are shunned by the community. The Amish community refrain from using advanced technologies and other facilities that go against their Ordnung, their set of rules for daily life. The Ordnung includes the prohibition of public electricity and automobiles. This also applies to the installation of phones within households; however, Amish people still recognise the need for phones for communication purposes. The confined mentality of having limited social interaction results in the Amish people being very family-oriented and reliant on family connections for business, particularly for farming practices. Oxana Malaya (feral child) Oxana Malaya, a Ukrainian woman raised by dogs, was also in a confined environment due to her limited social interaction. In 1991, the feral child was found living with a pack of dogs, having adopted the behaviour of dogs by barking and moving on all fours. Though she returned to a relatively normal lifestyle after the discovery of her behaviour, British child psychologist Lyn Fry visited Oxana five years after her appearance on the Discovery Channel. Lyn Fry examined her mannerisms, stating that "her language is odd". Fry further commented that though her orientation towards humans had improved, some behavioural aspects still resembled dog-like gestures. Amala and Kamala Amala and Kamala were two girls, commonly known as the "wolf-children" due to their feral upbringing in Midnapore, India. The siblings were hidden from the media as an Indian reverend took them to an orphanage. The observed behaviour of the "wolf-children" suggested they did not understand human behaviour and emotions, and were unable to feel affection from other people. The behaviour of the surviving sibling, Kamala, demonstrated a subtle change following the death of her sibling Amala, who died due to "severe diarrhea caused by an infestation of worms". 
Changes in behaviour included the development of an understanding of human emotions (as Kamala first cried following Amala's death) and of human speech. Experiments using animals Scientists have utilised rats to conduct experiments examining behaviour within confined environments. Manipulating a rat's environment through social isolation produces neurological differences that resemble symptoms of schizophrenia. Such symptoms include memory impairment and hyperactivity. MRI scans of rats in social isolation were performed, and differences between the right and left medial prefrontal cortex (mPFC) became apparent as the region was delineated. The disruption of the neurobiological composition of sensorimotor gating is found in both socially isolated rats and non-medicated people with schizophrenia. In 2015, another controlled experiment explored depression in female rats as a result of social isolation. The conclusions of this experiment stated that the female rats did experience depression; however, the anti-depressant drug amitriptyline was able to counteract the psychological effects of the experiment. A 2003 report states that isolation over longer timeframes results in increased fear and agitation amongst mice, showing signs of anxiety and depression. The behaviour of a single isolated mouse differs from that of a group of isolated mice, as a completely isolated mouse shows more aggression. Male and female mice react differently to stimuli. Males tend to engage more in general activity than females, whereas females are more likely to experience catalepsy earlier in the timeframe of the study. Both male and female isolated mice experienced a lack of sleep in comparison to their group-housed counterparts of the same sex. References Environmental psychology
Confined environment psychology
[ "Environmental_science" ]
2,184
[ "Environmental social science", "Environmental psychology" ]
60,613,203
https://en.wikipedia.org/wiki/%C4%80whitu%20Peninsula
The Āwhitu Peninsula is a long peninsula in the North Island of New Zealand, extending north from the mouth of the Waikato River to the entrance to Manukau Harbour. The Peninsula is bounded in the west by rugged cliffs over the Tasman Sea, but it slopes gently to the east, with low-lying pastoral and swamp land along the edge of the Waiuku River and Manukau Harbour. At the northern tip, the Manukau Heads rises to a prominence above the entrance to the similarly named harbour. The nearby historic Manukau Heads Lighthouse is one of the few in the country open to the public. The peninsula is relatively sparsely populated, despite its proximity to the centre of Auckland city (which lies to the northeast). The largest settlement on or near the peninsula is Waiuku, which lies at the peninsula's isthmus. There are rural settlements at Grahams Beach and Matakawau Point. Geology The Āwhitu Peninsula was formed geologically recently, from black volcanic sand from eruptions of Mount Taranaki mixed with white quartz and pumice sand, carried from the Waikato River. Prior to this, the Manukau Harbour was an extensive bay. The peninsula is a sand dune which developed over the last two million years. Historically much of the peninsula was native forest dominated by taraire, with significant numbers of kauri, pūriri, tawa, karaka, kohekohe, tītoki, tōtara and kahikatea. Hamiltons Gap is a small gap in the western coast of the Āwhitu Peninsula, where the path of a stream has cut through the terrain. History The peninsula is named after the traditional settlement of Āwhitu, located to the west of Orua Bay. The name refers to the regret Hoturoa, captain of the Tainui migratory canoe, felt as he left the area. The area has strong significance for Ngāti Te Ata Waiohua, and is the location of Tāhuna Marae. The west coast of the Āwhitu Peninsula is the former site of Paorae, a flat sand dune land which was a major kūmara (sweet potato) cultivation area for Tāmaki Māori iwi. The land eroded during the 18th century. The northern shore of the Āwhitu Peninsula around the Manukau Heads is one of the earliest archaeological sites in the Auckland region. In 1834, a Wesleyan mission was established at Orua Bay on the peninsula by William Woon. On 20 March 1840, Orua Bay became one of the locations where the Treaty of Waitangi was signed, by Manukau and Waikato chiefs. During the event, Apihai Te Kawau of Ngāti Whātua signed, but several Waikato Tainui chiefs refused. From 1835, the kauri forest on the peninsula was logged. During the early colonial period, the native bush of the peninsula was converted to farmland. Between 1870 and 1900, the peninsula, alongside neighbouring Waiuku and Karaka were major centres for the kauri gum industry. Demographics Āwhitu covers and had an estimated population of as of with a population density of people per km2. Before the 2023 census, the Āwhitu statistical area had a larger boundary, covering . Using that boundary, Āwhitu had a population of 2,919 at the 2018 New Zealand census, an increase of 408 people (16.2%) since the 2013 census, and an increase of 381 people (15.0%) since the 2006 census. There were 1,107 households, comprising 1,467 males and 1,452 females, giving a sex ratio of 1.01 males per female. The median age was 47.4 years (compared with 37.4 years nationally), with 525 people (18.0%) aged under 15 years, 387 (13.3%) aged 15 to 29, 1,512 (51.8%) aged 30 to 64, and 492 (16.9%) aged 65 or older. 
Ethnicities were 89.6% European/Pākehā, 12.8% Māori, 2.8% Pacific peoples, 3.8% Asian, and 1.7% other ethnicities. People may identify with more than one ethnicity. The percentage of people born overseas was 17.0%, compared with 27.1% nationally. Although some people chose not to answer the census's question about religious affiliation, 60.7% had no religion, 26.5% were Christian, 0.3% had Māori religious beliefs, 0.9% were Hindu, 0.3% were Buddhist and 2.1% had other religions. Of those at least 15 years old, 312 (13.0%) people had a bachelor's or higher degree, and 522 (21.8%) people had no formal qualifications. The median income was $34,700, compared with $31,800 nationally. 516 people (21.6%) earned over $70,000, compared to 17.2% nationally. The employment status of those at least 15 was that 1,236 (51.6%) people were employed full-time, 354 (14.8%) were part-time, and 75 (3.1%) were unemployed. Education Awhitu District School and Waipipi School are coeducational full primary schools (years 1-8) with rolls of and students respectively as of Biodiversity The Peninsula has a high sympatric diversity of native New Zealand land snails. Communities of >70 native species in a 4 ha patch of bush can be found here, whereas in other parts of the world, 15 sympatric land snail species would be considered high. Grazing and other habitat disturbances can negatively impact this diversity. Climate References Franklin Local Board Area Peninsulas of the Auckland Region Kauri gum
Āwhitu Peninsula
[ "Physics" ]
1,201
[ "Amorphous solids", "Unsolved problems in physics", "Kauri gum" ]
60,613,653
https://en.wikipedia.org/wiki/National%20Biotechnology%20Research%20Park
The National Biotechnology Research Park () is an industrial park in Nangang District, Taipei, Taiwan. History The area where the industrial park stands today used to be the site of the 202nd arsenal of the Ministry of National Defense. The decision to redevelop the area into an industrial park was made during the presidency of Chen Shui-bian, and the construction project was launched in 2007. Hampered by several controversies, construction finally began in 2014. The industrial park was developed with a budget of NT$22.5 billion and construction of was completed on 14 March 2018. The industrial park was inaugurated on 15 October 2018 by President Tsai Ing-wen. Tenants Biomedical Translation Research Center, Academia Sinica Development Center for Biotechnology Food and Drug Administration National Laboratory Animal Center Transportation The industrial park is accessible from Nangang Station of Taipei Metro. See also National Science and Technology Council (Taiwan) References External links 2018 establishments in Taiwan Buildings and structures in Taipei Economy of Taipei Science parks in Taiwan Biotechnology in Taiwan
National Biotechnology Research Park
[ "Biology" ]
201
[ "Biotechnology in Taiwan", "Biotechnology by country" ]
60,615,424
https://en.wikipedia.org/wiki/Ian%20Hamley
Ian Hamley (born 1965) is a British academic who is the Diamond Professor of Physical Chemistry at the University of Reading. He is a soft matter scientist and physical chemist with research expertise in self-assembling molecules including polymers, peptides and other biomolecules. He has more than 400 published scientific papers. He is the author of 'The Physics of Block Copolymers', 'Introduction to Soft Matter', 'Block Copolymers in Solution', 'Introduction to Peptide Science', and 'Small-Angle Scattering: Theory, Instrumentation, Data and Applications', as well as several edited texts. Career After postdoctoral research at AMOLF (FOM Institute for Atomic and Molecular Physics, Amsterdam) and the University of Minnesota, Hamley was appointed as a lecturer in Physics at the University of Durham in 1993, where he worked until 1995. He moved to the Department of Chemistry at the University of Leeds in 1995 and was promoted to become Professor of Polymer Materials and Director of the Centre for Self-Organising Molecular Systems in 2004. He moved to the University of Reading as Diamond Professor of Physical Chemistry in 2005; this was a five-year joint appointment with Diamond Light Source. His past research concerned the self-assembly of block copolymers. Most recently he has developed interests in peptide and peptide-conjugate self-assembly, including molecules with bioactivity such as amyloid peptides, peptide hormones, antimicrobial peptides, peptides in cosmetic applications and peptides with anti-cancer activity. Several of these show promise as therapeutics. Awards and honours Hamley was a Royal Society-Wolfson Research Merit Award holder 2011–2016 and won the RSC Peter Day Award for Materials Chemistry in 2016 and the MacroGroup UK Medal for Contribution to UK Polymer Science in 2016. Lecturing career Hamley has lectured at the University of Reading for many years, specializing in physical chemistry teaching modules on thermodynamics and surface and interface chemistry. References British scientists 1965 births Living people Fellows of the Royal Society of Chemistry Date of birth missing (living people) Place of birth missing (living people) Academics of the University of Reading Academics of the University of Leeds Alumni of the University of Reading Alumni of the University of Southampton Polymer scientists and engineers
Ian Hamley
[ "Chemistry", "Materials_science" ]
458
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
60,619,734
https://en.wikipedia.org/wiki/Interactionism%20%28nature%20versus%20nurture%29
In the context of the nature-nurture debate, interactionism is the view that all human behavioral traits develop from the interaction of both "nature" and "nurture", that is, from both genetic and environmental factors. This view further holds that genetic and environmental influences on organismal development are so closely interdependent that they are inseparable from one another. Historically, it has often been confused with the statistical concept of gene-environment interaction. In its historical forms, interactionism presented a limited view of the manner in which behavioral traits develop, and simply demonstrated that "nature" and "nurture" are both necessary. Among the first biologists to propose an interactionist theory of development was Daniel Lehrman. Since then, numerous interactionist perspectives have been proposed, and the contradictions between many of these perspectives have led to much controversy in evolutionary psychology and behavioral genetics. Proponents of various forms of interactionist perspectives include Philip Kitcher, who refers to his view as "causal democracy", and Susan Oyama, who describes her perspective as "constructive interactionism". Critics of interactionism include major figures in behavioral genetics such as Arthur Jensen, Robert Plomin, and philosopher Neven Sesardic. Interactionist perspective on depression Depression does not depend entirely on genetic or environmental influences alone to trigger its onset: genetic and environmental factors work together to transform a vulnerability to depression into its actual expression. Research has demonstrated that polygenic scores for major depressive disorder (MDD) interact with stressful life events and social support to increase the probability of developing depression. Although Monroe and Simons criticize interactionism for a lack of precise measurement to grasp its 'conceptual essence', there have been numerous studies of gene-by-environment interaction, commonly focusing on candidate genes such as variants of the serotonin transporter (SLC6A4) gene. As MDD is a polygenic trait, its development depends on variation across a range of genes, each exhibiting a small effect size. Peyrot et al. also found increased polygenic risk scores and genetic vulnerability in the presence of childhood trauma, demonstrating the interplay between environmental and genetic stressors. In the case of personal life events, however, whether the trauma was passive or active has a mediating effect on the heritability of the disorder: when passive, meaning the individual played less of an active role in the trauma (e.g., illness or accident), heritability was lower than when active (e.g., in cases of separation, relationship conflict, or financial or legal trouble). In contrast, Mullins found that while polygenic risk scores and stressful events were both predictors of depression, he believed them to be isolated factors acting independently. Combined therapy (psychotherapy with pharmacotherapy) for depression demonstrates statistically significant improvement compared with psychotherapy or pharmacotherapy alone, which illustrates the value of considering both genetic and environmental factors in the explanation of depression. However, its effectiveness depends on the severity and chronicity of the depression: for mild and moderate non-chronic depression, combined therapy shows no difference from a single therapy.
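The gene-by-environment synergy described above is commonly tested as a statistical interaction between a polygenic risk score and an environmental exposure. Below is a minimal illustrative sketch of such a test on simulated data; the variable names (prs, stress) and all effect sizes are assumptions of this sketch, not values from the studies cited above.

```python
# Sketch of a gene-by-environment (GxE) interaction test on simulated data.
# Effect sizes and variable names are illustrative assumptions only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
prs = rng.normal(size=n)             # standardised polygenic risk score
stress = rng.binomial(1, 0.3, n)     # exposure to a stressful life event

# Simulated liability includes main effects plus a GxE interaction term.
liability = 0.3 * prs + 0.5 * stress + 0.4 * prs * stress + rng.normal(size=n)
depressed = (liability > 1.0).astype(int)

# Logistic regression: depression ~ PRS + stress + PRS:stress.
X = sm.add_constant(np.column_stack([prs, stress, prs * stress]))
result = sm.Logit(depressed, X).fit(disp=False)
print(result.summary(xname=["const", "prs", "stress", "prs_x_stress"]))
```

A significantly positive coefficient on the interaction term is the statistical signature of the synergy between genetic risk and environmental stress discussed above; its absence would be consistent with Mullins's view of independently acting factors.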
Whilst results in the field are unreliable, research generally points in favour of the compatibility between genetic and environmental contributors to psychopathology and depression. Interactionist perspective on PTSD Ecological, biological, and residual-stress pathways interact to produce post-traumatic stress disorder (PTSD), with the experience of trauma being the primary contributor. The severity of trauma is a prime factor in the onset of PTSD, but the question of why only a fraction of individuals who experience trauma develop a pathological response while others do not rests on the assumption that PTSD arises from the combination of genetic vulnerability and environmental trauma. Among those who experience extremely severe trauma such as violent crime, assault, or severe accidents, approximately 3-35% develop symptoms. As inherent vulnerability increases, the threshold of environmental trauma needed to trigger the disorder decreases. Residual stress is a key factor in the expression of PTSD: it comprises the initial and prolonged effects of trauma and acts as a catalyst in the disorder's development. Ecological and biological pathways are also preceding factors that increase the likelihood of PTSD following trauma and residual stress. Ecological pathways include both personal and environmental influences such as coping mechanisms, interpersonal support, and the individual's environment, while biological pathways include neurological anomalies, inherited traits, and structural anomalies such as hippocampal atrophy. Ecological and biological factors provide a predisposition, whilst residual stress triggers the disorder's onset. Greater trauma leads to greater levels of residual stress, and trauma-related factors can be divided into pre-trauma (e.g., childhood abuse) and post-trauma (e.g., social support), with females more influenced by post-trauma factors than their male counterparts in the development of PTSD. For example, among survivors of sexual abuse, PTSD was influenced considerably by the nature of familial support: negative parental reactions were found to intensify PTSD, whereas high levels of social support helped diminish the psychological fallout and shorten recovery time. Ecological pathways include factors such as a history of physical or sexual abuse. Women with a history of physical abuse were found to be five times more likely to have a history of PTSD, and ten times more susceptible to experiencing it, than controls. Parental abuse is a predictor of future anti-social behaviour, decreased social skills, and maladaptive social information processing, all of which increase sensitivity to PTSD. These environmental factors, together with residual stress, generate maladaptive cognitive patterns that provide a 'breaking point' at which individuals with genetic vulnerability to PTSD exhibit the disorder. Biological pathways encompass a diversity of factors, including hormonal and neurotransmitter abnormalities and alterations in neural structure. Adult males with PTSD were found to have higher urinary byproducts of norepinephrine and epinephrine (adrenaline), lower cortisol levels, and abnormalities in the levels of neurotransmitters associated with the anger, hostility and depression related to PTSD. Increased norepinephrine is involved in the activation of traumatic memories, and increased catecholamines lead to increased levels of stress-related hormones such as cortisol and of the neurotransmitters associated with PTSD, which in the instance of trauma increases one's vulnerability to it.
Furthermore, structural alterations increase susceptibility to PTSD: for example, sexually abused adolescent girls, and maltreated children generally, showed dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis alongside decreased hippocampal volume in comparison to a control group. In addition, corticotropin-releasing hormone (CRH), the hormone responsible for regulating the HPA axis, increases heart rate and enhances fear conditioning, effects that correspond with symptoms of PTSD. Interactionist perspective on schizophrenia Background Twin studies that investigated the development of schizophrenia found that identical twins have a higher concordance rate for schizophrenia than non-identical twins. However, none of them found a 100% concordance rate in identical twins, even though identical twins share exactly the same genes. This suggests that the development of schizophrenia is due not only to genetic factors but also to environmental factors. On the other hand, in an adoption study in which participants were adopted at a young age by families without a history of schizophrenia, children with genetic risk (such as having a biological mother with schizophrenia) were more sensitive to negative child-rearing styles than those with no genetic risk, and were more likely to develop schizophrenia in families with undesirable child-rearing styles. This demonstrates the role of genetic factors in the development of schizophrenia. Researchers therefore consider the interaction of genetic and environmental factors to explain schizophrenia, a tendency that also characterises psychological research on other mental disorders. Diathesis-stress model The diathesis-stress model is an interactionist approach. In the context of schizophrenia, the diathesis is the vulnerability, which can be a genetic predisposition, early experiences, or both. Stressors are any events that can trigger the onset of the condition in a schizophrenia-vulnerable individual. This interactionist approach explains why people with similar genes or similar traumatic experiences do not necessarily develop schizophrenia together. Early diathesis-stress model The early diathesis-stress model was developed by Meehl. It suggested that the diathesis was a single gene, called the schizogene, which led to the development of a personality type called the schizotypic personality. Individuals who did not have a schizotypic personality would not express the symptoms of schizophrenia no matter what stress they experienced; however, in those with a schizotypic personality, stress from the family could trigger schizophrenia. This model is oversimplified. First, more recent research found that 108 genes are associated with the development of schizophrenia: no single schizogene determines schizophrenia, as it is a polygenic mental disorder. Second, the stressors are not limited to a problematic family environment; they can be any negative experience in life, for example childhood trauma, low socioeconomic status, or substance misuse. Modern diathesis-stress model The concept of the diathesis-stress model was nevertheless not abandoned; researchers have made it more comprehensive. Research suggests that genetic defects lead to biological vulnerability: the dopamine (DA) receptors and hippocampal region of schizophrenia-vulnerable individuals are abnormal. Later, stressful experiences (the stressors) can activate the hypothalamic-pituitary-adrenal (HPA) axis, leading to the release of cortisol, which can further disrupt the already abnormal DA system.
According to the dopamine hypothesis, this DA system abnormality is associated with the symptoms of schizophrenia. Stressors also further damage the hippocampus of vulnerable individuals; schizophrenia patients have a smaller hippocampal volume compared with a typical brain. Moreover, stressors make patients more sensitive to stress in everyday life, creating a vicious cycle. The stressors are therefore not just a trigger of schizophrenia but also lead to deterioration. In the more comprehensive model, the diathesis is not just genetic: early childhood traumas, such as child abuse and neglect, can also create psychological vulnerability. For example, Read found that 69% of female and 59% of male schizophrenia patients had experienced childhood abuse (physical abuse, sexual abuse or both). Such experiences affect the early development of the brain, for example by disrupting the HPA, DA and hippocampal systems, and can create a vulnerability similar to that created by genes. Similarly, stressors are not just subjective negative life experiences. For example, using cannabis can be a stressor that triggers schizophrenia-vulnerable individuals to develop schizophrenia. Houston found that sexual abuse had a significant correlation with the development of schizophrenia only if the patients used cannabis, which may be because cannabis influences the dopamine system; to the patients themselves, however, the reason for using substances such as cannabis is enjoyment. Stressors are therefore now defined as all events that can trigger vulnerable individuals to express schizophrenia symptoms. Since not all cannabis users develop schizophrenia, cannabis use is considered only a stressor, not a cause. Application of the interactionist approach to schizophrenia The diathesis-stress model provides a rationale for using combined therapy in the treatment of schizophrenia. Without accepting the interactionist approach, there is no reason to use combined therapy: if therapists propose that schizophrenia is due solely to genetic causes, it is difficult to convince patients to participate in drug treatment followed by psychological treatment, and vice versa. Combined therapy demonstrates statistically significant improvement in reducing schizophrenia symptoms compared to a single treatment, although it has not been shown to reduce the relapse rate. Criticism of the application Recent research has developed a better understanding of the diathesis-stress model as an explanation of schizophrenia; however, the mechanism by which vulnerability and stressors contribute to the development of the disorder remains unclear. Consequently, the interactionist treatment (combined therapy) simply combines psychotherapy and pharmacotherapy, which may explain why combined therapy does not always show statistically significant improvement in mitigating schizophrenia symptoms compared to drug therapy alone. Combined therapy is also more expensive than a single therapy, and its inconsistent effectiveness can hinder its adoption because it lacks cost-effectiveness. Moreover, any significant improvement from combined therapy may simply reflect the additive effects of the two treatments rather than an interaction effect between them; there can be a treatment-causation fallacy. See also Flynn effect Heritability Diathesis-stress model Social information processing References Human behavior Behavioural genetics
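The diathesis-stress logic running through this section is often pictured as a threshold model: a disorder is expressed only when latent vulnerability plus experienced stress exceeds some threshold, so higher diathesis means less stress is needed for onset. The following toy simulation illustrates that trade-off; the distributions, weights and threshold are arbitrary assumptions for demonstration, not clinical parameters.

```python
# Toy threshold formulation of the diathesis-stress model: the disorder is
# expressed when diathesis plus stress crosses a fixed threshold.
# All distributions and the threshold value are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
diathesis = rng.normal(0.0, 1.0, n)  # latent vulnerability (genes and/or early trauma)
stress = rng.exponential(1.0, n)     # accumulated stressful life events

THRESHOLD = 3.0
expressed = (diathesis + stress) > THRESHOLD

# Higher diathesis lowers the stress needed to cross the threshold.
high = diathesis > 1.0
low = diathesis < -1.0
print(f"expression rate, high-diathesis group: {expressed[high].mean():.3f}")
print(f"expression rate, low-diathesis group:  {expressed[low].mean():.3f}")
```

Under these assumptions the high-diathesis group expresses the disorder far more often at the same distribution of stress, mirroring why people with similar genes or similar traumas do not necessarily develop schizophrenia together.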
Interactionism (nature versus nurture)
[ "Biology" ]
2,598
[ "Behavior", "Human behavior" ]
60,620,623
https://en.wikipedia.org/wiki/Preimplantation%20factor
Preimplantation factor (PIF) is a peptide secreted by trophoblast cells prior to placenta formation in early embryonic development. Human embryos begin to express PIF at the 4-cell stage, with expression increasing by the morula stage and continuing to do so throughout the first trimester. Expression of preimplantation factor in the blastocyst was discovered as an early correlate of the viability of the eventual pregnancy. Preimplantation factor was identified in 1994 by a lymphocyte platelet-binding assay, where it was thought to be an early biomarker of pregnancy. It has a simple primary structure with a short sequence of fifteen amino acids without any known quaternary structure. A synthetic analogue of preimplantation factor (commonly abbreviated in studies as sPIF or PIF*) that has an identical amino acid sequence and mimics the normal biological activity of PIF has been developed and is commonly used in research studies, particularly those that aim to study potential adult therapeutics. Preimplantation factor acts by paracrine signaling; that is to say, trophoblast cells, which collectively form extra-embryonic tissues, secrete it onto the surface of the endometrium. PIF is known to influence many events in the implantation process, the process by which an early embryo implants into the uterine wall. A crucial event in human implantation is when trophoblast cells expressing preimplantation factor invade the uterine wall and found the placenta, an organ that connects maternal blood supply, and along with it, nutrients, to the growing fetus. This requires changes to the histology of the endometrium, a process called decidualisation. Upregulated expression of PIF increases the presence of integrins on the endometrium wall, promoting the embryo's adhesion to the uterine wall. PIF is thought to modulate and facilitate the depth of the trophoblast's invasion into the uterus at physiological doses. Maternal immune system regulation is also a critical event in implantation, as the early embryo is essentially a partial allograft, that is, a tissue that is not recognised as fully identical to that of the mother. Consequently, the embryo may be rejected and attacked if it is not tolerated, an event that normally causes spontaneous miscarriage. Preimplantation factor regionally modulates the mother's immune system, decreasing the activity of peripheral maternal leukocytes, reducing inflammation and consequently also increasing the chance that the embryo will be tolerated. Preimplantation factor is also an anti-apoptotic effector, maintaining trophoblast cell integrity through the intrinsic p53 signalling pathway. Moreover, preimplantation factor protects the central nervous system by downregulating pathways that promote neurone death and promoting neurogenesis. PIF is also known to signal against neonatal prematurity and rescues embryos from toxic uterine environments. Due to its multiple autoimmune and neuroprotective effects in the embryonic environment, preimplantation factor has been studied in clinical environments as a potential novel therapy for reproductive, autoimmune and neurodegenerative diseases. PIF has been successfully studied as a therapy for recurrent pregnancy loss, as it is able to rescue non-viable embryos from a hostile maternal environment. It has also been shown to prevent diabetes mellitus type 1 in mice due to its ability to modulate immunological tolerance in the pancreas.
Finally, it reverses paralysis and neuroinflammation whilst promoting neurogenesis in adult patients with neurodegenerative diseases. It also may be able to decrease the severity of brain injuries by modulating the behaviour of supporting cells in the nervous system. Discovery and structure Preimplantation factor has a simple primary peptide structure with a 15 amino acid sequence (MVRIKPGSANKPSDD). As the regulation of the maternal immune system is a requisite for successful implantation, the immune system shows different characteristics in pregnant women and non-pregnant women. In 1994, preimplantation factor was isolated by a lymphocyte platelet-binding assay that compared immune responses and proteins found in pregnant women and non-pregnant women. The assay also compared immune responses with those of men to verify whether the proteins were specific to female reproductive tissues. Results generated in the preliminary study showed that "a preimplantation factor" was being expressed exclusively in pregnant women. This protein was also found on the fourth day after embryo transfer in women who had undergone successful in-vitro fertilisation, suggesting that it had a role in determining the viability of the embryo. Subsequent studies, most seminally a 1996 study that partially characterised the biological activity of PIF, adopted and established the current term "preimplantation factor" as the name for this novel peptide. Functions Trophoblast invasion and adhesion Trophoblast cells form the outer lining of the blastocyst in preimplantation development, eventually forming more differentiated extra-embryonic tissues including the placenta. Before this differentiation can occur, the embryo's invasion and infiltration into the uterine wall must be tightly regulated by both maternal and foetal signals, including secretion of PIF by trophoblast cells. In particular, preimplantation factor is thought to have a paracrine effect on the decidualisation process, which ultimately primes trophoblast cells to invade appropriately into the endometrium. When compared to non-functional short peptides at the same concentration, application of PIF to the endometrium at the implantation stage promoted deeper invasion of the embryo. This effect was not observed to occur indefinitely with successive increases of concentration, and artificial increases of PIF above the human physiological concentration (approximately 50 nmol/L) did not meaningfully increase the invasion of the embryo. Consequently, it is thought that PIF's promotion of trophoblast invasion is limited by maternal signals. The outermost layer of the uterine wall is an epithelial tissue called the endometrium, which requires cell surface adhesion molecules called integrins to anchor the embryo. Through an additional paracrine effect, PIF has been shown to increase the expression of the integrin molecule α2β3 on the membranes of cells in the endometrium. Integrins are a broad class of cell adhesion molecules that allow cells to bind to extracellular matrix. In this way, they assist the entire embryo in binding to the uterine wall, an important event in successfully generating a placenta. Maternal immune tolerance The embryo is immunologically characterised as a partial allograft as it is not a maternal tissue. During fertilisation, a paternal spermatozoon fuses with a maternal oocyte, producing a zygote.
Phenotypically, the zygote expresses certain epitopes that are controlled by genes inherited from the father, making the embryo a foreign material. In order for successful implantation to occur, the maternal immune system must tolerate the presence of the embryo while not completely inactivating its innate responsiveness to foreign pathogens. This process is not always successful; indeed, maternal immune rejection of the embryo is a common and well-characterised cause of recurrent pregnancy loss. Preimplantation factor has a significant role in signalling this grafting behaviour; it has, for instance, been shown to signal an anti-inflammatory response in a broad range of peripheral blood mononuclear cells. PIF also impacts similar cytoskeletal proteins in CD14+, CD8+ and CD4+ cells, suggesting that it has a broad and integrative role in modulating the immune system of the mother. In particular, PIF inhibits the process of platelet aggregation in helper T lymphocytes and affects cytoskeletal proteins in cytotoxic T cells. While PIF attenuates or modulates the immune system, it does not affect the response to other pathogens or foreign material. This modulatory effect on immunological tolerance is responsible for a strong correlation between PIF expression and the viability of pregnancy. Viability of pregnancy The expression of preimplantation factor in the embryo is strongly correlated with the likelihood of a live birth. This observed viability is not solely due to PIF's ability to mediate the implantation and allografting process but also due to its ability to promote the upregulation and integrity of certain intracellular targets that are positively associated with normal developmental processes. For instance, PIF is known to target the enzyme protein disulfide isomerase, which reduces intracellular oxidative stress, and also heat-shock proteins, which are molecular chaperones that ensure proteins produced by a cell will fold into the correct conformation for their function. Additionally, PIF is known to promote the production of vital cytoskeletal proteins including actin and tubulin that are required for the correct morphological development of nerve axons and the viscera of vital organs. Axons use cylindrical tubulin polymers called microtubules to transport intracellular material between the cell body and the axon terminal, and require actin to form synapses. They are hence important for the organisation and function of the growing nervous system. Additionally, when uterine serum from patients with recurrent pregnancy loss is applied to embryos that are positive for PIF, they display the capacity to resist its toxicity and are able to survive. Combined, these observations and intracellular effects suggest that PIF has multifaceted impacts directed towards viable pregnancy. Neurogenic and anti-apoptotic effects In the prenatal environment, PIF has neuroprotective impacts. It protects the growing fetus against neonatal prematurity, preventing the fetus from being delivered before adequate neural development has taken place. The neurogenic effects of PIF are not isolated to the prenatal environment; in fact, PIF is thought to have impacts throughout life. In adult models, PIF has multiple neurogenic effects: it promotes the growth of neurons and reduces neuroinflammation. It is thought to have these impacts by modulating signalling through the ubiquitous protein kinase A and protein kinase C intracellular signalling pathways.
PIF also inhibits microRNA let-7, a sequence that is highly upregulated in the central nervous system. The let-7 system has been associated with cell death in neurons, and PIF is known to inhibit this process. In rats that were induced to have a hypoxic-ischemic brain injury, PIF was able to promote neuron growth, reduced detrimental responses by neuroglia, and preserved a significant cerebral cortex volume, suggesting it could rescue rats from the side effects of brain damage. PIF also has a series of anti-apoptotic impacts in human extravillous trophoblasts, mediated by the TP53 gene. Apoptosis is a controlled cell death process that must not occur if a cell is to proliferate. PIF has specific anti-apoptotic impacts by reducing the phosphorylation of the p53 protein at the serine-15 residue. Without phosphorylation, p53 is unstable and undergoes ubiquitylation, signalling the trophoblast and endometrial cells to degrade it in proteasomes and attenuating downstream apoptotic effects. PIF in particular has been correlated with increasing the expression of the anti-apoptotic effector BCL2 and decreasing the expression of the pro-apoptotic effector BAX. BCL2, which is upregulated by PIF, ensures that cytochrome c remains within the mitochondria and hence does not trigger the production of an apoptosome in the cell cytosol. BAX, which is downregulated by PIF, produces transmembrane transport channels that liberate cytochrome c, triggering apoptosis. Collectively, these biochemical effects show that PIF signals against the internal mechanisms of apoptosis in extravillous trophoblast cells, allowing them to proliferate before they implant into the uterine wall. Therapeutic uses Given its multifaceted functionality, including autoimmune, neuroprotective and anti-apoptotic effects, preimplantation factor has been extensively studied as a potential therapeutic agent in both reproductive and non-reproductive medical contexts. PIF is also advantageous because of its easily replicable biochemical structure. In reproductive contexts, PIF has been studied as a treatment for infertility. In women with recurrent pregnancy loss, treatment with PIF is able to rescue a non-viable embryo and promote a successful implantation and pregnancy. It does this by mitigating the toxic influence of certain factors that naturally occur in the uterus, such as acidity. PIF has also been studied in a range of other non-reproductive contexts. Due to the ability of PIF to attenuate the attack mechanisms of mononuclear immune cells, it has been implicated as a successful treatment for autoimmune diseases, including diabetes mellitus type 1 in mouse studies. Diabetes mellitus type 1 is characterised by the misrecognition of pancreatic beta islet cells as foreign material. These studies show that PIF is able to preserve the pancreatic beta islet cells' integrity, rescuing them from the autoimmune attacks which cause diabetes. In adult models, PIF also reverses the pathological neuroinflammation caused by autoimmune diseases such as multiple sclerosis. It also reverses paralysis and promotes growth of neurons in patients with neurodegeneration. References External links Proteins Immune system Human pregnancy Cell cycle Developmental biology Hormones of the conceptus
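Since the article gives PIF's complete primary structure (MVRIKPGSANKPSDD), simple sequence-derived properties of the peptide can be computed directly. Below is a short sketch using Biopython's ProtParam module; the sequence is taken from the text above, while the computed values are theoretical estimates from the sequence alone, not experimental measurements.

```python
# Sequence-derived properties of the 15-residue PIF peptide.
# The sequence comes from the article; outputs are theoretical estimates only.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

PIF_SEQUENCE = "MVRIKPGSANKPSDD"

analysis = ProteinAnalysis(PIF_SEQUENCE)
print(f"length:            {len(PIF_SEQUENCE)} residues")
print(f"molecular weight:  {analysis.molecular_weight():.1f} Da")
print(f"isoelectric point: {analysis.isoelectric_point():.2f}")
print(f"residue counts:    {analysis.count_amino_acids()}")
```

Such quick calculations hint at one practical advantage noted above: a 15-residue peptide with no known quaternary structure is straightforward to characterise and to synthesise as the analogue sPIF.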
Preimplantation factor
[ "Chemistry", "Biology" ]
2,920
[ "Biomolecules by chemical classification", "Behavior", "Developmental biology", "Reproduction", "Immune system", "Organ systems", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle" ]
60,621,073
https://en.wikipedia.org/wiki/Crenobacter%20cavernea
Crenobacter cavernea Cave-375 is a gram-negative bacterium that is closely related to the previously discovered Crenobacter cavernae strain K1W11S-77T. C. cavernea Cave-375 has not been directly described morphologically; however, the related strain K1W11S-77T is described as a "rod-shaped, motile, and strictly aerobic novel bacteria". Its metabolism has not yet been determined. C. cavernea Cave-375 was first identified from a water sample coming from a dripping stalactite. This stalactite was located in the Algar do Pena cave in the karst Estremadura Limestone Massif in central western Portugal. C. cavernea Cave-375 was first isolated and "grown on nutrient agar at 25 degrees Celsius". Its ecology is not yet known; with the sequencing of the genome of C. cavernea Cave-375, its ecological role should become identifiable. Diversity C. cavernea Cave-375 belongs to the phylum Proteobacteria, family Neisseriaceae, and species Crenobacter cavernea. By comparing the 16S rRNA of the Cave-375 strain to that of the Crenobacter cavernea species, a 99% similarity value was calculated. When comparing DNA-DNA hybridization using a Genome-to-Genome Distance Calculator, a hybridization value of 62.66% was found. Genome "Genomic DNA was extracted from C. cavernea Cave-375 using an NZY microbial gDNA isolation kit (NZYTech, Portugal)". The whole genome was then sequenced using the whole-genome shotgun sequencing method. With this, "17,325,372 high-quality raw sequences were assembled into 15 contigs with an N50 value of 323,281 and a total genome size of 2,273,143 base pairs (2.9 Mb)". The NCBI Prokaryotic Genome Annotation Pipeline identified a 65.9% GC content and sequences coding for proteins and tRNAs: "2,779 protein coding sequences and 63 tRNA sequences" were identified using this method. References Bacteria
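The assembly statistics quoted above (contig count, N50, and GC content) are simple functions of the contig set, and can be recomputed from any assembly. A minimal sketch of both calculations follows; the example contigs are invented placeholders, not sequences from the Cave-375 assembly.

```python
# Compute N50 and GC content from a set of contigs. The contig strings below
# are invented placeholders, not data from the Cave-375 assembly.
def n50(contig_lengths):
    """Smallest length L such that contigs of length >= L cover half the assembly."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

def gc_content(contigs):
    """Fraction of G and C bases across all contigs."""
    bases = "".join(contigs).upper()
    return (bases.count("G") + bases.count("C")) / len(bases)

contigs = ["ATGCGC", "GGCCAATT", "GCGCGCGCGC"]
print(f"N50: {n50(len(c) for c in contigs)} bp")
print(f"GC content: {gc_content(contigs):.1%}")
```

Applied to the 15 contigs of the Cave-375 assembly, this same N50 definition yields the reported value of 323,281.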
Crenobacter cavernea
[ "Biology" ]
450
[ "Prokaryotes", "Microorganisms", "Bacteria" ]
60,621,440
https://en.wikipedia.org/wiki/Liquid%20carbon%20dioxide
Liquid carbon dioxide is the liquid state of carbon dioxide (CO2), which cannot occur under atmospheric pressure. It can only exist at a pressure above 5.1 atm, under 31.1 °C (the temperature of the critical point) and above −56.6 °C (the temperature of the triple point). Low-temperature carbon dioxide is commercially used in its solid form, commonly known as "dry ice". Solid CO2 sublimes at −78.5 °C at Earth atmospheric pressure, that is, it transitions directly from solid to gas without an intermediate liquid stage. The uses and applications of liquid carbon dioxide include decaffeinating coffee, extracting virgin olive oil from olive paste, use in fire extinguishers, and use as a coolant. Properties Liquid carbon dioxide is a liquid formed from highly compressed and cooled gaseous carbon dioxide. It does not form under atmospheric conditions: it only exists when the pressure is above 5.1 atm and the temperature is under 31.1 °C (the critical point) and above −56.6 °C (the triple point). The chemical formula remains the same as that of gaseous carbon dioxide (CO2). It is transparent and odorless, and its density is 1101 kg/m3 when the liquid is at full saturation. The solubility of water in liquid carbon dioxide has been measured over a range of temperatures, at pressures ranging from 15 to 60 atmospheres. The solubility turned out to be very low: from 0.02 to 0.10%. Uses Uses of liquid carbon dioxide include the preservation of food, fire extinguishers, and commercial food processes. For food preservation, liquid carbon dioxide is used to refrigerate, preserve, store and soften food. In a fire extinguisher, the CO2 is stored under pressure as a liquid to act as a fire suppressant. The liquid carbon dioxide not only reduces combustion by displacing oxygen, but also cools the burning surface to prevent further damage. Solvent extraction using compressed liquid CO2 can be used in industrial processes such as removing caffeine from coffee or improving the yield of olive oil production. Liquid carbon dioxide is being considered as a means of CO2 transportation for underground or subsea storage purposes: due to its high density as a liquid, it is much more feasible to ship than the gas. CO2 is also used in large-scale air-to-water heat pumps for district heating, replacing less environmentally friendly refrigerants; the CO2 changes phase between liquid and gas in the process. See also Other chemical compounds and elements are commonly used for commercial and research purposes in their liquid state: Liquid oxygen Liquid nitrogen Liquid helium Liquid hydrogen Supercritical carbon dioxide References Carbon dioxide
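The existence window quoted above (between the triple point and the critical point) can be expressed as a simple region test. The sketch below uses the standard textbook constants for CO2 (triple point at about −56.6 °C and 5.1 atm; critical point at about 31.1 °C and 72.8 atm); because it ignores the curved shape of the real phase boundaries, it can only rule liquid out, not strictly confirm it.

```python
# Rough necessary-condition check for liquid CO2 using textbook constants:
# triple point (-56.6 C, 5.1 atm) and critical point (31.1 C, 72.8 atm).
# This rectangle test ignores the curved phase boundaries, so it can say
# when liquid is impossible but cannot guarantee a (T, P) point is liquid.
TRIPLE_T_C, TRIPLE_P_ATM = -56.6, 5.1
CRITICAL_T_C, CRITICAL_P_ATM = 31.1, 72.8

def liquid_co2_possible(temp_c: float, pressure_atm: float) -> bool:
    """True if (temp_c, pressure_atm) lies in the window where liquid CO2 can exist."""
    return TRIPLE_T_C < temp_c < CRITICAL_T_C and pressure_atm > TRIPLE_P_ATM

print(liquid_co2_possible(20.0, 60.0))   # True: within the liquid window
print(liquid_co2_possible(20.0, 1.0))    # False: below 5.1 atm there is no liquid phase
print(liquid_co2_possible(40.0, 100.0))  # False: above the critical temperature
```

This is also why dry ice sublimes at atmospheric pressure: 1 atm is far below the 5.1 atm triple-point pressure, so the liquid phase is never reachable there.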
Liquid carbon dioxide
[ "Chemistry" ]
539
[ "Greenhouse gases", "Carbon dioxide" ]
60,621,630
https://en.wikipedia.org/wiki/Overabundant%20species
In biology, overabundant species refers to an excessive number of individuals of a species, and occurs when the normal population density has been exceeded. Increases in animal populations are influenced by a variety of factors, some of which include habitat destruction or augmentation by human activity, the introduction of invasive species and the reintroduction of threatened species to protected reserves. Population overabundance can have a negative impact on the environment, and in some cases on the public as well. There are various methods through which populations can be controlled, such as hunting, contraception, chemical controls, disease and genetic modification. Overabundant species are an important area of research as they can potentially impact the biodiversity of ecosystems. Most research studies have examined negative impacts of overabundant species, whereas very few have documented or performed an in-depth examination of positive impacts. As a result, this article focuses on the negative impact of overabundant species. Definitions When referring to animals as "overabundant", various definitions apply. The following classes explore the different associations with overabundance: The inconvenience of animals in a certain region or area that threatens human livelihood, for example the tropics are considered to contain an overabundant population of the Anopheles mosquito, which carries the malaria parasite. The population density of a preferred species has been reduced by another species' population, which is then considered overabundant, for example predator populations of lions and hyenas reducing zebra and wildebeest numbers. A species' population within a specific habitat exceeds the carrying capacity, for example national parks reducing herbivore populations to maintain and manage habitat equilibrium. The entire equilibrium consisting of animal and plant communities is already out of balance, for example existing populations colonising new habitat. Out of all these classifications, class 4 is considered the most significant due to its consequent ecological impacts. Causes Overabundance may occur naturally, for example after weather events such as a period of high rainfall in which habitat conditions become optimal. However, other contributing factors include: Anthropogenic disturbances Natural habitats are altered by human activity, resulting in habitat fragmentation, decreases in forest density and wildfires. Other human disturbances include restrictions on hunting, agricultural land modification and predator removal or control within a region or area. The consequent change in land use and the presence or withdrawal of human influence can trigger a rapid increase in both native and non-native species populations. Invasive species can be better adapted to specific environments Invasive species are often overabundant as they outcompete native species for resources such as food and shelter, which allows their populations to thrive. Other factors influencing population growth include the lack of native predators or the less common presence of the introduced species within native predator habitat. Overabundance due to translocation of threatened species to protected areas Some methods of managing threatened species involve reintroducing species to enclosed reserves or island areas. Once these species are introduced, their populations can become overabundant, as these areas serve to protect the targeted species against predators and competitors.
This occurred with Bettongia lesueur, the burrowing bettong, which was reintroduced to the Arid Recovery reserve in Australia: its population has increased from 30 to approximately 1532 individuals. Due to the damage within this reserve, the population is considered overabundant. Potential impacts Overabundant species can have an adverse impact on ecosystems. Within ecosystems, food resources and availability, competitors, and species composition can be negatively affected. Impacts of overabundant herbivores A common impact from overabundant herbivores is vegetative damage by overgrazing, where overgrazing refers to the effect of grazing having reached a level where other biodiversity within the ecosystem becomes threatened. Overgrazing can occur in both terrestrial and marine environments and can alter both the amount and the composition of vegetation. Population densities and the composition of fauna can also be negatively affected. Additionally, permanent ecological damage can be caused by overgrazing before maximum carrying capacity has been reached. Trophic relationships (i.e. feeding relationships in the ecosystem) can be altered by overabundant species, potentially causing a trophic cascade. Trophic cascades impact vegetation as well as invertebrates (including microorganisms) and birds. Furthermore, predator behaviour and populations may be indirectly affected. Impacts of overabundant predators Overabundant predators are considered harmful to local biodiversity as they prey on native species, compete for resources and can introduce disease. They can decrease native mammal populations and, in some cases, can cause species to become extinct, which results in a cascading ecological impact. Examples of invasive species include: "cats (Felis catus), rats (Rattus rattus), mongoose (Urva auropunctata), stoats (Mustela erminea)" and red foxes (Vulpes vulpes). Such species have contributed to approximately 58% of modern-day mammal, bird and reptile extinctions. In Australia, red foxes and feral cats have contributed to many native mammals becoming threatened or extinct, which has led to diminished vegetation, as foraging mammals have an important ecological role in maintaining a healthy landscape. A particular example is where grassland vegetation diminished to shrubland as a result of seabirds being preyed on by Arctic foxes. Seabirds have an essential ecological role which consists of helping to maintain nutrient levels and soil fertility. "Invasive predators also threaten 596 species classed as "vulnerable" (217 species), "endangered" (223), or "critically endangered" (156), of which 23 are classed as "possibly extinct." Impact on society It can be very costly to control or eradicate overabundant species. For example, fencing regions as a protective measure against red foxes can cost approximately $10,000 per kilometre, while baiting an area of 35,000 square kilometres can cost about $1.3 million. Invasive species In biology, invasive species are non-native animals that are introduced to a region or area outside of their usual habitat. Invasive species can be introduced either intentionally (if they have a beneficial purpose) or unintentionally. In general, invasive species that become overabundant most commonly have a negative impact on local biodiversity, with little research having found positive effects. Furthermore, an invasive species may have an initial positive benefit that fades as the species becomes overabundant and the cost of damage control increases.
Invasive species can negatively impact food web structures. In terms of trophic levels, the initial introduction of a non-native species results in higher species richness, whereby trophic relationships are altered by the additional resource (if the animal is not a predator at the top of the food chain) and the additional consumer. However, the degree of the consequent impact on the local ecosystem once a species becomes overabundant is case dependent: some invasive species, like the brown tree snake in Guam, have caused numerous extinctions of native fauna, while others have had fewer damaging impacts on the environment. Costs of invasive species are estimated in the millions to billions of dollars each year. A focus on Australian wildlife Red fox The red fox, Vulpes vulpes, was introduced to Australia during the 1870s. The established population has thrived over the years due to the following factors: adaptability to climate conditions, the ability to live in a wide range of habitats including deserts and forests, and lastly human modification of Australian landscapes, which has created suitable environments for red foxes to thrive in. Red foxes have mainly had a negative impact on Australian fauna, with the exception of regulating rabbit populations. The diet of red foxes includes a number of threatened native fauna, which has contributed to their population declines and extinctions. Conversely, populations of native fauna, mammals in particular, have increased through fox population control techniques. Rabbit Rabbits were initially introduced to Australia as pets during colonisation. Rabbits pose a threat to native herbivores as they compete for shared resources. Additionally, overgrazing and modification of habitat vegetation by rabbits allow introduced predators to thrive when hunting. Rabbits have thrived in Australia because they reproduce rapidly, have few predators to regulate their population, and the climatic conditions are favourable, especially as the environmental conditions limit the diseases that regulate rabbit populations on other continents. Methods for controlling overabundant species There are various methods for controlling overabundant populations. Some methods have been used over many years, for example culling, while others, such as immunocontraception, are still being researched. Culling Culling refers to the selective elimination of animals to decrease a population. Culling can involve killing animals by hunting or translocating animals elsewhere. Culling may also be an option in reserves established for the conservation of specific animals as a way of managing their population density; examples include elephants and hippos. Target animals can be hunted on the ground or culled by aerial pursuit, with the aim of eliminating the animal in one accurate hit to reduce or limit suffering before death. This method allows a large number of animals to be eliminated within a relatively short amount of time; however, shots are not always accurate, which can lead to the escape and suffering of individuals. Baiting Baiting is a common method of controlling overabundant populations; it involves the placement of lethal chemicals in food (the bait) that eliminates the animal. It is cost-effective and helps remove a large number of animals from a population; however, if ingested by non-target animals it could potentially cause death, depending on the type of bait the chemical is administered in, as well as the areas of bait placement. 1080 is a common chemical used in bait.
1080, once ingested, causes death by inhibiting the animal's neurological functioning. It is a compound to which much native Australian fauna is comparatively tolerant; however, it can still be lethal if ingested. Fumigation Fumigation, which involves the spreading of poisonous gas, helps to selectively kill a large number of animals. It is a method used to control rabbit and fox populations in Australia by spraying a lethal chemical into warrens and dens. Chemicals used include phosphine for rabbits and carbon monoxide for foxes, both of which induce suffering prior to death. Difficulties with fumigation include pinpointing individual dens and warrens, which can be both time-consuming and hard work, as well as the restricted time period during which animals regularly inhabit their dens, for example during spring when offspring are born. Disease This method is used on select animals and is species-specific, such as to control the rabbit population in Australia. It involves spreading a disease, for example "rabbit calicivirus disease", through bait or through capture and release programs. The aim is to have the disease spread through the targeted species' population to reduce its numbers. Death may take up to one or two weeks, during which the animal suffers from symptoms such as fever, loss of appetite and lethargy. Contraception Two methods for managing fertility in overabundant wildlife include the employment of biotechnology such as immunocontraception, and surgery to neuter males or spay females. There are various factors that impact the effectiveness of contraceptive methods, some of which include: expense, longevity of the treatment effect, level of difficulty in administering the treatment, and whether or not the method has a negative impact on the individual or other species in the environment. An example of an immunocontraceptive target is gonadotropin-releasing hormone (GnRH). Studies conducted on various animals, for example white-tailed deer and cats, have shown that GnRH-based contraception can be effective in reducing short-term fertility. Immunocontraception Immunocontraception causes animals to become infertile, which helps to control and reduce overabundant populations. Two methods of administration include vaccines and chemical implants. In some studies immunocontraception has been shown to effectively reduce pregnancy rates; however, this method is both time-consuming and expensive due to the further research required to overcome challenges such as the longevity of the contraceptive effect. Surgery This method can be effective in small populations as it is fairly accessible; however, the procedure is costly and invasive, and the individual is at risk of infection after surgery. Surgical sterilisation is permanent; as a result, it may not be appropriate for use in native populations due to the risk of potentially losing genetic variation. References Population ecology Biodiversity Ecology terminology
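Overabundance relative to carrying capacity, one of the definitional classes given earlier, is commonly modelled with logistic population growth, dN/dt = rN(1 − N/K). The toy simulation below shows how a population ends up overabundant (N above K) when habitat change suddenly reduces the carrying capacity; every parameter value is an arbitrary illustration, not a field estimate.

```python
# Discrete-time logistic growth with a step change in carrying capacity,
# showing a population become "overabundant" (N > K) after habitat change.
# All parameter values are arbitrary illustrations, not field estimates.
def logistic_step(n: float, r: float, k: float) -> float:
    """One Euler step of dN/dt = r * N * (1 - N / K), with dt = 1."""
    return n + r * n * (1 - n / k)

n, r = 50.0, 0.4
for year in range(30):
    k = 1000.0 if year < 15 else 400.0  # habitat degradation cuts K at year 15
    n = logistic_step(n, r, k)
    status = "OVERABUNDANT" if n > k else "within capacity"
    print(f"year {year:2d}: N = {n:7.1f}, K = {k:6.0f} -> {status}")
```

In the printout the population first tracks toward the original capacity, then sits well above the reduced K for several years while the model forces it back down: the same transient overshoot during which overgrazing and other damage described above can become permanent.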
Overabundant species
[ "Biology" ]
2,534
[ "Ecology terminology", "Biodiversity" ]
60,622,276
https://en.wikipedia.org/wiki/Kazachstania%20yasuniensis
Kazachstania yasuniensis is a recently isolated yeast. This organism is part of the genus Kazachstania, which can be found in a large variety of habitats such as fermented foods, animals and wastewater. Isolation Kazachstania yasuniensis was isolated in mainland Ecuador and on the Galápagos archipelago, in arboreal regions. Seven strains of the genus Kazachstania were isolated in order to provide a thorough taxonomy for the novel species. These samples were cultured on 7.6% ethanol medium (Sniegowski et al. 2002). They were isolated from rotten wood, soil, or decaying fruits. Based upon where these strains were isolated, researchers concluded that K. yasuniensis most likely resides in an arboreal habitat. The strains were collected from the Ecuadorian Amazon, as well as the Scalesia forest of Santa Cruz Island in the Galápagos Islands. Although no Gram staining was performed, using a scanning electron microscope, K. yasuniensis cells were found to be ovoid and either single, paired, or in short chains or groups of cells. Characteristics For morphological characterization, standard methods were used, such as growth temperature testing with cultivation on yeast extract-malt extract agar and sporulation tests on various agars. In terms of physiological characterization, the novel species differed from its close relatives in that it assimilated trehalose and ethanol while growing on ethylamine hydrochloride and sodium chloride. Its inability to grow at 37 °C and its assimilation of glucose and sucrose are further physiological aspects that distinguish it from its close relatives. The Si value was calculated for the species as well; a higher Si value indicates a more specialized species. The value for K. yasuniensis was 0.62, placing it with the majority of yeast species found in highly specialized environments. Kazachstania yasuniensis was found to absorb and metabolize glucose, sucrose, raffinose, galactose, trehalose, cadaverine, ethylamine hydrochloride, and ethanol. It could not grow on inulin, melibiose, lactose, maltose, melezitose, methyl α-d-glucoside, starch, cellobiose, salicin, l-sorbose, l-rhamnose, d-xylose, l-arabinose, d-arabinose, d-ribose, methanol, glycerol, erythritol, ribitol, galactitol, d-mannitol, d-glucitol, inositol, dl-lactate, succinate, citrate, d-glucosamine, glucono-d-lactone, lysine, nitrate, xylitol, or 50% glucose/yeast extract. The organism proliferated at 30 °C, but there was no growth at 37 °C. Multigene sequencing needs to be performed in order to establish clearer boundaries between yeast species. This is a very novel species and there is still much to be discovered. References Further reading Fungi described in 2015 Saccharomycetaceae Fungus species
Kazachstania yasuniensis
[ "Biology" ]
669
[ "Fungi", "Fungus species" ]
60,623,191
https://en.wikipedia.org/wiki/Gustav%20Wilhelm%20Richard%20Sorge
Gustav Wilhelm Richard Sorge (6 April 1852 – 1 December 1907) was a German mining engineer. Life Sorge was the son of a surgeon who practiced in Schilda. His uncle was Friedrich Adolf Sorge. He specialized in the field of coal mining in Wettin, Saxony-Anhalt, Germany. He studied mining conditions and material handling technology. Convinced that coal mining prospects at the Wettin Coal Mine were poor, he changed his emphasis to oil exploration, studying for several years in the United States. From there he traveled to the oilfields near Baku in 1877 to set up a drilling technology workshop for the Otto Lenz machine factory in Sabunçu, Baku, concentrating on the field of deep drilling technology and the industrial equipment required for this purpose. In 1881 he founded his own company in his name. He worked for Deutsche Petroleum-Aktiengesellschaft (DPAG) and the Caucasian oil company Branobel (of Robert, Ludvig and Alfred Nobel and others). He married Nina Semyonovna Kobeleva, who was born into a working-class family in Baku on 20 April 1867, and became the father of nine children, including the famous spy Richard Sorge (1895–1944). After his health deteriorated and he lost his lucrative contract, the family moved to Berlin-Lankwitz in 1898. There he worked as a director for the Disconto-Gesellschaft, which had founded the German-Russian Naphta Society. In 1900 he prepared a report on the Romanian oil industry. Works Tiefbohrtechnische Studien über Ölgruben-Betrieb und Spülbohrung. Berlin, 1908. The Theory of the Movement of the Flushing Streams in Bore Holes (Die Theorie der Bewegung des Spülstroms in Bohrlöchern). In: The Engineering Index Annual of the American Society of Mechanical Engineers 1969. See also Anton Raky (1868–1943), a pioneer in deep drilling and global oil exploration. References Bibliography . An early account by two leading British historians of the time. It is informed by their differing perspectives, Deakin being an authority on 20th century European history and Storry an authority on 20th century Japan. Julius Mader: Dr.-Sorge-Report. Ein Dokumentarbericht über Kundschafter des Friedens mit ausgewählten Artikeln von Richard Sorge. Militärverlag der DDR, Berlin 1986. Joachim Mai: Das deutsche Kapital in Russland, 1850-1894. Deutscher Verlag der Wissenschaft, Berlin 1970 (teilweise Habilitation, Universität Greifswald 1969). Jahrbuch für Wirtschaftsgeschichte. Akademie-Verlag, Berlin 1960. 1852 births 1907 deaths Engineers from Baku People from Baku Governorate People from the Province of Saxony 19th-century Prussian people Mining engineers
Gustav Wilhelm Richard Sorge
[ "Engineering" ]
602
[ "Mining engineering", "Mining engineers" ]
60,623,303
https://en.wikipedia.org/wiki/Planar%20SAT
In computer science, the planar 3-satisfiability problem (abbreviated PLANAR 3SAT or PL3SAT) is an extension of the classical Boolean 3-satisfiability problem to a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula, whose incidence graph of variables and clauses can be embedded in the plane, can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. Like 3SAT, planar 3SAT is NP-complete and is commonly used in reductions. Definition Every 3SAT problem can be converted to an incidence graph in the following manner: for every variable x, the graph has one corresponding node, and for every clause c, the graph has one corresponding node. An edge is created between a variable x and a clause c whenever x or its negation appears in c. Positive and negative literals are distinguished using edge colorings. The formula is satisfiable if and only if there is a way to assign TRUE or FALSE to each variable node such that every clause node is connected to at least one TRUE by a positive edge or to at least one FALSE by a negative edge. A planar graph is a graph that can be drawn on the plane in such a way that no two of its edges cross each other. Planar 3SAT is the subset of 3SAT in which the incidence graph of the variables and clauses of the Boolean formula is planar. It is important because it is a restricted variant that is still NP-complete, and many problems (for example, games and puzzles) can represent only planar graphs. Hence, planar 3SAT provides a way to prove such problems to be NP-hard. Proof of NP-completeness The following proof sketch follows the proof of D. Lichtenstein. Trivially, planar 3SAT is in NP. It is thus sufficient to show that it is NP-hard via reduction from 3SAT. The proof makes use of standard logical equivalences that express the equality and the exclusive-or of two variables as conjunctions of clauses. First, draw the incidence graph of the 3SAT formula. Since every edge joins a variable to a clause, the resulting graph is bipartite. Suppose the resulting graph is not planar. For every crossing of edges (a, c1) and (b, c2), introduce nine new variables a1, b1, α, β, γ, δ, ξ, a2, b2, and replace the crossing with the crossover gadget shown in the diagram, which consists of a set of new clauses over these variables enforcing consistency across the crossing. If the edge (a, c1) is inverted in the original graph, (a1, c1) should be inverted in the crossover gadget. Similarly, if the edge (b, c2) is inverted in the original, (b1, c2) should be inverted. One can show that these clauses are satisfiable if and only if a1 takes the same value as a and b1 the same value as b. This construction shows that it is possible to convert each crossing into its planar equivalent using only a constant number of new variables and clauses. Since the number of crossings is polynomial in the number of clauses and variables, the reduction is polynomial.
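The incidence-graph construction lends itself to a short sketch. The following Python fragment builds the variable–clause incidence graph of a small 3-CNF formula and tests it for planarity; the example formula and the use of the networkx library are illustrative choices, not part of the sources cited above.

```python
# Sketch: build the variable-clause incidence graph of a 3-CNF formula
# and test whether it is planar. The formula below is illustrative.
import networkx as nx

# Each clause is a list of literals: a positive int i means x_i,
# a negative int -i means NOT x_i.
clauses = [[1, -2, 3], [-1, 2, 4], [2, -3, -4]]

G = nx.Graph()
for j, clause in enumerate(clauses):
    c = f"c{j}"
    G.add_node(c, kind="clause")
    for lit in clause:
        v = f"x{abs(lit)}"
        G.add_node(v, kind="variable")
        # Edge signs play the role of the edge colorings described above,
        # distinguishing positive from negated occurrences.
        G.add_edge(v, c, sign=1 if lit > 0 else -1)

is_planar, _certificate = nx.check_planarity(G)
print(is_planar)  # True for this small instance; planar 3SAT requires it
```

A full reduction would also consult the stored edge signs when evaluating truth assignments; here they are simply recorded as edge attributes.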
However, if the problem is further restricted so that all clauses are inside the variable-cycle, or all clauses are outside it, then the problem can be solved in polynomial time using dynamic programming. Planar 3SAT with literals: The bipartite incidence graph of the literals and clauses is planar too. This problem is NP-complete. Planar rectilinear 3SAT: Vertices of the graph are represented as horizontal segments. Each variable lies on the x-axis while each clause lies above or below the x-axis. Every connection between a variable and a clause must be a vertical segment. Each clause has at most three connections to variables, and its literals are either all positive or all negative. This problem is NP-complete. Planar monotone rectilinear 3SAT: This is a variant of planar rectilinear 3SAT in which the clauses above the x-axis are all positive and the clauses below the x-axis are all negative. This problem is NP-complete and remains NP-complete when each clause containing three variables has two neighboring variables that are adjacent on the x-axis (i.e., no other variable appears horizontally between the neighboring variables). Planar 1-in-3SAT: This is the planar equivalent of 1-in-3SAT. It is NP-complete. Planar positive rectilinear 1-in-3SAT: This is the planar equivalent of positive 1-in-3SAT. It is NP-complete. Planar NAE 3SAT: This problem is the planar equivalent of NAE 3SAT. Unlike the other variants, this problem can be solved in polynomial time; the proof is by reduction to planar maximum cut. Planar circuit SAT: This is a variant of circuit SAT in which the circuit computing the SAT formula is a planar directed acyclic graph. Note that this is a different graph from the incidence graph of the formula. This problem is NP-complete. Reductions Logic puzzles Reduction from planar SAT is a commonly used method in NP-completeness proofs of logic puzzles. Examples of these include Fillomino, Nurikabe, Shakashaka, Tatamibari, and Tentai Show. These proofs involve constructing gadgets that can simulate wires carrying signals (Boolean values), input and output gates, signal splitters, NOT gates, and AND (or OR) gates in order to represent the planar embedding of any Boolean circuit. Since the circuits are planar, crossovers of wires do not need to be considered. Flat folding of fixed-angle chains This is the problem of deciding whether a polygonal chain with fixed edge lengths and angles has a planar configuration without crossings. It has been proven to be strongly NP-hard via a reduction from planar monotone rectilinear 3SAT. Minimum edge-length partition This is the problem of partitioning a polygon into simpler polygons such that the total length of all edges used in the partition is as small as possible. When the figure is a hole-free rectilinear polygon that is to be partitioned into rectangles, the problem is polynomial. But if it contains holes (even degenerate holes, i.e., single points), the problem is NP-hard, by reduction from planar SAT. The same holds if the figure is any polygon and it should be partitioned into convex figures. A related problem is minimum-weight triangulation: finding a triangulation of minimal total edge length. The decision version of this problem is proven to be NP-complete via a reduction from a variant of planar 1-in-3SAT. References Satisfiability problems NP-complete problems Electronic design automation Boolean algebra
Planar SAT
[ "Mathematics" ]
1,583
[ "Boolean algebra", "Automated theorem proving", "Mathematical logic", "Computational problems", "Fields of abstract algebra", "Mathematical problems", "NP-complete problems", "Satisfiability problems" ]
55,787,246
https://en.wikipedia.org/wiki/Bis%28trimethylsilyl%29peroxide
Bis(trimethylsilyl)peroxide (sometimes abbreviated as BTSP) is an organosilicon compound with the formula ((CH3)3SiO)2. It is a colorless liquid that is soluble in organic solvents so long as they lack acidic groups. The compound represents an aprotic analogue of hydrogen peroxide, and as such it is used for certain sensitive organic oxidations. Upon treatment with organolithium compounds, it affords the corresponding silyl ethers. Preparation It is prepared by treating trimethylsilyl chloride with the hydrogen peroxide–urea complex. References Trimethylsilyl compounds Organic peroxides Reagents for organic chemistry Oxidizing agents
Bis(trimethylsilyl)peroxide
[ "Chemistry" ]
145
[ "Redox", "Functional groups", "Oxidizing agents", "Trimethylsilyl compounds", "Organic compounds", "Reagents for organic chemistry", "Organic peroxides" ]
55,788,372
https://en.wikipedia.org/wiki/Elephant%27s%20Foot%20%28Chernobyl%29
The Elephant's Foot is the nickname given to a large mass of corium composed of materials formed from molten concrete, sand, steel, uranium, and zirconium. The mass formed beneath Reactor 4 of the Chernobyl Nuclear Power Plant, near Pripyat, Ukraine, during the Chernobyl disaster of 26 April 1986, and is noted for its extreme radioactivity. It is named for its wrinkled appearance and large size, evocative of the foot of an elephant. Discovered in December 1986, the “foot” is located in a maintenance corridor below the remains of Reactor No. 4, though the often-photographed formation is only a small portion of several larger corium masses. It has a popular reputation as one of the most radioactive objects in history, though the danger has decreased over time due to the decay of its radioactive components. Origin The Elephant's Foot is a mass of black corium with many layers, resembling tree bark and glass. It was formed during the Chernobyl disaster of April 1986 from a lava-like mixture of molten core material that had escaped the reactor enclosure, materials from the reactor itself, and structural components of the plant such as concrete and metal. The Foot was discovered in December 1986 in Room 217/2, to the southeast of the ruined reactor. The material making up the Elephant's Foot had melted through a considerable thickness of reinforced concrete, then flowed through pipes and fissures and down a hallway to reach its current location. Composition The Elephant's Foot is a black ceramic composed primarily of silicon dioxide, with smaller amounts of other oxides, primarily of uranium, calcium, iron, zirconium, aluminum, magnesium, and potassium. Over time, zircon crystals have started to form slowly within the mass as it cools, and crystalline uranium dioxide dendrites are growing quickly and breaking down repeatedly. Although the distribution of uranium-bearing particles is not uniform, the radioactivity of the mass is evenly distributed. The mass was quite dense and unyielding to efforts to collect samples for analysis via a drill mounted on a remote-controlled trolley, and armor-piercing rounds fired from an AK-47 rifle were necessary to break off usable chunks. By June 1998, the outer layers had started turning to dust and the mass had started to crack, as the radioactive components were disintegrating to the point where the structural integrity of the glass was failing. In 2021, the mass was described as having a consistency similar to sand. Radioactivity At the time of its discovery, about eight months after formation, radioactivity near the Elephant's Foot was approximately 8,000 to 10,000 roentgens per hour, or 80 to 100 grays per hour, delivering a 50/50 lethal dose of radiation (4.5 grays) within 3 minutes. Since that time, the radiation intensity has declined significantly, and in 1996 the Elephant's Foot was briefly visited by the deputy director of the New Safe Confinement Project, Artur Korneyev, who took photographs using an automatic camera and a flashlight to illuminate the otherwise dark room. The Elephant's Foot is roughly 10% uranium by mass; uranium is an alpha emitter. While alpha radiation is ordinarily unable to penetrate the skin, it is the most damaging form of radiation when radioactive particles are inhaled or ingested, which has renewed concerns as samples of material from the meltdown (including the Elephant's Foot) turn to dust.
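The quoted dose figures, and the decline of the hazard through radioactive decay, can be roughly cross-checked with simple arithmetic. The sketch below is a simplification: the caesium-137 half-life is a standard value, but the assumption that the long-term hazard scales with Cs-137 alone is an added one, since the 1986 radiation field included many shorter-lived fission products.

```python
# Back-of-the-envelope checks of the dose figures quoted above.
dose_rate_gy_per_h = 100.0   # upper estimate near the Foot in late 1986
ld50_gy = 4.5                # dose lethal to roughly half of those exposed
minutes_to_ld50 = ld50_gy / dose_rate_gy_per_h * 60
print(f"{minutes_to_ld50:.1f} min")  # ~2.7 min, consistent with "within 3 minutes"

# Rough decay scaling, assuming the residual hazard is dominated by Cs-137
# (half-life ~30.1 years) -- an assumption, not a statement from the sources.
half_life_y = 30.1
years_elapsed = 2021 - 1986
fraction_remaining = 0.5 ** (years_elapsed / half_life_y)
print(f"{fraction_remaining:.2f}")   # ~0.45 of the original Cs-137 activity
```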
Nevertheless, the corium still poses an external gamma radiation hazard due to the presence of fission products, mainly caesium-137. See also Chernobylite Trinitite Notes References Nuclear accidents and incidents Chernobyl disaster
Elephant's Foot (Chernobyl)
[ "Chemistry" ]
761
[ "Nuclear accidents and incidents", "Radioactivity" ]
55,789,686
https://en.wikipedia.org/wiki/Human%20information%20interaction
Human-information interaction, or HII, is the formal term for information behavior research in archival science; the term was coined by Nahum Gershon in 1995. Findings about HII do not transfer directly from analog to digital research; for example, nonprofessional researchers place great emphasis on the further elaboration of the context and scope elements of finding aids. Researchers in HII take on many tasks, including helping to design information systems, informed by a biological perspective, that conform to the requirements of different segments of society, along with other work intended to improve interaction between humans and information systems. HII is generally considered to be multi-disciplinary, as different disciplines have different viewpoints on these interactions and their consequences. HII is considered especially important due to humanity's dependence on information and the technology needed to access it. References Information theory Information science Interdisciplinary subfields
Human information interaction
[ "Mathematics", "Technology", "Engineering" ]
167
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
55,789,725
https://en.wikipedia.org/wiki/Austroboletus%20olivaceoglutinosus
Austroboletus olivaceoglutinosus is a species of bolete fungus found in Sikkim in northeast India. It is so named for its sticky olive-green cap. Taxonomy Austroboletus olivaceoglutinosus was described as new to science in 2015, after collections in subalpine forests in Sikkim. The species name is derived from the Latin words oliva "olive" and glūtĕn "glue", and refers to the mushroom's cap. Limited genetic testing showed an affinity to Austroboletus fusisporus. Description The olive cap fades to green-yellow with age, particularly at the margins. It is conical in shape initially, becoming more convex but not flat, and often has a central boss. The cap surface is very sticky, and often has dead insects stuck to it. Like other boletoid fungi, it has pores rather than gills on the cap underside. In young specimens the cap margin completely covers the spore-bearing surface under the cap. The pores are yellow-white when young, becoming more pinkish with age and staining red-grey when bruised or damaged. The stem is cylindrical, initially white, and yellows with age. It is solid in young specimens, with the inner pith softening and leaving a hollow stem in older mushrooms. The mushroom has a pronounced fruity smell. The spore print is a reddish-tan colour, and the narrow-oval to spindle-shaped spores are 12.7–19.0 μm long by 5.9–7.7 μm wide. The flesh does not change colour when potassium hydroxide is applied to it, but the cap surface turns salmon-pink. Distribution and habitat A. olivaceoglutinosus is native to northern Sikkim. The mushrooms appear in August and September in high-altitude subalpine coniferous forests, under such trees as Sikkim spruce (Picea spinulosa), Bhutan fir (Abies densa), Sikkim larch (Larix griffithiana) and Himalayan hemlock (Tsuga dumosa). References olivaceoglutinosus Fungi described in 2015 Fungi of India Fungus species
Austroboletus olivaceoglutinosus
[ "Biology" ]
465
[ "Fungi", "Fungus species" ]
55,791,310
https://en.wikipedia.org/wiki/Calocybe%20indica
Calocybe indica, commonly known as the milky white mushroom, is a species of edible mushroom native to India. The sturdy all-white mushrooms appear in summer after rainfall in fields and on road verges. Traditionally eaten in West Bengal, it is being grown commercially in several Indian states and other tropical countries. Taxonomy Calocybe indica was formally described in 1974, from material collected in Kolkata. The authors, botanists R. P. Purkayastha and Aindrila Chandra, had noted it to be a popular mushroom in markets in West Bengal. They placed it in the section Calocybe of the genus Calocybe, noting that it appeared closely related, and was morphologically similar, to Calocybe gambosa, from which it differed by having slightly larger oval spores and a stouter mushroom. Botanist A. S. Krishnamoorthy found it growing in Tamil Nadu in the mid-1990s, and its commercial production was overhauled and improved. Description The robust mushroom is all white in colour and has a firm consistency. Its cap is convex initially, flattening out with age. The cuticle (skin) can be easily peeled off the cap. The crowded gills are white but gradually turn brown with age, and the cylindrical stem bears no ring or volva. It has a subbulbous base. The mushroom does not change colour on cutting or bruising, though old dried specimens have a buff colour. The flesh has a mild flavour that has been described as oily, and a faint smell reminiscent of radishes. The spore print is white, and the oval spores measure 5.9–6.8 μm long by 4.2–5.1 μm wide. Distribution, habitat and ecology Calocybe indica grows in grasslands, fields and road verges in Tamil Nadu and Rajasthan, generally on a substrate that is rich in organic material. The mushrooms appear between May and August after spells of rainfall. The fungus is saprophytic, though it has been reported to form ectomycorrhizal relationships with the roots of the coconut tree (Cocos nucifera), palmyra palm (Borassus flabellifer), tamarind (Tamarindus indica) and yellow poinciana (Peltophorum pterocarpum). Cultivation Calocybe indica is cultivated commercially in southern India and is becoming more popular in China, Malaysia, and Singapore; it can be grown in hot, humid climates (60% to 70% relative humidity) with a temperature range of 25 to 35 °C year-round. References Fungi described in 1974 Fungi of India Lyophyllaceae Edible fungi Fungus species
Calocybe indica
[ "Biology" ]
579
[ "Fungi", "Fungus species" ]
55,791,579
https://en.wikipedia.org/wiki/Tianhuang%20Emperor
The Great Emperor of the Curved Array, also called the Gouchen Emperor and the Tianhuang Emperor, is one of the highest sky deities of Taoism. He is one of the Four Sovereigns and is in charge of heaven, earth, and humanity, as well as of wars in the human world. Chinese mythology The "Curved Array" is a constellation in the Purple Forbidden enclosure, equivalent to the European constellation called Ursa Minor or the Little Dipper. In Taoism, the Great Emperor of the Curved Array is the eldest son of Doumu and the brother of the Ziwei Emperor. History Emperor Gaozong of Tang was given the posthumous name Emperor Tianhuang by Wu Zetian. Liu Yan was also given this posthumous name. Constellation There is a constellation named after the Tianhuang Emperor. See also North Star Myōken Wufang Shangdi Four heavenly ministers Notes References External links 道教文化资料库 玉皇大帝 后土皇地祇-地母元君 Taoist deities Chinese gods Four heavenly ministers Chinese constellations Stellar deities Polaris
Tianhuang Emperor
[ "Astronomy" ]
227
[ "Chinese constellations", "Stellar deities", "Constellations" ]
55,792,714
https://en.wikipedia.org/wiki/Euplotidium
Euplotidium is a genus of ciliates. Species in the genus form symbiotic relationships with ectosymbiotic bacteria known as epixenosomes. References Hypotrichea Ciliate genera Symbiosis
Euplotidium
[ "Biology" ]
46
[ "Biological interactions", "Behavior", "Symbiosis" ]
55,794,265
https://en.wikipedia.org/wiki/Hexafluoroisobutylene
Hexafluoroisobutylene is an organofluorine compound with the formula (CF3)2C=CH2. This colorless gas is structurally similar to isobutylene. It is used as a comonomer in the production of modified polyvinylidene fluoride. It is produced in a multistep process starting with the reaction of acetic anhydride with hexafluoroacetone. It is oxidized by sodium hypochlorite to hexafluoroisobutylene oxide. As expected for an alkene bearing two electron-withdrawing trifluoromethyl groups, it is a potent dienophile. See also Perfluoroisobutene References Trifluoromethyl compounds Fluoroalkenes Gases Vinylidene compounds Hydrofluoroolefins
Hexafluoroisobutylene
[ "Physics", "Chemistry" ]
166
[ "Statistical mechanics", "Phases of matter", "Gases", "Matter" ]
55,795,858
https://en.wikipedia.org/wiki/Pulse%20compression%20detonation%20system
A pulse compression detonation system (PCD-system) is a combination of pulse detonation and compression systems. History A prototype of the PCD-system was built at the National Technical University Kharkiv Polytechnic Institute in Ukraine in 2017. The prototype had a detonation tube 20 mm in diameter and 600 mm in length. The device was operated on a mixture of atmospheric air and LPG fuel; in 2019 it started operating on a mixture of atmospheric air and petroleum. The shock wave velocity at the open end of the tube reached 1700 m/s, and the frequency of the device's pulsations was 23–24 Hz. The deflagration-to-detonation transition (DDT) occurred owing to the heating and compression of the mixture. A collaboration between the National Technical University "Kharkiv Polytechnic Institute" and the University of Warmia and Mazury in Olsztyn began investigating the efficiency of the PCD-system as a detonation gun for coating technology. Construction The PCD-system includes a piston compressor (1) with a cylinder (2). A crankshaft (4) connected to an external drive produces the reciprocating motion of the piston (3). An intake valve (7) is arranged inside the intake port (6) of the cylinder head, and an air supply system (8) is connected to the port (6). Fuel can be supplied either directly into the cylinder (2) of the compressor or into the inlet port (6). The detonation tube (9) is connected to the cylinder (2) through the exhaust port (10). Principle of operation The PCD-system operates in the following way: the crankshaft (4) is set into rotation by the external drive. During the motion of the piston (3) from top dead center (TDC) to bottom dead center (BDC), the intake valve (7) is open and the detonable gas mixture is pumped into the cylinder (2) of the compressor (1) through the intake port (6) by the supply system (8). As soon as BDC is reached, the valve (7) closes. During the subsequent motion of the piston (3) from BDC to TDC, the combustible mixture is compressed in the cylinder (2) and in the detonation tube (9). This raises the density, temperature, and pressure of the combustible mixture at the closed end of the detonation tube (9) and inside the tube itself. As the piston approaches TDC, the combustible mixture self-ignites owing to its compression, and the deflagration-to-detonation transition occurs in the detonation tube (9). The detonation products leave the tube (9) in a short period of time, while the piston is near TDC. The process then repeats. The pulse compression detonation system was designed to solve the problem of efficient high-frequency initiation of detonation in fuel-air mixtures. Instead of a Shchelkin spiral, U-bend tubes, or electrical treatment of the detonable mixture, a technique of ultra-fast pressurized filling of the detonation tube with a preheated detonable gaseous mixture is applied to reduce the time and length of DDT in the tube. Potential uses The PCD-system is applied in techniques for generating pulsed high-speed hot gas flows and for accelerating solid particles and liquid droplets. The PCD-system can be used in pulse detonation engines to initiate the detonation, for detonation coating, for crushing minerals, for abrasive or water blasting, to produce aerosols, in gas detonation lasers, and as a vibrator machine.
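A few of the reported figures can be cross-checked with simple arithmetic. The sketch below is illustrative only: the ambient sound speed and the assumption of one detonation per crankshaft revolution are not stated in the sources.

```python
# Rough cross-checks of the reported prototype figures (assumptions noted).
shock_speed = 1700.0          # m/s at the open end of the tube (reported)
ambient_sound_speed = 343.0   # m/s at about 20 degrees C -- assumed value
print(f"Shock Mach number ~ {shock_speed / ambient_sound_speed:.1f}")  # ~5.0

pulse_freq_hz = 23.5          # midpoint of the reported 23-24 Hz range
period_ms = 1000.0 / pulse_freq_hz
print(f"Cycle period ~ {period_ms:.0f} ms")  # ~43 ms per compression cycle
# If each crankshaft revolution yields one detonation (an assumption),
# this corresponds to roughly 23.5 * 60 ~ 1410 rpm at the external drive.
```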
See also Rotating detonation engine Notes The maximal frequency can exceed 100 Hz per detonation tube. The critical tube diameter at which the deflagration-to-detonation transition happens is equal to the detonation cell size λ. For a propane–air mixture, λ ≈ 50 mm at normal temperature and pressure. References Senderowski C, Bojar Z. (2009) Influence of Detonation Gun Spraying Conditions on the Quality of Fe-Al Intermetallic Protective Coatings in the Presence of NiAl and NiCr Interlayers. Journal of Thermal Spray Technology. 18(3): 435. Korytchenko KV. (2014) High-Voltage Electro-Discharge Technique Used for the Generation of Shock Waves and the Heating of Reactive Gases. Dr.Sc. thesis. Pawlowski A, Czeppe T, Major L, Senderowski C. (2009) Structure Morphology of Fe-Al Coating Detonation Sprayed onto Carbon Steel Substrate. Archives of Metallurgy and Materials. 54(3): 783. International Application No.: PCT/UA2018/000089. External links (Video) An experimental PCD-system operating (Video) An experimental air-breathing PDE operating with a detonation frequency of 24 Hz (Video) An experimental thermal spraying Scientific projects of the department of electrical engineering, National Technical University "Kharkiv Polytechnic Institute" (Video) PDE operating on mixture of petroleum and air Jet engines Aircraft engines
Pulse compression detonation system
[ "Technology" ]
1,081
[ "Jet engines", "Engines", "Aircraft engines" ]
55,796,356
https://en.wikipedia.org/wiki/Voltage-controlled%20resistor
A voltage-controlled resistor (VCR) is a three-terminal active device with one input port and two output ports. The input-port voltage controls the value of the resistor between the output ports. VCRs are most often built with field-effect transistors (FETs). Two types of FETs are often used: the JFET and the MOSFET. There are both floating voltage-controlled resistors and grounded voltage-controlled resistors. Floating VCRs can be placed between two passive or active components. Grounded VCRs, the more common and less complicated design, require that one port of the voltage-controlled resistor be grounded. Usages Voltage-controlled resistors are one of the most commonly used analog design blocks: adaptive analog filters, automatic gain-control circuits, clock generators, compressors, electrometers, energy harvesters, expanders, hearing aids, light dimmers, modulators (mixers), artificial neural networks, programmable-gain amplifiers, phased arrays, phase-locked loops, phase-controlled dimming circuits, phase-delay and -advance circuits, tunable filters, variable attenuators, voltage-controlled oscillators, voltage-controlled multivibrators, and waveform generators all include voltage-controlled resistors. The JFET is one of the more common active devices used for the design of voltage-controlled resistors, so much so that JFET devices are packaged and sold as voltage-controlled resistors. JFETs packaged as VCRs typically have high pinch-off voltages, which provide a greater dynamic resistance range. JFETs for VCRs are often packaged in pairs, which allows VCR designs that require matched transistor parameters. For VCR applications that involve sensor signal amplification or audio, discrete JFETs are often used. One reason is that JFETs and circuit topologies built with JFETs feature low noise (specifically, low 1/f flicker noise and low burst noise). In these applications, low-noise JFETs allow more reliable and accurate measurements and heightened levels of sound purity. Another reason discrete JFETs are used is that they are better suited for rugged environments: JFETs can withstand electrical shocks, electromagnetic interference (EMI), and other high-radiation shocks better than MOSFET circuits, and they can even serve as input surge-protection devices. JFETs are also less susceptible to electrostatic discharge than MOSFETs. Voltage-controlled resistor design Two of the more common and most cost-effective designs for a JFET VCR are the non-linearized and the linearized VCR designs. The non-linearized design requires only one JFET; the linearized design also uses one JFET but adds two linearization resistors. The linearized designs are used for VCR applications that require high input-signal voltage levels; the non-linearized designs are used in low-input-signal-level and cost-driven DC applications. Non-linearized VCR design In the circuit in the figure, a non-linearized VCR design, the voltage-controlled resistor (an LSK489C JFET) is used as a programmable voltage divider. The VGS supply sets the level of the output resistance of the JFET. The drain-to-source resistance of the JFET (RDS) and the drain resistor (R1) form the voltage-divider network. The output voltage can be determined from the equation Vout = VDC · RDS / (R1 + RDS). An LTspice simulation of the non-linearized VCR design verifies that the JFET resistance changes with a change in gate-to-source voltage (VGS).
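The divider relation, and its inversion used later to extract RDS from simulated output voltages, can be sketched in a few lines of Python. The component values mirror the simulation described next; treat them as illustrative rather than LSK489 datasheet values.

```python
# Minimal sketch of the programmable voltage divider described above.
def divider_vout(vdc: float, r_ds: float, r1: float) -> float:
    """Vout = VDC * R_DS / (R1 + R_DS)."""
    return vdc * r_ds / (r1 + r_ds)

def r_ds_from_vout(vdc: float, vout: float, r1: float) -> float:
    """Invert the divider: R_DS = Vout * R1 / (VDC - Vout)."""
    return vout * r1 / (vdc - vout)

vdc, r1 = 4.0, 300.0  # supply and drain resistor used in the simulation below
for vout in (2.5, 1.6):  # simulated outputs at VGS = -1 V and VGS = 0 V
    r_ds = r_ds_from_vout(vdc, vout, r1)
    print(f"Vout = {vout} V  ->  R_DS ~ {r_ds:.0f} ohms")
# Prints ~500 ohms and ~200 ohms, matching the values worked out in the text.
```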
In the simulation (below), a constant input voltage is applied (the VDC supply is set to 4 volts), and the gate-to-source voltage is reduced in steps, which increases the JFET drain-to-source resistance. The resistance between the drain and source terminals of the JFET increases as the gate-to-source voltage becomes more negative and decreases as the gate-to-source voltage approaches 0 volts. The simulation below bears this out. The output voltage is about 2.5 volts with a gate-to-source voltage of −1 volt. Conversely, the output voltage drops to about 1.6 volts when the gate-to-source voltage is 0 volts. With a 4-volt input signal and an R1 of 300 ohms, the range of resistance for the JFET VCR can be calculated from the simulation results as VGS varies between −1 volt and 0 volts using the equation RDS = V0 · R1 / (VDS − V0). Using the above equation, at VGS = −1 V the VCR resistance is about 500 ohms, and at VGS = 0 V the VCR resistance is about 200 ohms. Applying a ramp voltage to the input of a similar VCR circuit (with the load resistor changed to 3000 ohms) allows one to determine the exact value of the resistance of the JFET as the input voltage is varied. The ramp simulation, below, reveals that the drain-to-source resistance of the JFET is fairly constant (about 280 ohms) up until the input sweep voltage, Vsweep (Vsignal), reaches about 2 V. At this point the drain-to-source resistance starts to rise slowly until the input voltage reaches 8 V. At around 8 V, for this bias condition (VGS = 0 V and R = 3 kΩ), the JFET drain current (ID(J1)) saturates, and the resistance is no longer constant but changes with an increase in input voltage. The ramp simulation also indicates that even below 2 V the VCR's resistance is not completely independent of the input voltage level; that is, the VCR is not a perfectly linear resistor. Because the resistance is not constant above 2 V, this non-linearized VCR design is most often used when the input voltage signal is below 1 V, such as in sensor applications, in applications where distortion at higher input voltage levels is not a concern, or in cases where a constant resistance value is not required (for example, in LED dimmer applications and musical pedal-effect circuits). Linearized VCR design To increase the dynamic range of the input voltage, maintain a constant resistance over the input signal range, and improve the signal-to-noise ratio and total-harmonic-distortion specifications, linearization resistors are used. A fundamental limitation of voltage-controlled resistors is that the input signal must be kept below the linearization voltage (approximately the point where the JFET enters saturation). If the linearization voltage is exceeded, the voltage-controlled resistor value will change both with the level of the input voltage signal and with the gate-to-source voltage. To evaluate this design's ability to handle larger input signals, a ramp is applied to the VCR input. From the results of the ramp simulation, one can determine how closely the VCR emulates a real resistor and over what range of input voltages it behaves as one. The linearized VCR ramp simulation, below, indicates that the VCR resistance is constant at approximately 260 ohms for an input signal range from about −6 V to 6 V (the V(Vout)/I(R1) curve). The sweep also indicates that the VCR resistance starts to increase dramatically, as it does in the non-linearized design, once the JFET enters its saturation region.
Because of the linearized VCR's wider constant-resistance region, much larger input signals than in the non-linearized designs can be applied to the VCR without distortion. However, it is also important to consider that the drain resistor value will slightly affect the range of drain-to-source voltages over which the VCR resistance is constant. Because of the increased linearization range, the linearized circuit is able to handle AC signals on the order of 8 V peak-to-peak before visible levels of distortion set in. The simulation below, which uses a 3000-ohm drain resistor, illustrates that the VCR can be successfully used at fairly high input signal levels. For this design, the 8 V peak-to-peak input voltage signal can be attenuated from 2.2 volts peak to 0.5 volts peak when the control voltage is varied from −2.5 volts to 0.5 volts. What is important to note about the linearized VCR design, as opposed to the non-linearized design, is that the output signal does not have any significant offset: it stays centered at 0 V as the control voltage is changed. Simulations of the non-linearized design indicate a significant offset voltage at the output. Another important characteristic of the linearized VCR design is that it has a higher output current than the non-linearized design; the linearization resistors effectively increase the transconductance gain of the VCR. Resistance range selection Different JFETs can be used to obtain different VCR resistance ranges. Typically, the higher the IDSS value of a JFET, the lower the resistance value obtained; similarly, JFETs with lower values of IDSS yield higher values of resistance. With a bank of JFETs with different IDSS values (and hence different RDS values), banks of programmable automatic gain-control circuits can be constructed that cover a wide range of resistances. For example, the LSK489A and LSK489C, graded-IDSS JFETs, show a 3:1 resistance variation. Distortion considerations Distortion is a major concern with voltage-controlled resistors. When an applied AC (or other non-DC) input signal moves the VCR out of the linear triode region (or operates it in a less than perfectly linear part of that region), the resistance increases non-linearly and the input signal is amplified unevenly, distorting the output signal. To overcome this problem, non-linearized VCRs are simply operated at fairly low signal levels. Linearized VCR designs, on the other hand, have significantly less distortion at much higher input signal levels and allow an improvement in the total-harmonic-distortion specification. For example, the simulation below shows a significant amount of visible distortion when an input signal of 5 V peak-to-peak is applied to a non-linearized VCR design. By contrast, a simulation of a linearized VCR design shows very little distortion when an 8 V peak-to-peak input signal is applied. Other VCR topologies and designs Besides these more basic VCR designs, there are numerous more sophisticated designs. These designs often include a differential difference current conveyor (DDCC) circuit, a differential amplifier, two or more matched JFET transistors, or one or two operational amplifiers. Such designs offer improvements in dynamic range, distortion, signal-to-noise ratio, and sensitivity to temperature variations.
Design theory – IV analysis The current–voltage (IV) transfer characteristics determine how the JFET VCR will perform. Specifically, the linear regions of the IV curves determine the input signal range over which the VCR will behave as a resistor. The curves of a specific JFET also dictate the range of resistor values to which the VCR can be programmed. The mathematical function that defines a JFET IV curve is not linear. However, there are regions of these curves that are very linear. These include the triode region (also known as the ohmic or linear region) and the saturation region (also known as the active region or constant-current-source region). In the triode region the JFET acts like a resistor; in the saturation region it behaves like a constant-current source. The point that separates the triode region from the saturation region on each IV curve is roughly the point where VDS equals the overdrive voltage VGS − VP. In the triode region, changes in the drain-to-source voltage will not change (or will change very little) the resistance between the JFET's drain and source terminals. In the saturation region, or more appropriately the constant-current region, the effective drain-to-source resistance changes with the drain-to-source voltage in such a way that the current remains roughly constant for different drain-to-source voltage levels. For values of VGS near zero, the linearization voltage, or triode breakpoint, is much higher than when VGS is near the pinch-off voltage. This means that, in order to maintain constant-resistor behavior for different values of VGS, the maximal input level must be set according to the value of VGS closest to pinch-off, which has the smallest triode breakpoint. The linear triode region actually extends to negative values of VDS. The figure below shows an LTspice simulation of the IV curves in the triode region. As can be seen, a non-linearized LSK489 is approximately linear from about −0.1 V to 0.1 V. For VGS levels near 0 V, the triode linear range extends from about −0.2 V to 0.2 V. As the magnitude of VGS is increased toward pinch-off, the linear triode region is significantly reduced. Conversely, when linearization resistors are used, a similar swept IV simulation indicates that the linear triode region is significantly extended. From the IV curves, one can see that the linearization region for the linearized design extends easily from −6 V to 6 V (the IDS versus VDS versus Vin curves), far beyond the approximately 200 mV range the non-linearized design produces. Of further interest is that the linearization also linearizes the gate-to-source voltage, even though the control input voltage (Vin) is held at a constant DC level during each sweep. This is because, as the drain-to-source voltage changes, the value of VGS changes such that it always tracks one-half of VDS. This change in VGS with VDS is such that the JFET behaves as a resistor up until the point where it saturates. The mathematics of linearization The mathematics behind linearization resistors is directly related to the cancellation of the second-degree VDS term in the JFET triode equation, which relates the drain current to VGS and VDS. Kleinfeld applies Kirchhoff's current law to prove that the non-linear VDS term cancels when linearization resistors are used. The linearization resistors, in order to effect cancellation of the second-degree (quadratic) term, must be equal.
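The cancellation can be made explicit with the textbook square-law triode model. This is a sketch: the square-law expression and the constant k = IDSS/VP² are standard approximations, not equations taken from the cited sources.

```latex
% Square-law JFET triode model, with k = I_{DSS}/V_P^2:
I_D = 2k\left[(V_{GS}-V_P)\,V_{DS} - \tfrac{1}{2}V_{DS}^{2}\right]

% Equal resistors from the control input V_C to the gate and from the
% gate to the drain give, with no gate current and the source grounded:
V_{GS} = \tfrac{1}{2}\left(V_C + V_{DS}\right)

% Substituting cancels the quadratic term:
I_D = 2k\left[\left(\tfrac{V_C}{2}-V_P\right)V_{DS}
      + \tfrac{1}{2}V_{DS}^{2} - \tfrac{1}{2}V_{DS}^{2}\right]
    = 2k\left(\tfrac{V_C}{2}-V_P\right)V_{DS}

% so the drain presents a linear resistance set only by V_C:
r_{DS} = \frac{1}{2k\left(V_C/2 - V_P\right)}
```

The drain current becomes strictly proportional to VDS, which is exactly the constant-resistance behavior the linearized simulations above display.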
Equal-valued linearization resistors divide the drain-to-source voltage by 2, effectively cancelling out the non-linear VDS term in the JFET triode equation. The future of voltage-controlled resistors Everyday and high-performance VCRs are essential to the successful design of many analog electronic circuits and will continue to be so. VCR designs are expected to play a central role in the advancement of artificial-intelligence (neural-network) based sensor networks. The VCR, essentially the heart of the synaptic cell in an analog neural network, is needed to enable the high-speed analog data processing and control of information that microcontrollers, digital-to-analog converters, and analog-to-digital converters presently perform. Low-noise JFETs, because of their low-signal sensitivity, their resilience to electromagnetic interference and radiation, and their ability to be configured both as a VCR in a synaptic cell and as a low-noise high-performance sensor preamplifier, offer a solution for the implementation of artificial-intelligence-based sensor nodes. This is a natural extension of the fact that low-noise JFETs and low-noise JFET circuit topologies are extensively used in the design of low-noise VCRs and low-noise preamplifiers in sensor measurement applications. References Resistive components
Voltage-controlled resistor
[ "Physics" ]
3,411
[ "Resistive components", "Physical quantities", "Electrical resistance and conductance" ]
55,796,756
https://en.wikipedia.org/wiki/Zwicky%20Transient%20Facility
The Zwicky Transient Facility (ZTF, obs. code: I41) is a wide-field astronomical sky survey using a new camera attached to the Samuel Oschin Telescope at Palomar Observatory in San Diego County, California, United States. Commissioned in 2018, it supersedes the (Intermediate) Palomar Transient Factory (2009–2017), which used the same observatory code. It is named after the Swiss astronomer Fritz Zwicky. Description Observing in visible and infrared wavelengths, the Zwicky Transient Facility is designed to detect transient objects that rapidly change in brightness, such as supernovae, gamma-ray bursts, and collisions between two neutron stars, as well as moving objects like comets and asteroids. The new camera is made of 16 CCDs of 6144×6160 pixels each, enabling each exposure to cover an area of 47 square degrees. The Zwicky Transient Facility is designed to image the entire northern sky in three nights and scan the plane of the Milky Way twice each night to a limiting magnitude of 20.5 (r band, 5σ). The amount of data produced by ZTF is expected to be ten times larger than that of its predecessor, the Intermediate Palomar Transient Factory. ZTF's large data volume allows it to act as a prototype for the Vera C. Rubin Observatory (formerly the Large Synoptic Survey Telescope), which is expected to be in full operation in 2024 and will accumulate ten times more data than ZTF. First light was achieved on November 1, 2017, with an image of an area in the constellation Orion. The first confirmed findings from the ZTF project were reported on 7 February 2018, with the discovery of 2018 CL, a small near-Earth asteroid. Discoveries On 9 May 2019, ZTF discovered its first comet, C/2019 J2 (Palomar), a long-period comet. A search of the ZTF archive identified images of the interstellar comet 2I/Borisov as early as December 13, 2018, extending observations back eight months. 594913 ꞌAylóꞌchaxnim, the first asteroid discovered whose orbit is entirely within the orbit of Venus, was discovered by ZTF during its Twilight Survey. A search of ZTF images identified the cataclysmic variable ZTF J1813+4251, a binary with an orbital period of under one hour. AT2021lwx, a long-lasting high-energy transient with a redshift of 0.9945, was discovered on 13 April 2021. AT2022cmc, a very bright tidal disruption event with a redshift of 1.19325, is among the brightest astronomical events ever observed. Comet C/2022 E3 (ZTF) reached naked-eye visibility in early 2023. See also OGLE survey GOTO (telescope array) References Astronomical surveys Palomar Observatory
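The camera and survey-rate figures above imply some simple arithmetic, sketched here. Treating the northern sky as half the celestial sphere is an idealization that ignores horizon limits and field overlaps.

```python
# Back-of-the-envelope checks of the camera and survey figures above.
ccds, width, height = 16, 6144, 6160
total_pixels = ccds * width * height
print(f"{total_pixels / 1e6:.0f} Mpix")  # ~606 megapixels in the mosaic

field_deg2 = 47.0                    # sky area per exposure (reported)
northern_sky_deg2 = 41_253 / 2       # assumption: half the celestial sphere
pointings = northern_sky_deg2 / field_deg2
print(f"~{pointings:.0f} pointings to tile the northern sky once")  # ~439
```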
Zwicky Transient Facility
[ "Astronomy" ]
573
[ "Astronomical surveys", "Works about astronomy", "Astronomical objects" ]
55,796,829
https://en.wikipedia.org/wiki/Exfoliation%20corrosion%20%28metallurgy%29
In metallurgy, exfoliation corrosion (also called lamellar corrosion) is a severe type of intergranular corrosion that lifts surface grains away from the metal through the formation of corrosion products at grain boundaries just beneath the surface. It is frequently found on extruded sections, where the grain structure is thinner and more elongated than in rolled material. It can affect aircraft structures, marine vessels, heaters and other objects. References Corrosion
Exfoliation corrosion (metallurgy)
[ "Chemistry", "Materials_science" ]
83
[ "Metallurgy", "Corrosion", "Electrochemistry", "Electrochemistry stubs", "Materials degradation", "Physical chemistry stubs", "Chemical process stubs" ]
55,796,870
https://en.wikipedia.org/wiki/Information%20System%20Contingency%20Plan
An Information System Contingency Plan (ISCP) is a pre-established plan for restoring the services of a given information system after a disruption. The US National Institute of Standards and Technology Computer Security Resource Center (CSRC) has published a Special Publication (SP), SP 800-34, that guides organizations in developing an ISCP. References Information systems IT risk management
Information System Contingency Plan
[ "Technology" ]
80
[ "Information systems", "Computing stubs", "Information technology", "Computer network stubs" ]
55,797,021
https://en.wikipedia.org/wiki/Here%20One
Here One is a pair of wireless smart earbuds developed and manufactured by Doppler Labs. It allows users to filter real-world sound, stream music, amplify speech, and take phone calls, and it can selectively reduce certain sounds, such as background noise. Here One has been called the world's first in-ear computer, and in June 2018 it was inducted into the collection of the Cooper Hewitt, Smithsonian Design Museum for innovation in audio technology. Background Here One was built on the hardware and software foundation of its predecessor, Here Active Listening, which was originally launched on Kickstarter; Here One added streaming capability, allowing users to stream music and take phone calls in addition to the real-world sound control found in Here Active Listening. Doppler Labs showcased Here One to fans and performing artists through a collaborative integration at the 2016 Coachella Music Festival. The product was offered to ticket holders so they could bring the earbuds to the festival and filter sound at Coachella. Doppler Labs also introduced custom filters for the Coachella stages, including a Tiësto mode designed by the DJ and producer, so that wearers could enhance the music at each stage. Demos were also provided backstage to performing artists. On November 1, 2017, Doppler Labs announced that it was ceasing sales of Here One. Design The earbuds are designed to fit comfortably in the ear, with a range of ear tips and wings to ensure a secure fit. They come with a sleek, compact charging case that can provide up to two additional charges for the earbuds. The earbuds themselves are small and discreet, with touch controls for adjusting settings and accessing features. Hardware Here One consists of two wireless earbuds, a charging case, and a connected smartphone app. Each earbud contains four integrated circuits, including a digital signal processor, three analog MEMS microphones (which provide directional capabilities), and a high-fidelity balanced-armature speaker. Software Here One is powered by the Here One app, a free smartphone application used to control the settings of Here One. The app includes six preset audio filters which allow users to filter the sounds of specific environments. It also includes a Live Mix section with effects and a live equalizer that lets users adjust specific audio frequencies and add effects like reverb and bass boost to the real-world sound entering their ears. Here One utilizes advanced machine learning algorithms for its Smart Suggest engine. This backend system uses point-of-interest (POI) data and environmental cues to tune the Here One buds to their surroundings and provide the best listening experience for the user. This "machine hearing" model was trained on over a million unique binaural samples that Doppler Labs audio engineers collected over the course of two years and is used to continuously improve the Here One software over time. Here One also offers a Personal Listening Profile, a self-calibrated hearing test that helps the product adjust to the parameters of each individual's hearing needs and preferences. Here One has been called the world's first in-ear computer by publications including Fast Company for its ability to intelligently control audio and relay information back to the phone and to the cloud. USA Today called Here One "wireless computers designed for the ear" and compared the earbuds to the innovations found in Oculus Rift.
The Next Web called Here One "the real future of AR", sharing how in-ear computing provides users the tools to manipulate and personalize audio environments in real time. Here One was initially set to be released in November 2016; Doppler Labs delayed the release to February 2017 in order to ensure manufacturing consistency. Applications for hearing health Here One wireless smart earbuds have a range of applications for hearing health, including the ability to enhance speech intelligibility in noisy environments. They can also be used to reduce tinnitus symptoms and provide users with a personalized listening experience tailored to their unique hearing profile. Reception Here One won the South by Southwest Best of Show Innovation Award and the Award for Innovation in Music at the 2016 SXSW festival. David Pierce of Wired called the features "magic", praising the ability to control sound in real time, and named the hands-on control a "superpower" and "one of the most thrilling gadgets". Gizmodo praised the comfort but questioned the battery life. Time magazine claimed Here One provides "supersonic hearing" and praised the earbuds' ability to selectively reduce frequency ranges. Inc. discussed why the earbuds do much more than play audio. Reviewing Here One as an augmented reality device, The Verge named the audio "the best you'll find on truly wireless Bluetooth earbuds", was impressed by the hardware design, and called the sound manipulation technology "robust". References Wireless Products introduced in 2017 2017 software
Here One
[ "Engineering" ]
999
[ "Wireless", "Telecommunications engineering" ]
55,797,400
https://en.wikipedia.org/wiki/Design%20studies
Design studies can refer to any design-oriented studies but is more formally an academic discipline or field of study that pursues, through both theoretical and practical modes of inquiry, a critical understanding of design practice and its effects in society. Characteristics and scope Design studies encompasses the study of both the internal practices of design and the external effects that design activity has on society, culture and the environment. Susan Yelavich explained design studies as embracing "two broad perspectives—one that focuses inward on the nature of design and one that looks outward to the circumstances that shape it, and conversely, the circumstances design changes, intentionally or not". This dual aspect is reflected in the complementary orientations of the two leading journals in the field. Design Studies (established 1979) is "the interdisciplinary journal of design research" and is "focused on developing understanding of design processes". Design Issues (established 1984) "examines design history, theory, and criticism" and "provokes inquiry into the cultural and intellectual issues surrounding design". An interdisciplinary field, design studies includes many scholarship paradigms and uses an evolving set of methodologies and theories drawn from key thinkers within the field itself. The field has connections with the humanities, the social sciences and the sciences, but many scholars regard design itself as a distinct discipline. Design studies scholars recognize that design, as a practice, is only one facet of much larger circumstances. They examine and question the role of design in shaping past and present personal and cultural values, especially in light of how these values shape the future. The extensive scope of design studies is conveyed in two collected sets of readings: Design Studies: A Reader (2009) is a compilation of extracts from classic writings that laid the foundations of the field, and The Routledge Companion to Design Studies (2016) contains newer writings over a wide range of topics such as gender and sexuality, consumerism and responsibility, globalization and post-colonialism. History Origins and early development The origins of design studies lie in the rapid expansion of issues and topics around design since the 1960s, including its role as an academic discipline, its relationships with technological and social change, and its cultural and environmental impacts. As a field of study, it developed more specifically out of the growing interaction between design history and design research. Debates about the role of design history and the nature of design research from the 1970s and 80s were brought together in 1992 when Victor Margolin argued in the journal Design Studies for the incorporation of design history into design research, in a combined approach to the study of design. Margolin noted the "dynamic crossings of intellectual boundaries" when considering developments in both fields at the time, and defined design studies as "that field of inquiry which addresses questions of how we make and use products in our daily lives and how we have done so in the past". Margolin's argument triggered counterarguments and other suggestions about what constitutes design history and how to characterize the study of design as something more than a professional practice.
In a reply to Margolin in the Journal of Design History in 1993, Adrian Forty argued that design history had consistently performed a vital role in examining questions around quality in design and was already embracing new lines of thought, for example from cultural studies and anthropology. The growing debate led to a special issue of the journal Design Issues in 1995, which focused attention on "some of the controversies and problems that surround the seemingly simple task of telling the history of design". A shift from design history towards design studies continued to develop as the overlapping research methods and approaches to the study of design began to lead to broader questions of meaning, authority and power. The realization came that design history is "but one component of what goes on in studying design, and to claim that all that is going on now could use the umbrella term 'design history' is not tenable". Foundational figures Reyner Banham (1922–1988) Banham's Theory and Design in the First Machine Age and his journalistic articles written for New Society have been described by the British writer and design historian Penny Sparke as representing a major "shift in how material culture was seen". His writing focused on popular commodities as well as formal architecture. Gui Bonsiepe (born 1934) Bonsiepe is a German designer and professor who has taught at various universities, including FH Köln; Carnegie Mellon; EUA, Chile; LBDI/FIESC, Brazil; and the Jan van Eyck Academy, Netherlands. His most influential work is Design and Democracy. Richard Buchanan American professor of design, management, and information systems and editor of the journal Design Issues. He is well known for "extending the application of design into new areas of theory and practice, writing, and teaching as well as practicing the concepts and methods of interaction design." As a co-editor of Discovering Design: Explorations in Design Studies with Victor Margolin, he brought together the fields of psychology, sociology, political theory, technology studies, rhetoric, and philosophy. Nigel Cross (born 1942) Cross is a British academic, design researcher and educator who has focused on design's intellectual space in the academic sphere. He is an emeritus professor of design studies in the Department of Design and Innovation, Faculty of Technology, at the UK's Open University, and emeritus editor-in-chief of Design Studies, the international journal of design research. In his 1982 journal article "Designerly Ways of Knowing" in Design Studies, Cross argued that design has its own intellectual and practical culture as a basis for education, contrasting it with the cultures of science and of the arts and humanities. Clive Dilnot Originally educated as a fine artist, Dilnot later began studying social philosophy and the sociology of culture with the Polish sociologist Zygmunt Bauman. Dilnot has worked on the history, theory, and criticism of the visual arts in their broadest terms. His teaching and writing have focused on design history, photography, criticism, and theory. Dilnot studied ethics in relation to design, and the role of design's capabilities in creating a humane world, in his book Ethics? Design?, published in 2005. Adrian Forty (born 1948) Forty was Professor of Architectural History at The Bartlett, the Faculty of the Built Environment at University College London.
Forty believed that the drive to define a new field, the field of design studies, was unnecessary because the field of design history had not exhausted all of its possibilities. His book Objects of Desire explores how consumer goods relate to larger issues of social processes. Tony Fry Fry is a British design theorist and philosopher who writes on the relationship between design, unsustainability, and politics. Fry has taught design and cultural theory in Britain, the United States, Hong Kong and Australia. He is perhaps best known for his writing on defuturing, the destruction of the future by design. John Heskett (1937–2014) In the late 1970s, Heskett became a prominent member of a group of academics based in several of Britain's art schools (then part of the polytechnics) who helped develop the discipline of design history and theory, later subsumed under the broader banner of design studies. Heskett brought his deep knowledge of economics, politics and history to the project and worked alongside scholars from other disciplines to communicate the meaning and function of the increasingly important concept of 'design', both past and present. Victor Margolin (1941–2019) Considered one of the founders of design studies, Victor Margolin was professor emeritus of design history at the University of Illinois at Chicago. He was a co-editor of the academic design journal Design Issues, and the author, editor, or co-editor of a number of books including Design Discourse, Discovering Design, The Idea of Design, The Designed World, and The Politics of the Artificial. Victor Papanek (1923–1998) An industrial designer, Papanek suggested that industrial design had lethal effects by virtue of creating new species of permanent garbage and by choosing materials and processes that pollute the air. His writing and teaching were consistently in favour of re-focusing design for the general good of humanity and the environment. Elizabeth Sanders As a practitioner, Sanders introduced many of the methods being used today to drive design from a human-centered perspective. She has practiced participatory design research within and between all the design disciplines. Her current research focuses on codesign processes for innovation, intervention, and transdisciplinary collaboration. Penny Sparke Sparke is a professor of design history and director of the Modern Interiors Research Centre (MIRC) at Kingston University, London. Along with Fiona Fisher, Sparke co-edited The Routledge Companion to Design Studies, a comprehensive collection of essays embracing the wide range of scholarship relating to design: theoretical, practice-related, and historical. Issues and concepts Design studies inquires about the meanings and consequences of design. It studies the influence of designers and the effects design has on citizens and the environment. Victor Margolin distinguishes a degree in design from a degree in design studies by saying that "the former is about producing design, while the latter is about reflecting on design as it has been practiced, is currently practiced, and how it might be practiced". Design studies urges a rethinking of design as a process, as a practice, and as a generator of products and systems that give lives meaning and are imbricated in our economic and political systems. The study of design thinking explores the complexities inherent in the task of thinking about design.
Design studies is also concerned with the relationship between design and gender, design and race, and design and culture. It studies design as ethics, its role in sustainability (social and environmental), and the nature of agency in design's construction of the artificial. Issues Ethics Design has the capacity to structure life in certain ways, and thus design should result in greater good for individuals and society, but it does not always do so. Ethics deals with how our actions affect others and how they should. Design studies sees ethics as central to design. Tony Fry, a leading figure in design studies, said that although it is widely recognized that design is an ethical process, ethics remains underdeveloped and marginal within design education. Clive Dilnot's essay "Ethics in Design: Ten Questions" explores the relationship between design and ethics and why we need ethics in design. Dilnot discussed the ability of the designer to address the public as citizens and not as consumers, and the infusing of "humane intelligence" into the made environment. Concepts The artificial Clive Dilnot wrote that the artificial is by no means confined to technology. Today, it is a combination of technical systems, the symbolic realm (including mind), and the realm of human transformations and transmutations of nature. He gave the example of a genetically modified tomato that is neither purely natural nor purely artificial. It belongs rather to the extended realms of living things that are, as human beings ourselves are, a hybrid between these conditions. Design studies scholars also reference sociologist Bruno Latour when investigating the dynamics of the artificial. Latour's concept of actor–network theory (ANT) portrays the social as an interdependent network of human individual actors and non-human, non-individual entities called actants. Agency Design plays a constitutive role in everyday life. The things people see and read, the objects they use, and the places they inhabit are all designed. These products (all artificial because they are made by people) constitute an increasingly large part of the world. The built environment is the physical infrastructure that enables behavior, activity, routines, habits, and rituals, which affect our agency. Jamer Hunt defined the built environment as the combination of all design work. Decolonizing design There have been protests that the field of design studies is not sufficiently "geared towards delivering the kinds of knowledge and understanding that are adequate to addressing the systemic problems that arise from the coloniality of power". Moves towards decolonizing design entail changing design discourse from within by challenging and critiquing the dominant status quo from spaces where marginal voices can be heard, by educating designers about the politics of what they do and create, and by posing alternatives to current (colonial) design practices, rooted in the contexts and histories of the Global South rather than just the North. The argument is that design history and design research tend to have the strongest influences from the triad of Western Europe, North America, and Japan. The effect tends to be in line with the notion that history is written by the victors and thus design history is written by the economically powerful.
Denise Whitehouse said, "While many countries produce local histories of design, the output is uneven and often driven by nationalist and trade agendas", although some academic groups such as the Japanese Design History Forum and the International Committee for Design History and Studies (ICDHS) attempt to draw together both western and non-western, post-communist, postcolonial, Asian, and Southern Hemisphere approaches, "to remap the scope and narrative concerns of design history". A special issue of the Design and Culture journal (Volume 10, Issue 1, 2018) was published on the topic of decolonizing design. Research methods The following are some of the research methods that may be used in design studies. Design ethnography This form of research requires the scholar to partake in the use of, or observe others use, a designed object or system. Design ethnography has become a common tool where design is observed as a social practice. It describes a process in which a researcher will partake in traditional observant-style ethnography, observing potential users complete activities that can inform design opportunities and solutions. Other ethnographic techniques used by design studies scholars fall more in line with anthropologists' usage of the method. These techniques are observant and participant ethnography. The observant style requires the scholar to observe in an unobtrusive manner; observations are recorded and further analyzed. The participant style requires the scholar to partake in the activities with their subject. This tactic enables the scholar to record what they see, but also what they themselves experience. Design ethnography emerged out of a movement in the late 1980s by organizations such as Xerox/PARC (Palo Alto Research Center), the Institute for Research on Learning, and Jay Doblin & Associates toward social science approaches in their product design and development efforts. In the 1990s, the research and design consultancy E-Lab (founded by former Doblin employees) took this approach further, pioneering a multidisciplinary methodology guided by anthropology and ethnography. E-Lab challenged conventional market research by prioritizing real-world user experiences and behaviors uncovered through fieldwork, then analyzing the data for patterns organized by explanatory frameworks. Actor-network theory While it remains a broader theory or concept, actor-network theory can be used by design studies scholars as a research framework. When using this method, scholars will assess a designed object and consider the physical and nonphysical interactions which revolve around the object. The scholar will analyze the object's impact on psychological, societal, economic, and political worlds. This widened viewpoint allows the researcher to explore and map out the object's many interactions, identify its role within the network, and establish in what ways it is connected to stakeholders. Semiotics, rhetorical analysis, and discourse theory Design studies scholars may also analyze or research a designed object or system by studying it in terms of representations and their various meanings. Semiotics studies acts of communication between the designer, the thing, and the user or users. This concept branches out into a rhetorical analysis of the designed thing. Scholars such as Richard Buchanan argue that design can be studied in such a way due to the existence of a design argument. The design argument is made up of the designer, the user, and the applicability to "practical life".
The scholar would pull these segments apart and thoroughly analyze each component and their interactions. Discourse analysis and Foucauldian discourse analysis can be adopted by the design studies scholar to further explore the above components. A Foucauldian approach specifically will analyze the power structures put in place by, manipulated by, or used within a designed thing or object. This process can be particularly useful when the scholar intends to understand whether the designed thing has agency or enables others to have agency. Societies The Design Research Society (DRS) is a learned society committed to promoting and developing design research. It is the longest established, multi-disciplinary worldwide society for the design research community, founded in the UK in 1966. The purpose of the DRS is to promote "the study of and research into the process of designing in all its many fields". The Design History Society is an organization that promotes the study of global design histories, and brings together and supports all those engaged in the subject—students, researchers, educators, designers, designer-makers, critics, and curators. The Society aims to play an important role in shaping an inclusive design history. References External links Journals CoDesign: "research and scholarship into principles, procedures and techniques relevant to collaboration in design or that relate to its theoretical underpinnings; encompassing collaborative, co-operative, participatory, socio-technical and community design". Design and Culture: "reflects the state of scholarship in the field of design and nurtures new or overlooked lines of inquiry that redefine our understanding of design". Design Issues: "examines design history, theory, and criticism, and provokes inquiry into the cultural and intellectual issues surrounding design". The Design Journal: "aims to publish thought-provoking work which will have a direct impact on design knowledge and which challenges assumptions and methods". Design Studies: "focused on developing understanding of design processes; studies design activity across all domains of application, including engineering and product design, architectural and urban design, computer artefacts and systems design". International Journal of Design: "devoted to publishing research papers in all fields of design, including industrial design, visual communication design, interface design, animation and game design, architectural design, urban design, and other design related fields". Journal of Design History: "plays an active role in the development of design history, including the history of crafts and applied arts, as well as contributing to the broader fields of visual and material culture studies". Journal of Design Research: "emphasising human aspects as a central issue of design through integrative studies of social sciences and design disciplines". She Ji: The Journal of Design, Economies, and Innovation: "focusing on economics and innovation, design process, and design thinking in today's complex socio-technical environment". Architecture: "aims to provide an advanced forum for studies related to architectural research, including landscape architecture, architecture design, civil engineering design, systems architecture, industrial design, community and regional planning, interior design, sustainable design, and technology, sustainability, pedagogy, visual culture and artistic practices of architecture". Design studies Academic disciplines
Design studies
[ "Engineering" ]
3,856
[ "Design", "Design studies" ]
55,798,016
https://en.wikipedia.org/wiki/Gertrude%20Mwangala
Dr. Gertrude Mwangala Akapelwa is a Zambian academic, a former IBM systems engineer, and a former African Development Bank ICT Infrastructure and Operations Division Manager, who serves as the Vice Chancellor of Victoria Falls University of Technology (VFU), an institution she helped establish in 2002. Background and education She was born in Zambia circa 1948. She was admitted to the University of Zambia in 1969, graduating in 1973 with a Bachelor of Science in Mathematics and Education. Her Master of Public Administration degree, specializing in Public Policy and Management, was obtained from Harvard University's Kennedy School of Government in 1997. She completed a research-based Doctor of Education degree, awarded by the University of Liverpool in 2020. Her doctoral specialization is the integration of information and communications technology into the higher-education learning process for quality enhancement. Career Gertrude Mwangala Akapelwa started out in 1973, working for the Zambian subsidiary of International Business Machines (IBM) as a Systems Engineer. She served there for eight and a half years, until December 1981. In January 1982, she was hired as the Information Technology Infrastructure and Systems Manager at the African Development Bank, working in that capacity for just short of 24 years until June 2005, based in both Abidjan, Ivory Coast and Tunis, Tunisia. In June 2002, she founded the Victoria Falls University of Technology (VFU), based in Livingstone, Zambia, and has served as its Vice Chancellor since 2010; as of November 2017, she remained the incumbent. Other considerations She serves on several boards of public and private companies, including as a non-executive director of Zambia Railways Limited. She is the owner and Chief Executive Officer (CEO) of La Residence Executive Guest House, located in Livingstone, Zambia. She previously served as the Chairperson of the Zambia Information & Communications Technology Authority (ZICTA) and as a non-executive director of Zambia National Commercial Bank. She also served on the Technical Committee of the Lands Information Management System for the Zambian Government. She has received numerous awards over the course of her career, including (a) the John Mwanakatwe Distinguished Award, conferred by the Zambia Society for Public Administrators; (b) recognition as the pioneer female computer scientist in Zambia, from the Zambia Association of University Women (ZAUW); (c) the IBM Systems Engineering Professional Excellence Award; and (d) the Africa's Most Influential Women in Business awards, as Overall, Regional and Country winner in 2013, 2014 and 2015. See also Mizinga Melu Florence Mumba Elizabeth Muyovwe References Living people Zambian engineers Systems engineers Zambian women engineers 1954 births Academic staff of Victoria Falls University of Technology Harvard Kennedy School alumni Alumni of the University of Liverpool Zambian women academics 21st-century Zambian women 21st-century women engineers University of Zambia alumni
Gertrude Mwangala
[ "Engineering" ]
566
[ "Systems engineers", "Systems engineering" ]
55,799,101
https://en.wikipedia.org/wiki/Glossary%20of%20representation%20theory
This is a glossary of representation theory in mathematics. The term "module" is often used synonymously for a representation; for the module-theoretic terminology, see also glossary of module theory. See also Glossary of Lie groups and Lie algebras, list of representation theory topics and :Category:Representation theory. Notations: We write . Thus, for example, a one-representation (i.e., a character) of a group G is of the form . A B C D E F G H I J K L M O P Q R S T U V W Y Z Notes References Theodor Bröcker and Tammo tom Dieck, Representations of compact Lie groups, Graduate Texts in Mathematics 98, Springer-Verlag, Berlin, 1995. Claudio Procesi (2007) Lie Groups: an approach through invariants and representation, Springer, . N. Wallach, Real Reductive Groups, 2 vols., Academic Press 1988, Further reading M. Duflo et M. Vergne, La formule de Plancherel des groupes de Lie semi-simples réels, in “Representations of Lie Groups;” Kyoto, Hiroshima (1986), Advanced Studies in Pure Mathematics 14, 1988. External links https://math.stanford.edu/~bump/ Representation theory Wikipedia glossaries using description lists
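The notion of a character referenced in the notation above can be spelled out explicitly; the following is the standard definition rather than anything specific to this glossary's (partially elided) conventions. A one-dimensional representation (character) of a group G over a field k is a group homomorphism

$$\chi \colon G \to \mathrm{GL}_1(k) = k^{\times}, \qquad \chi(gh) = \chi(g)\,\chi(h) \quad \text{for all } g, h \in G.$$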
Glossary of representation theory
[ "Mathematics" ]
278
[ "Representation theory", "Fields of abstract algebra" ]
55,799,378
https://en.wikipedia.org/wiki/George%20Kurtz
George Kurtz (born October 14, 1970) is an American businessman. He is the CEO and founder of the cybersecurity technology company CrowdStrike, and the founder and former CEO of Foundstone, a worldwide security products and anti-virus software company. He is also a co-author of Hacking Exposed: Network Security Secrets & Solutions, described as the best-selling cybersecurity book of all time. Kurtz served as executive vice president and chief technology officer of McAfee when that company released a patch that crashed many of its clients' computers. In 2024, his company CrowdStrike crashed millions of Windows computers around the world, causing billions of dollars in economic losses in what has been called the largest outage in the history of information technology. In 2024, Fortune Magazine named Kurtz as the 76th most powerful person in business. Kurtz is an FIA Bronze-rated race car driver who has won the Pro-Am class in the 24 Hours of Le Mans and the 24 Hours of Spa. Early life and education Kurtz grew up in Parsippany–Troy Hills, New Jersey, and attended Parsippany High School. He claims that he started programming video games on his Commodore when he was in fourth grade. He went on to build bulletin board systems in high school. Kurtz received a Bachelor of Science with a major in accounting from the private Seton Hall University in South Orange, New Jersey. Career Price Waterhouse and Ernst & Young After college, Kurtz began his career at Price Waterhouse as a CPA. In 1993, the company made Kurtz one of the first employees in its new security group, where he and his team were hired by corporations to perform penetration testing and locate network risk. Kurtz's talent at this new concept of penetration testing led Price Waterhouse to make him a founding employee in the emerging domain of cybersecurity. While at Price Waterhouse, and later when he joined Ernst & Young, Kurtz developed a number of penetration testing and Internet security protocols still in use today. After a few years at Ernst & Young, Kurtz left to start his first company, Foundstone. In 1999, Kurtz co-wrote Hacking Exposed, a book about cybersecurity for network administrators, with Stuart McClure and Joel Scambray. The book sold more than 600,000 copies and was translated into more than 30 languages. Foundstone Kurtz's company Foundstone was started in 1999. Frustrated with the time-consuming and incomplete vulnerability assessment technologies of the day, Kurtz pioneered vulnerability management, creating both the category and the term. Through Foundstone, Kurtz also pioneered the concept of a tech-focused cybersecurity product company with in-house elite cybersecurity services; industry opinion of the day was that product companies could not also offer high-end consulting services. Foundstone competed against Internet Security Systems, which was later acquired by IBM. Foundstone also pioneered security training for aspiring as well as experienced security professionals in pen-testing and vulnerability management. The training was based on Kurtz's book Hacking Exposed and created a global community of cybersecurity professionals well-versed in the new domain of vulnerability management. McAfee In August 2004, Foundstone was acquired for $86 million by McAfee, which appointed Kurtz senior vice president and general manager of risk management. In October 2009, McAfee promoted him to chief technology officer and executive vice president.
Six months later, McAfee accidentally disrupted its customers' operations around the world when it pushed out a software update that deleted critical Windows XP system files and caused affected systems to bluescreen and enter a boot loop. In 2010, Kurtz participated in Operation Aurora, the investigation of a series of cyber attacks against Google and several other companies. In 2011, he led McAfee's research around the emerging Night Dragon and Shady RAT threats, alongside Dmitri Alperovitch, who was then McAfee's vice president of threat research. Over time, Kurtz became frustrated that existing security technology functioned slowly and was not, as he perceived it, evolving at the pace of new threats. On a flight, he watched the passenger seated next to him wait 15 minutes for McAfee software to load on his laptop, an incident he later cited as part of his inspiration for founding CrowdStrike. He resigned from McAfee in October 2011. CrowdStrike In November 2011, Kurtz joined private equity firm Warburg Pincus as an "entrepreneur-in-residence" and began working on his next project, CrowdStrike. He, Gregg Marston (former chief financial officer at Foundstone), and Dmitri Alperovitch co-founded CrowdStrike in Irvine, California, formally announcing the company's launch in February 2012. Kurtz pitched the idea for the company to Warburg Pincus and secured $25 million in funding. The company was founded with the goal of transforming how companies approach cybersecurity. Kurtz wanted the new focus to be cloud-based, intelligence driven, and proactive. At the time of its founding, CrowdStrike was one of the first companies, if not the first, to bring cybersecurity to the cloud. The company developed a "cloud-first" model in order to reduce the software load on customers' computers. CrowdStrike shifted from anti-malware and antivirus products (McAfee's approach to cybersecurity) to identifying the techniques used by hackers in order to spot threats. CrowdStrike, now headquartered in Sunnyvale, California, attracted public interest in June 2016 for its role in investigating the Democratic National Committee cyber attacks, and in May 2017, the company exceeded a valuation of $1 billion. In 2019, CrowdStrike's $612 million initial public offering on the Nasdaq brought the company to a $6.6 billion valuation under Kurtz's leadership. In March 2020, when discussing company strategy at CrowdStrike, he said that "not one time have I regretted firing someone too fast." In July 2020, an IDC report named CrowdStrike as the fastest-growing endpoint security software vendor. A year later, Kurtz ranked on CRN's 2021 Top 100 Executives list. In 2023, Kurtz warned of cyber threats from China and criticized Microsoft's response after Chinese hackers exploited a flaw in Microsoft's cloud email service to gain access to the email accounts of U.S. government employees. In 2024, CrowdStrike was added to the S&P 500. Just five years after going public, it became the fastest cybersecurity company ever to be added to the index. On July 19, 2024, CrowdStrike caused one of the largest information technology outages in history when it pushed out a software update that caused an estimated 8.5 million computers running Microsoft Windows to crash and left them unable to properly restart. This disrupted industries and governmental operations around the world, causing economic losses estimated in the billions of dollars in what has been called the largest IT outage in history and "historic in scale".
In a live interview on NBC's Today, CEO Kurtz apologized to the public. He said company leaders were "deeply sorry for the impact that we've caused to customers, to travelers, to anyone affected by this, including our companies". Racing career In 2016, Kurtz made his racing debut in the Pirelli World Challenge, driving an Aston Martin Vantage GT4 for TRG-AMR. He remained in the series for the following two years, winning the GTS Am class in 2017 at the wheel of a McLaren 570S GT4. In 2019, the championship was renamed the GT World Challenge America, which Kurtz contested with pro driver Colin Braun in the GT3 category. The duo finished fifth in the Pro-Am standings. The duo reunited in 2020, when Kurtz made eight podiums, including his first overall win in GT3 machinery at Virginia International Raceway and another victory, to finish as the runner-up in Pro-Am. In 2021, Kurtz again raced in the GTWC America series but also in prototype cars, competing in a Ligier JS P320 in the IMSA SportsCar Championship's LMP3 category. In that series, he competed solely in the endurance events, winning at Sebring and scoring a class podium at Watkins Glen. Three missed weekends in the former series dropped Kurtz and Braun to sixth in the drivers' standings, with two class wins. In 2022, Kurtz remained in both championships, scoring two podiums in IMSA, including third place in class at the 24 Hours of Daytona. In GTWC America, he won ten of 16 races, earning the title in the SRO3 class. In 2023, Kurtz stepped up to the LMP2 category to compete full-time in the IMSA SCC, driving for his own CrowdStrike team supported by Algarve Pro Racing alongside Ben Hanley, with silver-ranked Nolan Siegel supporting the pair at the endurance rounds. Kurtz and Hanley won at the season-ending Petit Le Mans and another race, but finished second in the standings, edged out by Paul-Loup Chatin and Ben Keating. In the Michelin Endurance Trophy, which took into account placings solely within the four endurance races, the Kurtz-Hanley combo came out on top. Kurtz also made his debut at the 24 Hours of Le Mans, where he, Colin Braun, and James Allen won in the LMP2 Pro-Am subclass. Finally, he returned to the GTWC America to defend his title, and although Kurtz only finished third in the SRO3 category, he claimed Pro-Am honours, having partnered with Braun throughout the year. During the 2023–24 winter, Kurtz and Braun raced in the Asian Le Mans Series, where they and young pro Malthe Jakobsen won two races on their way to the championship. Following the 2024 CrowdStrike incident, Kurtz withdrew from racing for the season; he returned to motorsport for the 2025 24 Hours of Daytona. Record Complete WeatherTech SportsCar Championship results
Complete 24 Hours of Daytona results 24 Hours of Le Mans results References External links Profile on CrowdStrike's website 1965 births Living people People from Parsippany-Troy Hills, New Jersey Racing drivers from New Jersey Parsippany High School alumni American technology executives 24H Series drivers 24 Hours of Daytona drivers GT World Challenge America drivers Mercedes-AMG Motorsport drivers Toksport WRT drivers WeatherTech SportsCar Championship drivers American chief technology officers Le Mans Cup drivers 24 Hours of Le Mans drivers 24 Hours of Spa drivers Asian Le Mans Series drivers Algarve Pro Racing drivers Warburg Pincus people GT World Challenge Europe Endurance Cup drivers
George Kurtz
[ "Technology" ]
2,279
[ "Lists of people in STEM fields", "Proprietary technology salespersons" ]
55,799,611
https://en.wikipedia.org/wiki/List%20of%20nicknames%20used%20by%20Donald%20Trump
During his 2016 presidential campaign, his first presidency from 2017 to 2021, the period between his presidencies, and his 2024 presidential campaign, Donald Trump became widely known for using nicknames to criticize, insult, or otherwise express commentary about media figures, politicians, and foreign leaders. Domestic political figures Foreign leaders Media figures Groups of people Other people Organizations Television programs Other See also List of nicknames used by George W. Bush List of nicknames of presidents of the United States Notes References Nicknames Lists of 21st-century people Lists of nicknames Lists of pejorative terms for people Harassment and bullying Bullying in the United States Twitter-related lists Glossaries of politics Political pejoratives for people Wikipedia glossaries using tables
List of nicknames used by Donald Trump
[ "Biology" ]
145
[ "Harassment and bullying", "Behavior", "Aggression" ]
49,018,608
https://en.wikipedia.org/wiki/Map%20segmentation
In mathematics, the map segmentation problem is a kind of optimization problem. It involves a certain geographic region that has to be partitioned into smaller sub-regions in order to achieve a certain goal. Typical optimization objectives include: minimizing the workload of a fleet of vehicles assigned to the sub-regions; balancing the consumption of a resource, as in fair cake-cutting; determining the optimal locations of supply depots; and maximizing the surveillance coverage. Fair division of land has been an important issue since ancient times, e.g. in ancient Greece. Notation There is a geographic region denoted by C ("cake"). A partition of C, denoted by X, is a list of disjoint subregions whose union is C: $X = (X_1, \ldots, X_k)$, where $X_i \cap X_j = \emptyset$ for $i \neq j$ and $\bigcup_{i=1}^{k} X_i = C$. There is a certain set of additional parameters (such as: obstacles, fixed points or probability density functions), denoted by P. There is a real-valued function denoted by G ("goal") on the set of all partitions. The map segmentation problem is to find $X^{*} = \arg\min_{X} G(X; P)$, where the minimization is on the set of all partitions of C. Often, there are geometric shape constraints on the partitions, e.g., it may be required that each part be a convex set or a connected set or at least a measurable set. Examples 1. Red-blue partitioning: there is a set of blue points and a set of red points. Divide the plane into $k$ regions such that each region contains approximately a fraction $1/k$ of the blue points and a fraction $1/k$ of the red points. Here: The cake C is the entire plane $\mathbb{R}^2$; The parameters P are the two sets of points; The goal function G measures the deviation of each region's share of each color from an equal split, for example $G(X) = \sum_{i=1}^{k} \left( \left| \frac{b_i}{b} - \frac{1}{k} \right| + \left| \frac{r_i}{r} - \frac{1}{k} \right| \right)$, where $b_i$ and $r_i$ are the numbers of blue and red points in region $X_i$, and $b$ and $r$ are the total numbers of blue and red points. It equals 0 if each region has exactly a fraction $1/k$ of the points of each color. Related problems A Voronoi diagram is a specific type of map-segmentation problem. Fair cake-cutting, when the cake is two-dimensional, is another specific map-segmentation problem, as in the Hill–Beck land division problem. The Stone–Tukey theorem is related to a specific map-segmentation problem. References Fair division Mathematical optimization
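As a concrete illustration of the red-blue goal function above, here is a minimal Python sketch. The function name and data layout are hypothetical, and the formula follows the example deviation-sum given above rather than any single canonical definition:

```python
def redblue_goal(labels, colors, k):
    """Evaluate G(X) for a red-blue partition.

    labels[i]: index in 0..k-1 of the region containing point i.
    colors[i]: 'blue' or 'red'.
    Returns 0 exactly when every region holds a 1/k fraction of
    the blue points and a 1/k fraction of the red points.
    """
    blue_total = sum(1 for c in colors if c == 'blue')
    red_total = sum(1 for c in colors if c == 'red')
    g = 0.0
    for j in range(k):
        b_j = sum(1 for i, c in enumerate(colors) if c == 'blue' and labels[i] == j)
        r_j = sum(1 for i, c in enumerate(colors) if c == 'red' and labels[i] == j)
        g += abs(b_j / blue_total - 1 / k) + abs(r_j / red_total - 1 / k)
    return g

# A perfectly balanced 2-region partition of 2 blue and 2 red points:
print(redblue_goal([0, 1, 0, 1], ['blue', 'blue', 'red', 'red'], 2))  # 0.0
```

An actual map-segmentation algorithm would search over partitions (subject to any convexity or connectivity constraints) to minimize such a function; the sketch only evaluates a given candidate.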
Map segmentation
[ "Mathematics" ]
436
[ "Mathematical analysis", "Recreational mathematics", "Fair division", "Game theory", "Mathematical optimization" ]
49,021,215
https://en.wikipedia.org/wiki/L%20band%20%28NATO%29
The NATO L band is the obsolete designation given to the radio frequencies from 40 to 60 GHz (equivalent to wavelengths between 7.5 and 5 mm) during the Cold War period. Since 1992, frequency allocations, allotments and assignments have been in line with the NATO Joint Civil/Military Frequency Agreement (NJFA). However, in order to identify military radio spectrum requirements, e.g. for crisis-management planning, training, electronic warfare activities, or military operations, this system is still in use. References Radio spectrum
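The quoted wavelength range follows directly from the relation λ = c/f; a quick sanity check in Python (the function name is ours, not part of any standard library):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_ghz):
    """Free-space wavelength in millimetres for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e3

print(round(wavelength_mm(40), 2))  # 7.49 mm -> the "7.5 mm" end of the band
print(round(wavelength_mm(60), 2))  # 5.0 mm  -> the "5 mm" end of the band
```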
L band (NATO)
[ "Physics" ]
106
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
49,021,319
https://en.wikipedia.org/wiki/TALE-likes
Transcription Activator-Like Effector-Likes (TALE-likes) are a group of bacterial DNA-binding proteins named for the first and still best-studied group, the TALEs of Xanthomonas bacteria. TALEs are important factors in the plant diseases caused by Xanthomonas bacteria, but are known primarily for their role in biotechnology as programmable DNA-binding proteins, particularly in the context of TALE nucleases. TALE-likes have additionally been found in many strains of the Ralstonia solanacearum bacterial species complex, in Paraburkholderia rhizoxinica strain HKI 454, and in two unknown marine bacteria. Whether or not all these proteins form a single phylogenetic grouping is as yet unclear. The unifying feature of the TALE-likes is their tandem arrays of DNA-binding repeats. These repeats are, with few exceptions, 33-35 amino acids in length and are composed of two alpha-helices on either side of a flexible loop containing the DNA base-binding residues, with neighbouring repeats joined by flexible linker loops. Evidence for this common structure comes in part from solved crystal structures of TALEs and a Burkholderia TALE-like (BAT), but also from the conservation of the code that all TALE-likes use to recognise DNA sequences. In fact, TALE, RipTAL, and BAT repeats can be mixed and matched to generate functional DNA-binding proteins with varying affinity. TALEs TALEs are the first identified, best-studied and largest group within the TALE-likes. TALEs are found throughout the bacterial genus Xanthomonas, which comprises mostly plant pathogens. Those TALEs which have been studied have all been shown to be secreted via the Type III secretion system into host plant cells. Once inside the host cell they translocate to the nucleus, bind specific DNA sequences within host promoters and turn on downstream genes. Every part of this process is thought to be conserved across all TALEs. The single meaningful difference between individual TALEs, based on current understanding, is the specific DNA sequence that each TALE binds. TALEs from even closely related strains differ in the composition of repeats that make up their DNA-binding domain. Repeat composition determines DNA-binding preference. In particular, position 13 of each repeat confers that repeat's DNA base preference. During early research it was noted that almost all the differences between repeats of a single TALE repeat array are found in positions 12 and 13, and this finding led to the hypothesis that these residues determine base preference. In fact, repeat positions 12 and 13, referred to jointly as the Repeat Variable Diresidue (RVD), are commonly said to confer base specificity, despite clear evidence that position 13 is the base-determining residue. In addition to the repeat domain, TALEs also possess a number of conserved features in the domains flanking the repeats. These include domains for type-III secretion, nuclear localization and transcriptional activation. This allows TALEs to carry out their biological role as effector proteins secreted into host plant cells to activate expression of specific host genes. Diversity and evolution Whilst the RVD positions are commonly the only variable positions within a single TALE repeat array, there are more differences when comparing repeat arrays of different TALEs.
The diversity of TALEs across the Xanthomonas genus is considerable, but a particularly striking finding is that the evolutionary history one arrives at by comparing repeat compositions differs from that found when comparing non-repeat sequences. Repeat arrays of TALEs are thought to evolve rapidly, with a number of recombinatorial processes suggested to shape repeat array evolution. Recombination of TALE repeat arrays has been demonstrated in a forced-selection experiment. This evolutionary dynamism is thought to be made possible by the very high sequence identity of TALE repeats, which is a unique feature of TALEs as opposed to other TALE-likes. T-zero Another unique feature of TALEs is a set of four repeat structures at the N-terminal flank of the core repeat array. These structures, termed non-canonical or degenerate repeats, have been shown to be vital for DNA binding, though all but one do not contact DNA bases and thus make no contribution to sequence preference. The one exception is repeat -1, which encodes a fixed T-zero preference in all TALEs. This means that the target sequences of TALEs are always preceded by a thymine base. This is thought to be common to all TALEs, with the possible exception of TalC from Xanthomonas oryzae pv. oryzae strain AXO1947. RipTALs Discovery and molecular properties It was noted in the 2002 publication of the genome of reference strain Ralstonia solanacearum GMI1000 that its genome encodes a protein similar to Xanthomonas TALEs. Based on a similar domain structure and repeat sequences, it was presumed that this gene and homologs in other Ralstonia strains would encode proteins with the same molecular properties as TALEs, including sequence-specific DNA binding. In 2013 this was confirmed by two studies. These genes and the proteins they encode are referred to as RipTALs (Ralstonia injected protein TALE-like), in line with the standard nomenclature of Ralstonia effectors. Whilst the DNA-binding code of the core repeats is conserved with TALEs, RipTALs do not share the T-zero preference; instead, they have a strict G-zero requirement. In addition, repeats within a single RipTAL repeat array have multiple sequence differences beyond the RVD positions, unlike the near-identical repeats of TALEs. RipTALs have been found in all four phylotypes of R. solanacearum, suggesting that they are an ancestral feature of this clade. Despite differences in the flanking domains, the sequences their RVDs target are highly similar. Biological role Several lines of evidence support the idea that RipTALs function as effector proteins, promoting bacterial growth or disease by manipulating the expression of plant genes. They are secreted into plant cells by the Type III secretion system, which is the main delivery system for effector proteins. They localize to the cell nucleus and are able to function as sequence-specific transcription factors in plant cells. In addition, a strain lacking its RipTAL was shown to grow more slowly inside eggplant leaf tissue than the wild type. Furthermore, a study based on DNA polymorphisms in ripTAL repeat domain sequences and host plants found a statistically significant connection between host plant and repeat domain variants. This is expected if the RipTALs of different strains are adapted to target genes in specific host plants. Despite this, no target genes have yet been identified for any RipTAL.
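The one-repeat-to-one-base code described above lends itself to a compact illustration. The Python sketch below uses the commonly cited RVD-base associations (NI for A, HD for C, NG for T, NN for G, with NN also tolerating A); the function name is ours, and since the real code is degenerate and context-dependent, this should be read as a conceptual sketch rather than a prediction tool:

```python
# Commonly cited RVD -> preferred base associations; position 13
# (the second residue of the RVD) is the base-determining residue.
RVD_CODE = {
    "NI": "A",
    "HD": "C",
    "NG": "T",
    "NN": "G",  # NN also tolerates A
}

def predict_target(rvds, zero_base="T"):
    """Spell out the preferred DNA target of a repeat array.

    zero_base encodes the position-zero preference: 'T' for TALEs
    (fixed T-zero) or 'G' for RipTALs (strict G-zero requirement).
    Unknown RVDs are rendered as 'N'.
    """
    return zero_base + "".join(RVD_CODE.get(rvd, "N") for rvd in rvds)

print(predict_target(["HD", "NG", "NI", "NN"]))                 # TCTAG (TALE-style)
print(predict_target(["HD", "NG", "NI", "NN"], zero_base="G"))  # GCTAG (RipTAL-style)
```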
BATs Discovery The publication of the genome of bacterial strain Paraburkholderia rhizoxinica HKI 454 in 2011 led to the discovery of a set of TALE-like genes that differed considerably in nature from the TALEs and RipTALs. The proteins encoded by these genes were studied for their DNA-binding properties by two groups independently and named the BATs (Burkholderia TALE-likes) or BurrH. This research showed that the repeat units of the Burkholderia TALE-likes bind DNA with the same code as TALEs, governed by position 13 of each repeat. There are, however, a number of differences. Biological role Burkholderia TALE-likes are composed almost entirely of repeats, lacking the large non-repetitive domains found flanking the repeats in TALEs and RipTALs. Those domains are key to the functions of TALEs and RipTALs, allowing them to infiltrate the plant nucleus and turn on gene expression. It is therefore currently unclear what the biological roles of Burkholderia TALE-likes are. What is clear is that they are not effector proteins secreted into plant cells to act as transcription factors, the biological role of TALEs and RipTALs. It is not unexpected that they may differ in biological roles from TALEs and RipTALs, since the lifestyle of the bacterium they derive from is very unlike that of TALE- and RipTAL-bearing bacteria. B. rhizoxinica is an endosymbiont, living inside the fungus Rhizopus microsporus, a plant pathogen. The same fungus is also an opportunistic human pathogen in immuno-compromised patients, but whereas B. rhizoxinica is necessary for pathogenicity on plant hosts, it is irrelevant to human infection. It is unclear whether the Burkholderia TALE-likes are ever secreted into the fungus, let alone into host plants. Uses in biotechnology As noted in the publications on Burkholderia TALE-likes, there may be some advantages to using these proteins as a scaffold for programmable DNA-binding proteins to function as transcription factors or designer nucleases, compared to TALEs. BurrH has been fused with a FokI nuclease, analogous to a TALEN. Advantages include a shorter repeat size, a more compact domain structure (no large non-repeat domains), and greater repeat sequence diversity, which enables the use of PCR on the genes encoding them and makes them less vulnerable to recombinatorial repeat loss. In addition, Burkholderia TALE-likes have no T-zero requirement, relaxing the constraints on DNA target selection. However, few uses of Burkholderia TALE-likes as programmable DNA-binding proteins have been published outside of the original characterization publications. MOrTLs Discovery In 2007 the results of a metagenomic sweep of the world's oceans by the J. Craig Venter Institute were made publicly available. The 2014 paper on Burkholderia TALE-likes was also the first to report that two entries from that database resembled TALE-likes, based on sequence similarity. These were further characterized and assessed for their DNA-binding potential in 2015. The repeat units encoded by these sequences were found to mediate DNA binding with base preferences matching the TALE code, and were judged likely to form structures nearly identical to Bat1 repeats based on molecular dynamics simulations. The proteins encoded by these DNA sequences were therefore designated Marine Organism TALE-likes (MOrTLs) 1 and 2. Similar sequences have been found in other metagenome entries.
Evolutionary relationship to other TALE-likes Whilst the repeats of MOrTL1 and MOrTL2 both conform structurally and functionally to the TALE-like norm, they differ considerably at the sequence level, both from all other TALE-likes and from one another. It is not known whether they are truly homologous to the other TALE-likes, and thus constitute, together with the TALEs, RipTALs and BATs, a true protein family. Alternatively, they may have evolved independently. It is particularly difficult to judge the relationship to the other TALE-likes because almost nothing is known of the organisms that MOrTL1 and MOrTL2 come from. It is known only that they were found in two separate sea-water samples from the Gulf of Mexico and are likely to be of bacterial origin, based on size exclusion before DNA sequencing. Legal status A patent covering BATs and marine TALE-likes in protein engineering was filed in July 2012. It is currently pending in all jurisdictions. References Proteins Evolution
TALE-likes
[ "Chemistry" ]
2,251
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
49,021,419
https://en.wikipedia.org/wiki/PET%20response%20criteria%20in%20solid%20tumors
PET response criteria in solid tumors (PERCIST) is a set of rules that define when tumors in cancer patients improve ("respond"), stay the same ("stabilize"), or worsen ("progress") during treatment, using positron emission tomography (PET). The criteria were published in May 2009 in the Journal of Nuclear Medicine (JNM). A pooled analysis from 2016 concluded that its application may give rather different results from RECIST, and might be a more suitable tool for understanding tumor response to treatment. Details Complete metabolic response (CMR) Complete resolution of 18F-FDG uptake within the measurable target lesion, so that it is less than mean liver activity and at the level of surrounding background blood-pool activity. Disappearance of all other lesions to background blood-pool levels. No new suspicious 18F-FDG-avid lesions. If there is progression by RECIST, it must be verified with follow-up. Partial metabolic response (PMR) Reduction of at least 30% in the target measurable tumor's 18F-FDG SUL peak, with an absolute drop of at least 0.8 SUL units. No increase of more than 30% in SUL or size in any other lesion. No new lesions. Stable metabolic disease (SMD) Not CMR, PMR, or progressive metabolic disease (PMD). No new lesions. Progressive metabolic disease (PMD) An increase of more than 30% in 18F-FDG SUL peak, with an increase of more than 0.8 SUL units in the tumor SUL peak from the baseline scan, in a pattern typical of tumor and not of infection or treatment effect; or a visible increase in the extent of 18F-FDG tumor uptake; or new 18F-FDG-avid lesions which are typical of cancer and not related to treatment effect or infection. See also Response evaluation criteria in solid tumors References Cancer research Nuclear medicine PET radiotracers Positron emission tomography
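The SUL-peak thresholds above translate directly into a small decision rule. The Python sketch below encodes only the quantitative part (percentage and absolute SUL changes plus a new-lesion flag); the qualitative criteria, including the liver and blood-pool background comparisons needed for CMR and the exclusion of infection or treatment effects, are deliberately left out, so this is a simplified illustration, not a clinical tool:

```python
def percist_category(baseline_sul_peak, followup_sul_peak, new_lesions=False):
    """Classify response from target-lesion SUL peak values (simplified).

    Returns one of 'PMR', 'SMD', 'PMD'. CMR is omitted because it
    depends on qualitative background comparisons not modeled here.
    """
    if new_lesions:
        return "PMD"  # new FDG-avid lesions typical of cancer
    delta = followup_sul_peak - baseline_sul_peak
    pct = delta / baseline_sul_peak
    if pct <= -0.30 and delta <= -0.8:
        return "PMR"  # >=30% reduction and >=0.8 SUL-unit drop
    if pct >= 0.30 and delta >= 0.8:
        return "PMD"  # >30% increase and >0.8 SUL-unit rise
    return "SMD"

print(percist_category(6.0, 3.5))  # PMR: about -42% and -2.5 SUL units
print(percist_category(4.0, 5.6))  # PMD: about +40% and +1.6 SUL units
```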
PET response criteria in solid tumors
[ "Physics", "Chemistry" ]
392
[ "Antimatter", "Medicinal radiochemistry", "Positron emission tomography", "PET radiotracers", "Chemicals in medicine", "Matter" ]
49,021,621
https://en.wikipedia.org/wiki/Baogang%20Tailings%20Dam
Baogang Tailings Dam, also known as the Baotou Tailings Dam or Weikuang Dam, is a tailings dam in Inner Mongolia, China, on the outer ring of the city of Baotou, about 20 kilometres from the city centre. The dam is filled with tailings and waste slurry from nearby rare earth mineral refinery plants. Accounts of the tailings dam appeared in western media outlets after a visit in 2015 by British writers Tim Maughan, Liam Young and Kate Davies from Unknown Fields, a "nomadic design studio" from London. Footage posted on YouTube by Maughan appears to show him collecting samples from the floor of the dam. Maughan's account contrasts with the Chinese media's own reporting of the rare earth industry in the area. In 2016, Chinese authorities identified contamination of farmlands surrounding the dam. Construction of the dam began in 1955; it was completed in 1963 but was not used until 1965. It is owned by Baotou Steel. The circular dam is long and has a capacity. The dam height will be raised a total of 20 m (66 ft) in two stages to a crest elevation of , and the final capacity will be . Bayan Obo Mining District, about 120 kilometres from Baotou city, is the world's biggest supplier of rare earth minerals. They are used in the production of smartphones, tablets and other technology, such as wind turbines. Production creates millions of tons of waste per year, which has drawn much criticism of the dam. Chemicals in the dam have been linked to lower crop yields in surrounding farmlands and serious health problems among local villagers. References Environmental disasters in China Inner Mongolia Dams in China Tailings dams Dams completed in 1965
Baogang Tailings Dam
[ "Technology", "Engineering" ]
347
[ "Tailings dams", "Mining engineering", "Hazardous waste", "Mining equipment" ]
49,021,897
https://en.wikipedia.org/wiki/Pickup%20ion
In solar physics, heliospheric pickup ions are created when neutral particles inside the heliosphere are ionized by solar ultraviolet radiation, charge exchange with solar wind protons, or electron impact ionization. Pickup ions are generally characterized by their single charge state, a typical velocity that ranges between 0 km/s and twice the solar wind velocity (~800 km/s), a composition that reflects their neutral seed population, and their spatial distribution in the heliosphere. The neutral seed population of these ions can be of interstellar, lunar, cometary, or inner-source origin. Just after ionization, the singly charged ions are picked up by the magnetized solar wind plasma and develop strongly anisotropic, toroidal velocity distribution functions, which gradually transform into a more isotropic state. After their creation, pickup ions move with the solar wind radially outwards from the Sun. Interstellar pickup ions originate from the neutral component of the Local Interstellar Medium (LISM), which enters the heliosphere with a velocity of 25 km/s as a result of its relative motion with respect to the Sun. This neutral wind is gradually ionized and acts as the seed population for interstellar pickup ions. Inner-source pickup ions are produced by an inner source of neutral particles. The detailed production mechanisms for these ions are currently under debate. History Interstellar pickup ions Because the Sun is moving relative to the local interstellar medium with a velocity of ~25 km/s, interstellar atoms can enter the heliosphere without being deflected by the interplanetary magnetic field. The existence of a population of neutral interstellar particles inside the heliosphere was first predicted in 1970. Their journey from the outer edge of our heliosphere, the so-called heliopause, up to the orbit of Earth takes over 30 years to complete. During that time the interstellar atoms are gradually depleted by ionization processes, and their density at 1 AU is significantly lower compared to the interstellar medium. Because atoms have different sensitivities to the various ionization processes, the composition of interstellar atoms at 1 AU is very different from the composition at the edge of our heliosphere or in the local interstellar medium. Helium atoms have a very high first ionization potential compared to other interstellar species and are therefore less sensitive to ionization losses by solar UV ionization. This is also the reason why He+ is the most abundant interstellar pickup ion at 1 AU (followed by H+, O+, Ne+, and N+) and was the first pickup ion to be detected, using the SULEICA instrument on the AMPTE spacecraft in 1984. Subsequent detections of H+, O+, Ne+, and N+ were made several years later with the SWICS instrument on board the Ulysses spacecraft. Observations of interstellar pickup ions close to Earth make it possible to investigate the gas dynamics of the local interstellar medium, which otherwise can only be inferred remotely via optical observations or by a direct measurement of the interstellar neutral gas. The relative velocity of the local interstellar medium with respect to the Sun, as well as its temperature and density, can be inferred from the spatial pattern of the observed pickup ion fluxes.
In particular, the pickup ion focusing cone, an enhancement of interstellar pickup ions (He+ and Ne+) that is co-aligned with the velocity vector of the interstellar neutral atoms, forms due to the Sun's gravitational attraction and can be used to infer the inflow direction of the local interstellar medium. Opposite the focusing cone, on the so-called upwind side of the Sun, an enhanced pickup ion flux in the form of a crescent is produced for atoms with low first ionization potentials (H+, O+, N+). See also Pluto Energetic Particle Spectrometer Science Investigation (a spacecraft instrument that measured pickup ions at Pluto in 2015) References Solar phenomena Space plasmas
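The velocity range quoted above (0 km/s up to twice the solar wind speed) follows from the pickup geometry: a freshly ionized neutral is nearly at rest in the inertial frame, so in the solar-wind frame it moves at roughly the solar wind speed and gyrates about the magnetic field; mapped back to the inertial frame, its speed sweeps between 0 and twice the solar wind speed. A small Python illustration follows; the function is ours and, for simplicity, ignores the initial ~25 km/s neutral speed and assumes a magnetic field perpendicular to the flow:

```python
import math

def pickup_speed(v_sw_kms, gyrophase_rad):
    """Inertial-frame speed of a fresh pickup ion (simplified).

    The ion moves on a ring of radius v_sw centred on the solar-wind
    velocity; its speed is |v_sw + v_sw * e(phi)| =
    v_sw * sqrt(2 + 2*cos(phi)), ranging from 0 to 2 * v_sw.
    """
    return v_sw_kms * math.sqrt(2.0 + 2.0 * math.cos(gyrophase_rad))

v_sw = 400.0  # km/s, a typical slow solar wind speed
print(pickup_speed(v_sw, math.pi))  # ~0 km/s  (sunward on the ring)
print(pickup_speed(v_sw, 0.0))      # 800 km/s (twice the solar wind speed)
```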
Pickup ion
[ "Physics" ]
817
[ "Space plasmas", "Physical phenomena", "Astrophysics", "Solar phenomena", "Stellar phenomena" ]
49,021,918
https://en.wikipedia.org/wiki/Nadia%20Murad
Nadia Murad Basee Taha (born 10 March 1993) is an Iraqi-born Yazidi human rights activist based in Germany. In 2014, during the Yazidi genocide by the Islamic State, she was abducted from her hometown of Kocho in Iraq. Much of her community was massacred. After losing most of her family, Murad was held as an Islamic State sex slave for three months, alongside thousands of other Yazidi women and girls. Murad is the founder of Nadia's Initiative, a non-profit organization dedicated to "helping women and children victimized by genocide, mass atrocities, and human trafficking to heal and rebuild their lives and communities". Its establishment was prompted by the Sinjar massacre. In 2018, she and Congolese gynecologist Denis Mukwege were jointly awarded the Nobel Peace Prize for "their efforts to end the use of sexual violence as a weapon of war and armed conflict." She is the first Iraqi and the first Yazidi to have been awarded a Nobel Peace Prize. In 2016, Murad was appointed as the first-ever Goodwill Ambassador for the Dignity of Survivors of Human Trafficking for the United Nations Office on Drugs and Crime. Early and personal life Murad was born in the village of Kocho in the Sinjar District, Iraq, populated mostly by Yazidi people. Her family, of the Yazidi minority, were farmers. Murad is the youngest of 11 children, not including her four older half-siblings. Murad's father married her mother after the death of his first wife, with whom he had four children. Both of her parents were devout Yazidis, though Murad did not know much about the religion growing up. Murad's father died in 2003. As a child, Murad dreamed of owning a beauty salon. She was attached to her home and never imagined leaving Kocho to live elsewhere. On 19 August 2018, Murad married fellow Yazidi human rights activist Abid Shamdeen in Germany. Yazidi genocide Abduction by the Islamic State At the age of 19, Murad was a student living in the village of Kocho in Sinjar, northern Iraq, when Islamic State fighters rounded up the Yazidi community in the village, killing 600 people – including her mother and six of her brothers and stepbrothers – and taking the younger women and girls into slavery. That year, Murad was one of more than 6,700 Yazidi women and girls taken prisoner by the Islamic State in Iraq. She was captured on 15 August 2014. She was held as a slave in the city of Mosul, where she was beaten, burned with cigarettes, and raped repeatedly. She successfully escaped after her captor left the house unlocked. Murad was taken in by a neighboring family, who were able to smuggle her out of the Islamic State-controlled area, allowing her to make her way to a refugee camp in Duhok, Kurdistan Region. She was out of ISIS territory by early September or November 2014. In February 2015, she gave her first testimony – under the alias "Basima" – to reporters of the Belgian daily newspaper La Libre Belgique while she was staying in the Rwanga camp, living in a converted shipping container. In 2015, she was one of 1,000 women and children to benefit from a refugee programme of the Government of Baden-Württemberg, Germany, which became her new home. Aftermath On 16 December 2015, Murad spoke to the United Nations Security Council about human trafficking and conflict. This was the first time the Council was ever briefed on human trafficking. In 2016, Murad was named the first UNODC Goodwill Ambassador for the Dignity of Survivors of Human Trafficking.
As part of her role as an ambassador, Murad participates in global and local advocacy initiatives to bring awareness of human trafficking and refugees. Murad has reached out to refugee and survivor communities, listening to testimonies of victims of trafficking and genocide. In September 2016, Attorney Amal Clooney spoke before the United Nations Office on Drugs and Crime (UNODC) to discuss the decision that she had made in June 2016 to represent Murad as a client in legal action against ISIL commanders. Clooney characterized the genocide, rape, and trafficking by ISIL as a "bureaucracy of evil on an industrial scale", describing it as a slave market existing online, on Facebook and in the Mideast that is still active today. Murad has received serious threats to her safety as a result of her work. Activism In September 2016, Murad announced Nadia's Initiative at an event hosted by Tina Brown in New York City. The Initiative intends to provide advocacy and assistance to victims of genocide. On 3 May 2017, Murad met Pope Francis and Archbishop Paul Gallagher in Vatican City. During the meeting, she "asked for help for Yazidis who are still in ISIS captivity, acknowledged the Vatican support for minorities, discussed the scope for an autonomous region for minorities in Iraq, highlighted the current situation and challenges facing religious minorities in Iraq and Syria particularly the victims and internally displaced people as well as immigrants". In 2018, Murad's activism focused on security and accountability. Through Nadia's Initiative, Murad worked with the Mines Advisory Group (MAG) to demine more than 2.6 million square meters of land in Sinjar, Iraq. She was also instrumental in drafting and passing UN Security Council Resolution 2379. The resolution called for the creation of an Investigative Team, headed by a Special Advisor, to support domestic efforts to hold ISIL (Da'esh) accountable by collecting, preserving, and storing evidence in Iraq of acts that may amount to war crimes, crimes against humanity, and genocide committed by the terrorist group ISIL (Da'esh). Murad's activism focused on accountability and gender equality in 2019, as she aided in the prosecution of an ISIL militant's wife in Germany and the collection of evidence of ISIL crimes. Murad worked with the German Mission to the UN to help draft and pass UN Security Council Resolution 2467 in April 2019. The resolution expands the UN's commitments to end sexual violence in conflict and emphasizes a survivor-centric approach to justice and accountability. As a member of France's Gender Advisory Council, Murad also advocated for G7 member states to adopt legislation that protects and promotes women's rights. Murad urged the government of the Iraqi Kurdistan region to play its role in rebuilding Yazidi areas in Sinjar District and returning the refugees home. Nechirvan Barzani announced his full support for "the humanitarian role she plays in service of peace and the Yazidi victims", according to a statement. In 2019, Murad addressed the second annual Ministerial to Advance Religious Freedom, where she spoke about her story and the ongoing challenges faced by Yazidis nearly five years after the 3 August 2014 attacks. She laid out a "five-point plan of action" to address the challenges Yazidis face in Iraq. Murad was included among a delegation of survivors of religious persecution from around the world whose stories were highlighted at the summit. As part of the delegation, on 17 July 2019, Murad met with U.S.
President Donald Trump in the Oval Office, with whom she shared her personal story of having lost her family members, including her mother and six brothers, and pleaded with him to do something. In 2020, Murad began working with the Institute for International Criminal Investigations (IICI) and the Preventing Sexual Violence in Conflict Initiative (PSVI) of the United Kingdom government to establish the Murad Code. The Code is a global consultative initiative aimed at building and supporting a community of better practice for, with, and concerning survivors of conflict-related sexual violence. Its key objective is to respect and support survivors' rights, ensuring that work with survivors to investigate, document, and record their experiences is safer, more ethical, and more effective in upholding their human rights. On 6 February 2021, the Yazidi community buried 104 victims of the Kocho massacre, including two of Murad's brothers and her mother. The ceremony was marked by both grief and closure, as many survivors were finally able to lay their family members to rest over six years after the genocide. It was also a visceral reminder of the urgent need to exhume all mass graves throughout Sinjar. In March 2021, the Iraqi Parliament passed the long-awaited Yazidi Female Survivors Law. The law formally acknowledges the Yazidi genocide and the gender-based trauma of sexual violence against Yazidi women and other ethnic minorities. It lays the groundwork for paying reparations, and guarantees land and job opportunities for survivors of ISIL captivity. Murad worked with Iraqi authorities and the Coalition for Just Reparations to draft and advocate for the law, as well as its ongoing implementation. In May 2021, the United Nations Investigative Team to Promote Accountability for Crimes Committed by Da'esh/ISIL (UNITAD) presented landmark findings to the UN Security Council. UNITAD's Special Advisor, Karim Khan, reported to the Security Council that "there is clear and convincing evidence that the crimes against the Yazidi people clearly constituted genocide." Murad joined the proceedings to call on member states to establish international trials and support national efforts to prosecute ISIL members for their crimes of genocide and sexual violence. In November 2021, a scheduled book club event in Canada featuring Murad as a speaker was boycotted by Toronto District School Board superintendent Helen Fisher, who declared that students from her schools would not participate, over fears of offending Muslim students and fostering Islamophobia. The move drew wide criticism, and the board was forced to clarify that these views were not its official position. In 2022, Murad, along with Nadia's Initiative, the Institute for International Criminal Investigation, and the UK government, released the Murad Code. She spoke about its benefits at the United Nations Security Council open debate on "Accountability as Prevention: Ending Cycles of Sexual Violence in Conflict Open Debate on Conflict-Related Sexual Violence." Global Survivors Fund With her fellow 2018 Nobel Peace Prize Laureate, Dr. Denis Mukwege, Murad founded the Global Survivors Fund in October 2019. The Fund works to ensure that survivors of conflict-related sexual violence globally have access to reparations and other forms of redress. The Global Survivors Fund (GSF) builds on the advocacy efforts of the Office of the United Nations' Special Representative of the Secretary-General on Sexual Violence in Conflict (SRSG-SVC).
The UN Secretary-General endorsed GSF in a statement in April 2019, and Security Council Resolution 2467 referenced GSF. The G7 also confirmed its support for GSF in its Declaration on Gender Equality and Women's Empowerment in August 2019. Published works Murad's memoir, The Last Girl: My Story of Captivity, and My Fight Against the Islamic State, an autobiographical account in which she describes being captured and enslaved by the Islamic State, was published by Crown Publishing Group on 7 November 2017. The book has been released in 44 languages, including French, German, Arabic, Italian, and Spanish. Awards and honours Nobel Peace Prize (2018) In 2018, Murad was co-winner (with Denis Mukwege, a Congolese gynaecologist) of the Nobel Peace Prize, awarded for their efforts to end the use of sexual violence as a weapon of war. The press release from the prize committee cited her refusal to remain 'silent and ashamed', and spoke of her courage in highlighting her own ordeal and that of other victims. BBC 100 list In December 2024, Nadia Murad was included on the BBC's 100 Women list. Others 2016: First Goodwill Ambassador for the Dignity of Survivors of Human Trafficking of the United Nations 2016: Council of Europe Václav Havel Award for Human Rights 2016: Glamour Award for The Women Who Stood Up to ISIS 2016: Sakharov Prize for Freedom of Thought (with Lamiya Haji Bashar) 2016: Clinton Global Citizen Award 2016: United Nations Association of Spain Peace Prize 2016: TIME 100 Most Influential People 2016: Oxi Courage Award 2017: Forbes 30 Under 30 2018: Nobel Peace Prize (with Denis Mukwege) 2018: Hillary Clinton Award for Advancing Women in Peace and Security 2018: Global Goals Changemaker Award 2018: Elisabeth B. Weintz Humanitarian Award 2019: Bambi Award 2019: Golden Plate Award of the American Academy of Achievement 2019: International DVF Award 2019: Seton Hall University Honorary Doctorate 2019: Marisa Bellisario International Prize 2020: Vital Voices Global Trailblazer Award 2020: Justice O'Connor Prize 2020: Frank and Cheri Hermance Atlas Award 2021: UC Merced Spendlove Prize 2022: Chapman University Presidential Fellow Bibliography Nadia Murad: The Last Girl: My Story of Captivity, and My Fight Against the Islamic State (Virago eBook, 7 November 2017) (English) Nadia Murad: Ich bin eure Stimme: Das Mädchen, das dem Islamischen Staat entkam und gegen Gewalt und Versklavung kämpft (Knaur eBook, 31 October 2017) (German) Filmography On Her Shoulders (2018) See also Yazidi genocide List of kidnappings Lists of solved missing person cases References External links Nadia's Initiative Yazda.org 1993 births 2010s missing person cases Formerly missing people German Yazidis Iraqi emigrants to Germany Iraqi human rights activists Iraqi Nobel laureates Iraqi refugees Iraqi victims of crime Iraqi women's rights activists Iraqi Yazidis Living people Kidnapped people Missing person cases in Iraq Missing person cases in Syria Nobel Peace Prize laureates People from Nineveh Governorate Rape in Iraq Refugees in Germany Sakharov Prize laureates Sexual abuse victim advocates Women human rights activists Women Nobel laureates Slave concubines Yazidi women Violence against women in Iraq 21st-century German women writers 21st-century German writers 21st-century Iraqi women writers 21st-century Iraqi writers History of slavery in the Muslim world 21st century in slavery Václav Havel Human Rights Prize laureates
Nadia Murad
[ "Technology" ]
2,893
[ "Women Nobel laureates", "Women in science and technology" ]
49,022,545
https://en.wikipedia.org/wiki/IBM%20Journal%20of%20Research%20and%20Development
IBM Journal of Research and Development was a peer-reviewed bimonthly scientific journal covering research on information systems. The journal ceased publication in 2020. According to the Journal Citation Reports, the journal had a 2019 impact factor of 1.27. IBM also published the IBM Systems Journal starting in 1962; it ceased publication in 2008 and was absorbed in part by the IBM Journal of Research and Development. References External links English-language journals IBM Information systems journals
IBM Journal of Research and Development
[ "Technology" ]
97
[ "Information systems journals", "Information systems" ]
49,022,601
https://en.wikipedia.org/wiki/Physical%20Review%20Accelerators%20and%20Beams
Physical Review Accelerators and Beams is a monthly peer-reviewed open-access scientific journal, published by the American Physical Society. The journal focuses on accelerator physics and engineering. Its lead editor is Frank Zimmermann (CERN). The journal was established in 1998 as Physical Review Special Topics – Accelerators and Beams, obtaining its current title in 2016. The journal does not require article processing charges, being sponsored by academic and industrial institutions. Abstracting and indexing The journal is abstracted and indexed in: Current Contents/Physical, Chemical & Earth Sciences Inspec Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.879. References External links American Physical Society academic journals Academic journals established in 1998 English-language journals Monthly journals Particle physics journals Creative Commons Attribution-licensed journals
Physical Review Accelerators and Beams
[ "Physics" ]
171
[ "Particle physics stubs", "Particle physics", "Particle physics journals" ]
49,022,982
https://en.wikipedia.org/wiki/Incomplete%20contracts
In economic theory, the field of contract theory can be subdivided into the theory of complete contracts and the theory of incomplete contracts. In contract law, an incomplete contract is one that is defective or uncertain in a material respect. A complete contract in economic theory means a contract which provides for the rights, obligations and remedies of the parties in every possible state of the world. However, because the human mind is a scarce resource that cannot collect, process, and understand an infinite amount of information, economic actors are boundedly rational (limited in their ability to understand and solve complex problems) and cannot anticipate all possible contingencies. Alternatively, because it is too expensive to write a complete contract, the parties may opt for a "sufficiently complete" contract. In short, every contract is incomplete for a variety of reasons, and the protection an incomplete contract provides may therefore be inadequate. Even if a contract is incomplete, its legal validity cannot be denied, and an incomplete contract is not necessarily unenforceable. The terms and provisions of the contract still have influence and are binding on the parties to the contract. With respect to contractual incompleteness, the law is concerned with when and how a court should fill gaps in a contract, when the gaps are too many or too uncertain for the contract to be enforceable, and when the parties are obliged to negotiate further in order to complete an incomplete contract or to achieve the desired final contract. The incomplete contracting paradigm was pioneered by Sanford J. Grossman, Oliver D. Hart, and John H. Moore. In their seminal contributions, Grossman and Hart (1986), Hart and Moore (1990), and Hart (1995) argue that in practice, contracts cannot specify what is to be done in every possible contingency. At the time of contracting, future contingencies may not even be describable. Moreover, parties cannot commit themselves never to engage in mutually beneficial renegotiation later on in their relationship. Thus, an immediate consequence of the incomplete contracting approach is the so-called hold-up problem. Since at least in some states of the world the parties will renegotiate their contractual arrangements later on, they have insufficient incentives to make relationship-specific investments (since a party's investment returns will partially go to the other party in the renegotiations). Oliver Hart and his co-authors argue that the hold-up problem may be mitigated by choosing a suitable ownership structure ex ante (according to the incomplete contracting paradigm, more complex contractual arrangements are ruled out). Hence, the property rights approach to the theory of the firm can explain the pros and cons of vertical integration, thus providing a formal answer to important questions regarding the boundaries of the firm that were first raised by Ronald Coase (1937). The incomplete contracting approach has been the subject of a still ongoing discussion in contract theory. In particular, some authors such as Maskin and Tirole (1999) argue that rational parties should be able to solve the hold-up problem with complex contracts, while Hart and Moore (1999) point out that these contractual solutions do not work if renegotiation cannot be ruled out. Some authors have argued that the pros and cons of vertical integration can sometimes also be explained in complete contracting models. 
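To make the hold-up logic concrete, here is a minimal numerical sketch in Python. The functional forms (investment cost i, gross surplus 2*sqrt(i), and an equal surplus split in renegotiation) are illustrative assumptions chosen for this example, not part of the Grossman-Hart model itself.

```python
# Illustrative hold-up example with assumed functional forms:
# the investor sinks i at cost i, gross surplus is 2*sqrt(i),
# and ex-post renegotiation gives the investor a fixed share.
from math import sqrt

GRID = [k * 1e-4 for k in range(1, 30000)]  # candidate investment levels

def first_best_investment():
    # Maximize total surplus 2*sqrt(i) - i; the optimum is i* = 1.
    return max(GRID, key=lambda i: 2 * sqrt(i) - i)

def holdup_investment(share=0.5):
    # The investor keeps only `share` of the surplus but bears the full
    # cost, so it maximizes share*2*sqrt(i) - i, giving i* = share**2.
    return max(GRID, key=lambda i: share * 2 * sqrt(i) - i)

print(round(first_best_investment(), 2))  # 1.0  (efficient level)
print(round(holdup_investment(), 2))      # 0.25 (underinvestment under hold-up)
```

Ownership matters here because it raises the investing party's share of the ex-post surplus: as that share approaches one, the chosen investment approaches the first-best level.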
The property rights approach based on incomplete contracting has been criticized by Williamson (2000) because it is focused on ex-ante investment incentives, while it neglects ex-post inefficiencies. It has been pointed out by Schmitz (2006) that the property rights approach can be extended to the case of asymmetric information, which may explain ex-post inefficiencies. The property rights approach has also been extended by Chiu (1998) and DeMeza and Lockwood (1998), who allow for different ways to model the renegotiations. In a more recent extension, Hart and Moore (2008) have argued that contracts may serve as reference points. The theory of incomplete contracts has been successfully applied in various contexts, including privatization, international trade, management of research & development, allocation of formal and real authority, advocacy, and many others. The 2016 Nobel Prize in Economics was awarded to Oliver D. Hart and Bengt Holmström for their contribution to contract theory, including incomplete contracts. In economic theory In 1986, Grossman and Hart used incomplete contract theory in their seminal paper on the costs and benefits of vertical integration to answer the question "What is a firm and what determines its boundaries?". The Grossman-Hart theory of property rights is the first to explain in a straightforward manner why markets are so important in the context of organizational choice. The advantage of non-integration is that the owners (entrepreneurs) can each exercise their own control rights, an advantage that stems from the power of restraint conferred by ownership. The fact that economic actors are only boundedly rational and cannot foresee all possible contingencies is perhaps at the heart of the problem. Because an uncertain state of nature or behavior cannot be written into an enforceable contract, not all uses of an asset can be specified in advance, and any contract negotiated in advance must leave some discretion as to the use of the asset; the "owner" of the company is the party to whom this residual control is allocated at the contracting stage. Grossman and Hart claim that the essence of the firm lies in the decision-making power conferred by the ownership of its assets. In a world of incomplete contracts, decision-making power plays a key role in determining the incentives of owners. Grossman and Hart believe that the optimal allocation or governance structure of property rights is the allocation that minimizes efficiency losses. Therefore, where Party A's investment is more important than Party B's, it is preferable to allocate title to the asset to Party A, even if this discourages Party B's investment. The incomplete-contracting/property-rights approach gives rise to theories of ownership and vertical integration, and it also directly addresses the question of what constitutes a firm. Grossman and Hart consider the firm to be a collection of assets over which the owners have residual control. In 1990, Oliver Hart and John Moore published another article, "Property Rights and the Nature of the Firm", which provided a framework for addressing when transactions should take place within the firm and when they should take place through the market. The essence of the 1986 Grossman-Hart model is the optimal allocation of the constraining forces conferred by ownership, and its model of property rights concerns the allocation of assets between individuals (entrepreneurs) rather than firms. 
The Hart-Moore model of 1990 extends this analysis of the optimal allocation of the constraining forces conferred by ownership; property rights theory clarifies the assumptions about the allocation of assets between firms and identifies a firm with the assets that its owners control. One of Hart-Moore's key findings suggests an explanation for why firms, rather than workers, tend to own most of the non-human assets used to produce goods and services: complementary assets should be owned by one person. New ideas Incomplete contracts can create scenarios that lead to inefficient investments and market failures, but incompleteness is essentially a feasibility constraint. The "strategic ambiguity hypothesis" holds that the optimal formal contract may be deliberately incomplete. Companies can use strategic ambiguity to circumvent legal constraints that would invalidate these agreements, leaving the law insufficient to prevent their formation and performance. Limitations Contracts are subject to various restrictions on their terms, and incomplete contracts are no exception. Contractual terms are the specific details of an agreement, including the rights and obligations of the parties. Contractual terms are broadly divided into two types, express terms and implied terms. Express terms are those included in the signed contract, or in a notice that is reasonably brought to the other party's attention. Implied terms include those implied by the courts and those implied by relevant legal provisions. Terms implied by the Court Courts are often willing to imply a term into a settled contract to "fill in the gaps" as long as it is: reasonable and fair; necessary to make the contract workable; so obvious as to be "self-explanatory"; and capable of being expressed clearly and consistently with the express terms. Example: A court will imply into the contract terms which the parties are deemed to have known by virtue of their previous dealings. Statutory implied terms Example: The implied terms of the Australian Consumer Law (ACL) in consumer contracts are intended to protect the buyer, and there is an implied term in every contract for the sale of goods. The seller's ownership of the goods implies the right to sell them to the buyer, provided that: the goods will be as described; the goods will be of merchantable quality; the goods are fit for their purpose; and the bulk of the goods will correspond with the sample. Unenforceable terms If one of the parties to the contract is a minor or a person lacking mental capacity, that party will not have the legal capacity to contract. Contracts are enforceable only if both parties have the legal capacity to sign them. Some contracts are classified by common law as illegal and unenforceable: criminal or tortious contracts; contracts to promote corruption in public office; contracts intended to avoid paying taxes; and contracts to prevent or delay the administration of justice. The effect of a breach of a statutory provision on the validity and enforceability of a contract depends on the wording of the regulation itself. An agreement may be illegal simply because it violates a statutory prohibition. See also Precommitment References Economic theories
Incomplete contracts
[ "Mathematics" ]
1,965
[ "Game theory" ]
49,023,145
https://en.wikipedia.org/wiki/Journal%20of%20Experimental%20Marine%20Biology%20and%20Ecology
The Journal of Experimental Marine Biology and Ecology is a peer-reviewed bimonthly journal which publishes work on the biochemistry, physiology, behaviour, and genetics of marine plants and animals in relation to their ecology. According to the Journal Citation Reports, the journal has a 2015 impact factor of 1.796. References English-language journals Academic journals established in 1967 Biology journals Ecology journals Bimonthly journals Elsevier academic journals Marine biology
Journal of Experimental Marine Biology and Ecology
[ "Biology", "Environmental_science" ]
87
[ "Environmental science journals", "Ecology journals", "Environmental science journal stubs", "Marine biology" ]
49,023,205
https://en.wikipedia.org/wiki/Systematic%20and%20Applied%20Microbiology
Systematic and Applied Microbiology is a peer-reviewed bimonthly journal that deals with various aspects of microbial diversity and the systematics of prokaryotes. It focuses on Bacteria and Archaea; eukaryotic microorganisms are considered only in rare cases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 3.4. References English-language journals Applied microbiology journals Prokaryote taxonomy
Systematic and Applied Microbiology
[ "Biology" ]
91
[ "Prokaryotes", "Taxonomy (biology)", "Prokaryote taxonomy" ]
49,023,307
https://en.wikipedia.org/wiki/Applied%20Microbiology%20and%20Biotechnology
Applied Microbiology and Biotechnology is a peer-reviewed biweekly journal that publishes papers and mini-reviews on new and emerging products, processes and technologies in the areas of prokaryotic and eukaryotic cells; relevant enzymes and proteins; applied genetics and molecular biotechnology; genomics and proteomics; applied microbial and cell physiology; environmental biotechnology; processes and products; and more. Abstracting and indexing According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.9. References English-language journals Applied microbiology journals Biotechnology journals
Applied Microbiology and Biotechnology
[ "Biology" ]
126
[ "Biotechnology literature", "Biotechnology journals" ]
49,023,532
https://en.wikipedia.org/wiki/Random-sampling%20mechanism
A random-sampling mechanism (RSM) is a truthful mechanism that uses sampling in order to achieve approximately-optimal gain in prior-free mechanisms and prior-independent mechanisms. Suppose we want to sell some items in an auction and achieve maximum profit. The crucial difficulty is that we do not know how much each buyer is willing to pay for an item. If we know, at least, that the valuations of the buyers are random variables with some known probability distribution, then we can use a Bayesian-optimal mechanism. But often we do not know the distribution. In this case, random-sampling mechanisms provide an alternative solution. RSM in large markets Market-halving scheme When the market is large, the following general scheme can be used: The buyers are asked to reveal their valuations. The buyers are split into two sub-markets, ("left") and ("right"), using simple random sampling: each buyer goes to one of the sides by tossing a fair coin. In each sub-market , an empirical distribution function is calculated. The Bayesian-optimal mechanism (Myerson's mechanism) is applied in sub-market with distribution , and in with . This scheme is called "Random-Sampling Empirical Myerson" (RSEM). The declaration of each buyer has no effect on the price he has to pay; the price is determined by the buyers in the other sub-market. Hence, it is a dominant strategy for the buyers to reveal their true valuation. In other words, this is a truthful mechanism. Intuitively, by the law of large numbers, if the market is sufficiently large then the empirical distributions are sufficiently similar to the real distributions, so we expect the RSEM to attain near-optimal profit. However, this is not necessarily true in all cases. It has been proved to be true in some special cases. The simplest case is digital goods auction. There, step 4 is simple and consists only of calculating the optimal price in each sub-market. The optimal price in is applied to and vice versa. Hence, the mechanism is called "Random-Sampling Optimal Price" (RSOP). This case is simple because it always calculates feasible allocations. I.e., it is always possible to apply the price calculated in one side to the other side. This is not necessarily the case with physical goods. Even in a digital goods auction, RSOP does not necessarily converge to the optimal profit. It converges only under the bounded valuations assumption: for each buyer, the valuation of the item is between 1 and , where is some constant. The convergence rate of RSOP to optimality depends on . The convergence rate also depends on the number of possible "offers" considered by the mechanism. To understand what an "offer" is, consider a digital goods auction in which the valuations of the buyers, in dollars, are known to be bounded in . If the mechanism uses only whole dollar prices, then there are only possible offers. In general, the optimization problem may involve much more than just a single price. For example, we may want to sell several different digital goods, each of which may have a different price. So instead of a "price", we talk of an "offer". We assume that there is a global set of possible offers. For every offer and agent , is the amount that agent pays when presented with the offer . In the digital-goods example, is the set of possible prices. For every possible price , there is a function such that is either 0 (if ) or (if ). 
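As a concrete illustration of the digital-goods case, the following Python sketch implements the Random-Sampling Optimal-Price auction described above. This is a minimal sketch: the function names and tie-breaking choices are assumptions made for the example, and an empty sub-market is handled by offering a price of zero.

```python
import random

def optimal_single_price(bids):
    """Price maximizing p * |{bids >= p}|; the optimum is always a bid value."""
    best_price, best_profit = 0.0, 0.0
    for p in sorted(set(bids)):
        profit = p * sum(1 for b in bids if b >= p)
        if profit > best_profit:
            best_price, best_profit = p, profit
    return best_price

def rsop(bids, rng=random.Random(0)):
    """Random-Sampling Optimal Price auction for digital goods.

    Each bidder is placed in one of two sub-markets by a fair coin toss;
    the optimal fixed price of each sub-market is then offered to the
    other sub-market.
    """
    left, right = [], []
    for bidder, bid in enumerate(bids):
        (left if rng.random() < 0.5 else right).append((bidder, bid))
    # An empty sub-market yields a price of 0 for the other side
    # (a sketch simplification; real analyses treat this case explicitly).
    price_for_left = optimal_single_price([b for _, b in right])
    price_for_right = optimal_single_price([b for _, b in left])
    allocation, profit = [], 0.0
    for side, price in ((left, price_for_left), (right, price_for_right)):
        for bidder, bid in side:
            if bid >= price:  # bidder accepts the offered price
                allocation.append((bidder, price))
                profit += price
    return allocation, profit

print(rsop([1.0, 3.0, 3.0, 7.0, 9.0, 9.0, 11.0]))
```

Because each bidder's price is computed from the opposite sub-market only, a bidder's report has no effect on the price that bidder faces, which is the truthfulness argument given above.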
For every set of agents, the profit of the mechanism from presenting the offer to the agents in is: and the optimal profit of the mechanism is: The RSM calculates, for each sub-market , an optimal offer , calculated as follows: The offer is applied to the buyers in , i.e.: each buyer who said that receives the offered allocation and pays ; each buyer in who said that does not receive the allocation and does not pay anything. The offer is applied to the buyers in in a similar way. Profit-oracle scheme The profit oracle is another RSM scheme that can be used in large markets. It is useful when we do not have direct access to the agents' valuations (e.g. due to privacy reasons). All we can do is run an auction and watch its expected profit. In a single-item auction, where there are bidders, and for each bidder there are at most possible values (selected at random with unknown probabilities), the maximum-revenue auction can be learned using: calls to the profit oracle. RSM in small markets RSMs were also studied in a worst-case scenario in which the market is small. In such cases, we want to get an absolute, multiplicative approximation factor that does not depend on the size of the market. Market-halving, digital goods The first research in this setting was for a digital goods auction with single-parameter utility. For the Random-Sampling Optimal-Price mechanism, several increasingly better approximations have been calculated: By, the mechanism profit is at least 1/7600 of the optimal. By, the mechanism profit is at least 1/15 of the optimal. By, the mechanism profit is at least 1/4.68 of the optimal, and in most cases 1/4 of the optimal, which is tight. Single-sample, physical goods When the agents' valuations satisfy some technical regularity condition (called monotone hazard rate), it is possible to attain a constant-factor approximation to the maximum-profit auction using the following mechanism: Sample a single random agent and query his value (the agents are assumed to have single-parameter utility). On the other agents, run a VCG auction with reserve-price determined by the sampled agent. The profit of this mechanism is at least , where is the number of agents. This is 1/8 when there are two agents, and grows towards 1/4 as the number of agents grows. This scheme can be generalized to handle constraints on the subsets of agents that can win simultaneously (e.g., there is only a finite number of items). It can also handle agents with different attributes (e.g. young vs. old bidders). Sample complexity The sample complexity of a random-sampling mechanism is the number of agents it needs to sample in order to attain a reasonable approximation of the optimal welfare. The results in imply several bounds on the sample-complexity of revenue-maximization of single-item auctions: For a -approximation of the optimal expected revenue, the sample-complexity is - a single sample suffices. This is true even when the bidders are not i.i.d. For a -approximation of the optimal expected revenue, when the bidders are i.i.d OR when there is an unlimited supply of items (digital goods), the sample-complexity is when the agents' distributions have monotone hazard rate, and when the agents' distributions are regular but do not have monotone-hazard-rate. The situation becomes more complicated when the agents are not i.i.d (each agent's value is drawn from a different regular distribution) and the goods have limited supply. 
When the agents come from different distributions, the sample complexity of -approximation of the optimal expected revenue in single-item auctions is: at most - using a variant of the empirical Myerson auction. at least (for monotone-hazard-rate regular valuations) and at least (for arbitrary regular valuations). discuss arbitrary auctions with single-parameter utility agents (not only single-item auctions), and arbitrary auction-mechanisms (not only specific auctions). Based on known results about sample complexity, they show that the number of samples required to approximate the maximum-revenue auction from a given class of auctions is: where: the agents' valuations are bounded in , the pseudo-VC dimension of the class of auctions is at most , the required approximation factor is , the required success probability is . In particular, they consider a class of simple auctions called -level auctions: auctions with reserve prices (a Vickrey auction with a single reserve price is a 1-level auction). They prove that the pseudo-VC-dimension of this class is , which immediately translates to a bound on their generalization error and sample-complexity. They also prove bounds on the representation error of this class of auctions. Envy A disadvantage of the random-sampling mechanism is that it is not envy-free. E.g., if the optimal prices in the two sub-markets and are different, then buyers in each sub-market are offered a different price. In other words, there is price discrimination. This is inevitable in the following sense: there is no single-price strategyproof auction that approximates the optimal profit. See also Market research Pricing Consensus estimate - an alternative approach to prior-free mechanism design. References Mechanism design Sampling techniques
Random-sampling mechanism
[ "Mathematics" ]
1,846
[ "Game theory", "Mechanism design" ]
49,023,626
https://en.wikipedia.org/wiki/Digital%20goods%20auction
In auction theory, a digital goods auction is an auction in which a seller has an unlimited supply of a certain item. A typical example is when a company sells a digital good, such as a movie. The company can create an unlimited number of copies of that movie at a negligible cost. The company's goal is to maximize its profit; to do this, it has to find the optimal price: if the price is too high, only few people will buy the item; if the price is too low, many people will buy but the total revenue will be low. The optimal price of the movie depends on the valuations of the potential consumers - how much each consumer is willing to pay to buy a movie. If the valuations of all potential consumers are known, then the company faces a simple optimization problem - selecting the price that maximizes the profit. For concreteness, suppose there is a set of consumers and that they are ordered by their valuation, so that the consumer with the highest valuation (willing to pay the largest price for the movie) is called "1", the next-highest is called "2", etc. The valuation of consumer is denoted by , such that . For every , if the price is set to , then only the first consumers buy the movie, so the profit of the company is . It is clear that in this case, the company is best off setting the price at exactly ; in this case its profit is . Hence, the company's optimization problem is: The problem is that, usually, the valuations of the consumers are not known. The company can try to ask them, but then they will have an incentive to report lower valuations in order to decrease the price. There is much research on designing strategyproof digital goods auctions. Most of it is based on one of two approaches: Random-sampling mechanisms, Consensus estimates. More details and references can be found there. References Mechanism design Auction theory
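When the valuations are known, the optimization problem described above can be solved directly. The following minimal Python sketch illustrates it; the function name and the example numbers are illustrative choices, and a non-empty list of valuations is assumed.

```python
def optimal_price(valuations):
    """Pick the profit-maximizing posted price given known valuations.

    With valuations sorted in non-increasing order v[0] >= v[1] >= ...,
    posting the price v[i-1] sells to exactly the first i consumers,
    so the seller's problem is: maximize i * v[i-1] over i.
    """
    v = sorted(valuations, reverse=True)
    best_i = max(range(1, len(v) + 1), key=lambda i: i * v[i - 1])
    return v[best_i - 1], best_i * v[best_i - 1]  # (price, profit)

# Valuations 9, 7, 3: candidate profits are 1*9=9, 2*7=14, 3*3=9,
# so the optimal posted price is 7 with profit 14.
print(optimal_price([3, 9, 7]))  # (7, 14)
```

The strategyproofness difficulty described above arises precisely because this computation needs the true valuations as input.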
Digital goods auction
[ "Mathematics" ]
399
[ "Game theory", "Mechanism design", "Auction theory" ]
49,024,305
https://en.wikipedia.org/wiki/Lepiota%20erminea
Lepiota erminea, commonly known as the dune dapperling, is a species of agaric fungus in the family Agaricaceae. It is found in Europe and North America. See also List of Lepiota species References External links erminea Fungi described in 1821 Fungi of Europe Fungi of North America Taxa named by Elias Magnus Fries Fungus species
Lepiota erminea
[ "Biology" ]
76
[ "Fungi", "Fungus species" ]
49,024,399
https://en.wikipedia.org/wiki/Lycoperdon%20lividum
Lycoperdon lividum, commonly known as the grassland puffball, is a type of puffball mushroom in the genus Lycoperdon. It is found in Europe, where it grows on sandy soil in pastures, dunes, and heaths, especially in coastal areas. It fruits in autumn. It was first described scientifically in 1809 by Christian Hendrik Persoon. References External links Puffballs Fungi described in 1809 Fungi of Europe lividum Fungus species
Lycoperdon lividum
[ "Biology" ]
97
[ "Fungi", "Fungus species" ]
49,024,588
https://en.wikipedia.org/wiki/Deuterosome
In cell biology, a deuterosome is a protein structure within a multiciliated cell (such as an epithelial cell of the respiratory tract) that produces multiple centrioles. Most cells in the human body possess one primary cilium, a relatively small protrusion of the cell membrane that looks like a stick or a finger under the electron microscope. The primary cilium is typically used by the cell as a sensory organelle, or antenna. Some cells, however, have numerous cilia which they use to generate directed fluid flow. Examples include: epithelial cells of the respiratory tract, in which multiple cilia are used for mucus clearance; the oviduct, in which cilia help the egg migrate to the uterus; and others. Each cilium has a basal body formed from a centriole to which it is anchored and from which it starts to grow after each cell division, when a new daughter cell is formed. Centrioles typically replicate once during cell division, thus allowing for only one cilium per daughter cell. Multiciliated cells, on the other hand, need to produce more than 100 centrioles in order to grow multiple cilia. This problem is solved by the existence of the deuterosome, a structure thought to be formed from amorphous filamentous material and able to make many centrioles at once. Evidence for the existence of the deuterosome first came from electron microscopy work in various multiciliated tissues. It was found that both centriole duplication and de novo generation of centrioles occur in such cells. The generation of new centrioles which will serve as basal bodies for multiple cilia is due to a cytoplasmic structure, which was termed the "deuterosome" by Sorokin. References Cell biology Cell cycle
Deuterosome
[ "Biology" ]
379
[ "Cell biology", "Cellular processes", "Cell cycle" ]
49,024,626
https://en.wikipedia.org/wiki/Egorychev%20method
The Egorychev method is a collection of techniques introduced by Georgy Egorychev for finding identities among sums of binomial coefficients, Stirling numbers, Bernoulli numbers, harmonic numbers, Catalan numbers and other combinatorial numbers. The method relies on two observations. First, many identities can be proved by extracting coefficients of generating functions. Second, many generating functions are convergent power series, and coefficient extraction can be done using the Cauchy residue theorem (usually this is done by integrating over a small circular contour enclosing the origin). The sought-for identity can now be found using manipulations of integrals. Some of these manipulations are not clear from the generating function perspective. For instance, the integrand is usually a rational function, and the sum of the residues of a rational function is zero, yielding a new expression for the original sum. The residue at infinity is particularly important in these considerations. Some of the integrals employed by the Egorychev method are: First binomial coefficient integral $\binom{n}{k} = \frac{1}{2\pi i} \oint_{|z|=\rho} \frac{(1+z)^n}{z^{k+1}} \, dz$ Second binomial coefficient integral $\binom{n}{k} = \frac{1}{2\pi i} \oint_{|z|=\rho} \frac{1}{(1-z)^{k+1} z^{n-k+1}} \, dz$ Exponentiation integral $n^k = \frac{k!}{2\pi i} \oint_{|z|=\rho} \frac{\exp(nz)}{z^{k+1}} \, dz$ Iverson bracket $[\![ k \le n ]\!] = \frac{1}{2\pi i} \oint_{|z|=\rho} \frac{z^k}{z^{n+1}} \frac{dz}{1-z}$ Stirling number of the first kind $\left[ n \atop k \right] = \frac{n!}{k!} \cdot \frac{1}{2\pi i} \oint_{|z|=\rho} \frac{1}{z^{n+1}} \left( \log \frac{1}{1-z} \right)^k \, dz$ Stirling number of the second kind $\left\{ n \atop k \right\} = \frac{n!}{k!} \cdot \frac{1}{2\pi i} \oint_{|z|=\rho} \frac{(\exp(z)-1)^k}{z^{n+1}} \, dz$ Example I Suppose we seek to evaluate which is claimed to be : Introduce : and : This yields for the sum : This is Extracting the residue at we get thus proving the claim. Example II Suppose we seek to evaluate Introduce Observe that this is zero when so we may extend to infinity to obtain for the sum Now put so that (observe that with the image of with small is another closed circle-like contour which makes one turn and which we may certainly deform to obtain another circle ) and furthermore to get for the integral This evaluates by inspection to (use the Newton binomial) Here the mapping from to determines the choice of square root. For the conditions on and we have that for the series to converge we require or or The closest that the image contour of comes to the origin is so we choose for example This also ensures that so does not intersect the branch cut (and is contained in the image of ). For example and will work. This example also yields to simpler methods but was included here to demonstrate the effect of substituting into the variable of integration. Computation using formal power series We may use the change of variables rule 1.8 (5) from the Egorychev text (page 16) on the integral with and We get and find with the inverse of . This becomes or alternatively Observe that so this is and the rest of the computation continues as before. External links Hosam Mahmoud, 2022, History and examples of Egorychev method Marko Riedel, 2024, Computational examples of using the Egorychev method to evaluate sums involving types of combinatorial numbers (parts 1 and 2, formal power series and residue operators Marko Riedel, 2024, Computational examples of using the Egorychev method to evaluate sums involving types of combinatorial numbers (part 3, complex variables References Factorial and binomial topics
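The first binomial coefficient integral above can be checked numerically by discretizing the contour |z| = 1; the following Python sketch does so (the sample count and radius are illustrative choices, and the trapezoidal discretization is exact here because the integrand has a finite Laurent expansion).

```python
import cmath
import math

def contour_binomial(n, k, samples=4096, radius=1.0):
    """Approximate (1/(2*pi*i)) * integral of (1+z)^n / z^(k+1) dz on |z| = radius."""
    total = 0j
    for j in range(samples):
        theta = 2 * math.pi * j / samples
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * math.pi / samples)  # dz along the circular contour
        total += (1 + z) ** n / z ** (k + 1) * dz
    return (total / (2j * math.pi)).real

# The contour integral extracts the coefficient of z^k in (1+z)^n,
# which is exactly the binomial coefficient C(n, k).
for n, k in [(5, 2), (10, 4), (12, 0)]:
    assert round(contour_binomial(n, k)) == math.comb(n, k)
print("contour integral matches math.comb")
```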
Egorychev method
[ "Mathematics" ]
621
[ "Factorial and binomial topics", "Combinatorics" ]
49,028,043
https://en.wikipedia.org/wiki/Phlegmacium%20subfoetidum
Phlegmacium subfoetidum is a species of mushroom-producing fungus in the family Cortinariaceae. It was previously known as Cortinarius subfoetidus. Taxonomy It was described as new to science in 1944 by American mycologist Alexander H. Smith, who classified it as Cortinarius subfoetidus. It was placed in Cortinarius (subgenus Phlegmacium). In 1999 Meinhard Michael Moser and Joe Ammirati published the variety Cortinarius subfoetidus var. bubalinovelatus. In 2022 the species was transferred from Cortinarius and reclassified as Phlegmacium subfoetidum based on genomic data. Description The mushroom cap is 3–10 cm wide, convex to flat (sometimes umbonate), lavender to pinkish, bluish in age, slimy, smooth, with a fruity odor. The gills are adnate to notched, lilac then brown as the spores mature. The stalk is 5–10 cm tall and 1–2 cm wide, equal or clavate. Its edibility is unknown, but it is not recommended due to its similarity to deadly poisonous species. Similar species include Cortinarius griseoviolaceus and C. traganus. Habitat and distribution Found in the Pacific Northwest region of the United States and Canada. See also List of Cortinarius species References External links subfoetidus Fungi described in 1944 Fungi of Canada Fungi of the United States Fungi without expected TNC conservation status Fungus species
Phlegmacium subfoetidum
[ "Biology" ]
324
[ "Fungi", "Fungus species" ]
49,028,290
https://en.wikipedia.org/wiki/Tetrakis%28cyclopentadienyl%29uranium%28IV%29
Tetrakis(cyclopentadienyl)uranium(IV), U(C5H5)4, abbreviated U(Cp)4, is an organouranium compound composed of a uranium atom sandwiched between four cyclopentadienide rings. Synthesis and properties Tetrakis(cyclopentadienyl)uranium(IV) was first prepared in 1962 by Ernst Otto Fischer, who reacted uranium tetrachloride with excess potassium cyclopentadienide in benzene and obtained the complex as red crystals at 6% yield: UCl4 + 4 KCp → U(Cp)4 + 4 KCl Solid crystals of U(Cp)4 are air-stable, but the benzene solution is extremely air-sensitive. Reduction of U(Cp)4 with uranium metal yields tris(cyclopentadienyl)uranium(III), U(Cp)3. References Organouranium compounds Metallocenes Cyclopentadienyl complexes Uranium(IV) compounds Substances discovered in the 1960s
Tetrakis(cyclopentadienyl)uranium(IV)
[ "Chemistry" ]
218
[ "Organometallic chemistry", "Cyclopentadienyl complexes" ]
49,028,503
https://en.wikipedia.org/wiki/Heteroborane
Heteroboranes are classes of boranes in which at least one boron atom is replaced by an atom of another element. Like many of the related boranes, these clusters are polyhedra and are similarly classified as closo-, nido-, arachno-, and hypho-, according to the so-called electron count. Closo- represents a complete polyhedron, while nido-, arachno- and hypho- stand for polyhedra that are missing one, two and three vertices, respectively. Besides carbon (carboranes or carbaboranes), other elements can be included in heteroborane molecules as well, such as Si (silaboranes), N (azaboranes, including borazine), P (phosphaboranes), As (arsaboranes), Sb (stibaboranes), O (oxaboranes), S (thiaboranes), Se (selenaboranes) and Te (telluraboranes), either alone or in combination. Structurally, some heteroboranes can be derived from the icosahedral (Ih) anion [B12H12]2− via formal replacement of its BH fragments with isoelectronic CH−, P or S fragments, e.g., closo-1-CB11H12− and closo-1,2-C2B10H12 (two of the carboranes), closo-1,2-P2B10H10 (one of the phosphaboranes) or closo-1-SB11H11 (one of the thiaboranes). Heteroboranes are used in various fields, such as drug discovery, imaging, and nanotechnology. See also Carboranes Azaboranes Metallacarboranes Dicarbollide Carborane superacid References Boron compounds Cluster chemistry Boranes
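The electron count mentioned above follows Wade's rules, under which an n-vertex cluster with n+1 skeletal electron pairs is closo, with n+2 pairs nido, with n+3 arachno, and with n+4 hypho. A minimal Python sketch of this bookkeeping follows; the per-fragment electron contributions are the standard Wade values, while the function name and the examples are illustrative choices.

```python
# Skeletal electron contributions of common vertex fragments (Wade's rules).
FRAGMENT_ELECTRONS = {"BH": 2, "CH": 3, "P": 3, "S": 4}

def classify_cluster(fragments, charge=0, extra_h=0):
    """Classify a borane/heteroborane cluster by skeletal electron pairs.

    fragments: list of vertex fragments, e.g. ["BH"] * 12
    charge:    overall charge (each negative charge adds one electron)
    extra_h:   bridging/endo hydrogens beyond one terminal H per vertex
    """
    n = len(fragments)
    electrons = sum(FRAGMENT_ELECTRONS[f] for f in fragments) - charge + extra_h
    pairs = electrons // 2
    names = {n + 1: "closo", n + 2: "nido", n + 3: "arachno", n + 4: "hypho"}
    return names.get(pairs, "unclassified")

print(classify_cluster(["BH"] * 12, charge=-2))           # [B12H12]2- -> closo
print(classify_cluster(["CH"] * 2 + ["BH"] * 10))         # C2B10H12   -> closo
print(classify_cluster(["BH"] * 5, extra_h=4))            # B5H9       -> nido
```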
Heteroborane
[ "Chemistry" ]
395
[ "Cluster chemistry", "Organometallic chemistry" ]
49,028,543
https://en.wikipedia.org/wiki/Centre%20for%20International%20Climate%20and%20Environmental%20Research
The CICERO Center for International Climate Research (abbreviated CICERO; ) is an interdisciplinary research centre for climate research and environmental science/environmental studies in Oslo. CICERO was established by the Government of Norway in 1990. It is organised as an independent foundation and is affiliated with the University of Oslo. The current director is Kristin Halvorsen, former Minister of Finance. Directors Ted Hanisch (1990–1993) Helga Hernes (1993–1996) Knut H. Alfsen (1997–2002) (2002–2012) Cecilie Mauritzen (2012–2013) Kristin Halvorsen (2014–) References University of Oslo Climate change organizations Environmental research institutes Multidisciplinary research institutes Scientific organisations based in Norway Scientific organizations established in 1990 1990 establishments in Norway
Centre for International Climate and Environmental Research
[ "Environmental_science" ]
156
[ "Environmental research institutes", "Environmental research" ]
49,029,020
https://en.wikipedia.org/wiki/Physical%20Review%20Fluids
Physical Review Fluids is a peer-reviewed scientific journal, published monthly by the American Physical Society. The journal focuses on fluid dynamics and also covers geophysical fluid dynamics, biofluid dynamics, nanofluidics and magnetohydrodynamics. Its lead editors are Eric Lauga (University of Cambridge) and Beverley McKeon (California Institute of Technology). The journal launched in January 2016 and published its 500th article in 2017. Abstracting and indexing The journal is abstracted and indexed in different databases, including: Current Contents/Physical, Chemical & Earth Sciences Inspec Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.5. References External links Fluid dynamics journals English-language journals Monthly journals American Physical Society academic journals Academic journals established in 2016
Physical Review Fluids
[ "Chemistry" ]
169
[ "Fluid dynamics journals", "Fluid dynamics stubs", "Fluid dynamics" ]
49,030,177
https://en.wikipedia.org/wiki/Dansk%20Datamatik%20Center
Dansk Datamatik Center (DDC) was a Danish software research and development centre that existed from 1979 to 1989. Its main purpose was to demonstrate the value of using modern techniques, especially those involving formal methods, in software design and development. Three major projects dominated much of the centre's existence. The first concerned the formal specification and compilation of the CHILL programming language for use in telecommunication switches. The second involved the formal specification and compilation of the Ada programming language. Both the Ada and CHILL efforts made use of formal methods. In particular, DDC worked with Meta-IV, an early version of the specification language of the Vienna Development Method (VDM) formal method for the development of computer-based systems. As founded by Dines Bjørner, this represented the "Danish School" of VDM. This use of VDM led in 1984 to the DDC Ada compiler becoming the first European Ada compiler to be validated by the United States Department of Defense. The third major project was dedicated towards creation of a new formal method, RAISE. The success of the Ada compiler system would lead to creation of the commercial company DDC International A/S (DDC-I, Inc. in the US) in 1985, which would develop, productise, and market it both directly to customers and to other companies which would use it as the basis for their own Ada compiler products. Origins In spring 1979, Christian Gram, a computer scientist at the Technical University of Denmark (DTU)—located in Kongens Lyngby, north of Copenhagen—suggested to his colleague Dines Bjørner the idea of building an advanced software institute. Looking at the software crisis of the time, they felt that computer science had created foundational and theoretical approaches that if applied could make software development a more professional process and permit the development of large software systems on schedule and with quality. They approached the Akademiet for de Tekniske Videnskaber (ATV, the Danish Academy for Technical Sciences) with this idea, and in September 1979, Dansk Datamatik Center was formed as an ATV institute for advanced software development. (It was also referred to as the Danish Datamatics Centre in some early documents.) Ten large producers or users of information technology in Denmark became paying members of the new entity: , Crone & Koch, the Danish Defence Research Establishment, , , Kommunedata, Regnecentralen af 1979, Sparekassernes Datacenter, (TFL), and ØK Data, with each member paying DKK 100,000 per year. Bjørner became the scientific leader of the centre. The managing director of DDC was Leif Rystrøm. When it reached its greatest size around 1984, some 30–35 professional employees worked at DDC, with about 40 employees in total. By 1984, DDC had a budget of DKK 13 million, a substantial increase from its initial budget of DKK 1 million. Many of the engineers hired came from DTU and Copenhagen University. In the beginning the centre was housed in a building on the DTU campus, but it was later located in a converted textile mill along the Mølleåen, close to Lyngby centre. The cube-inspired red logo of DDC was designed by Ole Friis, who in 1984 won an award from the Danish Design Centre for it. CHILL projects During 1978, Bjørner became interested in creating a formal definition, using denotational semantics, of the CHILL programming language then under development. 
Work on the formal definition of CHILL began that year based upon the request of Teleteknisk Forskningslaboratorium, assigned to a group under the Comité Consultatif International Téléphonique et Télégraphique (CCITT) and conducted at DTU, with some eighteen students working on the effort. Once DDC was established, the formal definition was completed there in 1980 and 1981. Opinions on the value of the effort differ: Bjørner has stated it discovered a definitional issue that led to the simplification of the language, while Remi Bourgonjon of Philips, the convener of the Implementors' Forum organized by the CCITT, thought the formal definition was too complicated and came too late to benefit CHILL compiler designers. At the same time, a CHILL compiler was developed, again starting before DDC but completed by it and TFL. It was developed using formal methods. The two organisations made the compiler publicly available and it would have an important role in education concerning the CHILL language. It was also adapted by British firm Imperial Software Technology with a new code generator and found use by GEC and others during the 1980s. A joint project that GEC and DDC carried out in the early 1980s was to investigate the incorporation of CHILL into an Ada Programming Support Environment (APSE), to support projects that used both languages. DDC's part of the project used an examination of the denotational semantics of both languages and concluded that such an integration was technically feasible. DDC continued to be involved in publishing papers at CHILL conferences during the first half of the 1980s, but not after that. Ada projects The advent of the U.S. Defense Department sponsorship of the Ada programming language during the 1979–80 period led to European interest in the new language as well, and the Commission of the European Communities (CEC) decided to allocate funding for a European Ada compiler and runtime system. A consortium of Olivetti from Italy and DDC and Christian Rovsing from Denmark submitted a bid that in early 1981 won out over a previously favored bid from a French–German consortium; half of the funding would come from the CEC and half from Danish sources. Ole N. Oest was transferred from the Danish Defence Research Establishment to DDC to manage the Ada work. DDC was responsible for developing a Portable Ada Programming System. Requirements included hosting the Ada compiler on small, 16-bit minicomputers such as the Christian Rovsing CR80D and Olivetti M40, among other platforms, and being able to fit within 80 kilobytes code and 110 kilobytes data. As a result, the compiler was constructed of many passes, in this case six for the front end alone, with linearized trees stored in files as the representation between passes. The compiler creation process went through four steps: development of a formal specification of Ada; development of a formal specification of the compiler components; development of more detailed formal specifications of particular compiler passes; and implementation of these specifications in Ada itself. Among formal approaches, using the Vienna Development Method (VDM) was advantageous in this project because it was tailored for use with computer languages and compilers and because it allowed stepwise refinement of operations as well as of data representations. The central goal of the process was to prove that the implementation was equivalent to the specification. 
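To illustrate the kind of pass structure described above, in which many small passes exchange linearized trees through files so that each pass fits in limited memory, here is a toy Python sketch. The mini-language, the JSON file format, and the pass names are invented for illustration and have no connection to the actual DDC code base.

```python
# Toy pipeline: each pass reads its input from a file and writes a
# linearized tree for the next pass, so only one pass needs to be
# resident in memory at a time.
import json
import pathlib

def pass_tokenize(source: str, out: pathlib.Path) -> None:
    out.write_text(json.dumps(source.split()))

def pass_parse(inp: pathlib.Path, out: pathlib.Path) -> None:
    # Turn "a + b + c" into a linearized (postfix) tree: a b ADD c ADD.
    tokens = json.loads(inp.read_text())
    linear = [("NUM", int(tokens[0]))]
    for i in range(1, len(tokens), 2):   # tokens alternate: "+", number
        linear.append(("NUM", int(tokens[i + 1])))
        linear.append(("ADD", None))
    out.write_text(json.dumps(linear))

def pass_evaluate(inp: pathlib.Path) -> int:
    stack = []
    for kind, value in json.loads(inp.read_text()):
        if kind == "NUM":
            stack.append(value)
        else:                            # ADD combines the top two numbers
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

work = pathlib.Path(".")
pass_tokenize("1 + 2 + 3", work / "tokens.json")
pass_parse(work / "tokens.json", work / "tree.json")
print(pass_evaluate(work / "tree.json"))  # 6
```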
In cases where the static abstract syntax representation needed to have additional constraints incorporated, well-formedness criteria—another aspect of VDM—were defined. The first step in the process, a formal specification for Ada, had already been started by five students at DTU in 1980 as part of their master's theses. Ada was a difficult language to implement and early attempts to build a compiler for it often resulted in disappointment or outright failure. The DDC compiler was validated on a VAX/VMS system in September 1984, being the first European Ada compiler to pass, and proved a success. At that point about 44 person-years of development work had gone into it. The defect rate and maintenance costs would prove to be significantly lower for the compiler than for the software industry average. Attention regarding DDC's use of VDM in compiler design led to interest from other computer manufacturers and sales were made of what became known as the DDC OEM Compiler Kit (the name being a reference to the original equipment manufacturer business model). The compiler system offered two points for retargeting, a high-level tree-structured intermediate language and a sequence of instructions for an abstract stack machine; the latter meant shorter project times but usually not the most optimized generated code. (The abstract stack-based virtual machine was also worked on by Christian Rovsing; there was also some idea of possibly implementing it in hardware or firmware.) The first such OEM sale was to Nokia, for rehosting on the Nokia MPS 10. The second, with a contract made in February 1984, was with Honeywell Information Systems in Boston. The compiler was thus rehosted and retargeted to the Honeywell DPS6 and validated in November 1984. In addition, cross compilers began to be developed, with DDC doing one from VAX/VMS to the Intel 8086, beginning what would become a successful line of products. In December 1984, DDC signed a contract with Advanced Computer Techniques in New York, based on a license royalty arrangement. They began using the DDC front end to develop a cross-compiler for the MIL-STD-1750A architecture, which would become a reasonably successful product with a number of customers. Success of the Ada project led to a separate company being formed in 1985, called DDC International A/S, with the purpose of commercializing the Ada compiler system; Oest was named the managing director of the company. A year later a US-based subsidiary of that company, DDC-I, Inc., was formed in the state of Arizona. Concurrent with the compiler work, there was a push on various fronts to provide a formal definition of Ada, with several different approaches and metalanguages tried. Some Europeans argued that such a task was critical and that it was the only basis upon which an ISO standard for the language should be published. The CEC sponsored this work and the contract was won by DDC in partnership with two Italian research institutes, the Istituto di Elaborazione dell'Informazione (IEI) in Pisa and the Consorzio per la Ricerca e le Applicazioni di Informatica (CRAI) in Genoa, with work beginning in 1984. Additional consulting on the project was provided by staff at the University of Genoa, the University of Pisa, and at DTU. The work built upon the previous formal definitions that had been done at DTU and by DDC at the beginning of its Ada compiler project, but further work was needed to define the entire language and Meta-IV had to be extended in places or alternate approaches taken. 
This effort culminated in the 1987 publication of the full formal definition of Ada, encompassing three separate publications and eight volumes in total. While this effort did lead to a better understanding of the language and a number of clarifications to it being made, in the end the ultimate definition of the language remained the natural language one in the Ada Language Reference Manual. RAISE projects The use of VDM in the CHILL and Ada projects revealed the need for improvements in formal specification techniques, and in 1983 DDC conducted a Formal Methods Appraisal study, producing a number of requirements that a formal specification language should embody. Following that DDC was awarded a CEC contract to develop a successor to VDM, which was called RAISE (Rigorous Approach to Industrial Software Engineering). This was done in consortium with STC Technology of Great Britain, which helped in the creation of the new technology, and with Nordisk Brown Boveri of Denmark and International Computers Limited of Britain, which exercised it in industrial settings. The project involved some 120 person-years of effort and sought to create a wide-spectrum language intended to handle every level from the initial, high-level abstract one down to one level above programming. It sought to remedy VDM's weaknesses with respect to modularity, concurrency, and lack of tools, and it also sought to unify approaches taken in the likes of Z notation, CSP, Larch, and OBJ. Besides the RAISE Specification language, the project also produced a description of best practices for the RAISE Method, and a RAISE toolset. Other projects In 1981 DDC, in conjunction with some of its members, conducted a study of the many office automation initiatives and products then available and published a taxonomy and terminology guide that analysed the domain. They then specified a generic office automation system using both VDM and informal language. Later during 1983–1987, DDC worked as a subcontractor to member ØK Data on the Functional Analysis of Office Requirements (FAOR) project under ESPRIT. DDC also gave courses and seminars in various software development topics, and starting in 1987, initiated a Danish-language quarterly publication Cubus which discussed various technical and scientific topics in an effort to engage in technology transfer. Conclusion and legacy During the centre's existence, some of the constituent members lost interest in its work, with no need for the CHILL or Ada compilers and the RAISE work too ambitious for their use. General acceptance of Ada as a language underperformed expectations and Ada product sales by DDC-I did not provide sufficient profits to allow money to flow to DDC. With sustained funding becoming a problem, in 1989 Dansk Datamatik Center was closed down. Work on the Ada products was carried on by DDC-I, where it was used in many high-visibility aerospace and similar projects. The best-known of these was the Airplane Information Management System flight software for the Boeing 777 airliner. Subsequent developers of the DDC-I Ada compiler were often not as well versed in formal methods as the original developers. The Ada products would still be generating revenue for DDC-I into the 2010s. DDC's work and staff on RAISE were transferred to Computer Resources International (CRI) in 1988. They used it as the basis for the European ESPRIT II LaCoS project in the 1990s. 
The RAISE effort was subsequently sold to Terma A/S, who use it as part of work for the European Space Agency and various defense industry projects. DDC had relatively little involvement with the Nordic software world, because it relied on European Union-based partners and funding and Denmark was the only Nordic country in the EU at the time. Nor did the Danish financial sector ever show an interest in DDC's work. In looking back, the founders of the centre have stated that, "Where DDC failed was to [convince] major Danish companies of the benefits of using reliable software development based on formal methods. (But, DDC did not try very much.)" DDC researchers believed that their work was still beneficial in making Danish technology firms aware of modern software development approaches and in populating those firms with as many as a hundred software designers and developers who had worked at DDC, and that in any case, "DDC completed a large number of projects with better performance and higher product quality than was common in the 1980s." In a 2014 survey of forty years of formal methods efforts, Bjørner and Klaus Havelund lamented that adoption of formal methods has not become widespread in the software industry and referred to the DDC Ada compiler as an unsung success story of the value of such use. Bibliography A slightly expanded version of this chapter is available online at https://www.researchgate.net/publication/221271386_Dansk_Datamatik_Center. A further expanded version is part of Bjørner's online memoir at http://www.imm.dtu.dk/~dibj/trivia/node5.html. A slides presentation by Gram based on the paper is available online as Why Dansk Datamatik Center? WorldCat entry References Software engineering organizations Computer science research organizations Formal methods organizations Scientific organizations based in Denmark Defunct organizations based in Denmark Companies based in Lyngby-Taarbæk Municipality Organizations established in 1979 1979 establishments in Denmark Religious organizations disestablished in 1989 Ada (programming language)
Dansk Datamatik Center
[ "Engineering" ]
3,275
[ "Software engineering", "Software engineering organizations" ]
49,030,367
https://en.wikipedia.org/wiki/Negative%20imaginary%20systems
Negative imaginary (NI) systems theory was introduced by Lanzon and Petersen; a generalization of the theory was presented subsequently. In the single-input single-output (SISO) case, such systems are defined through the properties of the imaginary part of the frequency response G(jω): the system must have no poles in the open right half plane and must satisfy $j\left[G(j\omega) - G(j\omega)^{*}\right] \geq 0$, equivalently $\operatorname{Im}[G(j\omega)] \leq 0$, for all ω in (0, ∞). This means that a system is negative imaginary if it is stable and its Nyquist plot exhibits a phase lag in the interval $[-\pi, 0]$ for all ω > 0. Negative Imaginary Definition A square transfer function matrix $G(s)$ is NI if the following conditions are satisfied: $G(s)$ has no pole in $\operatorname{Re}[s] > 0$. For all $\omega > 0$ such that $j\omega$ is not a pole of $G(s)$, $j\left[G(j\omega) - G(j\omega)^{*}\right] \geq 0$. If $s = j\omega_0$ with $\omega_0 > 0$ is a pole of $G(s)$, then it is a simple pole and furthermore the residue matrix $K = \lim_{s \to j\omega_0}(s - j\omega_0)\, jG(s)$ is Hermitian and positive semidefinite. If $s = 0$ is a pole of $G(s)$, then $\lim_{s \to 0} s^{k} G(s) = 0$ for all $k \geq 3$ and $\lim_{s \to 0} s^{2} G(s)$ is Hermitian and positive semidefinite. These conditions can be summarized as: The system is stable. For all positive frequencies, the phase of the frequency response lies in the interval $[-\pi, 0]$. Negative Imaginary Lemma Let $(A, B, C, D)$ be a minimal realization of the transfer function matrix $G(s)$. Then $G(s)$ is NI if and only if $D = D^{T}$ and there exists a matrix $Y = Y^{T} > 0$ such that the following LMI conditions are satisfied: $AY + YA^{T} \leq 0$ and $B + AYC^{T} = 0$. This result follows from positive real theory, after converting the negative imaginary system into a positive real system for analysis. References Frequency-domain analysis
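The SISO frequency-domain condition above lends itself to a simple numerical spot-check. The following Python sketch samples Im[G(jω)] on a logarithmic grid and tests the sign condition; the example transfer function, grid, and tolerance are illustrative assumptions, not drawn from the article, and a grid test is a necessary check rather than a proof (a certificate would come from the LMI conditions in the lemma above).

```python
import numpy as np

def G(s: complex) -> complex:
    """Hypothetical example system G(s) = 1/(s^2 + 2s + 1); replace as needed."""
    return 1.0 / (s**2 + 2.0 * s + 1.0)

# Sample Im[G(jw)] on a logarithmic grid over (0, inf) and test the
# SISO negative imaginary condition Im[G(jw)] <= 0 stated above.
omegas = np.logspace(-3, 3, 2000)
imag_parts = np.array([G(1j * w).imag for w in omegas])

tol = 1e-12  # numerical slack for values that are zero in exact arithmetic
print("NI condition holds on grid:", bool(np.all(imag_parts <= tol)))
```

For this example, Im[G(jω)] = −2ω / ((1 − ω²)² + 4ω²) ≤ 0 for all ω > 0, so the check passes, consistent with the system being stable with phase lag in [−π, 0].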
Negative imaginary systems
[ "Physics" ]
301
[ "Frequency-domain analysis", "Spectrum (physical sciences)" ]
49,030,594
https://en.wikipedia.org/wiki/Journal%20of%20Virological%20Methods
The Journal of Virological Methods is a monthly peer-reviewed scientific journal covering techniques used in all aspects of virology. The journal was established in 1980. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.014. References External links English-language journals Virology journals Monthly journals Academic journals established in 1980 Elsevier academic journals
Journal of Virological Methods
[ "Biology" ]
76
[ "Virus stubs", "Viruses" ]
52,888,081
https://en.wikipedia.org/wiki/Endangered%20sea%20turtles
Worldwide, hundreds of thousands of sea turtles a year are accidentally caught in shrimp trawl nets, on longline hooks, and in fishing gillnets. Sea turtles need to reach the surface to breathe, and therefore many drown once caught. Loggerhead and hawksbill turtles are particularly vulnerable. Nearly all species of sea turtle are classified as Endangered. They are killed for their eggs, meat, skin, and shells, and they also face habitat destruction. Climate change has an impact on turtle nesting sites, and as fishing activity expands, the threat of accidental capture grows. Endangered Species Act The Endangered Species Act of 1973 (ESA; 16 U.S.C. § 1531 et seq.) is one of the dozens of US environmental laws passed in the 1970s and serves as the enacting legislation to carry out the provisions outlined in the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Designed to protect critically imperiled species from extinction, it was signed into law by President Richard Nixon on December 28, 1973. The U.S. Supreme Court found that "the plain intent of Congress in enacting" the ESA "was to halt and reverse the trend toward species extinction, whatever the cost." The Act is administered by two federal agencies, the United States Fish and Wildlife Service (FWS) and the National Oceanic and Atmospheric Administration (NOAA). Endangered marine turtles Hawksbill turtles are characterized by a pointed beak that resembles a bird's. Typically, the turtles can have a shell length of up to about 45 inches and weigh around 150 pounds. The upper shell, also known as a carapace, is serrated with thick, overlapping plates. Their elaborately patterned shells make them valuable in global trading markets. Found predominantly around tropical coral reefs, hawksbills feed on small reef species such as sea sponges, anemones, and jellyfish, using their narrow beaks to reach crevices in the reef. Green turtles, named for their green, fatty underside and cartilage, are significantly larger than the hawksbill and can be recognized by a single pair of prefrontal scales. Green turtles average 3–4 feet in carapace length and weigh between 240 and 420 pounds once fully grown. The diet of green turtles changes throughout their lifetime, from small crustaceans and aquatic insects at a young age to mainly sea grasses and algae as adults. The turtles inhabit coastlines around islands and protected shores in both tropical and temperate climates. Loggerhead turtles are named for their large heads, which support powerful jaw muscles that allow them to crush hard-shelled prey like clams and sea urchins. They are less likely to be hunted for their meat or shell than other sea turtles. Bycatch, the accidental capture of marine animals in fishing gear, is a serious problem for loggerhead turtles because they frequently come into contact with fisheries. Threats Hundreds of thousands of sea turtles a year are accidentally caught in shrimp trawl nets, on longline hooks, and in fishing gillnets—a threat known as bycatch. Because sea turtles must surface to breathe, many drown once caught. Loggerheads are highly migratory and are very likely to come into contact with a fishery, particularly shrimp gillnets and longlines. [2] Climate change, also called global warming, refers to the rise in average surface temperatures on Earth.
An overwhelming scientific consensus maintains that climate change is due primarily to the human use of fossil fuels, which releases carbon dioxide and other greenhouse gases into the air. The gases trap heat within the atmosphere, which can have a range of effects on ecosystems, including rising sea levels, severe weather events, and droughts that render landscapes more susceptible to wildfires. Overfishing Human activities have tipped the scales against the survival of these ancient mariners. Nearly all species of sea turtle are classified as Endangered. Slaughtered for their eggs, meat, skin, and shells, sea turtles suffer from poaching and over-exploitation. They also face habitat destruction and accidental capture in fishing gear. Climate change affects turtle nesting sites: it alters sand temperatures, which in turn affect the sex of hatchlings. Hawksbills are particularly vulnerable to bycatch, which is a serious threat to the species, and as fishing activity expands, this threat grows. References Endangered species Sea turtles Turtle conservation
Endangered sea turtles
[ "Biology" ]
898
[ "Biota by conservation status", "Endangered species" ]
52,888,536
https://en.wikipedia.org/wiki/Oxyphosphides
Oxyphosphides are chemical compounds formally containing the group PO, with one phosphorus and one oxygen atom. The phosphorus and oxygen are not bound together, as they are in phosphates or phosphine oxides; instead, each is bound separately to the cations (metals), so these compounds can be considered mixed phosphide–oxide compounds. A compound containing OmPn therefore requires cations balancing a negative charge of 2m + 3n; for example, in Ca4P2O (m = 1, n = 2) the anion charge of 2 + 6 = 8 is balanced by four Ca2+ cations. The cations have charges of +2 or +3; the trications are often rare earth elements or actinides. These compounds are in the category of oxy-pnictide compounds. Many compounds are layered, containing two metals with the formula XZPO, in which an XP layer alternates with a ZO layer. Examples Examples include: Ca4P2O, greenish gold, space group I4/mmm, Z = 2, unit cell parameters a = 4.492 Å, c = 15.087 Å. Uranium–copper oxyphosphide UCuPO, semimetallic, antiferromagnetic, tetragonal ZrCuSiAs-type, a = 3.7958 Å, c = 8.2456 Å, V = 118.80 Å3, Z = 2, MW = 348.55 g/mol, density = 9.743 g/cm3. Thorium–copper oxyphosphide ThCuPO, semimetallic, tetragonal ZrCuSiAs-type, a = 3.8995 Å, c = 8.2939 Å, V = 126.12 Å3, Z = 2, MW = 342.56 g/mol, density = 9.02 g/cm3. NpCuOP. Sr2ScCoPO3, high thermoelectric effect. Sr2ScFePO3, superconducting below 17 K. LaNiOP, lanthanum nickel oxyphosphide. YOFeP. YOMnP. YOCdP. ROTPn (R = La, Nd, Sm, Gd; T = Mn, Fe, Co, Ni, Cu; Pn = P, As, Sb). YZnPO, transparent red, R-3m, Z = 6, a = 3.946 Å, c = 30.71 Å. LaZnPO, transparent red, ZrCuSiAs-type. DyZnPO, transparent red, R-3m, Z = 6, a = 3.8933 Å, c = 30.305 Å. PrZnPO, transparent red, dimorphic. SmZnPO, transparent red, R-3m, Z = 6, a = 3.946 Å, c = 30.71 Å. NdZnPO, transparent, a = 3.885 Å, c = 30.32 Å. GdZnPO, transparent red, R-3m, Z = 6, a = 3.922 Å, c = 30.56 Å. CeZnPO, transparent. HoZnPO, transparent dark red. CeRuPO, ferromagnetic below 15 K, dimorphic. CeOsPO, antiferromagnetic. Related phosphide oxides: La2AuP2O, C2/m, a = 15.373 Å, b = 4.274 Å, c = 10.092 Å, β = 131.02°, V = 500.3 Å3, dark red-violet. Ce2AuP2O, C2/m, a = 15.152 Å, b = 4.2463 Å, c = 9.992 Å, β = 130.90°, dark red-violet. Pr2AuP2O, C2/m, a = 15.036 Å, b = 4.228 Å, c = 9.930 Å, β = 130.88(2)°, dark red-violet. Nd2AuP2O, C2/m, a = 15.0187 Å, b = 4.2085 Å, c = 9.903 Å, β = 131.12(1)°, dark red-violet. Oxy-pnictides Related compounds are the oxybismuthides and oxyarsenides. References Phosphides Oxides
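As a quick illustration of the charge-balance rule described above, the following Python sketch tallies the anion charge for an OmPn framework; the helper name and example compounds are chosen for illustration and are not part of the article.

```python
# Charge bookkeeping for an oxyphosphide with m oxide (O2-) and
# n phosphide (P3-) anions, as described above: total charge = 2m + 3n.

def anion_charge(m: int, n: int) -> int:
    """Total negative charge from m O2- and n P3- anions (illustrative helper)."""
    return 2 * m + 3 * n

# Ca4P2O: one O and two P give a charge of 8, balanced by four Ca2+ (4 x 2 = 8).
assert anion_charge(1, 2) == 4 * 2

# LaZnPO: one O and one P give a charge of 5, balanced by La3+ + Zn2+ (3 + 2 = 5).
assert anion_charge(1, 1) == 3 + 2
```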
Oxyphosphides
[ "Chemistry" ]
777
[ "Oxides", "Salts" ]