Dataset columns: id (int64, 39–79M), url (string, 31–227 chars), text (string, 6–334k chars), source (string, 1–150 chars), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items).
54,968,454
https://en.wikipedia.org/wiki/ActivityPub
ActivityPub is a protocol and open standard for decentralized social networking. It provides a client-to-server (C2S) API for creating and modifying content, as well as a federated server-to-server (S2S) protocol for delivering notifications and content to other servers. ActivityPub has become the main standard used in the fediverse, a popular decentralized social network consisting of software such as Mastodon, Pixelfed and PeerTube. ActivityPub is considered an update to the ActivityPump protocol used in pump.io, and the official W3C repository for ActivityPub is identified as a fork of ActivityPump.

The creation of a new standard for decentralized social networking was prompted by the complexity of OStatus, the most commonly used protocol at the time. OStatus was built from a multitude of technologies (such as Atom, Salmon, WebSub and WebFinger), a product of the infrastructure used in GNU social (the originator and largest user of the OStatus protocol), which made the protocol difficult to implement in new software. OStatus was also designed only for microblogging services, with little flexibility in the types of data it could carry.

The standard was first published by the World Wide Web Consortium (W3C) as a W3C Recommendation in January 2018 by the Social Web Working Group (SocialWG), a working group chartered to build the protocols and vocabularies needed to create a standard for social functionality. Shortly after, further development moved to the Social Web Community Group (SocialCG), the successor to the SocialWG.

Design

ActivityPub uses the ActivityStreams 2.0 format for its content, which itself uses JSON-LD. The three main data types in ActivityPub are Objects, Activities and Actors. Objects are the most common data type, and can be images, videos, or more abstract items such as locations or events. Activities are actions that create and modify objects; for example, a Create activity creates an Object.
Actors represent an individual, a group, an application or a service, and are the owners of objects. Every actor type contains an inbox and an outbox stream, which receive and send activities for a user. To publish data (for example, liking an article), a user creates an activity declaring that they liked an Article object and publishes it to their outbox, from which the ActivityPub server delivers it via a POST request to the inboxes listed in the activity's to, bto, cc, and bcc fields. The receiving servers then account for the newly received activity and update the article by adding the like to it.

Example data

An example actor object that represents a user account:

{
  "@context": ["https://www.w3.org/ns/activitystreams", {"@language": "ja"}],
  "type": "Person",
  "id": "https://kenzoishii.example.com/",
  "following": "https://kenzoishii.example.com/following.json",
  "followers": "https://kenzoishii.example.com/followers.json",
  "liked": "https://kenzoishii.example.com/liked.json",
  "inbox": "https://kenzoishii.example.com/inbox.json",
  "outbox": "https://kenzoishii.example.com/feed.json",
  "preferredUsername": "kenzoishii",
  "name": "石井健蔵",
  "summary": "この方はただの例です",
  "icon": ["https://kenzoishii.example.com/image/165987aklre4"]
}

An example activity that likes an article object:

{
  "@context": ["https://www.w3.org/ns/activitystreams", {"@language": "en"}],
  "type": "Like",
  "actor": "https://dustycloud.org/christine/",
  "summary": "Christine liked 'Minimal ActivityPub update client'",
  "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub",
  "to": ["https://rhiaro.co.uk/#amy",
         "https://dustycloud.org/followers",
         "https://rhiaro.co.uk/followers/"],
  "cc": "https://e14n.com/evan"
}

An example article object:

{
  "@context": ["https://www.w3.org/ns/activitystreams", {"@language": "en-GB"}],
  "id": "https://rhiaro.co.uk/2016/05/minimal-activitypub",
  "type": "Article",
  "name": "Minimal ActivityPub update client",
  "content": "Today I finished morph, a client for posting ActivityStreams2...",
  "attributedTo": "https://rhiaro.co.uk/#amy",
  "to": "https://rhiaro.co.uk/followers/",
  "cc": "https://e14n.com/evan"
}

Project status

Lead author Christine Lemmer-Webber notes that the team predominantly identified as queer, which led to features that help users and administrators protect against "undesired interaction". She also notes that the team authoring ActivityPub had no corporate participation. The SocialCG previously organized a yearly free conference, ActivityPub Conf, about the future of ActivityPub. Triages are held regularly within the SocialCG to review issues pertaining to the ActivityPub and ActivityStreams 2.0 specifications. In 2023, Germany's Sovereign Tech Fund donated €152,000 to socialweb.coop with the goal of building a new suite for testing various ActivityPub implementations and their compliance with the specification.

Adoption

The initial wave of adoption for ActivityPub (circa 2016–2018) came from software that had already been using OStatus as its federation protocol, such as Mastodon, GNU social and Pleroma. Following the acquisition of Twitter by Elon Musk in 2022, many groups of users critical of the acquisition migrated to Mastodon, bringing new attention to the ActivityPub protocol. Various major social media platforms and corporations have since pledged to implement ActivityPub support, including Tumblr, Flipboard and Meta Platforms' Threads.

Criticism

Accidental denial-of-service attacks

Poorly optimized ActivityPub implementations can cause unintentional distributed denial-of-service attacks on other websites and servers, due to the decentralized nature of the network.
An example is Mastodon's implementation of OpenGraph link previews: every instance that receives a post containing a link with OpenGraph metadata downloads the associated data, such as a thumbnail, within a very short timeframe, and the sudden burst of requests can slow down or crash the linked server.

Account migration

ActivityPub has been criticized for not natively supporting the moving of accounts from one server to another, forcing implementations to build their own solutions. While there has been work on building a standardized system for migrating accounts using the Move activity via the Fediverse Enhancement Proposal organization, the current proposal only allows for basic follower migration, with all other data remaining linked to the original account.

Missing content and data

ActivityPub implementations have been criticized for missing replies and parts of reply threads from remote posts, and for presenting outdated statistics (e.g. likes and reposts) about remote posts. However, this is not a problem with the ActivityPub protocol itself, but with implementations not refreshing their content for updated data when needed.

Software using ActivityPub

Future implementations:
GitLab, a Git forge and development platform
Ghost, a blogging platform
Forgejo, a Git forge and development platform
Tumblr, a microblogging platform

See also

AT Protocol
Comparison of microblogging and similar services
Comparison of software and protocols for distributed social networking
Fediverse
Micropub
OStatus

References

External links

ActivityPub specification

Categories: 2018 introductions, Distributed computing, Microblogging software, Social software, Fediverse, Web applications, Communications protocols, Software that federates via ActivityPub
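As a rough sketch of the publish-and-deliver flow described under Design, the following Python fragment builds a minimal Like activity and prepares a POST request to each recipient inbox. The function names and example URLs are illustrative, not from the specification, and real servers additionally sign deliveries (commonly with HTTP Signatures) so receivers can authenticate the sender; that step is omitted here.

```python
import json
from urllib import request

def build_like_activity(actor, obj, recipients):
    """Build a minimal ActivityStreams 2.0 Like activity as a dict."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Like",
        "actor": actor,
        "object": obj,
        "to": list(recipients),
    }

def deliver(activity, inbox_urls):
    """Serialize the activity and prepare one POST request per inbox.

    The actual network call is commented out; this sketch only shows
    how a server-to-server delivery request would be assembled.
    """
    body = json.dumps(activity).encode("utf-8")
    prepared = []
    for inbox in inbox_urls:
        req = request.Request(
            inbox,
            data=body,
            headers={"Content-Type": "application/activity+json"},
            method="POST",
        )
        prepared.append(req)
        # request.urlopen(req)  # real delivery; disabled in this sketch
    return prepared

# URLs reused from the Like-activity example in the article text.
like = build_like_activity(
    actor="https://dustycloud.org/christine/",
    obj="https://rhiaro.co.uk/2016/05/minimal-activitypub",
    recipients=["https://rhiaro.co.uk/#amy"],
)
reqs = deliver(like, ["https://rhiaro.co.uk/inbox.json"])
print(reqs[0].get_method())  # POST
```

In a full implementation the server would first resolve each recipient collection (e.g. a followers URL) to concrete inbox endpoints before delivery; that resolution step is skipped here.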
ActivityPub
[ "Technology" ]
1,842
[ "Mobile content", "Computer standards", "Communications protocols", "Social software" ]
54,971,608
https://en.wikipedia.org/wiki/Theseus1
Theseus1 (THE1) is a transmembrane receptor-like kinase (RLK) found in plant cells. It was originally discovered in Arabidopsis thaliana as part of a family of 17 related proteins, commonly referred to as the Theseus1/Feronia family or the CrRLK family. So far, THE1 and five other members of the same family of RLKs have been found to play key roles in cell elongation during vegetative growth, mostly through interactions with the cell wall. Though the exact mechanism for this process is still unknown, it is thought to be very similar to, and even partially regulated by, the brassinosteroid pathway. In addition, Theseus1 can detect changes in cell wall integrity and may even recognize pathogenic sequences. While the workings of THE1 and other members of the CrRLK family are understood at a general level, research on the specific interactions between them has yet to be published.

Discovery

Theseus1 was discovered, along with the other members of its family of RLKs, while researchers were attempting to describe a pathway for monitoring cell-wall stability in plant cells. It was first characterized through its interaction with a Procuste1 mutant (prc1-1). This Procuste1 mutant produces less cellulose because of alterations to the cellulose synthase site, resulting in drastically decreased cell wall elongation. When THE1 was also mutated in the presence of the prc1-1 mutant, the rate of cell elongation increased to halfway between the normal growth rate and the prc1-1-only growth rate. Because of this interaction, it was named after Theseus, the mythological founder of Athens and killer of Procrustes. Theseus1 was originally found in, and is still most commonly obtained from, Arabidopsis thaliana. Other members of the same RLK family are named after other mythological figures, such as Feronia, Anxur, and Hercules.
Structure

Theseus1 is an 855-residue, type I transmembrane protein with an extracellular N-terminus and an intracellular C-terminus. The serine/threonine kinase domain typical of RLKs is present on the intracellular C-terminal region, along with an adjacent binding site for ATP. There are also two internal phosphorylation sites that could act as molecular switches for THE1 activation or suppression. The N-terminal region contains a roughly 19-residue dissociable sequence that is thought to signal problems in the cell wall. Additionally, a few regions on the extracellular N-terminus of Theseus1 closely resemble the structure of ML domains in other proteins, suggesting that it could have an additional function of monitoring pathogenic response in the cell wall.

Activity

Theseus1 is normally expressed in all cells, with increased expression in expanding tissues. The enzymatic activity of THE1 can be described on its own, but most of its actions happen in coordination with other members of the CrRLK family, most of which have yet to be described, let alone given a proven mechanism.

Sensing Changes in Cell Wall and Pathogen Response

Theseus1 is commonly believed to detect changes in the cell wall and respond to perturbations. This idea has been applied to a few different scenarios. First, it has been proposed that THE1 detects fragments of cell walls and then signals for the inhibition of cell elongation. Another proposition is that THE1 responds to changes in cell wall composition before signalling for the inhibition of cell elongation. Both fragmentation and altered composition of the cell wall are commonly due to the presence of pathogens. The final commonly supported idea is that THE1 could directly identify the presence of pathogens themselves through the use of its ML domain-like regions. All of these ideas support the theory that THE1 is part of the cell's response to pathogenic activity. THE1 has also been shown to upregulate the same genes that are regulated by pattern recognition receptors (PRRs) and code for defense-related proteins, which further suggests that THE1 plays a role in pathogenic response.

Vital for Cellular Elongation

Theseus1 and other members of the CrRLK family are important for cellular elongation. In particular, Theseus1 and Hercules1 (HERK1) have been shown to perform similar roles in the elongation process. Arabidopsis thaliana plants with a loss-of-function mutation in only one of these proteins maintain a growth rate similar to the wild-type plant. However, if both proteins are mutated, the plant displays a greatly inhibited growth rate. Though the specific mechanism is unknown, these proteins appear redundant but necessary for regular vegetative growth. Additionally, both THE1 and HERK1 function in coordination with the brassinosteroid pathway, with a slight regulatory overlap between the two.

Regulation by Rate of Cellulose Synthesis

Theseus1 has been shown to regulate cell elongation in response to decreased cellulose synthesis. The most commonly described example of this involves the procuste1 mutant with decreased cellulose synthase activity (prc1-1). When prc1-1 is present, THE1 greatly inhibits cellular elongation; however, when a less functional mutant of theseus1 was used in combination with prc1-1, the growth rate increased to somewhere between the natural growth rate and the THE1-repressed growth rate. This shows that while a cell is able to expand further with decreased cellulose levels, THE1 represses elongation because of the change in the rate of cellulose production. This is also thought to be another method of pathogenic response, as some pathogens inhibit cellulose production.

References

Categories: Transmembrane receptors, Arabidopsis thaliana, Plant hormones
Theseus1
[ "Chemistry" ]
1,233
[ "Transmembrane receptors", "Signal transduction" ]
54,972,969
https://en.wikipedia.org/wiki/Vasudeva%20Krishnamurthy
Vasudeva Krishnamurthy (1921–2014), nicknamed Prof. V.K., was an Indian algologist. He established the Krishnamurthy Institute of Algology at Chennai to promote the study of algology. Krishnamurthy, son of the Sanskrit professor R. Vasudeva Sharma, was born on 14 August 1921 at Valavanur, Viluppuram district. He died in Tamil Nadu on 9 May 2014.

Education

Krishnamurthy acquired a B.A. (1940, St. Joseph's College, affiliated to Madras University) and a B.Sc. (Hons.) degree (1942, Presidency College, Madras University) at the University of Madras, which awarded him the Pulny Gold Medal. He also gained an M.A. (1947, University of Madras), an M.Sc. (1952, Presidency College, Madras University) and a Ph.D. (1957, University of Manchester, England). At the age of 21, Krishnamurthy became a research scholar at the botany laboratory of the University of Madras and worked under the father of Indian algology, Prof. M.O.P. Iyengar.

Career

Krishnamurthy was a reader in botany from 1943 to 1960. In 1960 he joined Thanjavur Medical College as a professor of biology, and served as professor of microbiology and bacteriology in the Department of Public Health Engineering at the College of Engineering. In 1961 he joined the Central Salts and Marine Chemicals Research Institute (CSIR), Bhavnagar, Gujarat as a scientist. After working in CSIR's laboratories for a decade (1961–1971), he returned to Tamil Nadu and served as professor of botany in various colleges. When he retired in 1979, he was principal of Arignar Anna Government Arts College for Men, Namakkal.

Contribution to algology

He started the Seaweed Research and Utilization Association to encourage research on seaweed in India. On behalf of this association he published the journal Seaweed Research and Utilization. He was the founding president of the association and served as president until his death. He founded the Krishnamurthy Institute of Algology (KIA), Chennai, India.
KIA has the largest library in Tamil Nadu on algal studies, and it is fully equipped for algal research. Prof. V.K. worked in this laboratory until his death.

Some publications

Krishnamurthy, V. and M. Baluswami 1982 Some species of Ectocarpaceae new to India. Seaw. Res. & Util. 5(2):102–112.
Krishnamurthy, V. and M. Baluswami 1983 Some species of Ectocarpaceae new to India. Seaw. Res. & Util. 6(1):47–48.
Krishnamurthy, V. and M. Baluswami 1984 The species of Porphyra from the Indian region. Seaw. Res. & Util. 7(1):31–38.
Krishnamurthy, V. and M. Baluswami 1986 On Mesospora schmidtii Weber van Bosse (Ralfsiaceae, Phaeophyceae) from the Andaman Islands. Curr. Sci. 55(12):571–572.
Krishnamurthy, V., A. Balasundaram, M. Baluswami and K. Varadarajan 1990 Vertical distribution of marine algae on the east coast of South India. In: Ed. V.N. Raja Rao, Perspectives in Phycology, Today & Tomorrow's Printers and Publishers, New Delhi, pp. 267–281.
Krishnamurthy, V. and M. Baluswami 1988 A new species of Sphacelaria Lyngbye from South India. Seaw. Res. & Util. 11(1):67–69.
Baluswami, M. and M. Rajasekaran 2000 Morphology of Draparnaldiopsis krishnamurthyi Baluswami and Rajasekaran from Kambakkam, Andhra Pradesh. Ind. Hydrobiol. 3: 39–42.
Krishnamurthy, V. and M. Baluswami 2000 Some new species of algae from India. Ind. Hydrobiol. 3(1):45–48.
Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2002 Short term chemical analysis of Grateloupia lithophila Boergesen from Kovalam, near Chennai. Ind. Hydrobiol. 5(2): 155–161.
Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2002 Chemical analysis of Grateloupia lithophila Boergesen. Seaw. Res. & Utiln. 24(1): 79–82.
Angelin, T. Sylvia, M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2004 Physico-chemical properties of carrageenans extracted from Sarconema filiforme and Hypnea valentiae. Seaw. Res. & Utiln. 26(1&2): 197–207.
Kanthimathi, V., M. Baluswami and V. Krishnamurthy 2004 Pithophora polymorpha Wittrock from Mahabalipuram near Chennai. Seaw. Res. Utiln. 26 (Special issue):33–37.
Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2005 Seasonal variation in bio-chemical constituents of Grateloupia lithophila Boergesen. Seaw. Res. & Utiln. 27(1&2):53–56.
Sylvia, S., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2005 Effect of liquid seaweed fertilizers extracted from Gracilaria edulis (Gmel.) Silva, Sargassum wightii Greville and Ulva lactuca Linn. on the growth and yield of Abelmoschus esculentus (L.) Moench. Indian Hydrobiology, 7 (Supplement): 69–88.
Babu, B. and M. Baluswami 2005 Tuomeya americana (Kuetzing) Papenfuss, a fresh-water red alga, new to India. Indian Hydrobiology, 8(1):1–4.
Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2006 Seasonal variation in major metabolic products of some marine Rhodophyceae from the south east coast of Tamil Nadu. Ind. Hydrobiol. 9(2):317–321.
Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2007 Diversity of phycocolloids in selected members of Rhodomelaceae. Ind. Hydrobiol. 10(1):145–151.
Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2008a FT-IR spectroscopic investigations on the agars from Centroceras clavulatum and Spyridia hypnoides. Int. J. Phycol. Phycochem. 4(2):125–130.
Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2008b Seasonal variation in cell wall polysaccharides of Grateloupia filicina and Grateloupia lithophila. Seaweed Res. Utiln. 30(1&2):161–169.

References

Categories: Algae biomass producers, Indian phycologists, 20th-century Indian botanists, 1921 births, 2014 deaths
Vasudeva Krishnamurthy
[ "Engineering", "Biology" ]
1,715
[ "Synthetic biology", "Algae biomass producers", "Genetic engineering" ]
54,973,127
https://en.wikipedia.org/wiki/American%20Eclipse%20%28book%29
American Eclipse: A Nation's Epic Race to Catch the Shadow of the Moon and Win the Glory of the World is a non-fiction book by journalist David Baron, published by Liveright in 2017, about the popular impression of the 1878 solar eclipse as observed across the United States. It won the American Institute of Physics Science Writing Award in 2018.

Background

Baron was inspired to write the book after viewing his first total solar eclipse in Aruba in 1998. He decided to publish it in 2017 to coincide with the solar eclipse of August 21, 2017.

Synopsis

American Eclipse follows three scientists, James Craig Watson, Maria Mitchell, and Thomas Edison, as they traveled to view the total solar eclipse of July 29, 1878.

Reception

Kirkus Reviews described American Eclipse as a "compelling... timely, energetic combination of social and scientific history." Graham Ambrose, writing for The Denver Post, lauded Baron's social history of a scientific topic, noting that Baron "successfully swerves from the dry, impenetrable prose of science writing, grasping instead at something poetic, often funny."

Publication

American Eclipse was released in hardcover in June 2017, in paperback in 2018, and was rereleased with a new afterword in 2024, to coincide with the solar eclipse of April 8, 2024. The book was also adapted into a musical, which premiered at Baylor University on April 7, 2024.

Further reading

References

External links

Categories: 2017 non-fiction books, American history books, English-language non-fiction books, Books about scientists, Astronomy books, History books about the United States, History books about science, 19th-century solar eclipses, Boni & Liveright books, 1878 in the United States
American Eclipse (book)
[ "Astronomy" ]
338
[ "Astronomy books", "Works about astronomy" ]
54,974,049
https://en.wikipedia.org/wiki/Cybernetics%20in%20the%20Soviet%20Union
Cybernetics in the Soviet Union had its own particular characteristics, as the study of cybernetics came into contact with the dominant scientific ideologies and the economic and political reforms of the Soviet Union: from the unmitigated anti-Americanist criticism of cybernetics in the early 1950s, through its legitimization after Stalin's death and up to 1961 and its saturation of Soviet academia in the 1960s, to its eventual decline through the 1970s and 1980s.

Initially, from 1950 to 1954, the reception of cybernetics by the Soviet establishment was exclusively negative. The Soviet Department for Agitation and Propaganda had called for anti-Americanism in Soviet media to be intensified, and in an attempt to fill the Department's quotas, Soviet journalists latched onto cybernetics as an American "reactionary pseudoscience" to denounce and mock. This attack was interpreted as a signal of an official attitude towards cybernetics, and so, under Joseph Stalin's premiership, cybernetics was inflated by Soviet writers into "a full embodiment of imperialist ideology". Upon Stalin's death, the wide-reaching reforms of Nikita Khrushchev's premiership allowed cybernetics to legitimize itself as "a serious, important science", and in 1955, after a group of Soviet scientists realized the potential of the new science, articles on cybernetics were published in the state philosophical organ, Voprosy Filosofii. Under the formerly suppressive scientific culture of the Soviet Union, cybernetics began to serve as an umbrella term for previously maligned areas of Soviet science, such as structural linguistics and genetics. Under the leadership of academician Aksel Berg, the Council on Cybernetics was formed, an umbrella organization dedicated to providing funding for these new lights of Soviet science. By the 1960s, this rapid legitimization had put cybernetics in fashion, and "cybernetics" became a buzzword among career-minded scientists.
Additionally, Berg's administration left many of the organization's original cyberneticians disgruntled; complaints were made that he seemed more focused on administration than on scientific research, citing Berg's grand plans to expand the council to subsume "practically all of Soviet science". By the 1980s, cybernetics had lost relevance in Soviet scientific culture, as its terminology and political function were succeeded by those of informatics in the Soviet Union and, eventually, the post-Soviet states.

Official criticism: 1950–1954

The initial reception of cybernetics in the stifling scientific culture of Soviet state-sanctioned media and academic publication was exclusively negative. Under the plans of the Soviet Department for Agitation and Propaganda, Soviet anti-American propaganda was to be intensified, in order "to show the decay of bourgeois culture and morals" and "debunk the myths of American propaganda" in the wake of the formation of NATO. This imperative sent Soviet newspaper editors on a frantic search for topics to criticize in order to fill these propagandistic quotas. The first to latch onto cybernetics was the science journalist Boris Agapov, following the post-war American interest in developments in computer technology. The cover of the January 23, 1950, issue of Time had featured an anthropomorphic cartoon of a Harvard Mark III under the slogan "Can Man Build a Superman?". On 4 May 1950, Agapov published an article in the Literaturnaya Gazeta entitled "Mark III, a Calculator", ridiculing this American excitement at the "sweet dream" of the military and industrial uses of these new "thinking machines", and criticizing cybernetics originator Norbert Wiener as an example of the "charlatans and obscurantists, whom capitalists substitute for genuine scientists".
Though it was not commissioned by any Soviet authority and never mentioned the science by name, Agapov's article was taken as a signal of an official critical attitude towards cybernetics; editions of Wiener's Cybernetics were removed from library circulation, and several other periodicals followed suit, denouncing cybernetics as a "reactionary pseudoscience". In 1951, a philosopher of the Institute of Philosophy led a public campaign against the philosophy of "semantic idealism", characterizing Wiener, and cybernetics as a whole, as part of this "reactionary philosophy". In 1952, another, more explicitly anti-cybernetic article was published in the Literaturnaya Gazeta, definitively starting the campaign and leading the way for a flurry of popular titles denouncing the topic. At the zenith of this criticism, an article in the October 1953 issue of the state ideological organ, Voprosy Filosofii, was published under the pseudonym "Materialist", entitled "Whom Does Cybernetics Serve?"; it condemned cybernetics as a "misanthropic pseudo-theory" consisting of "mechanicism turning into idealism", pointing to the American military as the "god whom cybernetics served". During this period, Stalin himself never engaged in this rabid criticism of cybernetics; the head of the Soviet Department of Sciences, Iurii Zhdanov, recalled that Stalin "never opposed cybernetics" and made every effort "to advance computer technology" in order to give the USSR a technological advantage. Though the scale of the campaign was modest, with only around ten anti-cybernetic publications produced, Valery Shilov has argued that it constituted a "strict directive to action" from the "central ideological organs", a universal declaration of cybernetics as a bourgeois pseudoscience to be criticized and destroyed. Few of these critics had any access to primary sources on cybernetics.
Agapov's sources were limited to the January 1950 issue of Time; the Institute of Philosophy's criticisms were based on the 1949 volume of ETC: A Review of General Semantics; and, among Soviet articles on cybernetics, only the "Materialist" quoted Wiener's Cybernetics directly. Select sensational quotes from Wiener, and speculations based "exclusively on the basis of other [Soviet] books already written on the same or similar subject", were used to characterize Wiener as both an idealist and a mechanicist, criticizing his supposed reduction of scientific and sociological ideas to mere "mechanical model[s]". Wiener's gloomy speculations on the "second industrial revolution" and the "assembly line without human agents" were distorted to brand him as a "technocrat" wishing for "the process of production realized without workers, only with machines controlled by the gigantic brain of the computer", with "no strikes or strike movements, and moreover no revolutionary insurrections". According to Slava Gerovitch, "each critic carried criticism one step further, gradually inflating the significance of cybernetics until it was seen as a full embodiment of imperialist ideology".

Legitimization and rise: 1954–1961

The reformed academic culture of the Soviet Union after the death of Stalin and the reforms of the Khrushchev era allowed cybernetics to tear down the earlier ideological criticisms and redeem itself in the public view. To Soviet scientists, cybernetics emerged as a possible vector of escape from the ideological traps of Stalinism, replacing them with computational objectivity. The military computer scientist Anatoly Kitov recalled stumbling onto Cybernetics in the secret library of the Special Construction Bureau and realizing instantly that "cybernetics was not a bourgeois pseudo-science, as official publications considered it at the time, but the opposite—a serious, important science".
He joined with the dissident mathematician Alexey Lyapunov and, in 1952, presented a pro-cybernetic paper to Voprosy Filosofii, which the journal tacitly endorsed, though the Communist Party required that Lyapunov and Kitov present public lectures on cybernetics before its publication; 121 seminars were given in total from 1954 to 1955. A very different academic, the Soviet philosopher and former ideological watchdog Ernst Kolman, also joined this rehabilitation. In November 1954, Kolman presented a lecture at the Academy of Social Sciences condemning this stifling of cybernetics to a shocked audience, who had expected a lecture rehearsing the earlier Stalinist criticisms, and then marched down to the office of Voprosy Filosofii to have his lecture published. The beginning of a Soviet cybernetic movement was therefore first signalled by two articles, published together in the July–August 1955 volume of Voprosy Filosofii: "The Main Features of Cybernetics" by Sergei Sobolev, Alexey Lyapunov, and Anatoly Kitov, and "What is Cybernetics" by Ernst Kolman. According to Benjamin Peters, these "two Soviet articles set the stage for the revolution of cybernetics in the Soviet Union". The first article—authored by three Soviet military scientists—attempted to present the tenets of cybernetics as a coherent scientific theory, retooling it for Soviet use; the authors purposely avoided any discussion of philosophy and presented Wiener as an American anti-capitalist, in order to avoid any politically dangerous confrontation. They asserted that cybernetics' main tenets were: information theory; the theory of automatic high-speed electronic calculating machines as a theory of self-organizing logical processes; and the theory of automatic control systems (particularly the theory of feedback). In contrast, Kolman's defense of cybernetics mirrored the Stalinist criticisms it had endured.
Kolman created a spurious historiography of cybernetics (which inevitably found its origins in Soviet science) and corrected the supposed "deviations" of the anti-cybernetic philosophers, employing well-placed quotes from Marxist authorities and philosophical epithets (e.g. "idealist" or "vitalist") to imply that cybernetics' opponents fell into the same philosophical errors Marx and Lenin had criticized decades earlier within their dialectical materialist framework. With this, Soviet cybernetics began its journey towards legitimization. Academician Aksel Berg, at the time a deputy minister of defense, authored secret reports bemoaning the deficient state of information science in the USSR and pointing to the suppression of cybernetics as the prime culprit. Party officials allowed a small Soviet delegation to attend the First International Congress on Cybernetics in June 1956, and the delegates informed the Party of the extent to which the USSR was "lagging behind the developed countries" in computer technology. Unfavorable descriptions of cybernetics were removed from official literature, and in 1958 the first Russian translations of Wiener's Cybernetics and The Human Use of Human Beings were published. Alongside these translations, in 1958 the first Soviet journal on cybernetics, Problemy Kibernetiki (Problems of Cybernetics), was launched with Lyapunov as its editor. For the first congress of the International Federation of Automatic Control in 1960, Wiener came to Russia to lecture on cybernetics at the Polytechnic Museum. He arrived to find the booked hall swarmed with scientists eager to hear his lecture, some of whom sat in aisles and on stairs to hear him speak; several Soviet publications, including the formerly anti-cybernetic Voprosy Filosofii, crammed in to get interviews with Wiener. Wiener himself spoke to American newspapers about this enormous enthusiasm for cybernetic research.
In the Khrushchev Thaw, Soviet cybernetics had not only been legitimized as a science, but had come into vogue in Soviet academia. On 10 April 1959, Berg sent a report edited by Lyapunov to the Presidium of the Academy of Sciences, recommending the establishment of an organization dedicated to advancing cybernetics. The presidium determined that the Council on Cybernetics would be formed, with Berg as its chairman (owing to his strong administrative connections) and Lyapunov his deputy. This council was wide-reaching, subsuming as many as 15 disciplines as of 1967, from "cybernetic linguistics" to "legal cybernetics". During Khrushchev's relaxation of scientific culture, the Council on Cybernetics served as an umbrella organization for formerly suppressed research, including such subjects as non-Pavlovian physiology ("physiological cybernetics"), structural linguistics ("cybernetic linguistics"), and genetics ("biological cybernetics"). Thanks to Lyapunov, a further, 20-person Department of Cybernetics was created to solicit official funding for cybernetic research. Even with these institutions, Lyapunov still lamented that "the field of cybernetics in our country is not organized", and, from 1960 to 1961, worked with the department to establish an official Institute of Cybernetics. Lyapunov joined forces with the structural linguists, who had been authorized to create the Institute of Semiotics directed by Andrey Markov Jr., and together, in June 1961, they planned to create an Institute of Cybernetics. Despite these efforts, Lyapunov lost faith in the project after Khrushchev's refusal to build more Moscow scientific institutes, and the institute never emerged; instead, the Council on Cybernetics gained the formal powers of an institute, without any expansion of staff. Peak and decline: 1961–1980s Berg continued his campaign for Soviet cybernetics into the 1960s, as cybernetics entered the Soviet mainstream.
Berg's council sponsored pro-cybernetic programs in Soviet media. Twenty-minute radio broadcasts, entitled "Cybernetics in Our Lives", were produced; a series of broadcasts on Moscow TV detailed advances in computer technology; and hundreds of lectures were given before various party members and workers on the subject of cybernetics. In 1961, the council produced an official volume proffering cybernetics as a socialist science, entitled Cybernetics—in the Service of Communism. The work of the council was rewarded when, at the 22nd Party Congress, cybernetics was declared one of the "major tools of the creation of a communist society". Khrushchev declared the development of cybernetics an "imperative" in Soviet science. According to Gerovitch, this put cybernetics "in fashion" as "many career-minded scientists began using 'cybernetics' as a buzzword", and the movement swelled with its new membership. The CIA reported that the July 1962 'Conference on the Philosophical Problems of Cybernetics' received "approximately 1000 specialists, mathematicians, philosophers, physicists, economists, psychologists, biologists, engineers, linguists, physicians". American intelligence apparently bought into the hype, though it confused institutional enthusiasm with Soviet government policy. Special Assistant Arthur Schlesinger Jr. warned President John F. Kennedy that the Soviet commitment to cybernetics provided them "a tremendous advantage" in technology and economic productivity; in the absence of any complementary American program, Schlesinger wrote, "we are finished". In July 1962, Berg created a plan for the radical restructuring of the Council such that it would cover "practically all of Soviet science". This was met with a cold reception from many of the researchers of the council, with one cybernetician complaining, in a letter to Lyapunov, that "[t]here are almost no results from the Council. Berg only demands paperwork and strives for the expansion of the Council."
Lyapunov, disgruntled with Berg and the non-academic direction of cybernetics, refused to write for Cybernetics—in the Service of Communism and gradually lost his influence in cybernetics. As one memoirist put it, this resignation meant that "the center that had unified cybernetics disappeared, and cybernetics [would] naturally split into numerous branches." While the old guard of cyberneticians complained, the cybernetics movement as a whole was exploding, with the council subsuming 170 projects and 29 institutions by 1962, and 500 projects and 150 institutions by 1967. According to Gerovitch, "by the early 1970s, the cybernetics movement [...] no longer challenged the orthodoxy; instead, tactical uses of cyberspeak overshadowed the original reformist goals that inspired the first Soviet cyberneticians." The ideas which were once seen as controversial, and huddled under the umbrella organization of cybernetics, now entered the scientific mainstream, leaving cybernetics as a loose and incoherent ideological patchwork. Some cyberneticians, whose dissident styles had been sheltered by the cybernetics movement, now felt persecuted, and some, such as Valentin Turchin, Alexander Lerner, and Igor Mel'čuk, emigrated to escape this newfound scientific atmosphere. By the 1980s, cybernetics had lost its cultural relevance, replaced in Soviet scientific culture by the concept of 'informatics'. Notable Soviet cyberneticists
Aksel Berg (1893–1979), Deputy Minister of Defense of the Soviet Union (September 1953–November 1957)
Yuri Gastev (1928–1993), dissident who emigrated in 1981
Victor Glushkov (1923–1982), Soviet mathematician and founding father of Soviet cybernetics
Anatoly Kitov (1920–2005)
Andrey Kolmogorov (1903–1987)
Leonid Kraizmer (1912–2002)
Alexey Lyapunov (1911–1973)
Sergei Sobolev (1908–1989)
Notes References Bibliography External links Cybernetics Science and technology in the Soviet Union Computing in the Soviet Union
Cybernetics in the Soviet Union
[ "Technology" ]
3,642
[ "Computing in the Soviet Union", "History of computing" ]
59,798,889
https://en.wikipedia.org/wiki/Computational%20materials%20science
Computational materials science and engineering uses modeling, simulation, theory, and informatics to understand materials. The main goals include discovering new materials, determining material behavior and mechanisms, explaining experiments, and exploring materials theories. It is analogous to computational chemistry and computational biology as an increasingly important subfield of materials science. Introduction Just as materials science spans all length scales, from electrons to components, so do its computational sub-disciplines. While many methods and variations have been and continue to be developed, seven main simulation techniques, or motifs, have emerged. These computer simulation methods use underlying models and approximations to understand material behavior in more complex scenarios than pure theory generally allows and with more detail and precision than is often possible from experiments. Each method can be used independently to predict materials properties and mechanisms, to feed information to other simulation methods run separately or concurrently, or to directly compare or contrast with experimental results. One notable sub-field of computational materials science is integrated computational materials engineering (ICME), which seeks to use computational results and methods in conjunction with experiments, with a focus on industrial and commercial application. Major current themes in the field include uncertainty quantification and propagation throughout simulations for eventual decision making, data infrastructure for sharing simulation inputs and results, high-throughput materials design and discovery, and new approaches given significant increases in computing power and the continuing history of supercomputing. Materials simulation methods Electronic structure Electronic structure methods solve the Schrödinger equation to calculate the energy of a system of electrons and atoms, the fundamental units of condensed matter. 
Many variations of electronic structure methods exist, of varying computational complexity, with a range of trade-offs between speed and accuracy. Density functional theory Due to its balance of computational cost and predictive capability, density functional theory (DFT) has the most significant use in materials science. DFT most often refers to the calculation of the lowest energy state of the system; however, molecular dynamics (atomic motion through time) can be run with DFT computing the forces between atoms. While DFT and many other electronic structure methods are described as ab initio, there are still approximations and inputs. Within DFT there are increasingly complex, accurate, and slow approximations underlying the simulation, because the exact exchange-correlation functional is not known. The simplest model is the local-density approximation (LDA), becoming more complex with the generalized-gradient approximation (GGA) and beyond. An additional common approximation is to use a pseudopotential in place of core electrons, significantly speeding up simulations. Atomistic methods This section discusses the two major atomic simulation methods in materials science. Other particle-based methods include the material point method and particle-in-cell, most often used for solid mechanics and plasma physics, respectively. Molecular dynamics The term molecular dynamics (MD) is the historical name used to classify simulations of classical atomic motion through time. Typically, interactions between atoms are defined and fit to both experimental and electronic structure data with a wide variety of models, called interatomic potentials. With the interactions prescribed (forces), Newtonian motion is numerically integrated. The forces for MD can also be calculated using electronic structure methods based on either the Born-Oppenheimer approximation or Car-Parrinello approaches.
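As an illustrative sketch of this loop (not any particular production code), the following integrates two atoms interacting through a Lennard-Jones potential, a classic interatomic potential, with the velocity Verlet scheme; all parameter values are arbitrary reduced units, not taken from any fitted potential.

```python
# Minimal molecular dynamics sketch: two atoms interacting via a
# Lennard-Jones potential, integrated with velocity Verlet.
# Epsilon, sigma, mass, and timestep are illustrative reduced units.
EPS, SIG = 1.0, 1.0    # LJ well depth and length scale
MASS, DT = 1.0, 0.001  # particle mass and integration timestep

def lj_force(r):
    """Force magnitude between two atoms at separation r (positive = repulsive)."""
    sr6 = (SIG / r) ** 6
    return 24.0 * EPS * (2.0 * sr6 * sr6 - sr6) / r

def lj_energy(r):
    sr6 = (SIG / r) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6)

# Two atoms on the x axis, started slightly compressed relative to the
# potential minimum at r = 2**(1/6) * SIG, so they oscillate as a bound pair.
x = [0.0, 1.05]
v = [0.0, 0.0]

def total_energy():
    return lj_energy(x[1] - x[0]) + 0.5 * MASS * (v[0] ** 2 + v[1] ** 2)

e0 = total_energy()
for _ in range(10000):
    f = lj_force(x[1] - x[0])        # force on atom 1 (atom 0 feels -f)
    # velocity Verlet: half-kick, drift, recompute force, half-kick
    v[0] -= 0.5 * DT * f / MASS
    v[1] += 0.5 * DT * f / MASS
    x[0] += DT * v[0]
    x[1] += DT * v[1]
    f = lj_force(x[1] - x[0])
    v[0] -= 0.5 * DT * f / MASS
    v[1] += 0.5 * DT * f / MASS

drift = abs(total_energy() - e0)
```

Because velocity Verlet is a symplectic integrator, the total energy stays conserved to within a small bounded drift, which the final line measures; this is the main reason the scheme is a standard choice for MD.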
The simplest models include only van der Waals-type attraction and a steep repulsion to keep atoms apart; the form of these models is derived from dispersion forces. Increasingly complex models include effects due to Coulomb interactions (e.g. ionic charges in ceramics), covalent bonds and angles (e.g. polymers), and electronic charge density (e.g. metals). Some models use fixed bonds, defined at the start of the simulation, while others have dynamic bonding. More recent efforts strive for robust, transferable models with generic functional forms: spherical harmonics, Gaussian kernels, and neural networks. In addition, MD can be used to simulate groupings of atoms within generic particles, called coarse-grained modeling, e.g. creating one particle per monomer within a polymer. Kinetic Monte Carlo Monte Carlo in the context of materials science most often refers to atomistic simulations relying on rates. In kinetic Monte Carlo (kMC), rates for all possible changes within the system are defined and probabilistically evaluated. Because there is no restriction of directly integrating motion (as in molecular dynamics), kMC methods are able to simulate significantly different problems over much longer timescales. Mesoscale methods The methods listed here are among the most common and the most directly tied to materials science specifically, whereas atomistic and electronic structure calculations are also widely used in computational chemistry and computational biology, and continuum-level simulations are common in a wide array of computational science application domains. Other methods within materials science include cellular automata for solidification and grain growth, Potts model approaches for grain evolution and other Monte Carlo techniques, as well as direct simulation of grain structures analogous to dislocation dynamics.
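The rate-based event selection underlying kMC can be sketched with a toy model of adatoms hopping on a one-dimensional lattice; the lattice size, initial occupancy, and single hop rate below are invented for illustration, not taken from any real system.

```python
import math
import random

# Kinetic Monte Carlo sketch: adatoms hopping on a periodic 1D lattice.
# Each allowed hop is an "event" with a rate; every step chooses one event
# with probability proportional to its rate, then advances simulated time
# by an exponentially distributed residence time. Rates are illustrative.
random.seed(0)
L = 20                 # lattice sites (periodic)
occupied = {0, 1, 10}  # initial adatom positions
RATE_HOP = 1.0         # hop rate to an empty neighbour (arbitrary units)

def events():
    """Enumerate all allowed hops as (site, destination, rate)."""
    evs = []
    for s in occupied:
        for d in ((s - 1) % L, (s + 1) % L):
            if d not in occupied:
                evs.append((s, d, RATE_HOP))
    return evs

t = 0.0
for _ in range(1000):
    evs = events()
    total = sum(rate for _, _, rate in evs)
    # choose an event with probability rate / total
    pick = random.random() * total
    acc = 0.0
    for s, d, rate in evs:
        acc += rate
        if pick < acc:
            occupied.discard(s)
            occupied.add(d)
            break
    # residence time: exponential with mean 1 / (total rate)
    t += -math.log(random.random()) / total
```

Because each step jumps directly from one event to the next, with time advancing by the mean residence time rather than a fixed femtosecond-scale increment, this is what lets kMC reach timescales far beyond molecular dynamics.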
Dislocation dynamics Plastic deformation in metals is dominated by the movement of dislocations, which are line defects in crystalline materials. Rather than simulating the movement of tens of billions of atoms to model plastic deformation, which would be prohibitively computationally expensive, discrete dislocation dynamics (DDD) simulates the movement of dislocation lines. The overall goal of dislocation dynamics is to determine the movement of a set of dislocations given their initial positions, an external load, and the interacting microstructure. From this, macroscale deformation behavior can be extracted from the movement of individual dislocations by theories of plasticity. A typical DDD simulation goes as follows. A dislocation line can be modelled as a set of nodes connected by segments, similar to a mesh used in finite element modelling. Then, the forces on each of the nodes of the dislocation are calculated. These forces include any externally applied forces, forces due to the dislocation interacting with itself or other dislocations, forces from obstacles such as solutes or precipitates, and the drag force on the dislocation due to its motion, which is proportional to its velocity. The general method behind a DDD simulation is to calculate the forces on a dislocation at each of its nodes, from which the velocity of the dislocation at its nodes can be extracted. Then, the dislocation is moved forward according to this velocity and a given timestep. This procedure is then repeated. Over time, the dislocation may encounter enough obstacles that it can no longer move and its velocity is near zero, at which point the simulation can be stopped and a new experiment can be conducted with this new dislocation arrangement. Both small-scale and large-scale dislocation simulations exist.
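A minimal sketch of the nodal loop just described, assuming overdamped motion (velocity proportional to force) and a crude line-tension approximation for the dislocation's self-interaction; the stress, stiffness, and drag values are illustrative and not drawn from any real DDD code such as ParaDiS.

```python
# Discrete dislocation dynamics sketch: a pinned dislocation segment bowing
# out under an applied shear stress. Each node carries a Peach-Koehler-like
# driving force (tau * b per unit length) plus a line-tension restoring
# force from its neighbours; overdamped motion gives v = F / B.
# All values are illustrative reduced units.
N = 21               # nodes along the line
TAU_B = 1.0          # applied resolved shear stress times Burgers vector
LINE_TENSION = 50.0  # stiffness of the line-tension approximation
DRAG_B = 1.0         # viscous drag coefficient
DT = 1e-4

# Node displacements perpendicular to the initially straight line;
# the end nodes are pinned (e.g. at strong obstacles), as in a Frank-Read source.
y = [0.0] * N

for _ in range(50000):
    forces = [0.0] * N
    for i in range(1, N - 1):
        curvature = y[i - 1] - 2.0 * y[i] + y[i + 1]
        forces[i] = TAU_B + LINE_TENSION * curvature
    for i in range(1, N - 1):
        v = forces[i] / DRAG_B  # overdamped mobility: velocity ~ force
        y[i] += v * DT          # move the node forward one timestep
```

Under a constant resolved stress the line relaxes to a bowed-out equilibrium between its pinning points, the discrete analogue of the parabolic bow-out predicted by line-tension theory.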
For example, 2D dislocation models have been used to model the glide of a dislocation through a single plane as it interacts with various obstacles, such as precipitates. This further captures phenomena such as shearing and bowing of precipitates. The drawback of 2D DDD simulations is that phenomena involving movement out of a glide plane cannot be captured, such as cross slip and climb, although such simulations are computationally cheaper to run. Small 3D DDD simulations have been used to simulate phenomena such as dislocation multiplication at Frank-Read sources, and larger simulations can capture work hardening in a metal with many dislocations, which interact with each other and can multiply. A number of 3D DDD codes exist, such as ParaDiS, microMegas, and MDDP, among others. There are other methods for simulating dislocation motion, ranging from full molecular dynamics simulations to continuum dislocation dynamics and phase field models. Phase field Phase field methods are focused on phenomena dependent on interfaces and interfacial motion. Both the free energy function and the kinetics (mobilities) are defined in order to propagate the interfaces within the system through time. Crystal plasticity Crystal plasticity simulates the effects of atomic-scale dislocation motion without directly resolving either atoms or dislocations. Instead, the crystal orientations are updated through time with elasticity theory, plasticity through yield surfaces, and hardening laws. In this way, the stress-strain behavior of a material can be determined. Continuum simulation Finite element method Finite element methods divide systems in space and solve the relevant physical equations throughout that decomposition, for phenomena ranging from thermal and mechanical to electromagnetic. It is important to note from a materials science perspective that continuum methods generally ignore material heterogeneity and assume local materials properties to be identical throughout the system.
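Returning to the phase-field method: the combination of a free-energy functional and a mobility can be sketched in one dimension with Allen-Cahn dynamics, a standard non-conserved phase-field model; the double-well depth, gradient coefficient, and mobility below are illustrative reduced units.

```python
# Phase-field sketch: 1D Allen-Cahn dynamics. A double-well bulk free energy
# density W * phi^2 * (1 - phi)^2 plus a gradient-energy term
# (KAPPA / 2) * (dphi/dx)^2 defines the free energy; a mobility M then
# relaxes the order parameter phi (0 = one phase, 1 = the other) toward
# equilibrium, forming a diffuse interface. Values are illustrative.
N, DX, DT = 100, 1.0, 0.05
W, KAPPA, M = 1.0, 1.0, 1.0

# Sharp initial interface in the middle of the domain; the end values
# (bulk phases) are held fixed.
phi = [0.0] * (N // 2) + [1.0] * (N // 2)

def dfdphi(p):
    """Derivative of the double-well bulk free energy density."""
    return 2.0 * W * p * (1.0 - p) * (1.0 - 2.0 * p)

for _ in range(2000):
    new = phi[:]
    for i in range(1, N - 1):
        lap = (phi[i - 1] - 2.0 * phi[i] + phi[i + 1]) / DX ** 2
        # Allen-Cahn: dphi/dt = -M * (df/dphi - kappa * laplacian(phi))
        new[i] = phi[i] + DT * M * (KAPPA * lap - dfdphi(phi[i]))
    phi = new
```

The sharp step relaxes into a smooth, tanh-like diffuse interface while the bulk values stay pinned near the free-energy minima at 0 and 1; the interface width is set by the competition between the well depth and the gradient coefficient.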
Materials modeling methods All of the simulation methods described above contain models of materials behavior. The exchange-correlation functional for density functional theory, the interatomic potential for molecular dynamics, and the free energy functional for phase field simulations are examples. The degree to which each simulation method is sensitive to changes in the underlying model can be drastically different. Models themselves are often directly useful for materials science and engineering, not only to run a given simulation. CALPHAD Phase diagrams are integral to materials science, and the development of computational phase diagrams stands as one of the most important and successful examples of ICME. The Calculation of PHase Diagram (CALPHAD) method does not, generally speaking, constitute a simulation; instead, its models and optimizations result in phase diagrams that predict phase stability, which is extremely useful in materials design and materials process optimization. Comparison of methods For each material simulation method, there is a fundamental unit, a characteristic length and time scale, and associated model(s). Multi-scale simulation Many of the methods described can be combined, either running simultaneously or separately, feeding information between length scales or accuracy levels. Concurrent multi-scale Concurrent simulations in this context are methods used directly together, within the same code, with the same time step, and with direct mapping between the respective fundamental units. One type of concurrent multiscale simulation is quantum mechanics/molecular mechanics (QM/MM). This involves running a small portion (often a molecule or protein of interest) with a more accurate electronic structure calculation and surrounding it with a larger region of fast-running, less accurate classical molecular dynamics.
Many other methods exist, such as atomistic-continuum simulations, similar to QM/MM except using molecular dynamics and the finite element method as the fine (high-fidelity) and coarse (low-fidelity) methods, respectively. Hierarchical multi-scale Hierarchical simulation refers to simulations which directly exchange information between methods, but are run in separate codes, with differences in length and/or time scales handled through statistical or interpolative techniques. A common method of accounting for crystal orientation effects together with geometry embeds crystal plasticity within finite element simulations. Model development Building a materials model at one scale often requires information from another, lower scale. Some examples are included here. The most common scenario for classical molecular dynamics simulations is to develop the interatomic model directly using electronic structure calculations, most often density functional theory. Classical MD can therefore be considered a hierarchical multi-scale technique, as well as a coarse-grained method (ignoring electrons). Similarly, coarse-grained molecular dynamics simulations are reduced or simplified particle simulations directly trained from all-atom MD simulations. These particles can represent anything from carbon-hydrogen pseudo-atoms and entire polymer monomers to powder particles. Density functional theory is also often used to train and develop CALPHAD-based phase diagrams. Software and tools Each modeling and simulation method has a combination of commercial, open-source, and lab-based codes. Open source software is becoming increasingly common, as are community codes which combine development efforts. Examples include Quantum ESPRESSO (DFT), LAMMPS (MD), ParaDiS (DD), FiPy (phase field), and MOOSE (continuum). In addition, open software from other communities is often useful for materials science, e.g. GROMACS, developed within computational biology.
Conferences All major materials science conferences include computational research. Focusing entirely on computational efforts, the TMS ICME World Congress meets biennially. The Gordon Research Conference on Computational Materials Science and Engineering began in 2020. Many other smaller, method-specific conferences are also regularly organized. Journals Many materials science journals, as well as those from related disciplines, welcome computational materials research. Those dedicated to the field include Computational Materials Science, Modelling and Simulation in Materials Science and Engineering, and npj Computational Materials. Related fields Computational materials science is one sub-discipline of both computational science and computational engineering, containing significant overlap with computational chemistry and computational physics. In addition, many atomistic methods are common between computational chemistry, computational biology, and CMSE; similarly, many continuum methods overlap with many other fields of computational engineering. See also References External links TMS World Congress on Integrated Computational Materials Engineering (ICME) nanoHUB computational materials resources Computational science Computational physics
Computational materials science
[ "Physics", "Mathematics" ]
2,619
[ "Computational science", "Applied mathematics", "Computational physics" ]
59,799,789
https://en.wikipedia.org/wiki/Ogbonnaya%20Onu
Ogbonnaya Onu (1 December 1951 – 11 April 2024) was a Nigerian politician, author and engineer. He was the first civilian governor of Abia State and was the minister of science, technology and innovation of Nigeria from November 2015 until his resignation in 2022, making him the longest-serving minister of that ministry. Biography Ogbonnaya Onu was born on 1 December 1951 to the family of Eze David Aba Onu in Amata, Uburu, in the Ohaozara Local Government Area of the then Eastern Region (later Imo State, then Abia State, and now Ebonyi State), Nigeria. He started his education at Izzi High School in Abakaliki, now the Ebonyi State capital, where he obtained grade one with distinction in his West African School Certificate Examination. He also sat for the High School Examination at the College of Immaculate Conception (C.I.C.) Enugu, graduating as the overall best student. He proceeded to the University of Lagos and graduated with a first class degree in chemical engineering in 1976. He went on to doctoral studies at the University of California, Berkeley, and obtained a Doctor of Philosophy degree in chemical engineering in 1980. Onu died on 11 April 2024, at the age of 72. Career Teaching career After his graduation from the University of Lagos, Ogbonnaya Onu became a teacher at St. Augustine's Seminary, Ezzamgbo, Ebonyi State. After the completion of his doctoral studies at the University of California, Berkeley, Onu became a lecturer in the Department of Chemical Engineering at the University of Port Harcourt, and later became the pioneer head of the department.
He also served as the acting dean of the Faculty of Engineering and was elected as a member of the Governing Council of the university. Political career Ogbonnaya Onu started his political career as an aspirant for a senatorial seat in the old Imo State on the platform of the National Party of Nigeria (NPN). He contested for the position of Governor of Abia State in 1991 under the umbrella of the National Republican Convention and won. He was sworn in as the first executive governor of the state in January 1992. He was the first chairman of the Conference of Nigerian Elected Governors. In 1999, he was the presidential flag bearer for the All People's Party but relinquished the position to Olu Falae after a merger of his party with the Alliance for Democracy, which lost to Olusegun Obasanjo of the PDP. He became the national party chairman of the All Nigerian People's Party in 2010. In 2013, he and his party (ANPP) successfully merged with the Action Congress of Nigeria (ACN), Congress for Progressive Change (CPC), Democratic People's Party (DPP) and some members of the All Progressives Grand Alliance (APGA) to form the All Progressives Congress (APC). In November 2015, he was appointed Minister of Science and Technology by President Muhammadu Buhari. On 21 August 2019, he was sworn in again as Minister of Science and Technology by President Muhammadu Buhari. Awards and achievements Onu was a certified member of the Council for the Regulation of Engineering in Nigeria, a fellow of the Nigerian Academy of Engineering, and a fellow of the Nigerian Society of Chemical Engineers. In October 2022, the Nigerian national honour of Commander of the Order of the Niger (CON) was conferred on him by President Muhammadu Buhari. Controversies Onu said Nigeria would begin local production of pencils by 2018, which he said would provide 400,000 jobs. As of 2019, he said that production of pencils had not commenced.
In 1999, prior to the presidential election and the alliance between the All People's Party and the Alliance for Democracy, Onu was involved in a dispute within the APP/AD alliance over the choice of Olu Falae as the joint presidential flag bearer. See also List of people from Ebonyi State Federal Ministry of Science, Technology and Innovation Cabinet of Nigeria References 1951 births 2024 deaths Governors of Abia State University of Lagos alumni University of California, Berkeley alumni People from Ebonyi State Writers from Ebonyi State Nigerian chemical engineers Chemical engineering academics Commanders of the Order of the Niger
Ogbonnaya Onu
[ "Chemistry" ]
859
[ "Chemical engineering academics", "Chemical engineers" ]
59,799,798
https://en.wikipedia.org/wiki/Methylfluorophosphonylcholine
Methylfluorophosphonylcholine (MFPCh) is an extremely toxic chemical compound related to the G-series nerve agents. It is an extremely potent acetylcholinesterase inhibitor, around 100 times more potent than sarin at inhibiting acetylcholinesterase in vitro and around 10 times more potent in vivo, depending on the route of administration and animal species tested. MFPCh is resistant to oxime reactivators, meaning that acetylcholinesterase inhibited by MFPCh cannot be reactivated by cholinesterase reactivators. MFPCh also acts directly on acetylcholine receptors. MFPCh is a relatively unstable compound and degrades rapidly in storage, so despite its enhanced toxicity it was not deemed suitable for weaponisation for military use. See also GV (nerve agent) Sarin TMTFA References Acetylcholinesterase inhibitors Methylphosphonofluoridates Choline esters
Methylfluorophosphonylcholine
[ "Chemistry" ]
206
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
59,800,917
https://en.wikipedia.org/wiki/IEEE%2011073%20service-oriented%20device%20connectivity
The IEEE 11073 service-oriented device connectivity (SDC) family of standards defines a communication protocol for point-of-care (PoC) medical devices. The main purpose is to enable manufacturer-independent medical device-to-device interoperability. Furthermore, interconnection between medical devices and medical information systems is enabled. However, IEEE 11073 SDC does not compete with established and emerging standards like HL7 v2 or HL7 FHIR. IEEE 11073 SDC is part of the established ISO/IEEE 11073 family of standards. IEEE 11073 SDC is based on the paradigm of a service-oriented architecture (SOA). The IEEE 11073 SDC family of standards currently comprises three parts: Core Standards, Participant Key Purpose (PKP) standards, and Devices Specialisation (DevSpec) standards. The Core Standards consist of a transport standard, ISO/IEEE 11073-20702, called Medical Devices Communication Profile for Web Services, a Domain Information and Service Model (ISO/IEEE 11073-10207), and an Architecture and Binding definition (ISO/IEEE 11073-20701). While the three Core standards have been approved and published by the IEEE as well as by ISO, PKPs and DevSpecs are currently under development. The concepts have been technically and clinically evaluated. Comprehensive demonstrators were shown, for example, at the conhIT exhibitions in 2016 and 2017. IEEE 11073 SDC Core Standards ISO/IEEE 11073-20702 The standard "ISO/IEEE International Standard for Health informatics - Point-of-care medical device communication - Part 20702: Medical devices communication profile for web services" (short Medical DPWS or MDPWS) enables the foundational interoperability between medical devices. This includes the ability of medical devices to exchange data safely in a distributed system and the ability to discover network participants dynamically. MDPWS is derived from the OASIS standard Devices Profile for Web Services (DPWS).
It defines extensions and restrictions to meet the safety requirements of medical devices in high-acuity environments. ISO/IEEE 11073-10207 The standard "ISO/IEEE International Standard - Health informatics--Point-of-care medical device communication - Part 10207: Domain Information and Service Model for Service-Oriented Point-of-Care Medical Device Communication" is derived from the IEEE 11073-10201 Domain Information Model. It is designed to meet the requirements of networked systems of medical devices establishing multipoint-to-multipoint communication. The Domain Information Model defines the capability description of the medical devices as well as the representation of their current state. The Service Model specifies the way in which service consumers can interact with medical devices implementing the role of a service provider. IEEE 11073-10207 enables the structural interoperability between medical devices. The non-normative name is Basic Integrated Clinical Environment Protocol Specification (BICEPS). ISO/IEC/IEEE 11073-20701 The "ISO/IEC/IEEE International Standard for Health informatics - Device interoperability - Part 20701: Point-of-care medical device communication--Service oriented medical device exchange architecture and protocol binding" defines the overall service-oriented architecture, specifies the binding between IEEE 11073-20702 and IEEE 11073-10207, and specifies the binding to other standards like the Network Time Protocol (NTP) or Differentiated Services (DiffServ) for aspects like time synchronization and Quality of Service (QoS) requirements. Together with the usage of terminology standards (like IEEE 11073-10101), this standard contributes to the semantic interoperability of medical devices. Because it binds the other SDC standards together, it is often referred to as "SDC GLUE". IEEE 11073-1070X Participant Key Purpose (PKP) Series PKPs describe process requirements according to the role of a network participant.
While P11073-10700 defines the Base PKP with basic requirements for participating providers and consumers, the three additional PKP standards focus on specific functionalities: providing and consuming information in terms of metric data (IEEE P11073-10701), providing and consuming alerts (IEEE P11073-10702), and providing and consuming external control functionalities (IEEE P11073-10703). PKPs are thus independent of the particular medical devices and their concrete medical use case. However, they mainly restrict the IEEE 11073 SDC Core standards to enable safe and interoperable medical device systems and to facilitate the approval process. IEEE 11073-1072X Devices Specialisation (DevSpec) Series In contrast to PKPs, the DevSpecs are standards for particular classes of medical devices. DevSpecs describe the way the devices are modelled in the network representation and define requirements for the interaction of provider and consumer via SDC, if necessary. Currently, the PoCSpec project is developing DevSpecs for High-Frequency Surgical Equipment (IEEE P11073-10721), endoscopic camera and light source (IEEE P11073-10722 and -10723), insufflator (IEEE P11073-10724), and medical suction and irrigation pump (IEEE P11073-10725). Modules that can be used by different types of device are defined in the so-called Module Specifications (ModSpecs, IEEE P11073-10720). Open Source Implementations There are open source libraries available implementing the IEEE 11073 SDC standards: SDCLib/C (written in C++, formerly known as OSCLib) SDCLib/J (written in Java, formerly known as SoftICE) SDCLib/J (fork) (written in Java, fork of the former main author which implements the latest features) SDCri (SDC Reference Implementation) (written in Java) sdc11073 (written in Python, formerly known as pySDC) protoSDC-rs (written in Rust) openSDC (written in Java, not maintained since 2019) References Computing in medical imaging 11073 IEEE standards Health standards
IEEE 11073 service-oriented device connectivity
[ "Technology" ]
1,319
[ "Computer standards", "IEEE standards" ]
59,800,990
https://en.wikipedia.org/wiki/NGC%204278
NGC 4278 is an elliptical galaxy located in the constellation Coma Berenices. It is located at a distance of circa 55 million light years from Earth, which, given its apparent dimensions, means that NGC 4278 is about 65,000 light years across. It was discovered by William Herschel on March 13, 1785. NGC 4278 is part of the Herschel 400 Catalogue and can be found about one and three-quarters of a degree northwest of Gamma Comae Berenices even with a small telescope. Characteristics NGC 4278 is an elliptical galaxy. Its nucleus has been found to be active (AGN) and based on its spectrum has been identified as a LINER. The most accepted theory for the power source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. In the centre of NGC 4278 lies a supermassive black hole with an estimated mass based on stellar velocity dispersion. The X-ray emission of the nucleus is consistent with that of a black hole fed by a low radiative efficiency accretion flow. The nucleus is a source of radio waves and two small symmetric S-shaped radio jets have been observed spanning 20 mas each, which corresponds to 1.4 parsec at the distance of NGC 4278, emanating from the central source. The nucleus also hosts a compact ultraviolet source, which features strong variability in the form of flares. One such flare was observed between June 1994 and January 1995, when the nuclear source became 1.6 times brighter in six months. Similar flares have also been observed in other low luminosity AGN. Variability has also been observed in X-rays. The galaxy has been found to brighten by a factor of 5 within three years, while fluctuations have been observed in shorter time periods, even within an hour, with the flux increasing by 10% in one observation by XMM-Newton. The spectrum obtained by XMM-Newton can be accounted for by an absorbed power law, with a column density of the order of 10²⁰ cm⁻².
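The quoted jet scale can be checked with the standard small-angle relation, size = distance × angle(rad). This is a back-of-the-envelope sketch: the distance in parsecs is an assumption derived from the quoted ~55 million light years (1 pc ≈ 3.262 ly), and the article's 1.4 pc figure corresponds to a somewhat smaller adopted distance.

```python
ARCSEC_PER_RADIAN = 206265.0

def linear_size_pc(distance_pc, angle_arcsec):
    """Physical extent subtended by a small angle at a given distance."""
    return distance_pc * angle_arcsec / ARCSEC_PER_RADIAN

distance_pc = 55e6 / 3.262                    # 55 million light years ≈ 16.9 million parsecs
jet_pc = linear_size_pc(distance_pc, 20e-3)   # each jet spans ~20 mas; ≈ 1.6 pc
```

The result lands at the same parsec scale as the quoted value, which is as much agreement as the distance uncertainty allows.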
The Fe Kα emission line has not been detected, as is typical for LINERs. The spectral energy distribution of NGC 4278 resembles a LINER at lower fluxes while at higher fluxes it resembles a low luminosity Seyfert galaxy. Dust features have been observed in the central part of the galaxy and in the area northwest of the nucleus. The dust forms knots and filaments that spiral down to the nucleus. Moreover, the galaxy, contrary to most elliptical galaxies which lack neutral hydrogen emission, has been found to possess a massive HI disk, probably formed after the accretion of a dwarf satellite galaxy. The total mass of the HI disk is estimated to be . The disk rotates in the same sense as the stars, but is misaligned by 20° to 70°. Molecular clouds, as indicated by CO emission, have also been detected in the galaxy. The galaxy has been observed with the InfraRed Spectrograph (IRS) on board the Spitzer Space Telescope, examining the dust features of NGC 4278. Multiphase gas and dust have been observed in the same elongated features. Another uncommon finding for an elliptical galaxy is the detection of emission by polycyclic aromatic hydrocarbons (PAHs) and of strong [Si II] 34.8-μm emission. PAHs in other elliptical galaxies are believed to be destroyed by the hot interstellar medium. Emission by molecular hydrogen and ionised gas has also been reported. The observed emission of the gas in the nuclear region has been suggested to be the result of the accretion of cold gas from the HI disk. NGC 4278 is home to a larger than average number of globular clusters compared with galaxies of similar luminosity, with an estimated total number of . As has been found in other galaxies, the colour distribution of the globular clusters in the galaxy features bimodality, with the clusters forming a red and a blue subpopulation.
The blue clusters have been found to be larger than the red ones at the same galactocentric distance, while the size of the clusters increases with galactocentric radius. Nearby galaxies NGC 4278 has been identified as a member of a galaxy group known as NGC 4274 or NGC 4062 group. Other members of this group are NGC 4020, NGC 4062, NGC 4136, NGC 4173, NGC 4203, NGC 4245, NGC 4251, NGC 4274, NGC 4283, NGC 4310, NGC 4314, NGC 4359, NGC 4414, NGC 4509, and NGC 4525. Another survey placed NGC 4278 in the same group with NGC 4631, NGC 4656, NGC 4559, NGC 4448, and NGC 4414. It is part of the Coma I Group which is part of the Virgo Supercluster. NGC 4283 lies 3.5 arcminutes to the northeast and NGC 4286 9 arcminutes northeast of NGC 4278 in the sky. NGC 4274 lies about 20 arcminutes north of NGC 4278. See also IC 1459 - a similar elliptical galaxy References External links NGC 4278 on SIMBAD Elliptical galaxies Radio galaxies Coma Berenices Coma I Group 4278 07386 39764 Astronomical objects discovered in 1785 Discoveries by William Herschel
NGC 4278
[ "Astronomy" ]
1,124
[ "Coma Berenices", "Constellations" ]
59,802,920
https://en.wikipedia.org/wiki/Puffin%20Island%20virus
Puffin Island virus, is a strain of Dugbe orthonairovirus belonging to the Hughes serogroup. References Nairoviridae Infraspecific virus taxa
Puffin Island virus
[ "Biology" ]
38
[ "Virus stubs", "Viruses" ]
59,803,219
https://en.wikipedia.org/wiki/Chernobyl%20groundwater%20contamination
The Chernobyl disaster remains the worst and most damaging nuclear catastrophe, one which altered the radioactive background of the entire Northern Hemisphere. It happened in April 1986 on the territory of the former Soviet Union (modern Ukraine). The catastrophe raised radiation levels in some parts of Europe and North America to nearly one million times the pre-disaster state. Air, water, soils, vegetation and animals were contaminated to a varying degree. Apart from Ukraine and Belarus as the worst hit areas, adversely affected countries included Russia, Austria, Finland and Sweden. The full impact on aquatic systems, primarily the adjacent valleys of the Pripyat and Dnieper rivers, is still unexplored. Substantial groundwater contamination is one of the gravest environmental impacts caused by the Chernobyl disaster. As a part of the overall freshwater damage, it relates to so-called “secondary” contamination, caused by the delivery of radioactive materials through unconfined aquifers to the groundwater network. It proved to be particularly challenging because groundwater basins, especially deep-lying aquifers, were traditionally considered invulnerable to diverse extraneous contaminants. To the surprise of scientists, radionuclides of Chernobyl origin were found even in deep-lying waters with formation periods of several hundred years. History Subsurface water was especially affected by radioactivity in the 30-km zone of evacuation (the so-called “exclusion zone”) surrounding the Chernobyl Nuclear Power Plant, or CNPP (Kovar & Herbert, 1998). The major and most hazardous contaminant from the perspective of hydrological spread was Strontium-90.
This nuclide showed the most active mobility in subsurface waters; its rapid migration through the groundwater aquifer was first discovered in 1988–1989. Other perilous nuclear isotopes included Cesium-137, Cesium-134, Ruthenium-106, Plutonium-239, Plutonium-240 and Americium-241. The primary source of contamination was the damaged 4th reactor, the site of the accident itself, where the concentration of Strontium-90 initially exceeded the admissible levels for drinking water by a factor of 10³–10⁴. The reactor remained an epicenter of irradiation even after the emergency personnel built the “Sarcophagus”, or “Shelter”, a protective construction aimed to isolate it from the environment. The structure proved to be non-hermetic, permeable to rainfall, snow and dew over many parts of its 1,000 m² area. Additionally, high amounts of cesium, tritium and plutonium were delivered to groundwater due to leakage of enriched water from the 4th reactor while building of the “Shelter” was in progress. As a result, considerable amounts of water condensed inside the “Shelter” and absorbed radiation from nuclide-containing dust and fuels. Although most of this water evaporated, some portion of it leaked to groundwater from the surface layers under the reactor chambers. Other sources of groundwater contamination included: radioactive waste dumps on the territory of the “exclusion zone”; cooling water reservoirs connected with the aquifer; the initial radioactive fallout which took place in the first hours after the accident; and forest fires that led to accelerated spread of contaminated particles on soils of the surrounding area. On the whole, researchers estimated that nearly 30% of the overall surface contamination may have accumulated in the underground rock medium. This discovery demonstrates, on the one hand, the hazardous scale of underground radionuclide migration and, on the other, the important function of igneous rock as a protective shield against the further spread of contaminants.
Recent revelations of facts concealed by the Soviets show that the problem of groundwater radioactive contamination in the Chernobyl zone existed long before the actual disaster. Analyses conducted in 1983–1985 showed radioactivity exceeding the standards by a factor of 1.5–2, a result of earlier accidental malfunctions of the CNPP in 1982. When the catastrophe occurred, groundwater irradiation was caused by contamination of land in the area of the wrecked fourth reactor. Furthermore, subsurface water was contaminated through the unconfined aquifer in proportion to the contamination of soil by isotopes of Strontium and Caesium. The upper groundwater aquifer and most of the artesian aquifers were damaged first, due to massive surface contamination with the radioactive isotopes Strontium-90 and Cesium-137. At the same time, considerable levels of radioactive content were recorded on the periphery of the exclusion zone, including part of the potable water delivery system. This proved that radioactive contaminants were migrating through the groundwater aquifers. After the disaster, the Soviet government took delayed and inefficient measures aimed at neutralizing the consequences of the accident. The issue of groundwater contamination was improperly addressed in the first several months after the disaster, leading to colossal financial expenses with negligible result. At the same time, proper monitoring of the situation was mostly absent. The primary efforts of disaster relief workers were directed at preventing contamination of surface waters. Large-scale radionuclide content in the underground water was monitored and detected only in April–May 1987, almost a year after the disaster. Migration pathways of contamination Hydrological and geological conditions in the Chernobyl area unfortunately promoted rapid radionuclide migration into the subsurface water network.
These factors include flat terrain, abundant precipitation and highly permeable sandy sediments. The main natural factors of nuclide migration in the region can be divided into four groups: weather- and climate-related (evaporation and precipitation frequency, intensity and distribution); geological (sediment permeability, drainage regimes, forms of vegetation); soil-borne (physical, hydrological and mechanical properties of the land); and lithological (terrain structures and types of rock). In meliorated areas migration processes are additionally influenced by anthropogenic drivers related to human agricultural activities. In this relation, the specific parameters and type of drainage regime, melioration practices, water control and sprinkling can substantially accelerate the natural rates of migration of contaminants. For example, artificial drainage leads to a substantial increase in absorption and flushing rates. These technological factors are particularly significant for the regions along the Pripyat and Dnieper rivers, which are almost totally subject to artificial irrigation and drainage within the network of constructed reservoirs and dams. At the same time, both natural and artificial factors of migration differ in importance for different contaminants. The primary way Strontium-90 is transported to the groundwater is by infiltration from contaminated soils and subsequent transition through the porous surfaces of the unconfined aquifer. Researchers have also identified two additional migration pathways for this radionuclide. The first is “technogenic” transfer, caused by poor construction of wells for water withdrawal or insufficient quality of the materials used for their casings. During electric pumping of deep-lying artesian water, the stream passes unprotected through contaminated layers of the upper aquifers and absorbs radioactive particles before reaching the well.
This way of contamination was experimentally verified at the Kiev water intake wells. The other anomalous radionuclide migration pathway is through weak zones of crystalline rocks. Research by the Center of Radio-ecological Studies of the National Academy of Sciences of Ukraine showed that the crustal surface has unconsolidated zones characterized by increased electrical conductivity, as well as higher moisture and emanation capacity. As to Cesium-137, this nuclide demonstrates lower migration potential in Chernobyl soils and aquifers. Its mobility is hampered by such factors as: clay minerals which fix radionuclides in the rock; absorption and neutralization of isotopes through ion exchange with other chemical components of the water; partial neutralization by vegetation metabolic cycles; and overall radioactive decay. The heavy isotopes of Plutonium and Americium have even lower transportation capacity both in and outside the exclusion zone. However, their hazardous potential should not be discounted, considering their extremely long half-lives and unpredictable geochemical behavior. Agricultural damage Groundwater transportation of radionuclides is one of the key pathways of contamination of lands engaged in agricultural production. In particular, due to vertical migration with rises in water levels, radioactive particles infiltrate soils and subsequently get into plants through their roots. This leads to internal irradiation of animals and people who consume the contaminated vegetables. The situation is aggravated by the predominantly rural type of settlement in the Chernobyl area, with most of the population engaged in active agricultural production. It forces the authorities either to withdraw the contaminated areas near Chernobyl from agricultural use or to spend funds on excavation and treatment of the surface soil layers. This damage to initially intact soils puts a heavy burden primarily on the Ukrainian and especially the Belarusian economy.
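The long time horizon of this damage follows directly from the radionuclides' half-lives. A basic decay sketch, using textbook half-life values (Sr-90 ≈ 28.8 years, Cs-137 ≈ 30.2 years) and ignoring the migration, dilution and soil-fixation processes described above, shows how little decay alone removes over a human generation:

```python
def fraction_remaining(years, half_life_years):
    """Fraction of a radionuclide inventory left after `years` of decay."""
    return 0.5 ** (years / half_life_years)

# Thirty years after the 1986 accident, roughly half of both key
# contaminants was still present:
sr90_left = fraction_remaining(30, 28.8)    # ≈ 0.49
cs137_left = fraction_remaining(30, 30.2)   # ≈ 0.50
```

Since each half-life removes only 50% of the inventory, land contaminated in 1986 remains significantly radioactive for a century or more, which is why agricultural exclusion persists to the present day.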
Nearly one-quarter of the entire territory of Belarus was seriously contaminated with isotopes of Cesium. The authorities were obliged to exclude nearly 265 thousand hectares of cultivated land from agricultural use, an exclusion that remains in force to the present day. Although complex chemical and agro-technological measures led to a limited decrease of radionuclide content in food produced on the contaminated territories, the problem remains largely unresolved. Apart from the economic damage, agricultural contamination via groundwater pathways is detrimental to the biophysical security of the population. Consumption of food containing radionuclides became the major source of radioactive exposure of people in the region. Thus agricultural damage eventually means a direct and long-lasting threat to public health. Health risks The health impacts of groundwater contamination for the populations of Ukraine, Belarus and the bordering states are usually perceived as extremely negative. The Ukrainian government initially implemented a costly and sophisticated remediation program. However, in view of limited financial resources and other more urgent health problems caused by the disaster, these plans were abandoned. Not least, this decision owed to the research results of domestic scholars showing that groundwater contamination does not contribute substantially to the overall health risks compared with other active pathways of radioactive exposure in the “exclusion zone”. In particular, radioactive contamination of the unconfined aquifer, which is usually considered a serious threat, has less economic and health impact in Chernobyl because subsurface water in the “exclusion zone” is not used for household and drinking needs. The possibility of local residents using this water is excluded by the special status of the Chernobyl area and the relevant administrative prohibitions.
The only group directly and inevitably exposed to health threats are the emergency workers engaged in water drainage practices related to the deactivation of the Chernobyl Nuclear Power Plant reactors and waste disposal operations. As to contamination of the confined aquifer, which is a source of technical and household water supply for the city of Pripyat (the largest city in the Chernobyl area), it also does not pose an immediate health threat, thanks to permanent monitoring of the water delivery system. Should any indexes of radioactive content exceed the norm, withdrawal of water from the local boreholes will be suspended. Yet this situation poses a certain economic risk due to the high expenditure necessary to ensure an alternative water supply system. At the same time, lethal levels of radiation in the unconfined aquifer retain substantial prospective danger due to their considerable capacity to migrate to the confined aquifer and subsequently to surface water, primarily the Pripyat River. This water can furthermore enter tributaries of the Dnieper River and the Kiev Reservoir. In this way the number of animals and people using contaminated water for domestic purposes could drastically increase. Considering that the Dnieper is one of the key water arteries of Ukraine, a breach of the integrity of the “Shelter” or of the long-lived waste repositories could produce an extensive spill of radionuclides into groundwater on the scale of a national emergency. According to the official position of the monitoring staff, such a scenario is unlikely because, before reaching the Dnieper, the Strontium-90 content is usually considerably diluted in the Pripyat River and the Kiev Reservoir.
Yet this assessment is considered inaccurate by some experts due to the imperfect evaluation model implemented. Thus groundwater contamination has led to a paradoxical situation in the realm of public health: direct exposure to radiation from using contaminated subsurface water for household purposes is incomparably smaller than the indirect impact caused by nuclide migration to cultivated lands. In this regard, on-site and off-site health risks from contaminants in the groundwater network of the exclusion zone can be distinguished. Low on-site risks are produced by direct water withdrawal for drinking and domestic needs. It was calculated that even if hypothetical residents used water on the territory of the radioactive waste dumps, the risks would be far below admissible levels. Such results can be explained by the purification of underground water during its hydrological transportation into surface waters, rains and snowmelt. The primary health risks are off-site, posed by radionuclide contamination of agricultural lands and caused, among other factors, by groundwater migration through the unconfined aquifer. This process eventually leads to internal irradiation of people using food from the contaminated areas. Water protection measures The urgency of immediate measures for underground water protection in the Chernobyl and Pripyat region arose from the perceived danger of transportation of radionuclides to the Dnieper River, thus contaminating Kiev, the capital of Ukraine, and 9 million other water users downstream. In this regard, on May 30, 1986 the government adopted the Decree on groundwater protection policy and launched a costly program of water remediation. However, these measures proved to be insufficient as they were grounded upon incomplete data and the absence of efficient monitoring. Without credible information, emergency staff adopted a “worst case” scenario, expecting maximum contamination density and minimal slowdown indexes.
When the updated survey information showed negligible risks of excessive nuclide migration, the remediation program was stopped. By that time, however, Ukraine had already spent nearly 20 million dollars on the project, as well as exposing relief workers to needless danger of irradiation. In the 1990s–2000s, the focus of protective measures shifted from remediation to the construction of protective systems for the complete isolation of the contaminated areas along the Pripyat River and the Chernobyl Nuclear Power Plant from the rest of the region. Once this was done, local authorities were advised to concentrate efforts on permanent monitoring of the situation. The degradation of radionuclides was left to run its course under so-called “observed natural attenuation”. Monitoring measures In the face of persistent disintegration of radioactive materials and a highly unfavorable radiation background in the “exclusion zone”, permanent monitoring was and remains crucial both for de-escalating environmental degradation and for preventing humanitarian catastrophes among the neighboring communities. Monitoring also allows parameter uncertainties to be reduced and assessment models to be improved, thus leading to a more realistic view of the problem and its scale. Until the late 1990s, methods of data collection for groundwater quality monitoring were of low efficiency and reliability. During installation of the monitoring boreholes, the wells were contaminated with “hot fuel” particles from the surface ground, which made the initial data inaccurate. Decontamination of boreholes from extraneous pollutants could take 1.5–2 years. Another problem was insufficient purging of the monitoring wells before sampling. This procedure, necessary for replacing stale water inside the boreholes with fresh water from the aquifer, was introduced by monitoring personnel only in 1992.
The importance of purging was immediately proved by the substantial growth of Strontium-90 indexes in the samples. The quality of the data was additionally worsened by corrosion of the steel components of the monitoring wells. Corrosive particles substantially altered the radioactive background of the aquifer. In particular, excessive content of iron compounds in the water entered into compensatory reactions with Strontium, leading to deceptively low Strontium-90 indexes in the samples. In some cases, inappropriate design of the well casings also impeded monitoring accuracy. The well constructions installed by Chernobyl Nuclear Power Plant personnel in the early 1990s had 12-meter-long screened sections allowing only vertically averaged sampling. Such samples are hard to interpret, as an aquifer usually has an unequal vertical distribution of contaminants. Since 1994, the quality of groundwater observation in the Chernobyl zone has improved substantially. New monitoring wells are constructed with polyvinyl chloride materials instead of steel, with shortened screened sections of 1–2 m. Additionally, in 1999–2012 an experimental monitoring site was created in proximity to the radioactive waste dump area west of the Chernobyl Nuclear Power Plant, called the “Chernobyl Red Forest”. The elements of the new monitoring system include a laboratory module, a station for unsaturated-zone monitoring, a network of monitoring boreholes and a meteorological station. Its primary objectives include monitoring of such processes as: radionuclide extraction from “hot fuel particles” (HFP) dispersed in the surface layer; their subsequent transition through the unsaturated zone; and the condition of the phreatic (saturated) zone. HFP are particles which emerged from burnt wood and concrete during the initial explosion and subsequent fire in the “exclusion zone”. The unsaturated zone is provided with water and soil samplers, water content sensors and tensiometers.
Operation of the experimental site allows real-time surveillance of Strontium-90 migration and its condition in the aquifer, yet it simultaneously raises new questions. The monitoring staff noticed that fluctuations of water levels directly influence the release of radionuclides from sediments, while the accumulation of organic matter in sediment correlates with the geochemical parameters of the aquifer. Additionally, for the first time the researchers detected Plutonium in deep-lying groundwater, which means that this contaminant also has the capacity to migrate into the confined aquifer. However, the specific mechanism of this migration remains unknown. The researchers forecast that, provided the protection of the nuclear waste dumps in the exclusion zone remains intact, the concentration of Strontium-90 in subsurface water up to 2020 will remain far below the maximum admissible indexes. Also, contamination of the Pripyat River, the most vulnerable surface water route, by underground tributaries is unlikely in the next 50 years. At the same time, the number of monitoring wells is still insufficient and needs expansion and modification. Also, the boreholes are distributed unevenly within the exclusion zone, without consideration of the hydrological and radioactive specifics of the area (Kovar & Herbert, 1998). Lessons learned The Chernobyl accident revealed the complete unpreparedness of the local authorities to resolve the environment-related issues of a nuclear disaster. Groundwater management is no exception. Without accurate real-time data and adjusted emergency management plans, the government spent enormous funds on groundwater remediation, which later proved to be needless. At the same time, truly crucial top-priority measures, such as reliable isolation of the damaged 4th reactor, were performed poorly.
If the “Shelter” had been constructed without deficiencies, completely hermetic and isolating the 4th reactor from contact with the external air, soil and groundwater media, it would have made a much greater contribution to preventing nuclides from entering and migrating throughout the groundwater network. Taking these failures into account, the following are lessons learned from the Chernobyl tragedy for groundwater management: The necessity of a consistent and technologically reliable monitoring system capable of producing high-quality real-time data; Exact monitoring data as the primary basis for any remedial practices and melioration policies; Criteria and purposes of groundwater management activities, be they remediation, construction works or agricultural restrictions, are to be identified at the stage of analysis and prior to any practical realization; Problems of groundwater contamination must be regarded in a wider perspective, in close correlation with other pathways and forms of contamination, because they are all interconnected and mutually influencing; It is always highly advisable to engage international experts and leading scholars in peer review of the designed action plans; Groundwater management in areas of radioactive contamination must be based on an integrated ecosystem approach, i.e. considering its influence on local and global ecosystems, the well-being of local communities and long-lasting environmental impacts. References Ground Water pollution Radioactive contamination
Chernobyl groundwater contamination
[ "Chemistry", "Technology", "Environmental_science" ]
4,057
[ "Aftermath of the Chernobyl disaster", "Environmental impact of nuclear power", "Radioactive contamination", "Water pollution" ]
59,803,519
https://en.wikipedia.org/wiki/Wildlife%20of%20Kuwait
The wildlife of Kuwait consists of the flora and fauna of Kuwait and their natural habitats. Kuwait is a country in the Middle East at the head of the Persian Gulf, located between Iraq and Saudi Arabia. Geography Kuwait is in size, being about from north to south and from east to west. It has of coastline on the Persian Gulf and includes nine islands, the largest being Bubiyan Island. The main geographical feature is the large Kuwait Bay, which provides a natural harbour and on the shores of which Kuwait City is located. The country consists largely of undulating flat land with low hills. The country is divided into four zones: a desert plateau to the west; salt marshes, mud flats and saline depressions around Kuwait Bay; sand dunes to the east; and a desert plain occupying the bulk of the country. Kuwait has an arid climate. The summer is hot and dry and the precipitation, which averages less than falls mainly in the winter in the form of unpredictable showers and as thunderstorms in spring. Dust storms can occur at any time of year but are more common in spring and summer. Average daily temperatures in the summer are around . In the winter, average temperatures are , and there can be night frosts. Due to Kuwait's proximity to Iraq and Iran, the winter season in Kuwait is colder than in other Arabian Peninsula countries. Flora Over 400 species of wild plant have been recorded in Kuwait. The arfaj is the national flower of Kuwait. Desert plants are typically coarse grasses and salt-tolerant shrubs which tend to be low growing and often spiny; one of the most common plants is Rhanterium epapposum, known locally as arfaj, which is used for forage by camels and sheep. After rainfall, annual plants spring up from seeds which may have lain dormant for years. The flowers they produce are often blue or purple and as soon as the seed is set, the plants wither and die.
About two thirds of the plant species are annuals with smaller numbers of perennial plants, shrubs and sub-shrubs. The native flora is in transition between semi-desert and desert vegetation and is of importance in the study of how humans are impacting semi-desert habitats. The aftermath of the Gulf War and the inundation of much of the land with hydrocarbon residues have caused considerable damage to the soil structure and significant changes to the environment. Date palms have been planted at oases and near the coast and mangroves and sea grasses grow on the mudflats near Kuwait Bay, their roots helping to stabilise the coastline. Fauna Currently, 442 species of birds have been recorded in Kuwait, 18 species of which breed in the country. Kuwait is situated at the crossroads of several major bird migration routes and between two and three million birds pass each year. The marshes in northern Kuwait and Jahra have become increasingly important as a refuge for passage migrants. Kuwaiti islands are important breeding areas for four species of tern and the socotra cormorant. The Mubarak Al-Kabeer Reserve Ramsar Site on Boubyan Island consists of lagoons and saltmarshes and is visited annually by wetland birds migrating from Eurasia to Africa, and others travelling from Turkey to India. Other birds live and breed on these wetlands all year round, including the world's largest breeding colony of crab-plovers. Among the resident birds, the commonest is the desert lark, and inland the kestrel and short-toed snake eagle are to be seen hunting over the desert. Away from the coast the searing heat and absence of surface water means that animals need to have special adaptations and behaviours to survive. Kuwait has only one species of amphibian, the variable toad (Bufotes variabilis), and has about 38 species of reptile. These include the Arabian sand boa, the black desert cobra, the monitor lizard and a number of different spiny-tailed lizards (Uromastyx spp.).
Many of these spend the heat of the day in burrows, emerging at night to feed. The frog-eyed gecko (Teratoscincus scincus) does this, burrowing as deep as below the surface among the dunes of the coastal plains, where it remains cool and humid. Around 28 species of mammals have been recorded in the country. Terrestrial mammals include several small desert rodents, the desert hedgehog, the African wildcat, the sand cat, the caracal, the Indian grey mongoose, the striped hyena, the golden jackal, the fennec fox, the honey badger, the Saudi gazelle, the goitered gazelle, the Arabian oryx, the dromedary and two species of bat. The dugong has been recorded in Kuwaiti waters in the Persian Gulf, as well as the Bryde's whale, the pygmy blue whale, the humpback whale, the finless porpoise, the Indo-Pacific humpbacked dolphin and the Risso's dolphin. Scorpions and dung beetles abound, and in the wetlands and mudflats around Kuwait Bay and the islands there are crabs and mudskippers, numerous species of fish, waterfowl, gulls, flamingoes and dugongs. References Kuwait Fauna of Kuwait
Wildlife of Kuwait
[ "Biology" ]
1,084
[ "Biota by country", "Wildlife by country" ]
59,803,665
https://en.wikipedia.org/wiki/Grizzly%20399
Grizzly 399 (1996 – October 22, 2024) was a grizzly bear living in Grand Teton National Park and Bridger-Teton National Forest in Wyoming, United States. She was followed by as many as 40 wildlife photographers, and millions of tourists came to the Greater Yellowstone Ecosystem to see her and other grizzly bears. There are official Facebook, Twitter, and Instagram accounts for Grizzly 399. Background Grizzly bears (Ursus arctos horribilis) are a subspecies of the North American brown bear species U. arctos. Several decades ago, grizzlies were assessed as being at risk of rapid extinction due to the rate at which the population was declining. Protection under the Endangered Species Act of 1973 has resulted in a population rebound: there are now approximately 2,000 grizzly bears in the contiguous United States, of which about half are estimated to live in the Greater Yellowstone Ecosystem. Grizzlies are stereotyped as ferocious, but the typical bear avoids contact with humans, living away from settlements and attacking only to protect itself when startled by a human. Life Grizzly 399 was a grizzly bear who resided on federal land in a range of hundreds of miles throughout the Grand Teton National Park and the Bridger-Teton National Forest. She was born in a den in Pilgrim Creek, Wyoming, in the winter of 1996. She was captured in 2001 and fitted with a radio collar by the Interagency Grizzly Bear Study Team. She was the 399th bear to be tracked with this method as part of the long-term research project. In 2018, monitoring of 399 via radio telemetry ceased, with the research continuing as she resided in an area where she was easily observable. 399 reached age 28, becoming older than is usual for a grizzly bear, as "more than 85 percent of them are killed because of some kind of human activity before they reach old age". She weighed almost . When standing upright on her hind legs, she was . 
Unlike the typical grizzly, she lived in close proximity to humans and was not particularly concerned by their presence; scientists have speculated that this was a response to the death of a cub in a more remote area, which she subsequently avoided. She never killed a human despite at least two known close encounters. Cubs Grizzly 399 successfully reared many progeny, including 22 cubs and grandcubs. In mid-May 2020 she was observed with four new cubs born the previous winter. She taught her offspring habits to benefit from rather than be harmed by human proximity, such as loitering during the fall elk hunt to consume abandoned elk innards and looking both ways before crossing roadways to avoid being struck by vehicles, a common cause of death among bears. Despite this, at least three of her cubs were killed due to human encounters, including Grizzly 399's only 2016 cub, nicknamed "Snowy" because of his whitish-blonde facial coloration. In June of that year, Snowy was struck and killed by a car in Grand Teton National Park, an incident investigated as a potential hit-and-run accident. In all, she lost half of her descendants due to encounters with people or male bears. On May 21, 2020, a wildlife photographer saw Grizzly 399 coming out of hibernation in Pilgrim Creek with four cubs. This was her largest brood to date. On May 16, 2023, Grizzly 399 emerged from hibernation and appeared in the area of Pilgrim Creek in Grand Teton National Park. She was seen with a single cub. At age 26 or 27, this made her the oldest female bear known to have reproduced in the Greater Yellowstone Ecosystem. Unlike the typical bear, Grizzly 399 regularly gave birth to triplets rather than twins. This typically has a paradoxical effect on the bear population: a mother bear with three cubs expends significantly more energy in caring for them, which can potentially decrease rather than increase the survival rate. Grizzly 399, conversely, typically handled triplets well. 
One of her triplet cubs also grew to be a prolific mother and was tagged for research as Grizzly 610. In 2011, Grizzly 610 had twins, while Grizzly 399 had another set of triplets. The scientists observing the bears were concerned due to 399's advanced age, but to their surprise Grizzly 610 amicably adopted one of her mother's triplet cubs. One of 399's 2017 twin cubs, numbered 964, was relocated to Yellowstone in 2019. She was spotted with twins in 2023. Grizzly 610's daughter, numbered 926, had twins in 2023. Relationship with humans Grizzly 399 was known to be habituated to people when near roads and lightly developed areas. A researcher determined that she sought out these roadside areas rather than backcountry because it was safer for her cubs, which male bears often attempted to kill. The fact that she spent so much time near roads also contributed to her popularity. In 2011, the sight of a mother grizzly bear and her three cubs near a road in central Grand Teton National Park was enough to cause traffic to come to a halt in both directions for miles. In Willow Flats, Grizzly 399 taught each set of cubs to hunt elk calves, within view of the guests at Jackson Lake Lodge. Grizzly 399 was usually found along the roadside near the Oxbow Bend of the Snake River. The number of photographers following her grew to approximately 40–50 by 2015. 399 was considered the "grand matriarch of the park's roadside bears." In 2016, Grizzly 399 was feared dead after a hunter claimed to have killed her; however, she emerged from hibernation on May 10, 2016, with one cub in tow. She emerged from the Bridger-Teton National Forest into the Grand Teton National Park with a white-faced cub at her side. In 2017, now older than the age beyond which most brown bears usually breed, she was spotted in a spring snowstorm with two cubs following her. 
Death On the evening of October 22, 2024, Grizzly 399 was fatally struck by a vehicle on Highway 26/89 in Snake River Canyon, south of Jackson, Wyoming. The bear's identity was confirmed through ear tags and a microchip. Grand Teton Wildlife Brigade Created in 2007 in response to the large number of visitors coming to Grand Teton to view Grizzly 399 and her cubs, the Grand Teton Wildlife Brigade keeps animals and people apart and safe. In 2011, ranger Kate Wilmot, whose official title is "bear management specialist", said that things that year had become "completely chaotic". The brigade's real duty was managing the behavior of park visitors. This was partly due to social media increasing the popularity of the bears, and drawing more people to seek them out. Wilmot directs 16 volunteers in the brigade throughout the summer until snowfall. If not for the brigade members, "wildlife watching would be a mess". The brigade members carry bear spray, but their primary role is to persuade tourists to respect the 100-yard viewing guideline established after incidents with Grizzly 610, 399's daughter. Feeding the bears is illegal, so the brigade members prevent this. If bears receive food from people, they can become habituated to people and more aggressive toward them. The brigade members remind tourists of their role in respecting bears' space. The brigade members' success can be measured in the rarity of major incidents and bear removals from the park. When bears become too habituated to human presence and aggressive in their pursuit of human food, or when a bear attacks a human, the "problem bear" is typically euthanized. Grizzly mothers are known for being aggressively protective of their progeny. In 2011, in Yellowstone National Park, a mother bear fatally mauled a hiker who got too close. Grizzly 610, 399's daughter, twice "charged" tourists who got too close. No injuries were reported. 
Endangered species protection and hunting In 2017, United States Fish and Wildlife Service officials removed grizzly bears from the endangered species list and turned management of grizzlies outside Yellowstone and Grand Teton National Parks over to Wyoming, Montana, and Idaho. Grizzlies live in ranges covering hundreds of miles, which can place them outside of parks, where they become the targets of hunters. Grizzly 399's range extended outside the parks. Hunters in the area targeted 399 because she was the biggest and most famous trophy. Daryl Hunter, a wildlife photographer who followed Grizzly 399, related a conversation with an outfitter who said, "I met a guy who wants Grizzly 399's rug on his wall", adding that because she was famous, she would make a better trophy. Grizzly 399 spent part of the year in Grand Teton National Park, but also hibernated in the national forest where hunting is allowed. For the 2018 hunting season, Montana decided against a hunt. Idaho, with the fewest grizzlies, decided to allow hunting of only one bear. On May 23, 2018, the Wyoming wildlife commission voted unanimously to approve a grizzly bear hunt. The Wyoming Game and Fish Department then put the number of grizzlies to be killed to a vote; the tally came to 22 grizzlies, again in a unanimous 7–0 vote. The hunting season was planned for September 15 to November 15. This was to be the first authorized hunt in Wyoming in 44 years - since 1975 - a time when grizzlies were first listed as endangered, when no hunting was allowed inside the national parks or on the connecting road between them, and when the grizzly population had fallen to around 136 individuals. Wyoming's planned hunt met with a public outcry. Five women in Jackson Hole quickly organized "Shoot'em With A Camera-Not A Gun", which encouraged opponents of trophy hunting to join the tag lottery in hopes of preventing hunters from winning tags. 
Approximately 7,000 people applied for Wyoming bear tags, including wildlife photographer Thomas D. Mangelsen, Jane Goodall, and other conservationists. In July 2018, Mangelsen learned that he held slot number 8 in the hunting lottery queue, high enough to actually receive a hunting tag. In September, just weeks before hunting season was to begin, a federal judge in Montana restored protection to all of the bears in the Greater Yellowstone Ecosystem. The judge ruled that United States Fish and Wildlife Service officials were "arbitrary and capricious" when they removed protection from the bears under the Endangered Species Act of 1973. In July 2020, the Ninth Circuit Court of Appeals upheld the Montana judge's ruling. In March 2021, the U.S. Fish and Wildlife Service recommended no change to the protection status of the grizzly bear in the lower 48 states. Following a five-year status review under the Endangered Species Act, the species remained listed as threatened. In popular culture Books Grizzly 399: The Story of a Remarkable Bear is a children's book published in May 2020 in Idaho Falls. The book is written by Sylvia M. Medina, illustrated by Morgan Spicer and includes photographs by American nature and wildlife photographer Thomas D. Mangelsen. The publisher released a subsequent book with the same author, illustrator and photographer in April 2021, titled "Grizzly 399's Hibernation Pandemonium", to include Grizzly 399's new cubs after the 24-year-old mother bear surprised the world with the birth of four more cubs in the spring of 2020. Grizzlies of Pilgrim Creek In 2015, Thomas D. Mangelsen collaborated with author Todd Wilkinson to create the book about Grizzly 399 and her progeny. Mangelsen made it one of his priorities for over ten years to record her life, including her hibernation schedule, feeding, and mothering; he recorded the birth of three sets of triplets and a set of twins. 
His photographs, especially the one he dubbed "An Icon of Motherhood", helped make her the most famous mother grizzly, maybe the most famous grizzly, in the world. Millions of people visit the Greater Yellowstone Ecosystem just to see these grizzly bears. Facebook account By 2015, Grizzly 399 had a full social media presence, although it was a mystery who was running the accounts. She had her own Facebook page, Instagram account, and a Twitter handle. "These aren't just any bears", explained Thomas D. Mangelsen, a global wildlife photographer who lives in Jackson Hole, Wyoming. "They might be the most famous grizzlies alive today on the planet. For all these people, catching a glimpse of them is the thrill of a lifetime." Mangelsen followed her movements for over ten years. Grizzly 399 dispelled the stereotype that all grizzlies are agents of terror, wrote Bozeman author Todd Wilkinson: "She's more well-behaved a lot of times than people around her. But she's wild." See also List of individual bears References External links Grand Teton Parks Grizzly Bears 399 & 610 − YouTube Always Endangered − The Story of Grizzly 399 − YouTube Grizzlies of Pilgrim Creek (Book Trailer) − YouTube VIDEO: Grizzly 399 and cubs | Multimedia | jhnewsandguide.com PBS Nature "Grizzly 399: Queen of the Tetons" full episode 1996 animal births 2024 animal deaths Individual grizzly bears Female mammals Roadkill Road incident deaths in Wyoming Individual animals in the United States
Grizzly 399
[ "Engineering", "Biology" ]
2,791
[ "Cloning", "Genetic engineering" ]
59,805,140
https://en.wikipedia.org/wiki/IC%205052
IC 5052 is a barred spiral galaxy located in the constellation Pavo. It lies at a distance of circa 25 million light years from Earth, which, given its apparent dimensions, means that IC 5052 is about 40,000 light years across. It was discovered by DeLisle Stewart on August 23, 1900. IC 5052 is viewed edge-on. When spiral galaxies are viewed from this angle, it is very difficult to fully understand their properties and how they are arranged. IC 5052 is actually a barred spiral galaxy – its spiral arms do not begin from the centre point but are instead attached to either end of a straight "bar" of stars that cuts through the galaxy's middle. The profile of the galaxy is irregular, with the northwest side having a much higher surface brightness than the southeast side. Also, one half of the galactic disk appears thicker than the other. A number of irregular dust lanes are observed across the disk, but none is prominent. No bulge is observed. A population of older stars has been detected off the center of the disk, as well as a stream-like structure that could indicate a galaxy merger took place in the recent past. The galaxy is close enough that its stars can be resolved with large telescopes, with the brightest ones having an apparent magnitude of 21. The younger and hotter of the stars lie within HII regions, the largest of which have apparent diameters of at least 2 arcseconds. These pockets of extremely hot newborn stars are visible across the galaxy's length as bursts of pale blue light, partially blocked out by weaving lanes of darker gas and dust. IC 5052 is characterised as an isolated galaxy, which does not belong to a group of galaxies. The nearest large galaxy to IC 5052 is NGC 6744, which is characterised as the main disturber of IC 5052. References External links IC 5052 on SIMBAD Barred spiral galaxies Pavo (constellation) 5052 65603 19000823 Discoveries by DeLisle Stewart
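The quoted physical size follows from simple small-angle arithmetic on the distance and apparent dimensions. A minimal sketch, assuming an apparent major-axis diameter of about 5.9 arcminutes (an illustrative catalogue-style value, not stated in the article):

```python
import math

# Physical diameter from distance and angular size (small-angle approximation).
distance_ly = 25e6           # "circa 25 million light years"
angular_size_arcmin = 5.9    # assumed apparent major-axis diameter (illustrative)

angular_size_rad = math.radians(angular_size_arcmin / 60)
diameter_ly = distance_ly * angular_size_rad
print(f"{diameter_ly:,.0f} light years")  # ~43,000, consistent with "about 40,000"
```

At angles this small the tangent and the angle itself agree to well under a percent, so the linear approximation is safe.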
IC 5052
[ "Astronomy" ]
409
[ "Constellations", "Pavo (constellation)" ]
59,808,372
https://en.wikipedia.org/wiki/Canadian%20Society%20for%20Pharmaceutical%20Sciences
The Canadian Society for Pharmaceutical Sciences (CSPS) advocates for excellence in pharmaceutical research, promotes the allocation of research funds, seeks involvement in decision- and policy-making processes and provides a forum for early-career scientists. It was founded in 1997. The Journal of Pharmacy and Pharmaceutical Sciences is the official journal of CSPS. References External links Learned societies of Canada Biology societies Organizations established in 1997 1997 establishments in Canada Pharmacological societies
Canadian Society for Pharmaceutical Sciences
[ "Chemistry" ]
86
[ "Pharmacology", "Pharmacological societies" ]
59,809,245
https://en.wikipedia.org/wiki/Transpiration%20cooling
Transpiration cooling is a thermodynamic process in which a liquid or a gas is moved through the wall of a structure to absorb a portion of the heat energy from the structure while simultaneously reducing the convective and radiative heat flux coming into the structure from the surrounding space. One approach to transpiration cooling is to move liquid through small pores in the outer wall of a body, leading to evaporation of the liquid to a gas via the physical mechanism of evaporative cooling. Other approaches are possible. Applications Transpiration cooling is used in the aerospace industry, in jet and rocket engines. In 2018, researchers at the University of Oxford were experimentally testing transpiration cooling as a thermal protection system for hypersonic vehicles such as rockets or spaceplanes. Transpiration cooling is one of a variety of cooling techniques that may be used to reduce regenerative cooling loads in rocket engines and subsequently reduce propellant requirements. Other techniques exist, such as film cooling, ablative cooling, radiative cooling, heat sink cooling and dump cooling. Transpiration cooling is being considered for use in space vehicles reentering the Earth's atmosphere at hypersonic velocities, where a transpirationally cooled outer skin could serve as part of the thermal protection system of the reentering spacecraft. SpaceX publicly mentioned such a system in 2019 for use on their Starship reusable second stage and orbital spacecraft to mitigate the harsh conditions of reentry. The design concept envisioned a double stainless-steel skin, with active coolant flowing between the two layers, with some areas additionally containing multiple small pores that would allow for transpiration cooling. 
After design and testing in terrestrial labs, SpaceX subsequently stated that although an alternative heat mitigation approach—using low-cost ceramic tiles on the windward side of Starship—was being developed, transpiration cooling could be used in some areas. Few details on the design are expected to be publicly released, as US law prevents SpaceX from releasing such information. See also Evapotranspiration References Thermodynamic processes Spaceflight technology
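The evaporative mechanism described in this article amounts to a one-dimensional energy balance: each kilogram of coolant absorbs sensible heat while warming to its boiling point, then latent heat while vaporizing. A minimal sketch using water properties; the wall heat flux is an assumed illustrative value, not a figure from any cited vehicle:

```python
# Coolant mass flux required to absorb a given wall heat flux through
# sensible heating plus evaporation. All numbers are illustrative.
q_wall = 1.0e6                 # absorbed heat flux, W/m^2 (assumed)
cp_liquid = 4186.0             # specific heat of liquid water, J/(kg*K)
t_in, t_boil = 293.0, 373.0    # coolant inlet and boiling temperatures, K
h_fg = 2.26e6                  # latent heat of vaporization of water, J/kg

# Energy absorbed per kilogram of coolant: warm-up plus phase change.
energy_per_kg = cp_liquid * (t_boil - t_in) + h_fg

# Required mass flux through the porous wall. In practice the transpired
# gas film also reduces the incoming convective flux, lowering this figure.
mass_flux = q_wall / energy_per_kg
print(f"{mass_flux:.2f} kg/s per square metre")  # 0.39
```

Most of the absorption comes from the latent-heat term, which is why evaporative transpiration is far more effective per kilogram than sensible heating alone.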
Transpiration cooling
[ "Physics", "Chemistry" ]
435
[ "Thermodynamic processes", "Thermodynamics" ]
59,809,591
https://en.wikipedia.org/wiki/Symbiosome
A symbiosome is a specialised compartment in a host cell that houses an endosymbiont in a symbiotic relationship. The term was first used in 1983 to describe the vacuole structure in the symbiosis between the animal host Hydra and the endosymbiont Chlorella. Symbiosomes are also seen in other cnidaria-dinoflagellate symbioses, including those found in coral-algal symbioses. In 1989 the concept was applied to the similar structure found in the nitrogen-fixing root nodules of certain plants. The symbiosome in the root nodules has been much more successfully researched due in part to the complexity of isolating the symbiosome membrane in animal hosts. The symbiosome in a root nodule cell in a plant is an organelle-like structure that has formed in a symbiotic relationship with nitrogen-fixing bacteria. The plant symbiosome is unique to those plants that produce root nodules. The majority of such symbioses are made between legumes and diazotrophic Rhizobia bacteria. The rhizobia-legume symbioses are the most studied due to their importance in agriculture. Each symbiosome in a root nodule cell encloses a single rhizobium that differentiates into a bacteroid. However, in some cases a symbiosome may house several bacteroids. The symbiosome membrane, or peribacteroid membrane, surrounds the bacteroid membrane, separated by a symbiosome space. This unit provides an inter-kingdom micro-environment for the production of nitrogen for the plant, and the receipt of malate for energy for the bacteroid. History The concept of the symbiosome was first described in 1983, by Neckelmann and Muscatine, as seen in the symbiotic relationship between Chlorella (a genus of green algae) and Hydra (a cnidarian animal host). Until then it had been described as a vacuole. A few years later in 1989, Lauren Roth with Gary Stacey as well as Robert B Mellor applied this concept to the nitrogen-fixing unit seen in the plant root nodule, previously called an infection vacuole. 
This has since engendered a great deal of research, one result of this has been the provision of a more detailed description of the symbiosome (peribacteroid) membrane, as well as comparisons with similar structures in Vesicular Arbuscular Mycorrhizal symbioses in plants. In the animal models, the symbiosome has a more complex arrangement of membranes, such that it has proved difficult to isolate, purify and study. Structure and formation A symbiosome is formed as a result of a complex and coordinated interaction between the symbiont host and the endosymbiont. At the point of entry into a symbiont host cell, part of the cell's membrane envelops the endosymbiont and breaks off into the cytoplasm as a discrete unit, an organelle-like vacuole called the symbiosome. This is an endocytosis-like process that forms a symbiosome rather than an endosome. In plants this process is unique. The symbiosome membrane is separated from the endosymbiont membrane by a space known as the symbiosome space, which allows for the exchange of solutes between the symbionts. In the plant root nodule the symbiosome membrane is also called the peribacteroid membrane. In the plant In the legume-rhizobia symbioses the symbiosome is the nitrogen-fixing unit in the plant, formed by an interaction of plant and bacterial signals, and their cooperation. The legumes are protein-rich, and have a high demand for nitrogen that is usually available from nitrates in the soil. When these are scarce the plant secretes flavonoids that attract free-living diazotrophic (nitrogen-fixing) rhizobia to their root hairs. In turn the bacteria release Nod factors that stimulate the infection process in the plant. To enable infection the tip of the root hair curls over the rhizobia and by an inward growth produces an infection thread to carry the endosymbionts into the cortical cells. At the same time the cortical cells divide to produce the tough root nodules that will house and protect the bacteria. 
The bacterial production of extracellular polymeric substance (EPS) is seen to be necessary for enabling infection. The rhizobia infect the plant in large numbers, only seen to be actively dividing at the tip of the infection thread, where they are released into the cells inside symbiosomes. The symbiosome is formed as a result of an endocytosis-like process that produces an endosome. Typically endosomes target to lysosomes, but the symbiosome re-targets the host-cell proteins. The changes in the plant needed to form the infection thread, the increased division of the cortical cells, the formation of the root nodule, and the symbiosome, are brought about by dynamic changes in the actin cytoskeleton. Filamentous actin (F-actin) channels the elongation of the infection threads and short F-actin fragments are dotted around the symbiosome membrane. The bacteria are released as infection drops into the host root nodule cells where the plasma membrane encloses them in the organelle-like structure of the symbiosome. In most plants a symbiosome encloses a single endosymbiont bacterium but some types may contain more than one. A negative feedback loop called the autoregulation of nodulation works to balance the need for nitrogen and thus the formation of nodules. Differentiation The outer host-cell derived symbiosome membrane encloses a space called the symbiosome space or the peribacteroid space that surrounds the endosymbiont. In order for the symbiosome to be established as a nitrogen-fixing unit the enclosed bacterium has to be terminally differentiated into a morphologically changed bacteroid. The bacterium in the soil is free-living and motile. In the symbiosome it has to change its gene expression to adapt to a non-motile, non-reproductive form as the bacteroid. This change is noted by an increase in the size of the bacterium and its elongation. The bacterial membrane is also made permeable. 
The process of differentiation is plant-driven using peptides known as nodule specific cysteine-rich peptides (NCR peptides). NCRs are antimicrobial peptides that are similar to the defensin peptides used in mammals in response to invading pathogens. The NCRs are targeted to the symbiosome where they induce differentiation of the bacterium to the bacteroid. A major effect of NCR targeting is to limit the reproductive ability of the endosymbiont. These changes are controlled, since the bacterium is not killed as a result of exposure to the NCRs. Some of that control comes from the bacterium itself. In order to survive the NCR activities, the bacteria need to produce a protein called BacA. In addition the lipopolysaccharide produced by the bacteria is modified by an unusual fatty acid that also gives protection against environmental stresses. These defensive measures help the differentiation process and ensures their survival as bacteroids. Some strains of rhizobia produce a peptidase that degrades the NCRs. Nitrogen-fixing unit The established bacteroid is able to fix nitrogen into a chemically usable form of ammonium for the plant. This is an energy-demanding process fuelled by the plant's carbohydrates. Transport vesicles form in the symbiosome membrane allowing the passage of ammonium into the symbiosome space from the bacteroid, and the passage of plant nutrients to the bacteroid. The rhizobia infect the plant in large numbers where they are released into the cells inside symbiosomes. They are protected by the tough structure of the root nodule. In the animal The most well studied symbiosis involving an animal host is that between the cnidaria and the dinoflagellates, most commonly the single-celled zooxanthellae. The symbiosis of the Chlorella–Hydra first described the symbiosome. 
The coral Zoanthus robustus has been used as a model organism to study the symbiosis with its microsymbiont algal species of Symbiodinium, with a focus on the symbiosome and its membranes. Methods for isolating the symbiosome membranes have been sought – the symbiont in the animal host has a multilayered membrane complex which has proved resistant to disruption, making isolation difficult. The endosymbiont dinoflagellates are used for their ability to photosynthesise and provide energy, giving host cnidarians such as corals and anemones plant-like properties. Free-living dinoflagellates are ingested into the gastrodermal cells of the host, and their symbiosome membrane is derived from the host cell. The process of symbiosome formation is often seen in the animal host to be that of phagocytosis, and it is hypothesised that the symbiosome is a phagosome that has been subject to early arrest. Similar structures A similar structure to the symbiosome is the parasitophorous vacuole formed within host cells infected by apicomplexan parasites. The vacuole is derived from the host cell plasma membrane. It is made safe from the host's endolysosomal system by modifying proteins released by the parasite. The parasitophorous vacuole membrane is greatly remodelled by the parasite. See also Bacteriome Trophosome References Symbiosis Organelles
Symbiosome
[ "Biology" ]
2,149
[ "Biological interactions", "Behavior", "Symbiosis" ]
59,809,686
https://en.wikipedia.org/wiki/BB%20Phoenicis
BB Phoenicis is a variable star in the constellation of Phoenix. It has an average visual apparent magnitude of 6.17, being visible to the naked eye under excellent viewing conditions. From parallax measurements by the Gaia spacecraft, it is located at a distance of from Earth. Its absolute magnitude is calculated at 0.6. BB Phoenicis is a Delta Scuti variable, and shows stellar pulsations that cause brightness variations with an amplitude of 0.04 magnitudes. Its variability was discovered by accident in 1981, when the star was used as a comparison star for the eclipsing binary AG Phoenicis. Photometric and spectroscopic data have allowed the detection of at least 13 modes of radial and non-radial pulsations, the strongest one having a period of 0.174 days and an amplitude of 11.1 milli-magnitudes. Observations at different epochs show evidence that the pulsation modes vary in amplitude, which is common among Delta Scuti variables. Pulsation models indicate that the stellar rotation axis is inclined by 50–70° in relation to the line of sight. This star is classified as an F-type giant with a spectral type of F0/2III. It appears to be expanding after depleting the hydrogen in its core and leaving the main sequence. BB Phoenicis has an estimated mass of 2.25 times the solar mass and a radius of 4.7 times the solar radius. It is radiating 55 times the Sun's luminosity from its photosphere at an effective temperature of 7,200 K. References Delta Scuti variables Phoenix (constellation) F-type giants Durchmusterung objects 002724 002388 0119 Phoenicis, BB
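The quoted luminosity can be checked against the Stefan–Boltzmann relation L/Lsun = (R/Rsun)^2 * (T/Tsun)^4, using the radius and temperature given in the article; the solar effective temperature of 5772 K is a standard reference value, not stated in the article:

```python
# Stefan-Boltzmann consistency check: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
r_over_rsun = 4.7     # radius in solar radii (from the article)
t_eff = 7200.0        # effective temperature in kelvin (from the article)
t_sun = 5772.0        # solar effective temperature, K (reference value)

l_over_lsun = r_over_rsun**2 * (t_eff / t_sun)**4
print(f"{l_over_lsun:.0f} solar luminosities")  # 53, close to the quoted 55
```

The small gap between 53 and the quoted 55 is well within the rounding of the input radius and temperature.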
BB Phoenicis
[ "Astronomy" ]
364
[ "Phoenix (constellation)", "Constellations" ]
59,811,433
https://en.wikipedia.org/wiki/Verastem%20Oncology
Verastem, Inc., doing business as Verastem Oncology, is an American pharmaceutical company that develops medicines to treat certain cancers. Headquartered and founded in Boston, Massachusetts, the firm is a member of the NASDAQ Biotechnology Index. History Verastem Oncology (Verastem Inc) was co-founded in 2010 by entrepreneur Christoph H. Westphal and venture capitalist Michelle Dipp, who provided seed funding and initial office space in Cambridge, MA. The company was formed to commercialize the work of the three other co-founders, MIT biologists Robert A. Weinberg, Eric S. Lander and Piyush Gupta, by discovering and developing drugs to treat cancer by targeting cancer stem cells. The company raised $16 million in the initial Series A financing. Westphal served as CEO and chairman of the board from 2010 to 2013. Under his leadership, the company raised $55 million through an IPO in 2012. Robert Forrester succeeded Christoph Westphal as Verastem's president and CEO in 2013. In July 2019, Brian Stuglik was appointed chief executive officer (CEO) of Verastem Oncology. Pipeline Their leading investigational drug, defactinib (VS-6063), is a small-molecule focal adhesion kinase (FAK) inhibitor designed to kill cancer stem cells, intended for the treatment of malignant pleural mesothelioma. In October 2015, the company announced the premature termination of its late-stage clinical trial for defactinib after data analysis of the Phase II COMMAND trial found no significant differences in efficacy versus placebo. Following the failure of the study, the company had to cut 50% of its workforce. In November 2016, Verastem Oncology licensed global rights from Infinity Pharmaceuticals to duvelisib (IPI-145), a novel inhibitor of PI3K delta and gamma. 
In April 2018, Verastem filed a New Drug Application (NDA) for duvelisib for the treatment of relapsed or refractory chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL) and accelerated approval for relapsed or refractory follicular lymphoma (FL). The results of the clinical study DUO were published in Blood Journal. Verastem Oncology received FDA approval for duvelisib on September 24, 2018, as a treatment for adults with 3rd-line chronic lymphocytic leukemia/small lymphocytic lymphoma, and an accelerated approval as a 3rd-line treatment for follicular lymphoma, contingent on the results of a confirmatory trial. The drug label carries a black box warning due to the risk of potentially fatal or serious toxicities: infections, diarrhea or colitis, cutaneous reactions and pneumonitis. In July 2019, Verastem Oncology signed an exclusive agreement with Sanofi for the commercialization of duvelisib in Russia and CIS, Turkey, the Middle East and Africa. References External links Pharmaceutical companies of the United States Companies listed on the Nasdaq Pharmaceutical companies established in 2010 Life sciences industry Specialty drugs
Verastem Oncology
[ "Biology" ]
670
[ "Specialty drugs", "Life sciences industry" ]
59,812,578
https://en.wikipedia.org/wiki/Artashat%20orthonairovirus
Artashat orthonairovirus, also called Artashat virus (ARTSV), is a species in the genus Orthonairovirus. It was first isolated in Armenia in 1972 from Ornithodoros alactagalis, a soft tick of the family Argasidae. References Nairoviridae
Artashat orthonairovirus
[ "Biology" ]
69
[ "Virus stubs", "Viruses" ]
70,628,622
https://en.wikipedia.org/wiki/Cronartium%20quercuum
Cronartium quercuum, also known as pine-oak gall rust, is a fungal disease of pine (Pinus spp.) and oak (Quercus spp.) trees. Similar to pine-pine gall rust, this disease is found on pine trees but its second host is an oak tree rather than another pine. Hosts and symptoms The pathogen requires pine and oak trees to complete its life cycle. Aecial hosts in North America are two- and three-needled Pinus species. Pinus hosts include Austrian pine (P. nigra), Jack pine (P. banksiana), Mugo pine (P. mugo), Red pine (P. resinosa), Ponderosa pine (P. ponderosa), and Scots pine (P. sylvestris). Telial hosts are Quercus species. Quercus hosts are generally made up of the red oak group and include Northern pin oak (Q. ellipsoidalis), Bur oak (Q. macrocarpa), Pin oak (Q. palustris), and Northern red oak (Q. rubra). Galls start to form as slight, rounded swellings on the tree stem, then grow to become spherical and elongated. Inside the galls are hyphae which occur in rays. Hyphae are typically found in the bark, as opposed to the wood. In the spring, aecia break through the bark covering the galls. Galls that form on branches of older pine trees cause only minor damage, although infected seedlings can have severely stunted growth or even die. Distribution Cronartium quercuum is found throughout North, Central, and South America, the Caribbean, and Asia. In North America, C. quercuum is found in Canada, the United States, and Mexico. C. quercuum is typically found in the eastern United States, spreading as far west as the Great Lakes region. Within Asia, C. quercuum is found in China, India, Japan, the Republic of Korea, the Democratic People's Republic of Korea, and the Philippines. Life cycle Pycnia and aecia are produced on the pine host in the spring and early summer, one to several years after infection. The aecia usually appear one year after the pycnia arise. Aeciospores move by wind to infect the telial host (Quercus).
Because they move by wind, aeciospores are able to travel long distances to infect the telial host. The aeciospores are unable to re-infect pine species. Uredinia form on the oak species 1–3 weeks after infection; telia develop about 15 days later. Teliospores germinate and produce basidiospores. Basidiospores are also wind-dispersed and travel to infect first-year pine needles. The telial host cannot be re-infected by basidiospores. Basidiospore infection occurs in summer and fall. The life cycle is complete when Pinus is infected by basidiospores. Management Management of pine-oak gall rust is fairly simple and straightforward. Prune out galls to reduce the spread of spores to nearby pine or oak trees. Prune galls from pine branches in the late winter or early spring. References Pucciniales Tree diseases Fungus species
Cronartium quercuum
[ "Biology" ]
701
[ "Fungi", "Fungus species" ]
70,629,128
https://en.wikipedia.org/wiki/Auricularia%20heimuer
Auricularia heimuer, also known as heimuer () or black wood ear, is a species of fungus in the order Auriculariales. It is commercially cultivated for food in China at a value exceeding $4 billion (USD) per year. The species was previously referred to as the European Auricularia auricula-judae, but the latter is not known to occur in east Asia. Auricularia heimuer is a popular ingredient in many Chinese dishes, such as hot and sour soup, and it is also used in traditional Chinese medicine. Description Fruitbodies are gelatinous, ear-shaped, and laterally attached to wood. They are up to across and thick. The upper surface is finely tomentose, coloured fawn to reddish brown when fresh, and coloured grey-brown when dry. The colour of cultivated specimens is often darker. The spore-producing underside is smooth to slightly veined, coloured pinkish buff when fresh, and coloured purplish grey when dry. Microscopic features The basidia are cylindrical, 40–65 x 3–6.5 μm, with three transverse septa. The basidiospores are allantoid (sausage shaped), 11–13 x 4–5 μm. Hairs on the upper surface are 50–150 x 4–6.5 μm. When cross-sectioned, a medulla (a central band of parallel hyphae) is normally present. Similar species The Asian Auricularia villosula is very similar, but distinguishable microscopically by its shorter hairs (30–70 μm long). Some strains of heimuer cultivated in China have proved to be A. villosula. The European A. auricula-judae is superficially similar, but it is not as dark as cultivated A. heimuer and microscopically distinguishable by its larger basidia and spores, the latter measuring 14.5–18 x 5–6 μm. Fruitbodies of both these species lack a medulla when cross-sectioned. Taxonomy A. heimuer was described in 2014 as a result of molecular research, based on cladistic analysis of DNA sequences, into wild and cultivated species of Auricularia in China.
This research revealed that the most frequently cultivated species was previously misdetermined as Auricularia auricula-judae, a species confined to Europe, and was instead a separate and distinctive species restricted to east Asia. It was given the name Auricularia heimuer based on the Chinese vernacular name for the fungus: heimuer (黑木耳), which translates to black wood ear. Distribution and habitat Auricularia heimuer is a wood-rotting species, typically found on the dead standing or fallen wood of broadleaf trees. In the wild, it occurs most frequently on oak (Quercus) trees and less frequently on other broadleaf trees. In cultivation, it is sometimes grown on broadleaf logs, and is more commonly grown on growing media containing sawdust. The species occurs in temperate areas of northern China, as well as the Russian Far East, Korea, and Japan. Uses In China, the use of an Auricularia species, probably A. heimuer, as a food and a medicine was recorded in the 3rd-century Chinese medicinal book Shennong Ben Cao Jing. Species were being cultivated in China as early as the Tang dynasty (618–907). Li Shizhen, in his Pen Tsao Kang Mu, quotes Tang Ying-chuan from that period as saying "put the steamed bran on logs, cover with straw, Wood Ear will grow". The fungus is widely used as an ingredient in savoury dishes and is also cooked and served as a salad with vegetables and flavourings. A soup containing the species is used medicinally for dealing with colds and fevers in the belief that it reduces the heat of the body. According to a 2010 publication, the annual production of Auricularia species worldwide is the fourth highest among all industrially cultivated culinary and medicinal mushrooms. The estimated annual output in China in 2013 was 4.75 billion kg in fresh weight, with a value of about four billion US dollars. In Japan, the fungus is known as kikurage (キクラゲ; "wood jellyfish"), and is commonly shredded and used as a topping in ramen. 
A 2018 Japanese study surveyed 26 local specimens originally determined as "A. auricula-judae". The molecular identification was as follows: 4 samples of A. heimuer, 7 of A. minutissima, 10 of A. villosula, and 5 of A. thailandica. In Korea, the mushroom is called heuk-mogi (). It is commercially cultivated and commonly used in japchae. References Chinese edible mushrooms Auriculariales Fungi of Asia Medicinal fungi Fungi in cultivation Buddhist cuisine Fungi described in 2014 Fungus species
Auricularia heimuer
[ "Biology" ]
1,033
[ "Fungi", "Fungus species" ]
70,629,248
https://en.wikipedia.org/wiki/Carbide%20bromide
Carbide bromides are mixed anion compounds containing bromide and carbide anions. Many carbide bromides are cluster compounds, containing one, two or more carbon atoms in a core, surrounded by a layer of metal atoms, encased in a shell of bromide ions. These ions may be shared between clusters to form chains, double chains or layers. The great majority of these carbide bromide compounds contain rare earth elements. Since these elements have similar properties, similar structures can be made by substituting the elements. R2CBr2 forms a structure with layers of R6C clusters that contain one carbon atom. Each layer has bromide coating the top and bottom. Very similar is R2C2Br2, which has layers of R6C2 clusters containing pairs of carbon atoms. This dicarbon unit is an anion (C24−) and contains a double bond. Layers have bromide on both sides, and so they are only weakly held together by van der Waals forces. If these layers are aligned with each other a 1T- form results with a small c measurement on the unit cell. In some compounds the layers are not quite aligned, but repeat after three layers, giving a 3R form with a larger c unit cell height. Where the layers align, the crystal system is trigonal. But if the layers never quite align at any height, a monoclinic crystal results. The C2 unit sits at an angle to the layers, and thus reduces symmetry compared to compounds with single carbon atoms in the cluster. In R2CBr there are layers of R6C that share bromide between layers. List References Bromides Carbides Mixed anion compounds
Carbide bromide
[ "Physics", "Chemistry" ]
350
[ "Matter", "Mixed anion compounds", "Salts", "Bromides", "Ions" ]
70,629,654
https://en.wikipedia.org/wiki/Marchandiomyces%20corallinus
Marchandiomyces corallinus is a lichenicolous fungus that parasitizes lichens, particularly those in the genera Physcia, Parmelia, Flavoparmelia, Lepraria, Pertusaria, Lasallia, and Lecanora. It is commonly found in eastern North America and Europe. Parasitism Despite most lichen parasites belonging to the phylum Ascomycota (95%), M. corallinus is in the phylum Basidiomycota. References Corticiales Lichenicolous fungi Fungi described in 1847 Fungi of Europe Fungi of North America Fungus species
Marchandiomyces corallinus
[ "Biology" ]
136
[ "Fungi", "Fungus species" ]
70,629,678
https://en.wikipedia.org/wiki/2D%20Materials%20%28journal%29
2D Materials is a monthly peer-reviewed scientific journal published by IOP Publishing. It covers fundamental and applied research on graphene and related two-dimensional materials. The editor-in-chief is Wencai Ren (Chinese Academy of Sciences). Abstracting and indexing The journal is abstracted and indexed in: Astrophysics Data System Chemical Abstracts Ei Compendex Inspec International Nuclear Information System ProQuest databases Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.5. References External links Materials science journals IOP Publishing academic journals Monthly journals English-language journals Academic journals established in 2014
2D Materials (journal)
[ "Materials_science", "Engineering" ]
136
[ "Materials science journals", "Materials science" ]
70,629,712
https://en.wikipedia.org/wiki/NGC%206822-WR%2012
NGC 6822-WR 12 is a WN-type Wolf-Rayet star located in the galaxy NGC 6822, about 1.54 million light years away in the constellation of Sagittarius. NGC 6822-WR 12 was the first Wolf-Rayet star to be discovered in the galaxy, and is one of only four known in the galaxy. Discovery In 1983, a Wolf-Rayet (WR) star was identified in the barred irregular galaxy NGC 6822. The appearance of strong ionised helium emission lines in its spectrum, together with ionised nitrogen emission lines but no carbon lines, led to the assignment of the spectral class WN3. At the time, it was the only known WR star in NGC 6822. NGC 6822-WR 12 was the 12th of 12 candidate WR stars found in NGC 6822 during a survey of NGC 6822 and IC 1613. In a follow-up study, only 4 of the WR candidates in NGC 6822 were confirmed as WR stars, and they are still the only WR stars known in NGC 6822. Properties High-resolution spectroscopy of NGC 6822-WR 12 gives a spectral type of WN4 and CMFGEN atmosphere models give a very high temperature of . Combined with a radius of , this leads to a bolometric luminosity of , which would likely make it one of the most luminous stars in its relatively small galaxy. Assuming mass-luminosity relations for Wolf-Rayet stars points to a very high mass, about 36 solar masses. NGC 6822-WR 12 has a powerful stellar wind, which ejects per year from its surface at a relatively slow speed of 1,100 kilometres per second. Composition As is typical of WN stars, NGC 6822-WR 12 has almost no hydrogen, it having been either fused to helium or lost through strong stellar winds. However, due to the very low metallicity of NGC 6822, similar to that of the Small Magellanic Cloud, it has a lower nitrogen abundance than that of galactic WN stars, containing just 0.3% nitrogen. The emission lines of ionised nitrogen in the spectrum are correspondingly weak. The rest of the star is helium and its spectrum is dominated by emission lines of ionised helium.
References Wolf–Rayet stars Sagittarius (constellation) Extragalactic stars NGC 6822
NGC 6822-WR 12
[ "Astronomy" ]
486
[ "Sagittarius (constellation)", "Constellations" ]
70,630,193
https://en.wikipedia.org/wiki/Trifolium%20ornithopodioides
Trifolium ornithopodioides, the bird's foot clover, is a species of flowering plant in the family Fabaceae. It is native to Europe, Madeira, and northwestern Africa, and has been introduced to Australia and New Zealand. It is a halophyte. References ornithopodioides Halophytes Flora of Ireland Flora of Great Britain Flora of the Netherlands Flora of Germany Flora of Hungary Flora of Southwestern Europe Flora of Italy Flora of Romania Flora of Crete Flora of the East Aegean Islands Flora of Madeira Flora of Morocco Flora of Algeria Plants described in 1753 Taxa named by Carl Linnaeus
Trifolium ornithopodioides
[ "Chemistry" ]
127
[ "Halophytes", "Salts" ]
70,630,441
https://en.wikipedia.org/wiki/Minimal%20BASIC
Minimal BASIC is a dialect of the BASIC programming language developed as an international standard. The effort started at ANSI in January 1974, and was joined in September by a parallel group at ECMA. The first draft was released for comments in January 1976 and the final standard, known alternately as ANSI X3.60-1978 or ECMA-55, was published in December 1977. The US National Bureau of Standards introduced the NBSIR 77-1420 test suite to ensure implementations met the definition. By this time, Microsoft BASIC was beginning to take over the market after its introduction on early microcomputers in 1975, and especially after the introduction of the 1977 "trinity": the Apple II, Commodore PET and TRS-80, all of which would cement MS-style BASICs as the de facto standard. ISO standardization of Minimal BASIC began as ISO 6373:1984 but was abandoned in 1998. An effort to produce a more powerful dialect, Full BASIC (also known as Standard BASIC), was not released until January 1987 and had little impact on the market. History Previous developments Dartmouth BASIC was introduced in May 1964 at Dartmouth College as a cleaned up, interactive language inspired by FORTRAN. The system brought together several concepts which were hot topics in the computer industry at the time, notably timesharing to allow multiple users to access a single machine, and direct interaction with the machine using computer terminals. General Electric, who supplied the GE-225 computer it ran on, marketed a slight variation to commercial users and saw immediate uptake. A number of other companies soon introduced similar systems of their own, selling online time by the minute. By the end of the 1960s there was a version of BASIC for almost every mainframe platform and online service. In 1966, Hewlett-Packard (HP) introduced a new minicomputer, the HP 2100. Intended to be used in laboratories and factory settings, the machine surprised the company when most units were instead sold for business processing.
Looking to take advantage of this, in November 1968 they introduced the HP 2000, a system using two HP 2100 CPUs which implemented timesharing to support up to 32 users. The system worked in a fashion similar to the Dartmouth model, using one machine to control input/output and another to run the programs. In contrast to the Dartmouth versions, which were compilers, HP Time-Shared BASIC was an interpreter. Interpreters quickly became common on smaller machines and minicomputers. Other vendors quickly copied the HP dialect, notably Data General for their Nova series which were very successful in the early 1970s. Wang Laboratories also had some success with their dedicated BASIC machines, the Wang 2200 series. Each version had its own differences. One holdout was Digital Equipment Corporation (DEC), who had been involved with the JOSS program at the Stanford Research Institute (SRI) and introduced their FOCAL language based on it. By the early 1970s the success of BASIC forced DEC to introduce a BASIC of their own with its own set of modifications. Standards efforts The divergence of BASIC led to interest in producing a standard to try to bring them back together. The first meetings on such a possibility took place in January 1974 under the newly formed ANSI working group X3J2. This led to a corresponding group being set up in September 1974 in Europe under the ECMA, TC 21. The two groups remained in close contact throughout the effort and released their respective standards at the same time. The first draft was released by ANSI in January 1976. The final version was prepared in June 1977, and officially adopted by the ECMA on 14 December 1977. Minimal BASIC was essentially the original 1964 Dartmouth BASIC written as a formal standard using an Extended Backus–Naur form with an associated test suite to ensure implementations complied with the definition.
It clarified formerly undefined concepts like whether GO TO and GOTO were the same thing, in this case stating that goto statement = GO space* TO line number, meaning GOTO, GO TO and even GO     TO were identical. Where differences between implementations existed, like in the handling of the statements or whether or not spaces were required between keywords and values, the standard always selected the Dartmouth pattern. It was always understood that Minimal BASIC was not really useful on its own as it lacked many common features like string manipulation. These more advanced features would be a focus of the follow-up effort, Full BASIC, which began serious work after the publication of Minimal. Full BASIC was not simply a version of Minimal with more features; instead, it was based on Dartmouth's Structured BASIC efforts and was designed to offer structured programming to support the construction of large programs. In contrast to Minimal, Standard BASIC was designed to significantly update BASIC. Irrelevance While the Minimal BASIC effort was taking place, the first widely available microcomputer was released, the Altair 8800. Shortly thereafter, Altair BASIC was released by Microsoft. Within the year, dozens of new micros were released and as many new versions of BASIC. By the time the Minimal standard was ratified, there were already tens of thousands of machines running some variation of the language. Which dialect any particular interpreter followed was generally based on the machines used to develop it; MS BASIC was developed on a PDP-10 and has many features from DEC's BASIC-PLUS, while Apple BASIC was written by Steve Wozniak based on an HP manual and uses HP's system of string handling. The first draft of the Minimal standard was released for comments in January 1976. Numerous comments were used to update the draft, and its final release was prepared in June 1977 and formally ratified by the ECMA on 14 December 1977.
The US National Bureau of Standards released the NBSIR 77-1420 test suite to allow vendors to test compliance with the standard. As there were no microcomputer vendors in the standards groups, the system mostly found use on mainframe versions, which invariably had many extensions. One of the few microcomputer versions to implement the standard was Microsoft's BASIC-80 for the Zilog Z80, better known as MBASIC, which gained compliance with the standard in its 5.0 version. After the release of Minimal, the standards groups turned their attention to Full BASIC, but this dragged on for years. The effort proceeded so slowly that the Dartmouth participants left and released their own version of the still-emerging standard as True Basic in 1984. This was bug-ridden and confusing, leading Jerry Pournelle to deride it as "madness" and John Dvorak to dismiss it as "sad" and "doomed to failure." Plans to move Minimal BASIC to the International Organization for Standardization (ISO) were abandoned, and the ANSI group broke up, leaving the original standards inactive. Description Minimal BASIC is closely based on early versions of Dartmouth BASIC and follows its conventions. The standard mostly clarifies certain limitations in an effort to produce a standard that can run on almost any machine. The following description assumes a basic familiarity with common BASICs, and highlights the differences in Minimal. Program code Like most BASIC implementations, Minimal is based on the underlying source code being edited using a line editor and thus every line of code in Minimal has to have a line number. The standard allows line numbers between 0 and 9999. In contrast to some interpreters, Minimal requires a space before every keyword, and a space or end-of-line after it. Keywords include BASE, DATA, DEF, DIM, END, FOR, GO, GOSUB, GOTO, IF, INPUT, LET, NEXT, ON, OPTION, PRINT, RANDOMIZE, READ, REM, RESTORE, RETURN, STEP, STOP, SUB, TAB, THEN and TO. Programs are required to have an END as their last line. INPUT may have an optional prompt string, but that is up to the implementation, not part of the standard.
did not allow a line number, an option seen in most interpreters of the era. FOR loops are top tested, and will not execute their body if the test fails on the first iteration. Variable names can consist of a single letter, or a letter and a single digit. Two-letter variable names are not allowed. Numbers are limited to the range 1E-38 to 1E38. String variables can have a maximum of 18 characters. Arrays can be one or two dimensional using DIM, but only numeric arrays are supported. All variables are normally allocated space in an associated one-dimensional array; without using DIM, they are given space for 11 items, indexes 0 to 10. The lower bound for arrays is typically 0, but using OPTION BASE can change the index to 1. There are 11 defined functions: ABS, ATN, COS, EXP, INT, LOG, RND, SGN, SIN, SQR and TAN. Operators include +, -, *, / and ^. Strings could only be compared for equals or not-equals; larger and smaller comparisons were not supported. Note that the logical operators AND, OR and NOT are not supplied. User-defined functions using DEF were supported, but only for numerics. No built-in or user functions for strings were available.
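Two of the lexical rules described above, the goto statement production (GO with any number of spaces before TO) and the one-letter-plus-optional-digit variable names, can be illustrated with a short sketch. This is a hedged Python illustration; the regular expressions and function names are this example's own approximations, not taken from the standard's grammar:

```python
import re

# Illustrative patterns (assumptions, not the standard's own grammar):
# "goto statement = GO space* TO line number", and variable names of one
# letter plus an optional digit (or a trailing $ for string variables).
GOTO_STMT = re.compile(r"^GO *TO +(\d{1,4})$")
NUMERIC_VAR = re.compile(r"^[A-Z][0-9]?$")
STRING_VAR = re.compile(r"^[A-Z]\$$")

def goto_target(stmt):
    """Return the target line number if stmt parses as a goto statement."""
    m = GOTO_STMT.match(stmt)
    return int(m.group(1)) if m else None

def is_valid_name(name):
    """True if name fits the variable-name rules described above."""
    return bool(NUMERIC_VAR.match(name) or STRING_VAR.match(name))

# GOTO, GO TO and GO     TO all parse as the same statement:
print([goto_target(s) for s in ("GOTO 100", "GO TO 100", "GO     TO 100")])
# A, A1 and A$ are legal; AB (two letters) and A12 are not:
print([is_valid_name(n) for n in ("A", "A1", "A$", "AB", "A12")])
```

The point of the sketch is that the standard pins down equivalences (GOTO versus GO TO) that earlier interpreters each decided for themselves.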
Example This code implements the Sieve of Eratosthenes:
1000 REM SIEVE OF ERATOSTHENES
1010 REM MODIFIED FROM QUICK BASIC MATH PROJECT DEMO
1020 REM
2010 REM L IS THE LIMIT OF THE SIEVE
2020 REM WE WILL FIND ALL PRIME NUMBERS UP TO L
2030 LET L = 1000
2040 REM N IS THE SIEVE ITSELF
2050 DIM N(1000)
2060 REM FILL THE SIEVE WITH ALL NUMBERS UP TO L
2070 FOR I = 1 TO L
2080 LET N(I) = I
2090 NEXT I
2100 REM START WITH THE FIRST PRIME NUMBER: 2
2110 LET P = 2
2120 PRINT P,
2130 REM "CROSS OUT" MULTIPLES OF P
2140 FOR I = P TO L STEP P
2150 LET N(I) = 0
2160 NEXT I
2170 REM FIND THE NEXT NUMBER NOT CROSSED OUT
2180 LET P = P + 1
2190 IF P = L THEN 2220
2200 IF N(P) <> 0 THEN 2120
2210 GOTO 2180
2220 PRINT
2230 END
Notes References Citations Bibliography Further reading External links Currently Maintained Open Source Implementations Jorge's bas55 ECMA-55 Minimal BASIC Interpreter John's ECMA-55 Minimal BASIC Compiler BASIC programming language American National Standards Institute standards Ecma standards
Minimal BASIC
[ "Technology" ]
2,075
[ "American National Standards Institute standards", "Computer standards", "Ecma standards" ]
70,631,222
https://en.wikipedia.org/wiki/Hygrocybe%20aurantiosplendens
Hygrocybe aurantiosplendens, commonly known as the orange waxcap, is a gilled fungus in the family Hygrophoraceae. It mainly occurs in Europe, but is also found in Siberia, and on both the East and West coasts of North America. It is uncertain if the continental ecotypes are in fact conspecific, and they are sometimes treated as distinct species. It inhabits old, unimproved, calcareous grasslands in Europe, and forests elsewhere. It is rare throughout its relatively broad range and is currently in decline due to habitat loss. It is classified as a "high diversity indicator" (HDI) species by the Joint Nature Conservation Committee (JNCC) in the U.K. because its presence indicates high-quality grasslands. It is red-listed as endangered or vulnerable in many European countries. Taxonomy It was originally described in Switzerland in 1954 by R. Haller Aar, a Swiss mycologist. The genus name comes from the Greek ῦγρὁς (= moist) + κυβη (= head), referring to the moisture-retaining caps of many of the species in this genus. The specific name comes from the Latin aurantius (= orange) + splendens (= shining). The placement of H. aurantiosplendens in the genus Hygrocybe has remained unchanged since it was named, although the synonym Hygrophorus aurantiosplendens (R.Haller Aar.) P.D.Orton is used, albeit extremely rarely. Description Initially the cap is conical maturing to plano-umbonate; colored bright orange to bright red, turning yellow with age. It is 3.5-5 cm (1.5-2 in) across, smooth, slimy to viscid when wet, with translucent margins. Gills are pale yellow, narrowly adnate; stipe is yellow, white at base, sometimes tapering from the base, lacking an annulus, 6-9 cm (2-3.5 in) long; 0.5-1 cm (0.2-0.4 in) thick. The spores are 8-10 x 4-7 μm, ellipsoid, smooth, inamyloid, and the 4-spored basidia are approximately 60 μm long, with a white spore print. No distinct smell or taste. Similar Species H. chlorophana, the golden waxcap, is visually similar to H.
aurantiosplendens; however, it is much more abundant. The gills of the former are adnexed (narrowly attached) to the stipe, not adnate; additionally, the latter often develops pruina near the top of the stipe. H. punicea is distinguished by having a rougher stem and larger spores. H. acutoconica has a dry conical cap and longer spores; Humidicutis marginata has a dry, pale cap and bright orange gills, which contrast with the pale gills and bright cap of H. aurantiosplendens. Habitat and distribution This species, like many other members of Hygrocybe, grows in calcareous, nutrient-poor grasslands in Europe. In other localities it can be found in forests with weakly acidic to basic soil. Although Hygrocybes have been thought of as saprotrophic, new evidence points to a biotrophic or symbiotic association with moss. The fruiting body can grow individually or in clusters. H. aurantiosplendens is widespread in Europe, yet it is very rare to uncommon throughout its range. The British Isles and Scandinavia appear to be the regions with the greatest abundance of H. aurantiosplendens, but it also occurs in Finland, Iceland, Western Russia and high elevations in Southern Europe. In Eastern North America its range extends sporadically from Maine south to Florida and west to Northern Wisconsin. On the West Coast it is largely restricted to coastal regions from Northern Washington to Central California; however, this western form may be distinct enough to be considered its own species. References aurantiosplendens Fungi described in 1954 Fungi of Asia Fungi of Europe Fungi of North America Fungus species
Hygrocybe aurantiosplendens
[ "Biology" ]
883
[ "Fungi", "Fungus species" ]
70,631,248
https://en.wikipedia.org/wiki/Thermally%20induced%20shape-memory%20effect%20%28polymers%29
The thermally induced unidirectional shape-memory effect is classified among the so-called smart materials. Polymers with a thermally induced shape-memory effect are new materials whose applications are being studied in different fields of science (e.g., medicine), communications and entertainment. Several systems have been reported and are in commercial use. The possibility of programming other polymers also exists, due to the number of copolymers that can be designed: the possibilities are almost endless. General information Polymers with thermally induced shape-memory effect are those polymers that respond to external stimuli and because of this have the ability to change their shape. The thermally induced shape-memory effect results from a combination of proper processing and programming of the system. This effect can be observed in polymers with very different chemical compositions, which opens a great possibility of applications. Description of the effect on polymers In the first step the polymers are processed by means of common techniques, such as injection molding, extrusion or thermoforming, at a temperature (THigh) at which the polymer melts, obtaining a final shape which is called the "permanent" shape. The next step is called programming of the system and involves heating the sample to a transition temperature (TTrans). At that temperature the polymer is deformed, reaching a shape called the "temporary" shape. Immediately afterwards the temperature of the sample is lowered. The final step of the effect involves the recovery of the permanent shape. The sample is heated to the transition temperature (TTrans) and within a short time the recovery of the permanent shape is observed. This effect is not a natural property of the polymer, but results from proper programming of the system with the appropriate chemistry.
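The three-step cycle just described (processing at THigh fixes the permanent shape, programming at TTrans fixes the temporary shape, reheating to TTrans recovers the permanent shape) can be summarized as a toy state model. This Python sketch is purely illustrative; the class, method names and temperatures are assumptions made for the example, not a real materials model:

```python
# Toy state model of the shape-memory cycle described above.
# All names and the threshold behaviour are illustrative assumptions.
class ShapeMemoryPolymer:
    def __init__(self, t_trans):
        self.t_trans = t_trans   # transition temperature (TTrans)
        self.permanent = None
        self.shape = None

    def process(self, shape):
        # Step 1: processing at THigh fixes the permanent shape.
        self.permanent = shape
        self.shape = shape

    def program(self, temp, temporary):
        # Step 2: deform at TTrans, then cool to fix the temporary shape.
        if temp >= self.t_trans:
            self.shape = temporary

    def recover(self, temp):
        # Step 3: reheating to TTrans releases the stored strain and
        # restores the permanent shape.
        if temp >= self.t_trans:
            self.shape = self.permanent

p = ShapeMemoryPolymer(t_trans=60)
p.process("rod")
p.program(temp=65, temporary="coil")
p.recover(temp=65)
print(p.shape)  # back to "rod"
```

The model captures the key asymmetry of the unidirectional effect: the temporary shape is held only until the material is reheated past TTrans, after which only the permanent shape remains.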
For a polymer to exhibit this effect, it must have two components at the molecular level: bonds (chemical or physical) to determine the permanent shape and "trigger" segments with a TTrans to fix the temporary shape. Characteristics of the effect on polymers Metals exhibit a bidirectional shape-memory effect, maintaining one shape at each temperature. Polymers recover their shape only once. Polymers can change their shape with elongations up to 200% while metals have a maximum of 8-10% elongation. Recovery in metals and ceramics involves a change in crystal structure, while recovery in polymers is due to the action of entropic forces and anchor points. Polymers can be designed according to the desired application; they can be biodegradable, drug delivery systems (medicinal), antibacterial, etc. The transition temperature is designed with "trigger" segments, which makes temperature adjustment easier than in ceramics, since they depend on equiatomic quantities. Functioning It should first be noted that the primary inelastic mechanism of these polymers is the mobility of the chains and the conformational rearrangement of the groups. Then the effect on semi-crystalline and amorphous polymers must be distinguished. In both cases, anchor points must be created that act as "triggers" for the effect. In the case of amorphous polymers, these will be the knots or "tangles" of the chains, and in the case of semi-crystalline polymers, the crystals themselves will form these anchor points. By modifying the shape of the material under minimal critical stress, the chains slide and a metastable structure is created, which increases the organization and order of the chains (lower entropy). When the deformation load is eliminated, the anchor points provide a storage mechanism for macroscopic stresses in the form of small localized stresses and decreasing entropy.
In the glassy state the rotational motions of the molecules are frozen and impeded; as the temperature increases and the glass transition is reached, these motions thaw, rotations and relaxations occur, and the molecules take the form that is entropically most favorable to them, the one with the lowest energy. These movements are called the relaxation process, and the formation of "random coils" to eliminate stresses is called shape-memory loss. A polymer will exhibit the shape-memory effect if it is susceptible to being stabilized in a given state of deformation, preventing the molecules from slipping and regaining their higher entropy (lower energy) form. This can be achieved almost entirely by creating crosslinking or vulcanization; these new bonds act as anchors and prevent the relaxation of the chains. The anchor points can be physical or chemical. Comparison with metals and ceramics The unidirectional shape-memory effect was first observed by Chang and Read in 1951 in a gold-cadmium alloy, and in 1963 Buehler described this effect for nitinol, which is an equiatomic nickel-titanium alloy. This effect in metals and ceramics is based on a change in the crystal structure, called a martensitic phase transition. The disadvantage of these materials is that they are equiatomic alloys, and deviations of 1% in the composition modify the transition temperature by approximately 100 K. Some metals and ceramics present the effect bidirectionally, which means that the material holds one shape at one temperature and another shape at a second temperature; if the first temperature is restored, the first shape is also recovered. This is achieved by training the material for each shape at each temperature. Metals and ceramics with a thermally induced bidirectional shape-memory effect have had great application in medical implants, sensors, transducers, etc. However, many present a risk due to their high toxicity.
Phases in the system To obtain the effect, it is necessary to achieve a phase separation. One of these phases works as the trigger for the temporary form, using a transition temperature that can be Tm or Tg, called TTrans in this effect. A second phase has the higher transition temperature; above this temperature the polymer melts and is processed by conventional methods. The ratio of the elements forming the phase separation largely regulates the transition temperature TTrans; this is much easier to control than in metallic alloys. An example of this is the poly(ethylene oxide-ethylene terephthalate) or EOET copolymer. The poly(ethylene terephthalate) (PET) segment has a relatively high Tg and Tm and is commonly referred to as the "hard" segment, whereas poly(ethylene oxide) (PEO) has a relatively low Tm and Tg and is referred to as the "soft" segment. In the final polymer these segments separate into two phases in the solid state. PET has a high degree of crystallinity, and the formation of these crystals prevents the flow and rearrangement of the PEO chains as they are stretched at temperatures higher than their Tm. Experimentation Achieving the effect A commercial, high-purity (non-recycled) polymer sample with a known molecular mass distribution can be obtained or synthesized according to standard procedures. Common properties such as elastic modulus, tan δ, crystallinity, viscosity and density should be characterized. The anchor points, physical or chemical (chain entanglement, crystallinity or vulcanization), must be decided. If crosslinking with slight vulcanization is desired, standardized methods for each polymer must be taken into account. PCO, for example, is a polymer without the shape-memory effect because it does not present a clear "plateau", but the addition of a minimal amount of peroxide (~1%) provides PCO with all the requirements to present this effect.
A permanent stress-free shape with known dimensions is prepared by conventional methods. The system is programmed, i.e. it is heated up to TTrans and at that temperature the shape is modified by applying pressure or stress. The material is then cooled and finally the pressure or stress is removed. After heating the sample again to TTrans, the stresses are released and the permanent shape is recovered. Some polymers fatigue first, so each system can be evaluated with a simple experiment that consists of programming the system 10 or 20 times in a row and measuring the recovery (in %) and the recovery time. Crystallizable polymers Polymers that can crystallize are (with the exception of PP) guaranteed to exhibit this effect, mainly due to their ordering capacity, which is reflected in the crystallinity: the crystals have affinity for their constituent elements and form new bonds, which provide anchoring forces that give stability to the temporary shape. Crystallization, vulcanization, and final properties To analyze the behavior of the crystals in this type of polymer, the WAXS and DSC techniques are used; these techniques help to determine what percentage of the polymer is crystalline and how the crystals are organized. This matters because crystallinity decreases as crosslinking increases, since the chains lose the ability to arrange themselves, and order is essential to achieve crystallinity. A second problem present when crosslinking molecules is melting, since an excess of crosslinking modifies the molecule in such a way that it stops melting (similar to a thermoset) and therefore the temporary shape cannot be obtained. The control of curing, either by electromagnetic waves or with peroxides, is very important since curing increases the TTrans and decreases the crystallinity, both determining factors in the shape-memory effect.
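The cyclic programming experiment described above is usually quantified with the strain-fixity ratio Rf (how well the temporary shape is held) and the strain-recovery ratio Rr (how much of the programmed strain is recovered on reheating), as defined in the shape-memory polymer literature. A minimal sketch; the strain values below are hypothetical illustration data, not measurements from the text:

```python
def fixity_ratio(strain_fixed, strain_loaded):
    """Rf: strain retained after unloading divided by strain imposed during programming."""
    return strain_fixed / strain_loaded

def recovery_ratio(strain_loaded, strain_residual, strain_residual_prev=0.0):
    """Rr for one cycle: fraction of the programmed strain recovered on reheating,
    relative to the residual strain left over from the previous cycle."""
    return (strain_loaded - strain_residual) / (strain_loaded - strain_residual_prev)

# Hypothetical 3-cycle experiment: 100% strain programmed each cycle, with a small
# residual (unrecovered) strain accumulating from fatigue, as the text warns.
strain_loaded = 100.0
residuals = [0.0, 2.0, 3.5, 4.5]   # residual strain before/after each cycle, in %
for n in range(1, 4):
    rr = recovery_ratio(strain_loaded, residuals[n], residuals[n - 1])
    print(f"cycle {n}: Rr = {rr:.3f}")
```

Plotting Rr against cycle number over 10 or 20 cycles, as suggested above, shows whether the system fatigues.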
In the case of biocompatible semicrystalline systems such as poly(ε-caprolactone) and poly(n-butyl acrylate) crosslinked by photopolymerization, it has been reported that the crystallization behavior is affected by the cooling rate, as in any other semicrystalline polymer, but the heat of crystallization remains independent of the cooling rate. The influence of the crosslinking of the molecules, the cooling rate and the crystallization behavior are specific to each system and impossible to enumerate, since the synthesis possibilities are almost infinite. Crystallizable polymers such as oligo(ε-caprolactone) can have amorphous segments such as poly(n-butyl acrylate), and the molecular mass ratio of the two determines the behavior of the system in programming the temporary shape and in recovery of the permanent shape. Factors that influence the effect Molecular mass of the crosslinked polymer. Molecular mass of the crystallizable polymer. Degree of crosslinking. Phase separation. Moduli of the original polymers and their proportion in the copolymer. Moisture (in polymers susceptible to moisture degradation). Cooling rate. Amorphous polymers If the polymeric system is amorphous, then the anchor points of the crystalline structure are not available, and the only way to ensure the stability of the temporary shape is through chain entanglements (physical entanglements rather than chemical crosslinking), in addition to the possibility of crosslinking. Relaxation processes In the glassy state, the movements of the long chain segments are frozen. The movements of these segments depend on an activation temperature that brings the polymer to a softened, elastic state; the rotation about the carbon bonds and the movements of the chains then no longer face strong impediments to rearranging and acquiring the conformation that requires less energy. The chains then "unravel", forming random coils, without order and therefore with higher entropy.
If a polymer sample is stretched for a short time in the elastic range, the sample will recover its original shape when the load is removed, but if the load remains for a sufficiently long period, the chains rearrange and the original shape is not recovered; the result is an irreversible deformation, also called a relaxation process (in this case: creep). For a polymer to exhibit the thermally induced shape-memory effect, it is necessary to fix the chains with anchor points to avoid these relaxation processes that inelastically modify the system. Glass transition Amorphous polymers do not have a melting temperature (Tm) like semi-crystalline polymers; they have only a glass transition temperature (Tg). This has a decisive influence on the behavior of shape-memory polymer systems. Even a crystalline copolymer system can lose its crystallinity after treatment with a crosslinker and become practically amorphous. An amorphous polymer depends on the level of crosslinking or the degree of polymerization to exhibit this effect. An example is poly(norbornene), a linear, amorphous polymer with a content of 70 to 80% trans bonds in commercial products, a molecular mass of approximately 3×10⁶ g mol⁻¹ and a Tg of approximately 35 to 45 °C. Because it achieves an unusually high degree of polymerization, chain entanglements can be relied upon as anchor points to achieve the thermally induced shape-memory effect. This polymer therefore relies solely on physical anchor points. When heated up to Tg, the material abruptly changes from a rigid state to a softened, rubbery state. To achieve the effect, the shape must be changed rapidly to avoid rearrangement of the segments of the polymer chains, and the material must then immediately be cooled, also very rapidly, below Tg. Reheating the material back to Tg will show the recovery of the original shape.
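The creep behavior described at the start of this passage, full recovery after a short load but permanent deformation after a long one, can be illustrated with the simplest viscoelastic picture: a Maxwell element (an elastic spring in series with a viscous dashpot). This is a generic textbook model, not one specified in the text; the stress, modulus and viscosity values are arbitrary illustrative numbers:

```python
def maxwell_strain(stress, modulus, viscosity, t_load):
    """Strain of a Maxwell element after holding `stress` for t_load seconds.
    Returns (recoverable elastic strain, permanent viscous strain)."""
    elastic = stress / modulus             # recovered when the load is removed
    viscous = stress * t_load / viscosity  # chains have slipped: permanent set
    return elastic, viscous

# Same stress, short vs long hold: the permanent strain grows with time under load,
# which is why anchor points are needed to suppress chain slippage.
for t in (1.0, 1000.0):
    e, v = maxwell_strain(stress=1e6, modulus=1e9, viscosity=1e12, t_load=t)
    print(f"hold {t:>6.0f} s: recoverable strain = {e:.1e}, permanent strain = {v:.1e}")
```

Crosslinks or crystalline anchor points correspond, in this picture, to raising the effective viscosity toward infinity so the permanent term stays negligible.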
Influence of chemical structure In designing copolymers for the thermally induced shape-memory effect it is very important to keep in mind that a slight change in chemical structure (cis/trans ratios, tacticity, molecular mass, etc.) produces a significant change in the shape-memory polymer. An example is the copolymer of poly(methyl methacrylate-co-methacrylic acid), or poly(MAA-co-MMA), compared to poly(MAA-co-MMA)-PEG, where PEG is short for poly(ethylene glycol), which forms complexes in the copolymer. The changes in the morphology of the material caused by including PEG provide the shape-memory effect to the copolymer, showing two phases: a three-dimensional network providing a stable phase, and a reversible phase formed by the amorphous part of the PEG-PMAA complexes. The complexes show a high storage modulus, so when a PEG of higher molecular mass is introduced into the copolymer, an increase in the elastic modulus, a higher modulus in the glassy state and faster recovery are observed. Its properties can be studied with differential scanning calorimetry (DSC), wide-angle X-ray diffraction (WAXD) and dynamic mechanical analysis (DMA) techniques to determine its physicochemical arrangement. Overview For a polymer to exhibit the thermally induced shape-memory effect, it must have anchor points for the temporary and permanent shapes. These can be physical (chain entanglements, crystals) or chemical (chemical crosslinking, curing, vulcanization). This effect in polymers depends on entropic forces and not on martensitic transitions as in metals. The most important physical properties are: elastic modulus, recovery speed, and temporary-shape stability. The transition temperature TTrans can be Tm or Tg or a mixture of both. All crystalline polymers (except PP) can exhibit the thermally induced shape-memory effect. Inelastic mechanisms that decrease the effect are: moisture degradation (for moisture-sensitive polymers, e.g.
polyurethanes), unraveling of the chains, and degradation of the bonds that fix the permanent or temporary shape. Applications Most of the applications of polymers with this effect are only suggestions for now; many possibilities have been proposed, but so far only a few have been used, the most important being medical devices and automotive elements, although the greatest success has been achieved with heat-shrinkable polyethylene, which is also an exception in the programming step, since it is processed in a different way. Healthcare applications Orthodontic items, such as wires and foams for endovascular procedures. Microelements for intelligent suturing. Intravenous needles that soften in the body and laparoscopy devices. Drug delivery systems. In-body degradable implants for minimally invasive surgeries. Inner soles of orthopedic or special-needs shoes and utensils for people with disabilities. Intravenous catheters. Everyday life applications Seals for adjustable pipes and fittings, shrinkable or adjustable pipes. Braille reprintable boards and reprintable advertisements. Adjustable anti-corrosion films. Hair for dolls, toys, hair styling items. New items packaged in smaller volume that change their shape upon first use. Protections for automobiles, fenders, etc. Artificial nails. Smart textiles. See also Shape-memory polymer Shape-memory alloy Polymer Copolymer Smart material Bibliographical references Charlesby A. Atomic Radiation and Polymers. Pergamon Press, Oxford, pp. 198–257 (1960). Gall, K; Dunn, M; Liu, Y. Internal stress storage in shape memory polymer nanocomposites. Applied Physics Letters. 85, (Jul-2004). Jeong, Han Mo; Song H, Chi W. Shape memory effect of poly(methylene-1,3-cyclopentane) and its copolymer with polyethylene. Polymer International, 51:275–280 (2002). Kawate, K. Creep Recovery of Acrylate Urethane Oligomer/Acrylate Networks. Creep recovery, shape memory. Journal of Polymer Science. 35.
Kim B K, Lee S Y, Xu M. Polyurethanes having shape-memory effects. Polymer 37: 5781–93, (1998). Langer, R; Tirrell, D. A. Designing materials for biology and medicine. Nature 428: (Apr-2004). Lendlein, A; Kelch, S; Kratz, K. Shape-memory Polymers. Encyclopedia of Materials: Science and Technology. 1–9. (2005). Lendlein, A; Langer, R. Biodegradable, elastic shape-memory polymers for potential biomedical applications. Science. 296, 1673–1676 (2002). Lendlein, A; Kelch, S. Shape-Memory Polymers. Angew. Chem. Int. Ed. 41: 2034–2057. (2002). Lendlein, A; Schmidt, A M; Langer R. AB-polymer networks based on oligo(ε-caprolactone) segments showing shape-memory properties. Proc. Natl. Acad. Sci. USA. 98(3): 842–7 (2001). Li F, Chen Y, Zhu W, Zhang X, Xu M. Shape memory effects of polyethylene/nylon 6 graft copolymers. Polymer 39(26):6929–6934 (1998). Liu, Chun, Mather. Chemically Cross-Linked Polycyclooctene: Synthesis, Characterization, and Shape Memory Behavior. Macromolecules, 35: 9868–9874 (2002). Nakasima A, Hu J, Ichinosa M, Shimada H. Potential application of shape-memory plastic as elastic material in clinical orthodontics. (1991) Eur. J. Orthodontics 13:179–86. Ortega, Alicia M; Gall, Ken. The Effect of Crosslink Density on the Thermo-Mechanical Response of Shape Memory Polymers. Peng P; Wang, W; Xuesi C; and Jing X. Poly(ε-caprolactone) Polyurethane and Its Shape-Memory Property. Biomacromolecules 6:587–592 (2005). Wang, M; Zhang, L. Recovery as a Measure of Oriented Crystalline Structure in Poly(ether ester)s Based on Poly(ethylene oxide) and Poly(ethylene terephthalate) Used as Shape Memory Polymers. Journal of Polymer Science: Part B: Polymer Physics, 37: 101–112 (1999). Yiping C; Ying G; Juan D; Juan L; Yuxing P; Albert S. Hydrogen-bonded polymer network—poly(ethylene glycol) complexes with shape memory effect. Journal of Materials Chemistry. 12: 2957–2960 (2002).
Katime I, Katime O, Katime D "Los materiales inteligentes de este Milenio: los hidrogeles polímeros". Editorial de la Universidad del País Vasco, Bilbao 2004. ISBN 84-8373-637-3. Katime I, Katime O y Katime D."Introducción a la Ciencia de los materiales polímeros: Síntesis y caracterización". Servicio Editorial de la Universidad del País Vasco, Bilbao 2010. ISBN 978-84-9860-356-9 Polymer chemistry Polymer physics Polymers
Thermally induced shape-memory effect (polymers)
[ "Chemistry", "Materials_science", "Engineering" ]
4,337
[ "Polymer physics", "Polymers", "Materials science", "Polymer chemistry" ]
70,632,581
https://en.wikipedia.org/wiki/Habitability%20of%20yellow%20dwarf%20systems
Habitability of yellow dwarf systems defines the suitability for life of exoplanets belonging to yellow dwarf stars. These systems are the object of study among the scientific community because they are considered the most suitable for harboring living organisms, together with those belonging to K-type stars. Yellow dwarfs comprise the G-type stars of the main sequence, with masses between 0.9 and 1.1 M☉ and surface temperatures between 5000 and 6000 K, like the Sun. They are the third most common in the Milky Way Galaxy and the only ones in which the habitable zone coincides completely with the ultraviolet habitable zone. Since the habitable zone is farther away in more massive and luminous stars, the separation between the main star and the inner edge of this region is greater in yellow dwarfs than in red and orange dwarfs. Therefore, planets located in this zone of G-type stars are safe from the intense stellar emissions that occur after their formation and are not as affected by the gravitational influence of their star as those belonging to smaller stellar bodies. Thus, all planets in the habitable zone of such stars exceed the tidal locking limit and their rotation is therefore not synchronized with their orbit. The Earth, orbiting a yellow dwarf, represents the only known example of planetary habitability. For this reason, the main goal in the field of exoplanetology is to find an Earth analog planet that meets its main characteristics, such as size, average temperature and location around a star similar to the Sun. However, technological limitations make it difficult to find these objects due to the infrequency of their transits, a consequence of the distance that separates them from their stars or semi-major axis. Characteristics Yellow dwarf stars correspond to the G-class stars of the main sequence, with a mass between 0.9 and 1.1 M☉, and surface temperatures between 5000 and 6000 K. 
Since the Sun itself is a yellow dwarf, of type G2V, these types of stars are also known as solar analogs. They rank third among the most common main-sequence stars, after red and orange dwarfs, representing about 10% of the stars in the Milky Way. They remain in the main sequence for approximately 10 billion years. After the Sun, the closest G-type star to the Earth is Alpha Centauri A, 4.4 light-years away and belonging to a multiple star system. All stars go through a phase of intense activity after their formation due to their rotation, which is much faster at the beginning of their lives. The duration of this period varies according to the mass of the object: the least massive stars can remain in this state for up to 3 billion years, compared to 500 million years for G-type stars. Studies by the team of Edward Guinan, an astrophysicist at Villanova University, reveal that the Sun rotated ten times faster in its early days. Since the rotation speed of a star affects its magnetic field, the Sun's X-ray and UV emissions were hundreds of times more intense than they are today. The extension of this phase in red dwarfs, as well as the probable tidal locking of their potentially habitable planets with respect to them, could wipe out the magnetic field of these planets, resulting in the loss of almost all their atmosphere and water to space by interaction with the stellar wind. In contrast, the semi-major axis of planetary objects belonging to the habitable zone of G-type stars is wide enough to allow planetary rotation. In addition, the duration of the period of intense stellar activity is too short to eliminate a significant part of the atmosphere on planets with masses similar to or greater than that of the Earth, which have a gravity and magnetosphere capable of counteracting the effects of stellar winds.
Habitable area The habitable zone around yellow dwarfs varies according to their size and luminosity, although the inner boundary is usually at 0.84 AU and the outer one at 1.67 AU in a G2V-class dwarf like the Sun. For a G5V-class star with a radius of 0.95 R☉—smaller than the Sun—the habitable zone would correspond to the region located between 0.8 and 1.58 AU with respect to the star. For a G0V star—larger than the Sun—it would be located at a distance of between 1 and 2 AU from the stellar body. In orbits smaller than the inner boundary of the habitable zone, a process of water evaporation, hydrogen separation by photolysis and loss of hydrogen to space by hydrodynamic escape would be triggered. Beyond the outer limit of the habitable zone, temperatures would be low enough to allow CO2 condensation, which would lead to an increase in albedo and a feedback reduction of the greenhouse effect until a permanent global glaciation occurred. The size of the habitable zone is directly proportional to the mass and luminosity of its star, so the larger the star, the larger the habitable zone and the farther it lies from the star's surface. Red dwarfs, the smallest of the main sequence, have a very small habitable zone close to them, which subjects any potentially habitable planets in the system to the effects of their star, including probable tidal locking. Even in a small yellow dwarf like Tau Ceti, of type G8.5V, the locking limit is at 0.4237 AU versus the 0.522 AU that marks the inner boundary of the habitable zone, so any planetary object orbiting a G-class star in this region will far exceed the locking limit, and will have day-night cycles like Earth. In yellow dwarfs, this region coincides entirely with the ultraviolet habitable zone. This area is determined by an inner limit beyond which exposure to ultraviolet radiation would be too high for DNA and by an outer limit that provides the minimum levels for living things to carry out their biogenic processes.
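The outward shift of the habitable zone with luminosity described above is often approximated, to first order, by scaling the solar boundaries (0.84 and 1.67 AU, from the text) by the square root of the star's luminosity in solar units, since received flux falls off with distance squared. A rough sketch; the two sample luminosities are illustrative assumptions, not values from the text:

```python
import math

SOLAR_HZ = (0.84, 1.67)  # inner/outer boundary for a G2V dwarf, in AU (from the text)

def habitable_zone(luminosity_solar):
    """First-order habitable-zone estimate: boundaries scale with sqrt(L / L_sun)."""
    scale = math.sqrt(luminosity_solar)
    return tuple(round(b * scale, 2) for b in SOLAR_HZ)

# Illustrative luminosities for a smaller and a larger yellow dwarf.
for label, lum in (("dimmer G dwarf, L = 0.6 L_sun", 0.6),
                   ("brighter G dwarf, L = 1.35 L_sun", 1.35)):
    print(label, "->", habitable_zone(lum), "AU")
```

This simple scaling reproduces the trend quoted in the text, with the habitable zone of a G0V star reaching out toward roughly 2 AU.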
In the solar system, this region is located between 0.71 and 1.9 AU with respect to the Sun, compared to the 0.84–1.67 AU that mark the extremes of the habitable zone. Life potential Given the length of the main sequence in G-type stars, the levels of ultraviolet radiation in their habitable zone, the semi-major axis of the inner boundary of this region and the distance to their tidal locking limit, among other factors, yellow dwarfs are considered to be the most hospitable to life next to K-type stars. One goal in exoplanetary research is to find an object that has the main characteristics of our planet, such as radius, mass, temperature, atmospheric composition and belonging to a star similar to the Sun. In theory, these Earth analogs should have comparable habitability conditions that would allow the proliferation of extraterrestrial life. Based on the serious problems for planetary habitability presented by red dwarf systems and by stellar bodies of type F or hotter, the only stars that might offer a bearable scenario for life would be those of types K and G. Solar analogs used to be considered the most likely candidates to host a solar-like planetary system, and the best positioned to support carbon-based life forms and liquid water oceans. Subsequent studies, such as "Superhabitable Worlds" by René Heller and John Armstrong, establish that orange dwarfs may be more suitable for life than G-type dwarfs, and may host hypothetical superhabitable planets. However, yellow dwarfs still represent the only stellar type for which there is evidence of their suitability for life. Moreover, while in other types of stars the habitable zone does not coincide entirely with the ultraviolet habitable zone, in G-class stars the habitable zone lies entirely within the limits of the latter.
Finally, yellow dwarfs have a much shorter initial phase of intense stellar activity than K-type stars, which allows planets belonging to solar analogs to preserve their primordial atmospheres more easily and to maintain them for much of the main sequence. Discoveries Most of the exoplanets discovered have been detected by the Kepler space telescope, which uses the transit method to find planets around other systems. This procedure analyzes the brightness of stars to detect dips that indicate the passage of a planetary object in front of them from the perspective of the observatory. It is the method that has been most successful in exoplanetary research, together with the radial velocity method, which consists of analyzing the wobbles caused in stars by the gravitational effects of the planets orbiting them. The use of these procedures with the limitations of current telescopes makes it difficult to find objects with orbital periods similar to or longer than the Earth's, which generates a bias in favor of planets with a short semi-major axis. As a consequence, most of the exoplanets detected are either excessively hot or belong to low-mass stars, whose habitable zone is close to them, and any object orbiting in this region will have a significantly shorter year than the Earth's. Planetary bodies belonging to the habitable zone of yellow dwarfs, such as Kepler-22b, Kepler-452b or Earth, take hundreds of days to complete an orbit around their star. The higher luminosity of these stars, the scarcity of transits and the semi-major axis of their planets located in the habitable zone reduce the probability of detecting this class of objects and considerably increase the number of false positives, as in the cases of KOI-5123.01 and KOI-5927.01. The ground-based and orbital observatories projected for the next ten years may increase the discoveries of Earth analogs in yellow dwarf systems.
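The brightness dips that the transit method looks for have a fractional depth of roughly (Rp/Rs)², the fraction of the stellar disk covered by the planet, which is why Earth-sized planets around Sun-like stars are so hard to detect. A back-of-the-envelope sketch; the stellar radius used for a Kepler-452-like star is an assumed round value, not a figure from the text:

```python
R_SUN_IN_EARTH_RADII = 109.2  # approximate solar radius in Earth radii

def transit_depth(planet_radius_earth, star_radius_solar):
    """Fractional flux drop during a central transit: (Rp / Rs)**2."""
    ratio = planet_radius_earth / (star_radius_solar * R_SUN_IN_EARTH_RADII)
    return ratio ** 2

# Kepler-452b: ~1.6 Earth radii (from the text); stellar radius assumed ~1.1 R_sun.
depth = transit_depth(1.6, 1.1)
print(f"depth ~ {depth:.2e}  (~{depth * 1e6:.0f} parts per million)")
```

A dip of a few hundred parts per million, recurring only once per roughly 385 days, illustrates why these detections are rare and prone to false positives.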
Kepler-452b Kepler-452b lies 1400 light-years from Earth, in the Cygnus constellation. Its radius of about 1.6 R⊕ places it right on the boundary separating telluric planets from mini-Neptunes established by the team of Courtney Dressing, a researcher at the Harvard-Smithsonian Center for Astrophysics (CfA). If the planet's density is similar to Earth's, its mass will be about 5 M⊕ and its gravity twice as great. It belongs to Kepler-452, a G2V-type yellow dwarf like the Sun, with an estimated age of 6 billion years (6 Ga) versus the solar system's 4.5 Ga. The mass of its star is slightly higher than that of the Sun, at 1.04 M☉, so despite the fact that the planet completes an orbit every 385 days versus 365 terrestrial days, it is warmer than the Earth. If it has a similar albedo and atmospheric composition, its average surface temperature will be around 29 °C. According to Jon Jenkins of NASA's Ames Research Center, it is not known whether Kepler-452b is a terrestrial planet, an ocean world or a mini-Neptune. If it is an Earth-like telluric object, it is likely to have a higher concentration of clouds and intense volcanic activity, and to be about to suffer a runaway greenhouse effect similar to that of Venus due to the constant increase in the luminosity of its star, after having remained throughout the main sequence in its habitable zone. Doug Caldwell, a SETI Institute scientist and member of the Kepler mission, estimates that Kepler-452b may be undergoing the same process that the Earth will undergo in a billion years. Tau Ceti e Tau Ceti e orbits a G8.5V-type star in the constellation Cetus, 12 light-years from Earth. It has a radius of 1.59 R⊕ and a mass of 4.29 M⊕, so like Kepler-452b it lies at the separation boundary between terrestrial and gaseous planets. With an orbital period of only 168 days, its temperature, assuming an Earth-like atmospheric composition and albedo, would be about 50 °C.
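Temperature figures like the ~29 °C quoted for Kepler-452b start from equilibrium-temperature estimates, the temperature at which a planet with a given albedo re-radiates the flux it receives, which are then adjusted upward for greenhouse warming (about 33 K in Earth's case). A sketch under stated assumptions: the luminosity of Kepler-452 (~1.2 L☉) and its planet's semi-major axis (~1.05 AU, implied by the 385-day period around a 1.04 M☉ star) are assumptions, not values from the text:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def equilibrium_temp(lum_solar, a_au, albedo=0.3):
    """Equilibrium temperature (K) of a fast-rotating planet, no greenhouse effect."""
    flux = lum_solar * L_SUN / (4 * math.pi * (a_au * AU) ** 2)
    return (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

# Sanity check with Earth: ~255 K, i.e. ~288 K once ~33 K of greenhouse is added.
t_earth = equilibrium_temp(1.0, 1.0)

# Kepler-452b under the assumptions above: slightly warmer than Earth, as the text says.
t_452b = equilibrium_temp(1.2, 1.05)
print(round(t_earth), round(t_452b))
```

The exact surface temperature then depends on the assumed luminosity, albedo and greenhouse contribution, which is why such figures carry large uncertainties.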
The planet is located just at the inner edge of the habitable zone and receives about 60% more light than Earth. Its size may also imply a higher concentration of gases in its atmosphere, making it a super-Venus-type object. Otherwise, it could be the first thermoplanet discovered. Kepler-22b Kepler-22b is at a distance of 600 light-years, in the Cygnus constellation. It completes one orbit around its G5V-type star every 290 days. Its radius is 2.35 R⊕ and its estimated mass, for an Earth-like density, would be 20.36 M⊕. If the planet's atmosphere and albedo were similar to Earth's, its surface temperature would be around 22 °C. It was the first exoplanet found by the Kepler telescope belonging to the habitable zone of its star. Because of its size, considering the limit established by Courtney Dressing's team, it is very likely to be a mini-Neptune. See also Astrobiology Circumstellar habitable zone Earth analog Superhabitable planet Habitability of natural satellites Habitability of red dwarf systems Habitability of K-type main-sequence star systems Habitability of F-type main-sequence star systems List of potentially habitable exoplanets References Bibliography Exoplanets NASA Planets Stars Planetary habitability
Habitability of yellow dwarf systems
[ "Astronomy" ]
2,688
[ "Astronomical objects", "Stars", "Planets" ]
70,633,260
https://en.wikipedia.org/wiki/Hector%20%28microcomputer%29
Hector (or Victor Lambda) is a series of microcomputers produced in France in the early 1980s. In January 1980, Michel Henric-Coll founded a company named Lambda Systems in Toulouse to import a computer (produced by Interact Electronics Inc of Ann Arbor, Michigan) to France. The computer was sold under the name of Victor Lambda. Lambda Systems went bankrupt in July 1981, along with Interact. In December 1981, Micronique, an electronic components company based in southern Paris, acquired the rights to the Victor Lambda. In 1982, Victor Lambda Diffusion, a subsidiary, distributed the Victor Lambda. The first machines, built in the United States, were not a success, and the following models were designed and produced in France at the headquarters of the Micronique company. The company used the slogan: "The French Personal Computer". In 1983, the Victor was renamed Hector, to avoid confusion with the machines from the Californian company Victor Technologies (formerly Sirius Systems Technology). The last model introduced was the Hector MX, with production of the series ending in 1985. The series was not successful, due to its focus on the French market, intense competition from Amstrad machines and high prices. Models Victor Lambda The Victor Lambda was a rebranded Interact Home Computer (also called The Interact Family Computer 2) microcomputer. Introduced in 1980, it had a chiclet keyboard and a built-in cassette recorder for data storage. Specifications: CPU: Intel i8080, 2.0 MHz Memory: 8K RAM, expandable to 16K RAM; 2K ROM OS: Basic Level II (Microsoft BASIC v4.7); EDU-Basic (both loaded from tape) Keyboard: 53-key chiclet Display: 17 × 12 characters text in 8 colors; 112 × 78 with 4 colors from a palette of 8 Sound: SN76477 (one voice, four octaves) Ports: Television (RGB), two joysticks, RS232 (optional) Built-in cassette recorder (1200 B/s) PSU: External AC transformer Hector 1 (Victor Lambda 2) The Hector 1 was a 1983 computer, based on the Victor Lambda.
Initially sold as the Victor Lambda 2, it was renamed to avoid trademark confusion. Also known as the Hector 16K. More than 100 games were published for this machine. It was eventually considered an entry-level machine. Specifications: CPU: Zilog Z80 @ 1.7 MHz Memory: 16K RAM OS: Basic Level III (loaded from tape) Keyboard: mechanical Display: 17 × 12 text in 8 colors, 112 × 78 in 8 colors Sound: SN76477N (one voice, four octaves) Ports: Television (RGB), two joysticks, RS232 (optional) Built-in cassette recorder (1200 B/s) PSU: Built-in Hector 2 HR (Victor Lambda 2HR) The Hector 2HR is a 1983 computer with a Zilog Z80 processor, 16KB of ROM and 48KB of RAM. Initially sold as the Victor Lambda 2HR, it was renamed to avoid trademark confusion. Graphics were improved, with a resolution of 243 × 231 in 4 colors, and 40 × 23 character text. It has a built-in cassette recorder and an optional disk drive (DISK II). At launch there were sixty software titles available on tape. It was considered a more serious machine for those wishing to program their own games. Specifications: CPU: Zilog Z80A @ 5 MHz Memory: 48K RAM; 4KB ROM OS: Basic Level III (loaded from tape) Keyboard: mechanical Display: 40 × 22 text; 112 × 78 in 8 colours, 243 × 231 in 4 colours Sound: SN76477 (one voice, four octaves) Built-in cassette recorder DISK II device The "Disk II" is a dual external -inch floppy disk drive with a dedicated processor. The Hector processor would handle the screen, keyboard and printer, while the floppy drive processor would run CP/M and manage floppy disks. Communication took place via the bi-directional parallel port. Programming languages The programming language is not stored in ROM but loaded at startup. This makes it possible to distribute several languages, with BASIC 80, Pascal MT+, Cobol 80, Fortran 77, Forth and Assembly being available.
Hector 2HR+ The Hector 2HR+, also released in 1983, is similar to the previous model, but includes the BASIC language in ROM (thus freeing up more RAM for user programs). Specifications: CPU: Zilog Z80A @ 5 MHz Memory: 48K RAM; 16KB ROM OS: Basic Level III Keyboard: mechanical Display: 40 × 22 text in 8 colours; 112 × 78 in 8 colours, 243 × 231 in 4 colours Sound: SN76477 (one voice, four octaves) Ports: Television (RGB, SECAM), two joysticks, Centronics, Disc Drive Built-in cassette recorder PSU: Built-in Hector HRX The Hector HRX, also released in 1983, is similar to the previous model, but replaces BASIC with a Forth language interpreter in ROM and features 64KB of RAM. An early 1983 review mentioned compatibility with existing Lambda II HR software as a positive, but pointed out the lack of high-profile titles like arcade game conversions. It was considered a professional machine, capable of running small business applications like text processors, spreadsheets and databases. A 1985 review of the system praised the varied peripherals available, but again criticized the lack of software. Specifications: CPU: Zilog Z80A @ 5 MHz Memory: 64K RAM; 16K ROM OS: Forth Keyboard: mechanical Display: 40 × 22 text in 8 colours; 112 × 78 in 8 colours, 243 × 231 in 4 colours Sound: SN76477 (one voice, four octaves) Ports: Television (RGB, SECAM), two joysticks, Centronics, Disc Drive Built-in cassette recorder PSU: Built-in Hector MX The Hector MX, released in 1985, is similar to the HRX but offers BASIC, Forth, Logo and Assembly as languages available in ROM.
Specifications: CPU: Zilog Z80A @ 5 MHz Memory: 48K RAM; 64K ROM OS: BASIC 3X, HRX Forth, Logo, Assembly Keyboard: mechanical Display: 40 × 22 text in 8 colours; 112 × 78 in 8 colours, 243 × 231 in 4 colours Sound: SN76477 (one voice, four octaves) Ports: Television (RGB, SECAM), two joysticks, Centronics, Disc Drive Built-in cassette recorder PSU: Built-in Software Some software, such as WordStar and Multiplan, exists for this series of machines, along with many small games. The machines are also compatible with the Interact Home Computer. References Microcomputers Computers designed in France History of computing in France Z80-based home computers
Hector (microcomputer)
[ "Technology" ]
1,447
[ "History of computing", "History of computing in France" ]
70,633,465
https://en.wikipedia.org/wiki/Vivo%20X80
Vivo X80 is a line of Android-based smartphones developed and manufactured by Vivo. It features a Zeiss co-engineered imaging system. Notes References Android (operating system) devices Vivo smartphones Mobile phones introduced in 2022
Vivo X80
[ "Technology" ]
48
[ "Mobile technology stubs", "Mobile phone stubs" ]
70,633,584
https://en.wikipedia.org/wiki/Oppo%20Find%20X5
Oppo Find X5 Series are Android-based smartphones manufactured by Oppo, succeeding the Find X3 Series. The phones were announced on 24 February 2022. Design The Find X5 and Find X5 Pro use aluminium for the frame, while the display is protected by Corning Gorilla Glass Victus, which is curved around the edges. The back panel is made from ceramic for the Find X5 Pro, and glass for the Find X5, with a contoured camera protrusion housing three rear cameras and the dual-LED flash. The Find X5 Pro has IP68 water resistance, unlike the Find X5; colour options are Glaze Black and Ceramic White for the Find X5 Pro, and Black and White for the Find X5. Specifications Hardware The Find X5 and Find X5 Pro use the Snapdragon 888 and Snapdragon 8 Gen 1 processors respectively. Both devices offer UFS 3.1 storage with no expandable storage. The Find X5 has 128 GB or 256 GB paired with 8 GB or 12 GB of RAM, while the Find X5 Pro has 256 GB or 512 GB paired with 8 or 12 GB of RAM. The Find X5 Pro has a 6.7-inch (170 mm) LTPO AMOLED display of 1440p resolution with an adaptive 120 Hz refresh rate. The display has HDR10+ support and is capable of showing over 1 billion colours. The Find X5 has a similar display, but without LTPO technology, at a smaller 6.55 inches and only 1080p resolution. The Find X5 and Find X5 Pro have battery capacities of 4800 mAh and 5000 mAh respectively; wired fast charging is supported at 80 W, and wireless charging at 30 W and 50 W respectively. Both phones include Dolby Atmos stereo speakers with active noise cancellation, and have no audio jack. Biometric options include an optical fingerprint scanner and facial recognition. Camera The Find X5 and Find X5 Pro have identical camera hardware from Hasselblad and the MariSilicon X image processing chip. The 50 MP Sony IMX766 is used for both the wide and ultrawide sensors, featuring native 10-bit colour capture. The telephoto camera has a 13 MP sensor with 2x optical zoom. 
A feature removed since the Find X3 series is the 3 MP microlens camera, which had been advertised with up to 60x magnification. The front camera remains unchanged, using a 32 MP sensor. Software The Find X5 and Find X5 Pro run on ColorOS 12.1, which is based on Android 12. References External links Find X5 Android (operating system) devices Mobile phones introduced in 2022 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Flagship smartphones
Oppo Find X5
[ "Technology" ]
556
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
70,634,216
https://en.wikipedia.org/wiki/Vivo%20X%20Note
Vivo X Note is an Android-based phablet developed and manufactured by Vivo. The phone was announced on 11 April 2022. References Android (operating system) devices Mobile phones introduced in 2022 Phablets Vivo smartphones Discontinued flagship smartphones
Vivo X Note
[ "Technology" ]
51
[ "Phablets", "Crossover devices", "Discontinued flagship smartphones", "Flagship smartphones" ]
70,634,752
https://en.wikipedia.org/wiki/Atomitat
Atomitat (1962) was an underground bunker-home in Plainview, Texas, designed by architect Jay Swayze. The name of the home came from the combination of the words "atomic" and "habitat". It was the first home in the U.S. to meet civil defense specifications for a nuclear shelter. History Architect Jay Swayze stated that the idea for the Atomitat was born when he attended a civil defense discussion on fallout shelters. The home was completed in 1962, during the Cold War, when Americans feared nuclear war. Swayze said that the Atomitat was designed to be an atomic habitat which met the civil defense specifications. The cost of the furnished Atomitat with two vehicles was estimated to be $135,000. The Swayzes also stated that because the Atomitat home was secure against damaging weather, their home insurance rate was about 87.5% less than the rate of an above-ground home. In 1967 the Atomitat was featured in a U.S. Information Agency propaganda film. The film was part of a series showing scenes of American life, and it would be shown in Arab countries. Design Architect Jay Swayze compared his design to a "ship in a bottle". The reinforced steel and concrete shell was built underground, under of soil. It is in size. The bunker had 4 bedrooms and 3 bathrooms and windows throughout which were meant to mimic outdoor scenes and outdoor lighting. The home was outfitted with an emergency generator and sewage system. The above-ground structure was a garage with a door between two large garage doors. The door led to the shelter through two large steel-lined doors with lead to protect against radiation. The house was designed to make the occupant feel as if they were above ground. Lights could be made to mimic the different parts of the day, and there was a space between the living space and the outer wall which carried a flow of air. This allowed an occupant to open a window and feel a breeze. The house was occupied by the same family for 35 years. 
The couple who owned it decided to sell it in 2002 because it was too large now that their family had grown up. References External links 1962 introductions Air raid shelters in the United States Cold War sites Survivalism Radiation protection Nuclear fallout
Atomitat
[ "Chemistry", "Technology" ]
473
[ "Nuclear fallout", "Environmental impact of nuclear power", "Radioactive contamination" ]
70,636,048
https://en.wikipedia.org/wiki/Developing%20Countries%20Vaccine%20Manufacturers%20Network
The Developing Countries Vaccine Manufacturers Network (DCVMN) is a voluntary non-partisan public health alliance of health organizations and vaccine manufacturers. It has the goal of protecting people globally against known and emerging infectious diseases through the provision of a consistent supply of high-quality vaccines at affordable prices for developing countries, to achieve vaccine equity. DCVMN includes manufacturers in Brazil, China, Cuba, India, Indonesia, Mexico, South Africa and other low- and middle-income countries (LMICs). It was established in 2000/2001, and is headquartered in Switzerland. As of 2021, the President is Sai D. Prasad, and the CEO is Rajinder Suri. In 2018, DCVMN members supplied more than half of the 2.36 billion doses of vaccines used globally by UNICEF. In 2019, a survey of 41 DCVMN members assessed their ability to use technology platforms, cell cultures and filling technologies for the manufacture of drug products. DCVMN members reported that they had the capability to supply over 50 distinct vaccines to 170 countries, totalling more than 3.5 billion vaccine doses annually. At least 15 manufacturer members have achieved WHO prequalification for their vaccines. Members are developing and producing novel vaccines for illnesses including neglected tropical diseases: rotavirus, Japanese encephalitis, pertussis, Haemophilus influenzae, hepatitis B, hepatitis E, meningitis A, cholera, poliovirus, human papillomavirus infection, dengue fever, Chikungunya virus and COVID-19. Developing countries that have the capacity for production of whole inactivated virus (WIV) and protein-based vaccines may be critical in addressing COVID-19 vaccine access gaps and achieving vaccine equity for LMICs. As of 29 December 2020, 18 DCVMN members were involved in preclinical or clinical trials for possible COVID-19 vaccines, three of them in Phase III trials. 
The DCVMN is a vaccine manufacturers partner of COVAX, a worldwide initiative for equitable access to COVID-19 vaccines. As of 2016, the timeline from a vaccine's first regulatory submission in its country of origin to its approval for use in Sub-Saharan Africa could take up to seven years. The DCVMN is active in identifying obstacles in the processes of vaccine registration and use. It works to increase coordination of requirements and procedures to improve the prequalification, procurement and supply of vaccines. This can involve governments in different countries, the World Health Organization (WHO), and United Nations agencies such as UNICEF. The Developing Countries Vaccine Manufacturers Network has received funding from the Bill & Melinda Gates Foundation. References External links Official website: Developing Countries Vaccine Manufacturers Network Collaborative projects Scientific organisations based in Switzerland International medical and health organizations Vaccination-related organizations International responses to the COVID-19 pandemic Deployment of COVID-19 vaccines
Developing Countries Vaccine Manufacturers Network
[ "Biology" ]
593
[ "Vaccination-related organizations", "Vaccination" ]
70,636,825
https://en.wikipedia.org/wiki/Thomas%E2%80%93Yau%20conjecture
In mathematics, and especially symplectic geometry, the Thomas–Yau conjecture asks for the existence of a stability condition, similar to those which appear in algebraic geometry, which guarantees the existence of a solution to the special Lagrangian equation inside a Hamiltonian isotopy class of Lagrangian submanifolds. In particular the conjecture contains two difficulties: first it asks what a suitable stability condition might be, and secondly if one can prove stability of an isotopy class if and only if it contains a special Lagrangian representative. The Thomas–Yau conjecture was proposed by Richard Thomas and Shing-Tung Yau in 2001, and was motivated by similar theorems in algebraic geometry relating existence of solutions to geometric partial differential equations and stability conditions, especially the Kobayashi–Hitchin correspondence relating slope stable vector bundles to Hermitian Yang–Mills metrics. The conjecture is intimately related to mirror symmetry, a conjecture in string theory and mathematical physics which predicts that mirror to a symplectic manifold (which is a Calabi–Yau manifold) there should be another Calabi–Yau manifold for which the symplectic structure is interchanged with the complex structure. In particular mirror symmetry predicts that special Lagrangians, which are the Type IIA string theory model of BPS D-branes, should be interchanged with the same structures in the Type IIB model, which are given either by stable vector bundles or vector bundles admitting Hermitian Yang–Mills or possibly deformed Hermitian Yang–Mills metrics. Motivated by this, Dominic Joyce rephrased the Thomas–Yau conjecture in 2014, predicting that the stability condition may be understood using the theory of Bridgeland stability conditions defined on the Fukaya category of the Calabi–Yau manifold, which is a triangulated category appearing in Kontsevich's homological mirror symmetry conjecture. 
Statement The statement of the Thomas–Yau conjecture is not completely precise, as the particular stability condition is not yet known. In the work of Thomas and Thomas–Yau, the stability condition was given in terms of the Lagrangian mean curvature flow inside the Hamiltonian isotopy class of the Lagrangian, but Joyce's reinterpretation of the conjecture predicts that this stability condition can be given a categorical or algebraic form in terms of Bridgeland stability conditions. Special Lagrangian submanifolds Consider a Calabi–Yau manifold of complex dimension , which is in particular a real symplectic manifold of dimension . Then a Lagrangian submanifold is a real -dimensional submanifold such that the symplectic form is identically zero when restricted to , that is . The holomorphic volume form , when restricted to a Lagrangian submanifold, becomes a top degree differential form. If the Lagrangian is oriented, then there exists a volume form on and one may compare this volume form to the restriction of the holomorphic volume form: for some complex-valued function . The condition that is a Calabi–Yau manifold implies that the function has norm 1, so we have where is the phase angle of the function . In principle this phase function is only locally continuous, and its value may jump. A graded Lagrangian is a Lagrangian together with a lifting of the phase angle to , which satisfies everywhere on . An oriented, graded Lagrangian is said to be a special Lagrangian submanifold if the phase angle function is constant on . The average value of this function, denoted , may be computed using the volume form as and only depends on the Hamiltonian isotopy class of . Using this average value, the condition that is constant may be written in the following form, which commonly occurs in the literature. 
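The displayed equations in this section did not survive extraction. As a hedged reconstruction using the standard conventions of special Lagrangian geometry (the symbols below follow the usual literature and are assumptions, not the lost originals):

```latex
% Restriction of the holomorphic volume form to an oriented Lagrangian L:
\Omega|_L = e^{i\theta}\,\mathrm{vol}_L , \qquad \theta : L \to \mathbb{R}/2\pi\mathbb{Z} .
% L is a special Lagrangian with (constant) phase \bar\theta when \theta \equiv \bar\theta,
% which is commonly written as
\operatorname{Im}\!\bigl(e^{-i\bar\theta}\,\Omega\bigr)\big|_{L} = 0 .
```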
This is the definition of a special Lagrangian submanifold: Hamiltonian isotopy classes The condition of being a special Lagrangian is not satisfied for all Lagrangians, but the geometric and especially physical properties of Lagrangian submanifolds in string theory are predicted to only depend on the Hamiltonian isotopy class of the Lagrangian submanifold. An isotopy is a transformation of a submanifold inside an ambient manifold which is a homotopy by embeddings. On a symplectic manifold, a symplectic isotopy requires that these embeddings are by symplectomorphisms, and a Hamiltonian isotopy is a symplectic isotopy for which the symplectomorphisms are generated by Hamiltonian functions. Given a Lagrangian submanifold , the condition of being a Lagrangian is preserved under Hamiltonian (in fact symplectic) isotopies, and the collection of all Lagrangian submanifolds which are Hamiltonian isotopic to is denoted , called the Hamiltonian isotopy class of . Lagrangian mean curvature flow and stability condition Given a Riemannian manifold and a submanifold , the mean curvature flow is a differential equation satisfied for a one-parameter family of embeddings defined for in some interval with images denoted , where . Namely, the family satisfies mean curvature flow if, where is the mean curvature of the submanifold . This flow is the gradient flow of the volume functional on submanifolds of the Riemannian manifold , and short-time existence of solutions starting from a given submanifold always holds. On a Calabi–Yau manifold, if is a Lagrangian, the condition of being a Lagrangian is preserved when studying the mean curvature flow of with respect to the Calabi–Yau metric. This is therefore called the Lagrangian mean curvature flow (Lmcf). Furthermore, for a graded Lagrangian , Lmcf preserves Hamiltonian isotopy class, so for all where the Lmcf is defined. 
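The flow equation elided in the paragraph above can be written, in a standard (assumed) formulation, as follows, with F_t the one-parameter family of embeddings and H the mean curvature vector:

```latex
\frac{\partial F_t}{\partial t} = H\bigl(F_t\bigr) .
% For a graded Lagrangian in a Calabi-Yau manifold one has the standard identity
%   H = J\,\nabla\theta ,
% so along the Lagrangian mean curvature flow the phase angle satisfies
% the heat equation
\frac{\partial \theta}{\partial t} = \Delta \theta .
```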
Thomas introduced a conjectural stability condition defined in terms of gradings when splitting into Lagrangian connected sums. Namely a graded Lagrangian is called stable if whenever it may be written as a graded Lagrangian connected sum, the average phases satisfy the inequality. In the later language of Joyce using the notion of a Bridgeland stability condition, this was further explained as follows. An almost-calibrated Lagrangian (which means the lifted phase is taken to lie in the interval or some integer shift of this interval) which splits as a graded connected sum of almost-calibrated Lagrangians corresponds to a distinguished triangle in the Fukaya category. The Lagrangian is stable if for any such distinguished triangle, the above angle inequality holds. Statement of the conjecture The conjecture as originally proposed by Thomas is as follows: Conjecture: An oriented, graded, almost-calibrated Lagrangian admits a special Lagrangian representative in its Hamiltonian isotopy class if and only if it is stable in the above sense. Following this, in the work of Thomas–Yau, the behaviour of the Lagrangian mean curvature flow was also predicted. Conjecture (Thomas–Yau): If an oriented, graded, almost-calibrated Lagrangian is stable, then the Lagrangian mean curvature flow exists for all time and converges to a special Lagrangian representative in the Hamiltonian isotopy class . This conjecture was enhanced by Joyce, who provided a more subtle analysis of what behaviour is expected of the Lagrangian mean curvature flow. In particular Joyce described the types of finite-time singularity formation which are expected to occur in the Lagrangian mean curvature flow, and proposed expanding the class of Lagrangians studied to include singular or immersed Lagrangian submanifolds, which should appear in the full Fukaya category of the Calabi–Yau. 
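The connected-sum splitting and phase inequality in Thomas's stability condition were among the formulas lost in extraction; in the convention usually quoted (an assumption here, as sign and ordering conventions vary between references), they read:

```latex
L = L_1 \,\#\, L_2 , \qquad \bar\theta(L_1) < \bar\theta(L_2) ,
% with corresponding distinguished triangle in the Fukaya category
%   L_1 \longrightarrow L \longrightarrow L_2 \longrightarrow L_1[1] ,
% and associated Bridgeland central charge
Z(L) = \int_L \Omega .
```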
Conjecture (Thomas–Yau–Joyce): An oriented, graded, almost-calibrated Lagrangian splits as a graded Lagrangian connected sum of special Lagrangian submanifolds with phase angles given by the convergence of the Lagrangian mean curvature flow with surgeries to remove singularities at a sequence of finite times . At these surgery points, the Lagrangian may change its Hamiltonian isotopy class but preserves its class in the Fukaya category. In the language of Joyce's formulation of the conjecture, the decomposition is a symplectic analogue of the Harder–Narasimhan filtration of a vector bundle, and using Joyce's interpretation of the conjecture in the Fukaya category with respect to a Bridgeland stability condition, the central charge is given by , the heart of the t-structure defining the stability condition is conjectured to be given by those Lagrangians in the Fukaya category with phase , and the Thomas–Yau–Joyce conjecture predicts that the Lagrangian mean curvature flow produces the Harder–Narasimhan filtration condition which is required to prove that the data defines a genuine Bridgeland stability condition on the Fukaya category. References Symplectic geometry Conjectures
Thomas–Yau conjecture
[ "Mathematics" ]
1,903
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures" ]
70,638,612
https://en.wikipedia.org/wiki/David%20M.%20Brink
David Maurice Brink (20 July 1930, Hobart, Tasmania, Australia – 8 March 2021, Oxford, UK) was an Australian-British nuclear physicist. He is known for the Axel-Brink hypothesis. Education and career Brink matriculated in 1947 at the University of Tasmania, where he graduated with a B.Sc. in physics in 1951. As a Rhodes Scholar he became a graduate student in physics at Magdalen College, Oxford, where he received his PhD in 1955. His doctoral dissertation Some aspects of the interactions of light with matter was supervised by Maurice Pryce. From 1954 to 1958 Brink was a Rutherford Scholar of the Royal Society. For the academic year 1957–1958 he was an instructor at the Massachusetts Institute of Technology (MIT). From 1958 to 1993 he was a Fellow of Balliol College, Oxford. At the University of Oxford he was from 1958 to 1988 a university lecturer and from 1988 to 1993 a Moseley Reader. In 1993 he moved to Trento, Italy. There from 1993 to 1998 he was the vice-director of the European Centre for Theoretical Studies in Nuclear Physics (under the auspices of the European Centre of Technology), as well as, at the University of Trento a professor of the history of physics. Brink was a visiting scientist at Copenhagen's Niels Bohr Institute in 1964. He has been a visiting professor at the Institut de physique nucléaire d'Orsay (1969 and 1981–1982), the University of British Columbia (1975), the Technical University of Munich (1982), the University of Trento (1988), the University of Catania (1988), and Michigan State University (1988–1989). As a theoretical physicist he did important research on "the study of nuclear structure via the shell model and effective interactions, and nuclear reactions via statistical methods." He was elected in 1981 a Fellow of the Royal Society. He received in 1982 the Rutherford Medal of the Institute of Physics. He was made in 1992 a Foreign Member of the Royal Society of Sciences in Uppsala. 
In 2006 he received the Lise Meitner Prize "for his many contributions to the theory of nuclear structure and nuclear reactions over several decades, including his seminal work on the theory of nuclear masses using Skyrme effective interactions, nuclear giant resonances, clustering in nuclei and quantum and semi-classical theories of heavy-ion scattering and reactions." Selected publications Articles (over 650 citations) (over 950 citations) (over 3050 citations) Books with George Raymond Satchler: Angular Momentum 1962. 2nd edition 1971. 3rd edition. Clarendon Press, Oxford 1993, ISBN 0-19-851759-9 Nuclear Forces, Pergamon Press 1965 German translation: Kernkräfte, WTB Texte, 1971 Semi-classical methods in nucleus-nucleus scattering, Cambridge University Press 1985; 2009 edition as editor with Feodor Karpechine, F. Bary Malik, João Da Providência: with Ricardo A. Broglia: Nuclear superfluidity: pairing in finite systems, Cambridge University Press 2005; e-book ; hbk References 1930 births 2021 deaths 20th-century Australian physicists 21st-century Australian physicists 20th-century British physicists 21st-century British physicists University of Tasmania alumni Alumni of Magdalen College, Oxford Fellows of Balliol College, Oxford Academic staff of the University of Trento Fellows of the Royal Society Nuclear physicists Theoretical physicists People from Hobart
David M. Brink
[ "Physics" ]
702
[ "Theoretical physics", "Theoretical physicists" ]
70,640,828
https://en.wikipedia.org/wiki/Airport%20of%20the%20Pacific
The Airport of the Pacific () or International Airport of the Pacific () is a planned joint-use civilian international airport and military base that will be located in Conchagua, El Salvador. It will serve the city of La Unión and the planned Bitcoin City. The airport was proposed by Salvadoran president Nayib Bukele during his 2019 presidential campaign as a part of his Cuscatlán Plan and construction was approved by the Legislative Assembly on 26 April 2022. Preliminary terraforming began in March 2023 and construction is planned to begin in December 2024. Planning Initial proposals In 2019, then Salvadoran presidential candidate Nayib Bukele published his Cuscatlán Plan, an outline of his objectives and goals as president of El Salvador. Bukele proposed the construction of a new airport in eastern El Salvador, then referred to as the "Airport in the East" (). Bukele said many of the passengers who pass through El Salvador's Saint Óscar Arnulfo Romero y Galdámez International Airport in south-central El Salvador live in eastern El Salvador so a new airport in the east would simultaneously ease congestion of El Salvador's main international airport and bring jobs and an economic boost to the east of the country. On 9 March 2020, the Salvadoran government began an international public offering, titled the "Pacific Airport Project", to foreign companies to study the design of the airport. In total, 44 companies showed interest in the project and 11 of those presented economic models to the Salvadoran government. On 24 February 2022, the government was given an economic and financial report by Peyco-ALBEN 4000 Consortium, an air transportation company. The report estimated that the airport would create 4,700 new jobs in its first year of operation. The plan estimated that in the first ten years of operation, the airport would accommodate 1 to 3 million passengers and around 18,000 aircraft movements. 
Federico Anliker, the president of the (CEPA), separately estimated that the airport's construction would create over 23,700 new jobs. In March 2022, CEPA confirmed that the airport would be located in the Condadillo of Conchagua, a district of the La Unión department just south of the city of La Unión and the planned Bitcoin City. The residents of Condadillo and the nearby of Flor de Mangle will be relocated due to the airport's construction. Government approval On 25 April 2022, the Legislative Assembly's Economic Commission approved a law for the construction of the airport. The following day the full Legislative Assembly approved the law, titled "Law for the Construction, Administration, Operation, and Maintenance of the Airport of the Pacific", with 67 of the 84 votes to officially authorize the construction of the airport. According to the Legislative Assembly, nine locations were considered; six were eliminated due to the perceived difficulty that landing airplanes would face, and the Legislative Assembly settled on the selected site to minimize effects on the environment. Facilities The Airport of the Pacific will have one passenger terminal; the first floor will have of floor space for check-in, baggage claim, and customs, while the second floor will have of floor space for restaurants and other commercial spaces. The airport will have one , runway and turnaround platforms at the ends of the runway for aircraft to position themselves for takeoff. The Airport of the Pacific will also have a Salvadoran Air Force installation. According to Oscar Avalle, a representative of the Development Bank for Latin America, the Airport of the Pacific will have two gates that will be able to serve two or three planes each. He added that the airport would be able to be expanded if demand for tourism to eastern El Salvador increased. Avalle stated that aircraft such as the Airbus A320, Boeing 737, and Boeing 757 would be able to land at the airport. 
The government estimates that the airport will service 300,000 to 500,000 passengers annually. Construction Preliminary terraforming for the Airport of the Pacific began in March 2023. In October 2024, CEPA announced that a groundbreaking ceremony would be held in late 2024 and that proper construction would begin in 2025. Anliker stated that the government was moving at a "very accelerated pace" regarding the airport's construction. In April 2022, the government estimated that construction would cost US$500 million over 10 years; in December 2024, Avalle revised the government's estimate to US$328 million and Bukele stated that the airport would be inaugurated in two years' time. Opposition In October 2022, 10 of the 150 landowners affected by the airport's construction stated that they would not sell their land to the government. CEPA stated that it would seek to come to a settlement with the landowners, and Federico Anliker, the president of CEPA, accused them of being manipulated by political parties in the Opposition to hold up the airport's construction. In January 2023, Cristosal, a non-governmental organization, filed a lawsuit claiming that three laws, including one for the airport's construction, would "open the door to corruption" and called upon the Supreme Court of Justice to block the laws. The Indigenous Movement for the Integration of the Struggles of the Ancestral Peoples of El Salvador (MILPA) opposes the airport's construction, arguing that its construction violates the local people's right to private property and degrades the area's environment. Environmental concerns In October 2021, the (MARN) advised CEPA to change the location of the Airport of the Pacific citing environmental concerns. MARN suggested moving the airport slightly to the northeast, as its planned location disrupted the habitats of several endangered species and was at risk of being submerged by rising sea levels. CEPA did not accept MARN's recommendation. 
Airlines On 1 December 2024, Bukele stated that several airlines had made agreements with the government for direct flights from the United States to the Airport of the Pacific. See also List of airports in El Salvador References External links Airports in El Salvador La Unión Department Proposed airports in El Salvador
Airport of the Pacific
[ "Engineering" ]
1,243
[ "Construction", "Buildings and structures under construction" ]
70,640,991
https://en.wikipedia.org/wiki/Common%20Attack%20Pattern%20Enumeration%20and%20Classification
The Common Attack Pattern Enumeration and Classification or CAPEC is a catalog of known cyber security attack patterns to be used by cyber security professionals to prevent attacks. Originally released in 2007 by the United States Department of Homeland Security, the project began as an initiative of the Office of Cybersecurity and Communications, and it is now supported by Mitre Corporation and governed by a board of corporate representatives. References External links MITRE CAPEC Classification systems Computer standards Mitre Corporation
Common Attack Pattern Enumeration and Classification
[ "Technology" ]
96
[ "Computer security stubs", "Computer standards", "Computing stubs" ]
70,641,326
https://en.wikipedia.org/wiki/SMT%20Goupil
SMT Goupil (SMT - "Society of Microcomputing and Telecommunications") was a French IT company created in 1979 by Claude Perdrillat, previously a senior executive in the General Directorate of Telecommunications. The company produced many microcomputers during the 1980s, mainly for French government agencies. This market collapsed at the end of the 1980s with the appearance of drastic budgetary restrictions in the French public sector and competition from more aggressive technological rivals like IBM, Apple and Olivetti. Despite a significant debt of 40 million francs, the company went public in 1985, claiming to hold 15% of the French microcomputer market. In January 1990, Goupil claimed to hold 18% of the market for professional microcomputers in France. The company filed for bankruptcy in June 1991, with the accounting books revealing a debt of 700 million francs and a real turnover of 830 million francs in 1990. Models The first Goupil G1 and G2 computers offered a promising architecture, integrating the Motorola 6808 processor with the FLEX operating system. The machines had a sober and integrated design in distinctive colors - slate blue and red. The G3 extended compatibility in order to conquer foreign markets, by offering two processors at a time (selected at start-up by a switch) from three choices: the very common Motorola 6809, the Zilog Z80 and the Intel 8088. In addition to the Flex 9 and UniFLEX operating systems that came with the machine, this allowed it to run MS-DOS, CP/M and UCSD Pascal. The dark gray case, designed by Roger Tallon, came with an integrated 12-inch monochrome monitor, two floppy disk drives, and a bay for 7 extension cards in Goupil format. Further machines would seek IBM PC compatibility, as it had become the standard for government equipment. 
1979: Goupil G1, basic desktop computer 1981: Goupil G2, desktop computer with multiple configurations similar to those of Micral 1983: Goupil G3, Nanoréseau network machine, similar to Micral 1985: Goupil G4, PC-compatible desktop computer 1986: Goupil G40, desktop server version of the G4 1986: Goupil Club, PC Kaypro 2000 compatible laptop sold under license 1988: Goupil G5, PC-compatible desktop computer, several versions 1988: Goupil Golf, PC-compatible portable computer 1990: Goupil G50, tower server version of the G5 1990: Goupil G100, UNIX server initially designed by SFENA, characterized by input-output co-processors 1991: Goupil G6, PC-compatible desktop computer 1991: Goupil TOP, laptop with 10" backlit LCD screen under MS-DOS 5.0 (and Windows 3.1 installed later), offered in 2 versions: TOP (80286 @ 12.5 MHz) & TOP SX (80386 SX @ 20 MHz), both with 20 MB hard drive See also Computing for All, a French government plan to introduce computers to the country's pupils References Computer companies of France Computer science education in France Computing for All Defunct computer hardware companies Defunct computer systems companies French companies established in 1979 French companies disestablished in 1991 History of computing in France
SMT Goupil
[ "Technology" ]
676
[ "History of computing", "History of computing in France" ]
70,641,670
https://en.wikipedia.org/wiki/Carbide%20iodide
Carbide iodides are mixed anion compounds containing iodide and carbide anions. Many carbide iodides are cluster compounds, containing one, two or more carbon atoms in a core, surrounded by a layer of metal atoms, and encased in a shell of iodide ions. These ions may be shared between clusters to form chains, double chains or layers. The metal in carbide iodides is most often a rare earth element. Similar formulas tend to have similar structures. Where R is a rare earth element: R12C6I17 contains chains of R6 octahedra with a C2^6− core and a shell of iodide. R4I5C contains similar chains, but with a single C^4− carbide atom. Double chain structures with single carbon atom cores include R6I7C2 and R3I3C. Layers of joined octahedra include R2I2C2, with an ethanide C2^4− core, and R2I2C and R2IC, with one carbide per octahedron. Related compounds include carbide chlorides and carbide bromides. Carbon may be substituted by hydrogen, boron or nitrogen in the core of cluster compounds. This list does not include cyanides, carbonyls, cyanamides or carbido borates, where carbon has bonds to other non-metals. However, there are carbide iodides that also contain nitride, oxide or other halides. List Take care not to confuse Cl (chlorine) with CI (carbon and iodine). References Carbides Iodides Mixed anion compounds
Carbide iodide
[ "Physics", "Chemistry" ]
352
[ "Ions", "Matter", "Mixed anion compounds" ]
70,643,850
https://en.wikipedia.org/wiki/Ultrastructural%20identity
Ultrastructural identity is a concept in biology. It asserts that evolutionary lineages of eukaryotes in general and protists in particular can be distinguished by the complements and arrangements of their cellular organelles. These ultrastructural components can be visualized by electron microscopy. The concept emerged following the application of electron microscopy to protists. Protists Early ultrastructural studies revealed that many previously accepted groupings of protists based on optical microscopy included organisms with differing cellular organelles. Those groups included amoebae, flagellates, heliozoa, radiolaria, sporozoa, slime molds, and chromophytic algae. They were deemed likely to be polyphyletic, and their inclusion in efforts to assemble a phylogenetic tree would cause confusion. As an example of this work, German cell biologist Christian Bardele established unexpected diversity within the simply organized heliozoa. His work made it evident that the heliozoa were not monophyletic, and subsequent studies revealed that the heliozoa comprised seven types of organisms: actinophryids, centrohelids, ciliophryids, desmothoracids, dimorphids, gymnosphaerids and taxopodids. A critical advance was made by British phycologist David Hibberd. He demonstrated that two types of chromophytic algae, previously presumed to be closely related, had different organizations that were revealed by electron microscopy. 
The number and organization of locomotor organelles differed (chrysophyte - two flagella; haptophyte - two flagella and a haptonema), as did their surfaces (chrysophyte - with tripartite flagellar hairs now regarded as apomorphic for stramenopiles; haptophyte - naked), the transitional zone between axoneme and basal body (chrysophyte - with helix), the flagellar anchorage systems, the presence or absence of embellishments on the cell surface (chrysophyte - naked; haptophyte - with scales), the plastids (especially the eyespot), and the location and functions of the dictyosomes, inter alia. This careful study prompted further examination of algal and flagellate organization. Protozoologists Brugerolle and Patterson were the first to use the term 'ultrastructural identity' in discussing the differences between ciliates and a lookalike protist, Stephanopogon. Patterson later applied the concept to all eukaryotes, classifying their diversity into 71 types, each without clear sister group affinities. A further 200 or so genera that had not yet been studied by electron microscopy were also listed. The catalog of groups with distinctive ultrastructural identities has been used as a base-line for efforts to build a stable tree for all eukaryotes using molecular data. An indirect benefit of the focus on ultrastructural characters was that it allowed synapomorphies to be identified for emerging lineages. Molecular protistologist Gunderson and colleagues established that dinoflagellates, apicomplexa and ciliates were likely related. They, and some related flagellates, were shown to share a distinctive system of sacs or alveoli under the cell membrane, and because of this were given the name Alveolates. Similarly, tripartite tubular hairs attached to various algae, fungi and protozoa provided the synapomorphy for the 'stramenopiles' (straw-hairs). A distinctive flagellar root system that caused grooving of the cell surface was treated as a synapomorphy of the excavate flagellates. 
References Biology theories
Ultrastructural identity
[ "Biology" ]
795
[ "Biology theories" ]
70,644,950
https://en.wikipedia.org/wiki/Dextran%20drug%20delivery%20systems
Dextran drug delivery systems involve the use of the natural glucose polymer dextran in applications as a prodrug, nanoparticle, microsphere, micelle, and hydrogel drug carrier in the field of targeted and controlled drug delivery. According to several in vitro and animal research studies, dextran carriers reduce off-site toxicity and improve local drug concentration at the target tissue site. This technology has significant implications as a potential strategy for delivering therapeutics to treat cancer, cardiovascular diseases, pulmonary diseases, bone diseases, liver diseases, colonic diseases, infections, and HIV. Although there are many FDA-approved natural polymer-based drug carriers available for clinical use, dextran has yet to achieve any clinical application. Research must address several challenges and obstacles associated with dextran before it can become a viable, clinically approved drug delivery strategy. Characterization Dextran has many favorable properties that make it an ideal candidate for applications as a drug delivery system. As a natural polymer, dextran is biocompatible and biodegradable in the human body. Dextran can also be chemically modified to produce derivatives at a low cost, which can address a few of its undesirable characteristics, including low mechanical strength and an uncontrollable hydration rate. This natural glucose polymer has excellent water solubility and prolonged circulation in the blood as well. Dextran prodrug Dextran prodrugs are chemically linked drug-polymer complexes in which enzymatic processes and hydrolysis in vivo cause the drug to become pharmacologically active. Therapeutic agents can be linked to dextran via an ester bond which can be hydrolyzed slowly by esterases to produce sustained, stable drug release. Drug-dextran complexes can also be formed by chemical linkage through an amide bond, which is hydrolyzed by amidase. 
Prodrugs coupled by amide bonds provide much slower drug release than by ester bonds. Succinic acid and glutaric acid carboxyl groups, amino acids, pH and reductivity sensitive disulfide bonds, and click chemistry are also methods of coupling drugs to dextran. Dextran prodrug applications These drug-polymer complexes have advantages such as longer drug half-life and improved targeted drug delivery. Dextran prodrugs have potential applications in the treatment of liver diseases, pulmonary diseases, colonic diseases, and cancer. Dextran nanoparticles Dextran nanoparticles are 1-100 nm sized particles with drug encapsulation capability. The high surface area of these nanoparticles allows more drugs to be loaded and encapsulated, leading to higher drug concentrations at the target site. The small size of these particles also encourages cellular uptake, which makes dextran nanoparticles a potential effective drug delivery system for targeting tumor cells. Dextran-coated nanoparticles Dextran has indirect applications in nanoparticles as a coating. Iron oxide nanoparticles coated with dextran can be loaded with the microRNA miR-29a to selectively target breast cancer cells and down-regulate anti-apoptotic genes leading to successful breast cancer treatment. Dextran-coated iron oxide nanoparticles loaded with heparinase-like antisense nucleic acid effectively target uterine cancer cells and inhibit tumor growth. Supermagnetic nanospheres composed of iron oxide coated with dextran can be loaded with doxorubicin to effectively target tumor cells and limit off-site toxicity. Gold magnetic nanoparticles coated with dextran can effectively target desired tissue sites with the aid of an externally applied magnetic field. Dextran coatings can further improve the drug targeting capability of other types of nanoparticles. Dextran conjugate nanoparticles Dextran conjugates are also utilized in nanoparticle drug delivery system formulations. 
Nanoparticles composed of dextran and stearic acid with a polyethylene glycol (PEG) coating can be loaded with antiviral drugs and be effectively internalized by cells. This nanosystem has the advantages of providing protection against immune responses and providing stability to the encapsulated drug. This technology has applications in the treatment of HIV and AIDS. Dextran can be grafted with folic acid to develop doxorubicin-loaded nanoparticles. Dextran-folic acid nanoparticles effectively target tumors, reduce off-site toxicity, and prolong blood circulation. Dextran-spermine nanoparticles loaded with doxorubicin can achieve targeted and sustained drug release in tumors. Dextran nanoparticle applications Dextran nanoparticles have advantages such as increased drug-loading capacity, improved cellular uptake, reduced off-site toxicity, and increased local drug concentration at the target tissue site. The current research indicates that dextran nanoparticles can potentially have applications in the delivery of anti-tumor therapeutics. Dextran microspheres Dextran microspheres are 1 to 250 micrometer sized polymeric particles that can encapsulate drugs. Microspheres composed of dextran have several advantages as a drug delivery system including controlled drug release, localized drug concentration, and reduced adverse reactions. Controlled drug release by these dextran microparticles is achieved by degradation, which is the breakdown of chemical bonds in the molecular structure of the polymeric network. Dextran microspheres are formulated in many forms including native dextran, dextran as a cross-linker, dextran conjugates, and chemically modified dextran. Dextran microspheres Dextran can be used as a standalone material in microspheres. Dextran microspheres can provide controlled drug release in gastric and intestinal pH environments, which is ideal for targeting of the colon. 
Dextran-crosslinked microspheres One application of the glucose polymer dextran in microsphere compositions is as a cross-linker. Dextran and oxidized dextran can be used to crosslink gelatin microspheres to reduce gelatin dissolution, which slows the drug release rate. These dextran/gelatin microspheres can be used to provide slow release of TRAPP-Br, which is a cancer therapeutic. Hydrogel microspheres synthesized by using porous chitosan polyelectrolyte complex with dextran sulfate as a cross-linker can deliver hydrophobic drugs to the intestines with high efficacy. Dextran conjugate microspheres Dextran can be conjugated with other materials to synthesize microspheres. Dextran grafted with PLGA forms microspheres that can provide effective delivery of insulin in diabetic patients. Dextran/chitosan microspheres efficiently deliver recombinant human bone morphogenetic protein 2 (rhBMP-2) for the treatment of bone diseases. Chemically modified dextran microspheres Microspheres can also be developed by chemically modifying dextran. Acetated dextran can be modified with amine groups and grafted with heparin to form microspheres that provide protamine-stimulated, targeted drug release for the delivery of therapeutics to treat cardiovascular diseases. Dextran modified with an octyl- group creates microspheres that provide extended release of doxorubicin, which is an antitumor therapeutic. Dextran microsphere applications Dextran-based microspheres can encapsulate a variety of drugs and provide therapeutic delivery in the treatment of diseases such as cancer, colonic diseases, bone diseases, and cardiovascular diseases. Dextran micelles Dextran micelles are 10 to 100 nm sized amphiphilic polymeric particles which have the advantages of avoiding drug clearance by the kidneys and traveling through blood vessels. The core of these micelles is hydrophobic, allowing for loading of hydrophobic drugs into the micelle. 
The outer shell of the particles is hydrophilic, which allows for long circulation times in the blood. Dextran can be conjugated with other materials to form polymeric micelles including stearic acid and cholesterol to further improve sustained release of the loaded hydrophobic drug. The size of the micelles can be controlled by altering the ratio of stearic acid to dextran. Dextran micelles can also be formed from conjugation with polycaprolactone, folic acid, retinoic acid, and PLGA. Stimuli-responsive dextran micelles Dextran micelles can be synthesized and modified to be stimuli-responsive. These stimuli include pH, temperature, and redox conditions. Micelles composed of dextran grafted with deoxycholic acid or polycaprolactone via a disulfide bond are responsive to a redox environment. Dextran micelles conjugated with cholesterol exhibit pH responsiveness when modified with histidine. Dextran-benzimidazole conjugate micelles also exhibit pH-responsiveness. When the polymeric micelles encounter these stimuli, release of the drug from the hydrophobic core is triggered by various mechanisms depending on the stimuli and the conjugated material. Stimuli-responsive dextran grafted micelles decrease off-site drug toxicity and increase localized drug concentration in the target site. Dextran micelle applications Dextran micelles and dextran copolymer micelles can be loaded with a variety of hydrophobic drugs such as doxorubicin, rapamycin, and paclitaxel, indicating a significant application in the delivery of anti-cancer therapeutics. Dextran hydrogels Dextran hydrogels and dextran conjugate hydrogels are heavily cross-linked polymeric networks that have a strong affinity for water. These gels have soft, elastic physical properties and are biocompatible and biodegradable. Dextran hydrogels have also been shown to be stable and safe in vivo. 
Glucose-based polymeric gels have the advantage of being able to be chemically or physically modified to improve targeted drug delivery. Swelling is one mechanism by which drugs are released from the dextran hydrogels. Swelling can be reduced by increasing the molecular weight of dextran, leading to a slower drug diffusion rate out of the hydrogel. Swelling can also be lessened by increasing the amount of the conjugated species and introducing ethanol during the cross-linking reaction. Degradation of chemical linkages in the dextran hydrogels is another mechanism by which drugs are released from the polymeric matrices. An increase in degradation of the dextran hydrogel leads to an increase in drug release rate. Degradation of dextran hydrogels specifically is caused by dextranases, which are microbial enzymes mostly located in the colon. Dextran hydrogel colon-targeting The colon is an ideal target for dextran hydrogel drug delivery systems due to the presence of dextranases. Dextran can be cross-linked with diisocyanate to form a hydrogel that can be loaded with hydrocortisone to treat swelling or inflammation in the colon. Hydrogels can also be synthesized from crosslinking epichlorohydrin (ECH) with dextran. Dextran-ECH hydrogels can be loaded with salmon calcitonin (sCT) to treat bone diseases. Dextran-ECH hydrogels loaded with sCT achieved comparable release rates to other polymeric hydrogels in the colon. Other dextran hydrogel targeted sites Dextran conjugate hydrogels can also target other desirable sites. Paclitaxel-loaded dextran-sericin hydrogels can effectively target tumor growth in mice. Hydrogels composed of translocator protein (TSPO) ligands conjugated to dextran have the potential to induce apoptosis in tumor cells via the TSPO receptor on the mitochondria. Dextran/polyacrylamide hydrogels with covalently bound silver nanoparticles can effectively release ornidazole to treat infections. 
Dextran conjugated with oligolactide chains through a disulfide bond can form hydrogels that have potential applications in cancer treatment drug delivery systems. Dextran hydrogels that release drugs in response to an external electrical field can also be synthesized. Dextran hydrogel applications Dextran hydrogel and dextran conjugate hydrogel drug delivery systems have a variety of applications. These gels can be used to release therapeutics to treat cancer, swelling, inflammation, bone diseases, and infections. Clinical translation Dextran has yet to be approved for any clinical uses in drug delivery due to a wide variety of limitations including heterogeneity, undesirable side effects, and unknown biological pathways. Changes in the molecular weight of dextran have been shown to alter biological activity, indicating a need for separation and purification processes to ensure batch homogeneity. Dextran, although considered relatively safe and nontoxic in vivo, exhibits a few side effects with the most notable being thrombocytopenia and liver toxicity. The exact biological mechanisms by which dextran-based drug delivery systems act on the drug target must be elucidated as well. Dextran-based drug delivery systems have an enormous potential for clinical use in the treatment of a variety of disease states. References Drug delivery devices
Dextran drug delivery systems
[ "Chemistry" ]
2,855
[ "Pharmacology", "Drug delivery devices" ]
61,251,536
https://en.wikipedia.org/wiki/Tuna%20Alt%C4%B1nel
Tuna Altınel is a Turkish mathematician, born February 12, 1966, in Istanbul, who has worked at the University Lyon 1 in France since 1996. He is a specialist in group theory and mathematical logic. With Alexandre Borovik and Gregory Cherlin, he proved a major case of the Cherlin–Zilber conjecture. In the political sphere, Altınel is active in the Academics for Peace movement, which supports a peaceful resolution of the conflict in south-eastern Turkey, and calls for the human rights of the civilian population to be respected. Accused by the Turkish authorities of membership in a terrorist organization, Altınel has been imprisoned since May 11, 2019, at the Kepsut prison in Turkey. Education and career After undergraduate studies in mathematics and computer science at Boğaziçi University, Istanbul, Altınel received his doctorate from Rutgers University (New Jersey, USA) under the direction of Gregory Cherlin. In 1996 he joined the department of mathematics of the University Lyon 1, as maître de conférences, and completed his French habilitation in 2001. Altınel has written 26 mathematical articles, principally on the subject of groups in model theory, more particularly groups of finite Morley rank and the Cherlin–Zilber Algebraicity Conjecture, concerning the structure of the simple groups of finite Morley rank. He is joint author with Alexandre Borovik and Gregory Cherlin of a book in which this conjecture is proved in the case of infinite 2-rank, after the development of a body of machinery analogous to certain chapters of finite simple group theory. Altınel's doctoral advisees include Éric Jaligot, winner of the 2000 Sacks Prize, a prize given annually for an outstanding doctoral thesis in mathematical logic (doctoral thesis supervised jointly by Tuna Altınel and Bruno Poizat). 
He is active in the domain of scientific cooperation with Turkey; in particular, he was an organizer of an international mathematics conference held in Istanbul in 2016 in honor of Alexandre Borovik and Ali Nesin (Leelavati prize winner, 2018). Political activities Overview Altınel has been an active supporter of a peaceful resolution of the conflict in southeastern Turkey and of human rights and civil liberties in Turkey. With regard to the Kurdish conflict in southeastern Turkey, he was one of 116 academics who signed a 2003 letter in support of a peaceful resolution of that conflict, among the first group of signatories of a similar peace petition in January 2016 that garnered 1128 signatures at the time of its promulgation under the title "We will not be parties to this crime," among the 132 intellectuals calling for assistance to those wounded in the conflict at Cizre, and one of 170 academics to sign a letter in 2018 opposing the Afrin operation. On February 21, 2019, he acted as translator for a former member of parliament of the Peoples' Democratic Party (HDP) at a public meeting in Lyon, France, in which a documentary on the Cizre massacres was shown, followed by a discussion. With the resumption of active conflict in August 2015 following a period of relative calm, Altınel reached out to the affected community and began to visit the areas involved in September 2015. His own account of these activities is quoted below, from subsequent court testimony. With the trials of the signatories of the January 2016 petition and the broader wave of repression following the attempted coup of July 2016, described in more detail below, questions of academic freedom and freedom of speech became more prominent. 
Altınel's actions in this direction include: a petition responding to the suicide of Mehmet Fatih Traş, an academic fired for his involvement with the peace petition (February 2017); a denunciation of the role of the Turkish research council TÜBİTAK in the state of emergency following the attempted coup d'état of 2016 (April 2017), following which the CNRS Scientific Council voted unanimously to recommend that the CNRS reconsider its agreements concerning collaboration with TÜBİTAK (April 24–25, 2017); the publication of a review article on the trials of the Academics for Peace entitled "Les procès contre les Universitaires pour la paix : extraits d'une comédie politico-juridique" (The trials of the Academics for Peace: scenes from a politico-juridical spectacle); and a petition in support of Academic for Peace Füsün Üstel. These activities have led to two separate court cases against Altınel in Turkey, and his social media postings have been used to justify the second of these cases. January 2016 petition and Academics for Peace Altınel was one of the first signatories of the January 2016 peace petition entitled "We will not be parties to this crime!", which was promulgated by the Academics for Peace on January 11, 2016. The following day, President Erdoğan publicly criticized the signatories, and within a few days 27 had been arrested. At the same time, foreign reaction was strongly supportive of the signatories. The peace petition ultimately garnered 2212 signatures of academics, largely in Turkey. Altınel is one of over 750 signatories from the first group of 1128 who have been prosecuted or sentenced as individuals for that act under Turkish anti-terrorism legislation, through June 2019, on a charge of "propaganda in support of a terrorist organization." Since 2016 Altınel has been an active and vocal supporter both of the content of this petition and of the civil rights of its signers. 
In the second hearing in his case, February 28, 2019, at the 29th Central Criminal Court, Çağlayan Courthouse, Istanbul, Altınel testified that he had aided civilian victims of military operations that took place in the towns placed under military curfew: The sentencing hearing for Altınel's trial for "propaganda on behalf of a terrorist organization" in the context of the Academics for Peace Trials is scheduled for July 16, 2019. 2019 charge and imprisonment On April 12, 2019, on arriving for a visit to Turkey, Altınel's passport was confiscated at the airport. On May 10 he requested a new passport at the Balıkesir prefecture and was taken into custody for interrogation and placed in pre-trial detention on the following day. It was learned later that a new charge had been filed against him on April 30, 2019, at the prosecutor general's office in Balıkesir. This new charge is "membership in a terrorist organization", based on his participation on February 21, 2019, at a public meeting in Villeurbanne, near Lyon, France. This meeting was organized by the local Kurdish Society; a documentary was shown on the subject of the Cizre massacres and a discussion was held with a former member of the Turkish parliament, Faysal Sarıyıldız (HDP), now in exile. At that public meeting, Altınel acted as translator for the former MP. On May 8 Füsün Üstel was incarcerated and began serving a 15-month sentence for signing the peace petition of January 2016. Altınel was arrested on May 11. At his first hearing on the new charge, held on July 30, 2019, he was released. Reactions Press reports Altınel's May 11 arrest was widely reported in the press, notably in France and in Turkey. Some early reports of the arrest in Turkey quoting variously from Altınel's lawyer or Academics for Peace put the case in the context of the Academics for Peace trials and the conference held in Lyon, France. 
Other reports originating with the İhlas News Agency and reported on Habertürk and elsewhere described the case as the capture of a wanted terrorist; one of these reports stated that an anti-terrorist operation captured five members of the Gülen Movement and the Kurdistan Workers' Party (PKK), listing Altınel's arrest as the fifth. The first article in France, in Mediapart, appeared that same day and was followed rapidly by articles in Le Progrès, Le Monde, 20 minutes, Lyon Capitale, Lyon Mag, Le Figaro Étudiant, Le Figaro, Le Canard enchaîné, Libération, and L'Humanité. Altınel was featured as L'Humanité's Man of the Day on May 16, 2019. Euronews TV reported on the case on May 30, 2019. Official reactions Less than two weeks after the confiscation of Altınel's passport, on April 23, 2019, the French Applied Mathematics Society and the French Mathematical Society wrote jointly to President Macron of France. On May 11, the day of Altınel's arrest, the Turkish Consul General in Lyon, Mehmet Özgür Çakar, stated "Tuna Altınel organized, and moderated, a meeting in Lyon consisting entirely of propaganda in favor of the PKK. ... It is possible that this had a negative effect on his situation." The consul also noted that the PKK remains classified a terrorist group by Ankara, the United States, and the European Union. The French Ministry of Europe and Foreign Affairs expressed its "disquiet" on May 13, 2019. A support committee formed at Lyon created a website to document the evolution of the affair, and on May 23 the committee launched a petition in favor of the liberation of Altınel, with over 6000 signatories as of June 2019, predominantly academics, along with approximately 60 members of the French National Assembly. 
Professional societies from a number of countries, including mathematics societies in the United States, France, Great Britain, Germany, Austria, Italy, and Belgium, as well as the European Mathematical Society, the Association for Symbolic Logic, and the Committee of Concerned Scientists have issued statements in support of Altınel. National Assembly, France On June 11, 2019, the French mathematician and politician Cédric Villani (LREM), Member of Parliament for Essonne's fifth district and Fields medalist, who is a colleague and an outspoken supporter of Altınel, posed a question on the subject during a session of the National Assembly to the Minister for Europe and Foreign Affairs Jean-Yves Le Drian, who stated that the government was committed to doing "everything in its power" in favor of his liberation, notably on the occasion of his June 13 visit to Turkey to consult his counterpart there. See also Stable group, Presidency of Recep Tayyip Erdoğan: State of emergency and purges, Censorship in Turkey: Article 301, Kurdish–Turkish conflict (2015–present) References External links Tuna Altınel: CV Altinel Support Committee, Lyon Webpage, Academics for Peace Observations from 28th February 2019, in the Caglayan Courts ("The Turkish State vs. Academics for Peace"), David Bradley-Williams, April/May 2019 Translation of statement by Altınel, Feb. 28, 2019, Çağlayan Courthouse Scientists from Istanbul Turkish expatriates in France Algebraists Mathematical logicians Academic staff of the University of Lyon Turkish activists 1966 births 20th-century Turkish mathematicians Group theorists Rutgers University alumni Living people Boğaziçi University alumni 21st-century Turkish mathematicians
Tuna Altınel
[ "Mathematics" ]
2,272
[ "Mathematical logic", "Mathematical logicians", "Algebra", "Algebraists" ]
61,252,293
https://en.wikipedia.org/wiki/CBMAR
CBMAR (Comprehensive β-lactamase Molecular Annotation Resource) is a database focused on the annotation and discovery of novel beta-lactamase genes and proteins in bacteria. Beta-lactamases are characterized on CBMAR using the Ambler classification system. CBMAR organizes beta-lactamases according to their classes: A, B, C, and D. They are then further categorized by their (i) sequence variability, (ii) antibiotic resistance profile, (iii) inhibitor susceptibility, (iv) active site, (v) family fingerprints, (vi) mutational profile, (vii) variants, (viii) gene location, (ix) phylogenetic tree, etc. The primary data sources for CBMAR are GenBank and UniProt. CBMAR is built on an Apache HTTP Server 2.2.17 with MySQL Ver 14.14 and hosted on an Ubuntu 11.04 Linux platform. See also Antimicrobial Resistance databases References Antimicrobial resistance organizations Beta-lactam antibiotics Biological databases
CBMAR
[ "Chemistry", "Biology" ]
223
[ "Molecular biology techniques", "Biochemistry databases", "Enzyme databases", "Protein classification" ]
61,252,437
https://en.wikipedia.org/wiki/C23%20%28C%20standard%20revision%29
C23, formally ISO/IEC 9899:2024, is the current open standard for the C programming language, which supersedes C17 (standard ISO/IEC 9899:2018). It was started in 2016 informally as C2x, and was published on October 31, 2024. The freely available draft most similar to the one published is document N3220 (see Available texts, below). The first WG14 meeting for the C2x draft was held in October 2019, virtual remote meetings were held in 2020 due to the COVID-19 pandemic, then various teleconference meetings continued to occur through 2024. In C23, the value of __STDC_VERSION__ changes from 201710L to 202311L. The common names "C17" and "C23" reflect these values, which are frozen prior to final adoption, rather than the years in the ISO standards identifiers (9899:2018 and 9899:2024). Features Changes integrated into the latest working draft of C23 are listed below. Standard Library New functions Add memset_explicit() function in <string.h> to erase sensitive data, where memory store must always be performed regardless of optimizations. Add memccpy() function in <string.h> to efficiently concatenate strings – similar to POSIX and SVID C extensions. Add strdup() and strndup() functions in <string.h> to allocate a copy of a string – similar to POSIX and SVID C extensions. Add memalignment() function in <stdlib.h> to determine the byte alignment of a pointer. Add bit utility functions / macros / types in new header <stdbit.h> to examine many integer types. All start with stdc_ to minimize conflict with legacy code and 3rd party libraries. In the following, replace * with uc, us, ui, ul, ull for five function names, or blank for a type-generic macro. Add stdc_count_ones*() and stdc_count_zeros*() to count number of 1 or 0 bits in value. Add stdc_leading_ones*() and stdc_leading_zeros*() to count leading 1 or 0 bits in value. Add stdc_trailing_ones*() and stdc_trailing_zeros*() to count trailing 1 or 0 bits in value. 
Add stdc_first_leading_one*() and stdc_first_leading_zero*() to find the first leading bit with 1 or 0 in value. Add stdc_first_trailing_one*() and stdc_first_trailing_zero*() to find the first trailing bit with 1 or 0 in value. Add stdc_has_single_bit*() to determine if value is an exact power of 2 (returns true if and only if there is a single 1 bit). Add stdc_bit_floor*() to determine the largest integral power of 2 that is not greater than value. Add stdc_bit_ceil*() to determine the smallest integral power of 2 that is not less than value. Add stdc_bit_width*() to determine the number of bits needed to represent a value. Add timegm() function in <time.h> to convert a time structure into a calendar time value – similar to the function in the glibc and musl libraries. New <math.h> functions based on IEEE 754-2019 recommendations, such as trigonometric functions operating on multiples of π (cospi(), sinpi(), etc.) and the base-10 exponential function exp10(). Existing functions Add %b binary conversion specifier to the printf() function family. Add %b binary conversion specifier to the scanf() function family. Add 0b and 0B binary conversion support to the strtol() and wcstol() function families. Make the functions bsearch(), bsearch_s(), memchr(), strchr(), strpbrk(), strrchr(), strstr(), and their wide counterparts wmemchr(), wcschr(), wcspbrk(), wcsrchr(), wcsstr() return a const-qualified object if one was passed to them. Preprocessor Add #elifdef and #elifndef directives, which are essentially equivalent to #elif defined(...) and #elif !defined(...). Both directives were added to the C++23 standard and GCC 12. Add #embed directive for binary resource inclusion and __has_embed allowing the availability of a resource to be checked by preprocessor directives. Add #warning directive for diagnostics. Add __has_include allowing the availability of a header to be checked by preprocessor directives. Add __has_c_attribute allowing the availability of an attribute to be checked by preprocessor directives. (see "C++ compatibility" group for the new attribute feature) Add __VA_OPT__ functional macro for variadic macros which expands to its argument only if a variadic argument has been passed to the containing macro. 
Types Add nullptr_t, a null pointer type. Add _BitInt(N) and unsigned _BitInt(N) types for bit-precise integers. Add BITINT_MAXWIDTH macro for the maximum supported bit width. Add ckd_add(), ckd_sub(), ckd_mul() macros in the new header <stdckdint.h> for checked integer operations. Variably-modified types (but not VLAs, which are automatic variables allocated on the stack) become a mandatory feature. Better support for using const with arrays. Standardization of the typeof(...) operator. The meaning of the auto keyword was changed to cause type inference while also retaining its old meaning of a storage-class specifier if used alongside a type. Unlike C++, C23 allows type inference only for object definitions (no inferring function return type or function parameter type). Compatibility rules for structure, union, and enumerated types were changed to allow a redeclaration of a compatible type with the same tag. Exact-width integer types may now exceed the width of intmax_t (N2888). Constants Add nullptr constant for the nullptr_t type. Add wb and uwb integer literal suffixes for _BitInt(N) and unsigned _BitInt(N) types, such as 6uwb yields an unsigned _BitInt(3), and -6wb yields a signed _BitInt(4) which has three value bits and one sign bit. Add 0b and 0B binary literal constant prefixes, such as 0b10101010 (equating to 0xAA). Add ' digit separator to literal constants, such as 0xFE'DC'BA'98 (equating to 0xFEDCBA98), 299'792'458 (equating to 299792458), 1.414'213'562 (equating to 1.414213562). Add the ability to specify the underlying type of an enum. Allow enums with no fixed underlying type to store values that are not representable by int. Keywords Add true and false keywords. Add alignas, alignof, bool, static_assert, thread_local keywords. Previously defined keywords become alternative spellings: _Alignas, _Alignof, _Bool, _Static_assert, _Thread_local. Add _BitInt keyword (see "types" group) Add typeof and typeof_unqual keywords (see "types" group) Add nullptr keyword (see "constants" group) Add constexpr keyword (see "other" group) Add _Decimal32, _Decimal64, _Decimal128 keywords for (optional) decimal floating-point arithmetic (see "other" group) Syntax Labels can appear before declarations and at the end of compound statements. Unnamed parameters in function definitions. Zero initialization with {} (including initialization of VLAs). 
Variadic functions no longer need a named argument before the ellipsis, and the va_start macro no longer needs a second argument nor does it evaluate any argument after the first one if present. Add C++11-style attribute syntax using double square brackets [[...]]. In addition to the C++11 attributes listed below, add new attributes: [[unsequenced]] allows compiler optimizations for functions producing repeatable outputs based only on their parameters; [[reproducible]], similar to [[unsequenced]], but for functions whose call order also matters. Add single-argument _Static_assert for compatibility with C++17. Functions with no arguments listed in the prototype (e.g. void foo()) are understood as taking no arguments (see removal of K&R function declarations). C++ compatibility Various syntax changes improve compatibility with C++, such as labels before declarations, unnamed function arguments, zero initialization with {}, variadic functions without a named argument, C++11-style attributes, and _Static_assert (see Syntax). For labels at the end of compound statements a corresponding change was made to C++23. Add C++-style attributes (see Syntax). Add attributes [[deprecated]], [[fallthrough]], [[maybe_unused]], [[nodiscard]], and the [[noreturn]] attribute for compatibility with C++11, then deprecate the _Noreturn keyword, the noreturn macro, and the header <stdnoreturn.h> features introduced in C11. Duplicate attributes are allowed for compatibility with C++23. All standard attributes can also be surrounded by double underscores (e.g. [[__deprecated__]] is equivalent to [[deprecated]]). Add u8 prefix for character literals to represent UTF-8 encoding for compatibility with C++17. Add #elifdef and #elifndef preprocessing directives for compatibility with C++23. (see "preprocessor" group) Other features Support for ISO/IEC 60559:2020, the current version of the IEEE 754 standard for floating-point arithmetic, with extended binary floating-point arithmetic and (optional) decimal floating-point arithmetic. The constexpr specifier for objects but not functions, unlike C++'s equivalent. 
Add char8_t type for storing UTF-8 encoded data and change the type of u8 character constants and string literals to char8_t. Also, the functions mbrtoc8() and c8rtomb() to convert a narrow multibyte character to UTF-8 encoding and a single code point from UTF-8 to a narrow multibyte character representation respectively. Clarify that all char16_t strings and literals shall be UTF-16 encoded, and all char32_t strings and literals shall be UTF-32 encoded, unless otherwise explicitly specified. Allow storage class specifiers to appear in compound literal definitions. Obsolete features Some old obsolete features are either removed or deprecated from the working draft of C23: Remove trigraphs. Remove K&R function definitions/declarations (with no information about the function arguments). Remove representations for signed integers other than two's complement. Two's complement signed integer representation will be required. The *_HAS_SUBNORM macros in <float.h> are obsolescent features. Compiler support The following compilers implement an experimental compiler flag to support this standard: GCC 9, Clang 9.0, Pelles C 11.00. Available texts Like other editions of the C standard, the official ISO text of the standard is not freely available. The latest working draft pre-C23 that was made public was N3096, dated 2023-04-01. In the months that followed this draft, hundreds of changes were made before producing the working draft N3149, dated 2023-07-09, and the official draft standard N3219, dated 2024-02-22. Neither of these later drafts is public. On the same date that the draft standard N3219 was announced, a new working draft N3220 was made public. While this document is officially described as a draft of the future version "C2Y" of the standard, the accompanying "Editor's Report" specifies that N3220 differs from the draft C23 standard N3219 only by a fix to one footnote in Annex K. 
See also C++23, C++20, C++17, C++14, C++11, C++03, C++98, versions of the C++ programming language standard Compatibility of C and C++ References Further reading N3096 (last freely-available working draft before C23); WG14; April 2023. (free download) N3149 (working draft of C23 standard); WG14; July 2023. (not available to public) N3219 (ISO/IEC 9899:2023 DIS Draft); WG14; February 2024. (ISO draft available but not free) ISO/IEC 9899:2024 (official C23 standard); ISO; 2024. (planning for release in 2024) N3220 (first working draft after C23; differs from draft standard N3219 only in one footnote); WG14; February 2024. (free download) External links C Language WG14 (Working Group 14) WG14 Document Repository WG14 Meetings - agenda and minutes WG14 Charters: C2x Charter, C23 Charter, Interpreting the C23 Charter, C Standard Charter C (programming language) Programming language standards
C23 (C standard revision)
[ "Technology" ]
2,683
[ "Computer standards", "Programming language standards" ]
61,252,480
https://en.wikipedia.org/wiki/Turbot%20%28business%29
Turbot (Turbot HQ, Inc) is a privately held software company headquartered in the United States. Turbot provides automated cloud governance controls for enterprise cloud applications and infrastructure. History Turbot was founded by Nathan Wallace in 2014 as a US corporation based in New Jersey. In 2016 the company expanded its virtual office across several US states. In 2017, Turbot announced an expansion in the United Kingdom (Turbot HQ Limited) and India (Turbot HQ India Private Limited). In 2018 Turbot expanded its footprint through multiple cities in India and started operations in Australia (Turbot HQ Private Limited). Turbot provides real-time, automated configuration and control of software-defined infrastructure in cloud platforms. Turbot is an AWS Advanced Technology Partner with multiple certified competencies, including Security, Cloud Management, and Life Sciences. Turbot is also a Microsoft Azure Partner and Google Cloud Platform partner, as well as a member of the Cloud Native Computing Foundation, Linux Foundation, and Center for Internet Security. Turbot's automated governance platform lets an enterprise cloud team focus on delivering higher-level value while its development teams remain agile through the use of native cloud tools. Turbot maintains consulting, managed services, and technology integration partnerships that extend its partners' services and capabilities. References External links Official site Cloud computing providers Cloud infrastructure Cloud platforms
Turbot (business)
[ "Technology" ]
274
[ "Cloud infrastructure", "Cloud platforms", "Computing platforms", "IT infrastructure" ]
61,252,857
https://en.wikipedia.org/wiki/Antimicrobial%20Drug%20Database
AMDD, otherwise known as the Antimicrobial Drug Database, is a biological database that consolidates antibacterial and antifungal drug information from a variety of sources, such as PubChem, PubChem BioAssay, ZINC, ChemDB and DrugBank, in order to advance the treatment of resistant microbes. As of 2012, AMDD contains ~2900 antibacterial and ~1200 antifungal compounds. These compounds are organized by description, target, format, bioassay, molecular weight, hydrogen bond donors, hydrogen bond acceptors and rotatable bonds. AMDD was built on Apache server 2.2.11. The database ultimately aims to provide a comprehensive, intuitively navigated tool that facilitates the development of novel antibacterial and antifungal compounds. See also Antimicrobial resistance databases References Antimicrobial resistance organizations Biological databases
Antimicrobial Drug Database
[ "Biology" ]
192
[ "Bioinformatics", "Biological databases" ]
61,253,455
https://en.wikipedia.org/wiki/INTEGRALL
Integrall is a database that seeks to document and annotate integrons and all other transposable elements that confer resistance to antibiotics in bacteria. As of release 1.2, Integrall contains ~4800 integron sequences. Transposable elements and integrons in bacteria are a major concern in the field of antimicrobial drug research because, through interbacterial gene transfer, they allow bacteria to develop resistances that they typically could not acquire on their own. Thus, Integrall seeks to be a comprehensive and unified database that facilitates the understanding and usage of integron information. Integrall is built on PHP5 using MySQL 5.0. See also Antimicrobial resistance databases References Antimicrobial resistance organizations Biological databases
INTEGRALL
[ "Biology" ]
160
[ "Bioinformatics", "Biological databases" ]
61,256,010
https://en.wikipedia.org/wiki/Marine%20primary%20production
Marine primary production is the chemical synthesis in the ocean of organic compounds from atmospheric or dissolved carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are called primary producers or autotrophs. Most marine primary production is generated by a diverse collection of marine microorganisms called algae and cyanobacteria. Together these form the principal primary producers at the base of the ocean food chain and produce half of the world's oxygen. Marine primary producers underpin almost all marine animal life by generating nearly all of the oxygen and food marine animals need to exist. Some marine primary producers are also ecosystem engineers which change the environment and provide habitats for other marine life. Primary production in the ocean can be contrasted with primary production on land. Globally the ocean and the land each produce about the same amount of primary production, but in the ocean primary production comes mainly from cyanobacteria and algae, while on land it comes mainly from vascular plants. Marine algae includes the largely invisible and often unicellular microalgae, which together with cyanobacteria form the ocean phytoplankton, as well as the larger, more visible and complex multicellular macroalgae commonly called seaweed. Seaweeds are found along coastal areas, living on the floor of continental shelves and washed up in intertidal zones. Some seaweeds drift with plankton in the sunlit surface waters (epipelagic zone) of the open ocean. Back in the Silurian, some phytoplankton evolved into red, brown and green algae. 
These algae then invaded the land and started evolving into the land plants we know today. Later in the Cretaceous some of these land plants returned to the sea as mangroves and seagrasses. These are found along coasts in intertidal regions and in the brackish water of estuaries. In addition, some seagrasses, like seaweeds, can be found at depths up to 50 metres on both soft and hard bottoms of the continental shelf. Marine primary producers Primary producers are the autotroph organisms that make their own food instead of eating other organisms. This means primary producers become the starting point in the food chain for heterotroph organisms that do eat other organisms. Some marine primary producers are specialised bacteria and archaea which are chemotrophs, making their own food by gathering around hydrothermal vents and cold seeps and using chemosynthesis. However, most marine primary production comes from organisms which use photosynthesis on the carbon dioxide dissolved in the water. This process uses energy from sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. Marine primary producers are important because they underpin almost all marine animal life by generating most of the oxygen and food that provide other organisms with the chemical energy they need to exist. The principal marine primary producers are cyanobacteria, algae and marine plants. The oxygen released as a by-product of photosynthesis is needed by nearly all living things to carry out cellular respiration. In addition, primary producers are influential in the global carbon and water cycles. They stabilize coastal areas and can provide habitats for marine animals. 
The term division has been traditionally used instead of phylum when discussing primary producers, although the International Code of Nomenclature for algae, fungi, and plants now accepts the terms as equivalent. In a reversal of the pattern on land, in the oceans, almost all photosynthesis is performed by algae and cyanobacteria, with a small fraction contributed by vascular plants and other groups. Algae encompass a diverse range of organisms, ranging from single floating cells to attached seaweeds. They include photoautotrophs from a variety of groups. Eubacteria are important photosynthesizers in both oceanic and terrestrial ecosystems, and while some archaea are phototrophic, none are known to utilise oxygen-evolving photosynthesis. A number of eukaryotes are significant contributors to primary production in the ocean, including green algae, brown algae and red algae, and a diverse array of unicellular groups. Vascular plants are also represented in the ocean by groups such as the seagrasses. Unlike terrestrial ecosystems, the majority of primary production in the ocean is performed by free-living microscopic organisms called phytoplankton. It has been estimated that half of the world's oxygen is produced by phytoplankton. Larger autotrophs, such as the seagrasses and macroalgae (seaweeds), are generally confined to the littoral zone and adjacent shallow waters, where they can attach to the underlying substrate but still be within the photic zone. There are exceptions, such as Sargassum, but the vast majority of free-floating production takes place within microscopic organisms. The factors limiting primary production in the ocean are also very different from those on land. The availability of water, obviously, is not an issue (though its salinity can be). 
Similarly, temperature, while affecting metabolic rates (see Q10), ranges less widely in the ocean than on land because the heat capacity of seawater buffers temperature changes, and the formation of sea ice insulates it at lower temperatures. However, the availability of light, the source of energy for photosynthesis, and mineral nutrients, the building blocks for new growth, play crucial roles in regulating primary production in the ocean. Available Earth system models suggest that ongoing ocean biogeochemical changes could trigger reductions in ocean net primary production (NPP) of between 3% and 10% of current values, depending on the emissions scenario. In 2020 researchers reported that measurements over the last two decades of primary production in the Arctic Ocean show an increase of nearly 60% due to higher concentrations of phytoplankton. They hypothesize new nutrients are flowing in from other oceans and suggest this means the Arctic Ocean may be able to support higher trophic level production and additional carbon fixation in the future. Cyanobacteria Cyanobacteria are a phylum (division) of bacteria, ranging from unicellular to filamentous and including colonial species, which fix inorganic carbon into organic carbon compounds. They are found almost everywhere on Earth: in damp soil, in both freshwater and marine environments, and even on Antarctic rocks. In particular, some species occur as drifting cells floating in the ocean, and as such were amongst the first of the phytoplankton. These bacteria function like algae in that they can process nitrogen from the atmosphere when none is in the ocean. The first primary producers that used photosynthesis were oceanic cyanobacteria about 2.3 billion years ago. The release of molecular oxygen by cyanobacteria as a by-product of photosynthesis induced global changes in the Earth's environment. 
Because oxygen was toxic to most life on Earth at the time, this led to the near-extinction of oxygen-intolerant organisms, a dramatic change which redirected the evolution of the major animal and plant species. The tiny marine cyanobacterium Prochlorococcus, discovered in 1986, today forms part of the base of the ocean food chain and accounts for more than half the photosynthesis of the open ocean and an estimated 20% of the oxygen in the Earth's atmosphere. It is possibly the most plentiful genus on Earth: a single millilitre of surface seawater may contain 100,000 cells or more. Originally, biologists thought cyanobacteria were algae, and referred to them as "blue-green algae". The more recent view is that cyanobacteria are bacteria, and hence are not even in the same kingdom as algae. Most authorities exclude all prokaryotes, and hence cyanobacteria, from the definition of algae. Biological pigments Biological pigments are any coloured material in plant or animal cells. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The primary function of pigments in plants is photosynthesis, which uses the green pigment chlorophyll and several colourful pigments that absorb as much light energy as possible. Chlorophyll is the primary pigment in plants; it is a chlorin that absorbs yellow and blue wavelengths of light while reflecting green. It is the presence and relative abundance of chlorophyll that gives plants their green colour. Green algae and plants possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess only chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light in order to fuel photosynthesis. 
Chloroplasts Chloroplasts (from the Greek chloros for green, and plastes for "the one who forms") are organelles that conduct photosynthesis, where the photosynthetic pigment chlorophyll captures the energy from sunlight, converts it, and stores it in the energy-storage molecules while freeing oxygen from water in plant and algal cells. They then use the stored energy to make organic molecules from carbon dioxide in a process known as the Calvin cycle. A chloroplast is a type of organelle known as a plastid, characterized by its two membranes and a high concentration of chlorophyll. They are highly dynamic—they circulate and are moved around within plant cells, and occasionally pinch in two to reproduce. Their behavior is strongly influenced by environmental factors like light colour and intensity. Chloroplasts, like mitochondria, contain their own DNA, which is thought to be inherited from their ancestor—a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell. Chloroplasts cannot be made by the plant cell and must be inherited by each daughter cell during cell division. Most chloroplasts can probably be traced back to a single endosymbiotic event, when a cyanobacterium was engulfed by the eukaryote. Despite this, chloroplasts can be found in an extremely wide set of organisms, some not even directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events. Microbial rhodopsin Phototrophic metabolism relies on one of three energy-converting pigments: chlorophyll, bacteriochlorophyll, and retinal. Retinal is the chromophore found in rhodopsins. The significance of chlorophyll in converting light energy has been written about for decades, but phototrophy based on retinal pigments is just beginning to be studied. In 2000 a team of microbiologists led by Edward DeLong made a crucial discovery in the understanding of the marine carbon and energy cycles. 
They discovered a gene in several species of bacteria responsible for production of the protein rhodopsin, previously unheard of in bacteria. These proteins, found in the cell membranes, are capable of converting light energy to biochemical energy: sunlight striking the rhodopsin molecule changes its configuration, causing it to pump a proton from inside to outside, and the subsequent proton inflow generates the energy. The archaeal-like rhodopsins have subsequently been found among different taxa, in protists as well as in bacteria and archaea, though they are rare in complex multicellular organisms. Research in 2019 shows these "sun-snatching bacteria" are more widespread than previously thought and could change how oceans are affected by global warming. "The findings break from the traditional interpretation of marine ecology found in textbooks, which states that nearly all sunlight in the ocean is captured by chlorophyll in algae. Instead, rhodopsin-equipped bacteria function like hybrid cars, powered by organic matter when available—as most bacteria are—and by sunlight when nutrients are scarce." There is an astrobiological conjecture called the Purple Earth hypothesis which surmises that original life forms on Earth were retinal-based rather than chlorophyll-based, which would have made the Earth appear purple instead of green. Marine algae Algae is an informal term for a widespread and diverse collection of photosynthetic eukaryotic organisms which are not necessarily closely related and are thus polyphyletic. Unlike higher plants, algae lack roots, stems, or leaves. Algal groups Marine algae have traditionally been placed in groups such as: green algae, red algae, brown algae, diatoms, coccolithophores and dinoflagellates. Green algae Green algae live most of their lives as single cells or are filamentous, while others form colonies made up from long chains of cells, or are highly differentiated macroscopic seaweeds. 
They form an informal group containing about 8,000 recognized species. Red algae Modern red algae are mostly multicellular with differentiated cells and include many notable seaweeds. As coralline algae, they play an important role in the ecology of coral reefs. They form a (disputed) phylum containing about 7,000 recognized species. Brown algae Brown algae are mostly multicellular and include many seaweeds, including kelp. They form a class containing about 2,000 recognized species. Diatoms Altogether, about 45 percent of the primary production in the oceans is contributed by diatoms. Coccolithophores Coccolithophores are almost exclusively marine and are found in large numbers throughout the sunlit zone of the ocean. They have calcium carbonate plates (or scales) of uncertain function called coccoliths, which are important microfossils. Coccolithophores are of interest to those studying global climate change because as ocean acidity increases, their coccoliths may become even more important as a carbon sink. The most abundant coccolithophore species, Emiliania huxleyi, is a ubiquitous component of the plankton base in marine food webs. Management strategies are being employed to prevent eutrophication-related coccolithophore blooms, as these blooms lead to a decrease in nutrient flow to lower levels of the ocean. Dinoflagellates Mixotrophic algae Other groups Traditionally, the phylogeny of microorganisms, such as the algal groups discussed above, was inferred and their taxonomy established based on studies of morphology. However, developments in molecular phylogenetics have allowed the evolutionary relationship of species to be established by analyzing their DNA and protein sequences. Many taxa, including the algal groups discussed above, are in the process of being reclassified or redefined using molecular phylogenetics. 
Recent developments in molecular sequencing have allowed the recovery of genomes directly from environmental samples, avoiding the need for culturing. This has led, for example, to a rapid expansion in knowledge of the abundance and diversity of marine microorganisms. Molecular techniques such as genome-resolved metagenomics and single-cell genomics are being used in combination with high-throughput techniques. Between 2009 and 2013, the Tara Oceans expedition traversed the world oceans collecting plankton and analysing them with contemporary molecular techniques. They found a huge range of previously unknown photosynthetic and mixotrophic algae. Among their findings were the diplonemids. These organisms are generally colourless and oblong in shape, typically about 20 μm long and with two flagella. Evidence from DNA barcoding suggests diplonemids may be among the most abundant and most species-rich of all marine eukaryote groups. By size Algae can be classified by size as microalgae or macroalgae. Microalgae Microalgae are the microscopic types of algae, not visible to the naked eye. They are mostly unicellular species which exist as individuals or in chains or groups, though some are multicellular. Microalgae are important components of the marine protists, as well as the marine phytoplankton. They are very diverse. It has been estimated there are 200,000–800,000 species, of which about 50,000 species have been described. Depending on the species, their sizes range from a few micrometers (μm) to a few hundred micrometers. They are specially adapted to an environment dominated by viscous forces. Macroalgae Macroalgae are the larger, multicellular and more visible types of algae, commonly called seaweeds. Seaweeds usually grow in shallow coastal waters where they are anchored to the seafloor by a holdfast. Seaweed that becomes adrift can wash up on beaches. 
Kelps are large brown seaweeds that form underwater forests covering about 25% of the world's coastlines; these forests are among the most productive and dynamic ecosystems on Earth. Some Sargassum seaweeds are planktonic (free-floating) and form floating drifts. Like microalgae, macroalgae (seaweeds) are technically marine protists since they are not true plants. Evolution of land plants The diagram on the right shows an evolutionary scenario for the conquest of land by streptophytes. Streptophyte algae include all green algae, and are the only photosynthetic eukaryotes from which the macroscopic land flora evolved (red lines). That said, throughout the course of evolution, algae from various other lineages have colonized land (yellow lines)—but also streptophyte algae have continuously and independently made the wet to dry transition (convergence of red and yellow). Throughout history, numerous lineages have become extinct (X labels). Terrestrial algae of various taxonomic affiliations dwell on rock surfaces and form biological soil crusts. From the diversity of the paraphyletic streptophyte algae, however, emerged an organism whose descendants eventually conquered land on a global scale: a likely branched filamentous—or even parenchymatous—organism that formed rhizoidal structures and experienced desiccation from time to time. From this "hypothetical hydro-terrestrial alga", the lineages of Zygnematophyceae and embryophytes (land plants) arose. In its infancy, the trajectory leading to the embryophytes was represented by the—now extinct—earliest land plants. The earliest land plants probably interacted with beneficial substrate microbiota that aided them in obtaining nutrients from their substrate. Furthermore, the earliest land plants had to successfully overcome a barrage of terrestrial stressors (including ultraviolet light and photosynthetically active irradiance, drought, drastic temperature shifts, etc.). 
They succeeded because they had the right set of traits—a mix of adaptations that were selected for in their hydro-terrestrial algal ancestors, exaptations, and the potential for co-option of a fortuitous set of genes and pathways. During the course of evolution, some members of the populations of the earliest land plants gained traits that are adaptive in terrestrial environments (such as some form of water conductance, stomata-like structures, embryos, etc.); eventually, the "hypothetical last common ancestor of land plants" emerged. From this ancestor, the extant bryophytes and tracheophytes evolved. While the exact trait repertoire of the hypothetical last common ancestor of land plants is uncertain, it will certainly have entailed properties of vascular and non-vascular plants. What is also certain is that the last common ancestor of land plants had traits of algal ancestry. Marine plants Back in the Silurian, some phytoplankton evolved into red, brown and green algae. Green algae then invaded the land and started evolving into the land plants we know today. Later, in the Cretaceous, some of these land plants returned to the sea as mangroves and seagrasses. Plant life can flourish in the brackish waters of estuaries, where mangroves or cordgrass or beach grass might grow. Flowering plants grow in sandy shallows in the form of seagrass meadows, mangroves line the coast in tropical and subtropical regions and salt-tolerant plants thrive in regularly inundated salt marshes. All of these habitats are able to sequester large quantities of carbon and support a biodiverse range of larger and smaller animal life. Marine plants can be found in intertidal zones and shallow waters, such as seagrasses like eelgrass and turtle grass, Thalassia. These plants have adapted to the high salinity of the ocean environment. Light is only able to penetrate the upper part of the water column (the photic zone), so this is the only part of the sea where plants can grow. 
The surface layers are often deficient in biologically active nitrogen compounds. The marine nitrogen cycle consists of complex microbial transformations which include the fixation of nitrogen, its assimilation, nitrification, anammox and denitrification. Some of these processes take place in deep water, so plant growth is higher where there is an upwelling of cold waters and also near estuaries where land-sourced nutrients are present. This means that the most productive areas, rich in plankton and therefore also in fish, are mainly coastal. Mangroves Mangroves provide important nursery habitats for marine life, acting as hiding and foraging places for larval and juvenile forms of larger fish and invertebrates. Based on satellite data, the total world area of mangrove forests was estimated in 2010 as . Spalding, M. (2010) World atlas of mangroves, Routledge. Seagrasses Like mangroves, seagrasses provide important nursery habitats for larval and juvenile forms of larger fish and invertebrates. The total world area of seagrass meadows is more difficult to determine than that of mangrove forests, but was conservatively estimated in 2003 as . Stoichiometry The stoichiometry (measurement of chemical reactants and products) of primary production in the surface ocean plays a crucial role in the cycling of elements in the global ocean. The ratio between the elements carbon (C), nitrogen (N), and phosphorus (P) in exported organic matter, expressed in terms of the C:N:P ratio, helps determine how much atmospheric carbon is sequestered in the deep ocean with respect to the availability of limiting nutrients. On geologic timescales, the N:P ratio reflects the relative availability of nitrate with respect to phosphate, both of which are externally supplied from the atmosphere via nitrogen fixation and/or continents via river supply and lost by denitrification and burial.
On shorter timescales, the average stoichiometry of exported bulk particulate organic matter reflects the elemental stoichiometry of phytoplankton, with additional influences from biological diversity and secondary processing of organic matter by zooplankton and heterotrophic bacteria. In the face of global change, understanding and quantifying the mechanisms that lead to variability in C:N:P ratios are crucial in order to have an accurate projection of future climate change. A key unresolved question is what determines C:N:P of individual phytoplankton. Phytoplankton grows in the upper sunlit layer of the ocean, where the amount of inorganic nutrients, light, and temperature vary spatially and temporally. Laboratory studies show that these fluctuations trigger responses at the cellular level, whereby cells modify resource allocation in order to adapt optimally to their ambient environment. For example, phytoplankton may alter resource allocation between the P-rich biosynthetic apparatus, N-rich light-harvesting apparatus, and C-rich energy storage reserves. Under a typical future warming scenario, the global ocean is expected to undergo changes in nutrient availability, temperature, and irradiance. These changes are likely to have profound effects on the physiology of phytoplankton, and observations show that competitive phytoplankton species can acclimate and adapt to changes in temperature, irradiance, and nutrients on decadal timescales. Numerous laboratory and field experiments have been conducted that study the relationship between the C:N:P ratio of phytoplankton and environmental drivers. It is, however, challenging to synthesize those studies and generalize the response of phytoplankton C:N:P to changes in environmental drivers.
Individual studies employ different sets of statistical analyses to characterize the effects of the environmental driver(s) on elemental ratios, ranging from a simple t test to more complex mixed models, which makes interstudy comparisons challenging. In addition, since environmentally induced trait changes are driven by a combination of plasticity (acclimation), adaptation, and life history, stoichiometric responses of phytoplankton can be variable even amongst closely related species. Meta-analysis/systematic review is a powerful statistical framework for synthesizing and integrating research results obtained from independent studies and for uncovering general trends. The seminal synthesis by Geider and La Roche in 2002, as well as the more recent work by Persson et al. in 2010, has shown that C:P and N:P could vary by up to a factor of 20 between nutrient-replete and nutrient-limited cells. These studies have also shown that the C:N ratio can be modestly plastic due to nutrient limitation. A meta-analysis study by Hillebrand et al. in 2013 highlighted the importance of growth rate in determining elemental stoichiometry and showed that both C:P and N:P ratios decrease with the increasing growth rate. In 2015, Yvon-Durocher et al. investigated the role of temperature in modulating C:N:P. Although their dataset was limited to studies conducted prior to 1996, they have shown a statistically significant relationship between C:P and temperature increase. MacIntyre et al. (2002) and Thrane et al. (2016) have shown that irradiance plays an important role in controlling optimal cellular C:N and N:P ratios. Most recently, Moreno and Martiny (2018) provided a comprehensive summary of how environmental conditions regulate cellular stoichiometry from a physiological perspective. The elemental stoichiometry of marine phytoplankton plays a critical role in global biogeochemical cycles through its impact on nutrient cycling, secondary production, and carbon export. 
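The kind of effect-size synthesis described above can be sketched in a few lines. The example below is an illustration only, not data from any of the cited studies: it pools invented C:P ratios from three hypothetical "studies" using the log response ratio (lnRR), one common effect-size metric in such meta-analyses; the function and variable names are ours.

```python
import math

def log_response_ratio(mean_treatment, mean_control):
    """Effect size comparing two group means on a log scale (lnRR)."""
    return math.log(mean_treatment / mean_control)

# Hypothetical C:P ratios (nutrient-replete vs nutrient-limited cells) from
# three invented "studies" -- for illustration only, not real data.
studies = [(80.0, 160.0), (95.0, 210.0), (110.0, 330.0)]

effects = [log_response_ratio(replete, limited) for replete, limited in studies]
pooled = sum(effects) / len(effects)  # unweighted mean effect size
print(f"pooled lnRR = {pooled:.3f}")  # negative: replete cells have lower C:P
```

A real meta-analysis would weight each effect by its variance and test for heterogeneity between studies, but the pooled lnRR above captures the basic idea of combining independent experiments into one summary number.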
Although extensive laboratory experiments have been carried out over the years to assess the influence of different environmental drivers on the elemental composition of phytoplankton, a comprehensive quantitative assessment of the processes is still lacking. Here, the responses of the P:C and N:C ratios of marine phytoplankton to five major drivers (inorganic phosphorus, inorganic nitrogen, inorganic iron, irradiance, and temperature) have been synthesized by a meta-analysis of experimental data across 366 experiments from 104 journal articles. These results show that the response of these ratios to changes in macronutrients is consistent across all the studies, with increases in nutrient availability positively related to changes in P:C and N:C ratios. The results show that eukaryotic phytoplankton are more sensitive to changes in macronutrients than prokaryotes, possibly due to their larger cell size and their ability to regulate gene expression patterns quickly. The effect of irradiance was significant and constant across all studies: an increase in irradiance decreased both P:C and N:C. The P:C ratio decreased significantly with warming, but the response to temperature changes was mixed depending on the culture growth mode and the growth phase at the time of harvest. Along with other oceanographic conditions of the subtropical gyres (e.g., low macronutrient availability), the elevated temperature may explain why P:C is consistently low in subtropical oceans. Iron addition did not systematically change either P:C or N:C. Evolutionary timeline See also Algae Aquatic plants Biological pump Evolutionary history of plants Oceanic carbon cycle Plant evolution Timeline of plant evolution Evolution of photosynthesis References Further reading Falkowski, Paul (Ed.) (2013) Primary Productivity in the Sea, Springer. Falkowski, Paul and Raven, John A. (2013) Aquatic Photosynthesis, Second edition revised, Princeton University Press.
Falkowski P and Knoll AH (2011) Evolution of Primary Producers in the Sea, Academic Press. Kirk, John T. O. (2010) Light and Photosynthesis in Aquatic Ecosystems, Third edition revised, Cambridge University Press. Evolution-related timelines Marine botany Algae Seagrass Seaweeds Branches of botany
Marine primary production
[ "Biology" ]
5,987
[ "Seaweeds", "Algae", "Branches of botany", "Marine biology" ]
61,256,222
https://en.wikipedia.org/wiki/Hajek%20projection
In statistics, the Hájek projection of a random variable T on a set of independent random vectors X_1, ..., X_n is a particular measurable function of X_1, ..., X_n that, loosely speaking, captures the variation of T in an optimal way. It is named after the Czech statistician Jaroslav Hájek. Definition Given a random variable T and a set of independent random vectors X_1, ..., X_n, the Hájek projection T̂ of T onto X_1, ..., X_n is given by T̂ = E[T] + Σ_{i=1}^n (E[T | X_i] − E[T]). Properties The Hájek projection is an L² projection of T onto the linear subspace of all random variables of the form Σ_{i=1}^n g_i(X_i), where g_1, ..., g_n are arbitrary measurable functions such that E[g_i²(X_i)] < ∞ for all i = 1, ..., n. The projection satisfies E[T̂] = E[T] and hence Var(T̂) ≤ Var(T). Under some conditions, the asymptotic distributions of a sequence of statistics T_n and of the sequence of its Hájek projections T̂_n coincide: namely, if Var(T_n)/Var(T̂_n) → 1, then (T_n − E[T_n])/√Var(T_n) − (T̂_n − E[T̂_n])/√Var(T̂_n) converges to zero in probability. References Asymptotic analysis Multivariate statistics Probability theory
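The projection can be checked numerically. The sketch below is an invented illustration, not from the article: for independent Uniform(0,1) variables X1 and X2 and the statistic T = X1·X2, the Hájek projection works out to T̂ = E[T|X1] + E[T|X2] − E[T] = X1/2 + X2/2 − 1/4, and simulation confirms that T̂ matches the mean of T and approximates T better (in mean square) than another additive candidate.

```python
import random

random.seed(0)
n = 200_000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]

t = [a * b for a, b in zip(x1, x2)]                          # statistic T = X1*X2
hat_t = [0.5 * a + 0.5 * b - 0.25 for a, b in zip(x1, x2)]   # Hájek projection of T

mean = lambda xs: sum(xs) / len(xs)
mse = lambda u, v: mean([(p - q) ** 2 for p, q in zip(u, v)])

# The projection preserves the mean of T (both are approximately 1/4) ...
assert abs(mean(t) - mean(hat_t)) < 1e-3

# ... and is the best additive approximation: it beats another
# mean-matching additive statistic g1(X1) + g2(X2) = X1 + X2 - 3/4.
alt = [a + b - 0.75 for a, b in zip(x1, x2)]
assert mse(t, hat_t) < mse(t, alt)
```

The second assertion reflects the defining optimality of the projection: among all sums g_1(X_1) + g_2(X_2), the Hájek projection minimizes the mean squared distance to T.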
Hajek projection
[ "Mathematics" ]
162
[ "Mathematical analysis", "Asymptotic analysis" ]
61,256,626
https://en.wikipedia.org/wiki/C15H17ClN2O2
{{DISPLAYTITLE:C15H17ClN2O2}} The molecular formula C15H17ClN2O2 (molar mass: 292.76 g/mol) may refer to: Climbazole Lortalamine (LM-1404) Molecular formulas
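The quoted molar mass can be reproduced from the formula string itself. The sketch below is illustrative (the helper name and the restriction to the handful of elements on these pages are ours): it parses element/count pairs out of a Hill-style formula and sums standard atomic weights.

```python
import re

# Standard atomic weights (IUPAC, rounded) for the elements used here.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "Cl": 35.45, "N": 14.007, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Sum atomic weights over element/count pairs like 'C15H17ClN2O2'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHT[element] * (int(count) if count else 1)
    return total

print(round(molar_mass("C15H17ClN2O2"), 2))  # 292.76, as quoted above
```

The same function reproduces the molar masses quoted on the other formula pages in this collection, e.g. 258.70 g/mol for C15H11ClO2.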
C15H17ClN2O2
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,259,381
https://en.wikipedia.org/wiki/Lel%20and%20Polel
Lel and Polel (Latin: Leli, Poleli) are Polish divine twins, first mentioned by Maciej Miechowita in the 16th century, where he presents them as equivalents of Castor and Pollux and the sons of the goddess Łada, the equivalent of Leda. There is no complete agreement about the authenticity of the cult of Lel and Polel. Sources Lel and Polel were first mentioned in the Chronica Polonorum by Maciej Miechowita, where he corrects Jan Długosz, who wrote that Łada was the Polish equivalent of the Roman god of war Mars: Marcin Kromer, Maciej Stryjkowski, Marcin Bielski and his son Joachim also mention the twins. Alessandro Guagnini claimed that the cult of Lel and Polel existed during his lifetime in Greater Poland. The priest Jakub Wujek also mentions "Lelipoleli". Research Initially, the authenticity of the gods Lel and Polel was not questioned, as evidenced by their popularity among major Polish writers such as Ignacy Krasicki, Juliusz Słowacki and Stanisław Wyspiański. Aleksander Brückner, who was one of the first researchers to tackle the topic of the Polish pantheon, categorically rejected the authenticity of Lel and Polel. He believed that the cries Łada, Łada, Ilela and Leli Poleli cited by Miechowita were in fact only a drinking song, an exclamation similar to tere-fere or fistum-pofistum, and that the alleged names were derived from the word lelać, "to sway". Despite Brückner's significant achievements, many modern researchers accuse him of a hypercritical or even pseudoscientific approach to the subject of the Polish pantheon. The attitude towards the cult of Lel and Polel changed in 1969, when two oak cult figures dating from the 11th or 12th century were discovered on the island of Fischerinsel in the Tollensesee in Mecklenburg. One of them is 178 cm high and presents two male figures with moustaches, in headgear (helmets?), which are fused at the heads and torsos.
The second primitive representation, which is 157 cm high, shows a female figure with clearly outlined breasts. Some researchers suggest that these idols depict Lel and Polel and their mother Łada. Following the abandonment of Brückner's hypercritical attitude and the discovery of the twin figures on the island of Fischerinsel, modern researchers are more confident about the authenticity of their cult. Against the derivation of the names from drinking songs, Karol Potkański cites the personal names Lel and Lal and the Russian song Lelij, Lelij, Lelij zelenyj and my Lado!, in which the first word may be associated with the dialectal Russian word lelek, meaning a "strong, healthy youth". Voditь leli is a women's pageant honouring young married women that shows the original ritual and mythical connotations, which after several centuries could have devolved into drunken chants. From the 17th century, the term lelum polelum in the sense of "slow, sluggish" was recorded, which may have been the result of desacralization. According to Andrzej Szyjewski, Lelum and Polelum could have been zodiacal twins, and in the opinion of Aleksander Gieysztor they brought happiness, which may be reflected in belief in the magical power of a double ear [of grain]. However, according to Grzegorz Niedzielski, Lel and Polel are the invention of Miechowita, and the Slavic twin brothers were instead Łada and Leli, where Łada was the fire god; a remnant of the divine twins is the legend of Waligóra and Wyrwidąb. Lel and Polel in culture Literature Janusz Christa: Kajko and Kokosz ("Lelum polelum" is a favorite saying of Breakbone) Ignacy Krasicki: Myszeis. ("Popiel calls, begs Lelum Polelum") Juliusz Słowacki: Lilla Weneda. Lelum and Polelum are the sons of the king of the Veneti and were kidnapped by the Lechites. Stanisław Wyspiański: Skałka. ("Lel, cause it, my friend") Adam Mickiewicz: Pan Tadeusz.
("Castor and his brother Pollux glittered at their head, once called among the Slavs Lele and Polele") Władysław Orkan: Drzewiej. Powieść. ("there was Lel, uncle of the god, and Lada or Polel, the son who charged the sword; there was Lelej or Lelek, the keeper of the herds") Music Lao Che – Lelum Polelum Rod – Lelum Polelum (album) Sulin – Lelum Polelum Video games The twin gods appear in the video game Blacktail by the Polish developer The Parasight, as standing stones (touching one transports the player to the other). Footnotes References Bibliography Slavic gods Divine twins Castor and Pollux
Lel and Polel
[ "Astronomy" ]
1,103
[ "Castor and Pollux", "Astronomical myths" ]
61,259,475
https://en.wikipedia.org/wiki/C17H20ClNO
{{DISPLAYTITLE:C17H20ClNO}} The molecular formula C17H20ClNO (molar mass: 289.80 g/mol) may refer to: Chlodantane Clemeprol Clofedanol
C17H20ClNO
[ "Chemistry" ]
42
[ "Isomerism", "Set index articles on molecular formulas" ]
61,259,685
https://en.wikipedia.org/wiki/C15H11ClO2
{{DISPLAYTITLE:C15H11ClO2}} The molecular formula C15H11ClO2 (molar mass: 258.70 g/mol) may refer to: Cloridarol Fluorenylmethyloxycarbonyl chloride (Fmoc-Cl) Molecular formulas
C15H11ClO2
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,259,800
https://en.wikipedia.org/wiki/C18H22ClNO3
{{DISPLAYTITLE:C18H22ClNO3}} The molecular formula C18H22ClNO3 (molar mass: 335.83 g/mol, exact mass: 335.1288 u) may refer to: 25C-NB3OMe 25C-NB4OMe 25C-NBOMe
C18H22ClNO3
[ "Chemistry" ]
73
[ "Isomerism", "Set index articles on molecular formulas" ]
61,260,201
https://en.wikipedia.org/wiki/List%20of%20metal-organic%20chemical%20vapour%20deposition%20precursors
In chemistry, a precursor is a compound that participates in a chemical reaction and produces another compound, or a chemical substance that gives rise to another, more significant chemical product. For several years, metal-organic compounds have been widely used as molecular precursors for the metal-organic chemical vapor deposition (MOCVD) process. The success of this method is mainly due to its adaptability and to the increasing interest in low-temperature deposition processes. Correlatively, the increasing demand for various thin-film materials for new industrial applications is also a significant reason for the rapid development of MOCVD. Indeed, a wide variety of materials which could not be deposited by the conventional halide CVD process, because halide reagents do not exist or are not volatile, can now be grown by MOCVD. This includes metals and different multi-component materials such as semiconductor and intermetallic compounds as well as carbides, nitrides, oxides, borides, silicides and chalcogenides. Further significant advantages of MOCVD over physical processes are its capability for large-scale production, easier automation, good conformal coverage, selectivity, and the ability to produce metastable materials. Thus, much effort has been aimed at the synthesis of new molecular precursors. A productive overview is provided by several reviews covering fields of MOCVD such as, for instance, the epitaxial growth of semiconductor compounds and the low-temperature deposition of metals. An overview of metal-organic compounds used for the MOCVD growth of different kinds of materials is reported in the following reviews. This is a list of prominent precursor complexes synthesized thus far with properties suited to MOCVD processes. List References Chemical vapour deposition precursors
List of metal-organic chemical vapour deposition precursors
[ "Chemistry" ]
354
[ "Chemical vapour deposition precursors", "Chemical synthesis" ]
61,260,221
https://en.wikipedia.org/wiki/Klebsazolicin
Klebsazolicin (KLB) is a peptide antibiotic encoded in the genome of the gram-negative bacterium Klebsiella pneumoniae subsp. and targeting the prokaryotic ribosome. Klebsazolicin is a ribosomally synthesized and post-translationally modified peptide (RiPP) and a linear azol(in)e-containing peptide (LAP). Discovery The discovery of KLB represents an example of a "genome mining" approach. A cluster of genes encoding the KLB biosynthetic pathway was found in a genome database using the low-level homology of one of its proteins to microcin B17 synthetase. Cloning and expression of the cluster in a heterologous host (Escherichia coli) yielded the active compound. The name given to the compound reflects the bacterium in which the biosynthetic cluster was originally found (Klebs-) and the presence of azole cycles (-azolicin). Structure The structure of KLB was established by a combination of mass-spectrometry and NMR methods. Klebsazolicin is a 23-amino-acid peptide containing four azoles (three thiazoles and an oxazole) and an N-terminal lactamidine ring. The latter is formed by a linkage between the two N-terminal amino acids (serine and glutamine) and is absolutely essential for bioactivity. Activity Klebsazolicin is active against Gram-negative bacteria closely related to Klebsiella, such as Escherichia coli, Klebsiella pneumoniae, and Yersinia pseudotuberculosis. KLB inhibits protein synthesis on the prokaryotic ribosome by binding to and blocking the peptide exit tunnel, thus preventing the passage of the nascent peptide. The activity of the KlpE export pump encoded in the KLB biosynthetic gene cluster confers self-resistance of the producing bacterium to the action of the antibiotic. Biosynthesis As is typical of RiPPs, klebsazolicin is produced in three steps. In the first step, a 47-amino-acid precursor peptide, KlpA, is synthesized using the cellular translation machinery.
Then an N-terminal leader peptide serves as a recognition element for KlpBCD, a heterocyclase-dehydrogenase complex which converts serine and cysteine residues of KlpA into oxazole and thiazole heterocycles. Finally, the leader is cleaved off by the action of cellular proteases such as TldD/E, and at the same time KlpBCD activates the new N-terminus to form lactamidine. Thus, KlpBCD is able to introduce both azole heterocycles and lactamidine linkages, using side chains of Ser/Cys residues and N-terminal amino group as nucleophiles. References Antimicrobial peptides Antibiotics Hexadecapeptides
Klebsazolicin
[ "Biology" ]
622
[ "Antibiotics", "Biocides", "Biotechnology products" ]
61,260,322
https://en.wikipedia.org/wiki/Repository%20of%20Antibiotic%20resistance%20Cassettes
RAC, otherwise known as the Repository of Antibiotic resistance Cassettes, is a database that uses the automatic Attacca annotation system to comprehensively annotate gene cassettes and transposable elements in a streamlined manner and to discover novel gene cassettes. Antibiotic resistance is often due to horizontal gene transfer, which allows resistance to arise through cell-to-cell interaction. This poses a major challenge in the field of antibiotic resistance. Hence the creation of RAC, which provides researchers with a comprehensive and unique tool for documenting resistance due to gene cassettes and transposable elements. Attacca helps discover novel gene cassettes when any of the following three conditions occurs, as described in Tsafnat et al., 2011: the Attacca discovery heuristics identify a gap in a cassette array that could correspond to a novel cassette; a cassette encoding a potentially novel β-lactamase variant is detected; or the type of sequence submitted (e.g. isolated cassette) suggests that a gene cassette should be present but a gene cassette is not found by Attacca. If any of these cases occurs, the gene cassette is sent for review at the Centre for Infectious Diseases and Microbiology, University of Sydney, for further examination. See also Antimicrobial Resistance databases References Antimicrobial resistance organizations Biological databases
Repository of Antibiotic resistance Cassettes
[ "Biology" ]
283
[ "Bioinformatics", "Biological databases" ]
61,260,864
https://en.wikipedia.org/wiki/Proposed%20wildlife%20crossings%20in%20Jackson%2C%20Wyoming
Many animal migration patterns are still intact in the greater Jackson area due to the large quantity of protected land. Large animals such as elk, mule deer, and pronghorn have separate winter and summer habitats and migrate in the spring and fall. Elk, moose, and other large animals also converge in the low-lying areas around Jackson during the winter months to escape deep snow at higher elevations. All of this movement increases the likelihood of wildlife-vehicle collisions on roads. Jackson and Teton County have recorded over 5,000 wildlife–vehicle collisions since 1990. As a result, the Western Transportation Institute (WTI) was tasked with completing a report to examine the feasibility of a wildlife-crossing master plan. In May 2018, the WTI and Teton County published the Teton County Wildlife Crossing Master Plan. The master plan studied and identified seven highway segments suitable for wildlife crossings and other vehicle-collision mitigation. These highways include U.S. Route 191 (US 191), US 26, US 89, US 26, Wyoming Highway 22 (WYO 22) and WYO 390. Overview From 2010 to 2018, 43 animal species were recorded in Teton County's wildlife–vehicle collision database. Elk (327 collisions), moose (143), and mule deer (1,427) had the largest numbers of collisions in the Jackson area. The WTI examined crash and carcass data when it developed the Teton County Wildlife Crossing Master Plan. Although the data tends to reflect large- and medium-sized collisions, safety and conservation concerns extend beyond these documented encounters; ecological concerns extend to aquatic habitats and fish populations. Although traffic and animal-migration patterns vary by season, there is a constant presence of vehicular traffic and large mammals in and around Jackson.
Each of the seven identified highways averages 10,000 vehicles per day, with volumes varying from 23,000 vehicles per day at peak season on one road to 1,500 vehicles per day on the lowest-volume stretch of highway. Human safety and economic impact Vehicle collisions with large mammals can result in property damage, injury and death. The Wyoming Department of Transportation (WYDOT) estimates that a driver has a 79-percent chance of colliding with wildlife on Wyoming roads. WYDOT also estimates that these collisions result in $50 million in damage per year (vehicle damage and personal-injury costs). WYDOT estimates that the average animal collision in Wyoming will cost a driver $11,600. In a 2003 study by the Jackson Hole Wildlife Foundation, it was estimated that wildlife collisions in Teton County cost $1.2 million per year. More recently, the Jackson Hole Wildlife Foundation completed an economic impact assessment for 2016-2017 that found total economic loss from wildlife collisions to be $3,172,837 in Teton County. The Nature Conservancy (TNC) estimates that wildlife collisions cost Wyoming drivers $25 million in injury and property damage, and $24 million to taxpayers in lost wildlife productivity. TNC also estimates that Wyoming wildlife collision costs involving mule deer, elk and moose are $10,500, $25,319, and $37,873, respectively. A number of studies document the economic value of wildlife, including the major local economic drivers of tourism, recreation and hunting. Studies indicate that there are approximately 211 fatalities and 29,000 human injuries in the United States each year due to vehicle collisions with wildlife.
In addition to deer, elk and moose, animals with high conservation value in the area include river otter, lynx, grizzly bear, and bison. According to Anthony Clevenger of the WTI, "generalizations about the conservation value of habitat corridors remain elusive because of the species-specific nature of the problem". Due to the complex nature of habitats in and around Jackson, the Teton County Wildlife Crossing Master Plan evaluates the biological conservation value of each roadway based on the migration patterns of large and medium-sized mammals most likely to encounter vehicle traffic on a specific route. The Jackson Hole Wildlife Foundation (JHWF) demonstrated the urgency of this problem by tracking moose-specific deaths due to vehicle collisions in the Jackson area. JHWF tracked 50 moose deaths due to collisions over a decade, and compared that to the estimated 70 moose that live near town. Their research indicates that this moose population could be at risk due to the frequency of collisions. JHWF believes that it "can save an average of 190 moose, 210 elk and 360 deer every 20 years" with the construction of wildlife crossings. In assessing biological conservation value, the concept of ecological permeability becomes an important element of the master plan. Ecological permeability has broad scientific consensus. Although vehicle–wildlife collisions are the most acute aspect of biological conservation with respect to wildlife overpasses, the ability for species to migrate and interact with the larger ecosystem is considered crucial for the health and maintenance of species and ecosystem. Research also demonstrates that facilitating permeability requires that wildlife crossings be species-specific; for example, ungulates prefer overpasses and carnivores prefer underpasses. The Teton County Master Plan takes these biological-conservation and permeability factors into account in its analysis and planning process. 
Proposed locations The Teton County Wildlife Crossing Master Plan has identified 12 crossing priorities, and has ranked the locations based on eight criteria: land security, political viability, key-partner support, technical feasibility, long-term solution, human-safety impact, wildlife-mortality impact, and habitat connectivity value. Among other data sets, the master-plan research team used extensive nature mapping, collision data, WYDOT traffic data, and migration data for mule deer, elk and moose to compile the rankings. Research indicates that combining wildlife crossings with other mitigation measures can result in an 83-percent reduction in collisions. Without complex planning and the integration of multiple measures, an average mitigation reduces collisions by 40 percent. Some advocates cite statistics indicating that a combination of fencing and crossing structures can reduce collisions by 90 percent. Recommended crossing locations are ranked as follows: Highway 22 / 390 Intersection / Snake River Bridge Highway 22 Spring Creek to Bar Y Camp Creek (near-term, non-structural measures) Camp Creek (Hoback Junction to Hoback Canyon) North of Jackson to Fish Hatchery South of Jackson to Rafter J Horse Creek to Hoback Junction Broadway (Flat Creek Bridge near five-way intersection to High School Road) Teton Pass west side (WY 22 and ID 33) Game Creek, Dog Creek (South of Highway 89) Blackrock / Togwotee WY 390 north of the Highway 22 / 390 intersection. The Teton County Wildlife Crossing Master Plan evaluates and recommends the site-specific use of warning signs and animal-detection systems, speed management, wildlife fencing, wildlife crossings (overpasses and underpasses), and multiple-use structures. Wildlife-collision mitigation systems vary in complexity, cost and design. A number of studies indicate a high success rate when mitigation measures are designed with target species in mind, rather than being applied broadly to a region. 
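The reduction rates above can be combined with the county's loss estimate into a back-of-the-envelope figure. The sketch below is an illustration using numbers quoted in this article (the 83-, 40- and 90-percent reduction rates and the Jackson Hole Wildlife Foundation's 2016–2017 estimate of $3,172,837 in annual collision losses), not an official projection; it also assumes, simplistically, that dollar losses fall in proportion to collision counts.

```python
ANNUAL_LOSS_USD = 3_172_837  # JHWF estimate for Teton County, 2016-2017

# Collision-reduction rates cited in the master-plan discussion above.
scenarios = {
    "crossings + combined mitigation": 0.83,
    "average single mitigation": 0.40,
    "fencing + crossing structures": 0.90,
}

for name, reduction in scenarios.items():
    savings = ANNUAL_LOSS_USD * reduction
    print(f"{name}: ~${savings:,.0f} avoided per year")
```

Even the mid-range scenario suggests losses avoided on the order of a million dollars a year, which is the kind of comparison the master plan's cost-benefit ranking rests on.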
Other studies indicate the highest rate of success when several mitigation measures, such as crossing structures and fencing, are used in conjunction. Wildlife warning signs and animal-detection systems Studies indicate that wildlife warning signs can reduce collisions by nine to 50 percent, and Teton County has a variety of warning signs in place. Animal-detection systems that alert drivers when an animal is active in the area can reduce collisions by 33 to 97 percent. Speed management Speed limits vary along the target roads; variable speed limits, depending on daylight, are set along two stretches of highway. Since research shows that most collisions occur at dawn and dusk, Teton County has implemented variable speed limits that decrease driving speeds at night. The master plan does not recommend a general reduction in speed limits, however, due to a possible negative impact on safe highway driving. Wildlife fences The master plan cites four studies indicating that wildlife fences are "one of the most effective and robust mitigation measures to reduce collisions with large animals." Wildlife fencing can keep animals off the road and funnel them to safe crossing locations. Best practices should be used to ensure that fencing is species-specific, minimizing larger ecological impacts. Wildlife crossings There are numerous types of wildlife crossings that can be implemented in Teton County. These include overpasses, open span bridges, underpasses and pipes. Similar to wildlife fencing, the success of crossings depends on their design process and how they relate to target species. The master plan used existing research to make recommendations based on animal type, including six ungulate species and eight carnivore species. Multiple-use structures Multiple-use structures facilitate the movement of humans and wildlife.
Although the master plan evaluated their feasibility, it does not recommend them due to their potential to increase the number of vehicle-wildlife collisions. Funding Teton County allocated $150,000 in the 2019 fiscal year budget to jump-start early planning and design for wildlife crossing and other mitigation. Although the county has used the master plan to prioritize projects, it does not have a public assessment of their total cost. In addition to the Teton County plan, WYDOT has completed a project along a stretch of US Highway 191. The project built six underpasses, two overpasses and installed fencing to enhance safety during the annual pronghorn-sheep migration. WYDOT has allocated $3.5 million for a crossing at the HWY 22 / 390 intersection, with an estimated total project cost of $7.5 million. It also plans to set aside $900,000 to extend an animal underpass along the same stretch of road. To facilitate the construction of wildlife crossings closer to Jackson, advocacy groups lobbied to include a $15 million wildlife-crossing program in the list of items funded by a special-purpose excise tax (SPET). The SPET, which will be voted on in November 2019, would enable Jackson and Teton County to begin the construction of wildlife crossings and supplement WYDOT funding for specific projects. On July 15, 2019, Teton County and Jackson officials approved a ballot initiative that allows residents to vote for 10 projects totaling $77 million. The language approved by officials allows citizens to select any or all of the initiatives, which would be funded by a one-percent sales tax. After a brief discussion about bundling the 10 projects into an all-or-nothing ballot measure, elected officials opted to let voters choose each item independently. 
In the vote removing the "bundling" language, officials elected to add a $10 million initiative for wildlife crossings ($5 million less than the amount proposed by advocates) within the list of 10 projects put to voter approval in November 2019. References Conservation projects in the United States Ecological connectivity Tunnels in the United States Bridges in Wyoming Road traffic management Ecological restoration Buildings and structures in Teton County, Wyoming
Proposed wildlife crossings in Jackson, Wyoming
[ "Chemistry", "Engineering" ]
2,294
[ "Ecological restoration", "Environmental engineering" ]
61,260,930
https://en.wikipedia.org/wiki/Augmented%20Analytics
Augmented Analytics is an approach to data analytics that employs machine learning and natural language processing to automate analysis processes normally done by a specialist or data scientist. The term was introduced in 2017 by Rita Sallam, Cindi Howson, and Carlie Idoine in a Gartner research paper. Augmented analytics is based on business intelligence and analytics. In the graph extraction step, data from different sources are investigated. Defining Augmented Analytics Machine Learning – a systematic computing method that uses algorithms to sift through data to identify relationships, trends, and patterns. It is a process that allows algorithms to dynamically learn from data instead of having a set base of programmed rules. Natural language generation (NLG) – a software capability that takes unstructured data and translates it into plain, readable English. Automating Insights – using machine learning algorithms to automate data analysis processes. Natural Language Query – enabling users to query data using business terms that are either typed into a search box or spoken. Data Democratization Data democratization is the process of democratizing data access in order to relieve data congestion and get rid of any sense of data "gatekeepers". This process must be implemented alongside a method for users to make sense of the data. This process is used in hopes of speeding up company decision making and uncovering opportunities hidden in data. There are three aspects to democratising data: Data Parameterisation and Characterisation. Data Decentralisation using an OS of blockchain and DLT technologies, as well as an independently governed secure data exchange to enable trust. Consent Market-driven Data Monetisation. When it comes to connecting assets, there are two features that will accelerate the adoption and usage of data democratisation: decentralized identity management and business data object monetization of data ownership. 
It enables multiple individuals and organizations to identify, authenticate, and authorize participants and organizations, enabling them to access services, data or systems across multiple networks, organizations, environments, and use cases. It empowers users and enables a personalized, self-service digital onboarding system so that users can self-authenticate without relying on a central administration function to process their information. Simultaneously, decentralized identity management ensures the user is authorized to perform actions subject to the system’s policies based on their attributes (role, department, organization, etc.) and/or physical location. Use cases Agriculture – Farmers collect data on water use, soil temperature, moisture content and crop growth; augmented analytics can be used to make sense of this data and possibly identify insights that the user can then use to make business decisions. Smart Cities – Many cities across the United States, known as smart cities, collect large amounts of data on a daily basis. Augmented analytics can be used to simplify this data in order to increase effectiveness in city management (transportation, natural disasters, etc.). Analytic Dashboards – Augmented analytics has the ability to take large data sets and create highly interactive and informative analytical dashboards that assist in many organizational decisions. Augmented Data Discovery – Using an augmented analytics process can assist organizations in automatically finding, visualizing and narrating potentially important data correlations and trends. Data Preparation – Augmented analytics platforms have the ability to take large amounts of data and organize and "clean" the data in order for it to be usable for future analyses. Business – Businesses collect large amounts of data daily. Some examples of types of data collected in business operations include sales data, consumer behavior data, and distribution data. 
An augmented analytics platform provides access to analysis of this data, which could be used in making business decisions. References Data analysis Machine learning algorithms Natural language processing
Augmented Analytics
[ "Technology" ]
744
[ "Natural language processing", "Natural language and computing" ]
61,263,057
https://en.wikipedia.org/wiki/C43H52N4O5
{{DISPLAYTITLE:C43H52N4O5}} The molecular formula C43H52N4O5 (molar mass: 704.912 g/mol, exact mass: 704.3938 u) may refer to: Conodurine Voacamine Molecular formulas
C43H52N4O5
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
66,225,357
https://en.wikipedia.org/wiki/PS-2000
The PS-2000 (ПС-2000, , reconfigurable system) was a Soviet supercomputer built in the 1980s. History In the middle of the 1970s, it became apparent in the USSR that the computing power available to process geophysics data, real-time space-probe data, mineral prospecting, weather forecasting, etc. was far from sufficient, and that a new class of supercomputers, hundreds of times more powerful than the existing installed systems, was needed. The development of ПС-2000 began in 1978, as a joint project between the Institute of Control Problems (IPU) in Moscow and the Impul's Scientific Production Association in Severodonetsk, under the supervision of Il’ya Itenberg and Vladislav Rezanov of Impul's and Iveri Prangishvili of IPU. The computer entered production in 1981, and was manufactured in various configurations until 1988. During the 1980s and the 1990s, the Roscosmos mission control computing complex was organized around an Elbrus 2 supercomputer, with a PS-2000 as a front-end processing supercomputer for telemetry data. Architecture The PS-2000 is a SIMD-type supercomputer. It consists of 8 to 64 processing elements (PEs), clocked at 3 MHz, which are connected to each other under the control of a common command unit (OUU), each with 12 or 48 KB of memory. Eight processing elements are grouped into a processing device (UO). Each PE uses 24-bit registers, in fixed- or floating-point format. An addition takes 0.96 μs and a multiplication takes 1.6 μs, giving a theoretical peak performance of 200 MIPS for a full-configuration system. The computer is built from three cabinet types: Base module: one UO and one OUU Extension module 1: one UO (8 PEs) Extension module 2: two UO (16 PEs) In the simplest configuration, the supercomputer consists of only one cabinet. In the full configuration, with 8 UO, the supercomputer consists of 5 cabinets, organised in a double-Y shape. 
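The quoted peak figure can be sanity-checked from the clock rate. The sketch below assumes, as a simplification, one short operation per processing element per clock cycle; the slower 0.96 μs addition corresponds to a lower sustained rate.

```python
# Back-of-envelope check of the PS-2000 peak rate, assuming one short
# operation per processing element (PE) per 3 MHz clock cycle.
pes = 64                  # PEs in a full configuration
clock_hz = 3_000_000      # 3 MHz clock
peak_mips = pes * clock_hz / 1_000_000   # 192, close to the quoted 200 MIPS

# Sustained rate if every operation were the slower 0.96 us addition:
add_time_s = 0.96e-6
add_mips = pes / add_time_s / 1_000_000  # roughly 67 million additions/s
```

The gap between the two numbers suggests the 200 MIPS figure counts the machine's shortest operations rather than full-length additions.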
References External links Russian virtual computer museum Soviet high-speed computers : the new generation Advanced Architecture Computers 1989 Soviet computer systems Supercomputers
PS-2000
[ "Technology" ]
485
[ "Supercomputers", "Supercomputing", "Computer systems", "Soviet computer systems" ]
66,226,078
https://en.wikipedia.org/wiki/N-%28n-Butyl%29thiophosphoric%20triamide
N-(n-Butyl)thiophosphoric triamide (NBPT) is the organophosphorus compound with the formula SP(NH2)2(NHC4H9). It is an amide of thiophosphoric acid. A white solid, NBPT is an "enhanced efficiency fertilizer", intended to limit the release of nitrogen-containing gases following fertilization. Regarding its chemical structure, the molecule features tetrahedral phosphorus bonded to sulfur and three amido groups. Use NBPT functions as an inhibitor of the enzyme urease. Urease, pervasive in soil microorganisms, converts urea into ammonia, which is susceptible to volatilization if produced faster than it can be utilized by plants. Approximately 0.5% by weight NBPT is mixed with the urea. See also Phenyl phosphorodiamidate, another urease inhibitor References Thiophosphoryl compounds Soil improvers Fertilizers
N-(n-Butyl)thiophosphoric triamide
[ "Chemistry" ]
217
[ "Fertilizers", "Soil chemistry", "Functional groups", "Thiophosphoryl compounds" ]
66,226,706
https://en.wikipedia.org/wiki/N-Acetyl-L-tyrosine
{{DISPLAYTITLE:N-Acetyl-L-tyrosine}} N-Acetyl-L-tyrosine is an N-acetyl derivative of the amino acid tyrosine. It is used for parenteral nutrition and as a dietary supplement. See also Acetylcarnitine Acetylcysteine N-Acetylserotonin References Acetamides Amino acid derivatives
N-Acetyl-L-tyrosine
[ "Chemistry" ]
91
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
66,227,179
https://en.wikipedia.org/wiki/C.%20Frederick%20Koelsch
Charles Frederick Koelsch (31 January 1907 – 24 December 1999) was an American organic chemist who spent his faculty career at the University of Minnesota. Education and academic career Koelsch was born in Boise, Idaho in 1907 in a family of German descent. He attended the University of Wisconsin and earned his bachelor's degree in 1928 and his Ph.D. from the same institution in 1931, working under the supervision of Samuel M. McElvain. After a postdoctoral fellowship at Harvard University with Elmer Peter Kohler, Koelsch was recommended for a position at the University of Minnesota by Lee Irvin Smith. He joined the faculty there as an instructor in 1932 and became an assistant professor in 1934. Koelsch was awarded the ACS Award in Pure Chemistry in 1934. He advanced to associate professor in 1937 and full professor in 1946. He remained at the University of Minnesota until his retirement, assuming professor emeritus status, in 1973. Through much of his academic career, Koelsch also served as an industry consultant, working first with Smith, Kline & French and later with Sterling Drug and Union Carbide. During his work at Harvard, Koelsch attempted to publish a paper describing an unusually stable radical compound, but it was rejected at the time on the grounds that the compound's properties were unlikely to describe a radical. Subsequent experimental evidence and quantum mechanics calculations suggested his interpretation of the original experiment was correct, resulting in the publication of the paper nearly 25 years after the original experiments. The compound, 1,3-bisdiphenylene-2-phenylallyl (BDPA), is now often referred to as the "Koelsch radical". Personal life Koelsch married his wife Helen in 1938 and the couple had three children. He was a ham radio enthusiast. He died in Rochester, Minnesota in 1999. 
References 20th-century American chemists American organic chemists University of Minnesota faculty University of Wisconsin–Madison alumni 1907 births 1999 deaths People from Boise, Idaho Scientists from Idaho
C. Frederick Koelsch
[ "Chemistry" ]
413
[ "Organic chemists", "American organic chemists" ]
66,228,158
https://en.wikipedia.org/wiki/Numerical%20analytic%20continuation
In many-body physics, the problem of analytic continuation is that of numerically extracting the spectral density of a Green function given its values on the imaginary axis. It is a necessary post-processing step for calculating dynamical properties of physical systems from Quantum Monte Carlo simulations, which often compute Green function values only at imaginary times or Matsubara frequencies. Mathematically, the problem reduces to solving a Fredholm integral equation of the first kind with an ill-conditioned kernel. As a result, it is an ill-posed inverse problem with no unique solution, in which small noise on the input leads to large errors in the unregularized solution. There are different methods for solving this problem, including the maximum entropy method, the average spectrum method and Padé approximation methods. Examples A common analytic continuation problem is obtaining the spectral function \(A(\omega)\) at real frequencies from the Green function values at Matsubara frequencies by numerically inverting the integral equation \[ G(i\omega_n) = \int_{-\infty}^{\infty} d\omega \, \frac{A(\omega)}{i\omega_n - \omega}, \] where \(\omega_n = (2n+1)\pi/\beta\) for fermionic systems or \(\omega_n = 2n\pi/\beta\) for bosonic ones, and \(\beta = 1/T\) is the inverse temperature. This relation is an example of a Kramers–Kronig relation. The spectral function can also be related to the imaginary-time Green function by applying the inverse Fourier transform to the above equation, with \( G(\tau) = \frac{1}{\beta} \sum_n e^{-i\omega_n \tau} \, G(i\omega_n) \). Evaluating the summation over Matsubara frequencies gives the desired relation \[ G(\tau) = -\int_{-\infty}^{\infty} d\omega \, \frac{e^{-\tau\omega}}{1 \pm e^{-\beta\omega}} \, A(\omega), \] where the upper sign is for fermionic systems and the lower sign is for bosonic ones. Another example of the analytic continuation is calculating the optical conductivity \(\sigma(\omega)\) from the current-current correlation function values \(\Pi(i\omega_n)\) at Matsubara frequencies. The two are related as follows: \[ \Pi(i\omega_n) = \int_0^{\infty} \frac{d\omega}{\pi} \, \frac{2\omega^2}{\omega_n^2 + \omega^2} \, \sigma(\omega). \] Software The Maxent Project: Open source utility for performing analytic continuation using the maximum entropy method. Spektra: Free online tool for performing analytic continuation using the average spectrum method. SpM: Sparse modeling tool for analytic continuation of imaginary-time Green’s function. 
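The ill-conditioning of the underlying kernel is easy to demonstrate numerically. The sketch below discretizes the fermionic imaginary-time kernel \(e^{-\tau\omega}/(1 + e^{-\beta\omega})\) on small, arbitrarily chosen grids and computes the condition number of the resulting matrix; directly inverting such a matrix amplifies any noise on the input enormously, which is why regularized methods such as maximum entropy are used instead.

```python
import numpy as np

beta = 10.0                          # inverse temperature (arbitrary choice)
taus = np.linspace(0.0, beta, 64)    # imaginary-time grid
omegas = np.linspace(-5.0, 5.0, 64)  # real-frequency grid

# Fermionic kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega))
K = np.exp(-np.outer(taus, omegas)) / (1.0 + np.exp(-beta * omegas))

# The condition number is astronomically large, so the discretized
# Fredholm equation G = K @ A cannot be inverted without regularization.
condition_number = np.linalg.cond(K)
```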
See also Analytic continuation Analytic continuation along a curve Fredholm integral equation Green's function Kramers–Kronig relations Quantum Monte Carlo References Mathematical physics Quantum Monte Carlo
Numerical analytic continuation
[ "Physics", "Chemistry", "Mathematics" ]
400
[ "Quantum chemistry", "Applied mathematics", "Theoretical physics", "Quantum Monte Carlo", "Mathematical physics" ]
66,228,382
https://en.wikipedia.org/wiki/Tetrachlorodinitroethane
Tetrachlorodinitroethane is a chlorinated nitroalkane produced by nitration of tetrachloroethylene with dinitrogen tetroxide or fuming nitric acid. It is a powerful lachrymatory agent and pulmonary agent that is six times more toxic than chloropicrin. Tetrachlorodinitroethane may be used as a fumigant. See also Chloropicrin Trifluoronitrosomethane Trichloronitrosomethane References Nitro compounds Organochlorides Lachrymatory agents Pulmonary agents Fumigants
Tetrachlorodinitroethane
[ "Chemistry" ]
136
[ "Lachrymatory agents", "Pulmonary agents", "Chemical weapons" ]
66,228,478
https://en.wikipedia.org/wiki/Cost%20distance%20analysis
In spatial analysis and geographic information systems, cost distance analysis or cost path analysis is a method for determining one or more optimal routes of travel through unconstrained (two-dimensional) space. The optimal solution is that which minimizes the total cost of the route, based on a field of cost density (cost per linear unit) that varies over space due to local factors. It is thus based on the fundamental geographic principle of friction of distance. It is an optimization problem with multiple deterministic algorithm solutions, implemented in most GIS software. The various problems, algorithms, and tools of cost distance analysis operate over an unconstrained two-dimensional space, meaning that a path could be of any shape. Similar cost optimization problems can also arise in a constrained space, especially a one-dimensional linear network such as a road or telecommunications network. Although they are similar in principle, the problems in network space require very different (usually simpler) algorithms to solve, largely adopted from graph theory. The collection of GIS tools for solving these problems is called network analysis. History Humans seem to have an innate desire to travel with minimal effort and time. Historic, even ancient, roads show patterns similar to what modern computational algorithms would generate, traveling straight across flat spaces, but curving around mountains, canyons, and thick vegetation. However, it was not until the 20th century that geographers developed theories to explain this route optimization, and algorithms to reproduce it. 
In 1957, during the Quantitative revolution in Geography, with its propensity to adopt principles or mathematical formalisms from the "hard" sciences (known as social physics), William Warntz used refraction as an analogy for how minimizing travel cost will make transportation routes change direction at the boundary between two landscapes with very different friction of distance (e.g., emerging from a forest into a prairie). His principle of "parsimonious movement," changing direction to minimize cost, was widely accepted, but the refraction analogy and mathematics (Snell's law) were not, largely because they do not scale well to normally complex geographic situations. Warntz and others then adopted another analogy that proved much more successful in the common situation where travel cost varies continuously over space, by comparing it to terrain. They compared the cost rate (i.e., cost per unit distance, the inverse of velocity if the cost is time) to the slope of a terrain surface (i.e., elevation change per unit distance), both being mathematical derivatives of an accumulated function or field: total elevation above a vertical datum (sea level) in the case of terrain. Integrating the cost rate field from a given starting point would create an analogous surface of total accumulated cost of travel from that point. In the same way that a stream follows the path of least resistance downhill, the streamline on the cost accumulation surface from any point "down" to the source will be the minimum-cost path. Additional lines of research in the 1960s further developed the nature of the cost rate field as a manifestation of the concept of friction of distance, studying how it was affected by various geographic features. At the time, this solution was only theoretical, lacking the data and computing power for the continuous solution. 
Raster GIS provided the first feasible platform for implementing the theoretical solution by converting the continuous integration into a discrete summation procedure. Dana Tomlin implemented cost distance analysis in his Map Analysis Package by 1986, and Ronald Eastman added it to IDRISI by 1989, with a more efficient "pushbroom" cost accumulation algorithm. Douglas (1994) further refined the accumulation algorithm, which is basically what is implemented in most current GIS software. Cost raster The primary data set used in cost distance analysis is the cost raster, sometimes called the cost-of-passage surface, the friction image, the cost-rate field, or cost surface. In most implementations, this is a raster grid, in which the value of each cell represents the cost (i.e., expended resources, such as time, money, or energy) of a route crossing the cell in a horizontal or vertical direction. It is thus a discretization of a field of cost rate (cost per linear unit), a spatially intensive property. This cost is a manifestation of the principle of friction of distance. A number of different types of cost may be relevant in a given routing problem: Travel cost, the resource expenditure required to move across the cell, usually time or energy/fuel. Construction cost, the resources (usually monetary) required to build the infrastructure that makes travel possible, such as roads, pipes, and cables. While some construction costs are constant (e.g., paving material), others are spatially variant, such as property acquisition and excavation. Environmental impacts, the negative effects on the natural or human environment caused by the infrastructure or the travel along it. For example, building an expressway through a residential neighborhood or a wetland would incur a high political cost (in the form of environmental impact assessments, protests, lawsuits, etc.). 
Some of these costs are easily quantifiable and measurable, such as transit time, fuel consumption, and construction costs, thus naturally lending themselves to computational solutions. That said, there may be significant uncertainty in predicting the cost prior to implementing the route. Other costs are much more difficult to measure due to their qualitative or subjective nature, such as political protest or ecological impact; these typically require operationalization through the creation of a scale. In many situations, multiple types of cost may be simultaneously relevant, and the total cost is a combination of them. Because different costs are expressed in different units (or, in the case of scales, no units at all), they usually cannot be directly summed, but must be combined by creating an index. A common type of index is created by scaling each factor to a consistent range (say, [0,1]), then combining them using weighted linear combination. An important part of the creation of an index model like this is calibration: adjusting the parameters of the formula(s) to make the modeled relative cost match real-world costs, using methods such as the analytic hierarchy process. The index model formula is typically implemented in a raster GIS using map algebra tools from raster grids representing each cost factor, resulting in a single cost raster grid. Directional cost One limitation of the traditional method is that the cost field is isotropic or omni-directional: the cost at a given location does not depend on the direction of traversal. This is appropriate in many situations, but not others. For example, if one is flying in a windy location, an airplane flying in the direction of the wind incurs a much lower cost than an airplane flying against it. Some research has been done on extending cost distance analysis algorithms to incorporate directional cost, but it is not yet widely implemented in GIS software. IDRISI has some support for anisotropy. 
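The weighted-linear-combination step described above can be sketched with ordinary array arithmetic. All factor names, values, and weights below are hypothetical; a real workflow would apply the same formula with map-algebra tools over full rasters.

```python
import numpy as np

# Hypothetical cost factors, each already rescaled to the range [0, 1]
slope      = np.array([[0.1, 0.8],
                       [0.3, 0.5]])
land_value = np.array([[0.9, 0.2],
                       [0.4, 0.6]])
wetland    = np.array([[0.0, 1.0],
                       [0.0, 0.0]])

# Weights from a calibration step (e.g. an analytic hierarchy process);
# chosen here so that they sum to 1
weights = {"slope": 0.5, "land_value": 0.3, "wetland": 0.2}

# Weighted linear combination, cell by cell, yielding the cost raster
cost = (weights["slope"] * slope
        + weights["land_value"] * land_value
        + weights["wetland"] * wetland)
```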
Least-cost-path algorithm The most common cost distance task is to determine the single path through the space between a given source location and a destination location that has the least total accumulated cost. The typical solution algorithm is a discrete raster implementation of the cost integration strategy of Warntz and Lindgren, a deterministic optimization closely related to Dijkstra's shortest-path algorithm. Inputs: cost field raster, source location, destination location (most implementations can solve for multiple sources and destinations simultaneously) Accumulation: Starting at the source location, compute the lowest total cost needed to reach every other cell in the grid. Although there are several algorithms, such as those published by Eastman and Douglas, they generally follow a similar strategy. This process also creates, as an important byproduct, a second raster grid usually called the backlink grid (Esri) or movement direction grid (GRASS), in which each cell has a direction code (0-7) representing which of its eight neighbors had the lowest cost. Find a cell that is adjacent to at least one cell that already has an accumulated cost assigned (initially, this is only the source cell) Determine which neighbor has the lowest accumulated cost. Encode the direction from the target to the lowest-cost neighbor in the backlink grid. Add the cost of the target cell (or an average of the costs of the target and neighbor cells) to the neighbor's accumulated cost, to create the accumulated cost of the target cell. If the neighbor is diagonal, the local cost is multiplied by √2 to account for the longer crossing. The algorithm must also take into account that indirect routes may have lower cost, often using a hash table to keep track of temporary cost values along the expanding fringe of computation that can be reconsidered. Repeat the procedure until all cells are assigned. 
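The accumulation and drain steps can be sketched as a Dijkstra-style traversal over the grid. This is a simplified illustration rather than the exact pushbroom algorithms of Eastman or Douglas: the step cost is taken as the average of the two cells' cost rates, scaled by √2 for diagonal moves, and the backlink grid stores (row, column) offsets instead of 0-7 direction codes.

```python
import heapq
import math

def cost_accumulate(cost, source):
    """Accumulate least travel cost from source over a cost raster.

    cost: 2D list of per-cell cost rates; source: (row, col).
    Returns (accumulated-cost grid, backlink grid of offsets toward source).
    """
    rows, cols = len(cost), len(cost[0])
    acc = [[math.inf] * cols for _ in range(rows)]
    backlink = [[None] * cols for _ in range(rows)]
    acc[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]  # priority queue ordered by accumulated cost
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > acc[r][c]:
            continue  # stale entry; a cheaper route was already found
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # local cost = average of the two cell cost rates,
                # times sqrt(2) for the longer diagonal crossing
                step = 0.5 * (cost[r][c] + cost[nr][nc])
                if dr != 0 and dc != 0:
                    step *= math.sqrt(2)
                if d + step < acc[nr][nc]:
                    acc[nr][nc] = d + step
                    backlink[nr][nc] = (-dr, -dc)  # offset back toward source
                    heapq.heappush(heap, (acc[nr][nc], (nr, nc)))
    return acc, backlink

def drain(backlink, destination):
    """Trace the least-cost path from destination back to the source."""
    r, c = destination
    path = [(r, c)]
    while backlink[r][c] is not None:
        dr, dc = backlink[r][c]
        r, c = r + dr, c + dc
        path.append((r, c))
    return path[::-1]  # source-to-destination order
```

On a 3×3 raster with an expensive center cell, the drained path routes around the center; a corridor field like the one described later can be obtained by adding the accumulation grids computed from the source and from the destination.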
Drain: In keeping with the terrain analogy, trace the optimal route from the given destination back to the source like a stream draining away from a location. At its most basic, this is accomplished by starting at the destination cell, moving in the direction indicated in the backlink grid, then repeating for the next cell, and so on until the source is reached. Recent software adds some improvements, such as looking across three or more cells to recognize straight lines at angles other than the eight neighbor directions. For example, the r.walk function in GRASS can recognize the "knight's move" (one cell straight, then one cell diagonal) and draw a straight line bypassing the middle cell. Corridor analysis A slightly different version of the least-cost path problem, which could be considered a fuzzy version of it, is to look for corridors more than one cell in width, thus providing some flexibility in applying the results. Corridors are commonly used in transportation planning and in wildlife management. The solution to this problem is to compute, for every cell in the study space, the total accumulated cost of the optimal path between a given source and destination that passes through that cell. Thus, every cell in the optimal path derived above would have the same minimum value. Cells near this path would be reached by paths deviating only slightly from the optimal path, so they would have relatively low cost values, collectively forming a corridor with fuzzy edges as more distant cells have increasing cost values. The algorithm derives this corridor field by generating two cost accumulation grids: one using the source as described above, and a second using the destination as the source. The two grids are then added using map algebra. 
This works because for each cell, the optimal source-destination path passing through that cell is the optimal path from that cell to the source, added to the optimal path from that cell to the destination. This can be accomplished using the cost accumulation tool above, along with a map algebra tool, although ArcGIS provides a Corridor tool that automates the process. Cost-based allocation Another use of the cost accumulation algorithm is to partition space among multiple sources, with each cell assigned to the source it can reach with the lowest cost, creating a series of regions in which each source is the "nearest". In the terrain analogy, these would correspond to watersheds (one could thus call these "cost-sheds," but this term is not in common usage). They are directly related to a Voronoi diagram, which is essentially an allocation over a space with constant cost. They are also conceptually (if not computationally) similar to location-allocation tools for network analysis. A cost-based allocation can be created using two methods. The first is to use a modified version of the cost accumulation algorithm, which substitutes the backlink grid for an allocation grid, in which each cell is assigned the source identifier of its lowest-cost neighbor, causing the domain of each source to gradually grow until they meet each other. This is the approach taken in ArcGIS Pro. The second solution is to first run the basic accumulation algorithm, then use the backlink grid to determine the source into which each cell "flows." GRASS GIS uses this approach; in fact, the same tool is used as for computing watersheds from terrain. Implementations Cost distance tools are available in most raster GIS software: GRASS GIS (often bundled into QGIS), with separate accumulation (r.cost) and drain (r.drain) functions ArcGIS Desktop and ArcGIS Pro, with separate accumulation (Cost Distance) and drain (Cost Path) geoprocessing tools, as well as Corridor generation. 
Recently, starting with ArcGIS Pro version 2.5, a new set of cost distance tools was introduced, using more advanced algorithms with more flexible options. TerrSet (formerly Idrisi) has several tools, implementing a variety of algorithms to solve different kinds of cost distance problems, including anisotropic (directional) cost. Applications Cost distance analysis has found applications in a wide range of geography-related disciplines including archaeology and landscape ecology. See also Distance decay Tobler's first law of geography Tobler's second law of geography Tobler's hiking function Travelling salesman problem Canadian traveller problem Traveling purchaser problem Vehicle routing problem References External links Distance toolset documentation for Esri ArcGIS Pro Cost Surface tools in GRASS GIS Geographic information systems
Cost distance analysis
[ "Physics", "Technology" ]
2,688
[ "Spatial analysis", "Information systems", "Space", "Spacetime", "Geographic information systems" ]
66,228,643
https://en.wikipedia.org/wiki/Elementary%20Number%20Theory%2C%20Group%20Theory%20and%20Ramanujan%20Graphs
Elementary Number Theory, Group Theory and Ramanujan Graphs is a book in mathematics whose goal is to make the construction of Ramanujan graphs accessible to undergraduate-level mathematics students. In order to do so, it covers several other significant topics in graph theory, number theory, and group theory. It was written by Giuliana Davidoff, Peter Sarnak, and Alain Valette, and published in 2003 by the Cambridge University Press, as volume 55 of the London Mathematical Society Student Texts book series. Background In graph theory, expander graphs are undirected graphs with high connectivity: every small-enough subset of vertices has many edges connecting it to the remaining parts of the graph. Sparse expander graphs have many important applications in computer science, including the development of error correcting codes, the design of sorting networks, and the derandomization of randomized algorithms. For these applications, the graph must be constructed explicitly, rather than merely having its existence proven. One way to show that a graph is an expander is to study the eigenvalues of its adjacency matrix. For a d-regular graph, these are real numbers in the interval [−d, d], and the largest eigenvalue (corresponding to the all-1s eigenvector) is exactly d. The spectral expansion of the graph is defined from the difference between the largest and second-largest eigenvalues, the spectral gap, which controls how quickly a random walk on the graph settles to its stable distribution; for an infinite family of d-regular graphs, this gap can be at most d − 2√(d − 1), up to lower-order terms (the Alon–Boppana bound). The Ramanujan graphs are defined as the graphs that are optimal from the point of view of spectral expansion: they are d-regular graphs in which every eigenvalue other than ±d has absolute value at most 2√(d − 1). Although Ramanujan graphs with high degree, such as the complete graphs, are easy to construct, expander graphs of low degree are needed for the applications of these graphs. Several constructions of low-degree Ramanujan graphs are now known, the first of which were by Lubotzky, Phillips & Sarnak and by Margulis. 
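The spectral condition can be checked numerically on a small example. The sketch below uses the Petersen graph, a 3-regular graph that is not one of the book's constructions but happens to satisfy the Ramanujan bound: every eigenvalue of its adjacency matrix other than the degree d = 3 has absolute value at most 2√(d − 1) = 2√2.

```python
import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5, 2): vertices are the
# 2-element subsets of {0, ..., 4}, adjacent when disjoint.
verts = list(combinations(range(5), 2))
n, d = len(verts), 3
A = np.zeros((n, n))
for i, u in enumerate(verts):
    for j, v in enumerate(verts):
        if not set(u) & set(v):
            A[i, j] = 1.0

eig = np.sort(np.linalg.eigvalsh(A))  # symmetric matrix: real spectrum
trivial = eig[-1]                     # largest eigenvalue, equal to d
nontrivial_max = max(abs(eig[0]), abs(eig[-2]))
ramanujan = nontrivial_max <= 2 * np.sqrt(d - 1)
```

Here the spectrum is {3, 1, −2} with multiplicities 1, 5, and 4, so the nontrivial eigenvalues are bounded by 2, comfortably below 2√2 ≈ 2.83.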
Reviewer Jürgen Elstrod writes that "while the description of these graphs is elementary, the proof that they have the desired properties is not". Elementary Number Theory, Group Theory and Ramanujan Graphs aims to make as much of this theory accessible at an elementary level as possible. Topics Its authors have divided Elementary Number Theory, Group Theory and Ramanujan Graphs into four chapters. The first of these provides background in graph theory, including material on the girth of graphs (the length of the shortest cycle), on graph coloring, and on the use of the probabilistic method to prove the existence of graphs for which both the girth and the number of colors needed are large. This provides additional motivation for the construction of Ramanujan graphs, as the ones constructed in the book provide explicit examples of the same phenomenon. This chapter also provides the expected material on spectral graph theory, needed for the definition of Ramanujan graphs. Chapter 2, on number theory, includes the sum of two squares theorem characterizing the positive integers that can be represented as sums of two squares of integers (closely connected to the norms of Gaussian integers), Lagrange's four-square theorem according to which all positive integers can be represented as sums of four squares (proved using the norms of Hurwitz quaternions), and quadratic reciprocity. Chapter 3 concerns group theory, and in particular the theory of the projective special linear groups PSL(2, q) and projective general linear groups PGL(2, q) over the finite fields whose order is a prime number q, and the representation theory of finite groups. The final chapter constructs the Ramanujan graph X^(p, q), for two prime numbers p and q, as a Cayley graph of the group PSL(2, q) or PGL(2, q) (depending on quadratic reciprocity), with generators defined by taking, modulo q, a set of quaternions coming from representations of p as a sum of four squares. These graphs are automatically (p + 1)-regular. 
The chapter provides formulas for their numbers of vertices, and estimates of their girth. While not fully proving that these graphs are Ramanujan graphs, the chapter proves that they are spectral expanders, and describes how the claim that they are Ramanujan graphs follows from Pierre Deligne's proof of the Ramanujan conjecture (the connection to Ramanujan from which the name of these graphs was derived). Audience and reception This book is intended for advanced undergraduates who have already seen some abstract algebra and real analysis. Reviewer Thomas Shemanske suggests using it as the basis of a senior seminar, as a quick path to many important topics and an interesting example of how these seemingly-separate topics join forces in this application. On the other hand, Thomas Pfaff thinks it would be difficult going even for most senior-level undergraduates, but could be a good choice for independent study or an elective graduate course. References Mathematics books 2003 non-fiction books Algebraic graph theory
Elementary Number Theory, Group Theory and Ramanujan Graphs
[ "Mathematics" ]
984
[ "Mathematical relations", "Graph theory", "Algebra", "Algebraic graph theory" ]
66,229,826
https://en.wikipedia.org/wiki/Collinder%20228
Collinder 228 is an open cluster within the southern part of the Carina Nebula NGC 3372, about 25' south of . It is probably composed of stars which recently formed from the material in the nebula. QZ Carinae is the brightest member of Collinder 228, with an apparent magnitude varying between 6.16 and 6.49. See also List of most massive stars References Carina Nebula Open clusters Carina (constellation) Star-forming regions
Collinder 228
[ "Astronomy" ]
91
[ "Carina (constellation)", "Constellations" ]
66,230,909
https://en.wikipedia.org/wiki/Ruprecht%2044
Ruprecht 44 is an open cluster in the Milky Way galaxy. It is about 6,600 pc away in the constellation Puppis. Ruprecht 44 is a very young open cluster, only a few million years old. References Open clusters Puppis Star-forming regions
Ruprecht 44
[ "Astronomy" ]
57
[ "Puppis", "Constellations" ]
66,231,266
https://en.wikipedia.org/wiki/Hymenoptera%20paleobiota%20of%20Burmese%20amber
Burmese amber is fossil resin dating to the early Late Cretaceous Cenomanian age recovered from deposits in the Hukawng Valley of northern Myanmar. It is known for being one of the most diverse Cretaceous age amber paleobiotas, containing rich arthropod fossils, along with uncommon vertebrate fossils and even rare marine inclusions. A mostly complete list of all taxa described up to the end of 2023 can be found in Ross (2024). Hymenoptera References Prehistoric fauna by locality
Hymenoptera paleobiota of Burmese amber
[ "Biology" ]
107
[ "Prehistoric fauna by locality", "Prehistoric biotas" ]
66,234,171
https://en.wikipedia.org/wiki/Movable%20scaffolding%20system
A movable scaffolding system (MSS) is a special-purpose self-launching form used in bridge construction, specifically prestressed concrete bridges with segments or spans that are cast in place. The movable scaffolding system is used to support a form while the concrete is cured; once the segment is complete, the scaffold and forms are moved to the end of the new segment and another segment is poured. While superficially similar, movable scaffolding systems should not be confused with launching gantry machines, which are also used in segmental bridge construction. Both feature long girders spanning multiple bridge spans which move with and temporarily support the work, but launching gantry machines are used to lift and support precast bridge segments and bridge girders, while movable scaffolding systems are used for cast-in-place construction. Operation and design An MSS is generally used instead of a launching gantry to minimize the number of joints, since the cast-in-place segments typically are longer than precast segments. Once several bridge piers are complete, support brackets are attached to adjacent piers and the main parallel girders of the MSS are lifted into place to support the scaffold and concrete forms. Jacks are used to raise the girders and forms and the concrete is poured for the segment (or span) after rebar is placed. After the concrete has cured and the tendons have been tensioned, the jacks are lowered and the MSS girders are launched to bridge the next span. This process is repeated until the bridge is complete. Both overhead (forms suspended from support girder(s) above the bridge deck level) and underslung (forms supported by support girder(s) below bridge deck level) MSS are available. History MSS construction was developed in the 1960s in Europe; the first bridge built with an MSS was the in Germany, completed in 1959.
The first bridge constructed with an MSS in California was the Long Beach International Gateway in Long Beach, which replaced the Gerald Desmond Bridge and was completed in 2020. References External links Civil engineering Construction equipment
Movable scaffolding system
[ "Engineering" ]
434
[ "Construction", "Construction equipment", "Civil engineering", "Industrial machinery" ]
66,234,255
https://en.wikipedia.org/wiki/%C3%97%20Beruladium%20procurrens
× Beruladium procurrens is an intergeneric hybrid plant in the umbellifer family (Apiaceae); the result of hybridisation between Berula erecta (lesser water parsnip) and Helosciadium nodiflorum (fool's water cress). Discovery In July 1979 Max Walters collected an unidentified plant from Chippenham Fen, Cambridgeshire, England; it resembled H. nodiflorum, but grew as a floating mass in a fen ditch with small, pedunculate umbels rising above the water surface. Later that year, the specimen was exhibited as a living plant at the annual BSBI exhibition meeting at the British Museum in London. An initial putative determination of H. repens was made, but as the plants were found to produce poor pollen and did not develop ripe fruits a hybrid origin was deemed more likely, possibly H. repens x H. nodiflorum. Later suggestions included a depauperate example of B. erecta, which can be confused with H. nodiflorum in the vegetative state, or else an intergeneric hybrid between the two. The original material was cultivated for a number of years in Cambridge University Botanic Garden, but this stock is no longer extant, nor was material placed in Cambridge University Herbarium (CGE). However, the original collections made by Walters are present as dried specimens in the University of Leicester Herbarium (LTR), presumably retained after being sent to then herbarium director Tom Tutin for determination. Plants considered to be the same as those collected by Walters in 1979 still occur on Chippenham Fen and, in 2014, Alan Leslie reexamined the plants and sent them for molecular and cytogenetic analysis at the University of Leicester, which revealed a previously unknown intergeneric hybrid between B. erecta and H. nodiflorum. Chromosome number The original Walters' material and the 2014 collection from Chippenham Fen are both 2n = 20, which is consistent with an intergeneric hybrid between B. erecta (2n = 18) and H. 
nodiflorum (2n = 22). Distribution East Anglia (particularly Cambridgeshire and Suffolk). Notable sites: Chippenham Fen and Carlton Marshes. Description Creeping perennial herb that roots at most nodes. Leaves simply pinnate with up to 5 pairs of leaflets, which are ovate to broadly ovate. Petioles without the petiolar ring characteristic of B. erecta. Flowering umbels, typically small, are borne on peduncles, which vary from very short to longer than the rays of the umbel, and are subtended at the base by an involucre of (1)2-3 bracts. Sterile; ripe fruit absent. References Apiaceae Hybrid plants Intergeneric hybrids
× Beruladium procurrens
[ "Biology" ]
583
[ "Intergeneric hybrids", "Hybrid plants", "Plants", "Hybrid organisms" ]
66,234,970
https://en.wikipedia.org/wiki/Address%20family%20identifier
An address family identifier is used to identify individual network address schemes or numbering plans for network communication in contexts where the use of individual addresses might otherwise be ambiguous. Address family identifiers were first defined in . Examples of address families include 32-bit IPv4 addresses, 128-bit IPv6 addresses, X.121 addresses used by the X.25 protocol suite, E.164 telephone numbers, and F.69 Telex addresses. Address family identifiers are used in communications protocols and APIs that support multiple network address schemes, including routing protocols such as BGP and RIPv2. The list of address family identifiers is maintained by IANA. References External links IANA address family identifier table Internet Assigned Numbers Authority Unique identifiers
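A numeric-to-name lookup makes the idea concrete. The sketch below uses a small illustrative subset of the IANA-assigned address family numbers (the values shown, such as 1 for IPv4 and 2 for IPv6, follow the IANA registry, but the helper name and the exact label strings are this sketch's own):

```python
# A small subset of IANA-assigned address family numbers; illustrative only.
ADDRESS_FAMILIES = {
    1: "IP (IP version 4)",
    2: "IP6 (IP version 6)",
    9: "F.69 (Telex)",
    10: "X.121 (X.25, Frame Relay)",
}

def afi_name(afi: int) -> str:
    """Resolve a numeric address family identifier to a registry name,
    falling back to a placeholder for values not in the subset above."""
    return ADDRESS_FAMILIES.get(afi, f"unassigned/unknown ({afi})")

print(afi_name(1))  # IP (IP version 4)
```

Protocols such as BGP carry exactly this kind of 16-bit AFI value in their messages so that a single announcement format can describe routes for different address schemes.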
Address family identifier
[ "Technology" ]
158
[ "Computing stubs", "Computer network stubs" ]
66,234,987
https://en.wikipedia.org/wiki/Phenyl%20phosphorodiamidate
Phenyl phosphorodiamidate is an organophosphorus compound with the formula C6H5OP(O)(NH2)2. A white solid, it is used as an inhibitor of urease, an enzyme that accelerates the hydrolysis of urea. In this way, phenyl phosphorodiamidate enhances the effectiveness of urea-based fertilizers. It is a component of the technology of controlled release fertilizers. In terms of its molecular structure, phenyl phosphorodiamidate is a tetrahedral molecule structurally related to urea, hence its inhibitory function. It is a derivative of phosphoryl chloride. See also N-(n-butyl)thiophosphoric triamide, a related urease inhibitor References Phosphoramides Soil improvers Fertilizers
Phenyl phosphorodiamidate
[ "Chemistry" ]
192
[ "Fertilizers", "Soil chemistry" ]
66,235,492
https://en.wikipedia.org/wiki/Koelsch%20radical
The Koelsch radical (also known as Koelsch's radical and 1,3-bisdiphenylene-2-phenylallyl or α,γ-bisdiphenylene-β-phenylallyl, abbreviated BDPA) is a chemical compound that is an unusually stable carbon-centered radical, due to its resonance structures. Properties BDPA is an unusually stable radical compound due to the extent to which its electrons are delocalized through resonance structures. The unpaired electron is located predominantly at the 1 and 3 positions. Steric effects from the biphenyl substituents also contribute to the compound's stability. Uses BDPA and closely related compounds are used as molecular standards in electron paramagnetic resonance (EPR) and electron nuclear double resonance (ENDOR) experiments, and as a polarizing agent in dynamic nuclear polarization (DNP) nuclear magnetic resonance (NMR) experiments. Because BDPA itself is hydrophobic, derivatives have been developed that are more soluble in aqueous solution. History The compound was first synthesized by C. Frederick Koelsch while he was a postdoctoral fellow at Harvard University in the 1930s. He attempted to publish a paper describing the compound, but the paper was rejected on the grounds that the described properties, particularly stability, were unlikely to be those of a radical. Subsequent experimental evidence and quantum mechanics calculations suggested his interpretation of the original experiment was correct, resulting in the publication of the paper in 1957, nearly 25 years after the original experiments. Although the original report described stability on the order of years, modern experiments suggest that this family of compounds, while unusually stable for radicals, shows measurable degradation in months after preparation. References Aromatic compounds Free radicals
Koelsch radical
[ "Chemistry", "Biology" ]
363
[ "Aromatic compounds", "Free radicals", "Organic compounds", "Senescence", "Biomolecules" ]
66,236,993
https://en.wikipedia.org/wiki/Lee%20Irvin%20Smith
Lee Irvin Smith (July 22, 1891 – March 29, 1973) was an American organic chemist who spent his research career on the faculty at the University of Minnesota, where he served as chief of the chemistry department's organic chemistry division. Early life and education Smith was born in Indianapolis, Indiana in 1891, the oldest of three sons. He was raised mostly in Columbus, Ohio, where the family moved when Smith was a child. His father was a piano maker and Smith learned to play from a young age. Smith attended Ohio State University and developed an interest in chemistry after a course taught by William Lloyd Evans. Smith received his bachelor's degree in 1913 and remained at Ohio State for a master's degree received in 1915. He then moved to Harvard University, where he studied organic chemistry under the supervision of Elmer Peter Kohler. He received a second master's degree from Harvard in 1917 and his PhD in 1920. His graduate work was interrupted by World War I, during which he served as a second lieutenant and worked on a wartime project with Kohler and others on Lewisite from late 1917 to the end of the conflict in 1918. Academic career Smith was appointed as an instructor of chemistry at the University of Minnesota – not yet a major research institution in chemistry at the time – following the completion of his PhD in 1920. By 1932 he had become full professor and the chief of the organic division of the department of chemistry, a position he would occupy for over 25 years. His work in this position is recognized as influential in establishing the university's role in organic chemistry research. Starting in 1932 he recruited young scientists to expand the department, including C. Frederick Koelsch, Paul Doughty Bartlett, and later Richard T. Arnold. Smith stepped down from his chief position in 1958 and was succeeded by William E. Parham, and retired fully in 1960. Throughout his academic career Smith also worked as an industry consultant with Merck & Co. 
and General Mills. Smith served on a number of editorial boards and was the president of the American Chemical Society's organic division in 1941–2. He was elected to the United States National Academy of Sciences in 1944. Research Smith is best known for his synthesis of vitamin E in 1939. He also published extensively on related tocopherol compounds. He was also the first to publish a synthesis of a bicyclopropyl ketone. In addition his research group studied alkylbenzenes, benzoquinones, and the Jacobsen rearrangement. References 20th-century American chemists American organic chemists University of Minnesota faculty 1891 births 1973 deaths Ohio State University College of Arts and Sciences alumni Harvard University alumni Members of the United States National Academy of Sciences
Lee Irvin Smith
[ "Chemistry" ]
551
[ "Organic chemists", "American organic chemists" ]
58,145,978
https://en.wikipedia.org/wiki/Rolled%20plate%20glass
Rolled plate is a type of industrially produced glass. It was invented and patented by James Hartley circa 1847. Rolled-plate glass is used architecturally; in the mid-19th century, uses included roofing railway stations and greenhouses. References Glass production Industrial processes History of glass
Rolled plate glass
[ "Materials_science", "Engineering" ]
58
[ "Glass engineering and science", "Glass production" ]
58,148,382
https://en.wikipedia.org/wiki/Fosdevirine
Fosdevirine is an experimental antiviral agent of the non-nucleoside reverse transcriptase inhibitor class that was studied for potential use in the treatment of HIV-AIDS. It was discovered by Idenix Pharmaceuticals and was being developed by GlaxoSmithKline and ViiV Healthcare, but it has now been discontinued due to unexpected side effects. References Non-nucleoside reverse transcriptase inhibitors Indoles Nitriles Chloroarenes Phosphinates Abandoned drugs
Fosdevirine
[ "Chemistry" ]
106
[ "Nitriles", "Drug safety", "Functional groups", "Abandoned drugs" ]
58,150,431
https://en.wikipedia.org/wiki/Foreshadow
Foreshadow, known as L1 Terminal Fault (L1TF) by Intel, is a vulnerability that affects modern microprocessors that was first discovered by two independent teams of researchers in January 2018, but was first disclosed to the public on 14 August 2018. The vulnerability is a speculative execution attack on Intel processors that may result in the disclosure of sensitive information stored in personal computers and third-party clouds. There are two versions: the first version (original/Foreshadow) (CVE-2018-3615) targets data from SGX enclaves; and the second version (next-generation/Foreshadow-NG) (CVE-2018-3620 and CVE-2018-3646) targets virtual machines (VMs), hypervisors (VMM), operating systems (OS) kernel memory, and System Management Mode (SMM) memory. A listing of affected Intel hardware has been posted. Foreshadow is similar to the Spectre security vulnerabilities discovered earlier to affect Intel and AMD chips, and the Meltdown vulnerability that also affected Intel. AMD products are not affected by the Foreshadow security flaws. According to one expert, "[Foreshadow] lets malicious software break into secure areas that even the Spectre and Meltdown flaws couldn't crack". Nonetheless, one of the variants of Foreshadow goes beyond Intel chips with SGX technology, and affects "all [Intel] Core processors built over the last seven years". Foreshadow may be very difficult to exploit. As of 15 August 2018, there seems to be no evidence of any serious hacking involving the Foreshadow vulnerabilities. Nevertheless, applying software patches may help alleviate some concern, although the balance between security and performance may be a worthy consideration. Companies performing cloud computing may see a significant decrease in their overall computing power; people should not likely see any performance impact, according to researchers. The real fix, according to Intel, is replacing today's processors.
Intel further states, "These changes begin with our next-generation Intel Xeon Scalable processors (code-named Cascade Lake), as well as new client processors expected to launch later this year [2018]." On 16 August 2018, researchers presented technical details of the Foreshadow security vulnerabilities in a seminar, and publication, entitled "Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution" at a USENIX security conference. History Two groups of researchers discovered the security vulnerabilities independently: a Belgian team (including Raoul Strackx, Jo Van Bulck, Frank Piessens) from imec-DistriNet, KU Leuven reported it to Intel on 3 January 2018; a second team from Technion – Israel Institute of Technology (Marina Minkin, Mark Silberstein), University of Adelaide (Yuval Yarom), and University of Michigan (Ofir Weisse, Daniel Genkin, Baris Kasikci, Thomas F. Wenisch) reported it on 23 January 2018. The vulnerabilities were first disclosed to the public on 14 August 2018. Mechanism The Foreshadow vulnerability is a speculative execution attack on Intel processors that may result in the disclosure of sensitive information stored in personal computers and third-party clouds. There are two versions: the first version (original/Foreshadow) (CVE-2018-3615 [attacks SGX]) targets data from SGX enclaves; and the second version (next-generation/Foreshadow-NG) (CVE-2018-3620 [attacks the OS kernel and SMM mode] and CVE-2018-3646 [attacks virtual machines]) targets virtual machines (VMs), hypervisors (VMM), operating systems (OS) kernel memory, and System Management Mode (SMM) memory. Intel considers the entire class of speculative execution side channel vulnerabilities as "L1 Terminal Fault" (L1TF). For Foreshadow, the sensitive data of interest is the encrypted data in an SGX enclave.
Usually, an attempt to read enclave memory from outside the enclave is made, speculative execution is permitted to modify the cache based on the data that was read, and then the processor is allowed to block the speculation when it detects that the protected-enclave memory is involved and reading is not permitted. Speculative execution can use sensitive data in a level 1 cache before the processor notices a lack of permission. The Foreshadow attacks are stealthy, and leave few traces of the attack event afterwards in a computer's logs. On 16 August 2018, researchers presented technical details of the Foreshadow security vulnerabilities in a seminar, and publication, at a USENIX security conference. Impact Foreshadow is similar to the Spectre security vulnerabilities discovered earlier to affect Intel and AMD chips, and the Meltdown vulnerability that affected Intel. AMD products, according to AMD, are not affected by the Foreshadow security flaws. According to one expert, "[Foreshadow] lets malicious software break into secure areas that even the Spectre and Meltdown flaws couldn't crack". Nonetheless, one of the variants of Foreshadow goes beyond Intel chips with SGX technology, and affects "all [Intel] Core processors built over the last seven years". Intel notes that the Foreshadow flaws could produce the following: Malicious applications, which may be able to infer data in the operating system memory, or data from other applications. A malicious guest virtual machine (VM) may infer data in the VM's memory, or data in the memory of other guest VMs. Malicious software running outside of SMM may infer data in SMM memory. Malicious software running outside of an Intel SGX enclave or within an enclave may infer data from within another Intel SGX enclave. According to one of the discoverers of the computer flaws, the SGX security hole can lead to a "complete collapse of the SGX ecosystem".
A partial listing of affected Intel hardware has been posted, and is described below. (Note: a more detailed - and updated - listing of affected products is on the official Intel website.) Intel Core i3/i5/i7/M processor (45 nm and 32 nm) 2nd/3rd/4th/5th/6th/7th/8th generation Intel Core processors Intel Core X-series processor family for Intel X99 and X299 platforms Intel Xeon processor 3400/3600/5500/5600/6500/7500 series Intel Xeon Processor E3 v1/v2/v3/v4/v5/v6 family Intel Xeon Processor E5 v1/v2/v3/v4 family Intel Xeon Processor E7 v1/v2/v3/v4 family Intel Xeon Processor Scalable family Intel Xeon Processor D (1500, 2100) Foreshadow may be very difficult to exploit, and there seems to be no evidence to date (15 August 2018) of any serious hacking involving the Foreshadow vulnerabilities. Mitigation Applying software patches may help alleviate some concern(s), although the balance between security and performance may be a worthy consideration. Companies performing cloud computing may see a significant decrease in their overall computing power; people should not likely see any performance impact, according to researchers. The real fix, according to Intel, is replacing today's processors. Intel further states, "These changes begin with our next-generation Intel Xeon Scalable processors (code-named Cascade Lake), as well as new client processors expected to launch later this year [2018]." See also Transient execution CPU vulnerabilities Hardware security bug TLBleed, similar security vulnerability References Further reading Foreshadow – Technical details (USENIX; FSA) External links Transient execution CPU vulnerabilities X86 memory management 2018 in computing
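On Linux, the kernel reports its L1TF mitigation state through a sysfs file, which gives administrators a quick way to check a machine. The sysfs path below is the real one exposed by kernels that know about the vulnerability; the helper function itself is this sketch's own, and on other systems (or older kernels) the file simply does not exist:

```python
from pathlib import Path

# Real sysfs entry on Linux kernels with L1TF reporting; absent elsewhere.
L1TF_SYSFS = "/sys/devices/system/cpu/vulnerabilities/l1tf"

def l1tf_status(path: str = L1TF_SYSFS) -> str:
    """Return the kernel's one-line L1TF (Foreshadow) status report,
    e.g. a line beginning 'Mitigation: PTE Inversion', or 'unknown'
    when the sysfs entry is absent (non-Linux or pre-2018 kernel)."""
    p = Path(path)
    return p.read_text().strip() if p.exists() else "unknown"

print(l1tf_status())
```

Similar per-vulnerability files exist alongside l1tf (for Spectre and Meltdown variants), so the same pattern generalizes to the other flaws discussed above.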
Foreshadow
[ "Technology" ]
1,646
[ "Transient execution CPU vulnerabilities", "Computer security exploits" ]
58,151,211
https://en.wikipedia.org/wiki/Ulonic%20acid
A ulonic acid is a carboxylic acid derived from a monosaccharide where the acid group is at position 1. References Carboxylic acids
Ulonic acid
[ "Chemistry" ]
36
[ "Carboxylic acids", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
58,151,276
https://en.wikipedia.org/wiki/NGC%204267
NGC 4267 is a barred lenticular galaxy located about 55 million light-years away in the constellation Virgo. It was discovered by astronomer William Herschel on April 17, 1784 and is a member of the Virgo Cluster. See also List of NGC objects (4001–5000) NGC 4262 NGC 4340 NGC 4477 References External links 4267 39710 Virgo (constellation) Virgo Cluster Astronomical objects discovered in 1784 Barred lenticular galaxies 7373
NGC 4267
[ "Astronomy" ]
100
[ "Virgo (constellation)", "Constellations" ]
58,151,408
https://en.wikipedia.org/wiki/Transseries
In mathematics, the field of logarithmic-exponential transseries is a non-Archimedean ordered differential field which extends comparability of asymptotic growth rates of elementary nontrigonometric functions to a much broader class of objects. Each log-exp transseries represents a formal asymptotic behavior, and it can be manipulated formally, and when it converges (or in every case if using special semantics such as through infinite surreal numbers), corresponds to actual behavior. Transseries can also be convenient for representing functions. Through their inclusion of exponentiation and logarithms, transseries are a strong generalization of the power series at infinity and other similar asymptotic expansions. The field was introduced independently by Dahn-Göring and Ecalle in the respective contexts of model theory or exponential fields and of the study of analytic singularity and proof by Ecalle of the Dulac conjectures. It constitutes a formal object, extending the field of exp-log functions of Hardy and the field of accelerando-summable series of Ecalle. The field enjoys a rich structure: an ordered field with a notion of generalized series and sums, with a compatible derivation with distinguished antiderivation, compatible exponential and logarithm functions and a notion of formal composition of series. Examples and counter-examples Informally speaking, exp-log transseries are well-based (i.e. reverse well-ordered) formal Hahn series of real powers of the positive infinite indeterminate x, exponentials, logarithms and their compositions, with real coefficients. Two important additional conditions are that the exponential and logarithmic depth of an exp-log transseries, that is, the maximal number of iterations of exp and log occurring in the series, must be finite. The following formal series are log-exp transseries: The following formal series are not log-exp transseries: — this series is not well-based.
— the logarithmic depth of this series is infinite — the exponential and logarithmic depths of this series are infinite It is possible to define differential fields of transseries containing the two last series; they belong respectively to and (see the paragraph Using surreal numbers below). Introduction A remarkable fact is that asymptotic growth rates of elementary nontrigonometric functions and even all functions definable in the model theoretic structure of the ordered exponential field of real numbers are all comparable: For all such and , we have or , where means . The equivalence class of under the relation is the asymptotic behavior of , also called the germ of (or the germ of at infinity). The field of transseries can be intuitively viewed as a formal generalization of these growth rates: In addition to the elementary operations, transseries are closed under "limits" for appropriate sequences with bounded exponential and logarithmic depth. However, a complication is that growth rates are non-Archimedean and hence do not have the least upper bound property. We can address this by associating a sequence with the least upper bound of minimal complexity, analogously to construction of surreal numbers. For example, is associated with rather than because decays too quickly, and if we identify fast decay with complexity, it has greater complexity than necessary (also, because we care only about asymptotic behavior, pointwise convergence is not dispositive). Because of the comparability, transseries do not include oscillatory growth rates (such as ). On the other hand, there are transseries such as that do not directly correspond to convergent series or real valued functions. Another limitation of transseries is that each of them is bounded by a tower of exponentials, i.e. a finite iteration of , thereby excluding tetration and other transexponential functions, i.e. functions which grow faster than any tower of exponentials. 
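The inline formulas for the examples above were lost in extraction. As a hedged illustration (standard series of this shape, not necessarily the article's original examples), one valid log-exp transseries and one excluded series can be written as:

```latex
% A log-exp transseries: well-based (reverse well-ordered) support,
% finite exponential depth (2) and logarithmic depth (0).
e^{e^{x}} + e^{x^{2}} - 3\sqrt{x} + \sum_{n \ge 1} n!\, x^{-n} + e^{-x}

% Not a log-exp transseries: every iterate of log occurs,
% so the logarithmic depth is infinite.
\frac{1}{x} + \frac{1}{x \log x} + \frac{1}{x \log x \log\log x} + \cdots
```

The first series also illustrates the point made above that divergence is no obstacle: the factorially divergent tail still denotes a well-defined formal asymptotic behavior.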
There are ways to construct fields of generalized transseries including formal transexponential terms, for instance formal solutions of the Abel equation . Formal construction Transseries can be defined as formal (potentially infinite) expressions, with rules defining which expressions are valid, comparison of transseries, arithmetic operations, and even differentiation. Appropriate transseries can then be assigned to corresponding functions or germs, but there are subtleties involving convergence. Even transseries that diverge can often be meaningfully (and uniquely) assigned actual growth rates (that agree with the formal operations on transseries) using accelero-summation, which is a generalization of Borel summation. Transseries can be formalized in several equivalent ways; we use one of the simplest ones here. A transseries is a well-based sum, with finite exponential depth, where each is a nonzero real number and is a monic transmonomial ( is a transmonomial but is not monic unless the coefficient ; each is different; the order of the summands is irrelevant). The sum might be infinite or transfinite; it is usually written in the order of decreasing . Here, well-based means that there is no infinite ascending sequence (see well-ordering). A monic transmonomial is one of 1, x, log x, log log x, ..., e^(purely large transseries). Note: Because , we do not include it as a primitive, but many authors do; log-free transseries do not include but is permitted. Also, circularity in the definition is avoided because the purely large transseries (above) will have lower exponential depth; the definition works by recursion on the exponential depth. See "Log-exp transseries as iterated Hahn series" (below) for a construction that uses and explicitly separates different stages. A purely large transseries is a nonempty transseries with every .
Transseries have finite exponential depth, where each level of nesting of e or log increases depth by 1 (so we cannot have x + log x + log log x + ...). Addition of transseries is termwise: (absence of a term is equated with a zero coefficient). Comparison: The most significant term of is for the largest (because the sum is well-based, this exists for nonzero transseries). is positive iff the coefficient of the most significant term is positive (this is why we used 'purely large' above). X > Y iff X − Y is positive. Comparison of monic transmonomials: – these are the only equalities in our construction. iff (also ). Multiplication: This essentially applies the distributive law to the product; because the series is well-based, the inner sum is always finite. Differentiation: (division is defined using multiplication). With these definitions, transseries is an ordered differential field. Transseries is also a valued field, with the valuation given by the leading monic transmonomial, and the corresponding asymptotic relation defined for by if (where is the absolute value). Other constructions Log-exp transseries as iterated Hahn series Log-free transseries We first define the subfield of of so-called log-free transseries. Those are transseries which exclude any logarithmic term. Inductive definition: For we will define a linearly ordered multiplicative group of monomials . We then let denote the field of well-based series . This is the set of maps with well-based (i.e. reverse well-ordered) support, equipped with pointwise sum and Cauchy product (see Hahn series). In , we distinguish the (non-unital) subring of purely large transseries, which are series whose support contains only monomials lying strictly above . We start with equipped with the product and the order . If is such that , and thus and are defined, we let denote the set of formal expressions where and . 
This forms a linearly ordered commutative group under the product and the lexicographic order if and only if or ( and ). The natural inclusion of into given by identifying and inductively provides a natural embedding of into , and thus a natural embedding of into . We may then define the linearly ordered commutative group and the ordered field which is the field of log-free transseries. The field is a proper subfield of the field of well-based series with real coefficients and monomials in . Indeed, every series in has a bounded exponential depth, i.e. the least positive integer such that , whereas the series has no such bound. Exponentiation on : The field of log-free transseries is equipped with an exponential function which is a specific morphism . Let be a log-free transseries and let be the exponential depth of , so . Write as the sum in where , is a real number and is infinitesimal (any of them could be zero). Then the formal Hahn sum converges in , and we define where is the value of the real exponential function at . Right-composition with : A right composition with the series can be defined by induction on the exponential depth by with . It follows inductively that monomials are preserved by so at each inductive step the sums are well-based and thus well defined. Log-exp transseries Definition: The function defined above is not onto so the logarithm is only partially defined on : for instance the series has no logarithm. Moreover, every positive infinite log-free transseries is greater than some positive power of . In order to move from to , one can simply "plug" into the variable of series formal iterated logarithms which will behave like the formal reciprocal of the -fold iterated exponential term denoted . For let denote the set of formal expressions where . We turn this into an ordered group by defining , and defining when . We define . 
If and we embed into by identifying an element with the term We then obtain as the directed union On the right-composition with is naturally defined by Exponential and logarithm: Exponentiation can be defined on in a similar way as for log-free transseries, but here also has a reciprocal on . Indeed, for a strictly positive series , write where is the dominant monomial of (largest element of its support), is the corresponding positive real coefficient, and is infinitesimal. The formal Hahn sum converges in . Write where itself has the form where and . We define . We finally set Using surreal numbers Direct construction of log-exp transseries One may also define the field of log-exp transseries as a subfield of the ordered field of surreal numbers. The field is equipped with Gonshor-Kruskal's exponential and logarithm functions and with its natural structure of field of well-based series under Conway normal form. Define , the subfield of generated by and the simplest positive infinite surreal number (which corresponds naturally to the ordinal , and as a transseries to the series ). Then, for , define as the field generated by , exponentials of elements of and logarithms of strictly positive elements of , as well as (Hahn) sums of summable families in . The union is naturally isomorphic to . In fact, there is a unique such isomorphism which sends to and commutes with exponentiation and sums of summable families in lying in . Other fields of transseries Continuing this process by transfinite induction on beyond , taking unions at limit ordinals, one obtains a proper class-sized field canonically equipped with a derivation and a composition extending that of (see Operations on transseries below). 
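The logarithm construction sketched above can be illustrated on a simple positive series. The example and all concrete symbols below are the editor's choice, not from the source:

```latex
% Logarithm of a positive log-exp transseries via the decomposition
% A = c\,\mathfrak{m}\,(1+\varepsilon), with \mathfrak{m} the dominant
% monomial, c > 0 its real coefficient, and \varepsilon infinitesimal.
A = 2e^{x} + 1 = 2e^{x}\bigl(1 + \tfrac{1}{2}e^{-x}\bigr),
\qquad \mathfrak{m} = e^{x},\quad c = 2,\quad \varepsilon = \tfrac{1}{2}e^{-x}.
% Then \log A = \log\mathfrak{m} + \log c + \log(1+\varepsilon):
\log A = x + \log 2 + \sum_{k\ge 1}\frac{(-1)^{k+1}}{k}\,\varepsilon^{k}
       = x + \log 2 + \tfrac{1}{2}e^{-x} - \tfrac{1}{8}e^{-2x} + \cdots
```

The expansion of log(1 + ε) is a formal Hahn sum, which converges in the field because ε is infinitesimal.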
If instead of one starts with the subfield generated by and all finite iterates of at , and for is the subfield generated by , exponentials of elements of and sums of summable families in , then one obtains an isomorphic copy of the field of exponential-logarithmic transseries, which is a proper extension of equipped with a total exponential function. The Berarducci-Mantova derivation on coincides on with its natural derivation, and is unique to satisfy compatibility relations with the exponential ordered field structure and generalized series field structure of and . Contrary to the derivation in and , the derivation is not surjective: for instance the series doesn't have an antiderivative in or (this is linked to the fact that those fields contain no transexponential function). Additional properties Operations on transseries Operations on the differential exponential ordered field Transseries have very strong closure properties, and many operations can be defined on transseries: Log-exp transseries form an exponentially closed ordered field: the exponential and logarithmic functions are total. For example: Logarithm is defined for positive arguments. Log-exp transseries are real-closed. Integration: every log-exp transseries has a unique antiderivative with zero constant term , and . Logarithmic antiderivative: for , there is with . Note 1. The last two properties mean that is Liouville closed. Note 2. Just like an elementary nontrigonometric function, each positive infinite transseries has integral exponentiality, even in this strong sense: The number is unique; it is called the exponentiality of . Composition of transseries An original property of is that it admits a composition (where is the set of positive infinite log-exp transseries) which enables us to see each log-exp transseries as a function on . Informally speaking, for and , the series is obtained by replacing each occurrence of the variable in by . Properties Associativity: for and , we have and . 
Compatibility of right-compositions: For , the function is a field automorphism of which commutes with formal sums, sends onto , onto and onto . We also have . Uniqueness: the composition is the unique operation satisfying the two previous properties. Monotonicity: for , the function is constant or strictly monotonic on . The direction of monotonicity depends on the sign of . Chain rule: for and , we have . Functional inverse: for , there is a unique series with . Taylor expansions: each log-exp transseries has a Taylor expansion around every point, in the sense that for every and for sufficiently small , we have where the sum is a formal Hahn sum of a summable family. Fractional iteration: for with exponentiality and any real number , the fractional iterate of is defined. Decidability and model theory Theory of as an ordered valued differential field The theory of is decidable and can be axiomatized as follows (this is Theorem 2.2 of Aschenbrenner et al.): is an ordered valued differential field. Intermediate value property (IVP): where P is a differential polynomial, i.e. a polynomial in . In this theory, exponentiation is essentially defined for functions (using differentiation) but not for constants; in fact, every definable subset of is semialgebraic. Theory of as an ordered exponential field The theory of is that of the real ordered exponential field , which is model complete by Wilkie's theorem. Hardy fields is the field of accelero-summable transseries, and using accelero-summation, we have the corresponding Hardy field, which is conjectured to be the maximal Hardy field corresponding to a subfield of . (This conjecture is informal since we have not defined which isomorphisms of Hardy fields into differential subfields of are permitted.) is conjectured to satisfy the above axioms of . 
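The composition, functional-inverse and chain-rule properties can be illustrated with a familiar pair of series. The example below is the editor's, not taken from the source:

```latex
% Composition: \log x and e^{x} are mutual functional inverses,
(\log x)\circ e^{x} = \log e^{x} = x,
\qquad e^{x}\circ(\log x) = e^{\log x} = x.
% Chain rule (f\circ g)' = g'\cdot(f'\circ g), e.g. with f = x^{2},
% g = e^{x}+x:
\bigl(f\circ g\bigr)' = (e^{x}+1)\cdot 2\,(e^{x}+x)
                      = 2e^{2x} + 2(x+1)e^{x} + 2x.
```

Expanding (e^x + x)^2 directly and differentiating termwise gives the same result, as the chain rule requires.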
Without defining accelero-summation, we note that when operations on convergent transseries produce a divergent one while the same operations on the corresponding germs produce a valid germ, we can then associate the divergent transseries with that germ. A Hardy field is said to be maximal if it is properly contained in no larger Hardy field. By an application of Zorn's lemma, every Hardy field is contained in a maximal Hardy field. It is conjectured that all maximal Hardy fields are elementarily equivalent as differential fields, and indeed have the same first-order theory as . Log-exp transseries do not themselves correspond to a maximal Hardy field, for not every transseries corresponds to a real function, and maximal Hardy fields always contain transexponential functions. See also Formal power series Hahn series Exponentially closed field Hardy field References Asymptotic analysis Mathematical series Exponentials Logarithms Real closed field
Transseries
[ "Mathematics" ]
3,493
[ "Sequences and series", "Logarithms", "Mathematical analysis", "Mathematical structures", "Series (mathematics)", "Calculus", "E (mathematical constant)", "Asymptotic analysis", "Exponentials" ]
58,151,566
https://en.wikipedia.org/wiki/InfoQ
Information quality (InfoQ) is the potential of a data set to achieve a specific (scientific or practical) goal using a given empirical analysis method. Definition Formally, the definition is InfoQ = U(X,f|g) where X is the data, f the analysis method, g the goal and U the utility function. InfoQ is different from data quality and analysis quality, but is dependent on these components and on the relationship between them. InfoQ has been applied in a wide range of domains like healthcare, customer surveys, data science programs, advanced manufacturing and Bayesian network applications. Kenett and Shmueli (2014) proposed eight dimensions to help assess InfoQ and various methods for increasing InfoQ: Data resolution, Data structure, Data integration, Temporal relevance, Chronology of data and goal, Generalization, Operationalization, Communication. References Data Research methods Statistical analysis Information
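The definition InfoQ = U(X, f | g) can be sketched in code. The following is a toy illustration only: the function names, the goal of estimating a known mean, and the negative-squared-error utility are the editor's assumptions, not part of Kenett and Shmueli's formalism:

```python
def info_q(data, analysis, utility):
    """Toy InfoQ = U(f(X) | g): apply the analysis method f to the
    data X, then score the result with the goal-specific utility U.
    (Illustrative only; InfoQ is defined abstractly in the source.)"""
    result = analysis(data)
    return utility(result)

# Hypothetical goal g: estimate a true mean of 10.0.
true_mean = 10.0
utility = lambda estimate: -(estimate - true_mean) ** 2  # higher is better

sample_mean = lambda xs: sum(xs) / len(xs)  # analysis method f
first_obs = lambda xs: xs[0]                # a cruder method f'

data = [9.5, 10.2, 10.1, 9.8, 10.4]         # data set X
print(info_q(data, sample_mean, utility))   # near-zero penalty (high InfoQ)
print(info_q(data, first_obs, utility))     # larger penalty (lower InfoQ)
```

The point of the sketch is that InfoQ depends jointly on the data, the analysis method, and the goal: the same data set scores differently under different analysis methods.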
InfoQ
[ "Technology" ]
180
[ "Information technology", "Data" ]
58,151,936
https://en.wikipedia.org/wiki/Fluorotabun
Fluorotabun is a highly toxic organophosphate nerve agent of the G-series. It is the fluorinated analog of tabun, i.e. the cyanide group is replaced by a fluorine atom. GAF is considered an ineffective GA-like agent. It is less effective than GAA. See also Tabun (nerve agent) GV (nerve agent) References G-series nerve agents Acetylcholinesterase inhibitors Organophosphates Ethyl esters Dimethylamino compounds
Fluorotabun
[ "Chemistry" ]
111
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
58,152,590
https://en.wikipedia.org/wiki/Monmouthshire%20Houses
Monmouthshire Houses: A Study of Building Techniques and Smaller House-Plans in the Fifteenth to Seventeenth Centuries is a study of buildings within the county of Monmouthshire written by Sir Cyril Fox and Lord Raglan and published by the National Museum of Wales. The study was published in three volumes; Part I Medieval Houses, Part II Sub-Medieval Houses, c. 1550–1610 and Part III Renaissance Houses, c. 1590–1714, between 1951 and 1954. The series was republished by Merton Priory Press in 1994. A later historian of Welsh architecture, Peter Smith, described Fox and Raglan’s work as equal in importance, in its own field, to Charles Darwin's On the Origin of Species. History Sir Cyril Fox (1882–1967) was Director of the National Museum of Wales from 1926 to 1948. Fitzroy Somerset, Lord Raglan, the great-grandson of the 1st Lord Raglan, British Commander during the Crimean War, was a soldier, author and resident of Cefntilla Court in Monmouthshire. Raglan was also a Commissioner for Ancient Monuments in Wales and both he and Fox were pioneers of the study of vernacular architecture, being founder members of the Vernacular Architecture Group. From the early 1940s until 1949, Fox and Raglan undertook the most extensive survey of lesser Monmouthshire buildings ever undertaken. Raglan described their methodology in the introduction to the first volume, Medieval Houses, published in 1951. He would identify houses of interest and obtain the necessary permissions from owners, before calling in Fox to undertake a detailed survey. In the Introduction to the 1994 reprint, Peter Smith, author of Houses of the Welsh Countryside, recorded Raglan's approach; "as we travelled from farmhouse to farmhouse, I realised that his was a name that carried weight and in Monmouthshire opened every door...status and personal charm carried the day and we followed in his wake". 
In the Introduction to Sub-Medieval Houses, the second volume of their history, Fox and Raglan record the genesis of the project. Wartime defensive precautions in 1941 led to a decision by the Ministry of Works to demolish an important farmhouse, Upper Wern-hir, near Llanbadoc. Fox and Raglan obtained permission to survey the building before its destruction and their investigations, together with the threats to Wern-hir and other, similar, buildings, convinced them of the need for a comprehensive survey of such structures throughout Monmouthshire. Working mainly at weekends, and funded by the National Museum, Fox and Raglan produced "the first truly comprehensive regional study of vernacular architecture in Britain". Some seventy years after the volumes were first published, Fox and Raglan's work is still cited by scholars. The architectural historian, John Newman, author of the Pevsner for Monmouthshire, considered their joint work as "ground-breaking, the single most important publication on any aspect of the county's buildings", and Smith described Monmouthshire Houses as "one of the most remarkable studies of vernacular architecture yet made in the British Isles", "a landmark, in its own field, as significant as Darwin's Origin of Species". Description The three-volume work comprises detailed studies of over 400 houses and farmhouses built in Monmouthshire between the medieval period and 1714. The volumes are: Part I Medieval Houses (1951), ; (reprinted 1994), Part II Sub-Medieval Houses, c. 1550–1610 (1953), ; (reprinted 1994), Part III Renaissance Houses, c. 1590–1714 (1954), ; (reprinted 1994), Notes References Sources Books History of Monmouthshire Architecture in Wales Architecture books Architecture in the United Kingdom Architectural history Series of non-fiction books Publications established in the 1950s
Monmouthshire Houses
[ "Engineering" ]
753
[ "Architectural history", "Architecture" ]
58,153,166
https://en.wikipedia.org/wiki/Sheerness%20Steelworks
Sheerness Steelworks was a steel plant located at Sheerness, on the Isle of Sheppey, in Kent, England. The plant opened in 1971 and produced steel via the Electric Arc Furnace (EAF) method rather than as a primary metal by the smelting of iron ore. The plant closed down twice in its history: first in 2002 and again in 2012. Current owner Liberty House announced plans to re-open part of the site in 2016. History The UK Government approved an application to build a steelworks in North Kent in May 1968. The output from the plant was due to be per year, which was not seen as a threat to the operations of the nationalised British Steel. The steelworks was constructed on the site of a former dockyard, military port and hospital in Sheerness, Isle of Sheppey, Kent in 1971. However, the full commissioning of the steelworks was not complete until March 1972, and the plant was formally opened by the Duke of Edinburgh on 8 November 1972. The Sheerness site made steel from scrap metal using the EAF method, as opposed to the normal route of smelting iron ore and carbon in a Basic oxygen steelmaking (BOS) process, which at that time over half of the world's steel plants used. Because of this, it was described as a "mini-mill" in contrast to the integrated steelworks at Ravenscraig, Port Talbot and Scunthorpe. The scrap metal was supplied by water-borne transport (from a scrapyard in Erith) or via inward rail transport, mostly from south-eastern scrapyards (such as Ridham and some across London). In the latter stages of the steelworks (2003–2012), some of the scrap was sourced from areas outside the south-east, such as Crossley's at Shipley and Thomson's scrapyard in Stockton-on-Tees. In 1980, the plant was picketed by steelworkers who were striking at British Steel plants, and in 1984 striking miners picketed the plant because the Co-Steel workers had not downed tools to join them as other steelworkers had. 
However, Co-Steel, a Canadian registered company, was an independent steel-making concern and not part of the then Nationalised British Steel. The 1980s were an unsettled period for the steel industry and the Co-Steel management implemented changes to working practices and also persuaded all employees to become salaried staff as part of the company with a medical plan. In doing so, the whole plant became non-union by 1992. This later led to picketing at the gates as union members accused the management of the plant of having a "Dickensian attitude" to its workers. In December 1998, Allied Steel & Wire (ASW) made a bid to take over the Co-Steel plant so as to consolidate its power in the steel market in Europe. The takeover was described by analysts as a reverse takeover as Co-Steel was in profit at the time of the takeover and ASW was in debt. This amalgamation was completed by April 1999 with Sheerness losing 160 out of its 580 jobs, one furnace and its rod mill. In 2002, ASW went into administration and was subsequently bought by a Spanish firm, Celsa. This led to 320 redundancies from the plant and a protracted battle for some to get their pension money back from the defunct ASW. In 2003, Thamesteel, a Saudi Arabian backed company, reopened the plant to produce steel billet and export it to the Middle-East. In January 2012, Thamesteel went into administration and the site later closed with the loss of 400 jobs. The plant had not produced any steel since November 2011. In 2016, Peel Ports, the owners of the site, had of the former steelworks site demolished and remediated at a final cost of £37 million. The work was undertaken to enable Peel Ports to enhance their car import and export business through the port. The works included infilling of the former steelworks cooling ponds and adding new warehousing and an improved rail connection. 
In the same year, Liberty House announced its intention to lease the remainder of the site, as the rolling mill on site was capable of producing up to of rolled steel per year. Initial estimates were that the site would employ 60 people, with possibly a further 40 employees if business was sufficient. The Electric Arc Furnace on site was dismantled and taken to the Liberty Steel works at Newport in South Wales, as this was far cheaper than having a new EAF built there. Statistics Notes References External links Diagram of the site on the kentrail website Ironworks and steelworks in England Economy of Kent History of Kent Manufacturing plants in England
Sheerness Steelworks
[ "Chemistry" ]
980
[ "Metallurgical industry of the United Kingdom", "Metallurgical industry by country" ]
58,154,583
https://en.wikipedia.org/wiki/Valerie%20Paul
Valerie J. Paul is the Director of the Smithsonian Marine Station at Fort Pierce, in Fort Pierce, FL since 2002 and the Head Scientist of the Chemical Ecology Program. She is interested in marine chemical ecology, and specializes in researching the ecology and chemistry of Cyanobacteria, blue-green algae, blooms. She has been a fellow of the American Association for the Advancement of Science since 1996, and was the chairperson of the Marine Natural Products Gordon Research Conference in 2000. Life and career Paul graduated from the University of California at San Diego in 1979 with a BA in Biology and Studies in Chemical Ecology and then in 1985 with a PhD in Marine Biology at the University of San Diego Scripps Institution of Oceanography. She started working at the University of Guam Marine Laboratory in 1985, became director of the laboratory in 1991 until 1994, and then full professor in 1993. In 2002 she accepted a position at the Smithsonian Marine Station in Fort Pierce as Head Scientist and Director of the Caribbean Coral Reef Ecosystems. She researches marine chemical ecology, marine plant and herbivore interactions, coral reef ecology, and the ecological roles of marine natural products. More specifically in her coral reef ecology research she studies the impact of cyanobacterial bloom on coral reefs and larvae of reef building corals. She has been a council member of the International Society for Reef Studies from 1992-1996, advisory editor for Coral Reefs since 1993, a member of the editorial advisory board of the Journal of Natural Products from 2004 to 2008, and a member of the California Sea Grant Committee from 2000 to 2001 and 2006 to 2007. She was also elected and served as the chair for the Marine Natural Products Gordon Research Conference in 2000 and as the vice-chair in 1998 and she was the program director of the NIH Minority Biomedical Research Support Grant from 1990 to 2002. 
Paul was elected a fellow of the American Association for the Advancement of Science in 1996. Select publications Paul is the author or co-author of more than 275 papers and review articles. Listed here are the top 10 cited of her papers of all time: HW Paerl, VJ Paul. 2012. Climate change: links to global expansion of harmful cyanobacteria. Water research 46 (5), 1349-1363. https://tropicalsoybean.com/sites/default/files/Climate%20Change%20-%20Links%20To%20Global%20Expansion%20Of%20Harmful%20Cyanobacteria_Paerl%20&%20Paul_2012.pdf . K Taori, VJ Paul, H Luesch. 2008. Structure and Activity of Largazole, a Potent Antiproliferative Agent from the Floridian Marine Cyanobacterium Symploca sp. Journal of the American Chemical Society 130 (6), 1806-1807.https://repository.si.edu/bitstream/handle/10088/3651/713Largazole_Structure.pdf. H Luesch, WY Yoshida, RE Moore, VJ Paul, TH Corbett. 2001. Total Structure Determination of Apratoxin A, a Potent Novel Cytotoxin from the Marine Cyanobacterium Lyngbya majuscula. Journal of the American Chemical Society 123 (23), 5418-5423. https://pubs.acs.org/doi/abs/10.1021/ja010453j. MG Hadfield, VJ Paul. 2001. Natural chemical cues for settlement and metamorphosis of marine invertebrate larvae. Marine chemical ecology, 431-461. https://www.researchgate.net/profile/Michael_Hadfield/publication/265222439_Natural_Chemical_Cues_for_Settlement_and_Metamorphosis_of_Marine-Invertebrate_Larvae/links/54e3963b0cf2b2314f5d9a12/Natural-Chemical-Cues-for-Settlement-and-Metamorphosis-of-Marine-Invertebrate-Larvae.pdf IB Kuffner, LJ Walters, MA Becerro, VJ Paul, R Ritson-Williams, KS Beach. 2006. Inhibition of coral recruitment by macroalgae and cyanobacteria. Marine Ecology Progress Series 323, 107-117. https://www.int-res.com/articles/meps2006/323/m323p107.pdf. H Luesch, RE Moore, VJ Paul, SL Mooberry, TH Corbett. 2001. 
Isolation of Dolastatin 10 from the Marine Cyanobacterium Symploca Species VP642 and Total Stereochemistry and Biological Evaluation of Its Analogue Symplostatin 1. Journal of Natural Products 64 (7), 907-910. https://pubs.acs.org/doi/abs/10.1021/np010049y. VJ Paul. 1992. Ecological roles of marine natural products. Explorations in chemical ecology (USA). S Dobretsov, M Teplitski, V Paul. 2009. Mini-review: quorum sensing in the marine environment and its relationship to biofouling. Biofouling 25 (5), 413-427.https://www.tandfonline.com/doi/abs/10.1080/08927010902853516. VJ Paul, ME Hay. 1986. Seaweed susceptibility to herbivory: chemical and morphological correlates. Marine Ecology Progress Series, 255-264. https://smartech.gatech.edu/bitstream/handle/1853/34323/1986_MEPS_001.pdf. DG Corley, R Herb, RE Moore, PJ Scheuer, VJ Paul. 1988. Laulimalides. New potent cytotoxic macrolides from a marine sponge and a nudibranch predator. The Journal of Organic Chemistry 53 (15), 3644-3646. https://pubs.acs.org/doi/abs/10.1021/jo00250a053?journalCode=joceah. References External links Smithsonian National Museum of Natural History bio page Smithsonian Institution Archives Wonderful Women Wednesday ORCID bio page Living people American ecologists American women ecologists University of California, San Diego alumni Scripps Institution of Oceanography alumni University of Guam faculty Smithsonian Institution people Fellows of the American Association for the Advancement of Science Year of birth missing (living people) 20th-century American scientists 20th-century American women scientists 21st-century American scientists 21st-century American women scientists 20th-century American non-fiction writers 20th-century American women writers 21st-century American non-fiction writers 21st-century American women writers Chemical ecologists American women academics
Valerie Paul
[ "Chemistry" ]
1,459
[ "Chemical ecologists", "Chemical ecology" ]
58,155,298
https://en.wikipedia.org/wiki/ADNP%20syndrome
ADNP syndrome, also known as Helsmoortel-Van der Aa syndrome (HVDAS), is a non-inherited neurodevelopmental disorder caused by mutations in the activity-dependent neuroprotector homeobox (ADNP) gene. The hallmark features of the syndrome are intellectual disability, global developmental delays, global motor planning delays, and autism spectrum disorder (ASD) or autistic features. Although ADNP syndrome was only identified in 2014, it is projected to be one of the most frequent single-gene causes of ASD. By June 2022, just over 275 children had been registered in the ADNP Kids Research Foundation Contact Registry. Signs and symptoms Symptoms of ADNP syndrome are variable, but the following are typical characteristics: Severe speech and motor delay Mild-to-severe intellectual disability Characteristic facial features (prominent forehead, high hairline, wide and depressed nasal bridge, and short nose with full, upturned nasal tip) Features of autism spectrum disorder Hypotonia Other commonly observed traits include: Behavioral problems Sleep disturbance Brain abnormalities Seizures Feeding issues Gastrointestinal problems Visual dysfunction (hypermetropia, strabismus, cortical visual impairment) Musculoskeletal anomalies Endocrine issues including short stature and hormonal deficiencies Cardiac and urinary tract anomalies Hearing loss Early tooth eruption Almost all children with ADNP syndrome have speech delay. The average age for first words has been observed to be 30 months, with a range of 7 to 72 months. Some individuals studied did not develop any language skills. Children with ADNP syndrome show some degree of intellectual disability. The degree can range from mild (roughly 1 in 8 children) to severe (roughly half of children). Toilet training is delayed in most children. Loss of previously acquired skills was reported in one fifth of children. 
The majority of children with ADNP syndrome have features of ASD, although with less severe socializing difficulties than other children with ASD. During infant and toddler years, children are often reported to have a notably happy personality. Genetics ADNP syndrome is caused by non-inherited (de novo) mutations in the ADNP gene. Spanning about 40 kb of DNA, the ADNP gene maps to the chromosomal position chr20q13.13 in the human genome. The protein produced from this gene helps control the activity (expression) of other genes through a process called chromatin remodeling. Chromatin is the network of DNA and protein that packages DNA into chromosomes. The structure of chromatin can be changed (remodeled) to alter how tightly DNA is packaged. By regulating gene expression, the ADNP protein is involved in many aspects of growth and development. It is particularly important for regulation of genes involved in normal brain development, and it likely controls the activity of genes that direct the development and function of other body systems. These changes likely explain the intellectual disability, ASD features, and other diverse signs and symptoms of ADNP syndrome. So far, only loss-of-function mutations such as stop-gain or frameshift mutations have been reported as directly related. Most, but not all mutations might give rise to a truncated protein. If neither parent is found to carry the change in the ADNP gene, the chance of having another child with ADNP syndrome is very low. However, there is a very small chance that some of the egg cells of the mother or some of the sperm cells of the father carry the change in the ADNP gene (germline mosaicism). In this case, parents who are not found to carry the same ADNP change as their child on a blood test still have a very small chance of having another child with ADNP syndrome. ADNP has been associated with abnormalities in the autophagy pathway in schizophrenia. 
As of 2023, its precise role in the autophagy process is under active investigation. Inverse comorbidity with cancer ADNP mutations have been shown to display roles in both neurodevelopment and cancer. Equivalent mutations may result in developmental delay or in cancer depending on whether or not they are present throughout initial development. A thorough meta-analysis of brains from ASD individuals revealed gene expression dysregulation and biological pathway derailments in cancer. The opposite tendency of developing one condition or another (here ASD and cancer, respectively) within a population is called inverse comorbidity. Diagnosis The diagnosis of ADNP syndrome is established through genetic testing to identify one or more pathogenic variants on the ADNP gene. Molecular genetic testing in a child with developmental delay or an older individual with intellectual disability typically begins with chromosomal microarray analysis. If this is not diagnostic, the next step is typically either a multigene panel or exome sequencing. Single-gene testing (sequence analysis of ADNP, followed by gene-targeted deletion/duplication analysis) may be indicated in individuals exhibiting characteristic signs of ADNP syndrome. Treatment There is no known cure for ADNP syndrome, and so treatment is primarily symptomatic. This may include speech, occupational, and physical therapy and specialized learning programs depending on individual needs. Early behavioral interventions can help children with speech delays gain self-care, social, and language skills. Other treatments may be needed to address neuropsychiatric features, provide nutritional support, and address any ophthalmologic and cardiac findings that may co-exist. There is ongoing current research into treatments that may improve some features of the condition. 
In 2020, a Phase 2A clinical trial by researchers at the Seaver Autism Center at Mount Sinai Hospital suggests that low-dose ketamine may be effective in treating clinical symptoms in children diagnosed with ADNP syndrome. The peptide drug davunetide or NAP, derived from the ADNP protein, has shown neuroprotective effects in preclinical trials and may be developed into a treatment for ADNP syndrome. History The gene was first cloned in 1998, and the syndrome was first described in 2014. The first ADNP Syndrome Family Conference and Scientific Symposium was held on November 3, 2019 at the UCLA campus in Los Angeles, California. See also Angelman syndrome Fragile X syndrome Rett syndrome White Sutton syndrome Conditions comorbid to autism spectrum disorders Heritability of autism References Genetic syndromes Rare syndromes Autism Developmental psychology Autosomal dominant disorders
ADNP syndrome
[ "Biology" ]
1,306
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
58,156,803
https://en.wikipedia.org/wiki/Bunkers%20%28energy%20in%20transport%29
In energy statistics, marine bunkers and aviation bunkers as defined by the International Energy Agency are the energy consumption of ships and aircraft. Marine and aviation bunkers are reported separately from international bunkers, which represent the consumption of ships and aircraft on international routes. International bunkers are subtracted from the energy supplies of a country to calculate its domestic consumption. It is as if international aviation and international shipping did not belong to any country; they are managed by the International Civil Aviation Organization (ICAO) and the International Maritime Organization (IMO). Criticism The European Federation for Transport and Environment has only limited confidence in the ability of the ICAO and the IMO to reduce the air and sea emissions attributable to international bunkers, and thus to comply with the Paris Climate Agreement. A few figures International marine bunkers amount to 2,466 TWh/a, whereas international aviation bunkers amount to 2,163 TWh/a. References See also Bunkering Energy in transport
Bunkers (energy in transport)
[ "Physics" ]
193
[ "Physical systems", "Transport", "Energy in transport" ]
58,157,092
https://en.wikipedia.org/wiki/Claudia%20Sagastiz%C3%A1bal
Claudia Alejandra Sagastizábal is an applied mathematician known for her research in convex optimization and energy management, and for her co-authorship of the book Numerical Optimization: Theoretical and Practical Aspects. She is a researcher at the University of Campinas in Brazil. Since 2015 she has been editor-in-chief of the journal Set-Valued and Variational Analysis. Education and career Sagastizábal earned a degree in mathematics, astronomy and physics from the National University of Córdoba in Argentina in 1984. She completed a PhD in 1993 at Pantheon-Sorbonne University in France; her dissertation, Quelques methodes numeriques d'optimization: Application en gestion de stocks, was supervised by Claude Lemaréchal. While in France, she worked with Électricité de France on optimization problems involving electricity generation, a topic that has continued in her research since that time. She moved to Brazil in 1997. Before joining the University of Campinas in 2017, she was also affiliated with the Instituto Nacional de Matemática Pura e Aplicada and the French Institute for Research in Computer Science and Automation, among other institutions. Recognition Sagastizábal was an invited speaker at the 8th International Congress on Industrial and Applied Mathematics in 2015. She was also an invited speaker on control theory and mathematical optimization at the 2018 International Congress of Mathematicians. She is a SIAM Fellow, in the 2024 class of fellows, elected "for contributions to non-smooth optimization and applications to engineering, and numerical methods for optimization". References External links Year of birth missing (living people) Living people Argentine mathematicians Brazilian mathematicians Brazilian women mathematicians Argentine women mathematicians Applied mathematicians National University of Córdoba alumni Fellows of the Society for Industrial and Applied Mathematics
Claudia Sagastizábal
[ "Mathematics" ]
345
[ "Applied mathematics", "Applied mathematicians" ]
58,157,292
https://en.wikipedia.org/wiki/NGC%206158
NGC 6158 is an elliptical galaxy located about 400 million light-years away in the constellation Hercules. The galaxy was discovered by astronomer William Herschel on March 17, 1787, and is a member of Abell 2199. See also List of NGC objects (6001–7000) NGC 6166, a giant elliptical galaxy in the center of Abell 2199 References External links 6158 58198 Hercules (constellation) Abell 2199 Astronomical objects discovered in 1787 Elliptical galaxies
NGC 6158
[ "Astronomy" ]
98
[ "Hercules (constellation)", "Constellations" ]
58,158,336
https://en.wikipedia.org/wiki/Brian%20Hibbert
Brian Hibbert is a British engineer. He is best known for his leadership of high-tech commercial enterprises in the aerospace and defense industry. Hibbert began his career as an engineer with Rolls-Royce Limited, where he attained chartered engineer status. From 1974 to 1991, he was an engineer and project manager at Hunting Engineering, and from 1993 to 2001 he was its managing director. Hunting Engineering was acquired by INSYS in 2001, where Hibbert continued as managing director. He retired as managing director from Lockheed Martin in 2007 after it acquired INSYS in 2006. Since retiring, Hibbert has promoted investment in small and medium-sized enterprises. Hibbert is a Chartered Engineer and a Fellow of the Society of Environmental Engineers. In 2004 he was invested as a Commander of the Most Excellent Order of the British Empire. References Living people Environmental engineers Commanders of the Order of the British Empire Fellows of the Society of Environmental Engineers 1947 births
Brian Hibbert
[ "Chemistry", "Engineering" ]
191
[ "Environmental engineers", "Environmental engineering" ]
58,158,412
https://en.wikipedia.org/wiki/Norton%20Core
Norton Core is a discontinued mesh WiFi router that was introduced at the 2017 CES by Symantec (now NortonLifeLock) as part of its Norton brand. It was marketed as a "Secure WiFi Router," as it protected connected devices by defending the network against online threats and blocking unsafe websites. The network could be controlled through a mobile app where users could view their "security score," set up and manage their router, and manage devices connected to it. It competed with the Bitdefender Box and CUJO AI. TIME rated Norton Core as one of the "25 Best Inventions of 2017." Norton Core faced limited acceptance from the public and was criticized for requiring an expensive subscription. As a result, the router was discontinued on January 31, 2019, with sales ending immediately. Support ended on April 15, 2022, revised from the original January 2021 date. Specifications Norton Core operates on the 2.4 GHz and 5 GHz frequency bands and includes 1 GB of RAM, 4 GB of storage, a 1.7 GHz Qualcomm processor, Bluetooth 4.0 support, four Ethernet ports (one WAN, three LAN), and two USB ports. The router was offered in silver and gold. Notes External links Website Networking hardware Products introduced in 2017
Norton Core
[ "Engineering" ]
273
[ "Computer networks engineering", "Networking hardware" ]
58,158,486
https://en.wikipedia.org/wiki/Great%20Lakes%20water%20resource%20region
The Great Lakes water resource region is one of 21 major geographic areas, or regions, in the first level of classification used by the United States Geological Survey to divide and sub-divide the United States into successively smaller hydrologic units. These geographic areas contain either the drainage area of a major river, or the combined drainage areas of a series of rivers. The Great Lakes region, which is listed with a 2-digit hydrologic unit code (HUC) of 04, has an approximate size of , and consists of 15 subregions, which are designated with the 4-digit HUCs 0401 through 0415. This region includes the drainage within the United States that ultimately discharges into: (a) the Great Lakes system, including the lake surfaces, bays, and islands; and (b) the St. Lawrence River to the Rivière Richelieu drainage boundary. It encompasses parts of Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin. List of water resource subregions See also List of rivers in the United States Water resource region External links References Lists of drainage basins Drainage basins Watersheds of the United States Regions of the United States Resource Water resource regions
Great Lakes water resource region
[ "Environmental_science" ]
243
[ "Hydrology", "Drainage basins" ]