Dataset columns (with value ranges):
id             int64    39 to 79M
url            string   lengths 32 to 168
text           string   lengths 7 to 145k
source         string   lengths 2 to 105
categories     list     lengths 1 to 6
token_count    int64    3 to 32.2k
subcategories  list     lengths 0 to 27
57,950,413
https://en.wikipedia.org/wiki/Fluoromedroxyprogesterone%20acetate
Fluoromedroxyprogesterone acetate (FMPA, 9α-fluoromedroxyprogesterone acetate, or 9α-FMPA) is a synthetic steroid medication which was under development by Meiji Dairies Corporation in the 1990s and 2000s for the potential treatment of cancers but was never marketed. It is described as an antiangiogenic agent, with about two orders of magnitude greater potency for inhibition of angiogenesis than its parent compound, medroxyprogesterone acetate (MPA). FMPA showed about the same affinities for the progesterone and glucocorticoid receptors as MPA. It reached the preclinical phase of research prior to the discontinuation of its development. See also Anecortave acetate References External links Fluoromedroxyprogesterone acetate (FMPA) - AdisInsight Abandoned drugs Acetate esters Angiogenesis inhibitors Drugs with unknown mechanisms of action Experimental cancer drugs Glucocorticoids Pregnanes Progestogen esters Progestogens
Fluoromedroxyprogesterone acetate
[ "Chemistry", "Biology" ]
226
[ "Angiogenesis", "Angiogenesis inhibitors", "Drug safety", "Abandoned drugs" ]
57,955,533
https://en.wikipedia.org/wiki/Quartz%20Roasting%20Pits%20Complex
Quartz Roasting Pits Complex is a heritage-listed quartz roasting kiln located 10 km north of Hill End, Mid-Western Regional Council, New South Wales, Australia. It was built from 1854 to 1855. It is also known as Cornish Roasting Pits. The property is owned by the New South Wales Office of Environment and Heritage. It was added to the New South Wales State Heritage Register on 2 April 1999. History The Hill End Quartz Roasting Pits Complex was established by the Colonial Gold Mining Company in 1855, on the traditional land of the Wiradjuri people, to provide gold extraction facilities for those working claims on the Tambaroora and Hill End goldfields. Although at this time alluvial mining was the primary method of gold recovery, the Tambaroora fields also supported the earliest attempts at reef mining in Australia, over fifteen years before the reef mining boom of the 1870s. The Quartz Roasting Pits Complex is one of the oldest gold rush sites in Australia and represents one of the first attempts to process gold-bearing ore. It also represents an unusual technological solution to the problems initially experienced in extracting payable gold from the quartz reefs, in its development of the earliest form of quartz firing technology in Australia. With kilns for roasting gold-bearing quartz, a sophisticated battery and dam system for crushing and washing the ore, and houses for workers, the Complex provides tangible evidence of technological, social and domestic relationships during this very early stage of Australia's goldmining history. Gold-bearing quartz was brought to the Complex from surrounding mines and roasted to make it easier to crush (a relatively unusual process) and then crushed in a steam-powered battery. Gold was then extracted from the powdered ore through a process of sieving and washing. Despite its impressive technological achievements, the operation, managed and probably designed by Alfred Spence, was short-lived, closing in 1856, only eighteen months after it opened. This may have been partly because the primitive roasting techniques employed hampered the gold extraction process. It may also have been due to the lack of gold in the reefs of the surrounding area, which were not fully tested before the Complex was established. Since 1856, no major development has occurred on the site, with the land being used for grazing as part of Alpha Station. Ownership was transferred to National Parks and Wildlife in 1967. Limited archaeological excavations were undertaken by the University of Sydney in 1975, and partial reconstruction of the battery and roasting pits was carried out in 1977. Following the reassessment of the site in April 1997, the site is being stabilised and new interpretative signage prepared for its presentation to the public. Description The Roasting Pits Complex is located approximately 10 km north of Hill End township and approximately 2.5 km north of Valentine's Mine, which acted as the main source of auriferous quartz. It sits within a small gently sloping valley, straddling a shallow watercourse. The site comprises a pair of kilns, a battery building which housed ore-crushing machinery, a dam and the remains of two houses where the manager and various workers lived while the site was in operation. The Battery and Roasting Pits form the visual focus of the site, as they would have in the 1850s. From this point, all the other structures, with the exception of the Dam, are largely obscured by vegetation.
When the site was in operation, however, the landscape would have been barren, with a clear view of mining activities in the surrounding hills. This harsh cultural landscape is still evident on close examination, although its general sense has been diminished somewhat by the later plant growth. It is likely, given the short history of the Complex's operation, that all of the substantial structures on the site were built at about the same time in 1855. They would have suffered some impact upon the closure of the operation as equipment was salvaged, scavenged and removed from the site. Apart from grazing and cultivation, no significant later use of the site is known. The Roasting Pits are a two-chamber kiln structure set on the hill upslope of the Battery. The Pits are composed of hard sedimentary stone, predominantly metamorphosed shale and greywacke. The kiln contains two conical pits that open from the top and taper sharply to ground level at the front (downslope). Each pit is built in front of an artificial embankment that joins the hill contour but drops sharply on both sides. The Battery sits on the valley floor about 20 m below the Roasting Pits. It was here that the ore was crushed after roasting. The placement of the Battery on Fighting Ground Creek provided an area for tailings run-off and access to a ready supply of water, which was fed from the Dam further upstream. Locating it directly below the Roasting Pits also assisted the movement of the roasted ore from one stage of the extraction process to the next. The Battery is a large structure containing two main spaces and a solid stone platform containing several voids or spaces for machinery. It is oriented approximately north-south along its long axis, roughly parallel to the creek bank and across the slope leading down from the Roasting Pits. The western side of the building is cut into the slope, while the eastern side is built onto the flat creek terrace. Other features are scattered throughout the surrounding bush, including house remains and a large earth dam to the north-east of the battery. The site was reported to have high archaeological potential as at 29 August 1997. The ruins retain their integrity. Heritage listing The Hill End Quartz Roasting Pits is nationally significant as one of the earliest surviving reasonably intact gold mining related sites in Australia. The site demonstrates the operations of an uncommon technology brought to Australia to deal with the intractable quartz being mined. As an industrial site, all facets of the processes that took place and the infrastructure associated with it are still represented. The site provides an understanding of the nature of capital investment, technological transfer and mining and extractive processes at the very start of the gold boom, which was itself a process of lasting importance for Australia. The Roasting Pits are nationally significant as a rare example of the application of a common design for lime burning being adapted for the roasting of quartz. They reflect the period prior to the development of specialised quartz roasting kilns. The Battery is nationally significant as a purpose-built structure to house a boiler, beam engine and stamper battery. It represents the oldest surviving stamper battery building erected for gold mining in Australia, being substantially intact and interpretable as a working structure.
The features associated with the Roasting Pits, such as the Dam and worker housing, are also nationally significant, enhancing the understanding of the infrastructure associated with the operation of the Roasting Pits. The remains of the fences associated with the 1871 occupation of the site are regionally significant. Quartz Roasting Pits Complex was listed on the New South Wales State Heritage Register on 2 April 1999, having satisfied the following criteria. The place is important in demonstrating the course, or pattern, of cultural or natural history in New South Wales. The Hill End Quartz Roasting Pits are of national historical importance for their role in demonstrating the initial stage of the gold rushes that transformed the nineteenth-century Australian economy. They represent an investment of capital and a transfer and application of technology from overseas that is both representative of the processes of the early gold rush period and unusual for its particular form of application. They are an evocative and extremely early representation of the capital-driven reef mining activity that was not to develop a major presence in Hill End until the early 1870s. The nature and pattern of technological transfer and experimentation during the early gold rush period can be understood by historical analysis of the site. The place is important in demonstrating aesthetic characteristics and/or a high degree of creative or technical achievement in New South Wales. The setting for the Roasting Pits is aesthetically pleasing as a pastoral landscape, typical of the mix of small farms, cleared pastoral land and variously aged forest regrowth. The appearance of the Battery, the Roasting Pits and the other cultural features within the complex is evocative of the romance of the historical period of gold mining. It does not reflect the reality, but there is a subliminal juxtaposition of the passivity of the natural setting and the implied violence and chaos of the industrial processes. The Roasting Pits and the Battery were carefully designed functional structures. It is evident that both were built to prepared designs, and that they were meant to operate closely together as a single functional unit. The design of the Roasting Pits reflects the utilisation, with minimal adaptation, of a standard design for lime kilns. This design, as shown by numerous examples, was commonplace and well understood. It is rare as an example of the design used for quartz roasting. This type of design was superseded within a decade by more sophisticated designs, such as Wilkinson's in Victoria. The Battery, as a free-standing industrial building housing a stamper battery, would have been one of the first of its type in Australia. It was also a variation on a type that had a lengthy history in British usage, especially in mining areas such as Cornwall. It was an integration of architectural design and engineering function to create an optimal structure for a specific purpose. As stampers became more common, they were housed in expedient structures which lacked the care for masonry construction evident in the Battery. The place has strong or special association with a particular community or cultural group in New South Wales for social, cultural or spiritual reasons. The Quartz Roasting Pits Complex represents an early stage of the development of the association that Hill End and Tambaroora had with gold.
The link has been strengthened through the management of Hill End by NPWS since 1967 as an exemplar of nineteenth-century gold mining landscapes and townscapes, and has significance to the current local community. The living history of the area still plays a vital role in day-to-day life. The Complex's social significance is no greater than that of the remainder of the recognised historic heritage around Hill End. The place has potential to yield information that will contribute to an understanding of the cultural or natural history of New South Wales. The technological significance of the Quartz Roasting Pits rests on the rarity of their technology. The use of roasting, while not unknown in Australia, was rare. It reflects the first decade of gold mining in Australia, where lack of knowledge of the material was matched with a poverty of suitable equipment. The technological interest in the site is also enhanced by its integrity. The main features of the entire industrial complex are present in forms that can be interpreted and understood in relation to one another. It is representative of technology transfer, both as human systems (the Cornish bringing technology and designs into the colony for mining) and as machines (English capital, English and colonial kiln designs, American stampers and gold extracting technology). There are gaps in our knowledge of the layout and design of the complex, its corporate history, the details of its operation and its success, which can be investigated by archaeology and no other source. The rarity of this type of site, intact from the first phase of gold mining, and of that type of technology guarantees that it will not be represented effectively on many sites at all in Australia. Many sites of the first decade of the gold rushes have been destroyed by reinvestigation or different mining techniques. Sites of this period are inherently rare due to their vulnerability. Through archaeological investigation, the Quartz Roasting Pits complex can shed more light on itself and on the processes of technological transfer and adaptation of technology in the period when gold mining was commencing in Australia. As a largely intact site it has the potential to reveal the social relationships between different levels in the mining hierarchy, and the interrelationships between human and technological systems. An inquiry could investigate the investment capital used to create the complex and how this was reflected in the material culture of those who worked there. This can be contrasted with that of others who worked for themselves on the goldfields. The place possesses uncommon, rare or endangered aspects of the cultural or natural history of New South Wales. The Quartz Roasting Pits Complex is extremely rare in its technical/research and historic significance. See also References Bibliography Attribution New South Wales State Heritage Register Hill End, New South Wales Industrial buildings in New South Wales Articles incorporating text from the New South Wales State Heritage Register Kilns
Quartz Roasting Pits Complex
[ "Chemistry", "Engineering" ]
2,509
[ "Chemical equipment", "Kilns" ]
64,527,577
https://en.wikipedia.org/wiki/Graham%E2%80%93Pollak%20theorem
In graph theory, the Graham–Pollak theorem states that the edges of an $n$-vertex complete graph cannot be partitioned into fewer than $n-1$ complete bipartite graphs. It was first published by Ronald Graham and Henry O. Pollak in two papers in 1971 and 1972 (crediting Hans Witsenhausen for a key lemma), in connection with an application to telephone switching circuitry. The theorem has since become well known and repeatedly studied and generalized in graph theory, in part because of its elegant proof using techniques from algebraic graph theory. More strongly, it has been written that all proofs are somehow based on linear algebra: "no combinatorial proof for this result is known". Construction of an optimal partition A partition into exactly $n-1$ complete bipartite graphs is easy to obtain: just order the vertices, and for each vertex except the last, form a star connecting it to all later vertices in the ordering. Other partitions are also possible. Proof of optimality The linear-algebraic proof of the Graham–Pollak theorem defines a real variable $x_v$ for each vertex $v \in V$, where $V$ denotes the set of all vertices in the graph. Let the left sides and right sides of the $i$th bipartite graph be denoted $L_i$ and $R_i$, respectively, and for any set $S$ of vertices define $X_S$ to be the sum of variables for vertices in $S$: $X_S = \sum_{v \in S} x_v$. Then, in terms of this notation, the fact that the bipartite graphs partition the edges of the complete graph can be expressed as the equation $\sum_{u < v} x_u x_v = \sum_i X_{L_i} X_{R_i}$. Now consider the system of linear equations that sets $X_V = 0$ and $X_{L_i} = 0$ for each $i$. Any solution to this system of equations would also obey the nonlinear equations $0 = X_V^2 = \sum_v x_v^2 + 2\sum_{u<v} x_u x_v = \sum_v x_v^2 + 2\sum_i X_{L_i} X_{R_i} = \sum_v x_v^2$. But a sum of squares of real variables can only be zero if all the individual variables are zero, the trivial solution to the system of linear equations. If there were fewer than $n-1$ complete bipartite graphs, the system of equations would have fewer than $n$ equations in $n$ unknowns and would have a nontrivial solution, a contradiction. So the number of complete bipartite graphs must be at least $n-1$. Related problems Distance labeling Graham and Pollak study a more general graph labeling problem, in which the vertices of a graph should be labeled with equal-length strings of the characters "0", "1", and "✶", in such a way that the distance between any two vertices equals the number of string positions where one vertex is labeled with a 0 and the other is labeled with a 1. A labeling like this with no "✶" characters would give an isometric embedding into a hypercube, something that is only possible for graphs that are partial cubes, and in one of their papers Graham and Pollak call a labeling that allows "✶" characters an embedding into a "squashed cube". For each position of the label strings, one can define a complete bipartite graph in which one side of the bipartition consists of the vertices labeled with 0 in that position and the other side consists of the vertices labeled with 1, omitting the vertices labeled "✶". For the complete graph, every two vertices are at distance one from each other, so every edge must belong to exactly one of these complete bipartite graphs. In this way, a labeling of this type for the complete graph corresponds to a partition of its edges into complete bipartite graphs, with the lengths of the labels corresponding to the number of graphs in the partition.
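The star construction and the $n-1$ count are easy to check by machine. The following Python sketch is illustrative only (it is not part of the article): it builds the star partition of the complete graph $K_n$ described above and verifies that it consists of exactly $n-1$ complete bipartite graphs covering every edge exactly once.
```python
from itertools import combinations

def star_partition(n):
    """Partition the edges of K_n into n - 1 stars.

    Star i is the complete bipartite graph joining vertex i
    to all later vertices i+1, ..., n-1.
    """
    return [({i}, set(range(i + 1, n))) for i in range(n - 1)]

def check_partition(n, parts):
    """Verify that the bipartite graphs cover each edge of K_n exactly once."""
    covered = []
    for left, right in parts:
        covered.extend(frozenset((u, v)) for u in left for v in right)
    all_edges = {frozenset(e) for e in combinations(range(n), 2)}
    # Equal lengths plus equal sets means no edge is covered twice.
    return len(covered) == len(all_edges) and set(covered) == all_edges

n = 6
parts = star_partition(n)
assert len(parts) == n - 1          # matches the Graham-Pollak bound
assert check_partition(n, parts)    # every edge covered exactly once
```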
Alon–Saks–Seymour conjecture Noga Alon, Michael Saks, and Paul Seymour formulated a conjecture in the early 1990s that, if true, would significantly generalize the Graham–Pollak theorem: they conjectured that, whenever a graph of chromatic number $n$ has its edges partitioned into complete bipartite subgraphs, at least $n-1$ subgraphs are needed. Equivalently, their conjecture states that edge-disjoint unions of $k$ complete bipartite graphs can always be colored with at most $k+1$ colors. The conjecture was disproved by Huang and Sudakov in 2012, who constructed families of graphs formed as edge-disjoint unions of $k$ complete bipartite graphs that require many more than $k+1$ colors; later constructions have strengthened this separation further. Biclique partition The biclique partition problem takes as input an arbitrary undirected graph, and asks for a partition of its edges into a minimum number of complete bipartite graphs. It is NP-hard, but fixed-parameter tractable. Approximation algorithms for the problem are also known. References Algebraic graph theory
Graham–Pollak theorem
[ "Mathematics" ]
916
[ "Graph theory", "Theorems in discrete mathematics", "Mathematical relations", "Theorems in graph theory", "Algebra", "Algebraic graph theory" ]
64,528,168
https://en.wikipedia.org/wiki/Electroless%20copper%20plating
Electroless copper plating is a chemical process that deposits an even layer of copper on the surface of a solid substrate, like metal or plastic. The process involves dipping the substrate in a water solution containing copper salts and a reducing agent such as formaldehyde. Unlike electroplating, electroless plating processes in general do not require passing an electric current through the bath and the substrate; the reduction of the metal cations in solution to metallic copper is achieved by purely chemical means, through an autocatalytic reaction. Thus electroless plating creates an even layer of metal regardless of the geometry of the surface – in contrast to electroplating, which suffers from uneven current density due to the effect of substrate shape on the electric field at its surface. Moreover, electroless plating can be applied to non-conductive surfaces. Process In a typical formulation of the process, the surfaces to be coated are primed with a palladium catalyst and then immersed in an alkaline bath containing copper(II) ions Cu²⁺, which are reduced by formaldehyde through the overall reactions 2 HCHO + 4 OH⁻ → 2 HCOO⁻ + 2 H₂O + H₂ (gas) + 2 e⁻ and Cu²⁺ + 2 e⁻ → Cu (metal). Applications Electroless copper plating is used in the manufacture of printed circuit boards (PCBs), in particular for the conductive layer on the walls of through holes and vias. See also Copper electroplating Electroless nickel-phosphorus plating Electroless nickel-boron plating (NiB) Electroless nickel immersion gold (ENIG, ENEPIG) References Metal plating Copper
Electroless copper plating
[ "Chemistry" ]
309
[ "Metallurgical processes", "Coatings", "Metal plating" ]
63,175,915
https://en.wikipedia.org/wiki/Social%20narrative
A social narrative is an evidence-based learning tool designed for use with people with autism spectrum disorder (ASD) and other associated disabilities. Social narratives often use personalized stories to teach a skill, identify a situation, or tell a narrative; some examples of social narratives may cover topics such as getting along with others, interacting with others, or experiencing a new place or activity. It is referred to as a story or a written explanation that tells the learner not only what to do but also what the situation is, with the goal of addressing the challenge of learners finding social situations confusing. Social narratives have been found effective for learners from preschool to high school ages in several areas, including social skills, communication, joint attention, behavior, adaptive skills, play, and academics. Concept A social narrative is described as a long story that may or may not be employed as an antecedent intervention for students who have behavioral challenges due to deficits in social and emotional development. It depicts and explains social interactions, common behavioral expectations, and their respective social subtexts. According to the National Professional Development Center (NPDC) on ASD, in addition to teaching learners specific social behaviors and skills, it can also help them adapt their behaviors according to the social and physical cues of a situation and adjust to changes in routine. A defining feature of the social narrative is that it is individualized and narrated from the child's or learner's perspective. The story focuses on relevant cues and provides the learner with appropriate responses through examples. It is written by an educator according to the learner's instructional level and is often complemented by contents such as pictures and photographs that not only confirm the information being conveyed but also promote self-awareness, self-calming, and self-management. For example, it can be in the form of a one-page symbolic depiction, a book with photographs, or a learning material (e.g. mobile app) that clearly depicts and explains relevant information. Uses Social narrative can be used to help the learner with ASD understand various social contexts and develop new social skills, such as responding to a peer, or initiating a conversation with a familiar or new person. It can be used by various professionals, such as general and special education teachers and therapists. It can also be used and implemented by the parent and family members. Social narratives can be applied and utilized in a variety of settings, for example, in educational and therapy-based settings. Social narratives can be used to address issues such as conversational skills in learners with ASD. Types Social stories are considered a type of social narrative. In a particular story, the expectations – including those of others such as peers and teachers – are clearly and accurately described. The term "social stories", attributed to Carol Gray, primarily describes a specific way of constructing a social narrative. This type of narrative follows a formula, which orients the story toward description instead of direction. Social scripts, on the other hand, describe specific comments and questions appropriate to given situations. They are written in scripted prompt format, or as videotaped statements or phrases, that learners can use in social situations. The statements are simple, such as: "Hi, can I sit here?", or "Can you help me?".
Both of these types of social narratives can be employed to instruct a learner on how to introduce themselves to others, ask for help, initiate conversations, and join a group of peers. Social scripts constitute another type of social narrative. These can be audio or written sentences or paragraphs that the learners can use in different settings and situations so that their ability to interact with others is enhanced. Comic Strip Conversations, developed by Carol Gray, utilize drawings to illustrate what people say, do, and think in various situations. In a Comic Strip Conversation, the adult and the individual with ASD would briefly introduce the comic strip. Shortly afterward, either the adult or the individual with ASD can draw about the situation and present a perspective on what happened during the situation. It is important to note that a form of structure must be provided for the individual with ASD to understand the concept and skill being taught. Power Cards are considered another type of social narrative. Power Cards are visual aids that capitalize on an individual's interests. Power Cards can also be used to teach the learner how to appropriately engage in various social interactions, communicative behaviors, and daily routines. Although Power Cards are visual aids, they also vary in size. Power Cards are often written in the first person and describe how the child's identified hero can solve the presented problem. Social Scripts can also be used to teach a learner the language to use in specific situations. The learner is given social scenarios along with questions, comments, and statements that they can use when engaged in conversation with others. Social Scripts reduce the stress of social interactions. Social Scripts cannot be used in all social situations, however, as they can give the appearance of the individual rehearsing. Cartooning is a type of social narrative that uses cartoons to enhance the social understanding of the learner. It is often used because visual symbols are helpful for making abstract concepts and events more meaningful. Social Autopsies, developed by Richard Lavoie, are used to help individuals understand the social errors or mistakes that have occurred. They aid in dissecting an error that has occurred, assisting the learner in understanding and clarifying the error that was made. Technique The social narrative is usually written in the first person, from the perspective of the learner, so that the story matches the learner's experiences, feelings, and behavior. It is often developed by an expert (e.g. educator, therapist) together with the patient, since it integrates new social information relevant to the patient. There are no strict guidelines when writing social narratives, but the process usually involves the following steps: Identification of the social situation for intervention; Definition of target behavior for data collection; Collection of data; Social narrative writing. Some guidelines for social narrative development include the use of language understood by the learner. The narrative is also written according to the learner's comprehension skills. There is also a preference for "I" statements (although "you" statements can also be used if they are more effective) and for the construction of sentences using present and future tenses. After reading the narrative, the learner is provided an opportunity to participate in the situation that was identified. Key concepts are reviewed with the learner.
While reviewing the key concepts, the learner is assessed for understanding. If the learner understands the concept, they continue; if not, reinforcements and prompts are utilized to help the learner understand the concept of the social narrative. Data is collected to determine if the learner is making progress toward the goal. Goals and implications for use The overall goals of social narratives are teaching appropriate behavior, making choices, playing appropriately with materials and peers, decreasing problematic behavior, understanding expectations, and increasing social interactions. Social narratives, when used, can provide a variety of positive effects, such as initiating conversations with peers and adults, and enhancing self-confidence. References Learning disabilities Learning methods Psychology of learning Social learning theory Storytelling Teaching Treatment of autism
Social narrative
[ "Biology" ]
1,467
[ "Behavior", "Social learning theory" ]
63,177,604
https://en.wikipedia.org/wiki/David%20Tannor
David Joshua Tannor (born 1958) is a theoretical chemist who holds the Hermann Mayer Professorial Chair in the department of chemical physics at the Weizmann Institute of Science. Biography Tannor has a BA from Columbia University (1978) and a PhD from UCLA (1983), supervised by Eric Heller. He did his postdoctoral work with Stuart Rice and David W. Oxtoby at the University of Chicago. He is a black belt in karate. Tannor studies the effects of quantum mechanics on how molecules move. He worked from 1986 to 1989 as an assistant professor at the Illinois Institute of Technology in Chicago, from 1989 to 1995 as an assistant and associate professor at the University of Notre Dame in South Bend, Indiana, from 1992 to 1993 as a visiting professor at Columbia University, and from 1995 to 2000 as an associate professor at the Weizmann Institute of Science in Rehovot, Israel, where he has been a professor since 2000. Tannor is the author of Introduction to Quantum Mechanics (2018). He has also published or co-published over 120 scientific articles and reviews. References External links David Tannor (November 25, 2013). "Control of Multielectron Dynamics and High Harmonic Generation" (video). David Tannor (January 26, 2016). "Quantum Transitions using Complex-Valued Classical Trajectories" (video). Israeli male karateka Illinois Institute of Technology faculty Living people Columbia College (New York) alumni University of California, Los Angeles alumni American physical chemists Israeli physical chemists Theoretical chemists University of Notre Dame faculty Academic staff of Weizmann Institute of Science Columbia University faculty 1958 births 20th-century American chemists 21st-century American chemists Quantum physicists 20th-century Israeli sportsmen
David Tannor
[ "Physics", "Chemistry" ]
382
[ "Quantum chemistry", "Physical chemists", "Quantum physicists", "Quantum mechanics", "Theoretical chemistry", "Theoretical chemists" ]
62,271,137
https://en.wikipedia.org/wiki/Phenylacetyl-CoA
Phenylacetyl-CoA (C29H42N7O17P3S) is a form of acetyl-CoA formed from the condensation of the thiol group of coenzyme A with the carboxyl group of phenylacetic acid. Its molecular weight is 885.7 g/mol, and its IUPAC name is S-[2-[3-[[(2R)-4-[[[(2R,3S,4R,5R)-5-(6-aminopurin-9-yl)-4-hydroxy-3-phosphonooxyoxolan-2-yl]methoxy-hydroxyphosphoryl]oxy-hydroxyphosphoryl]oxy-2-hydroxy-3,3-dimethylbutanoyl]amino]propanoylamino]ethyl] 2-phenylethanethioate. Phenylacetyl-CoA is produced via the conversion of ATP to AMP and diphosphate, coupled with the condensation of phenylacetate and CoA: ATP + phenylacetate + CoA → AMP + diphosphate + phenylacetyl-CoA This reaction is catalyzed by phenylacetate-CoA ligase. Phenylacetyl-CoA combines with water and quinone to produce phenylglyoxylyl-CoA and quinol via a phenylacetyl-CoA dehydrogenase reaction, acting as an oxidoreductase. Phenylacetyl-CoA inhibits choline acetyltransferase, acting as a neurotoxin; it competes with acetyl-CoA. References Glycolysis Cholinergics Thioesters of coenzyme A
Phenylacetyl-CoA
[ "Chemistry" ]
424
[ "Carbohydrate metabolism", "Glycolysis" ]
62,272,698
https://en.wikipedia.org/wiki/Everykey
Everykey designs and builds a patented universal smart key that can unlock devices and log into online accounts on those devices. The idea began as an entrepreneurship class project at Case Western Reserve University. Crowdfunding Campaign Everykey launched its Kickstarter campaign on October 29, 2014. Within 48 hours, the campaign had reached trending status and raised over $25,000 in pre-orders. The project quickly gained attention, and Everykey launched another crowdfunding campaign on Indiegogo with John McAfee on December 7, 2015. While some media outlets such as Wired and TechCrunch were excited about the traction, they also expressed concern over the security versus convenience factor of the product. Writers at Business Insider focused more on the vision of the company, exploring Everykey’s future plans and classroom origin story. Products The company's debut product was an electronic wristband designed to replace keys and passwords. Development of the prototype into a working product was funded by a Kickstarter campaign. The current product, resembling a USB thumb drive that can be inserted into a wristband accessory, was funded by an Indiegogo crowdfunding campaign. The software enables Everykey to work with a variety of computer and mobile platforms. Everykey currently offers the hardware thumb-drive style product as well as a Key Ring Accessory, Band Accessory, Charging Cable, and Bluetooth Dongle. Technology Everykey is a Bluetooth device that can communicate securely with an unlimited number of other Bluetooth devices, simultaneously. The Everykey device employs a patented method including AES and RSA encryption to allow the user to unlock their devices and login to online accounts without having to type passwords. When the user leaves with Everykey, the app can lock everything back down and log out of online accounts. Everykey’s patented method allows it to perform unlocking and locking actions without plugging in the device. Reception Many were skeptical about Everykey’s legitimacy due to the company’s delayed shipment to early adopters. John McAfee’s involvement as the company’s brand ambassador was controversial, with some being concerned and others elated regarding his involvement. The company hosted an r/IAmA style open forum on Reddit so anyone could ask about topics ranging from security to late delivery. Everykey has since addressed many of the initial concerns, and is now selling their products with retailers such as Best Buy, Newegg, and Office Depot. Competitors In the password managers market, Everykey competes with LastPass, 1Password and Dashlane. In the hardware security key market, Everykey's competitors include Nymi and YubiKey. Awards Everykey has been recognized by many local and state organizations for the CEO’s flashy pitch style and grassroots backstory: ProtoTech 1st Place LaunchTown 1st Place North Coast Opportunities Technologies Fund Award FUND Conference 1st Place Best Startup Culture in Ohio, Finalist Morgenthaler-Pavey Startup Competition 1st Place References External links Everykey Website Password managers Cryptographic software Kickstarter-funded products Indiegogo projects Computer access control Security technology Security software Advanced Encryption Standard
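Everykey's actual protocol is proprietary and has not been published; the following Python sketch is only a generic illustration of the kind of public-key challenge-response that hardware keys of this kind commonly use to unlock a device without a typed password. All names and values here are hypothetical, and the `cryptography` package supplies the RSA primitives.
```python
# Generic challenge-response sketch; NOT Everykey's proprietary protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Enrollment: the token generates a key pair; the computer stores the public key.
token_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
trusted_public_key = token_key.public_key()

# Unlock attempt: the computer sends a random challenge over Bluetooth...
challenge = os.urandom(32)

# ...the token signs it with its private key...
signature = token_key.sign(
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# ...and the computer verifies the signature before unlocking.
trusted_public_key.verify(
    signature,
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)  # raises InvalidSignature if the token is not the enrolled one
print("token verified; unlocking")
```
A scheme like this never transmits a reusable secret: each unlock uses a fresh random challenge, so an eavesdropper on the radio link cannot replay a captured response.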
Everykey
[ "Mathematics", "Engineering" ]
631
[ "Cybersecurity engineering", "Cryptographic software", "Computer access control", "Mathematical software" ]
62,278,303
https://en.wikipedia.org/wiki/Gregory%20Beylkin
Gregory Beylkin (born 16 March 1953) is a Russian–American mathematician. Education and career He studied from 1970 to 1975 at the University of Leningrad, receiving a Diploma in Mathematics in November 1975. From 1976 to 1979 he was a research scientist at the Research Institute of Ore Geophysics, Leningrad. From 1980 to 1982 he was a graduate student at New York University, where he received his PhD under the supervision of Peter Lax. From 1982 to 1983 Beylkin was an associate research scientist at the Courant Institute of Mathematical Sciences. From 1983 to 1991 he was a member of the professional staff of Schlumberger-Doll Research in Ridgefield, Connecticut. Since 1991 he has been a professor in the Department of Applied Mathematics at the University of Colorado Boulder. He was a visiting professor at Yale University, the University of Minnesota, and the Mittag-Leffler Institute and participated in 2012 and 2015 in the summer seminar on "Applied Harmonic Analysis and Sparse Approximation" at Oberwolfach. He is the author or co-author of over 100 articles in refereed journals and has served on several editorial boards. Awards and honors 1998 — Invited Speaker of the International Congress of Mathematicians 2012 — Fellow of the American Mathematical Society 2016 — Fellow of the Society for Industrial and Applied Mathematics Patents See also References External links 1953 births Living people 20th-century Russian mathematicians 21st-century Russian mathematicians 20th-century American mathematicians 21st-century American mathematicians Applied mathematicians Fellows of the American Mathematical Society Fellows of the Society for Industrial and Applied Mathematics Saint Petersburg State University alumni Courant Institute of Mathematical Sciences alumni University of Colorado Boulder alumni
Gregory Beylkin
[ "Mathematics" ]
326
[ "Applied mathematics", "Applied mathematicians" ]
47,707,744
https://en.wikipedia.org/wiki/Shelby%20Gem%20Factory
The Shelby Gem Factory was the production facility of ICT Incorporated, a company in Shelby, Michigan, United States, that manufactured artificial gemstones through proprietary processes. ICT began operations in 1970 and closed in December 2019. History Larry Paul Kelley established ICT (International Crystal Technology) in 1970 with Craig Hardy and Tom VanBergen. Kelley had worked for Dow Chemical in Ludington and at a factory in Ann Arbor that produced laser crystals. The facility was sited in Shelby because the town had a new industrial park. By 2015, Kelley was ICT's sole owner. The Shelby Gem Factory initially produced only synthetic ruby, with ruby lasers being the principal application, primarily sold to firms in California. However, laser technology was in its infancy, and the far greater profit potential of converting ruby rods into a variety of artificial gemstones of various colors led to a change in the factory's focus. Larry Kelley built on Soviet research into cubic zirconia and became its first commercial producer, having solved issues of temperature control that had impeded its production. For a time, cubic zirconia was a lucrative product line; Shelby opened factories outside the United States to keep up with demand. However, the value of cubic zirconia soon declined to the point that it was used as fill when the factory was expanded. In 1983, ICT opened a faceting factory in southern China to create gemstones for jewelry use from the crystals produced in Shelby; this closed in 1991, and separate companies in China and South Korea were contracted to continue faceting. The South Korean market represented up to 40 percent of the factory's sales until a precipitous decline caused by the 1997 Asian financial crisis. In 1994, the factory entered the business of recrystallizing rubies, buying low-grade gems from Myanmar to be melted down in the process. A 50-seat theater ran a presentation for visitors, and jewelry was sold on site. The factory closed in 2019 after Kelley was diagnosed in 2017 with Alzheimer's disease. Other issues that contributed to the closing were worldwide competition and online markets. Larry Kelley died on October 24, 2020. Manufacturing Some of the furnaces burned at very high temperatures. Factory tours were discontinued due to liability concerns attendant to the "very high temperatures and extremely bright light" and the unavailability of affordable insurance to cover the risk. The gems were synthesized in a furnace. The Shelby Gem Factory's diamonds were simulants. The factory also manufactured simulated citrine and topaz, along with other birthstone substitutes. See also Czochralski method Skull crucible References Further reading External links Shelby Gem Factory visit ICT YouTube video on explanation of semiconductor invention for making solar cells Defunct technology companies of the United States American inventions Economy of Michigan Gemological laboratories Oceana County, Michigan Physical chemistry Science and technology in Michigan Solar cells Synthetic minerals 1970 establishments in Michigan 2019 disestablishments in Michigan Manufacturing companies based in Michigan
Shelby Gem Factory
[ "Physics", "Chemistry" ]
594
[ "Applied and interdisciplinary physics", "Synthetic materials", "nan", "Physical chemistry", "Synthetic minerals" ]
47,708,357
https://en.wikipedia.org/wiki/Crown%20flash
Crown flash is a rarely observed meteorological phenomenon caused by the effect of atmospheric electrical fluctuations on the alignment of ice crystals. It has been described as "the brightening of a thunderhead crown followed by the appearance of aurora-like streamers emanating into the clear atmosphere". The current hypothesis for why the phenomenon occurs is that sunlight is reflecting off, or refracting through, tiny ice crystals above the crown of a cumulonimbus cloud. These ice crystals are aligned by the strong electric field effects around the cloud, so the effect may appear as a tall (sometimes curved) streamer, pillar of light, or resemble a massive flash of a searchlight/flashlight beam. When the electric field is disturbed by electrical charging or discharging (typically, from lightning) within the cloud, the ice crystals are re-oriented causing the light pattern to shift in a characteristic manner, at times very rapidly and appearing to 'dance' in a strikingly mechanical fashion. The effect may also sometimes be known as a "leaping sundog" or "jumping sundog". As with sundogs, observation of the effect is dependent upon the observer's position – it is not a self-generated light such as seen in a lightning strike or aurora, but rather a changing reflection or refraction of the sunlight. Unlike sundogs, however (which are also caused by refraction of sunlight through ice crystals), these features move and realign within seconds, forming beams and loops of light, and the effect appears localised directly above the cloud rather than at some distance to the side(s) of the sun. The first scientific description of the crown flash phenomenon appears to be in the journal Monthly Weather Review in 1885, according to the Guinness Book of Records. Also mentioned in Nature in 1971 and in a letter to Nature slightly earlier in the same year, this phenomenon is regarded as uncommon and not well documented. Starting with an initial video upload in 2009, dozens of YouTube videos have since emerged that appear to document this phenomenon. See also Light pillar Sun dog Subsun References External links Short video of a crown flash Leaping Streams of Light: A new natural phenomenon? Amazing video of a bizarre, twisting, dancing cloud – Discover Magazine Lightning Atmospheric optical phenomena
Crown flash
[ "Physics" ]
462
[ "Physical phenomena", "Earth phenomena", "Optical phenomena", "Electrical phenomena", "Lightning", "Atmospheric optical phenomena" ]
47,715,057
https://en.wikipedia.org/wiki/Penicillium%20steckii
Penicillium steckii is a species of fungus in the genus Penicillium which produces citrinin, tanzawaic acid E, and tanzawaic acid F. References Further reading steckii Fungi described in 1927 Fungus species
Penicillium steckii
[ "Biology" ]
51
[ "Fungi", "Fungus species" ]
71,865,071
https://en.wikipedia.org/wiki/1%2C2%2C4%2C5-Tetrachloro-3-nitrobenzene
1,2,4,5-Tetrachloro-3-nitrobenzene (tecnazene) is an organic compound with the formula C6HCl4NO2. It is a colorless solid. A related isomer is 1,2,3,4-tetrachloro-5-nitrobenzene. It is used as a standard for quantitative analysis by nuclear magnetic resonance. 1,2,4,5-Tetrachloro-3-nitrobenzene is also a fungicide used to prevent dry rot and sprouting on potatoes during storage. References Fungicides Analytical standards Nitrobenzene derivatives Chlorobenzene derivatives
1,2,4,5-Tetrachloro-3-nitrobenzene
[ "Chemistry", "Biology" ]
144
[ "Fungicides", "Organic compounds", "Analytical standards", "Biocides", "Organic compound stubs", "Organic chemistry stubs" ]
71,870,181
https://en.wikipedia.org/wiki/International%20System%20Safety%20Society
The International System Safety Society (ISSS) is a non-profit professional organization for system safety engineers. ISSS was established in 1963 to support the development of system safety as a distinct engineering discipline. ISSS has local chapters in several states across the United States, as well as in Singapore and Canada. The society currently has members from over 25 countries across the world. History The event recognized as the founding of the Society occurred on December 4, 1963, in the main lecture hall at the School of Aviation Safety on the University of Southern California campus in Los Angeles. The gathering consisted of about 40 individuals, including many students and others from the USAF Aerospace Safety Center, some USC faculty members, along with system safety representatives of the numerous aerospace companies located in the area. Events Since the first event in 1972, the society has sponsored the annual International System Safety Conference. The society also sponsors annual member awards which are presented at the annual awards banquet during the International System Safety Conference. In addition, the society and local chapters organize webinars and symposia for society members. ISSS is also one of the sponsoring societies for the annual Reliability and Maintainability Symposium. The society is also a sponsor of the Board of Certified Safety Professionals (BCSP) Global Learning Summit. Publications The society publishes the Journal of System Safety, a triannual peer-reviewed academic journal, as well as periodic member newsletters and formerly the System Safety Analysis Handbook. The journal was established in 1965 as Hazard Prevention and obtained its current name in 1999. It is considered one of the important journals in the field of reliability and safety and is one of the oldest in continuous publication. The journal seeks to advance the discipline of system safety across a wide range of application domains, including aerospace, automotive, nuclear power, and military applications. The editor-in-chief is Charles Muniak (Syracuse Safety Research). In 2022 the journal began transitioning to a gold open access publishing model with no article processing charges. See also System safety Safety engineering Risk management References International organizations based in the United States Engineering organizations Engineering societies Professional associations
International System Safety Society
[ "Engineering" ]
423
[ "Engineering societies", "nan" ]
71,872,047
https://en.wikipedia.org/wiki/Tripartite%20symbiosis
Tripartite symbiosis is a type of symbiosis involving three species. This can include any combination of plants, animals, fungi, bacteria, or archaea, often in interkingdom symbiosis. Ants Fungus-growing ants Ants of the tribe Attini cultivate fungi. Microfungi, specialized to be parasites of the fungus gardens, coevolved with them. Allomerus-Hirtella-Trimmatostroma Allomerus decemarticulatus ants use Trimmatostroma sp. to create structures within Hirtella physophora. The fungi are connected endophytically and actively transfer nitrogen. Lichen The mycobiont in a lichen can form a relationship with both cyanobacteria and green algae as photobionts concurrently. Legumes Rhizobia are nitrogen-fixing bacteria that form symbiotic relationships with legumes. Sometimes, this is aided by the presence of a fungal species. This is most effective in undisturbed soil. The presence of mycorrhizae can improve the rhizobial-liquorice nutrient transfer in droughts. Soybeans in particular can improve their ability to withstand soil salinity in the presence of both rhizobium and mycorrhizae. References Symbiosis
Tripartite symbiosis
[ "Biology" ]
280
[ "Biological interactions", "Behavior", "Symbiosis" ]
56,214,154
https://en.wikipedia.org/wiki/Intensity%20measure
In probability theory, an intensity measure is a measure that is derived from a random measure. The intensity measure is a non-random measure and is defined as the expectation value of the random measure of a set, hence it corresponds to the average volume the random measure assigns to a set. The intensity measure contains important information about the properties of the random measure. A Poisson point process, interpreted as a random measure, is for example uniquely determined by its intensity measure. Definition Let $\mu$ be a random measure on the measurable space $(S, \mathcal{A})$ and denote the expected value of a random element $Y$ with $\operatorname{E}[Y]$. The intensity measure $\operatorname{E}\mu$ of $\mu$ is defined as $\operatorname{E}\mu(A) = \operatorname{E}[\mu(A)]$ for all $A \in \mathcal{A}$. Note the difference in notation between the expectation value of a random element $Y$, denoted by $\operatorname{E}[Y]$, and the intensity measure of the random measure $\mu$, denoted by $\operatorname{E}\mu$. Properties The intensity measure $\operatorname{E}\mu$ is always s-finite and satisfies $\operatorname{E}\left[\int f(x)\, \mu(\mathrm{d}x)\right] = \int f(x)\, (\operatorname{E}\mu)(\mathrm{d}x)$ for every positive measurable function $f$ on $S$. References Measures (measure theory) Probability theory
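As a concrete illustration (not from the article), the following NumPy sketch estimates the intensity measure of a homogeneous Poisson point process on the unit square by averaging the random counting measure over many realizations; for rate $\lambda$, the exact value is $\operatorname{E}\mu(A) = \lambda \cdot \text{area}(A)$.
```python
# Monte Carlo illustration: the intensity measure of a homogeneous Poisson
# point process on [0,1]^2 with rate lam assigns lam * area(A) to a set A.
import numpy as np

rng = np.random.default_rng(0)
lam = 50.0                       # rate of the Poisson point process
x0, x1, y0, y1 = 0.2, 0.5, 0.1, 0.4   # A = [0.2, 0.5] x [0.1, 0.4]

def mu_of_A(rng):
    """One realization of mu(A): the number of points falling in A."""
    n = rng.poisson(lam)                 # total number of points
    pts = rng.random((n, 2))             # uniform positions on the unit square
    inside = (pts[:, 0] >= x0) & (pts[:, 0] <= x1) & \
             (pts[:, 1] >= y0) & (pts[:, 1] <= y1)
    return inside.sum()

estimate = np.mean([mu_of_A(rng) for _ in range(20_000)])
exact = lam * (x1 - x0) * (y1 - y0)      # lam * area(A) = 4.5
print(estimate, exact)                   # the estimate is close to 4.5
```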
Intensity measure
[ "Physics", "Mathematics" ]
184
[ "Measures (measure theory)", "Quantity", "Physical quantities", "Size" ]
56,215,685
https://en.wikipedia.org/wiki/C8H10N2O3S
The molecular formula C8H10N2O3S (molar mass: 214.242 g/mol) may refer to: Diazald, or N-methyl-N-nitroso-p-toluenesulfonamide Sulfacetamide Molecular formulas
C8H10N2O3S
[ "Physics", "Chemistry" ]
76
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,042,842
https://en.wikipedia.org/wiki/PathoPhenoDB
PathoPhenoDB is a biological database. The database connects pathogens to their phenotypes using multiple resources such as NCBI, Human Disease Ontology, Human Phenotype Ontology, Mammalian Phenotype Ontology, PubChem, SIDER, and CARD. Pathogen-disease associations were gathered mainly through the CDC and the List of Infectious Diseases page on Wikipedia. Taxonomy was assigned semi-automatically: when mapped against the NCBI Taxonomy, if a pathogen was not an exact match, it was mapped to the parent class. PathoPhenoDB employs NPMI (normalized pointwise mutual information) to filter pairs based on their co-occurrence statistics. See also Antimicrobial Resistance databases References Biological databases Bioinformatics Pathogen genomics
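PathoPhenoDB's exact counting pipeline is not reproduced here; the sketch below shows only the standard NPMI formula applied to hypothetical document co-occurrence counts, which is the usual way such a score is used to filter text-mined pairs.
```python
# Standard NPMI (normalized pointwise mutual information) on co-occurrence
# counts; the database's actual computation may differ in its details.
import math

def npmi(n_xy, n_x, n_y, n_total):
    """NPMI of a pathogen-phenotype pair.

    n_xy: documents mentioning both terms; n_x, n_y: documents mentioning
    each term; n_total: total documents. Returns a value in [-1, 1],
    where 1 means the terms always co-occur.
    """
    if n_xy == 0:
        return -1.0
    p_xy = n_xy / n_total
    p_x, p_y = n_x / n_total, n_y / n_total
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / (-math.log(p_xy))

# Keep only strongly associated pairs, e.g. NPMI above a chosen threshold.
print(npmi(n_xy=40, n_x=100, n_y=80, n_total=10_000))  # ~0.71: strong association
```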
PathoPhenoDB
[ "Engineering", "Biology" ]
154
[ "Biological engineering", "Bioinformatics", "Molecular genetics", "DNA sequencing", "Biological databases", "Pathogen genomics" ]
61,043,737
https://en.wikipedia.org/wiki/Degenerate%20Higher-Order%20Scalar-Tensor%20theories
Degenerate Higher-Order Scalar-Tensor theories (or DHOST theories) are theories of modified gravity. They have a Lagrangian containing second-order derivatives of a scalar field but do not generate ghosts (kinetic excitations with negative kinetic energy), because they only contain one propagating scalar mode (as well as the two usual tensor modes). History DHOST theories were introduced in 2015 by David Langlois and Karim Noui. They are a generalisation of Beyond Horndeski (or GLPV) theories, which are themselves a generalisation of Horndeski theories. The equations of motion of Horndeski theories contain only two derivatives of the metric and the scalar field, and it was believed that only equations of motion of this form would not contain an extra scalar degree of freedom (which would lead to unwanted ghosts). However, it was first shown that a class of theories now named Beyond Horndeski also avoided the extra degree of freedom. Originally theories which were quadratic in the second derivative of the scalar field were studied, but DHOST theories up to cubic order have now been studied. A well-known specific example of a DHOST theory is mimetic gravity, introduced in 2013 by Chamseddine and Mukhanov. Action All DHOST theories depend on a scalar field $\phi$. The general action of DHOST theories, up to cubic order in second derivatives of the scalar field, is given by $S = \int \mathrm{d}^4x \, \sqrt{-g} \left[ f_0(X,\phi) + f_1(X,\phi)\,\Box\phi + f_2(X,\phi)\,R + C_{(2)}^{\mu\nu\rho\sigma}\,\phi_{\mu\nu}\phi_{\rho\sigma} + f_3(X,\phi)\,G_{\mu\nu}\phi^{\mu\nu} + C_{(3)}^{\mu\nu\rho\sigma\alpha\beta}\,\phi_{\mu\nu}\phi_{\rho\sigma}\phi_{\alpha\beta} \right]$, where $X = \nabla_\mu\phi\,\nabla^\mu\phi$ is the kinetic energy of the scalar field, $\phi_\mu = \nabla_\mu\phi$, $\phi_{\mu\nu} = \nabla_\nu\nabla_\mu\phi$, $R$ is the Ricci scalar and $G_{\mu\nu}$ the Einstein tensor. The quadratic terms in $\phi_{\mu\nu}$ are given by $C_{(2)}^{\mu\nu\rho\sigma}\,\phi_{\mu\nu}\phi_{\rho\sigma} = \sum_{A=1}^{5} a_A(X,\phi)\, L^{(2)}_A$, where $L^{(2)}_1 = \phi_{\mu\nu}\phi^{\mu\nu}$, $L^{(2)}_2 = (\Box\phi)^2$, $L^{(2)}_3 = (\Box\phi)\,\phi^\mu\phi_{\mu\nu}\phi^\nu$, $L^{(2)}_4 = \phi^\mu\phi_{\mu\rho}\phi^{\rho\nu}\phi_\nu$, $L^{(2)}_5 = (\phi^\mu\phi_{\mu\nu}\phi^\nu)^2$, and the cubic terms are given by $C_{(3)}^{\mu\nu\rho\sigma\alpha\beta}\,\phi_{\mu\nu}\phi_{\rho\sigma}\phi_{\alpha\beta} = \sum_{A=1}^{10} b_A(X,\phi)\, L^{(3)}_A$, where the $L^{(3)}_A$ are the ten independent cubic contractions of $\phi_{\mu\nu}$ with $\phi_\mu$. The $a_A$ and $b_A$ are arbitrary functions of $X$ and $\phi$. References Theories of gravity General relativity
Degenerate Higher-Order Scalar-Tensor theories
[ "Physics" ]
333
[ "General relativity", "Theoretical physics", "Theory of relativity", "Theories of gravity" ]
66,034,897
https://en.wikipedia.org/wiki/Nucleoside-modified%20messenger%20RNA
A nucleoside-modified messenger RNA (modRNA) is a synthetic messenger RNA (mRNA) in which some nucleosides are replaced by other naturally modified nucleosides or by synthetic nucleoside analogues. modRNA is used to induce the production of a desired protein in certain cells. An important application is the development of mRNA vaccines, of which the first authorized were COVID-19 vaccines (such as Comirnaty and Spikevax). Background mRNA is produced by synthesising a ribonucleic acid (RNA) strand from nucleotide building blocks according to a deoxyribonucleic acid (DNA) template, a process that is called transcription. When the building blocks provided to the RNA polymerase include non-standard nucleosides such as pseudouridine (instead of the standard adenosine, cytidine, guanosine, and uridine nucleosides), the resulting mRNA is described as nucleoside-modified. Production of protein begins with assembly of ribosomes on the mRNA, the latter then serving as a blueprint for the synthesis of proteins by specifying their amino acid sequence based on the genetic code in the process of protein biosynthesis called translation. Overview To induce cells to make proteins that they do not normally produce, it is possible to introduce heterologous mRNA into the cytoplasm of the cell, bypassing the need for transcription. In other words, a blueprint for foreign proteins is "smuggled" into the cells. To achieve this goal, however, one must bypass cellular systems that prevent the penetration and translation of foreign mRNA. There are nearly ubiquitous enzymes called ribonucleases (also called RNases) that break down unprotected mRNA. There are also intracellular barriers against foreign mRNA, such as innate immune system receptors, toll-like receptor (TLR) 7 and TLR8, located in endosomal membranes. RNA sensors like TLR7 and TLR8 can dramatically reduce protein synthesis in the cell, trigger release of cytokines such as interferon and TNF-alpha, and, when sufficiently intense, lead to programmed cell death. The inflammatory nature of exogenous RNA can be masked by modifying the nucleosides in mRNA. For example, uridine can be replaced with a similar nucleoside such as pseudouridine (Ψ) or N1-methyl-pseudouridine (m1Ψ), and cytosine can be replaced by 5-methylcytosine. Some of these, such as pseudouridine and 5-methylcytosine, occur naturally in eukaryotes, while m1Ψ occurs naturally in archaea. Inclusion of these modified nucleosides alters the secondary structure of the mRNA, which can reduce recognition by the innate immune system while still allowing effective translation. Significance of untranslated regions A normal mRNA starts and ends with sections that do not code for amino acids of the actual protein. These sequences at the 5′ and 3′ ends of an mRNA strand are called untranslated regions (UTRs). The two UTRs at their strand ends are essential for the stability of an mRNA and also of a modRNA, as well as for the efficiency of translation, i.e. for the amount of protein produced. By selecting suitable UTRs during the synthesis of a modRNA, the production of the target protein in the target cells can be optimised. Delivery Various difficulties are involved in the introduction of modRNA into certain target cells. First, the modRNA must be protected from ribonucleases. This can be accomplished, for example, by wrapping it in liposomes. Such "packaging" can also help to ensure that the modRNA is absorbed into the target cells.
This is useful, for example, when used in vaccines, as nanoparticles are taken up by dendritic cells and macrophages, both of which play an important role in activating the immune system. Furthermore, it may be desirable that the modRNA applied is introduced into specific body cells. This is the case, for example, if heart muscle cells are to be stimulated to multiply. In this case, the packaged modRNA can be injected directly into an artery such as a coronary artery. Applications An important field of application is mRNA vaccines. Replacing uridine with pseudouridine to evade the innate immune system was pioneered by Karikó and Weissman in 2005. They won the 2023 Nobel Prize in Physiology or Medicine as a result of their work. Another milestone was achieved in 2011, when the team of Kormann and others demonstrated the life-saving efficacy of nucleoside-modified mRNA in a mouse model of a lethal lung disease. N1-methyl-pseudouridine was used in vaccine trials against Zika, HIV-1, influenza, and Ebola in 2017–2018. The first modRNA products authorized for use in humans were COVID-19 vaccines to address SARS-CoV-2. Examples of COVID-19 vaccines using modRNA include those developed by the cooperation of BioNTech/Pfizer (BNT162b2), and by Moderna (mRNA-1273). The zorecimeran vaccine developed by Curevac, however, uses unmodified mRNA, instead relying on codon optimization to minimize the presence of uridine (a toy illustration of such codon optimization is given after this article's text). This vaccine proved less effective, however. Other possible uses of modRNA include the regeneration of damaged heart muscle tissue, enzyme-replacement therapy, and cancer therapy. References Further reading RNA Molecular genetics Life sciences industry
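The codon-optimization strategy mentioned above lends itself to a simple illustration. The following minimal sketch is illustrative only, not any developer's actual pipeline; the truncated codon table and the toy peptide are assumptions chosen for brevity. It picks, for each amino acid, the synonymous codon containing the fewest uridines:

```python
# Illustrative sketch: choosing synonymous codons to minimize the uridine
# content of an mRNA coding sequence. The codon table is deliberately
# truncated to a few amino acids; a real tool would cover all twenty.

SYNONYMOUS = {  # amino acid -> candidate codons (written as RNA)
    "F": ["UUU", "UUC"],
    "L": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],
    "K": ["AAA", "AAG"],
    "G": ["GGU", "GGC", "GGA", "GGG"],
}

def low_uridine_cds(protein: str) -> str:
    """Return a coding sequence that minimizes the number of U residues."""
    codons = []
    for aa in protein:
        candidates = SYNONYMOUS[aa]
        # Pick the synonymous codon with the fewest uridines.
        codons.append(min(candidates, key=lambda c: c.count("U")))
    return "".join(codons)

if __name__ == "__main__":
    peptide = "FLKG"  # toy peptide for demonstration
    cds = low_uridine_cds(peptide)
    print(cds, "uridines:", cds.count("U"))
```

Real codon optimization also weighs codon usage bias, GC content and mRNA secondary structure; minimizing uridine is only one of several competing objectives.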
Nucleoside-modified messenger RNA
[ "Chemistry", "Biology" ]
1,156
[ "Molecular genetics", "Life sciences industry", "Molecular biology" ]
66,038,122
https://en.wikipedia.org/wiki/High%20performance%20positioning%20system
A high performance positioning system (HPPS) is a type of positioning system consisting of a piece of electromechanical equipment (e.g. an assembly of linear stages and rotary stages) that is capable of moving an object in a three-dimensional space within a work envelope. Positioning can be done point to point or along a desired path of motion. Position is typically defined in six degrees of freedom, including linear position in an x, y, z cartesian coordinate system and angular orientation in yaw, pitch and roll. HPPS are used in many manufacturing processes to move an object (tool or part) smoothly and accurately in six degrees of freedom, along a desired path, at a desired orientation, with high acceleration, high deceleration, high velocity and low settling time. An HPPS is designed to quickly stop its motion and accurately place the moving object at its desired final position and orientation with minimal jitter. An HPPS requires structural characteristics of low moving mass and high stiffness; the resulting system characteristic is a high value for the lowest natural frequency of the system. A high natural frequency allows the motion controller to drive the system at high servo bandwidth, which means that the HPPS can reject motion-disturbing inputs acting at frequencies below the bandwidth. For higher-frequency disturbances such as floor vibration, acoustic noise, motor cogging, bearing jitter and cable-carrier rattling, an HPPS may employ structural composite materials for damping and isolation mounts for vibration attenuation. Unlike articulating robots, which have revolute joints connecting their links, HPPS links typically consist of sliding joints, which are relatively stiffer than revolute joints. That is the reason why high performance positioning systems are often referred to as cartesian robots. Performance HPPS, driven by linear motors, can move at a combined high velocity on the order of 3-5 m/s, with high accelerations of 5-7 g, at micron or sub-micron positioning accuracy, with settling times on the order of milliseconds and a servo bandwidth of 30-50 Hz. Ball screw actuators, on the other hand, have a typical bandwidth of 10-20 Hz, and belt-driven actuators about 5-10 Hz. The bandwidth of an HPPS is about 1/3 of its lowest natural frequency, which is in the range of 90-150 Hz. Settling to +/- 1% of constant velocity, or to +/- 1 um jitter, after high acceleration or high deceleration respectively, takes an estimated 3 bandwidth periods. For example, a 50 Hz servo bandwidth, having a 1/50 · 1000 = 20 msec period, will settle to 1 um position accuracy within an estimated 3 · 20 = 60 msec (a short numerical sketch is given below). The lowest natural frequency equals the square root of system stiffness divided by moving inertia. A typical linear recirculating bearing rail of a high performance positioning stage has a stiffness on the order of 100-300 N/um. Such performance is required in semiconductor process equipment, electronics assembly lines, numerically controlled machine tools, coordinate-measuring machines, 3D printing, pick-and-place machines, drug discovery assaying and many more. At their highest performance, HPPS may use a granite base for thermal stability and flat surfaces, air bearings for jitter-free motion, brushless linear motors for non-contact, frictionless actuation with high force and low inertia, and a laser interferometer for sub-micron position feedback. On the other hand, a typical 6 degrees of freedom articulated robot, with 1 m reach, has a structural stiffness on the order of 1 N/um. 
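The bandwidth and settling-time arithmetic in the Performance passage above can be captured in a few lines. A minimal sketch follows; the stiffness and moving-mass figures are assumed values chosen only to land in the ranges quoted in the text, not specifications of any real product:

```python
import math

# Assumed figures, chosen to mirror the arithmetic in the text above.
stiffness = 100e6     # system stiffness k in N/m (i.e. 100 N/um)
moving_mass = 180.0   # moving inertia m in kg (assumed value)

# Lowest natural frequency: f_n = sqrt(k/m) / (2*pi), in Hz.
f_natural = math.sqrt(stiffness / moving_mass) / (2 * math.pi)

# Rule of thumb from the text: servo bandwidth ~ 1/3 of the lowest
# natural frequency.
f_bandwidth = f_natural / 3

# Settling takes an estimated 3 bandwidth periods.
settling_time_ms = 3 * (1 / f_bandwidth) * 1000

print(f"natural frequency ~ {f_natural:.0f} Hz")   # ~119 Hz
print(f"servo bandwidth   ~ {f_bandwidth:.0f} Hz") # ~40 Hz
print(f"settling time     ~ {settling_time_ms:.0f} ms")  # ~76 ms
```

With these assumed numbers the sketch reproduces the text's orders of magnitude: a natural frequency in the 90-150 Hz range, a bandwidth in the 30-50 Hz range, and a settling time of tens of milliseconds.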
Because of this relatively low stiffness, articulated robots are best employed as automation equipment in processes which require position repeatability on the order of hundreds of microns, such as robot welding, paint robots, palletizers and many more. History The original HPPS were developed at Anorad Corporation (now Rockwell Automation) in the 1980s, after the invention of brushless linear motors by Anorad's founder and CEO, Anwar Chitayat. Initially, HPPS were used for high-precision manufacturing processes in semiconductor equipment (Applied Materials), PCB inspection (Orbotech) and high-velocity machine tools (Ford). In parallel, linear motor technology and its integration into HPPS expanded around the world. As a result, in 1996 Siemens integrated its CNC with Anorad linear motors to drive a 20 m long maskant machine at Boeing for chemical milling of aircraft wings. In 1997, FANUC licensed Anorad's linear motor technology and integrated it as a complete solution with their CNC product line. And in 1998, Rockwell Automation acquired Anorad to compete with Siemens and FANUC in providing complete linear-motor solutions to drive high-velocity machine tools in automotive transfer lines. Today linear motors are used in hundreds of thousands of high performance positioning systems, which drive manufacturing processes around the world. Their market is expected to grow, according to some studies, at 4.4% a year and reach $1.5B in 2025. System requirements Applications Semiconductors - Photolithography is a wafer (electronics) manufacturing process in semiconductor fabrication plants. It uses linear motor stages or maglev stages, for extreme positioning, to move its wafer stage. Electronics - Surface-mount technology uses high performance, linear motor positioning systems to mount integrated circuit chips on printed circuit boards. Optics - Stereo microscopes use linear motor positioning stages for high smoothness of motion during scanning. Machine tools - Wire electrical discharge machining is used for cutting thick hard metals such as in Die (manufacturing). Linear motor / air bearing positioning systems provide high smoothness of motion. CMM - Coordinate-measuring machines often require a granite base, isolation mounts, linear motor actuators, air bearings and a laser interferometer. Lab automation - The high-throughput screening process is used in laboratory automation for drug discovery, where linear motor positioning provides high acceleration/deceleration with short settling time. Specifications A system specification (technical standard) is an official interface between the application requirements (problem), as described by the user (customer), and the design (solution), as optimized by the developer (supplier). Inertia - Indicates the resistance of the moving load (tool or part) to linear (kg) and angular (kg·m2) change in velocity. To maximize natural frequency, the inertia of the moving load should be minimal. Size - Indicates the geometrical constraints of the system's width (m), length (m) and height (m), as may be needed for handling, transport and installation. Motion - Indicates process cycle time (s) and process constraints for each degree of freedom, including maximum travel (m, rad), maximum velocity (m/s, rad/s) and maximum acceleration/deceleration (m/s2, rad/s2). Precision - Indicates linear and angular resolution of position measurement and motion (um, urad) as well as total indicator reading of accuracy and precision for each degree of freedom. 
Jitter - Indicates the maximum amplitude (um) of high-frequency vibrations allowed at standstill conditions. Constant velocity - Indicates the required smoothness of motion and the allowed variations (+/- %) of the required constant velocity (m/s, rad/s) during motion. Stiffness - Indicates the resistance to position change in response to external load (N/um, N·m/rad). Life - Indicates the expected time (hrs) or travel (km) over which the most active degree of freedom of the system is expected to act reliably in process operation. Reliability - Mean time between failures (hrs, cycles), often associated with a requirement for a failure modes, effects, and diagnostic analysis. Maintainability - Mean time to repair (hrs), often associated with system manuals including operation, maintenance schedule and spare parts list. Environment - Indicates the expected disturbance conditions that the system may encounter during operation within its lifetime, including thermal, humidity, shock and vibration, cleanliness and radiation. Environment Thermal - Indicates the highest and lowest temperature (°C) that the system may endure during operation. Affects structural deformations and precision. May require cooling, insulation and low thermal conductivity material. Humidity - Indicates the level of water vapor in the surrounding air (%). May include the required system protection based on IP Code. May require protective seals. Shock (mechanics) and vibration - Indicates the level of floor vibration and other process disturbances. May require active or passive vibration isolation mounts and structural material with high damping. Cleanliness - Indicates the allowable level (size and quantity per unit volume) of particles in the surrounding air. May require cleanroom operation, filtration of incoming air and protective seals. Radiation - Electromagnetic interference may require shielded cable management, non-ferrous structural material and protective shields for the linear motor magnet plates. System solution Configuration HPPS configuration is typically optimized for maximum structural stiffness with maximum damping and minimum inertia, smallest Abbe error at the point of interest (POI), with minimum components and maximum maintainability. X - A single linear stage, driven by a linear motor, ball screw or timing belt, is typically available as a standard actuator (aka slide, axis or table) from many suppliers. XYZ - A customized assembly of single stages, including moving cable management. The Z axis is typically actuated with a ball screw or a linear motor with a counterbalance. Axes may be separated to reduce inertia. XYZR - Rotational axes, including pitch, yaw and roll, are typically added in HPPS for orienting the end of arm tool (EOAT) or robot end effector. Gantry - The gantry configuration provides maximum work envelope in an XYZ configuration per given size constraints. It has 2 parallel axes for x, controlled as a single axis or as master/slave. Ideal for transfer lines. Rotary (pitch, yaw, roll) - Rotary stages may be combined with linear stages in various orders to best meet the specifications. They typically use a direct-drive mechanism, analogous to linear motors. Custom - Custom configurations of HPPS may be required in the mathematical optimization process of integrating the best system components into the most compact and responsive system. 
System analysis System analysis is the process of understanding the relationships between design parameters, operating conditions, environmental variables and system performance, based on system modeling and analysis tools. Dynamics (mechanics) - Optimizing linear motion and rotational profiles, dynamic accuracy, bearing loads, and motor power, using analysis tools such as MATLAB, Simulink, Mathcad, Microsoft Excel. Vibration analysis - Estimating natural frequency, servo bandwidth, settling times, using analysis tools such as MATLAB, Simulink, Mathcad, Microsoft Excel. Accuracy and precision - Estimating 3D static errors at the point of interest as a function of axes straightness, flatness, pitch, yaw, roll, wobble and run-outs, using analysis tools such as Mathcad, Microsoft Excel. Strength of materials - Estimating stiffness of axes, frames, supports and mounting structures, using finite element method tools such as AutoCAD, Nastran. Thermal expansion - Predicting thermal expansions and optimizing heat transfer using insulation and cooling, using analysis tools such as AutoCAD, Nastran. Fluid dynamics - Estimating flow rates and supply pressures for fluid actuators and cooling, using analysis tools such as computational fluid dynamics. Servo control - Estimating required filters and tuning parameters for the PID controller loops of system axes, using analysis tools such as MATLAB, Simulink. Reliability engineering - Estimating system mean time between failures, using analysis tools such as Mathcad, Microsoft Excel. Component sizing Component sizing is the process of selecting standard parts from component suppliers, or designing a custom part for manufacturing. Frame - Typically made of aluminum or steel weldments of hollow tubes, possibly filled with concrete composite for damping. Mounted on leveling pads and secured to the floor, possibly with earthquake posts. Actuator slide, base - High-precision bases use granite for flatness and thermal stability. Lower-precision standard stages use extruded aluminum. Custom stages typically use ribbed aluminum or stainless steel machined for low inertia and high stiffness. bearing - Options include cross roller bearings for relatively short travel, recirculating bearings for higher stiffness and longer travel, and air bearings with a granite base for high smoothness of motion and higher precision. servo motor - Typically a linear brushless DC electric motor for horizontal axes, with 3-phase synchronized current in a moving coil and the field provided by stationary, low-cogging magnet plates. For vertical linear motor axes a counterbalance may be used. Rotary stages use a similar two-part direct-drive motor, comprising a stationary coil armature and a moving magnetic rotor. feedback - Typically a high-resolution encoder: optical, magnetic or capacitive, analog or digital, linear or rotary, absolute or incremental encoder with an index mark for homing. Laser interferometry is used for long-travel, sub-micron precision. forcer - Forcer options include ball screws for high force, rack and pinion for long travel and timing belt drives for high velocity. Their limitations in HPPS are friction, jitter, backlash, lower stiffness and maintenance. cable management - For power and signal transmission; the weakest link of the system reliability chain. A lower bend radius for a low profile increases fatigue. Requires a cable carrier or the use of flat ribbon cable. Introduces jitter. 
Accessories - Hard stops, stiffening brackets, bellows, shock mounts, air cooling fans, limits, flexures, robot end effector grippers, machine vision cameras, and sensors for artificial intelligence, machine learning and monitoring, such as accelerometers, temperature sensors and gyroscopes. servo drive - Amplifies motion control signals to drive servo motors, ranging from low power to tens of kW; for example, 40 kW to drive a high-force linear motor delivering 10,000 N at 4 m/s. DC voltage ranges from a safe 24 V / 48 V to over 400 V. High current-loop update rates for motor signals are on the order of thousands of Hz. A popular network communication with the motion controller is EtherCAT. Motion PID controller - Options include computer numerical control (CNC), single axis, multi axis, PC based, stand alone, or integrated with the servo drive and/or PLC, including I/O, auto tuning, diagnostics and programming, available from multiple sources. Programmable logic controller (PLC) - A higher layer in the hierarchical control system for process sequence control, provided by many suppliers. System testing System testing is an iterative process of system development, intended to validate system analysis modeling, proofs of concept, safety factors on performance specifications and acceptance testing. Motion travel, maximum velocity, maximum acceleration, jerk - Commonly provided within the motion controller. Linear accuracy, repeatability, flatness, straightness, yaw, pitch - Laser interferometer. Angular accuracy, repeatability, runout, wobble - Autocollimator. Cycle time, settling time, jitter, constant velocity - Commonly provided within the motion controller. Thermal stability - Temperature sensors mounted at multiple locations to observe heat transfer. Inertia, stiffness - Scales, dial indicators and force gauges. Natural frequency, servo bandwidth - Commonly provided within the motion controller, which has frequency response tools. Mean time between failures, life test - Non-stop operation for a specified period without a failure, in extreme operating conditions, under continuous monitoring with frequent visual and sensor checking. References Further reading Positioning instruments Motion control Electric motors Electromechanical engineering Systems engineering
High performance positioning system
[ "Physics", "Technology", "Engineering" ]
3,200
[ "Systems engineering", "Physical phenomena", "Engines", "Electric motors", "Automation", "Motion (physics)", "Electromechanical engineering", "Mechanical engineering by discipline", "Motion control", "Electrical engineering" ]
66,041,273
https://en.wikipedia.org/wiki/Methylcitrate%20cycle
The methylcitrate cycle, or the MCC, is the mechanism by which propionyl-CoA, generated by β-oxidation of odd-chain fatty acids, is broken down to its final products, succinate and pyruvate. The methylcitrate cycle is closely related to both the citric acid cycle and the glyoxylate cycle, in that they share substrates, enzymes and products. The methylcitrate cycle functions overall to detoxify bacteria of toxic propionyl-CoA, and plays an essential role in propionate metabolism in bacteria. Incomplete propionyl-CoA metabolism may lead to the buildup of toxic metabolites in bacteria, and thus the function of the methylcitrate cycle is an important biological process. History 2-methylisocitric acid, an intermediate of the methylcitrate cycle, was first synthesized in 1886 as a mixture of four isomers. The pathway of the methylcitrate cycle was not discovered until 1973 in fungi, and even then it was not fully understood. Originally, the methylcitrate cycle was thought to be present only in fungal species, such as Candida lipolytica and Aspergillus nidulans. In 1999, it was discovered that the methylcitrate cycle was also present in the bacteria Salmonella enterica and Escherichia coli. Much research has been done on the methylcitrate cycle's role in the development and function of various fungi and strains of bacteria, as well as its virulent properties in conjunction with the glyoxylate cycle. Steps There are three basic steps in the methylcitrate cycle, as outlined below; a simplified scheme of the reactants, products, intermediates and enzymes is given after this article's text. The major enzymes involved in this process are methylcitrate synthase (MCS) in step one, methylcitrate dehydratase (MCD) in step two, and 2-methylisocitrate lyase (MCL) in step three. The PrpC gene, which encodes the enzyme methylcitrate synthase acting in the first step of the methylcitrate cycle, is the gene responsible for propionate metabolism in this process; without this gene, the methylcitrate cycle cannot proceed and propionyl-CoA is not metabolized. The reactions of the methylcitrate cycle both overlap and intertwine with the citric acid cycle and the glyoxylate cycle. Odd-chain fatty acids are broken down by the β-oxidation cycle to form acetyl-CoA, which is further oxidized by the citric acid cycle, and propionyl-CoA, which is oxidized by the methylcitrate cycle. The substrate oxaloacetate is generated by the citric acid and glyoxylate cycles, and the product succinate is taken from the methylcitrate cycle to be used in the citric acid cycle. Products One of the major products of the methylcitrate cycle is pyruvate. This pyruvate can be used by metabolic enzymes for energy and biomass formation. The other major product, succinate, is used in the citric acid cycle and helps to carry the reaction forward and restart the cycle. References Molecular biology
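As referenced above, the three steps can be summarized in the following simplified scheme (as commonly described for Salmonella enterica and Escherichia coli; the hydration of 2-methyl-cis-aconitate in step two is catalyzed by a separate aconitase-type hydratase, which the scheme folds into step two for brevity):

\begin{aligned}
&\text{(1)}\ \text{propionyl-CoA} + \text{oxaloacetate} + \mathrm{H_2O} \xrightarrow{\ \text{MCS (PrpC)}\ } \text{2-methylcitrate} + \text{CoA}\\
&\text{(2)}\ \text{2-methylcitrate} \xrightarrow{\ \text{MCD}\ } \text{2-methyl-}cis\text{-aconitate} \xrightarrow{\ +\,\mathrm{H_2O}\ } \text{2-methylisocitrate}\\
&\text{(3)}\ \text{2-methylisocitrate} \xrightarrow{\ \text{MCL}\ } \text{pyruvate} + \text{succinate}
\end{aligned}

The succinate leaving step three feeds the citric acid cycle, which regenerates the oxaloacetate consumed in step one, closing the cycle.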
Methylcitrate cycle
[ "Chemistry", "Biology" ]
716
[ "Biochemistry", "Molecular biology" ]
66,044,705
https://en.wikipedia.org/wiki/Rothalpy
Rothalpy (or trothalpy), short for rotational stagnation enthalpy, is a fluid mechanical property of importance in the study of flow within rotating systems. Concept Consider an inertial frame of reference K and a rotating frame of reference K' sharing a common origin O. Assume that frame K' rotates around a fixed axis with angular velocity \boldsymbol{\Omega}. Denote the fluid velocity by \mathbf{v} and the fluid velocity relative to the rotating frame of reference by \mathbf{w}. The rothalpy I of a fluid point can be defined as

I = h + \tfrac{1}{2}w^2 - \tfrac{1}{2}U^2,

where \mathbf{w} = \mathbf{v} - \mathbf{U}, \mathbf{U} = \boldsymbol{\Omega} \times \mathbf{r} is the local velocity of the rotating frame (the blade speed), and h_{0,\mathrm{rel}} = h + \tfrac{1}{2}w^2 is the stagnation enthalpy of the fluid point relative to the rotating frame of reference K', known as the relative stagnation enthalpy. Rothalpy can also be defined in terms of the absolute stagnation enthalpy h_0 = h + \tfrac{1}{2}v^2:

I = h_0 - U v_\theta,

where v_\theta is the tangential component of the fluid velocity \mathbf{v}. Applications Rothalpy has applications in turbomachinery and in the study of relative flows in rotating systems. One such application is that for steady, adiabatic and irreversible flow in a turbomachine, the value of rothalpy across a blade remains constant along a flow streamline, so the Euler equation of turbomachinery can be written in terms of rothalpy. This form of the Euler work equation shows that, for rotating blade rows, the relative stagnation enthalpy is constant through the blades provided the blade speed is constant. In other words, h_{0,\mathrm{rel}} = \text{constant} if the radius of a streamline passing through the blades stays the same. This result is important for analyzing turbomachinery flows in the relative frame of reference. Naming The function was first introduced by Wu (1952) and has acquired the widely used name rothalpy, a compound word combining the terms rotation and enthalpy. However, its construction does not conform to the established rules for the formation of new words in the English language, namely that the roots of the new word originate from the same language. The word trothalpy satisfies this requirement, as trochos is the Greek root for wheel and enthalpy is Greek for "to put heat in", whereas rotation is derived from the Latin rotare. See also Stagnation enthalpy Euler's pump and turbine equation References Fluid dynamics Enthalpy
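A short worked consequence of the constancy of I stated in the Applications section above: evaluating I = h_0 - U v_\theta at the rotor inlet (station 1) and outlet (station 2) of a streamline recovers the Euler work equation,

I_1 = I_2 \;\implies\; h_{01} - U_1 v_{\theta 1} = h_{02} - U_2 v_{\theta 2} \;\implies\; \Delta h_0 = h_{02} - h_{01} = U_2 v_{\theta 2} - U_1 v_{\theta 1},

which is the specific work input appearing in Euler's pump and turbine equation. For a streamline at constant radius, U_1 = U_2 and the relation reduces to the constancy of the relative stagnation enthalpy noted above.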
Rothalpy
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
466
[ "Thermodynamic properties", "Physical quantities", "Chemical engineering", "Quantity", "Enthalpy", "Piping", "Fluid dynamics" ]
74,662,377
https://en.wikipedia.org/wiki/Polyimine
Polyimines are classified as polymer materials that contain imine groups, which are characterised by a double bond between a carbon and a nitrogen atom. The term polyimine is also found occasionally in the context of covalent organic frameworks (COFs). In (older) literature, polyimines are sometimes also referred to as poly(azomethine)s or polyschiffs. Synthesis Polyimines can be synthesised via a condensation reaction between aldehydes and (primary) amines, during which water is formed as a byproduct (a generic reaction scheme is given after this article's text). Often, the synthesis can be performed at room temperature, but to fully cure the materials and remove remaining water, they can be dried at slightly elevated temperatures and/or in vacuum. Applications One of the applications of polyimines is in covalent adaptable networks (CANs). These are polymer materials that are crosslinked via dynamic covalent bonds. Besides polyimines, other types of dynamic covalent chemistry can also be used. Polyimine CANs are largely investigated to create recyclable and self-healing thermoset materials, but they can also find use in higher-performance composite materials. Flame retardants Because of the free-radical-scavenging properties of imines, they are well suited for use in flame retardant materials. In addition, various polyimine materials into which phosphorus species have been incorporated have been investigated. These materials represent more sustainable and less harmful alternatives to previously used halogenated polymers. Sensory devices The dynamic character of polyimines enables them to be used as sensory devices. An example of this is the sensing of amine compounds. Polyimine materials have been constructed that enable penetration of (small) monoamine molecules. These amines can perform bond exchange reactions with the polyimine network and, as a result, reduce the crosslinking density. Consequently, the materials soften or even liquefy. The change in material properties provides a "read-out" of the presence of amines. Electronic skin Polyimines have been investigated for use in the production of electronic skins (e-skin). For this, polyimine networks were doped with conductive silver nanoparticles. The malleability of the polyimine network enables the e-skin to conform to complex or uneven surfaces without introducing excessive interfacial stresses. Bio-based polyimines Various studies have been conducted to synthesise bio-based polyimines due to the great natural abundance of aldehydes and amines. Popular sources for aldehydes include vanillin, which can be obtained from lignin, and 2,5-furandicarboxaldehyde (FDC), which can be derived from fructose. Imines in other polymers Apart from polyimine polymers that are formed directly via the condensation reaction of aldehydes and amines, it is also possible to incorporate imines into other existing polymer materials. Imines have, for example, been incorporated into recyclable epoxy-based thermosets and polyesters. See also Imine Polyimide Polyamide Vitrimers Covalent adaptable network (CAN) References Imines Polymers
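The condensation referenced in the Synthesis section follows the generic imine (Schiff base) formation,

\mathrm{R{-}CHO} + \mathrm{H_2N{-}R'} \rightleftharpoons \mathrm{R{-}CH{=}N{-}R'} + \mathrm{H_2O},

where R and R' stand for generic organic substituents. With difunctional or multifunctional monomers (for example, a dialdehyde combined with a di- or triamine), repetition of this step builds the polyimine chain or crosslinked network; the reversibility of the reaction is also what gives polyimine CANs their dynamic, exchangeable bonds.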
Polyimine
[ "Chemistry", "Materials_science" ]
673
[ "Polymers", "Polymer chemistry" ]
74,668,064
https://en.wikipedia.org/wiki/Jacob%20Hanna
Jacob H. Hanna (Arabic: Yaqub or Yaoub; born 26 August 1979) is a Palestinian Arab-Israeli biologist working as a professor in the Department of Molecular Genetics at the Weizmann Institute of Science in Rehovot, Israel. An expert in embryonic stem cell research, he is most recognized for developing the first bona fide synthetic embryo models (also known as "complete embryo models") from stem cells in the petri dish, in mice and humans. To achieve this, he first developed a technique for the extended culture of mouse embryos outside the uterus (ex utero) in 2021, capturing development from before gastrulation until late organogenesis outside the uterus. He subsequently applied this technique to make the first synthetic complete embryo models of mice in 2022, and then of humans in 2023, which can be made solely from embryonic pluripotent stem cells and outside the womb. Hanna pioneered the extended static and dynamic post-implantation ex utero embryo growth platform that was critical for enabling the establishment of synthetic complete embryo models. He also pioneered the technology to generate alternative naive-like and naive pluripotent states in humans, which correspond to earlier stages in development and retain an enhanced potential to make essential extra-embryonic tissues (placenta and yolk sac); this proved essential for his team's generation of the first complete synthetic embryo models solely from such naive pluripotent cells. Education Hanna has a PhD in microbiology and immunology and an MD in clinical medicine from the Hebrew University of Jerusalem. To train in stem cell research, he worked from 2007 to 2011 as a Helen Hay Whitney - Novartis postdoctoral fellow and a Genzyme postdoctoral fellow at the Whitehead Institute for Biomedical Research at MIT, Cambridge, Massachusetts, under Rudolf Jaenisch. In 2011, Hanna joined the Weizmann Institute of Science as an assistant professor, and he has been there ever since. In 2018, Hanna received academic tenure and promotion at the Department of Molecular Genetics in the Weizmann Institute, and in 2023 he became a Full Professor of Stem Cell Biology and Synthetic Embryology. Hanna was listed in 2014 among the top 40 under 40 leading international scientists by the journal Cell, and was elected to the European Molecular Biology Organization in 2018. In 2021, he was announced as the top thinker of the year by Prospect magazine for his work on embryology. His extended ex utero embryo culture was selected among Science magazine's Breakthroughs of the Year 2021, and his mouse complete synthetic embryo models were selected by Nature magazine among seven technologies to watch in 2023. The human complete synthetic embryo model generated by Hanna was selected by Time as an invention breakthrough of the year 2023, and the generation of synthetic embryo models of development using stem cells was selected as the Method of the Year 2023 by Nature Methods. Early life Hanna was born in Rameh, an Arab village in the Galilee region of Israel, to a Christian Palestinian family. His father was a pediatrician and his mother was a high-school biology teacher. His grandfather was also a doctor in this village. He studied medical science at the Hebrew University of Jerusalem, obtaining a B.Sc. degree summa cum laude in 2001, and then continued to an M.D.-Ph.D. degree at the same institute. 
He indicated in his interviews that his decision to undertake a career in research was heavily influenced and inspired by the success of his uncle, Nabil Hanna, who invented the first FDA-approved antibody therapy in humans (Rituxan, a blockbuster anti-CD20 mAb drug for the treatment of non-Hodgkin lymphoma) while serving as chief scientific officer of IDEC Pharmaceuticals. His Ph.D. research was on the roles of natural killer cells. In 2007, the Hebrew University awarded him both a Ph.D. in microbiology and immunology and an M.D. in clinical medicine, summa cum laude. Hanna's three sisters also studied medicine at the Hebrew University. Hanna decided not to go into practicing medicine but to focus on developing his research career in academia. In 2007, he received a Helen Hay Whitney Foundation fellowship, and later, in 2009, a Genzyme-Whitehead Fellowship for outstanding postdoctoral fellows, under which he worked at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. His research there until early 2011, under Rudolf Jaenisch, helped him specialize in pluripotent stem cell research and induced pluripotent stem cell reprogramming. Research and discoveries Induced pluripotent stem cell reprogramming During his postdoctoral research at the Whitehead Institute, Hanna focused on studying embryonic stem cells (ESCs) and the epigenetic reprogramming of somatic cells into ESC-like cells, called induced pluripotent stem cells (iPSCs). He developed transgenic mouse models to address problems in stem cell research. In 2007, he provided the first evidence that iPSCs could be used against a genetic blood disease, sickle cell anemia, by a combined gene and cell therapy approach in mice. His supervisor, Jaenisch, was awarded the Masri Prize and the Wolf Prize in 2011 for this research innovation, as the award citation read: "For demonstration that iPS cells can be used to cure genetic disease in a mammal, thus establishing their therapeutic potential". Hanna made scientific contributions to understanding the iPSC phenomenon in its early days. He developed novel inducible "reprogrammable mouse" transgenic models with drug-controlled overexpression of the Yamanaka reprogramming factors. This technique allowed him to reprogram B lymphocytes carrying endogenous genetic rearrangements of the B-cell receptor (BCR) into iPSCs, thus providing definitive proof of the feasibility of reprogramming terminally differentiated cells into iPSCs that carried the original genetic rearrangement mark of the BCR. Epigenetic reprogramming and naive pluripotency Initially, his independent group identified a number of key epigenetic regulators influencing iPSC derivation efficiency, such as the role of the H3K27 demethylase Utx in iPS cell formation, and first demonstrated deterministic reprogramming efficiencies (up to 100% within 8 days) via optimized depletion of the Gatad2a/Mbd3 core-member axis of the NuRD co-repressor complex. The latter work set the stage for others to show alternative methods of obtaining deterministic reprogramming. For example, the Thomas Graf group showed that transient activation of C/EBPα, previously highlighted by Hanna and Jaenisch as a booster for B cell reprogramming, can yield up to 100% deterministic iPSC reprogramming from B cells within 8 days. Hanna also identified SUMOylation of the linker histone H1 as a major determinant of the transition between totipotency and naïve pluripotency states. From 2013, Hanna worked as a Robertson Stem Cell Investigator of The New York Stem Cell Foundation. 
His first major achievement under the NYSCF research was the demonstration that the human naïve-like ES/iPS cell state in the NHSM naive-like conditions that he discovered (and later also in HENSM naive conditions) has additional unique functional properties compared to conventional primed iPS cells: namely, the creation of sperm and egg stem cells from human skin cell-derived naive-like iPSCs, which had not been possible thus far with conventional human iPSCs. The experiment, done in collaboration with Azim Surani's team at the University of Cambridge, was published in the journal Cell in 2015. David Cyranoski reported in Nature that it was "a feat achieved for the first time in humans". In 2014, Hanna criticized Jaenisch, his former postdoctoral mentor at the Whitehead Institute - MIT, accusing his team of publishing unreliable "false" negative experimental results on the inability to generate any cross-species mouse-human chimerism, in a paper published by Jaenisch on the pluripotency of human embryonic stem cells in the journal Cell Stem Cell. Surprisingly, in 2016, Jaenisch and his team reported positive results on the same topic in the Proceedings of the National Academy of Sciences (PNAS), reporting the ability to create chimeric embryos from a mixture of mouse and human cells. In light of the latter, Hanna again raised the same critical comments on PubMed, suggesting retraction of the section of the previous 2014 Jaenisch paper in Cell Stem Cell that reported negative results, in contrast to the newly reported 2016 PNAS positive results by the same team led by Jaenisch. Jaenisch published a correction in the same journal. Later findings independently reported by the Jun Wu group and others supported mouse-human cross-species chimerism with human pluripotent cells, as originally reported by Hanna in 2013 and expanded further by his group in 2021. Human PSCs expanded in the Hanna lab's naive-like RSeT media were also independently shown to contribute to dopamine neurons in postnatal mouse-human cross-species chimeras, thus solidifying the earlier claims by Hanna and refuting those published by Jaenisch in 2014. The Hanna team has also tackled pathways that resolve naive pluripotency programs and delineated a critical function for m6A RNA methylation in stem cell transitions in peri-implantation mouse development. Their study published in Science in 2015 provided the first evidence for the absolute essentiality of the m6A mRNA epigenetic layer for mammalian embryo viability in vivo, and uncovered opposing tolerance to epigenetic repressor depletion in naive and primed cells from the same species. Hanna later used this to optimize naive conditions in humans, since only naive cells can tolerate genetic ablation of RNA and DNA methylation (deposited and maintained by the METTL3 and DNMT1 enzymes, respectively). Hanna used the latter property to screen for conditions that allow survival of human pluripotent cells without these enzymes and termed the conditions human enhanced naive stem cell media (HENSM). Hanna's lab also focused on deciphering the principles regulating naive pluripotency in different species, and in 2013 his team was the first to derive genetically unmodified, MEK/ERK-independent human naïve-like pluripotent cells (in conditions termed NHSM, commercialized as RSeT by Stemcell Technologies). 
Hanna next developed engineered systems to screen for enhanced NHSM conditions that maintain human pluripotent ES cells able to tolerate removal of RNA or DNA methylation enzymes (by ablating the METTL3 or DNMT1 genes, respectively), and identified enhanced conditions (termed HENSM) that can yield ESCs/iPSCs with more compelling characteristics of human pre-implantation blastocyst-morula stages. From naive stem cells to synthetic complete embryo models developed ex utero - in mouse and human Hanna is most recognized for developing a method, combining static and revamped "roller culture" conditions, for the extended culture of advanced mouse embryos outside the uterus (ex utero) in 2021 (from pre-gastrulation to late organogenesis for the first time), which subsequently allowed him to make the first synthetic, complete and bona fide mouse embryo models derived only from naïve pluripotent stem cells in 2022. In September 2023, Nature accepted Hanna's article, previously posted as a preprint on bioRxiv on 14 June 2023, on the generation of complete and structured day-14 synthetic human embryos derived from human naïve ES/iPS cells grown in his HENSM conditions. Hanna's complete human stem cell-derived embryo model (SEM) can generate extra-embryonic trophoblast stem cells, mesoderm cells and primitive endoderm cells without genetic modification, transgenes or transcription factor overexpression, and has an uncanny structural and morphological similarity to a day-14 human embryo inside the womb. Conventional human (and mouse) primed ESCs/iPSCs fail to achieve this feat, highlighting the essentiality of capturing alternative naive pluripotent states in humans in order to derive up-to-day-14 human SEMs. Prof Alfonso Martinez Arias, from the department of experimental and health sciences at Pompeu Fabra University, said it was "a most important piece of research". "The work has, for the first time, achieved a faithful construction of the complete structure [of a human embryo] from stem cells" in the lab, "thus opening the door for studies of the events that lead to the formation of the human body plan," Martinez-Arias said. Philip Ball reported that Dr. Bailey Weatherbee of the University of Cambridge, who has also worked on generating human embryo models, "is impressed by the embryo-like structures reported by Hanna's team, and agrees that their own don't have these structures." Prof Robin Lovell-Badge, who researches embryo development at the Francis Crick Institute, told the BBC that Hanna's human embryo models "do look pretty good" and "do look pretty normal". He also said "I think it's good, I think it's done very well, it's all making sense and I'm pretty impressed with it". Synthetic stem-cell-derived embryo model research related ethical discussions In 2022, when Hanna published the first bona fide mouse synthetic embryo, he told the MIT Technology Review that he was already using the same method to make human embryo models (which indeed he was the first to report in 2023). The backing venture firm, NFX, stated that the aim is "renewing humanity—making all of us young and healthy." When Hanna announced the creation of the first human synthetic embryo models in a preprinted manuscript on bioRxiv, and shortly after in Nature, it was received as a "breakthrough" and a "groundbreaking advance" in science. But Hanna's scientific feat further intensified the discussions surrounding ethical and legal controversies. 
The International Society for Stem Cell Research (ISSCR) has instituted guidelines for maintaining human embryos that are followed in most countries. However, neither the guidelines nor any other legislation covers synthetic embryo models, as the embryo models are made from ordinary cells. Hanna commented in Stat: "You don't ban nuclear physics because somebody can make a nuclear bomb." Rivron, Martinez Arias and others, writing on the ethical issues in Cell in 2023, expressed a possible need to open discussions about revising the definition of an embryo, since certain embryo models can theoretically become functional embryos and produce babies. Robin Lovell-Badge, at the Francis Crick Institute and a member of the ISSCR guidelines preparation, also agreed that both natural and synthetic human embryo models should be regulated equally, saying, "These models do challenge the need to stick to the 14-day rule", referring to the ISSCR's 2021 relaxation of the limit on growing human embryos up to 14 days. The scientific and ethical complexity was remarked upon by J. Benjamin Hurlbut, bioethicist at Arizona State University: "The big question is how the boundary between a tissue culture and a human organism is going to be drawn and on what criteria." Pompeu Fabra University Professor Alfonso Martinez Arias, Ph.D., whose own lab is working on building human embryo models, noted that such conversations and debates are nothing new and should be welcomed. The International Society for Stem Cell Research publicly announced support for the research and highlighted to the public that such complete embryo models are only models of embryogenesis and should not be considered embryos. The British science writer Philip Ball alleviated concern related to this line of research by emphasizing that "None [of the embryo models] has the potential to grow into a human being, nor is there any reason why scientists would want them to." Upon publication of Hanna's groundbreaking paper on human complete stem cell-derived embryo models (termed SEMs) in Nature in 2023, Philip Ball tweeted "This is work at the absolute forefront of this extraordinary and exciting field". Awards and honors Named on the 2024 STATUS List by STAT News, a list of the most influential individuals in healthcare, medicine and the life sciences. The human complete synthetic embryo model generated by Hanna was selected by Time as an invention breakthrough of the year 2023. The generation of synthetic embryo models was selected as the Method of the Year 2023 by the Nature Methods journal, on behalf of the Nature publishing group. Mouse and human complete synthetic embryo models were selected by Nature magazine among seven technologies to watch in 2023. 
The IVI Foundation Award for Basic Research in Reproductive Medicine (2023) Manuscript describing mouse synthetic stem cell-derived embryo models among the 10 selected "Best of Cell 2022" publications by Cell journal editors (2023) The 2022 paper on stem cell-derived (synthetic) embryogenesis listed among top scientific breakthroughs of the year 2022 by The Atlantic magazine and The Week magazine (2023) A Paul Harris Fellow by the Rotary International Foundation in recognition of scientific achievements (2022) Selected as top thinker of the year 2021 by Prospect magazine, UK, for his work on stem cells and synthetic embryology (2021) Paper on ex utero embryogenesis listed among top scientific breakthroughs of the year by Science journal (2021) Robert Edwards honorary lecture and lifetime achievement award by the COGI meeting in Berlin (2021) Research on ex utero embryogenesis covered in a dedicated Nature Outlook article (2021) Elected a member of the European Molecular Biology Organization (EMBO) (2018) Research Professorship Award by the Israel Cancer Research Fund (ICRF) (2017) The Segal Family Award for Excellence in Stem Cell Biology, University of Michigan, USA (2016) The Kimmel Prize for outstanding scientist at the Weizmann Institute of Science (2015) Selected among "40 under 40" most innovative young scientists by Cell journal (2014) Elected member of the Israeli Young Academy of Science (2014) Robertson Innovator Award in Stem Cell Research by the New York Stem Cell Foundation (NYSCF) (2013) Krill Prize for outstanding early career scientists by the Wolf Foundation (2013) The Rappaport Prize for a Young Researcher in Biomedicine by the Bruce and Ruth Rappaport Foundation (2013) Elected member of the European Molecular Biology Organization Young Investigator Program (EMBO-YIP) (2012) Inaugural Award for Excellence in Biomedical Research by the Palestinian Society for Biomedical Research (2011) Alon Foundation Scholar for distinguished junior faculty in Israeli academia (2011) The Clore Prize for an outstanding new scientist at the Weizmann Institute of Science (2011) TR35 Young Innovator Award for international innovators under the age of 35 by MIT Technology Review magazine (2010) Genzyme Postdoctoral Prize and Fellowship for Outstanding Postdoc at the Whitehead Institute (2010) Novartis Postdoctoral Fellowship by the Helen Hay Whitney Foundation – Novartis Fellow (2007) Hebrew University Medical School Excellence Award for graduating M.D.-Ph.D. students, Hebrew University of Jerusalem (2007) Max Schlomiuk Award for Ph.D. students graduating with distinction (summa cum laude), Hebrew University of Jerusalem (2007) Gertrude Kohn Award for outstanding scientific work in human genetics, Hebrew University of Jerusalem (2005) Foulkes Foundation Award and Scholarship for M.D.-Ph.D. students (2004) Wolf Foundation Award and Fellowship for Outstanding Ph.D. students (2003) References 1979 births Academic staff of Weizmann Institute of Science Arab citizens of Israel Stem cell researchers Hebrew University of Jerusalem alumni People from Rameh Living people
Jacob Hanna
[ "Biology" ]
4,035
[ "Stem cell researchers", "Stem cell research" ]
76,188,403
https://en.wikipedia.org/wiki/Carbonate%20nitrate
Carbonate nitrates are mixed anion compounds containing both carbonate and nitrate ions. Hydrotalcite can contain carbonate and nitrate ions between its layers; its magnesium can be substituted by nickel, cobalt or copper. Oxycarbonitrates, containing an alkaline earth metal together with cuprate, nitrate and carbonate anions in layers, form a family of superconducting materials. References Carbonates Nitrates Mixed anion compounds
Carbonate nitrate
[ "Physics", "Chemistry" ]
85
[ "Matter", "Mixed anion compounds", "Oxidizing agents", "Nitrates", "Salts", "Ions" ]
76,188,460
https://en.wikipedia.org/wiki/Entropy-vorticity%20wave
Entropy-vorticity waves (or sometimes entropy-vortex waves) refer to small-amplitude waves carried by the gas, within which entropy, vorticity and density perturbations, but not pressure perturbations, are propagated. Entropy-vorticity waves are essentially isobaric, incompressible, rotational perturbations along with entropy perturbations. This wave differs from the other well-known small-amplitude wave, the sound wave, which propagates with respect to the gas and within which density and pressure perturbations, but not entropy perturbations, are propagated. The classification of small disturbances into acoustic, entropy and vortex modes was introduced by Leslie S. G. Kovasznay. Entropy-vorticity waves are ubiquitous in supersonic problems, particularly those involving shock waves. Since these perturbations are carried by the gas, they are convected by the flow downstream of the shock wave, but they cannot propagate in the upstream direction (behind the shock wave), unlike the acoustic wave, which can propagate upstream and catch up with the shock wave. As such, they are useful in understanding many high-speed flows and are important in many applications, such as in solid-propellant rockets and detonations. Mathematical description Consider a gas flow with a uniform velocity field \mathbf{v} and having a pressure p, density \rho, entropy s and sound speed c. Now we add small perturbations to these variables, which are denoted with the symbol \delta. The perturbed variables, being small quantities, satisfy the linearized form of the Euler equations, which is given by

\frac{1}{\rho c^2}\left(\frac{\partial \delta p}{\partial t} + \mathbf{v}\cdot\nabla \delta p\right) + \nabla\cdot\delta\mathbf{v} = 0, \qquad \frac{\partial \delta \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\delta\mathbf{v} = -\frac{\nabla \delta p}{\rho}, \qquad \frac{\partial \delta s}{\partial t} + \mathbf{v}\cdot\nabla \delta s = 0,

where in the continuity equation we have used the relation \delta\rho = \delta p/c^2 + (\partial \rho/\partial s)_p\, \delta s (since \rho = \rho(p,s) and (\partial\rho/\partial p)_s = 1/c^2) and then used the entropy equation to simplify it. Taking perturbations to be of the plane-wave form \propto e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}, the linearised equations can be reduced to the algebraic equations

(\omega - \mathbf{k}\cdot\mathbf{v})\,\delta p = \rho c^2\, \mathbf{k}\cdot\delta\mathbf{v}, \qquad (\omega - \mathbf{k}\cdot\mathbf{v})\,\delta\mathbf{v} = \frac{\mathbf{k}\,\delta p}{\rho}, \qquad (\omega - \mathbf{k}\cdot\mathbf{v})\,\delta s = 0.

The last equation shows that either \delta s = 0, which corresponds to sound waves in which entropy does not change, or \omega = \mathbf{k}\cdot\mathbf{v}. The latter condition, indicating that perturbations are carried by the gas, corresponds to the entropy-vortex wave. In this case, we have

\delta p = 0, \qquad \mathbf{k}\cdot\delta\mathbf{v} = 0, \qquad \delta\rho = \left(\frac{\partial \rho}{\partial s}\right)_p \delta s, \qquad \delta\boldsymbol{\omega} = i\,\mathbf{k}\times\delta\mathbf{v},

where \delta\boldsymbol{\omega} is the vorticity perturbation. As we can see, the entropy perturbation and the vorticity perturbation are independent, meaning that one can have entropy waves without vorticity waves, vorticity waves without entropy waves, or both entropy and vorticity waves. In a non-reacting multicomponent gas, we can also have compositional perturbations, since in this case \rho = \rho(p, s, Y_i), where Y_i is the mass fraction of the i-th species of the total N chemical species. In the entropy-vorticity wave, we then have

\delta\rho = \left(\frac{\partial \rho}{\partial s}\right)_{p, Y_i} \delta s + \sum_{i=1}^{N}\left(\frac{\partial \rho}{\partial Y_i}\right)_{p, s, Y_{j\neq i}} \delta Y_i.

References Fluid dynamics Wave mechanics
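For completeness, the acoustic branch mentioned above follows from the same algebraic relations. Setting \delta s = 0 and eliminating \delta\mathbf{v} between the continuity and momentum relations gives

(\omega - \mathbf{k}\cdot\mathbf{v})^2\,\delta p = c^2 k^2\, \delta p \quad\Longrightarrow\quad \omega = \mathbf{k}\cdot\mathbf{v} \pm ck,

i.e. sound waves propagate at the speed c relative to the moving gas, in contrast to the entropy-vorticity wave, which is simply convected with the gas at \omega = \mathbf{k}\cdot\mathbf{v}.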
Entropy-vorticity wave
[ "Physics", "Chemistry", "Engineering" ]
540
[ "Physical phenomena", "Chemical engineering", "Classical mechanics", "Waves", "Wave mechanics", "Piping", "Fluid dynamics" ]
67,504,737
https://en.wikipedia.org/wiki/Laser%20speckle%20contrast%20imaging
Laser speckle contrast imaging (LSCI), also called laser speckle imaging (LSI), is an imaging modality based on the analysis of the blurring effect of the speckle pattern. LSCI operates by wide-field illumination of a rough surface with a coherent light source; the resulting laser speckle pattern, caused by interference of the coherent light, is then imaged with photodetectors such as CCD cameras or CMOS sensors. In biomedical use, the coherent light is typically in the red or near-infrared region to ensure a higher penetration depth. When the scattering particles move over time, the interference of the coherent light fluctuates, which leads to intensity variations detected via the photodetector; this change in intensity carries the information about the scattering particles' motion. When the speckle pattern is imaged with a finite exposure time, areas with moving scattering particles appear blurred. Development The first practical application of utilizing speckle contrast reduction for mapping retinal blood flow was reported by Fercher and Briers in 1982. This technology was called single-exposure speckle photography at that time. Due to the lack of sufficient digital techniques in the 1980s, single-exposure speckle photography involved a two-step process, which made it not convenient and efficient enough for biomedical research, especially in clinical use. With the development of digital techniques, including CCD cameras, CMOS sensors, and computers, in the 1990s, Briers and Webster successfully improved single-exposure speckle photography so that it no longer needed photographs to capture images. The improved technology is called laser speckle contrast imaging (LSCI), which can directly measure the contrast of the speckle pattern. A typical instrumental setup for laser speckle contrast imaging contains only a laser source, camera, diffuser, lens, and computer. Due to the simple structure of the instrumental setup, LSCI can be integrated into other systems easily. Concept Speckle theory Contrast For a fully developed speckle pattern, which is formed when completely coherent and polarized light illuminates a static medium, the contrast (K), ranging from 0 to 1, is defined by the ratio between the standard deviation and the mean intensity:

K = \frac{\sigma}{\langle I \rangle}.

The intensity distribution of the speckle pattern is used to compute the contrast value. Autocorrelation functions Autocorrelation functions of the electric field are used to measure the relationship between contrast and the motion of scatterers, because the intensity fluctuations are produced by electric field changes of the scatterers:

g_1(\tau) = \frac{\langle E(t)\,E^*(t+\tau)\rangle}{\langle E(t)\,E^*(t)\rangle},

where E(t) is the electric field over time, E* is the complex conjugate of the electric field, and \tau is the autocorrelation delay time. Bandyopadhyay et al. showed that the reduced intensity variance of the speckle pattern is related to g_1(\tau). Therefore, the contrast can be written as

K^2(T) = \frac{2\beta}{T}\int_0^T \left(1 - \frac{\tau}{T}\right)\left|g_1(\tau)\right|^2 d\tau,

where T is the exposure time. The normalization constant \beta takes into account the loss of correlation due to the detector pixel size and the depolarization of the light through the medium. Motion Distributions Dynamic scatterers' motion can be classified into two categories: ordered motion and disordered motion. Ordered motion is the ordered flow of scatterers, while disordered motion is caused by temperature effects. The total motion of dynamic scatterers was historically thought of as Brownian motion, whose approximate velocity distribution can be considered to follow a Lorentzian profile. 
However, the ordered motion of dynamic scatterers follows a Gaussian distribution. When the motion distribution is taken into account, the contrast equation related to the autocorrelation can be updated. With x = T/\tau_c, where \tau_c is the decorrelation time, the updated equations are

K_L^2 = \beta\,\frac{e^{-2x} - 1 + 2x}{2x^2}

for the Lorentzian profile, and

K_G^2 = \beta\,\frac{e^{-2x^2} - 1 + \sqrt{2\pi}\,x\,\operatorname{erf}(\sqrt{2}\,x)}{2x^2}

for the Gaussian profile. Both equations can be used in contrast measurement, and some scientists also use contrast equations combining the two. However, what the correct theoretical contrast equation should be is still under investigation. Normalization constants The normalization constant \beta varies between LSCI systems; its maximum value is 1. The most common method to determine it is to measure the speckle contrast of a purely static scattering sample, for which K^2 = \beta. \beta accounts for the instability and the maximum contrast of each LSCI system. Effect of static scatterers When static scatterers are present in the assessed sample, the speckle contrast produced by the static scatterers remains constant. By adding static scatterers, the contrast equation can be updated again: the dynamic (fluctuating) contribution is supplemented by a constant term contributed by the static speckle, with the relative weights set by two constants, P1 and P2, which range from 0 to 1 and are determined by fitting the equation to the actual experimental data. (This updated equation does not separately account for the motion distributions.) Scatterers' velocity determination The velocity of scatterers, such as blood flow, is inversely proportional to the decorrelation time:

v \approx \frac{\lambda}{2\pi\tau_c},

where \lambda is the laser light wavelength. Contrast processing algorithm The methods to compute the contrast of speckle patterns can be classified into three categories: s-K (spatial), t-K (temporal), and st-K (spatio-temporal). To compute the spatial contrast, raw images of laser speckle are divided into small elements, each corresponding to a block of pixels whose size is determined by the speckle size. The intensities of all the pixels in each element are summed and averaged to return a mean intensity value (μ), and the final contrast value of the element is calculated from the mean intensity and the actual intensity of each pixel (a minimal sketch of this computation is given after this article's text). To improve the resolution limitation, scientists also compute the temporal contrast of the speckle pattern; the method is the same as for spatial contrast, but applied in the temporal domain. The combined computation of spatial and temporal contrast is the spatio-temporal contrast processing algorithm, and this is the most commonly used one. Practical considerations Several parameters should be taken into consideration to maximize the contrast and signal-to-noise ratio (SNR) of LSCI. The size of an individual speckle is essential, and it determines the requirements on the photodetector. The size of each speckle should not be smaller than the photodetector's pixel size, to avoid a decrease of contrast through spatial averaging. The minimum speckle diameter for an LSCI system depends on the wavelength of light \lambda, the imaging system magnification M, and the imaging system f-number f/\#:

d_{\min} = 2.44\,\lambda\,(1 + M)\,(f/\#).

Measurement of the normalization constant \beta using static scatterers is necessary, as it determines the maximum contrast the LSCI system can obtain. Either too short or too long an exposure time (T) can decrease the efficiency of the LSCI system: too short an exposure cannot ensure that adequate photons are accumulated, while too long an exposure time reduces contrast. A suitable T should be analyzed in advance. The illumination angle should be considered to achieve higher light transmittance efficiency. 
Applications Compared with other existing imaging technologies, laser speckle contrast imaging has several obvious advantages: it uses simple and cost-effective instrumentation to deliver imaging with excellent spatial and temporal resolution. Because of these strengths, laser speckle contrast imaging has been used to map blood flow for decades. The use of LSCI has been extended to many subjects in the biomedical field, including but not limited to rheumatology, burns, dermatology, neurology, gastrointestinal tract surgery, dentistry, and cardiovascular research. LSCI can easily be adopted into other systems for clinical full-field monitoring, measuring, and investigating living processes on an almost real-time scale. However, LSCI still has some limitations: it can only be used to map relative blood flow rather than to measure absolute blood flow. Due to the complex anatomy of the vasculature, the maximum detection depth of LSCI is currently limited to about 900 micrometers. The scattering and absorption effects of red blood cells can influence the contrast value. The complex physics behind the measurement makes quantitative measurement difficult. References Image sensors Medical devices Biomedical engineering
Laser speckle contrast imaging
[ "Engineering", "Biology" ]
1,621
[ "Biological engineering", "Medical technology", "Medical devices", "Biomedical engineering" ]
67,505,211
https://en.wikipedia.org/wiki/Sodium%20naphthalene
Sodium naphthalene is an organic salt with the chemical formula Na+[C10H8]−. In the research laboratory, it is used as a reductant in organic, organometallic, and inorganic synthesis. It is usually generated in situ. When isolated, it invariably crystallizes as a solvate with ligands bound to Na+. Preparation and properties The alkali metal naphthalene salts are prepared by stirring the metal with naphthalene in an ethereal solvent, usually tetrahydrofuran or dimethoxyethane. The resulting salt is dark green. The anion is a radical, giving a strong EPR signal near g = 2.0. Its deep green color arises from absorptions centered at 463 and 735 nm. Several solvates of sodium naphthalenide have been characterized by X-ray crystallography. The structural effects of reduction are subtle: the outer pair of CH−CH bonds contract by 3 pm and the other nine C−C bonds elongate by 2–3 pm. The net effect is that reduction weakens the bonding. Reactions Redox With a reduction potential near −2.5 V vs NHE, the naphthalene radical anion is a strong reducing agent. It is capable of defluorinating PTFE and is commonly used for chemically etching PTFE to allow adhesion. Protonation The anion is strongly basic, and a typical degradation pathway involves reaction with water and related protic sources such as alcohols. These reactions afford dihydronaphthalene: 2 Na[C10H8] + 2 H2O → C10H10 + C10H8 + 2 NaOH As a ligand Alkali metal salts of the naphthalene radical anion are used to prepare complexes of naphthalene. Related reagents References One-electron reducing agents Organosodium compounds Free radicals Naphthalenes
Sodium naphthalene
[ "Chemistry", "Biology" ]
369
[ "One-electron reducing agents", "Free radicals", "Reducing agents", "Senescence", "Biomolecules" ]
67,507,369
https://en.wikipedia.org/wiki/Proprioception%20and%20motor%20control
Proprioception refers to the sensory information relayed from muscles, tendons, and skin that allows for the perception of the body in space. This feedback allows for finer control of movement. In the brain, proprioceptive integration occurs in the somatosensory cortex, and motor commands are generated in the motor cortex. In the spinal cord, sensory and motor signals are integrated and modulated by motor neuron pools called central pattern generators (CPGs). At the base level, sensory input is relayed by muscle spindles in the muscle and Golgi tendon organs (GTOs) in tendons, alongside cutaneous sensors in the skin. Physiology Central pattern generators Central pattern generators are groups of neurons in the spinal cord that are responsible for generating stereotyped movement. It has been shown that in cats, rhythmic activation patterns are still observed following removal of sensory afferents and removal of the brain, indicating that there is neural pattern generation in the spinal cord independent of descending signals from the brain and sensory information. It is currently understood that the spinal cord receives sensory input from proprioceptive organs and descending commands from the brain, integrates these signals, and sends activation signals to muscle through alpha motoneurons and fusimotor signals through gamma motoneurons in a coordinated and rhythmic fashion. Muscle spindles The muscle spindle is a proprioceptive organ that lies embedded in the muscle. It consists of bag- and chain-type fibers, which correspond to dynamic and static responses, respectively. Spindles relay information through primary (Group Ia) and secondary (Group II) sensory afferents, with the primary afferent attached at the nucleus of the spindle and the secondary afferent attached at the end of the spindle. Spindles are conventionally thought of as encoding muscle length, velocity, and acceleration; however, there is evidence to suggest that they respond to the force and yank (the first time-derivative of force) exerted on intrafusal muscle. Key features of muscle spindle firing responses include initial bursts, history-dependence, and rate relaxation. Initial bursts occur at the onset of stretch and last only a very short time. History dependence refers to how the response of muscle spindles is affected by past stretch inputs. Rate relaxation refers to how the firing rate of muscle spindles decreases over time when held at a constant length. Golgi tendon organs The Golgi tendon organ (GTO) is a proprioceptive organ that lies at the muscle-tendon junction. GTOs relay information through group Ib afferents and encode active muscle force. As they are connected at one end to motor units, individual GTOs only relay information on a few fibers. At the same time, GTOs exhibit self-adaptation, in which the GTO response decreases after prior activation, and cross-adaptation, in which GTO activity is modulated by prior activation of another GTO. Similar to muscle spindles, GTO firing is characterized by a heightened response at the onset of activity (dynamic response) and gradual relaxation to a resting firing rate (static response). Fusimotor system While muscle spindles relay information via primary afferents, they receive descending efferent signals from the spinal cord via gamma motoneurons. This gamma innervation modulates the sensitivity of muscle spindle afferents to stretch.
In cat studies, muscle spindle afferent firing rates with gamma fusimotor innervation were shown to be approximately equal to the sum of the gamma motoneuron firing rate and the muscle spindle firing rate with no gamma innervation. In these same studies, gamma activity was shown to be correlated with joint angle during locomotion, indicating that fusimotor activity is periodically modulated during locomotion. Similar to muscle spindles, gamma motoneurons are also categorized according to static and dynamic response properties. Motor control In motor control, proprioceptors provide critical feedback to the central nervous system. Muscle spindles relay information regarding muscle stretch, Golgi tendon organs relay information regarding tendon force, and gamma motoneurons modulate muscle spindle feedback. Afferent signals from spindles and tendon organs are integrated in the spinal cord, which then outputs muscle activation commands to muscle via alpha motoneurons. Because muscle spindles and tendon organs exhibit burst-like activity in response to rapid stretch, they play a vital role in reflexive perturbation responses. In a simulation study, it has been shown that the controllability of a limb in response to a perturbation is significantly increased when muscle spindle and tendon organ feedback are used in conjunction. However, proprioceptive feedback is also critical in controlling steady movements. In one study, de-afferented mice were unable to walk as quickly as the control group and showed some reduced activity in extensor muscles. It has also been shown in cats that disruption of feedback from muscle spindles impairs inter-joint coordination during ramp descent tasks. In a study on people with amputations, those with a higher degree of proprioceptive feedback from muscle spindles were able to better control the movement of a virtual limb. Pathologies Proprioceptive feedback is also linked to motor deficits in Parkinson's disease and cerebral palsy. People with cerebral palsy often suffer from spasticity due to hyperreflexia. A common clinical test of spasticity is the pendulum test, in which the subject remains seated and the relaxed leg is dropped from horizontal. In individuals with spasticity, the leg comes to rest much more quickly due to increased reflexive muscle contraction. Computational models have shown that results from pendulum tests in children with spastic cerebral palsy are explained by increased muscle tone, short-range stiffness, and increased stretch reflex responses due to increased muscle force feedback. Pendulum test results are also dependent on prior motion, indicating that muscle spindle feedback is a large component of spastic movement due to the history-dependent behavior of muscle spindles. Increased proprioceptive feedback has also explained properties of gait in children with spastic cerebral palsy. In addition to functional impairments, proprioceptive deficits are linked to compensatory adaptations in the central nervous system. In the study on people with amputations mentioned previously, those with a lower degree of proprioception showed stronger connectivity between their visual and motor cortices, which is interpreted as a greater reliance on visual feedback to coordinate movement. Those with higher degrees of proprioception also showed higher connectivity between brain regions associated with sensorimotor feedback and sensory integration. References Proprioception Motor control
Proprioception and motor control
[ "Biology" ]
1,386
[ "Behavior", "Motor control" ]
67,513,406
https://en.wikipedia.org/wiki/2021%20Leaders%20Summit%20on%20Climate
The 2021 Leaders' Summit on Climate was a virtual climate summit on April 22–23, 2021, organized by the Joe Biden administration, with leaders from various countries. At the summit Biden announced that greenhouse gas emissions by the United States would be reduced by 50%–52% relative to the level of 2005 by 2030. Overall, the commitments made at the summit reduce the gap between governments' current pledges and the 1.5 degrees target of the Paris Agreement by 12%–14%. If the pledges are accomplished, greenhouse gas emissions will fall by an additional 2.6–3.7 GtCO2e in comparison to the pledges before the summit. The results of the summit were described by Climate Action Tracker as a step forward in the fight against climate change. Invited countries and their representatives Results At the summit Biden announced that greenhouse gas emissions by the United States would be reduced by 50%–52% relative to the level of 2005 by 2030. Overall, the commitments made at the summit reduce the gap between governments' current pledges and the 1.5 degrees target of the Paris Agreement by 12%–14%. If the pledges are accomplished, greenhouse gas emissions will fall by an additional 2.6–3.7 GtCO2e in comparison to the pledges before the summit. The results of the summit were described by Climate Action Tracker as a step forward in the fight against climate change, even though there is still a long way to go to reach the 1.5 degrees target. The most important commitments were made by the United States, United Kingdom, European Union, China, and Japan. At the summit the Biden administration submitted a new Nationally Determined Contribution to the United Nations Framework Convention on Climate Change (UNFCCC), according to Climate Action Tracker "the biggest climate step made by any US government in history". At the summit Biden's administration launched a number of coalitions and initiatives to limit climate change and help reduce its impacts, among others a Global Climate Ambition Initiative to help low-income countries achieve their climate targets, and a "Net-Zero Producers Forum, with Canada, Norway, Qatar, and Saudi Arabia, together representing 40% of global oil and gas production". Several countries increased their climate pledges at the summit, while others delivered only vague promises and statements. In early May 2021, Climate Action Tracker released a more detailed report about the significance of the summit. According to the report, the summit, together with the pledges made from September 2020, reduces the expected rise in temperature by 2100 by 0.2 degrees. If all pledges are fulfilled, the temperature will rise by 2.4 °C; however, if policies remain as they are now, it will rise by 2.9 °C. In the most optimistic scenario, in which countries also fulfill the pledges that are not part of the Paris Agreement, it will rise by 2.0 °C. Use of masks After the summit, claims spread that Joe Biden was the only leader there wearing a mask, which was later proven wrong, as at least five other world leaders were wearing masks.
Notes References External links whitehouse.gov: President Biden Invites 40 World Leaders to Leaders Summit on Climate (March 26) Leaders Summit on Climate Summary of Proceedings (April 23) Remarks by President Biden at the Virtual Leaders Summit on Climate Opening Session (April 22) Remarks by President Biden at the Virtual Leaders Summit on Climate Session 2: Investing in Climate Solutions International conferences in the United States 2021 in international relations 2021 in American politics 2021 conferences 2021 in the United States April 2021 events in the United States Presidency of Joe Biden Politics of climate change Emissions reduction
2021 Leaders Summit on Climate
[ "Chemistry" ]
747
[ "Greenhouse gases", "Emissions reduction" ]
70,413,563
https://en.wikipedia.org/wiki/Transition%20metal%20porphyrin%20complexes
Transition metal porphyrin complexes are a family of coordination complexes of the conjugate base of porphyrins. Iron porphyrin complexes occur widely in nature, which has stimulated extensive studies on related synthetic complexes. The metal–porphyrin interaction is so strong that metalloporphyrins are thermally robust. They are catalysts and exhibit rich optical properties, although these complexes remain mainly of academic interest. Structure Porphyrin complexes consist of a square planar MN4 core. The periphery of the porphyrins, consisting of sp2-hybridized carbons, generally displays only small deviations from planarity. Additionally, the metal is often not centered in the N4 plane. Large metals such as zirconium, tantalum, and molybdenum tend to bind two porphyrin ligands. Some [M(OEP)]2 complexes feature multiple bonds between the metals. Formation Metal porphyrin complexes are almost always prepared by direct reaction of a metal halide with the free porphyrin, abbreviated here as H2P: MClx + H2P → M(P)Clx−2 + 2 HCl Two pyrrole protons are lost. The porphyrin dianion is an L2X2 ligand. These syntheses require somewhat forcing conditions, consistent with the tight fit of the metal in the N4 "pocket" of the dianion. In nature, the insertion is mediated by chelatase enzymes. The insertion of a metal proceeds by the intermediacy of a "sitting atop complex" (SAC), whereby the entering metal interacts with only one or a few of the nitrogen centers. In contrast to natural porphyrins, synthetic porphyrin ligands (and their dianionic conjugate bases) are typically symmetrical. Two major varieties are well studied. The first has substituents at the meso positions, the premier example being tetraphenylporphyrin; these ligands are easy to prepare in one-pot procedures, and a large number of aryl groups can be deployed aside from phenyl. A second class of synthetic porphyrins has hydrogen at the meso positions. Octaethylporphyrin (H2OEP) is the subject of many such studies; it is more expensive than tetraphenylporphyrin. Protoporphyrin IX, which occurs naturally, can be modified by removal of the vinyl groups and esterification of the carboxylic acid groups to give deuteroporphyrin IX dimethyl ester. Biomimetic complexes Iron porphyrin complexes ("hemes") are the dominant metalloporphyrin complexes in nature. Consequently, synthetic iron porphyrin complexes are well investigated. Common derivatives are those of Fe(III) and Fe(II). Complexes of the type Fe(P)Cl are square-pyramidal and high spin, with idealized C4v symmetry. Base hydrolysis affords the "mu-oxo dimers" with the formula [Fe(P)]2O. These complexes have been widely investigated as oxidation catalysts. Typical stoichiometries of ferrous porphyrins are Fe(P)L2, where L is a neutral ligand such as pyridine or imidazole. Cobalt(II) porphyrins behave similarly to the ferrous derivatives. They bind O2 to form dioxygen complexes. Synthetic applications Catalysts based on synthetic metalloporphyrins have been extensively investigated, although few or no applications exist. Due to their distinctive redox properties, Co(II)–porphyrin-based systems are radical initiators. Some complexes emulate the action of various heme enzymes such as cytochrome P450 and lignin peroxidase. Metalloporphyrins are also studied as catalysts for water splitting, with the purpose of generating molecular hydrogen and oxygen for fuel cells.
In addition, porous organic polymers based on porphyrins, along with metal oxide nanoparticles, have been investigated for similar catalytic applications. Supramolecular chemistry Porphyrins are often used to construct structures in supramolecular chemistry. These systems take advantage of the Lewis acidity of the metal, typically zinc. An example is a host–guest complex constructed from a macrocycle composed of four porphyrins, in which a free-base porphyrin guest is bound at the center by coordination of its four pyridine substituents. See also phthalocyanines macrocyclic ligand References Biomolecules Chelating agents
Transition metal porphyrin complexes
[ "Chemistry", "Biology" ]
968
[ "Natural products", "Biochemistry", "Organic compounds", "Biomolecules", "Molecular biology", "Structural biology", "Chelating agents", "Porphyrins", "Process chemicals" ]
70,417,545
https://en.wikipedia.org/wiki/Vincent%20Bouchiat
Vincent Bouchiat (born 1970) is a French condensed matter physicist and entrepreneur. He was a CNRS research director from 1997 to 2019. In 2019 he co-founded the company Grapheal SAS, of which he is currently CEO. Early life and education Bouchiat was born to Claude Bouchiat and Marie-Anne Bouchiat, both of whom were physicists. Vincent Bouchiat studied in Paris, partly at the Lycée Henri-IV. In 1993, he received an engineering degree from the School of Industrial Physics and Chemistry of Paris (ESPCI) and a master's degree in solid-state physics from Pierre and Marie Curie University in Paris. He completed his Ph.D. in the Quantronics group at CEA-Saclay in 1997 under the supervision of Michel Devoret and Daniel Estève. Career Directeur de recherche Bouchiat became a director of research at the French National Centre for Scientific Research (CNRS) in 1997. He was affiliated with the Institut Néel in Grenoble from 2012. Bouchiat also became a visiting professor at the physics department of the University of California, Berkeley in 2007. Grapheal SAS In 2019, Bouchiat co-founded the company Grapheal SAS, where he is currently CEO. It is a startup focusing on the healthcare applications of graphene. Research Bouchiat's PhD thesis is recognized as a pioneering study in the field of quantum computing hardware, showing the quantum superposition of charge states in a single Cooper pair box. This experiment paved the way for the realisation of a charge qubit. Bouchiat's research interests cover a wide range of solid state physics and multidisciplinary investigations, which include quantum information, superconductivity, carbon nanostructures (graphene and carbon nanotubes), bioelectronics, and translational research in medical sciences. Awards Bouchiat has won the following awards: Visiting Miller Professorship Award (2007) from the Miller Institute at the University of California, Berkeley Lee-Hsun Research Award (2017) from the Chinese Academy of Sciences (Institute of Metal Research) Yves Rocard Prize (2033) from the French Physical Society Personal life Bouchiat has a sister, Hélène Bouchiat, who is also a physicist. References External links LinkedIn profile PhD manuscript file 20th-century French physicists Condensed matter physicists Pierre and Marie Curie University alumni ESPCI Paris alumni 1970 births Living people 21st-century French physicists Research directors of the French National Centre for Scientific Research
Vincent Bouchiat
[ "Physics", "Materials_science" ]
522
[ "Condensed matter physicists", "Condensed matter physics" ]
70,423,945
https://en.wikipedia.org/wiki/Emergency%20data%20request
An emergency data request is a procedure used by U.S. law enforcement agencies for obtaining information from service providers in emergency situations where there is no time to get a subpoena. In 2022, Brian Krebs reported that emergency data requests were being spoofed by hackers to obtain confidential information. There have been proposals to secure emergency data requests using digital signatures, but this would require substantial technical and legal effort to implement. Digital signatures would also not solve the problem of compromised government and law enforcement email accounts: hackers could still compromise these accounts and use them to submit fraudulent emergency data requests. Additionally, there is no validated master list of authorized law enforcement personnel, making it difficult for service providers to verify the legitimacy of the requests. References Security engineering Social engineering (security) Cybercrime
Emergency data request
[ "Engineering" ]
164
[ "Systems engineering", "Security engineering" ]
70,425,236
https://en.wikipedia.org/wiki/Semiconductor%20industry%20in%20China
The Chinese semiconductor industry, including integrated circuit design and manufacturing, forms a major part of mainland China's information technology industry. China's semiconductor industry consists of a wide variety of companies, from integrated device manufacturers to pure-play foundries, fabless semiconductor companies and OSAT companies. Integrated device manufacturers (IDMs) design and manufacture integrated circuits. Pure-play foundries only manufacture devices for other companies, without designing them, while fabless semiconductor companies only design devices. Examples of Chinese IDMs are YMTC and CXMT, examples of Chinese pure-play foundries are SMIC, Hua Hong Semiconductor and Wingtech, examples of Chinese fabless companies are Zhaoxin, HiSilicon, Loongson and UNISOC, and examples of Chinese OSAT companies are JCET, Huatian Technology and Tongfu Microelectronics. Overview China is currently the world's largest semiconductor market in terms of consumption. In 2020, China represented 53.7% of worldwide chip sales, or $239.45 billion out of $446.1 billion. However, a large percentage are imported from multinational suppliers. In 2020, imports took up over 83% ($199.7 billion) of total chip sales. In response, the country launched a number of initiatives to reduce its reliance on foreign companies. To reduce reliance on foreign semiconductor companies, the China Integrated Circuit Industry Investment Fund (CICF) pools resources from state investors including the Ministry of Finance, China Tobacco, China Mobile, and China Development Bank. In 2022, the country announced a Made in China 2025 goal of 70% domestic production. China leads the world in terms of the number of new fabs under construction, with 8 out of 19 worldwide in 2021. A total of 17 fabs were expected to start construction between 2021 and 2023. Total installed capacity of Chinese-owned chipmakers was also set to increase from 2.96 million wafers per month (wpm) in 2020 to 3.57 million wpm in 2021. History Soviet-style system of industry The semiconductor industry in China began in 1956, when the country's first transistor was produced in a state lab. In 1965, China created its first integrated circuit. From 1956 to 1990, the industry followed a Soviet-style system of industrial organization. China's State Council prioritized semiconductor technology in its "Outline for Science and Technology Development, 1956–1967," leading to the establishment of semiconductor-related degree programs in five major universities. The Huajing Group's Wuxi Factory No. 742, operational from 1960, played a crucial role in training industry experts and supporting subsequent industrial strategies. From the mid-1960s through the late 1960s, China built up a dedicated semiconductor program. During the 1970s, China's semiconductor industry operated with research conducted in state labs and manufacturing in separate state-owned factories, hindering technology transfer. Most of the approximately 40 factories focused on producing basic diodes and transistors rather than integrated circuits (ICs). The Cultural Revolution from 1965 to 1975 further disrupted industry progress. However, by 1972, China was producing third-generation computers. Deng Xiaoping's economic reforms starting in 1978 initiated significant changes. By the early 1980s, under the sixth Five-Year Plan (1981–1985), the State Council formed a "Computer and Large Scale IC Lead Group" to modernize the industry. Despite importing 24 secondhand semiconductor lines by 1985, only the Wuxi Factory No. 742 met production targets.
Efforts then narrowed to focus on five key firms, but cumulative setbacks left the industry lagging behind global leaders. End of the 20th century In the 1990s, China adopted a strategy of concentrating resources on a few large firms to foster partnerships with foreign companies, aiming to accelerate technological advancement. Initiatives included joint ventures with Nortel, Philips, NEC, and ITT starting in the late 1980s and early 1990s. The eighth Five-Year Plan (1991–1995) focused on developing Huajing (operator of Wuxi Factory No. 742) into a leading integrated device manufacturer (IDM), supported by significant funding and a joint venture with Lucent Technologies. However, delays in implementation resulted in outdated manufacturing technologies and slower market entry. The ninth Five-Year Plan (1996–2000) introduced Project 909, aiming for a domestic firm, Huahong, to produce internationally competitive chips using Chinese intellectual property and engineers. While Huahong successfully partnered with NEC to enter production promptly, reliance on Japanese expertise limited knowledge transfer. Economic downturns in the global DRAM market by 2002 led to significant financial losses for Huahong, prompting changes in its partnership and operational strategies. 21st century In 2014, the China Integrated Circuit Industry Investment Fund was established in an effort to reduce dependence on foreign semiconductor companies. In addition to the China Integrated Circuit Industry Investment Fund, many other Chinese government guidance funds also frequently invest in companies in the semiconductor industry. Due to the rapid pace of Chinese semiconductor industry advances, on 7 October 2022, the U.S. government announced a major set of export restrictions toward China, with a focus on artificial intelligence and semiconductor technologies, with the aim of disrupting the development of China's semiconductor industry. In January 2023, these export controls were made multilateral with an agreement between the governments of the United States, Japan, and the Netherlands. Between October 2022 and May 2023, China's government responded with a diverse set of measures, including filing a suit in the World Trade Organization. On 26 December 2023, China prohibited the use of CPUs made by US companies Intel and AMD in Chinese government PCs and servers. The country instead approved 18 processors made by Chinese companies Loongson and Phytium. State-owned companies were also instructed to transition towards Chinese hardware by 2027. The ban was part of China's strategy to rely more on its domestically designed options in response to US sanctions and export controls. On 15 September 2024, China announced significant advances in its domestic semiconductor industry, unveiling two new deep ultraviolet (DUV) lithography machines. One of the machines operates at a wavelength of 193 nm with a resolution below 65 nm and an overlay accuracy below 8 nm. The second machine has a wavelength of 248 nm, with 110 nm resolution and 25 nm overlay accuracy. Foreign companies Samsung, which is currently the world's largest producer of NAND flash memory, has two plants in Xi'an which account for 42.5% of its total production capacity and 15.3% of worldwide NAND production capacity. It was the company's largest overseas investment in chip production, with an initial cost of $7 billion.
Domestic companies Integrated Device Manufacturers (IDMs) Yangtze Memory Technologies Corp (YMTC) Yangtze Memory Technologies Corp (YMTC) is a Chinese semiconductor integrated device manufacturer specializing in flash memory (NAND) chips. Founded in Wuhan, China in 2016, the company received backing from Tsinghua Unigroup. Prior to YMTC, China had no company capable of producing flash memory. Its consumer products are marketed under the brand Zhitai. In 2020, YMTC was using a 20 nm process to make 64-layer 3D NAND flash. In April 2020, the company unveiled its first 128-layer vertical NAND chip, based on its XTacking architecture, which then entered production with what was at the time the most advanced layer count in mass production. In 2021, YMTC was producing around 80,000 wafers per month, with plans to expand its first plant to reach 100,000 wpm capacity by 2022; this would have given the company around 6–8% of global market share. ChangXin Memory Technologies (CXMT) ChangXin Memory Technologies (CXMT) is a Chinese semiconductor integrated device manufacturer headquartered in Hefei, Anhui, specializing in the production of DRAM memory. As of 2020, ChangXin can manufacture LPDDR4 and DDR4 RAM on a 19 nm process with a capacity of 40,000 wafers per month. The company plans to increase output to 120,000 wpm and launch 17 nm (LP)DDR5 by the end of 2022, with a target total capacity of 300,000 wpm in the mid-to-long term. Hangzhou Silan Microelectronics Hangzhou Silan Microelectronics is a Chinese semiconductor company headquartered in Hangzhou. The company focuses on the design of integrated circuit (IC) chips and the manufacturing of semiconductor microelectronics-related products. It is one of the largest integrated device manufacturers (IDMs) in China. Other companies Silergy Pure-play foundries Semiconductor Manufacturing International Corporation (SMIC) Semiconductor Manufacturing International Corporation (SMIC) is a partially state-owned publicly listed Chinese pure-play semiconductor foundry company. It is the largest contract chip maker in mainland China and the 5th largest globally, with a market share of 5.3% in the second quarter of 2021. SMIC is headquartered in Shanghai and incorporated in the Cayman Islands. It has wafer fabrication sites throughout mainland China, offices in the United States, Italy, Japan, and Taiwan, and a representative office in Hong Kong. It provides integrated circuit (IC) manufacturing services on 350 nm to 14 nm process technologies. The state-owned civilian and military telecommunications equipment provider Datang Telecom Group as well as the China National Integrated Circuit Industry Investment Fund are major shareholders of SMIC. Hua Hong Semiconductor Hua Hong Semiconductor Limited is a publicly listed Chinese pure-play semiconductor foundry company based in Shanghai, established in 1996 as part of China's national efforts to boost its IC industry. Currently, Hua Hong's most advanced node is achieved by its subsidiary Shanghai Huali (HLMC), which in 2022 could manufacture at a 28/22 nm process node, with more advanced 14 nm technology under development. Hua Hong Semiconductor is currently mainland China's second largest chip-maker behind rival SMIC and the 6th largest globally, with a market share of 2.6% in the second quarter of 2021. Other companies Nexchip United Nova Technology Wingtech Fabless companies HiSilicon HiSilicon is a Chinese fabless semiconductor company based in Shenzhen, Guangdong and wholly owned by Huawei.
HiSilicon purchases licenses for CPU designs from ARM Holdings, including the ARM Cortex-A9 MPCore, ARM Cortex-M3, ARM Cortex-A7 MPCore, ARM Cortex-A15 MPCore, ARM Cortex-A53, and ARM Cortex-A57, and also for their Mali graphics cores. HiSilicon has also purchased licenses from Vivante Corporation for their GC4000 graphics core. HiSilicon is reputed to be the largest domestic designer of integrated circuits in China. In 2020, the U.S. instituted rules requiring licenses for American firms providing certain equipment to HiSilicon and for non-American firms that use American technologies to supply HiSilicon; Huawei announced it would stop producing its Kirin chipset from 15 September 2020 onwards. HiSilicon was overtaken by Chinese rival UNISOC in terms of mobile processor market share as a consequence. However, at the end of 2023 the Kirin 9000S processor, first used in the Huawei Mate 60, showed that HiSilicon was restarting its production of Kirin chipsets after a three-year hiatus, this time with entirely domestically produced chips. Hygon Information Technology Hygon Information Technology is a partially state-owned publicly listed Chinese fabless semiconductor company headquartered in Beijing. The company mainly handles central processing units (CPUs) based on Intel's x86 technology as well as domestic deep learning processors. Loongson Technology Loongson Technology is a Chinese fabless company that develops a family of general-purpose, MIPS architecture-compatible microprocessors, mainly used in personal computers and supercomputers. The processors were favoured by the Chinese government in its "Made in China 2025" push, which was also directed against US semiconductor sanctions. UNISOC UNISOC is a Chinese fabless semiconductor company headquartered in Shanghai which produces chipsets for mobile phones. UNISOC develops its business in two major fields: consumer electronics, which includes smart phones, feature phones, smart audio systems, smart wear and other areas; and industrial electronics, which covers such fields as LAN IoT, WAN IoT, and smart display. In 2021, it was the fourth largest mobile processor manufacturer in the world, after MediaTek, Qualcomm and Apple, with 9% of global market share. Will Semiconductor (OmniVision Group) Will Semiconductor is a publicly listed Chinese fabless semiconductor company headquartered in Shanghai. In May 2019, it acquired OmniVision Technologies. Zhaoxin Zhaoxin is a fabless semiconductor company, created in 2013 as a joint venture between VIA Technologies and the Shanghai Municipal Government. The company manufactures x86-compatible desktop and laptop CPUs. The term Zhào xīn means million core. The processors are created mainly for the Chinese market: the venture is an attempt to reduce Chinese dependence on foreign technology. Other companies Actions Semiconductor Alibaba Group (Yitian) Allwinner Technology Black Sesame Technologies Cambricon GigaDevice Horizon Robotics Ingenic Semiconductor Jiangnan Computing Lab (Sunway) Maxscend Microelectronics Montage Technology Moore Threads Phytium Technology PNC Process Systems Rockchip Sanan Optoelectronics Tencent Outsourced Semiconductor Assembly and Test (OSAT) JCET JCET Group Co., Ltd. is a public company headquartered in Jiangyin on China's eastern coast. It is the largest Outsourced Semiconductor Assembly and Test (OSAT) company in mainland China and the third-largest globally. JCET was formed in 1972, when Jiangyin converted a local factory to produce transistors.
JCET went public on the Shanghai Stock Exchange in 2003 and continued to grow over time. JCET provides a range of semiconductor packaging, assembly, manufacturing, and testing products and services. Other companies Huatian Technology Tongfu Microelectronics Semiconductor equipment manufacturers Advanced Micro-Fabrication Equipment (AMEC) Advanced Micro-Fabrication Equipment is a partially state-owned publicly listed Chinese company that manufactures semiconductor chip production equipment. It is one of the largest semiconductor equipment manufacturers in China. NAURA Technology Group NAURA Technology Group is a partially state-owned publicly listed Chinese company that manufactures semiconductor chip production equipment. It is currently the largest semiconductor equipment manufacturer in China. Shanghai Micro Electronics Equipment (SMEE) Shanghai Micro Electronics Equipment (SMEE) is a semiconductor manufacturing equipment manufacturer based in Shanghai, supplying lithography (DUV immersion) equipment and other equipment used in the semiconductor manufacturing industry. Currently, its most advanced product is the SSA600, with a resolution of 90 nm. SMEE is developing the SSA800, with a resolution of 28 nm, which will be followed by the SSA900, with a resolution of 22 nm. In December 2022, the United States Department of Commerce added SMEE to the Bureau of Industry and Security's Entity List. China Electronics Technology Group Corporation (CETC) China Electronics Technology Group Corporation (CETC) is China's third largest electronics and IT company, behind only Huawei and Lenovo. Its fields include communications equipment, computers, electronic equipment, IT infrastructure, networks, software development, research services, and investment and asset management for civilian and military applications. The company also manufactures semiconductors and semiconductor equipment used in the semiconductor manufacturing industry, largely for military applications. Other companies ACM Research Beijing Huafeng Test & Control Technology China Resources Microelectronics Dongfang Jingyuan Electron Hangzhou Changchuan Technology Hwatsing Technology KINGSEMI Jingjia Micro Piotech Shanghai Wanye Enterprises Skyverse Technology Wuhan Jingce Electronic Group Zhejiang Jingsheng Mechanical & Electrical Other developments Huawei Huawei is reportedly planning to build its own fabs, in cooperation with SMIC, in an attempt to promote vertical integration and reduce the impact of US sanctions such as those it was subjected to during the China–United States trade war. In 2023, Huawei released its Mate 60 Pro smartphone, which had a 7 nm application processor, the HiSilicon Kirin 9000S, manufactured by SMIC using SMIC's N+2 process node. See also Semiconductor industry Semiconductor industry in India Semiconductor industry in South Korea Semiconductor industry in Taiwan Economy of China Industry of China Notes References Sources Semiconductor industry by country Electronics industry in China Semiconductor device fabrication
Semiconductor industry in China
[ "Materials_science" ]
3,378
[ "Semiconductor device fabrication", "Microtechnology" ]
54,672,540
https://en.wikipedia.org/wiki/Quantum%20Boltzmann%20equation
The quantum Boltzmann equation, also known as the Uehling–Uhlenbeck equation, is the quantum mechanical modification of the Boltzmann equation, which gives the nonequilibrium time evolution of a gas of quantum-mechanically interacting particles. Typically, the quantum Boltzmann equation is given as only the "collision term" of the full Boltzmann equation, giving the change of the momentum distribution of a locally homogeneous gas, but not the drift and diffusion in space. It was originally formulated by L. W. Nordheim (1928) and by E. A. Uehling and George Uhlenbeck (1933). In full generality (including the p-space and x-space drift terms, which are often neglected) the equation is represented analogously to the Boltzmann equation: $\frac{\partial f}{\partial t} + \frac{\mathbf{p}}{m} \cdot \nabla_{\mathbf{x}} f - \nabla_{\mathbf{x}} U \cdot \nabla_{\mathbf{p}} f = \mathcal{C}[f]$, where $U$ represents an externally applied potential acting on the gas's p-space distribution and $\mathcal{C}[f]$ is the collision operator, accounting for the interactions between the gas particles. The quantum mechanics must be represented in the exact form of $\mathcal{C}[f]$, which depends on the physics of the system to be modeled. The quantum Boltzmann equation gives irreversible behavior, and therefore an arrow of time; that is, after a long enough time it gives an equilibrium distribution which no longer changes. Although quantum mechanics is microscopically time-reversible, the quantum Boltzmann equation gives irreversible behavior because phase information is discarded; only the average occupation number of the quantum states is kept. The solution of the quantum Boltzmann equation is therefore a good approximation to the exact behavior of the system on time scales short compared to the Poincaré recurrence time, which is usually not a severe limitation, because the Poincaré recurrence time can be many times the age of the universe even in small systems. The quantum Boltzmann equation has been verified by direct comparison to time-resolved experimental measurements, and in general has found much use in semiconductor optics. For example, the energy distribution of a gas of excitons as a function of time (in picoseconds), measured using a streak camera, has been shown to approach an equilibrium Maxwell–Boltzmann distribution. Application to semiconductor physics A typical model of a semiconductor may be built on the assumptions that: The electron distribution is spatially homogeneous to a reasonable approximation (so all x-dependence may be suppressed); The external potential is a function only of position and isotropic in p-space, and so may be set to zero without losing any further generality; The gas is sufficiently dilute that three-body interactions between electrons may be ignored. Considering the exchange of momentum between electrons with initial momenta $\mathbf{p}_1$ and $\mathbf{p}_2$ and final momenta $\mathbf{p}_3$ and $\mathbf{p}_4$, it is possible to derive the fermionic (Uehling–Uhlenbeck) collision term $\mathcal{C}[f] = \int d\mathbf{p}_2 \, d\mathbf{p}_3 \, d\mathbf{p}_4 \; w(\mathbf{p}_1, \mathbf{p}_2; \mathbf{p}_3, \mathbf{p}_4) \left[ f_3 f_4 (1 - f_1)(1 - f_2) - f_1 f_2 (1 - f_3)(1 - f_4) \right]$, where $f_i = f(\mathbf{p}_i, t)$ and the transition rate $w$ contains delta functions enforcing conservation of momentum and energy. References Statistical mechanics
Quantum Boltzmann equation
[ "Physics" ]
555
[ "Statistical mechanics" ]
54,678,886
https://en.wikipedia.org/wiki/Well%20cluster
A well cluster, also referred to as a monitoring well cluster, consists of multiple co-located monitoring wells that are constructed with intakes (well screens) at different depths in the subsurface. The purpose of a well cluster is to provide groundwater samples from discrete depths at approximately the same location, and/or to measure the vertical pressure gradient to calculate the vertical component of groundwater flow. Water levels can be measured in each individual monitoring well, providing vertical profiles of groundwater pressure (hydraulic head). The pressure difference between the groundwater table and the potentiometric surface in a submerged well can be used to determine the vertical hydraulic gradient. A shallow well is used to measure the water table and is equilibrated to atmospheric pressure. A deeper well, or piezometer, measures the potentiometric surface, determined from the water level observed when the well screen is submerged and the well is sealed. The associated change in pressure between the shallow well and the piezometer, and the corresponding length, usually taken as the distance between the centers of the two well screens, can be applied to Darcy's law to calculate the vertical groundwater flow. Well clusters are different from nested wells in that each monitoring well is constructed in a separate borehole (as opposed to multiple well casings in a single borehole, which is the case with nested wells). A key advantage of a well cluster over a nested well is that there is less potential for vertical leakage and hydraulic short-circuiting, because reliable annular seals are installed in each separate borehole in the cluster. However, well clusters require drilling multiple boreholes, which results in higher construction costs compared to nested wells. Economical alternatives to well clusters are multilevel groundwater monitoring systems, also referred to as engineered nested wells. Further reading Johnson, T.L. 1983. "A comparison of well nests vs. single-well completions." Ground Water Monitoring Review 3 (1): 76–78. doi:10.1111/j.1745-6592.1983.tb00864.x Water wells
Well cluster
[ "Chemistry", "Engineering", "Environmental_science" ]
413
[ "Hydrology", "Water wells", "Environmental engineering" ]
68,882,786
https://en.wikipedia.org/wiki/ZNF821
Zinc Finger Protein 821, also known as ZNF821, is a protein encoded by the ZNF821 gene. This gene is located on chromosome 16 and is highly expressed in the testes, moderately expressed in the brain, and expressed at low levels in 23 other tissues. The encoded protein is 412 amino acids long with two zinc finger motifs (C2H2 type) and an STPR domain built from 23-amino-acid repeat units. Gene Locus ZNF821 is located at 16q22.2 on the minus strand; it is composed of 35,657 bases spanning from base 71,893,583 to 71,929,239. ZNF821 has 8 exons and is located in the same neighborhood as four other genes: ATXN1L, IST1, PKD1L3, and AP1G1. Transcriptional Regulation Transcription of ZNF821 is handled by the promoter GXP_9784938, which is 539 bases long and located from base 71,884,046 to 71,884,585. The promoter region begins 404 base pairs upstream of the beginning of transcription. Several transcription factors with scores greater than 0.9 are predicted to regulate ZNF821 expression. Expression ZNF821 is highly expressed in the testes, at almost 2.5 times the level in the brain, the tissue with the next-highest expression. Expression in the brain occurs primarily during fetal development, with lower levels of expression occurring in the cerebellum. There are low levels of expression in most other tissues. mRNA Variants and Isoforms ZNF821 has 7 different transcript variants and 4 isoforms. Variant 1/Isoform 1 is the second longest but the most abundant of all the variants and isoforms. While variant 2 is longer, it contains one fewer exon. Variant 1, Isoform 1 is 1987 bases long with a 5' UTR 415 bases long and a 3' UTR 433 bases long. Protein Characteristics The protein encoded by the ZNF821 gene is 412 amino acids long with a calculated molecular weight of ~47 kDa and a predicted isoelectric point of 6.14. Compared to the rest of the human proteome, it has decreased amounts of isoleucine and tyrosine residues as well as increased levels of arginine residues. Structure The ZNF821 protein contains two C2H2 zinc finger motifs (spanning amino acids 120–140 and 152–172, respectively) and an STPR (one-score-and-three-amino-acid peptide repeat) domain (spanning amino acids 223–314) containing a bipartite nuclear localization signal. This STPR domain is a double-stranded DNA-binding domain with traits similar to the silkworm FMBP-1 STPR domain and is thought to be responsible for the nuclear localization of the ZNF821 protein. The secondary structure of the ZNF821 protein is composed of several alpha-helical structures along with two small regions of beta sheets. The tertiary structure of the ZNF821 protein provides exposure of the zinc fingers for presumed DNA binding. Cellular Localization The ZNF821 protein binds to DNA, making it highly likely to be localized to the nucleus; there is also a bipartite nuclear localization signal involving residues Lys280 to Arg297, Lys304 to Leu320, and Lys338 to Arg354. An analysis of the subcellular localization in both close and distant orthologs resulted in a >99% chance of nuclear localization for all orthologs. Regulation The ZNF821 protein is predicted to be modified post-translationally at several different positions. When compared with both close and distant orthologous sequences, two phosphorylation sites are conserved: the serine at position 2 and the threonine at position 7. It is also predicted by several sources to have further phosphorylation sites at the serine at position 254 and the tyrosine at position 279. A short computational illustration of the protein-level statistics quoted above follows.
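As an aside to the protein characteristics given above (412 residues, ~47 kDa, predicted pI 6.14), such values can be computed from a primary sequence with standard tools. The following minimal sketch uses Biopython's ProteinAnalysis; the short sequence is a made-up placeholder, not the actual ZNF821 sequence, which would be retrieved from a database such as UniProt or NCBI RefSeq.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder fragment only -- substitute the real 412-residue ZNF821 sequence.
sequence = "MSDTPSTRLKQELLARQHRRAEEMAKFGIDPKQLSTPRSTPRSTPR"

analysis = ProteinAnalysis(sequence)
print(f"Molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"Predicted isoelectric point: {analysis.isoelectric_point():.2f}")
print(f"Amino acid composition: {analysis.get_amino_acids_percent()}")
```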
Interactions Several proteins have been shown to interact with the ZNF821 protein, many of them related to transcriptional regulation. Homology Paralogs ZNF821 has no paralogs in humans. Orthologs There are orthologs for ZNF821 across vertebrates, but none for the protein in invertebrates. The zinc finger motifs are conserved into invertebrates, whereas the STPR domain is only present in mammals. The relative rate of divergence is slow when compared to the rates of two reference proteins, cytochrome c and fibrinogen alpha, but increases to slightly faster than cytochrome c as the date of divergence gets closer to the present. Clinical Significance ZNF821 has been associated in some capacity with several different diseases and conditions. It has been implicated in causing craniosynostosis through interactions with the transcription factor BCL11B, by affecting the charges on the arginine-3, arginine-5, and lysine-3 residues and thereby increasing their conformational flexibility. It has also been found to be a possible biomarker for methamphetamine-associated psychosis (MAP) via the process of RNA degradation. Another disease association is with breast cancer, as part of a DNA-repair subnetwork: ZNF821 was found to be dysregulated among breast cancer patients. Finally, there is a study showing an increase in methylation over time on ZNF821 in Parkinson's disease patients who did not receive L-dopa/entacapone. This provides a clearer view of changes due only to Parkinson's pathophysiology. References Proteins
ZNF821
[ "Chemistry" ]
1,201
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
68,883,824
https://en.wikipedia.org/wiki/T7%20expression%20system
The T7 expression system is used in the field of microbiology to clone recombinant DNA using strains of E. coli. It is the most popular system for expressing recombinant proteins in E. coli. By 2021, this system had been described in over 220,000 research publications. Development The sequencing and annotation of the genome of the T7 bacteriophage took place in the 1980s at the U.S. Department of Energy's Brookhaven National Laboratory, under the senior biophysicist F. William Studier. Soon, the lab was able to clone the T7 RNA polymerase and use it, along with the powerful T7 promoter, to transcribe copious amounts of almost any gene. The development of the T7 expression system has been considered the most successful biotechnology developed at the Brookhaven National Laboratory, having been licensed by over 900 companies and generating over $55 million for the lab. Mechanism An expression vector, most commonly the pET expression vector, is engineered to integrate two essential components: a T7 promoter and a gene of interest downstream of the promoter and under its control. The expression vector is transformed into one of several relevant strains of E. coli, most frequently BL21(DE3). The E. coli cell also has its own chromosome, which carries a gene that is expressed to produce T7 RNA polymerase. (This polymerase originates from the T7 phage, a bacteriophage which infects E. coli cells and overrides their cellular machinery to produce more copies of itself; in DE3 strains the polymerase gene has been integrated into the host chromosome.) T7 RNA polymerase is responsible for beginning transcription at the T7 promoter of the transformed vector. The T7 polymerase gene is itself under the control of a lac promoter. Normally, both the lac promoter and the T7 promoter are repressed in the E. coli cell by the Lac repressor. In order to initiate transcription, an inducer must bind to the lac repressor and prevent it from inhibiting expression of the T7 polymerase gene. Once this happens, the gene can be normally transcribed to produce T7 RNA polymerase. T7 RNA polymerase, in turn, can bind to the T7 promoter on the expression vector and begin transcribing its downstream gene of interest. To stimulate this process, the inducer IPTG can be added to the system. IPTG is a reagent which mimics the structure of allolactose and can therefore bind to the lac repressor and prevent it from inhibiting gene expression. Once enough IPTG is added, the T7 polymerase gene is normally transcribed, and transcription of the gene of interest downstream of the T7 promoter also begins. Transcription of a recombinant gene under the control of the T7 promoter is about 8 times faster than transcription under the control of E. coli RNA polymerase. Basal levels of expression of T7 RNA polymerase in the cell are also inhibited by the bacteriophage T7 lysozyme, which results in a delay in the accumulation of T7 RNA polymerase until lysozymic activity is saturated. Application During the COVID-19 pandemic, mRNA vaccines were developed by Moderna and Pfizer to combat the spread of the virus. Both Moderna and Pfizer have relied on the T7 expression system to generate the large quantities of mRNA needed to manufacture the vaccines. References T-phages Cloning
T7 expression system
[ "Engineering", "Biology" ]
727
[ "Cloning", "Genetic engineering" ]
68,883,987
https://en.wikipedia.org/wiki/Lufotrelvir
Lufotrelvir (PF-07304814) is an antiviral drug developed by Pfizer which acts as a 3CL protease inhibitor. It is a prodrug, with the phosphate group being cleaved in vivo to yield the active agent PF-00835231. Lufotrelvir is in human clinical trials for the treatment of COVID-19 and shows good activity against SARS-CoV-2, including several variant strains, but unlike the related drug nirmatrelvir it is not orally active and must be administered by intravenous infusion, and so has been the less favoured candidate for clinical development overall. See also 3CLpro-1 Bemnifosbuvir Baloxavir marboxil Favipiravir GC376 GRL-0617 Molnupiravir Remdesivir Ribavirin Rupintrivir S-217622 Triazavirin References Anti–RNA virus drugs Antiviral drugs COVID-19 drug development Drugs developed by Pfizer Amides Pyrrolidones Phosphonic acids Indoles Methoxy compounds SARS-CoV-2 main protease inhibitors Isobutyl compounds
Lufotrelvir
[ "Chemistry", "Biology" ]
258
[ "Antiviral drugs", "Biocides", "Drug discovery", "Functional groups", "COVID-19 drug development", "Amides" ]
68,886,251
https://en.wikipedia.org/wiki/Proof%20of%20identity%20%28blockchain%20consensus%29
Proof of identity (PoID) is a consensus protocol for permission-less blockchains, in which each uniquely identified individual receives one equal unit of voting power and the associated rewards (minting token). The protocol is based on biometric identification, humanity identification parties, and additional verification parties. The proof of identity supersedes the approach of proof of work and proof of stake, which distribute voting power and rewards to participants according to their investment in some activity or resource, and introduces the opportunity to create a universal basic income (UBI) for individuals. The proof of identity solves the problem with the proof of personhood, in which individuals are requested to attend recurrent pseudonymous parties, and creates a network that is permanently secured and censorship-resilient. Background Currently used proofs of investment In a permission-less network, some kind of proof is required to prevent Sybil attacks, i.e., the event in which an attacker gains control over the transactions of the network by creating multiple users generated with a malicious script. The most common methods to prevent Sybil attacks are proofs of investment (proof of work, proof of stake) that require participants of the network to invest in some activity or resource as evidence of genuine involvement in the chain. The growing criticism of this approach is that voting power and rewards are not distributed equally among individuals; instead, big holders/corporations benefit the most from the network. Proof of investment blockchains are thus prone to the formation of oligarchies and marginally appeal to small investors/holders, who receive minimal rewards. In the case of proof of work, there are additional sustainability concerns over the amount of electrical energy wasted as proof. The idea of having a "unique identity system" as a consensus protocol for cryptocurrencies, which would give each human user one and only one anti-Sybil participation token, was initially proposed in 2014 by Vitalik Buterin. The proof of personhood In contrast with proofs of investment, proof of personhood aims to allocate to each individual one equal unit of voting power and its associated reward. In the PoP protocol, each individual is required to demonstrate their humanity and uniqueness, regardless of their identity, by attending a pseudonymous party. To preserve privacy, attendances at parties are anonymous, and individuals can wear masks or hide their appearance. Whilst the PoP protocol achieves the goal of democratizing blockchain networks, some criticisms have been raised over the recurrent nature of PoP parties, and more specifically: To avoid multiple attendances at pseudonymous parties, each individual has to attend a new party every time the network expands; this suggests the process will be endless or will leave out those individuals unable to attend the last round of parties. Because there is no control over the creation of PoP parties and anyone can organize them, more study should be conducted to rule out the possibility for an attacker to create a high number of parties with the intent of populating one or more parties entirely with forgeries and hence gaining a considerable number of minting tokens. Because of the recurrent nature of pseudonymous parties, the PoP network does not guarantee the creation of a value stable in time. Any epoch can be better or worse than the previous one, so the network is possibly exposed to censorship and instability.
Other protocols Some other protocols, based on the use of national IDs, online face-recognition CAPTCHA solving, and social network identification, have also been proposed; however, in general, they are deemed not secure enough against the threats posed by AI engines' capacity to create spoofed identities when a banking-grade security level is to be attained. Biometric data-based proofs The proof of UniqueID To avoid the problems associated with the recurrent nature of PoP parties and to strengthen identification methods against spoofed identities, the use of biometric identification and data storage of individuals/minters was introduced. The first comprehensive study was proposed in June 2018. According to the white paper, individuals are biometrically identified in person by local verifiers. Then their data are encrypted using a trusted setup and recorded in the blockchain. The system relies on the Ethereum blockchain for the execution of a set of smart contracts. The protocol also proposes the use of CAPTCHA parties and the involvement of trusted, famous people to strengthen the system against Sybil attacks. The proof of UniqueID achieves the goal of assigning one minting token to each identified individual while not requiring attendance at multiple recurrent PoP parties; however, there are some challenges and possible criticisms to consider under this proposal: The reliance on the Ethereum blockchain ultimately ties down the security of the system to the security of the Ethereum blockchain and may present serious technical challenges. A trusted setup for biometric data encryption may not guarantee the privacy of users against a governmental request for data disclosure. This seems to significantly penalize the censorship resistance of the system. There is no certainty against the future possibility that AI engines will be able to solve CAPTCHAs. Additional investigation and study may be required to rule out the possibility of carrying out a Sybil attack under this protocol. In fact, it is possible to imagine a scenario in which initial parties are kicked off by a colluding group, or a group of legitimate verifiers colludes and starts to generate identity forgeries, particularly in remote locations (e.g., Papua New Guinea). Once spoofed identities successfully infiltrate the system, there are no procedures in place to detect them. The proof of identity The proof of identity protocol proposes to overcome the possibility of biometric spoofed identities and collusion between participants using an AI engine stored in the blockchain, which randomly organizes humanity identification parties and constantly computes the probability of cheating for each individual/area/country, triggering additional verification parties either in a pyramid scheme or in a random way. The identification procedure requires each party attendee to perform a face recognition of all the others, establishing the principle "everyone or no one is cheating," which ultimately requires attackers to collude globally to avoid being discovered. The protocol also introduces a new proposal to preserve the privacy of individuals: biometric data are stored partially encrypted. The amount of public data stored in the blockchain is enough to rule out a good number of biometric collisions, but it is not enough to securely identify a person. Once biometric collisions occur, minters are requested by the system to temporarily disclose their encryption key. The system can be implemented according to the CanDID methodology. 
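The "everyone or no one is cheating" rule lends itself to a compact illustration. The following sketch is hypothetical (the record layout and the function name party_validates are invented for the example, and no PoID reference implementation is implied): a party is accepted only when every attendee agrees on the headcount and every ordered pair of attendees has a successful face recognition, so a single spoofed identity that cannot return recognitions invalidates the whole party.

```python
from itertools import permutations

def party_validates(attendees, reported_counts, recognitions):
    """Accept a humanity-identification party only under the rule
    'everyone or no one is cheating'.

    attendees       -- list of attendee IDs assigned to the venue
    reported_counts -- dict mapping attendee ID -> headcount they reported
    recognitions    -- set of (verifier, subject) pairs, one per successful
                       face recognition performed in the minting app
    """
    n = len(attendees)
    # Everyone must agree on who is physically present.
    if any(reported_counts.get(a) != n for a in attendees):
        return False
    # Every attendee must have recognized every other attendee.
    required = set(permutations(attendees, 2))
    return required <= recognitions

# A three-person party where one recognition is missing is rejected whole.
attendees = ["alice", "bob", "carol"]
counts = {a: 3 for a in attendees}
recs = set(permutations(attendees, 2)) - {("bob", "carol")}
print(party_validates(attendees, counts, recs))  # False
```

Because acceptance is all-or-nothing, a mixed party of honest individuals and forgeries cannot validate, which is the property the pyramid-mode verification parties then exploit at higher levels.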
The proof of identity addresses the problems with the proof of personhood and UniqueID; however, there are some challenges and possible criticisms to consider: The amount of data to store in the blockchain is considerable and may require some daily downtime for cohort leader servers to reboot. The system includes the creation of a PoID global organization that facilitates the kick-off of identification venues and rules out rare biometric disputes. This may be seen as a center of authority. Summary of the main protocols based on individuals The following table summarizes the characteristics of the three main individuals-based protocols: The proof of identity protocol The proof of identity protocol combines state-of-the-art 3D face-mapping recognition technologies, attendance at humanity identification parties, and the decentralized (stored in the blockchain) supervision of an AI engine that randomly forms parties and keeps organizing additional verification parties to rule out any possible cheating. The protocol is summarized in the following scheme: Humanity identification/verification parties are designed in a way that enforces the principle "everyone or no one is cheating," ruling out the possibility of having a party formed by honest individuals and spoofed identities at the same time: Individuals gather in the identification venue (fig. 1) randomly assigned by the AI engine. Individuals are requested to first confirm the number of attendees and then to perform the face recognition of all the others using their minting app. One or more verifiers coming from remote locations attend the party incognito. Parties kick off in a region once there are enough online data submissions and sufficient interoperability of verifiers. The process includes a public offer to interested "identification entrepreneurs" who are willing to make identification venues available; the PoID global organization facilitates the procedure. Biometric disputes are very rare in the PoID because the 3D face-mapping technology is able to differentiate close biometric twins; the PoID Global Organization is able to rule out the rare possible disputes while not constituting a significant center of authority in the system. Additional verification parties are run by the PoID AI engine to check the status of the ecosystem, and by leveraging the principle "everyone or no one is cheating" identity forgeries are discovered by forming verification parties in a pyramid mode, i.e., higher-level parties are made of individuals coming from single lower-level parties. The AI engine also searches for cheats and computes their probability in a random way, outside the principle of "everyone or no one is cheating." Use cases for the proof of identity network The proof of identity consensus protocol has the distinguishing properties of being: Non-quantitative: once the network is sufficiently populated, any blockchain/cryptocurrency can join the network and have its blocks validated by the PoID. Permanently secured against Sybil attacks: the PoID security is based on the impossibility for an attacker to create identity forgeries. Permanently censorship resilient: because of the high level of distribution and decentralization and the privacy of the minters, the PoID is only marginally affected by censorship. These conditions suggest the possibility for the PoID to become a globally used consensus protocol aggregating all blockchains/cryptocurrencies under the same network. 
The second notable use is the creation of a universal basic income for individuals. A third possible use is to create a direct instrument for democratic participation: the PoID network consists of securely and uniquely identified individuals, which is the prerequisite for running referendums and surveys. References Turing tests Cryptography Cryptocurrencies Blockchains
Proof of identity (blockchain consensus)
[ "Mathematics", "Engineering" ]
2,061
[ "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Artificial intelligence engineering", "Turing tests" ]
68,889,599
https://en.wikipedia.org/wiki/Jedi2
Jedi2 is a chemical compound which acts as an agonist for the mechanosensitive ion channel PIEZO1, and is used in research into the function of touch perception. See also Yoda1 and Jedi1 References Furans Thiophenes
Jedi2
[ "Chemistry" ]
54
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,889,834
https://en.wikipedia.org/wiki/SB-366791
SB-366791 is a drug which acts as a potent and selective blocker of the TRPV1 ion channel. It has analgesic effects in animal studies, and is used in research into pain and inflammation. See also AMG-517 AMG-9810 Discovery and development of TRPV1 antagonists References 4-Chlorophenyl compounds Amides 3-Methoxyphenyl compounds Transient receptor potential channel modulators
SB-366791
[ "Chemistry" ]
99
[ "Amides", "Functional groups" ]
73,276,921
https://en.wikipedia.org/wiki/Bohr%E2%80%93Favard%20inequality
The Bohr–Favard inequality is an inequality appearing in a problem of Harald Bohr on the boundedness over the entire real axis of the integral of an almost-periodic function. The ultimate form of this inequality was given by Jean Favard; the latter materially supplemented the studies of Bohr and studied the arbitrary periodic function $f(x) = \sum_{k=n}^{\infty} (a_k \cos kx + b_k \sin kx)$ with continuous derivative $f^{(r)}(x)$ for given constants $n$ and $r$ which are natural numbers. The accepted form of the Bohr–Favard inequality is $\|f\|_C \le K \|f^{(r)}\|_C$, where $\|f\|_C = \max_x |f(x)|$, with the best constant $K$: $K = \sup_{\|f^{(r)}\|_C \le 1} \|f\|_C$. The Bohr–Favard inequality is closely connected with the inequality for the best approximations of a function and its $r$th derivative by trigonometric polynomials of an order at most $n$ and with the notion of Kolmogorov's width in the class of differentiable functions (cf. Width). References Inequalities Theorems in real analysis
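For the classical case $n = 1$ (periodic functions with no constant term), the best constants admit a closed series form, the Favard (or Akhiezer–Krein–Favard) constants. The display below is supplied for concreteness from the standard approximation-theory literature and is not a quotation from this article:

```latex
\[
  \|f\|_C \le K_r \,\|f^{(r)}\|_C ,
  \qquad
  K_r = \frac{4}{\pi} \sum_{j=0}^{\infty}
        \left[ \frac{(-1)^{j}}{2j+1} \right]^{r+1},
\]
% for example K_1 = \pi/2 and K_2 = \pi^2/8.
```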
Bohr–Favard inequality
[ "Mathematics" ]
169
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Mathematical analysis stubs", "Theorems in real analysis", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems" ]
73,278,652
https://en.wikipedia.org/wiki/Sectorial%20operator
In mathematics, more precisely in operator theory, a sectorial operator is a linear operator on a Banach space whose spectrum lies in a sector in the complex plane and whose resolvent is uniformly bounded from above outside any larger sector. Such operators might be unbounded. Sectorial operators have applications in the theory of elliptic and parabolic partial differential equations. Definition Let $(X, \|\cdot\|)$ be a Banach space. Let $A$ be a (not necessarily bounded) linear operator on $X$ and $\sigma(A)$ its spectrum. For the angle $0 < \omega \le \pi$, we define the open sector $\Sigma_{\omega} := \{ z \in \mathbb{C} \setminus \{0\} : |\arg z| < \omega \}$, and set $\Sigma_{0} := (0, \infty)$ if $\omega = 0$. Now, fix an angle $\omega \in [0, \pi)$. The operator $A$ is called sectorial with angle $\omega$ if $\sigma(A) \subseteq \overline{\Sigma_{\omega}}$ and if $\sup_{z \in \mathbb{C} \setminus \overline{\Sigma_{\psi}}} \|z(z - A)^{-1}\| < \infty$ for every larger angle $\psi \in (\omega, \pi)$. The set of sectorial operators with angle $\omega$ is denoted by $\operatorname{Sect}(\omega)$. Remarks If $\omega \in (0, \pi)$, then $\Sigma_{\omega}$ is open and symmetric over the positive real axis with angular aperture $2\omega$. Bibliography References Functional analysis Operator theory
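As an illustration consistent with this definition (supplied here, not part of the original article), a nonnegative self-adjoint operator $A$ on a Hilbert space is sectorial with angle $0$, since the spectral theorem gives the resolvent bound

```latex
\[
  \|(z - A)^{-1}\| \le \frac{1}{\operatorname{dist}(z, [0,\infty))}
  \qquad (z \notin [0,\infty)),
\]
\[
  \text{and for } z \notin \overline{\Sigma_{\psi}} \text{ with }
  0 < \psi \le \tfrac{\pi}{2}:
  \qquad
  \|z(z - A)^{-1}\|
  \le \frac{|z|}{|z|\sin\psi}
  = \frac{1}{\sin\psi} < \infty ,
\]
% so the supremum outside every larger sector is finite and
% A \in \operatorname{Sect}(0).
```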
Sectorial operator
[ "Mathematics" ]
167
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
73,284,126
https://en.wikipedia.org/wiki/Pentacyanocobaltate
In chemistry, pentacyanocobaltate is the coordination complex with the formula [Co(CN)5]3−. When crystallized with a quaternary ammonium cation, it can be obtained as a yellow solid. Pentacyanocobaltate attracted attention as an early example of a metal complex that reacts with hydrogen. It contains low-spin cobalt(II), with a doublet ground state. Synthesis and structure Aqueous solutions of pentacyanocobaltate are produced by the addition of five or more equivalents of a cyanide salt to a solution of a cobalt(II) salt. Initially this reaction produces insoluble cobalt dicyanide, but this solid dissolves in the presence of the excess cyanide. Pentacyanocobaltate forms within seconds. When prepared using a quaternary ammonium (quat) cyanide, crystals can be obtained with the formula (quat)3[Co(CN)5]. According to X-ray crystallography, the salt features square pyramidal [Co(CN)5]3−. Reactions Solutions of [Co(CN)5]3− undergo a variety of reactions. The complex attracted attention in the 1940s for its reactivity toward hydrogen, which is now understood to produce a cobalt hydride: 2 [Co(CN)5]3− + H2 → 2 [HCo(CN)5]3− When allowed to stand as a dilute solution for several minutes, the complex reacts with water to give two Co(III) derivatives: 2 [Co(CN)5]3− + H2O → [HCo(CN)5]3− + [Co(CN)5(OH)]3− In concentrated solution, the complex dimerizes: 2 [Co(CN)5]3− → [(NC)5Co-Co(CN)5]6− With benzyl chloride and related alkylating agents, Co(III) alkyls are formed: 2 [Co(CN)5]3− + PhCH2Cl → [PhCH2Co(CN)5]3− + [Co(CN)5Cl]3− References Cyano complexes Cobalt(II) compounds Anions Cobalt complexes Cyanometallates
Pentacyanocobaltate
[ "Physics", "Chemistry" ]
317
[ "Ions", "Matter", "Anions" ]
73,285,598
https://en.wikipedia.org/wiki/Iron%20and%20Steel%20Board
The Iron and Steel Board was a governmental body, originally established in 1946, to supervise the work and development of the United Kingdom iron and steel industry. It was reestablished in 1953 and was abolished in 1967. Iron and Steel Board 1946-48 In 1946 the Labour government established an Iron and Steel Board to control the price of raw materials, finished products, and steel imports; and to regulate investment, pooling arrangements, and the development of new plant and equipment. Board members were appointed by the Minister of Supply and represented industry employers, workers and consumers. Personnel The Board originally comprised: Sir Archibald Forbes (chairman) Sir Alan Barlow A. Callighan Sir Lincoln Evans G. H. Latham R. Mather Sir Wilfred Ayre A. C. Boddis (secretary) The Board was staffed by about 100 civil servants from the Ministry of Supply. Nationalisation The Board received little cooperation from the British Iron and Steel Federation. In October 1948, in light of the changed circumstances likely to arise from the government's nationalisation proposals, five of the board members (the trade unionists excepted) refused a further term of office. As a consequence, the Board nominally ceased to exist from 1 October 1948. The work of the Board continued within the Ministry of Supply. The Labour government nationalised the iron and steel industry. This was enacted by the Iron and Steel Act 1949. The Iron and Steel Bill included provisions for the establishment of a National Iron and Steel Board. During the passage of the Bill through Parliament the proposed Iron and Steel Board became the Iron and Steel Corporation. Iron and Steel Board 1953-67 The Conservative government denationalised the iron and steel industry under the provisions of the Iron and Steel Act 1953. The 1953 Act established a new Iron and Steel Board which began operating on 13 July 1953. The Board had a duty 'to exercise a general supervision over the iron and steel industry... with a view to promoting the efficient, economic and adequate supply, under competitive conditions, of iron and steel products'. There were criticisms of the work of the Board: its powers were sufficient while iron and steel were in short supply, but when iron and steel became plentiful in the late 1950s the Board's powers to set maximum prices and to veto development became meaningless. This 'toothless tiger', however, suited the steelmasters. Personnel The 1953 Act prescribed that the Board should comprise not less than 10 and not more than 15 members. They were appointed by the Minister of Supply. In 1953 the Board initially comprised: Sir Archibald Forbes (chairman) Sir Lincoln Evans (vice-chairman) Robert Shone Sir Andrew McCance Neville Rollason James Owen Wilfred Beard James Shaw Charles Connell Sir Percy Lister George Beharrell Sir Cyril Musgrave became chairman on 1 March 1959 upon the retirement of Sir Archibald Forbes. Other staff included: Sir Robert Shone, executive member (1955-62); J. Grieve-Smith, head of the economics division (1962-63); W. Taplin and Dr R. Robson, chief economists/senior economic advisors (1955-58); H. McArthur, head of finance/prices (1955-57); R.W. Foad, director of finance (1954-59). Ministerial responsibility In 1955 ministerial responsibility for the industry, including the Board, passed from the Ministry of Supply to the Board of Trade. In 1957 responsibility passed to the Ministry of Power. 
Dissolution In 1965 the Labour government announced plans to nationalise the iron and steel industry. The Iron and Steel Act 1967 vested the industry in the British Steel Corporation and the Iron and Steel Board was abolished. Records The records of the Iron and Steel Board are held at the National Archives, reference BE 1; and at the Modern Records Centre. See also Iron and Steel Corporation of Great Britain British Iron and Steel Federation British Steel Corporation History of the steel industry (1850–1970) References 1946 establishments in the United Kingdom 1948 disestablishments in the United Kingdom 1953 establishments in the United Kingdom 1967 disestablishments in the United Kingdom Former nationalised industries of the United Kingdom Government agencies established in 1946 Government agencies disestablished in 1948 Government agencies established in 1953 Government agencies disestablished in 1967 Metallurgical industry of the United Kingdom Steel industry of the United Kingdom
Iron and Steel Board
[ "Chemistry" ]
874
[ "Metallurgical industry of the United Kingdom", "Metallurgical industry by country" ]
73,287,030
https://en.wikipedia.org/wiki/Spatial%20neural%20network
Spatial neural networks (SNNs) constitute a supercategory of tailored neural networks (NNs) for representing and predicting geographic phenomena. They generally improve both the statistical accuracy and reliability of the a-spatial/classic NNs whenever they handle geo-spatial datasets, and also of the other spatial (statistical) models (e.g. spatial regression models) whenever the geo-spatial datasets' variables depict non-linear relations. Examples of SNNs are the OSFA spatial neural networks, SVANNs and GWNNs. History Openshaw (1993) and Hewitson et al. (1994) started investigating the applications of the a-spatial/classic NNs to geographic phenomena. They observed that a-spatial/classic NNs outperform the other extensively applied a-spatial/classic statistical models (e.g. regression models, clustering algorithms, maximum likelihood classifications) in geography, especially when there exist non-linear relations between the geo-spatial datasets' variables. Thereafter, Openshaw (1998) also compared these a-spatial/classic NNs with other modern and original a-spatial statistical models of that time (i.e. fuzzy logic models, genetic algorithm models); he concluded that the a-spatial/classic NNs are statistically competitive. Thereafter scientists developed several categories of SNNs – see below. Spatial models Spatial statistical models (aka geographically weighted models, or merely spatial models), like the geographically weighted regressions (GWRs), SNNs, etc., are spatially tailored (a-spatial/classic) statistical models, designed to learn and model the deterministic components of the spatial variability (i.e. spatial dependence/autocorrelation, spatial heterogeneity, spatial association/cross-correlation) from the geo-locations of the geo-spatial datasets' (statistical) individuals/units. Categories There exist several categories of methods/approaches for designing and applying SNNs. One-Size-Fits-All (OSFA) spatial neural networks use the OSFA method/approach for globally computing the spatial weights and designing a spatial structure from the originally a-spatial/classic neural networks. Spatial Variability Aware Neural Networks (SVANNs) use an enhanced OSFA method/approach that locally recomputes the spatial weights and redesigns the spatial structure of the originally a-spatial/classic NNs at each geo-location of the (statistical) individuals/units' attributes' values. They generally outperform the OSFA spatial neural networks, but they do not consistently handle the spatial heterogeneity at multiple scales. Geographically Weighted Neural Networks (GWNNs) are similar to the SVANNs, but they use the so-called Geographically Weighted Model (GWM) method/approach by Lu et al. (2023) to locally recompute the spatial weights and redesign the spatial structure of the originally a-spatial/classic neural networks. Like the SVANNs, they do not consistently handle spatial heterogeneity at multiple scales. Applications There exist case-study applications of SNNs in: energy, for predicting the electricity consumption; agriculture, for classifying the vegetation; and real estate, for appraising the premises. See also Statistics Neural networks' supercategories Statistical software Quantitative geography Spatial analysis GIS software References Neural network architectures Spatial analysis
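To make the local-recalculation idea concrete, here is a minimal numpy sketch of the step shared by SVANN/GWNN-style models: for each focal geo-location, training samples are weighted by a Gaussian kernel of geographic distance and a small local model is refit. Everything here is an assumption for illustration (the function names, the Gaussian kernel choice, and the use of a one-layer weighted least-squares model in place of a full neural network); it is not code from the cited papers.

```python
import numpy as np

def gaussian_weights(coords, focal, bandwidth):
    """Gaussian kernel weights from geographic distance to a focal site."""
    d = np.linalg.norm(coords - focal, axis=1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def fit_local_model(X, y, w):
    """Weighted least squares as the simplest stand-in for a local
    'network' (one linear layer, no hidden units), refit per location."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add an intercept column
    sw = np.sqrt(w)                            # weight the residuals
    beta, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return beta

def predict_at(coords, X, y, focal, x_new, bandwidth=1.0):
    """GWNN/SVANN-style step: recompute spatial weights at the focal
    geo-location, refit the local model, evaluate the new attributes."""
    w = gaussian_weights(coords, focal, bandwidth)
    beta = fit_local_model(X, y, w)
    return np.append(x_new, 1.0) @ beta

# Synthetic data whose attribute-response relation drifts with longitude,
# i.e. spatial heterogeneity that a single global model would average out.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))          # (lon, lat)
X = rng.normal(size=(200, 1))
y = (1.0 + 0.3 * coords[:, 0]) * X[:, 0] + rng.normal(scale=0.1, size=200)
print(predict_at(coords, X, y, focal=np.array([2.0, 5.0]), x_new=[1.0]))
print(predict_at(coords, X, y, focal=np.array([9.0, 5.0]), x_new=[1.0]))
```

The two predictions differ because the locally refit coefficients track the drifting relation, which is exactly what the OSFA (global) variant cannot do.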
Spatial neural network
[ "Physics" ]
706
[ "Spacetime", "Space", "Spatial analysis" ]
73,289,766
https://en.wikipedia.org/wiki/The%20Longevity%20Diet
The Longevity Diet is a 2018 book by Italian biogerontologist Valter Longo. The subject of the book is fasting and longevity. The book advocates a fasting mimicking diet (FMD), coupled with a mostly plant-based diet that allows for the consumption of fish, for greater longevity. Background Valter Longo, a PhD in biochemistry and director of the Longevity Institute at the University of Southern California, invented the fasting mimicking diet. Longo has said, "Using epidemiology and clinical trials, we put all the research together..." The diet calls for an emphasis on combining a plant-based diet with fish, together with fasting, timing and food quantity. Synopsis In the book, Longo says one should alter one's diet to avoid illness in old age. He advises dieters to start the diet with a five-day fasting mimicking diet (FMD), which calls for a plant-based diet with calorie restriction of 1,100 calories the first day, followed by 800 calories for the next few days. The book calls for the five-day, calorie-restricted FMD to occur twice per year. Before turning 65, the diet calls for minimal protein and a mostly plant-based diet augmented with calorie restriction. After someone finishes the fasting mimicking diet, Longo advocates following a mostly plant-based diet that includes fish. He also suggests implementing time-restricted eating, with daily eating windows of 11-12 hours. Reception The book is an international bestseller, has been translated into more than 15 languages, and is sold in more than 20 countries. Writing for Red Pen Reviews, Hilary Bethancourt stated the diet might be difficult and expensive to follow. Bethancourt goes on to say that the book gives advice on how to achieve a longer lifespan and healthspan by following a five-day fasting-mimicking diet and by choosing what to eat, how much to eat, and how often to eat. Reviewing the book for Glam Adelaide, James Murphy felt that the book has "too much discussion of his thwarted ambitions to be a rock star". References 2018 non-fiction books Self-help books Dieting books Fasting Books about life extension Plant-based diets Senescence
The Longevity Diet
[ "Chemistry", "Biology" ]
482
[ "Senescence", "Metabolism", "Cellular processes" ]
73,290,256
https://en.wikipedia.org/wiki/Yttrium%20perchlorate
Yttrium perchlorate is the inorganic compound with the chemical formula Y(ClO4)3. The compound is an yttrium salt of perchloric acid. Synthesis Dissolving yttrium oxide in perchloric acid solution can produce yttrium perchlorate octahydrate: Y2O3 + 6 HClO4 + 13 H2O → 2 Y(ClO4)3·8H2O Chemical properties The compound is potentially explosive. Physical properties The compound is soluble in water and forms a hexahydrate with the formula Y(ClO4)3·6H2O. References Perchlorates Oxidizing agents Yttrium compounds
Yttrium perchlorate
[ "Chemistry" ]
97
[ "Perchlorates", "Redox", "Oxidizing agents", "Salts" ]
73,292,355
https://en.wikipedia.org/wiki/Somatostatin%20inhibitor
Somatostatin receptor antagonists (or somatostatin inhibitors) are a class of chemical compounds that work by imitating the structure of the neuropeptide somatostatin, an endogenous hormone found in the human body. The somatostatin receptors are G protein-coupled receptors. Somatostatin receptor subtypes in humans include sstr1, 2A, 2B, 3, 4, and 5. While normally expressed in the gastrointestinal (GI) tract, pancreas, hypothalamus, and central nervous system (CNS), they are also expressed in different types of tumours. The predominant subtype in cancer cells is sstr2, which is expressed in neuroblastomas, meningiomas, medulloblastomas, breast carcinomas, lymphomas, renal cell carcinomas, paragangliomas, small cell lung carcinomas, and hepatocellular carcinomas. Because these compounds are selective for somatostatin receptors, radiolabeled versions are being researched as diagnostic agents in PET scans for neuroendocrine tumors and other tumors not previously targeted with radiolabeled somatostatin receptor agonists, and as therapeutic radiopharmaceuticals, more specifically for peptide receptor radionuclide therapy. Some non-radiopharmaceutical compounds have also been developed as competitive inhibitors of somatostatin, such as the hormone antagonist cyclosomatostatin. Somatostatin Somatostatin is a G protein-coupled receptor ligand. When the receptors are activated, the cells where the receptors are expressed decrease their hormone secretion. Mainly, as a neuroendocrine inhibitor, it exerts its effects on the gastrointestinal tract, pancreas, hypothalamus, and central nervous system, causing hormone secretions coupled to this pathway to be reduced. It can affect neurotransmission and memory formation within the central nervous system. In human and animal models, it has been shown to prevent angiogenesis and reduce the proliferation of both healthy and cancer cells. Within tumors, somatostatin receptors, mostly of the sstr2 subtype, are expressed in most neuroendocrine tumors, breast tumors, some brain tumors, renal tumors, lymphomas, and prostate tumors. Somatostatin receptor antagonists in radiolabelling These compounds work by binding to somatostatin receptors, which are more common in specific types of tumours, without activating the receptor. Due to the radionuclide, they will appear on PET scans. The radiolabeled somatostatin receptor antagonists share a common structure: the antagonist has a peptide moiety, which is responsible for receptor recognition and antagonist activity. Nomenclature is based on Radionuclide-Chelator-Receptor Antagonist. The structure of somatostatin receptor antagonists is similar to that of the agonists. Some agonists were already approved by the FDA for clinical use, such as 111In-DTPA-octreotide and 68Ga-DOTATATE. Development started after the discovery of modifications that can be made to the octreotide group, an sstr2-selective agonist, to cause its agonistic effects to be lost and antagonistic effects to be gained. Different subtype receptor antagonists were later developed. Research has been done mostly on the sstr2 receptor antagonist, as the sstr2 receptor is expressed on most tumors. Somatostatin receptor antagonists are divided by generation based on the type of the subtype receptor antagonist. The first generation consists of sst2-ANT and BASS, which are sstr2 selective; and sst3-ODAN-8, which is selective for sstr3. 
After initial results of their increased sensitivity to neuroendocrine tumors appeared, sstr2-selective antagonists with even higher affinity were developed. These were LM3, JR10, and JR11, which make up the second generation. JR11 was shown to be the most effective among these three antagonists, and the compounds that entered further clinical development to act as PET imaging agents or therapeutic agents carried this subtype antagonist. The presence of a chelator coupled to the subtype antagonist was shown to affect the biological properties, by increasing the binding stability of the radionuclide to the rest of the compound, and by increasing the binding affinity to the receptor by allowing conjugation of the radionuclide to the receptor. Compounds were developed with three macrocyclic chelators: DOTA, NODAGA, and CB-TE2A. DOTA had already been used as a chelator in the radiolabeled somatostatin agonists, as had NODAGA and CB-TE2A. Ga-NODAGA-based compounds were shown to have a higher binding affinity than their DOTA analogues. However, these somatostatin receptor antagonists showed a higher tumor uptake despite their lower affinity for sstr receptors, due to being able to bind a receptor regardless of its activation status. Compounds containing one of the radionuclides indium-111, lutetium-177, copper-64, yttrium-90 and gallium-68 have been made. A study indicated the gallium compound had the lowest affinity to the sstr2 receptor. List of radiolabeled somatostatin receptor antagonists The following listed compounds are those that have entered some phase of a pre-clinical study. Structure of selected antagonist peptides The structures of the antagonist peptides shown in the above table are shown below. Further clinical studies of radiolabeled somatostatin receptor antagonists Ga-NODAGA-JR11 entered further clinical studies as an imaging agent, while Lu-DOTA-JR11 had similar research done as a therapeutic agent, as JR11 has a high binding affinity for sstr2 subtype receptors, which are highly expressed on the surface of tumor cells. Gallium-containing agonists had already been established as imaging agents. Lutetium-containing agonists were used as therapeutic agents in peptide receptor radionuclide therapy, due to the lower-energy electrons emitted and the γ-emission, which allows easier dose adjustment to patient characteristics to avoid renal damage. The NODAGA chelator was used over DOTA in gallium antagonists due to higher binding affinity, while no Lu-NODAGA compounds were developed due to the established usage of Lu-DOTA-derivative agonist drugs and poor uptake compared to DOTA, which is the reverse of the gallium-containing antagonists. Safety of radiolabeled somatostatin receptor antagonists In general, somatostatin receptor antagonists were noted to be well tolerated. However, due to their mechanism of action, they may decrease the effectiveness of SSA therapy (Somatostatin Analogue Therapy), but other studies indicate SSA may not need to be stopped if somatostatin antagonists are used for tumor labelling instead of agonists. As somatostatin can inhibit the production of hormones that use it as a mediating hormone, it has an antiproliferative effect on tumor cells, especially in neuroendocrine tumors. Somatostatin analogue therapy uses longer-acting agonists than the endogenous somatostatin to extend the antiproliferative effects. Somatostatin receptor antagonists can bind to the receptors without activating them, antagonizing the therapeutic inhibitory effects of SSA therapy. 
Slow intravenous injection might be used until further safety data become available. Comparison of somatostatin receptor agonists and antagonists in radiolabelling Agonists of the somatostatin receptor had long been established as imaging agents, with the first agonist, 68Ga-DOTATOC, coming out in 2001; it is based on the radiolabeled somatostatin receptor agonist drug octreotide, and further developments were based on its structure. Agonists share the characteristic of being taken up into tumor cells and degraded intracellularly. Antagonists, while not widely absorbed into the tumor cells, can bind to a wider range of receptors, as they can bind to the receptors regardless of whether the receptors are activated or inactivated. They are thus more sensitive to neuroendocrine tumors. Another study noted the antagonists showed lower internalization into tumors, cleared from the blood quickly, and had a higher binding to tumors, which were noted to be properties benefitting their use over agonists in detecting metastatic tumors. A head-to-head study of the gallium-containing compounds, in which the Ga-NODAGA-JR11 antagonist and the Ga-DOTATOC agonist were directly compared, showed that Ga-NODAGA-JR11 had a higher hepatic metastatic tumor detection rate and lesion sensitivity than Ga-DOTATOC. Another head-to-head study of lutetium-containing compounds found the antagonist Lu-DOTA-JR11 bound to the receptors more quickly, had a longer retention time, and dissociated more slowly than the Lu-DOTA-TATE agonist. Radiolabeled somatostatin receptor antagonists in Peptide Receptor Radionuclide Therapy (PRRT) Somatostatin receptor antagonists are also being developed as therapeutic agents in peptide receptor radionuclide therapy (PRRT) due to the wider binding of antagonists compared to agonists. Research indicated the antagonist Lu-DOTA-JR11 showed a higher tumor uptake, more double-strand breaks within tumor cells, a longer adherence time to tumors and an improved tumor-to-kidney dose ratio. Moreover, another study found that the radionuclide terbium-161, which can release short-range electrons, can be combined with somatostatin receptor antagonists that localize at the cell membrane, acting as an alternative to the currently clinically used lutetium-somatostatin receptor agonists, which are localized in the cytoplasm and nucleus. Moreover, the Tb-antagonist in vitro shows 102-fold higher potency than Lu-antagonists in inhibiting tumor cell growth and prolonging survival in mice, due to its high linear energy transfer. This result was further repeated and confirmed in vivo, showing the high potential and strengths of radiolabeled somatostatin receptor antagonists in treating neuroendocrine neoplasms. Further potential Compounds other than radiolabelled somatostatin receptor antagonists have also been studied. Cyclosomatostatin is one such compound. Contrary to the previously discussed compounds, cyclosomatostatin does not contain a radionuclide. It is a non-selective somatostatin receptor antagonist, inhibiting the effects of somatostatin on target cells in the gastrointestinal tract, pancreas, hypothalamus, and central nervous system. Cyclosomatostatin is used as a research chemical to investigate the effects of somatostatin on different cell types by antagonizing its receptors. However, it acts as an agonist in SH-SY5Y neuroblastoma cells. 
Cyclosomatostatin is also known by the following names: 7-CPP antagonist SRIF-A CyCam cyclo(7-Ahep-Phe-Trp-Lys-Thr(Bzl)) cyclo(7-aminoheptanoylphenylalanyl-tryptophyl-lysyl-benzylthreonyl) cyclo-(7-aminoheptanoyl-Phe-D-Trp-Lys-Thr(Bzl)) Cyclosomatostatin may have the possibility of treating complications of acute hemorrhage. Hepatic insulin sensitizing substance (HISS) is a hormone secreted by the liver that stimulates skeletal muscle glucose uptake in response to insulin. This action makes up around 56% of total insulin action. Hemorrhage was shown to cause this type of insulin resistance, termed HISS-dependent insulin resistance (HDIR). Two animal studies show that cyclosomatostatin can help prevent HDIR, without correcting the hyperglycemic condition, in the setting of hemorrhage and exogenous somatostatin infusion. Cyclosomatostatin may also be relevant to other indications, including the potential to block the suppression of gastric emptying triggered by corticotropin-releasing hormone (CRH), the key regulator of the hypothalamic-pituitary-adrenal axis released to alter the body's response to stress. Furthermore, cyclosomatostatin, even if used alone, may modulate neurotransmitter levels. It increases acetylcholine (ACh) release by reversing the inhibitory effect of the dihydropyridine (DHP) agonist Bay K 8644 on the L-type voltage-sensitive Ca2+ channel. References Receptor antagonists
Somatostatin inhibitor
[ "Chemistry" ]
2,739
[ "Neurochemistry", "Receptor antagonists" ]
51,884,181
https://en.wikipedia.org/wiki/Multi-stage%20game
In game theory, a multi-stage game is a sequence of several simultaneous games played one after the other. This is a generalization of a repeated game: a repeated game is a special case of a multi-stage game, in which the stage games are identical. Multi-Stage Game with Different Information Sets As an example, consider a two-stage game in which the stage game in Figure 1 is played in each of two periods: The payoff to each player is the simple sum of the payoffs of both games. Players cannot observe the action of the other player within a round; however, at the beginning of Round 2, Player 2 finds out about Player 1's action in Round 1, while Player 1 does not find out about Player 2's action in Round 1. For Player 1, there are $2^3 = 8$ strategies (one action at the Round 1 information set and one at each of the two Round 2 information sets, distinguished only by Player 1's own first action). For Player 2, there are $2^5 = 32$ strategies (one action at the Round 1 information set and one at each of the four Round 2 information sets, distinguished by both players' first actions). The extensive form of this multi-stage game is shown in Figure 2: In this game, the only Nash equilibrium in each stage is (B, b). (BB, bb) will be the Nash equilibrium for the entire game. Multi-Stage Game with Changing Payoffs In this example, consider a two-stage game in which the stage game in Figure 3 is played in the first period and the game in Figure 4 is played in the second: The payoff to each player is the simple sum of the payoffs of both games. Players cannot observe the action of the other player within a round; however, at the beginning of Round 2, both players find out about the other's action in Round 1. For Player 1, there are $2^5 = 32$ strategies. For Player 2, there are $2^5 = 32$ strategies. The extensive form of this multi-stage game is shown in Figure 5: Each of the two stages has two Nash equilibria: (A, a) and (B, b) in the first stage, and (X, x) and (Y, y) in the second. If the complete contingent strategy of Player 1 matches Player 2's (i.e. AXXXX, axxxx), it will be a Nash equilibrium. There are 32 such combinations in this multi-stage game. Additionally, all of these equilibria are subgame-perfect. References Game theory game classes
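Since the stage-game payoffs live in the figures (absent here), the sketch below uses a made-up prisoner's-dilemma-style payoff table purely for illustration; the payoffs and the helper name pure_nash are assumptions, not the article's Figure 1. It shows the brute-force check behind a statement like "the only Nash equilibrium in the stage game is (B, b)", and that total payoffs in the two-stage game are just sums of stage payoffs.

```python
from itertools import product

# Hypothetical stage game (NOT the article's Figure 1):
# payoffs[(a1, a2)] = (Player 1 payoff, Player 2 payoff).
payoffs = {
    ("A", "a"): (3, 3), ("A", "b"): (0, 4),
    ("B", "a"): (4, 0), ("B", "b"): (1, 1),
}

def pure_nash(payoffs):
    """Brute-force pure-strategy Nash equilibria of a bimatrix game:
    a profile is an equilibrium if neither player gains by a
    unilateral deviation."""
    rows = sorted({a for a, _ in payoffs})
    cols = sorted({b for _, b in payoffs})
    eqs = []
    for a, b in product(rows, cols):
        u1, u2 = payoffs[(a, b)]
        if all(payoffs[(r, b)][0] <= u1 for r in rows) and \
           all(payoffs[(a, c)][1] <= u2 for c in cols):
            eqs.append((a, b))
    return eqs

print(pure_nash(payoffs))  # [('B', 'b')] -- unique stage equilibrium

# Two-stage totals, treating strategies as fixed action paths (ignoring
# the contingent plans discussed in the article): payoffs simply add, so
# repeating the stage equilibrium, e.g. (BB, bb), stays an equilibrium.
two_stage = {
    (p1 + q1, p2 + q2): tuple(u + v for u, v in
                              zip(payoffs[(p1, p2)], payoffs[(q1, q2)]))
    for (p1, p2) in payoffs for (q1, q2) in payoffs
}
print(two_stage[("BB", "bb")])  # (2, 2)
```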
Multi-stage game
[ "Mathematics" ]
458
[ "Game theory game classes", "Game theory" ]
64,536,363
https://en.wikipedia.org/wiki/Electrochemical%20quartz%20crystal%20microbalance
Electrochemical quartz crystal microbalance (EQCM) is the combination of electrochemistry and quartz crystal microbalance (QCM), which emerged in the eighties. Typically, an EQCM device contains an electrochemical cell part and a QCM part. Two electrodes on both sides of the quartz crystal serve two purposes. Firstly, an alternating electric field is generated between the two electrodes to make up the oscillator. Secondly, the electrode contacting the electrolyte is used as a working electrode (WE), together with a counter electrode (CE) and a reference electrode (RE), in the potentiostatic circuit constituting the electrochemical cell. Thus, the working electrode of the electrochemical cell is the sensor of the QCM. As a highly mass-sensitive in-situ measurement, EQCM is suitable for monitoring the dynamic response of reactions at the electrode–solution interface at the applied potential. When the potential of a QCM metal electrode changes, a negative or positive mass change is monitored, depending on the balance between anion adsorption on the electrode surface and the dissolution of metal ions into solution. EQCM calibration The EQCM sensitivity factor $K$ can be calculated by combining the charge density measured by the electrochemical cell and the frequency shift measured by the QCM. The sensitivity factor is only valid when the mass change on the electrode is homogeneous. Otherwise, $K$ is taken as the average sensitivity factor of the EQCM. The frequency shift obeys the Sauerbrey relation $\Delta f = -K \, \Delta m$ with $K = \frac{2 f_0^2}{\sqrt{\rho_q \mu_q}}$, where $\Delta f$ is the measured frequency shift (Hz), $\Delta m$ is the change in areal mass density on the quartz crystal active area $S$ (cm2), $\rho_q$ is the density of the quartz crystal, $\mu_q$ is the quartz crystal shear modulus and $f_0$ is the fundamental quartz crystal frequency. $K$ is the intrinsic sensitivity factor of the EQCM. In a certain electrolyte solution, a metal film will be deposited on the working electrode, which is the QCM sensor surface. The charge density $q = \frac{i t}{S}$ is involved in the electro-reduction of metal ions at a constant current $i$ in a period of time $t$. The active areal mass density is calculated by $\Delta m = \frac{q M}{z F}$, where $M$ is the atomic weight of the deposited metal, $z$ is the electrovalency, and $F$ is the Faraday constant. The experimental sensitivity of the EQCM is calculated by combining $\Delta f$ and $\Delta m$ as $K_{\mathrm{exp}} = -\Delta f / \Delta m$. EQCM application Application of EQCM in electrosynthesis EQCM can be used to monitor the chemical reactions occurring on the electrode, which offers optimized reaction conditions by comparing the influencing factors during the synthesis process. Some previous work has already investigated the polymerization process and charge transport properties, polymer film growth on gold electrode surfaces, and the polymerization process of polypyrrole and its derivatives. EQCM has been used to study the electro-polymerization process and doping/de-doping properties of polyaniline films on gold electrode surfaces as well. To investigate the electrosynthesis process, it is sometimes necessary to combine other characterization technologies, such as using FTIR and EQCM to study the effect of different conditions on the formation of poly(3,4-ethylenedioxythiophene) film structure, and using EQCM, together with AFM, FTIR and EIS, to investigate the film formation process in alkyl carbonate/lithium salt electrolyte solutions on precious metal electrode surfaces. 
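A short numerical sketch of the calibration just described. The AT-cut quartz constants and the 5 MHz fundamental frequency are typical textbook values assumed for illustration, and the function names are invented; this is not code from any EQCM instrument library.

```python
import math

RHO_Q = 2.648        # quartz density, g/cm^3 (typical AT-cut value)
MU_Q = 2.947e11      # quartz shear modulus, g cm^-1 s^-2
FARADAY = 96485.0    # Faraday constant, C/mol

def intrinsic_sensitivity(f0_hz):
    """Sauerbrey sensitivity K = 2 f0^2 / sqrt(rho_q * mu_q),
    in Hz per (g/cm^2) of areal mass density."""
    return 2.0 * f0_hz**2 / math.sqrt(RHO_Q * MU_Q)

def areal_mass_from_charge(current_a, time_s, area_cm2, molar_mass, z):
    """Faraday's law: areal mass density (g/cm^2) deposited by a
    constant current over a given time on the active area."""
    q = current_a * time_s / area_cm2          # charge density, C/cm^2
    return q * molar_mass / (z * FARADAY)

# Example: copper deposition (Cu2+ + 2e- -> Cu) on a 5 MHz crystal.
f0 = 5.0e6
dm = areal_mass_from_charge(current_a=1e-3, time_s=60.0,
                            area_cm2=1.0, molar_mass=63.55, z=2)
df_predicted = -intrinsic_sensitivity(f0) * dm
print(f"areal mass gain: {dm:.3e} g/cm^2")
print(f"predicted frequency shift: {df_predicted:.1f} Hz")
# Comparing df_predicted against the measured shift gives the
# experimental (average) sensitivity K_exp = -delta_f / delta_m.
```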
Application of EQCM in electrodeposition and dissolution EQCM is broadly used to study deposition/dissolution processes on electrode surfaces, such as the oscillation of electrode potential during Cu/Cu2O layered nanostructure electrodeposition, the deposition growth process of cobalt and nickel hexacyanoferrate in calcium nitrate and barium nitrate electrolyte solutions, and the electrochemical behaviour of the Mg electrode in various polar aprotic electrolyte solutions. EQCM can be used as a powerful tool for corrosion and corrosion protection studies, and is usually combined with other characterization technologies. One previous work used EQCM and XPS to study the mass changes of Fe-17Cr-33Mo and Fe-25Cr alloy electrodes during potential sweep and potential step experiments in the passive potential region in an acidic and a basic electrolyte. Another previous work used EQCM and SEM to study the influence of purine (PU) on Cu electrode corrosion and spontaneous dissolution in NaCl electrolyte solution. Application of EQCM in adsorption and desorption EQCM has been used to study the self-assembled monolayers of long-chain alkanethiols (alkyl mercaptans) and mercaptoalkanoic acids on gold electrode surfaces. Application of EQCM in polymer modified electrode EQCM can be used to study and optimize polymer membrane modification, together with other electrochemical measurements or surface characterization methods. A team used CV, UV-Vis, IR and EQCM to study irreversible changes of some polythiophenes during the electrochemical reduction process in acetonitrile. Later on they used AFM and EQCM to investigate the growth of polypyrrole films in anionic surfactant micellar solutions. Then, combining it with CV, UV-Vis, FTIR and ESR, they used EQCM to study the conductivity and magnetic properties of 3,4-dimethoxy- and 3,4-ethylenedioxy-terminated polypyrrole and polythiophene. Application of EQCM in energy conversion and storage EQCM can be used to study the adsorption and oxidation of fuel molecules on the electrode surface, and the effect of electrode catalysts or other additives on the electrode, such as the assessment of the internal Pt load of polypyrrole in polypyrrole/platinum composite fuel cells, the methanol fuel cell anodizing process, and the ultrasound-assisted electrodeposition of suspended cerium oxide nanoparticles doped with gadolinium oxide for Co/CeO2 and Ni/CeO2 composite fuel cells. EQCM can also be used to study the energy storage performance and influencing factors of supercapacitors and electrochemical capacitors. For example, EQCM has been used to gauge ion movement in the conductive polymer cathode of a capacitor. Some work has studied EQCM applications in solar energy, mostly related to additives and thin-film materials, for instance using EQCM to study the electrochemical deposition process and stability of the Co-Pi oxygen evolution catalyst for solar energy storage. References Electrochemistry Weighing instruments
Electrochemical quartz crystal microbalance
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,321
[ "Weighing instruments", "Mass", "Measuring instruments", "Electrochemistry", "Matter" ]
64,540,037
https://en.wikipedia.org/wiki/Mirror%20symmetry%20conjecture
In mathematics, mirror symmetry is a conjectural relationship between certain Calabi–Yau manifolds and a constructed "mirror manifold". The conjecture allows one to relate the number of rational curves on a Calabi-Yau manifold (encoded as Gromov–Witten invariants) to integrals from a family of varieties (encoded as period integrals on a variation of Hodge structures). In short, this means there is a relation between the number of genus $g$ algebraic curves of degree $d$ on a Calabi-Yau variety $X$ and integrals on a dual variety $\check{X}$. These relations were originally discovered by Candelas, de la Ossa, Green, and Parkes in a paper studying a generic quintic threefold in $\mathbb{P}^4$ as the variety $X$ and a construction from the quintic Dwork family giving $\check{X}$. Shortly after, Sheldon Katz wrote a summary paper outlining part of their construction and conjecturing what the rigorous mathematical interpretation could be. Constructing the mirror of a quintic threefold Originally, the construction of mirror manifolds was discovered through an ad-hoc procedure. Essentially, to a generic quintic threefold $X \subset \mathbb{P}^4$ there should be associated a one-parameter family of Calabi-Yau manifolds $X_\psi$ which has multiple singularities. After blowing up these singularities, they are resolved and a new Calabi-Yau manifold $\check{X}$ was constructed, which had a flipped Hodge diamond. In particular, there are isomorphisms $H^{p,q}(X) \cong H^{3-p,q}(\check{X})$ but most importantly, there is an isomorphism $H^{1,1}(X) \cong H^{2,1}(\check{X})$ where the string theory (the A-model of $X$) for states in $H^{1,1}(X)$ is interchanged with the string theory (the B-model of $\check{X}$) having states in $H^{2,1}(\check{X})$. The string theory in the A-model only depends upon the Kähler or symplectic structure on $X$ while the B-model only depends upon the complex structure on $\check{X}$. Here we outline the original construction of mirror manifolds, and consider the string-theoretic background and conjecture with the mirror manifolds in a later section of this article. Complex moduli Recall that a generic quintic threefold $X$ in $\mathbb{P}^4$ is defined by a homogeneous polynomial of degree $5$. This polynomial is equivalently described as a global section of the line bundle $\mathcal{O}_{\mathbb{P}^4}(5)$. Notice the vector space of global sections has dimension $126$, but there are two equivalences of these polynomials. First, polynomials under scaling by the algebraic torus $\mathbb{G}_m$ (non-zero scalars of the base field) give equivalent spaces. Second, projective equivalence is given by the automorphism group of $\mathbb{P}^4$, $\mathrm{PGL}(5)$, which is $24$-dimensional. This gives a $101$-dimensional parameter space since $126 - 24 - 1 = 101$, which can be constructed using Geometric invariant theory. The set $U_{\text{smooth}}$ corresponds to the equivalence classes of polynomials which define smooth Calabi-Yau quintic threefolds in $\mathbb{P}^4$, giving a moduli space of Calabi-Yau quintics. Now, using Serre duality and the fact that each Calabi-Yau manifold has trivial canonical bundle $\omega_X$, the space of deformations has an isomorphism $H^1(X, T_X) \cong H^1(X, \Omega_X^2)$ with the $(2,1)$-part of the Hodge structure on $H^3(X)$. Using the Lefschetz hyperplane theorem the only non-trivial middle cohomology group is $H^3(X)$ since the others are isomorphic to the cohomology of $\mathbb{P}^4$. Using the Euler characteristic and the Euler class, which is the top Chern class, the dimension of this group is $204$. This is because $\chi(X) = \int_X c_3(T_X) = -200$, and since $b_0 = b_2 = b_4 = b_6 = 1$ and $b_1 = b_5 = 0$, we get $\chi(X) = 4 - b_3(X)$, hence $b_3(X) = 204$. Using the Hodge structure we can find the dimensions of each of the components. First, because $X$ is Calabi-Yau, $h^{3,0}(X) = h^{0,3}(X) = 1$, so $204 = 2 + 2h^{2,1}(X)$, giving the Hodge numbers $h^{2,1}(X) = h^{1,2}(X) = 101$, hence giving the dimension of the moduli space of Calabi-Yau manifolds. Because of the Bogomolov-Tian-Todorov theorem, all such deformations are unobstructed, so the smooth space $U_{\text{smooth}}$ is in fact the moduli space of quintic threefolds. 
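For concreteness, the dimension count and the flip of Hodge numbers can be displayed together (this display is supplied here as a summary and is not lifted from the article):

```latex
\[
  \dim_{\mathbb{C}} \Gamma(\mathbb{P}^4, \mathcal{O}_{\mathbb{P}^4}(5))
    - \dim \mathrm{PGL}(5) - 1
  = \binom{9}{4} - 24 - 1
  = 101,
\]
\[
  h^{1,1}(X) = 1, \quad h^{2,1}(X) = 101
  \qquad \longleftrightarrow \qquad
  h^{1,1}(\check{X}) = 101, \quad h^{2,1}(\check{X}) = 1 .
\]
```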
The whole point of this construction is to show how the complex parameters in this moduli space are converted into Kähler parameters of the mirror manifold. Mirror manifold There is a distinguished family of Calabi-Yau manifolds $X_\psi$ called the Dwork family. It is the projective family $X_\psi = \{ x_0^5 + x_1^5 + x_2^5 + x_3^5 + x_4^5 - 5\psi\, x_0 x_1 x_2 x_3 x_4 = 0 \} \subset \mathbb{P}^4$ over the complex plane $\mathbb{A}^1_\psi$. Now, notice there is only a single dimension of complex deformations of this family, coming from $\psi$ having varying values. This is important because the Hodge diamond of the mirror manifold $\check{X}$ has $h^{2,1}(\check{X}) = 1$. The family has symmetry group $G = \{ (a_0, \dots, a_4) \in (\mathbb{Z}/5)^5 : a_0 + a_1 + a_2 + a_3 + a_4 \equiv 0 \pmod 5 \}$ acting by $(a_0, \dots, a_4) \cdot (x_0 : \cdots : x_4) = (\zeta^{a_0} x_0 : \cdots : \zeta^{a_4} x_4)$ for a fixed fifth root of unity $\zeta$. Notice the projectivity of $\mathbb{P}^4$ is the reason for the condition $a_0 + a_1 + a_2 + a_3 + a_4 \equiv 0 \pmod 5$. The associated quotient variety $X_\psi / G$ has a crepant resolution given by blowing up the singularities, giving a new Calabi-Yau manifold $\check{X}$ with parameter $\psi$. This is the mirror manifold and has $H^3(\check{X}) = H^{3,0} \oplus H^{2,1} \oplus H^{1,2} \oplus H^{0,3}$, where each Hodge number is $1$. Ideas from string theory In string theory there is a class of models called non-linear sigma models which study families of maps $\phi \colon \Sigma \to X$, where $\Sigma$ is a genus $g$ algebraic curve and $X$ is Calabi-Yau. These curves are called world-sheets and represent the birth and death of a particle as a closed string. Since a string could split over time into two strings, or more, and eventually these strings will come together and collapse at the end of the lifetime of the particle, an algebraic curve mathematically represents this string lifetime. For simplicity, only genus 0 curves were considered originally, and many of the results popularized in mathematics focused only on this case. Also, in physics terminology, these theories are heterotic string theories because they have supersymmetry that comes in a pair, so really there are four supersymmetries. This is important because it implies there is a pair of operators acting on the Hilbert space of states, but only defined up to a sign. This ambiguity is what originally suggested to physicists there should exist a pair of Calabi-Yau manifolds which have dual string theories, ones that exchange this ambiguity between one another. The space $X$ has a complex structure, which is an integrable almost-complex structure $J \in \operatorname{End}(TX)$, and because it is a Kähler manifold it necessarily has a symplectic structure called the Kähler form $\omega$ which can be complexified to a complexified Kähler form $\omega^{\mathbb{C}} = B + i\omega$ which is a closed $(1,1)$-form, hence its cohomology class is in $H^{1,1}(X) \subset H^2(X; \mathbb{C})$. The main idea behind the Mirror Symmetry conjectures is to study the deformations, or moduli, of the complex structure $J$ and the complexified symplectic structure $\omega^{\mathbb{C}}$ in a way that makes these two dual to each other. In particular, from a physics perspective, the super conformal field theory of a Calabi-Yau manifold $X$ should be equivalent to the dual super conformal field theory of the mirror manifold $\check{X}$. Here conformal means conformal equivalence, which is the same as an equivalence class of complex structures on the curve $\Sigma$. There are two variants of the non-linear sigma models called the A-model and the B-model which consider the pairs $(X, \omega^{\mathbb{C}})$ and $(X, J)$ and their moduli. A-model Correlation functions from String theory Given a Calabi-Yau manifold $X$ with complexified Kähler class $[\omega^{\mathbb{C}}] \in H^{1,1}(X)$ the nonlinear sigma model of the string theory should contain the three generations of particles, plus the electromagnetic, weak, and strong forces. In order to understand how these forces interact, a three-point function called the Yukawa coupling is introduced which acts as the correlation function for states in $H^{1,1}(X)$. Note this space is the eigenspace of an operator on the Hilbert space of states for the string theory. 
This three-point function is "computed" as $\langle \omega_1, \omega_2, \omega_3 \rangle = \int_X \omega_1 \wedge \omega_2 \wedge \omega_3 + \sum_{\beta \neq 0} n_\beta \int_\beta \omega_1 \int_\beta \omega_2 \int_\beta \omega_3 \frac{e^{2\pi i \int_\beta \omega^{\mathbb{C}}}}{1 - e^{2\pi i \int_\beta \omega^{\mathbb{C}}}}$ using Feynman path-integral techniques, where the $n_\beta$ are the naive numbers of rational curves with homology class $\beta \in H_2(X; \mathbb{Z})$, and $\omega_i \in H^{1,1}(X)$. Defining these instanton numbers $n_\beta$ is the subject matter of Gromov–Witten theory. Note that in the definition of this correlation function, it only depends on the Kähler class. This inspired some mathematicians to study hypothetical moduli spaces of Kähler structures on a manifold. Mathematical interpretation of A-model correlation functions In the A-model the corresponding moduli spaces are the moduli of pseudoholomorphic curves $\overline{\mathcal{M}}_{g,k}(X, J, \beta)$ or the Kontsevich moduli spaces $\overline{\mathcal{M}}_{g,n}(X, \beta)$. These moduli spaces can be equipped with a virtual fundamental class $[\overline{\mathcal{M}}_{g,k}(X, J, \beta)]^{\text{vir}}$ or $[\overline{\mathcal{M}}_{g,n}(X, \beta)]^{\text{vir}}$, which is represented as the vanishing locus of a section of a sheaf called the Obstruction sheaf over the moduli space. This section comes from the differential equation $\overline{\partial}_J \phi = \nu$, which can be viewed as a perturbation of the map $\phi$. It can also be viewed as the Poincaré dual of the Euler class of the Obstruction sheaf if it is a Vector bundle. With the original construction, the A-model considered was on a generic quintic threefold in $\mathbb{P}^4$. B-model Correlation functions from String theory For the same Calabi-Yau manifold $X$ in the A-model subsection, there is a dual superconformal field theory which has states in the eigenspace of the operator $\overline{Q}$. Its three-point correlation function is defined as $\langle \theta_1, \theta_2, \theta_3 \rangle = \int_X \Omega \wedge (\nabla_{\theta_1} \nabla_{\theta_2} \nabla_{\theta_3} \Omega)$, where $\Omega$ is a holomorphic 3-form on $X$ and for an infinitesimal deformation $\theta \in H^1(X, T_X)$ (since $H^1(X, T_X)$ is the tangent space of the moduli space of Calabi-Yau manifolds containing $X$, by the Kodaira–Spencer map and the Bogomolov-Tian-Todorov theorem) there is the Gauss-Manin connection $\nabla_\theta$ taking a $(p, q)$ class to a $(p - 1, q + 1)$ class, hence $\Omega \wedge \nabla_{\theta_1} \nabla_{\theta_2} \nabla_{\theta_3} \Omega$ can be integrated on $X$. Note that this correlation function only depends on the complex structure of $X$. Another formulation of Gauss-Manin connection The action of the cohomology classes $\theta \in H^1(X, T_X)$ on the $\Omega$ can also be understood as a cohomological variant of the interior product. Locally, the class $\theta$ corresponds to a Čech cocycle $[\theta_{ij}]$ for some nice enough cover $\{U_i\}$, giving a section $\theta_{ij} \in T_X(U_i \cap U_j)$. Then, the insertion product gives an element $\iota_{\theta_{ij}}(\Omega|_{U_i \cap U_j}) \in \Omega_X^2(U_i \cap U_j)$ which can be glued back into an element $\iota_\theta \Omega$ of $H^1(X, \Omega_X^2)$. This is because on the overlaps $\theta_{ij} + \theta_{jk} = \theta_{ik}$, giving $\iota_{\theta_{ij}}\Omega + \iota_{\theta_{jk}}\Omega = \iota_{\theta_{ik}}\Omega$, hence it defines a 1-cocycle. Repeating this process gives a 3-cocycle $\iota_{\theta_1}\iota_{\theta_2}\iota_{\theta_3}\Omega \in H^3(X, \mathcal{O}_X)$ which is equal to $\nabla_{\theta_1} \nabla_{\theta_2} \nabla_{\theta_3} \Omega$. This is because locally the Gauss-Manin connection acts as the interior product. Mathematical interpretation of B-model correlation functions Mathematically, the B-model is a variation of Hodge structures which was originally given by the construction from the Dwork family. Mirror conjecture Relating these two models of string theory by resolving the ambiguity of sign for the operators led physicists to the following conjecture: for a Calabi-Yau manifold $X$ there should exist a mirror Calabi-Yau manifold $\check{X}$ such that there exists a mirror isomorphism $H^{1,1}(X) \cong H^{2,1}(\check{X})$ giving the compatibility of the associated A-model and B-model. This means given $\omega \in H^{1,1}(X)$ and $\theta \in H^{2,1}(\check{X})$ such that $\omega \mapsto \theta$ under the mirror map, there is the equality of correlation functions $\langle \omega, \omega, \omega \rangle = \langle \theta, \theta, \theta \rangle$. This is significant because it relates the number of degree $d$ genus $0$ curves on a quintic threefold $X$ in $\mathbb{P}^4$ (so $H^2(X; \mathbb{Z}) \cong \mathbb{Z}$) to integrals in a variation of Hodge structures. Moreover, these integrals are actually computable! 
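As an illustration of what "actually computable" buys, the genus-zero A-model Yukawa coupling of the quintic is commonly quoted in the normalized form below, where $q = e^{2\pi i t}$ is the exponentiated complexified Kähler parameter and $n_d$ are the instanton numbers; this display is supplied for concreteness and is not taken verbatim from the article:

```latex
\[
  \langle H, H, H \rangle
  = 5 + \sum_{d \ge 1} n_d\, d^{3}\, \frac{q^{d}}{1 - q^{d}},
  \qquad q = e^{2\pi i t},
\]
% 5 = H^3 is the classical triple intersection number of the quintic;
% matching against the mirror's period integrals gives n_1 = 2875 lines,
% n_2 = 609250 conics, and so on.
```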
See also Cotangent complex Homotopy associative algebra Kuranishi structure Mirror symmetry (string theory) Moduli of algebraic curves Kontsevich moduli space External links https://ocw.mit.edu/courses/mathematics/18-969-topics-in-geometry-mirror-symmetry-spring-2009/lecture-notes/ References Books/Notes Mirror Symmetry - Clay Mathematics Institute ebook Mirror Symmetry and Algebraic Geometry - Cox, Katz On the work of Givental relative to mirror symmetry First proofs Equivariant Gromov - Witten Invariants - Givental's original proof for projective complete intersections The mirror formula for quintic threefolds Rational curves on hypersurfaces (after A. Givental) - an explanation of Givental's proof Mirror Principle I - Lian, Liu, Yau's proof closing gaps in Givental's proof. His proof required the undeveloped theory of Floer homology Dual Polyhedra and Mirror Symmetry for Calabi-Yau Hypersurfaces in Toric Varieties - first general construction of mirror varieties for Calabi-Yau's in toric varieties Mirror symmetry for abelian varieties Derived geometry in Mirror symmetry Notes on supersymmetric and holomorphic field theories in dimensions 2 and 4 Research Mirror symmetry: from categories to curve counts - relation between homological mirror symmetry and classical mirror symmetry Intrinsic mirror symmetry and punctured Gromov-Witten invariants Homological mirror symmetry Categorical Mirror Symmetry: The Elliptic Curve An Introduction to Homological Mirror Symmetry and the Case of Elliptic Curves Homological mirror symmetry for the genus two curve Homological mirror symmetry for the quintic 3-fold Homological Mirror Symmetry for Calabi-Yau hypersurfaces in projective space Speculations on homological mirror symmetry for hypersurfaces in Mathematical physics Conjectures String theory Algebraic geometry
Mirror symmetry conjecture
[ "Physics", "Astronomy", "Mathematics" ]
2,481
[ "Astronomical hypotheses", "Unsolved problems in mathematics", "Applied mathematics", "Theoretical physics", "Conjectures", "Fields of abstract algebra", "Algebraic geometry", "String theory", "Mathematical problems", "Mathematical physics" ]
63,188,350
https://en.wikipedia.org/wiki/HVTN%20702
HVTN 702 was a clinical trial which the HIV Vaccine Trials Network organized to develop an HIV vaccine. Various media outlets announced the start of the research in 2016, and around December 2019 media outlets reported that HVTN 702 could be an effective vaccine for preventing HIV. In February 2020, however, the organizers halted the trial after finding no evidence of efficacy. HVTN 702 was based on outcomes of the RV 144 trial. References External links profile at ClinicalTrials.gov HIV vaccine research Clinical trials related to HIV
HVTN 702
[ "Chemistry" ]
109
[ "HIV vaccine research", "Drug discovery" ]
63,192,046
https://en.wikipedia.org/wiki/Intercollegiate%20Biomathematics%20Alliance
The Intercollegiate Biomathematics Alliance (IBA) is a syndicate of organizations focused on connecting both academic and non-academic institutions to promote the study of biomathematics, ecology, and other related fields. Biomathematics is a scientific area connecting biology, ecology, mathematics, and computer science. Founded in 2014 by executive director Olcay Akman of Illinois State University, the Intercollegiate Biomathematics Alliance helps organizations work together and share resources that are not regularly available at all institutions. The IBA is still young and typically attracts smaller colleges around the United States, which tend to benefit more from being part of a consortium. However, in recent years, universities such as Arizona State University have joined, and the IBA continues to maintain connections with larger research groups such as the Mathematical Bioscience Institute (MBI) and the National Institute for Mathematical and Biological Synthesis (NIMBioS). History In 2007, Olcay Akman of mathematics and Steven Juliano of biological sciences started a master's degree program at Illinois State University. The program grew and is now operated under the same umbrella as the IBA, the Center for Collaborative Studies in Mathematical Biology. In 2008, the first BEER (Biomathematics Ecology Education and Research) conference was held at Illinois State University with only 10 speakers and fewer than 50 attendees. In 2014, the BEER conference was the second largest biomathematics conference globally, with more than 100 speakers. Then in 2014, other universities were asked to collaborate with the common goal of educating students about biomathematics, and this led to the creation of the Intercollegiate Biomathematics Alliance (IBA). The IBA is not the first to create a network of institutions. Morehouse College in Atlanta, GA participates in its own network of institutions that helps to provide students with greater access to resources. Similarly, Massachusetts Institute of Technology houses a consortium for research in energy, the MIT Energy Initiative. This network brings together the university and companies to expand research experiences and broaden educational perspectives. By pooling resources, these consortia attempt to unite organizations under a common goal and share resources in infrastructure, intellect, and academia. Member Institutions As of 2021, the Intercollegiate Biomathematics Alliance has 9 member institutions. In 2019, the IBA had 11 member institutions. IBA members pay dues based on their institutional size. Individuals are also able to become members of the IBA, with reduced rates for students. There is some incentive beyond collaboration efforts to become an IBA member: the organization offers reduced registration fees for the International Symposium on BEER, access to distance education courses, a copy of Spora: A Journal of Biomathematics, and travel funding. Programs and Resources that the IBA Supports and Sponsors BEAM: Biomathematics Education with Applications and Methods Grant BEAM is a grant for undergraduate research that supports both faculty members and students. BEAM also provides some support for participants at CURE. BEER: Biomathematics and Ecology Education and Research Symposium BEER (Biomathematics Ecology Education and Research) is an annual research symposium that takes place in the fall. The first BEER symposium took place in 2008 at Illinois State University with only 10 speakers and 30 attendees. 
By 2014, BEER was the second largest biomathematics conference globally. In 2017, the 10th annual BEER symposium was celebrated at Illinois State University. BEER has also been hosted by other institutions such as Arizona State University (2018) and the University of Wisconsin–La Crosse (2019). In 2020, the 13th annual BEER symposium was hosted virtually due to the COVID-19 pandemic. BEER is expected to be hosted in 2021 by the University of Richmond in Richmond, VA. CLOUD: CLOUD for Layering, Organizing, and Utilizing Data IBA-CLOUD is a high-performance computing cluster server available for IBA members to use remotely to assist in research. CURE: Cross-Institutional Undergraduate Research Experience Workshop Started in 2016, CURE is an undergraduate research workshop and experience. Students typically meet for a few days to work on their scientific research skills before choosing a faculty member to work with throughout the summer. Students come from around the country, and some will present their work at BEER in the following fall. PEER: Partners in Extending Education and Research PEER is a service that the IBA provides for the scientific community. An appropriate IBA member will work together with individuals from other scientific fields to assist in experimental design, data analysis, and writing. IBA-GCP: IBA Graduate Certificate Program The IBA-GCP is designed to strengthen the mathematical biology background of students before they apply for graduate programs. Courses are available online and in person in the following areas: mathematical modeling, data analysis, computer science, and biological sciences. Academic Journals: Letters in Biomathematics and Spora Letters in Biomathematics (LiB) is an open access peer-reviewed international journal dedicated to showcasing the most current research in biomathematics and related fields. Spora: A Journal of Biomathematics is an open-access research journal for undergraduate and graduate research in the field of biomathematics. Currently there are five published volumes of Spora and 31 total published papers. Fellowship Awards The IBA grants fellowship awards to outstanding scholars who have made significant contributions to the field of mathematical biology. The Distinguished Senior Fellowship is awarded to senior scholars who have a record of significant scientific accomplishments and active leadership in biomathematics. This award is given in even-numbered years. The Excellence in Research Award is awarded to junior scholars who have scientific accomplishments in biomathematics and the potential to become a leader in the field. This award is given in odd-numbered years. References Bioinformatics organizations Organizations based in Illinois Ecology Academic organizations Mathematical and theoretical biology
Intercollegiate Biomathematics Alliance
[ "Mathematics", "Biology" ]
1,181
[ "Bioinformatics organizations", "Mathematical and theoretical biology", "Applied mathematics", "Ecology", "Bioinformatics" ]
63,192,698
https://en.wikipedia.org/wiki/National%20Timing%20Centre
The United Kingdom National Timing Centre is a proposed network of atomic clocks consisting of a central building and a series of other locations across the UK. The new system will cost £36 million; in addition, the UK government has provided £6.7 million through Innovate UK funding and £40 million toward a new research programme, Quantum Technologies for Fundamental Physics, to support UK research and investment. Locations: University of Birmingham; University of Strathclyde; University of Surrey; BT Adastral Park, Suffolk; BBC, Manchester; National Physical Laboratory, Teddington. History Discussions around a United Kingdom National Timing Centre began on 19 February 2020 as a response to the United Kingdom's over-reliance on the European Union's Global Navigation Satellite System (GNSS) and the GNSS systems of the United States of America (USA). References External links National Timing Centre at the National Physical Laboratory Standards organisations in the United Kingdom
National Timing Centre
[ "Physics" ]
190
[ "Spacetime", "Physical quantities", "Time", "Time stubs" ]
63,194,019
https://en.wikipedia.org/wiki/Grainyhead-like%20gene%20family
Grainyhead-like genes are a family of highly conserved transcription factors that are functionally and structurally homologous across a large number of vertebrate and invertebrate species. For an estimated 100 million years or more, this gene family has been evolving alongside life to fine-tune the regulation of epithelial barrier integrity during development: epithelial barrier establishment, maintenance and subsequent homeostasis. The three main orthologues, Grainyhead-like 1, 2 and 3, regulate numerous genetic pathways within different organisms and perform analogous roles between them, ranging from neural tube closure, wound healing and establishment of the craniofacial skeleton to repair of the epithelium. When Grainyhead-like genes are impaired by genetic mutations during embryogenesis, the organism presents with developmental defects that largely affect the ectodermal (and sometimes also endodermal) tissues in which these genes are expressed. These congenital disorders, including cleft lip and exencephaly, vary greatly in their severity and impact on the quality of life of the affected individual. There is much still to learn about the function of these genes, and the more complex roles of Grainyhead-like genes are yet to be discovered. Gene Family The Grainyhead-like (Grhl) gene family is a group of highly conserved transcription factors, which work to regulate the expression of specific target genes. Grainyhead (Grh) was originally identified in Drosophila as being implicated in development through its role in regulating numerous genetic pathways. While Drosophila has only one Grh gene, there are three homologues currently known across other species (Grhl1-3). It appears that all members of the Grhl gene family are involved in epidermal barrier integrity, including its formation and repair, and are tightly regulated to prevent physical defects. The Grhl family of genes is found in a range of organisms, from humans to fish and fungi, and its members play similar roles in the developmental processes they regulate. This could indicate that the Grhl genes are among the earliest genes to arise within our genome, providing vital functions for the survival of an early common ancestor. Conservation The Grhl gene family is tightly conserved between species across millions of years of evolution, also maintaining the binding site (AACCGGTT) on the target genes of Grhl. While the presence of the Grhl genes varies between species, the functions regulated remain largely analogous. The presence of multiple Grhl orthologues is likely due to speciation and the evolution of species from a common ancestor over time. Because many animals possess Grhl genes, there are many possible animal models available for research on the Grhl family. At present, the best-characterized models are Drosophila, mouse and zebrafish. Interestingly, Grh was also identified in fungi, which lack epidermal tissue and instead utilize a cell wall. This suggests that the formation of physical barriers across all, or a large variety of, species may trace back to an evolutionary ancestor that first developed barrier formation through the presence of a Grhl gene. Orthologues Grhl1 Grhl1 is, much like the rest of the family of genes, involved in epithelial barrier formation and wound healing, while the loss of Grhl1 is often associated with the activation of the skin's immune system. 
Knockout of grhl1 in zebrafish has been shown to cause hair cell apoptosis within the inner ear, which leads to sensory epithelium damage that consequently causes deafness. Grhl1 may carry out its functions through regulation of downstream genetic targets such as desmosomal cadherin genes (Dsg1) and other cadherin family genes, as a reduction in Grhl1 yields similar phenotypes to that of reduced Dsg1 expression. The desmosomes are the intercellular junctions within the epidermis, and genes like Dsg1 regulate cadherin expression within these junctions. The development and differentiation of epidermal cells is regulated by Grhl1 in a tissue-specific manner in vertebrates, meaning that different tissues will respond differently to Grhl1 regulation. In regards to other craniofacial features, such as the palate and jaw, Grhl1 does not currently have any known significant role in their development. Grhl2 Grhl2 is involved in lower jaw formation of mammals, among other craniofacial developmental processes. It is also evolutionarily closer to Grhl1 than to Grhl3, while still exhibiting the highly conserved functions that all Grhl genes share. It also appears that Grhl2 is involved in the fusion of the facial bones and that disruption to the regulation of Grhl2 can lead to cranioschisis/split face during embryonic development, often causing death. Continuing with the trend of incomplete fusion, the formation of the neural tube and abdominal wall is also regulated by Grhl2, as is evident from the incomplete closure of these structures, leading to spina bifida and thoracoabdominoschisis, following loss of Grhl2 function in Grhl2-mutant mouse models. Additionally, over-expression of Grhl2 can also lead to mice developing spina bifida, showing the delicate balance in regulation required for Grhl2. Grhl2 is also related to breast cancer progression due to its ability to regulate epithelial cells and other processes such as epithelial-mesenchymal transition (EMT), although it is not known whether EMT is promoted or inhibited by Grhl2. However, tumour progression is more associated with the epithelial tissue phenotype. Interestingly, within zebrafish there are two separate orthologues, grhl2a and grhl2b. Comparing the homology of these two orthologues to the human and mouse equivalent, Grhl2, showed that grhl2b had 36 out of 47 amino acids identical (77% identical), meaning it was slightly more conserved than grhl2a, which had 34 out of 47 (72% identical). Loss of grhl2b causes apoptosis throughout the brain and the nervous system of zebrafish. A similar result came from a mouse study and led to the belief that grhl2b is a key survival factor for neural cells. Grhl3 Much the same as the previous two orthologues, Grhl3 is involved in the regulation of epidermal tissue, such as the formation of the jaw, neural tube and other craniofacial features, and does so across both land and aquatic organisms. Grhl3 is a downstream target of Irf6, and plays a key role in processes involving fusion during development, much like Grhl2, especially in the oral palate and spinal cord. A mutation of Grhl3 that causes an increase or a decrease in expression can lead to Van der Woude syndrome, which is characterized by phenotypes that include cleft lip and/or palate and spina bifida. Primarily, Grhl3 appears to play a vital role in regulating the development of the craniofacial skeleton. 
A genome-wide association study found that Grhl3 is an etiological variant for a nonsyndromic form of cleft palate, which accounts for roughly 50% of all cleft palate cases, highlighting the level of impact that dysregulation of Grhl3 has on development. Apart from the defects that are physically noticeable, Grhl3 is also expressed in the brain of mouse embryos and has been shown to regulate the impulsiveness and anxiety levels of mice. Furthermore, it appears that grhl3 regulates the enveloping layer of zebrafish and axial extension, as well as cell size and identity during embryonic development. If expression is disrupted during the early stages of development, it will lead to severe defects that can cause the death of the embryo before epiboly is complete. Epiboly is the stage of development for select organisms, such as the xenopus, sea urchin and zebrafish, when the cells of the embryo grow and migrate to the opposite end of the yolk sac to envelop it and continue developing. Developmental defects Associated defects/diseases There are thousands of deaths a year of infants, either during or shortly after birth, and the leading cause of these deaths is congenital birth defects (CBDs), abnormalities that are often chromosomal or genetic in origin. In 2004, CBDs were the cause of over 139,000 hospitalizations in the U.S. and cost the community $2.6 billion in healthcare and medical supplies. While some CBDs, such as cleft lip, can be easily fixed by simple surgery or medication, there are still life-threatening diseases that are caused by mutations in the Grhl family members or the genetic pathways they are associated with. In developing countries, where a large percentage of the population lives in poverty, families struggle to receive the necessary treatment to combat CBDs, and the extent to which quality of life is affected continues to worsen. Members of the Grhl family are closely related to endodermal tissues, and the issues that can arise from a mutation in one of the Grhl family members include respiratory problems, loss of hearing, spina bifida and much more. Grhl3 has been shown to be a downstream target of genes such as Fgf8 and Irf6, whose associated pathways are involved in the aetiology of Van der Woude syndrome. Role in disease References Protein families
Grainyhead-like gene family
[ "Biology" ]
1,994
[ "Protein families", "Protein classification" ]
63,195,300
https://en.wikipedia.org/wiki/Computing%20the%20Continuous%20Discretely
Computing the Continuous Discretely: Integer-Point Enumeration in Polyhedra is an undergraduate-level textbook in geometry, on the interplay between the volume of convex polytopes and the number of lattice points they contain. It was written by Matthias Beck and Sinai Robins, and published in 2007 by Springer-Verlag in their Undergraduate Texts in Mathematics series (Vol. 154). A second edition was published in 2015, and a German translation of the first edition by Kord Eickmeyer, Das Kontinuum diskret berechnen, was published by Springer in 2008. Topics The book begins with a motivating problem, the coin problem of determining which amounts of money can be represented (and what is the largest non-representable amount of money) for a given system of coin values. Other topics touched on include face lattices of polytopes and the Dehn–Sommerville equations relating numbers of faces; Pick's theorem and the Ehrhart polynomials, both of which relate lattice counting to volume; generating functions, Fourier transforms, and Dedekind sums, different ways of encoding sequences of numbers into mathematical objects; Green's theorem and its discretization; Bernoulli polynomials; the Euler–Maclaurin formula for the difference between a sum and the corresponding integral; special polytopes including zonotopes, the Birkhoff polytope, and permutohedra; and the enumeration of magic squares. In this way, the topics of the book connect together geometry, number theory, and combinatorics. Audience and reception This book is written at an undergraduate level, and provides many exercises, making it suitable as an undergraduate textbook. Little mathematical background is assumed, except for some complex analysis towards the end of the book. The book also includes open problems, of more interest to researchers in these topics. As reviewer Darren Glass writes, "Even people who are familiar with the material would almost certainly learn something from the clear and engaging exposition that these two authors use." Reviewer Margaret Bayer calls the book "coherent and tightly developed ... accessible and engaging", and reviewer Oleg Karpenkov calls it "outstanding". See also List of books about polyhedra References Polytopes Lattice points Volume Mathematics textbooks 2007 non-fiction books 2015 non-fiction books Springer Science+Business Media books
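The book's central theme, relating lattice-point counts to volume, can be seen concretely in Pick's theorem, one of the topics listed above: a lattice polygon with area A, I interior lattice points, and B boundary lattice points satisfies A = I + B/2 − 1. The sketch below is illustrative only and not taken from the book; the polygon and function names are arbitrary choices.

```python
from math import gcd

def lattice_polygon_area(vertices):
    """Area via the shoelace formula (vertices in order around the polygon)."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2

def boundary_points(vertices):
    """Lattice points on the boundary: each edge contributes gcd(|dx|, |dy|)."""
    n = len(vertices)
    return sum(gcd(abs(vertices[(i + 1) % n][0] - vertices[i][0]),
                   abs(vertices[(i + 1) % n][1] - vertices[i][1]))
               for i in range(n))

# Pick's theorem: A = I + B/2 - 1, so the interior count is I = A - B/2 + 1.
poly = [(0, 0), (4, 0), (4, 3), (0, 3)]  # a 4x3 lattice rectangle
A = lattice_polygon_area(poly)
B = boundary_points(poly)
I = A - B / 2 + 1
print(A, B, I)  # 12.0, 14, 6.0 -> the 3x2 grid of strictly interior points
```

Counting I directly and computing A − B/2 + 1 agree for any simple lattice polygon, which is exactly the kind of count-versus-volume correspondence the book develops in higher dimensions via Ehrhart polynomials.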
Computing the Continuous Discretely
[ "Physics", "Mathematics" ]
483
[ "Scalar physical quantities", "Physical quantities", "Lattice points", "Quantity", "Size", "Extensive quantities", "Volume", "Wikipedia categories named after physical quantities", "Number theory" ]
63,199,597
https://en.wikipedia.org/wiki/Institute%20of%20Nuclear%20Physics%20of%20the%20Polish%20Academy%20of%20Sciences
The Henryk Niewodniczański Institute of Nuclear Physics of the Polish Academy of Sciences is a research center in the field of nuclear physics, located in Kraków. It was founded in 1955 by Henryk Niewodniczański, and in 1988 the institute was named after him. The co-founder of the Institute was Marian Mięsowicz. The institute conducts research in four main areas: astrophysics and particle physics; nuclear physics and strong interactions; condensed matter (including nano-materials); and interdisciplinary and applied research, which involves applications of physics in medicine, biology, dosimetry, environmental protection, nuclear geophysics, radiochemistry, high-temperature plasma diagnostics, and the study of complex systems, such as the human brain, econophysics or linguistics. See also Cosmic-Ray Extremely Distributed Observatory References Physics organizations Institutes of the Polish Academy of Sciences 1955 establishments in Poland Organizations established in 1955 Nuclear physics
Institute of Nuclear Physics of the Polish Academy of Sciences
[ "Physics" ]
198
[ "Nuclear physics" ]
59,600,531
https://en.wikipedia.org/wiki/Phase%20reduction
Phase reduction is a method used to reduce a multi-dimensional dynamical equation describing a nonlinear limit cycle oscillator into a one-dimensional phase equation. Many phenomena in our world, such as chemical reactions, electric circuits, mechanical vibrations, cardiac cells, and spiking neurons, are examples of rhythmic phenomena and can be considered as nonlinear limit cycle oscillators. History The theory of the phase reduction method was first introduced in the 1950s, when the existence of periodic solutions to nonlinear oscillators under perturbation was discussed by Malkin. In the 1960s, Winfree illustrated the importance of the notion of phase and formulated the phase model for a population of nonlinear oscillators in his studies on biological synchronization. Since then, many researchers have discovered different rhythmic phenomena related to phase reduction theory. Phase model of reduction Consider the dynamical system of the form dX/dt = F(X), where X is the oscillator state variable and F is the baseline vector field. Let φ(t, X₀) be the flow induced by the system, that is, φ(t, X₀) is the solution of the system for the initial condition X(0) = X₀. This system of differential equations can describe a conductance-based neuron model with X = (V, n), where V represents the voltage difference across the membrane and n represents the vector of gating variables. When a neuron is perturbed by a stimulus current, the dynamics of the perturbed system will no longer be the same as the dynamics of the baseline neural oscillator. The target here is to reduce the system by defining a phase for each point in some neighbourhood of the limit cycle. Sufficiently small perturbations (e.g. external forcing or a stimulus effect on the system) may cause a large deviation of the phase, but the amplitude is only slightly perturbed because of the attraction of the limit cycle. Hence we need to extend the definition of the phase to points in the neighbourhood of the cycle by introducing the definition of the asymptotic phase (or latent phase). This helps us to assign a phase Θ(X) to each point in the basin of attraction of a periodic orbit. The set of points in the basin of attraction that share the same asymptotic phase is called an isochron (e.g. see Figure 1); isochrons were first introduced by Winfree. Isochrons can be shown to exist for such a stable hyperbolic limit cycle. So for every point X in some neighbourhood of the cycle, the evolution of the phase can be given by the relation dΘ(X)/dt = ω, where ω = 2π/T is the natural frequency of the oscillation. By the chain rule we then obtain the equation governing the evolution of the phase of the neuron model, given by the phase model dθ/dt = ∇ₓΘ · F(X) = ω, where ∇ₓΘ is the gradient of the phase function Θ with respect to the neuron's state vector X (for the derivation of this result, see the references). This means that the multi-dimensional system describing the oscillating neuron dynamics is reduced to a simple one-dimensional phase equation. One can notice that it is impossible to retrieve the full information of the oscillator from the phase, because Θ is not a one-to-one mapping. Phase model with external forcing Consider now a weakly perturbed system of the form dX/dt = F(X) + εp(X, t), where F is the baseline vector field and εp(X, t) is a weak periodic external forcing (or stimulus effect) of period Tₑ, which can in general be different from the oscillator's period T, and frequency Ω = 2π/Tₑ; the forcing might depend on the oscillator state X. 
Assuming that the baseline neural oscillator (that is, the system with ε = 0) has an exponentially stable limit cycle χ with period T (for an example, see Figure 1) that is normally hyperbolic, it can be shown that the limit cycle persists under small perturbations. This implies that for a small perturbation, the perturbed system will remain close to the limit cycle. Hence we assume that such a limit cycle always exists for each neuron. The evolution of the perturbed system in terms of the isochrons is dθ/dt = ω + ε∇ₓΘ · p(X, t), where ∇ₓΘ is the gradient of the phase with respect to the neuron's state vector X, and p(X, t) is the stimulus effect driving the firing of the neuron as a function of time t. This phase equation is a partial differential equation (PDE). For sufficiently small ε, a reduced phase model evaluated on the limit cycle of the unperturbed system can be given, up to first order in ε, by dθ/dt = ω + εZ(θ) · p(θ, t), where the function Z(θ), the gradient ∇ₓΘ evaluated at the point χ(θ) on the limit cycle, measures the normalized phase shift due to a small perturbation delivered at that point, and is called the phase sensitivity function or infinitesimal phase response curve. In order to analyze the reduced phase equation corresponding to the perturbed nonlinear system, we would need to solve a PDE, which is not trivial. So we simplify it into an autonomous phase equation, which can more easily be analyzed. Assuming that the frequency mismatch between ω and Ω is sufficiently small, so that ω − Ω = εδ with δ of order one, we can introduce the new relative phase function ψ(t) = θ(t) − Ωt. By the method of averaging, assuming that ψ does not vary within one forcing period Tₑ, we obtain the approximated phase equation dψ/dt = ε(δ + Γ(ψ)), where εδ = ω − Ω and Γ(ψ) is a 2π-periodic function representing the effect of the periodic external forcing on the oscillator phase, defined by Γ(ψ) = (1/Tₑ) ∫₀^Tₑ Z(ψ + Ωt) · p(t) dt. The graph of this function can be shown to exhibit the dynamics of the approximated phase model; for more illustrations, see the references. Examples of phase reduction For a sufficiently small perturbation of a certain nonlinear oscillator or a network of coupled oscillators, we can compute the corresponding phase sensitivity function or infinitesimal PRC Z(θ).
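The averaged equation dψ/dt = ε(δ + Γ(ψ)) lends itself to a direct numerical check. The sketch below is a minimal illustration, not from the source: the phase sensitivity function Z, the forcing p, and all parameter values are assumptions (a sinusoidal Z is typical of oscillators near a Hopf bifurcation, but the text prescribes none). It computes Γ(ψ) by direct averaging over one forcing period and locates phase-locked states, the zeros of δ + Γ(ψ).

```python
import numpy as np

# Hypothetical ingredients (assumptions, not from the article):
omega, Omega = 1.02, 1.00          # oscillator and forcing frequencies
eps = 0.1                          # small perturbation strength
delta = (omega - Omega) / eps      # scaled frequency mismatch, O(1)
Z = lambda theta: np.sin(theta)    # assumed phase sensitivity function
p = lambda t: np.cos(Omega * t)    # weak periodic forcing, period Te

Te = 2 * np.pi / Omega
t = np.linspace(0.0, Te, 2000, endpoint=False)

def Gamma(psi):
    """Average of Z(psi + Omega*t) * p(t) over one forcing period."""
    return np.mean(Z(psi + Omega * t) * p(t))

# Scan the relative phase for fixed points of d(psi)/dt = eps*(delta + Gamma(psi)).
psis = np.linspace(0.0, 2 * np.pi, 361)
rhs = np.array([delta + Gamma(ps) for ps in psis])
locked = psis[:-1][np.sign(rhs[:-1]) != np.sign(rhs[1:])]  # sign changes bracket roots
print("phase-locked states near psi =", np.round(locked, 2))
```

For this choice, Γ(ψ) = sin(ψ)/2, so locking requires |δ| ≤ 1/2; the sign of the slope of Γ at each root determines which locked state is stable.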
Phase reduction
[ "Physics", "Mathematics" ]
1,117
[ "Mechanics", "Dynamical systems" ]
59,600,751
https://en.wikipedia.org/wiki/IR%20welding
IR welding is a welding technique that uses a non-contact heating method to melt and fuse thermoplastic parts together using the energy from infrared radiation. The process was first developed in the late 1900s, but due to the high capital cost of IR equipment the process was not commonly applied in industry until prices dropped in the 1990s. IR welding typically uses a range of wavelengths from 800 to 11,000 nm on the electromagnetic spectrum to heat, melt, and fuse the interface between two plastic parts through the absorption and conversion of the IR energy into heat. Laser welding is a similar joining process that applies IR radiation at a single wavelength. There are many different welding techniques that use IR heating, with the three major modes being surface heating, through transmission IR welding (TTIr), and IR staking. A variety of heating configurations have been applied to these techniques, such as scanning, continuous illumination, and mask welding. Advantages such as faster and controllable non-contact heating applicable to a wide range of simple or complex part geometries set IR welding apart from other forms of plastic welding. CO detectors, IV bags, and brake transmission lines are just a few of the many products that utilize IR welds. History IR welding is categorized as a form of thermal plastic welding alongside hot gas welding, hot tool welding, and extrusion welding. Although infrared radiation was first discovered in the 1800s, IR was not applied as a source of heat until the beginning of WWII, when it was found to be more effective than the fuel convection ovens of that time. IR radiation was first tested for the welding of thermoplastic polymers in the late 1900s, but the process was relatively new and not fully understood. IR welding systems offered faster heating times than the other forms of thermal welding, but the high capital costs limited its development. With a decrease in the price of equipment in the 1990s, IR welding has become more popular in industry. Physics of IR welding IR welding typically uses wavelengths from 800 to 11,000 nm on the electromagnetic spectrum. Plastics interact with IR radiation through reflection, transmission, and absorption. Incident IR radiation can either be reflected off the surface of the plastic, transmitted through the plastic, or absorbed into the plastic as other forms of energy, including thermal energy. The ratio of these three interactions depends on the wavelength of the IR radiation and the receiving plastic's properties. Amorphous plastics are generally optically clear and can transmit almost all incident IR radiation. For this reason they are commonly used in TTIr. Semi-crystalline plastics can diffuse incident IR radiation at the boundaries between amorphous and crystalline regions, reducing the transmittance and increasing the absorbance of the material. The higher absorptivity results in more heat generation for a given IR source. Additives such as clarifying agents can be used to increase a plastic's transmittance, while dyes and pigments can be used to increase the absorbance of a material. Increasing amounts of these additives can decrease the strength of both the material and the welded joint. The closer the IR radiation source, the higher the incident efficiency on the material. IR radiation is most effective when directed normal to the part. Radiation energy always affects the surface of a part, while the depth of penetration that the energy can reach depends on the plastic's crystallinity. 
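The split into reflected, absorbed, and transmitted energy described above is commonly approximated with the Beer–Lambert law. The article names no specific law or material values, so the exponential model, the surface reflectance, and the absorption coefficients in the sketch below are illustrative assumptions only.

```python
import numpy as np

def ir_energy_balance(depth_mm, alpha_per_mm, reflectance):
    """Split incident IR irradiance (normalized to 1) into reflected,
    absorbed, and transmitted fractions, assuming Beer-Lambert attenuation
    of the portion that enters the part."""
    reflected = reflectance
    entering = 1.0 - reflectance
    transmitted = entering * np.exp(-alpha_per_mm * depth_mm)
    absorbed = entering - transmitted
    return reflected, absorbed, transmitted

# Illustrative values only: a semi-crystalline part absorbs more strongly
# (larger alpha) than an amorphous, optically clear one, as described above.
for label, alpha in [("amorphous (assumed alpha=0.05/mm)", 0.05),
                     ("semi-crystalline (assumed alpha=1.5/mm)", 1.5)]:
    r, a, t = ir_energy_balance(depth_mm=2.0, alpha_per_mm=alpha, reflectance=0.05)
    print(f"{label}: reflected={r:.2f} absorbed={a:.2f} transmitted={t:.2f}")
```

The same arithmetic explains why TTIr pairs a weakly absorbing (high-transmittance) outer part with a strongly absorbing joint surface: almost all of the entering energy then converts to heat where the weld is wanted.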
Equipment IR Sources Potential IR welding sources include quartz lamps and ceramic heaters, which can generate a wide range of IR wavelengths. Laser welding employs IR sources that operate at a single wavelength, such as CO2 lasers, Nd:YAG lasers, and laser diodes. The equipment selected for each welding process stems from the type of radiation produced. Quartz lamps produce wavelengths of around 1,000 to 5,000 nm and ceramic heaters produce wavelengths of around 5,000 to 10,000 nm. Attachments P-wave technology utilizes an IR lamp and a pre-placed focusing device, such as an IR transducer or film, that can filter and focus IR radiation at a desired wavelength and increased intensity within a selected area to improve weld penetration with minimal surface damage. This method allows improved IR welding of polymers with higher melting temperatures, such as most fluoropolymers and polyketones. IR welding techniques The three major welding techniques used in the industry today include surface heating, through transmission IR welding, and IR staking. All IR welding techniques contain the following six basic steps in some form: Loading of parts into the welding system that will hold the parts in place Insertion of the IR source in front of the face of each part that will be welded together Application of IR radiation to melt a thin layer of plastic on the surface of each part Change-over, in which the IR source is removed from the face of each part Clamping of the parts to join the melted surfaces together under pressure as they cool and solidify Unloading of the parts after the weld has been made Surface Heating Surface heating includes heating and melting of the interface between plastic parts with IR radiation and forcing the parts together into a molten joint that solidifies as one part. This process can be split into 3 phases as shown in the figure to the right: A) Loading of parts, insertion of the IR source, and IR application. B) Change-over with the removal of the IR source and clamping of the parts to join them. C) Unloading of the parts after the weld is made. Through Transmission IR Welding (TTIr) TTIr welding is the joining of an IR transparent part to a second part such that the IR radiation travels through the transparent part and heats the surface of the second part, as shown in the figure to the right. IR wavelengths are generally within 800 to 1050 nm. To make a transparent part absorbent to IR radiation, the addition of dyes or colorants such as carbon black can be used. Highly absorbent thermoplastic films can be placed at the joint to receive the IR radiation and melt the interface during welding. Using these methods, TTIr welds can be completed between parts of either the same or different materials. IR Staking IR staking includes the localized welding of a thermoplastic stud or stake from one part into the cavity of a non-weldable part to form a mechanical fastener. As shown in the figure to the right, the polymer part and non-weldable part are first placed together (A), then the projecting polymer is melted and formed around the non-weldable part to fasten the two together (B). The stud can be heated through directed TTIr when pre-placed within the cavity of an IR transparent part, then melted to deform it into the button shape required to fill the cavity before solidifying. Surface IR radiation can also be used to soften a plastic stud, which is then pressed into a button-shaped die to form a head before cooling and solidifying. 
Heating Configurations IR systems generally rely on one of three surface heating methods: scanning, continuous illumination, and mask welding. Scanning Scanning involves the movement of an IR beam across the surface of a part using either an automated motion system or galvanic mirrors. Equipment is limited by the speed of movement across the part's surface needed to maintain uniform surface temperatures. In TTIr welding, scanning allows the un-melted portion of the part to act as a mechanical stop in order to maintain the joint gap between the two parts. Continuous illumination Continuous illumination uses more than one IR radiation source to heat the entire joint interface at the same time. Part tolerances or fit are not as crucial with this method, as the entire surface will be melted before welding. This method is useful when welding parts with complex geometries, employing the multiple IR sources to evenly heat all forms of joint interfaces. Mask welding Similar to continuous illumination, mask welding utilizes multiple IR sources to completely illuminate a joint interface while placing an IR radiation mask over the parts to control which regions will form a melt layer. Materials Below is a list of materials well known for their IR weldability: Polycarbonate (PC) Poly(methyl methacrylate) (PMMA) Ethylene vinyl alcohol (EVOH) Acrylic Polystyrene (PS) Acrylonitrile butadiene styrene (ABS) Polyvinyl chloride (PVC) Polyethylene (PE) Polypropylene (PP) Polyketone (PK) Elastomers Polyamide (PA) Polyoxymethylene (POM or Acetal) Polytetrafluoroethylene (PTFE) High density polyethylene (HDPE) Glass fiber reinforced polyphenylene ether (PPE) Polybutylene terephthalate (PBT) Glass fiber reinforced polyether sulfone (PES) Advantages / Disadvantages Advantages Faster heating and cycle times than other thermal plastic welding processes Non-contact heating on the weld interface prevents plastic parts from sticking to the heat source, as seen in hot plate welding Controlled heat affected zone for reduced flash compared with hot plate welding Minimal contamination risk, with less particulate produced than in other thermal plastic welding processes Continuous and easily automated process Potential for higher joint strengths and lower residual stresses than other thermal plastic welding processes Cost-effective in comparison to laser welding Direct heat transfer to parts allows for maximum energy efficiency and fast response time with lower weight equipment than other thermal plastic welding processes Well suited for welding high temperature thermoplastics in comparison to hot plate welding Disadvantages IR welding parts and systems are more expensive than other thermal plastic welding processes IR welding can only weld materials susceptible to IR waves and part interfaces exposed to IR radiation Prolonged heating may cause material degradation or vapor oxidation entrapment Applications New joining technologies using IR welding are critical for fabricating complex parts and assemblies at high speeds and low costs. Although IR plastic welding has many advantages over other types of plastic welding, limitations such as equipment costs and the required material properties reduce the number of industrial applications of the method. 
A few examples of current industrial applications are shown below: CO detector filters are IR welded to their plastic housing to prevent damaging the filter with particulate Medical IV-bags are IR welded to achieve minimal flash and particulate generation for smooth and clean fluid flow High-speed cut and seal film (300 m/min) processes allow for minimal fraying at the edges and cauterized seams Brake fluid reservoirs are IR welded to prevent clogging and contamination of the small fluid transmission channels PE pipes in the infrastructure of natural gas transmission undergo IR welding using TTIr to improve joint strength with minimal coupling deformation References Welding
IR welding
[ "Engineering" ]
2,141
[ "Welding", "Mechanical engineering" ]
59,601,438
https://en.wikipedia.org/wiki/Inner%20core%20super-rotation
Inner core super-rotation is the eastward rotation of the inner core of Earth relative to its mantle, for a net rotation rate that is usually faster than Earth as a whole. A 1995 model of Earth's dynamo predicted super-rotations of up to 3 degrees per year; the following year, this prediction was supported by observed discrepancies in the time that p-waves take to travel through the inner and outer core. Utilizing both s- and p-waves dramatically increases the confidence levels of many conclusions drawn from seismic data. Seismic observations have made use of a direction dependence (anisotropy) of the speed of seismic waves in the inner core, as well as spatial variations in the speed. Other estimates come from free oscillations of Earth. The results are inconsistent, and the existence of a super-rotation is still controversial, but it is probably less than 0.1 degrees per year. When geodynamo models take into account gravitational coupling between the inner core and mantle, the predicted super-rotation is lowered to as little as 1 degree per million years. For the inner core to rotate despite gravitational coupling, it must be able to change shape, which places constraints on its viscosity. A 2023 study reported that the Earth's inner core stopped spinning faster than the planet's surface around 2009 and is now likely rotating slower than it. Background At the center of Earth is the core, a ball with a mean radius of 3480 kilometres that is composed mostly of iron. The outer core is liquid while the inner core, with a radius of 1220 km, is solid. Because the outer core has a low viscosity, it could be rotating at a different rate from the mantle and crust. This possibility was first proposed in 1975 to explain a phenomenon of Earth's magnetic field called westward drift: some parts of the field rotate about 0.2 degrees per year westward relative to Earth's surface. In 1981, David Gubbins of Leeds University predicted that a differential rotation of the inner and outer core could generate a large toroidal magnetic field near the shared boundary, accelerating the inner core to the rate of westward drift. This would be in opposition to the Earth's rotation, which is eastwards, so the overall rotation would be slower. In 1995, Gary Glatzmaier at Los Alamos and Paul Roberts at UCLA published the first "self-consistent" three-dimensional model of the dynamo in the core. The model predicted that the inner core rotates 3 degrees per year faster than the mantle, a phenomenon that became known as super-rotation. In 1996, Xiaodong Song and Paul G. Richards, scientists at the Lamont–Doherty Earth Observatory, presented seismic evidence for a super-rotation of 0.4 to 1.8 degrees per year, while another study estimated the super-rotation to be 3 degrees per year. Seismic observations The main observational constraints on inner core rotation come from seismology. When an earthquake occurs, two kinds of seismic wave travel down through the Earth: those with ground motion in the direction the wave propagates (p-waves) and those with transverse motion (s-waves). S-waves do not travel through the outer core because they involve shear stress, a type of deformation that cannot occur in a liquid. 
In seismic notation, a p-wave is represented by the letter P when traveling through the crust and mantle and by the letter K when traveling through the outer core. A wave that travels through the mantle, core and mantle again before reaching the surface is represented by PKP. For geometric reasons, two branches of PKP are distinguished: PKP(AB) through the upper part of the outer core, and PKP(BC) through the lower part. A wave passing through the inner core is referred to as PKP(DF). (Alternate names for these phases are PKP1, PKP2 and PKIKP.) Seismic waves can travel multiple paths from an earthquake to a given sensor. PKP(BC) and PKP(DF) waves have similar paths in the mantle, so any difference in the overall travel time is mainly due to the difference in wave speeds between the outer and inner core. Song and Richards looked at how this difference changed over time. Waves traveling from south to north (emitted by earthquakes in the South Sandwich Islands and received at Fairbanks, Alaska) had a differential that changed by 0.4 seconds between 1967 and 1995. By contrast, waves traveling near the equatorial plane (e.g., between Tonga and Germany) showed no change. One of the criticisms of the early estimates of super-rotation was that uncertainties about the hypocenters of the earthquakes, particularly those in the earlier records, caused errors in the measurement of travel times. This error can be reduced by using data for doublet earthquakes. These are earthquakes that have very similar waveforms, indicating that the earthquakes were very close to each other (within about a kilometer). Using doublet data from the South Sandwich Islands, a study in 2015 arrived at a new estimate of 0.41° per year. Seismic observations – in particular "temporal changes between repeated seismic waves that should traverse the same path through the inner core" – were used to reveal a core rotation slow-down around 2009. This is not thought to have major effects and one cycle of the oscillation in rotation is thought to be about seven decades, coinciding with several other geophysical periodicities, "especially the length of day and magnetic field". Inner core anisotropy Song and Richards explained their observations in terms of the prevailing model of inner core anisotropy at the time. Waves were observed to travel faster between north and south than along the equatorial plane. A model for the inner core with uniform anisotropy had a direction of fastest travel tilted at an angle 10° from the spin axis of the Earth. Since then, the model for the anisotropy has become more complex. The top 100 kilometers are isotropic. Below that, there is stronger anisotropy in a "western" hemisphere (roughly centered on the Americas) than in an "eastern" hemisphere (the other half of the globe), and the anisotropy may increase with depth. There may also be a different orientation of anisotropy in an "innermost inner core" (IMIC) with a radius of about 550 kilometers. A group at the University of Cambridge used travel time differentials to estimate the longitudes of the hemisphere boundaries with depth up to 90 kilometers below the inner core boundary. Combining this information with an estimate for the rate of growth for the inner core, they obtained a rate of 0.1–1° per million years. Estimates of the rotation rate based on travel time differentials have been inconsistent. 
Those based on the Sandwich Island earthquakes have the fastest rates, although they also have a weaker signal, with PKP(DF) barely emerging above the noise. Estimates based on other paths have been lower or even in the opposite direction. By one analysis, the rotation rate is constrained to be less than 0.1° per year. Heterogeneity A study in 1997 revisited the Sandwich Islands data and came to a different conclusion about the origin of changes in travel times, attributing them to local heterogeneities in wave speeds. The new estimate for super-rotation was reduced to 0.2–0.3° per year. Inner core rotation has also been estimated using PKiKP waves, which scatter off the surface of the inner core, rather than PKP(DF) waves. Estimates using this method have ranged from 0.05 to 0.15° per year. Normal modes Another way of constraining the inner core rotation is using normal modes (standing waves in Earth), giving a global picture. Heterogeneities in the core split the modes, and changes in the "splitting functions" over time can be used to estimate the rotation rate. However, their accuracy is limited by the shortage of seismic stations in the 1970s and 1980s, and the inferred rotation can be positive or negative depending on the mode. Overall, normal modes are unable to distinguish the rotation rate from zero. Theory In the 1995 model of Glatzmaier and Roberts, the inner core is rotated by a mechanism similar to an induction motor. A thermal wind in the outer core gives rise to a circulation pattern with flow from east to west near the inner core boundary. Magnetic fields passing through the inner and outer cores provide a magnetic torque, while viscous torque on the boundary keeps the inner core and the fluid near it rotating at the same rate on average. The 1995 model did not include the effect of gravitational coupling between density variations in the mantle and topography on the inner core boundary. A 1996 study predicted that it would force the inner core and mantle to rotate at the same rate, but a 1997 paper showed that relative rotation could occur if the inner core was able to change its shape. This would require the viscosity to be less than 1.5 × 10²⁰ pascal-seconds (Pa·s). It also predicted that, if the viscosity were too low (less than 3 × 10¹⁶ Pa·s), the inner core would not be able to maintain its seismic anisotropy. However, the source of the anisotropy is still not well understood. A model of the viscosity of the inner core based on Earth's nutations constrains the viscosity to 2–7 × 10¹⁴ Pa·s. Geodynamo models that take into account gravitational locking and changes in the length of day predict a super-rotation rate of only 1° per million years. Some of the inconsistencies between measurements of the rotation may be accommodated if the rotation rate oscillates. See also Inverse problem Notes and references Further reading Rotation Structure of the Earth Geodynamics 1996 in science
Inner core super-rotation
[ "Physics" ]
2,104
[ "Physical phenomena", "Motion (physics)", "Classical mechanics", "Rotation" ]
59,601,600
https://en.wikipedia.org/wiki/Squeeze%20flow
Squeeze flow (also called squeezing flow, squeezing film flow, or squeeze flow theory) is a type of flow in which a material is pressed out or deformed between two parallel plates or objects. First explored in 1874 by Josef Stefan, squeeze flow describes the outward movement of a droplet of material, its area of contact with the plate surfaces, and the effects of internal and external factors such as temperature, viscoelasticity, and heterogeneity of the material. Several squeeze flow models exist to describe Newtonian and non-Newtonian fluids undergoing squeeze flow under various geometries and conditions. Numerous applications across scientific and engineering disciplines, including rheometry, welding engineering, and materials science, provide examples of squeeze flow in practical use. Basic Assumptions Conservation of mass (expressed as a continuity equation), the Navier-Stokes equations for conservation of momentum, and the Reynolds number provide the foundations for calculating and modeling squeeze flow. Boundary conditions for such calculations include assumptions of an incompressible fluid, a two-dimensional system, neglecting of body forces, and neglecting of inertial forces. The applied squeezing force F is related to the material thickness through the fluid viscosity μ, the width W of the assumed rectangular plate, the initial length L₀ of the droplet, the final height h of the droplet, and the change in droplet height over time, dh/dt. To simplify most calculations, the applied force is assumed to be constant. Newtonian fluids Several equations accurately model Newtonian droplet sizes under different initial conditions. Consideration of a single asperity, or surface protrusion, allows for measurement of a very specific cross-section of a droplet. To measure macroscopic squeeze flow effects, models exist for the two most common surfaces: circular and rectangular plate squeeze flows. Single asperity For single asperity squeeze flow, the final height h of the droplet is expressed in terms of the initial height h₀ of the droplet, the applied squeezing force F, the squeezing time t, the fluid viscosity μ, the width W of the assumed rectangular plate, and the initial length L₀ of the droplet. Based on conservation of mass calculations, the droplet width is inversely proportional to droplet height; as the width increases, the height decreases in response to squeezing forces. Circular plate The circular plate solution has the same structure, with R the radius of the circular plate. Rectangular plate The rectangular plate solution assumes a melt layer that has a length much larger than the sample width and thickness. Non-Newtonian fluids Simplifying calculations for Newtonian fluids allows for basic analysis of squeeze flow, but many polymers can exhibit properties of non-Newtonian fluids, such as viscoelastic characteristics, under deformation. The power law fluid model is sufficient to describe behaviors above the melting temperature for semicrystalline thermoplastics or the glass transition temperature for amorphous thermoplastics, and the Bingham fluid model provides calculations based on variations in yield stress. Power law fluid Squeeze flow in a power law fluid is governed by the flow consistency index K (sometimes written m) and the dimensionless flow behavior index n. The consistency index itself depends on temperature through an Arrhenius-type relation involving the initial flow consistency index K₀, the activation energy Eₐ, the universal gas constant R, and the absolute temperature T (see the sketch below). 
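This copy of the text has lost its displayed equations, so the sketch below stands in with two classical results that match the surviving symbol lists: Stefan's constant-force solution for Newtonian squeeze flow between circular plates, and the standard Arrhenius form K(T) = K₀·exp(Eₐ/(R·T)) for the temperature dependence of the consistency index. Both the forms and all numerical values are assumptions, offered as a minimal illustration rather than as the article's exact equations.

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def stefan_height(t, h0, F, mu, R):
    """Gap height h(t) for constant-force Newtonian squeeze flow between
    rigid circular plates of radius R. The classical Stefan law,
        F = 3*pi*mu*R**4 * (-dh/dt) / (2*h**3),
    integrates at constant F to the closed form returned here."""
    return h0 / np.sqrt(1.0 + 4.0 * F * h0**2 * t / (3.0 * np.pi * mu * R**4))

def consistency_index(T, K0, Ea):
    """Arrhenius-type flow consistency index K(T) = K0 * exp(Ea / (R*T));
    K falls as temperature rises, so the melt squeezes out more easily."""
    return K0 * np.exp(Ea / (R_GAS * T))

# Illustrative parameters only (SI units), not taken from the article:
h0, F, mu, R = 1e-3, 50.0, 1e3, 5e-3     # m, N, Pa*s, m
for t in (0.0, 0.5, 1.0, 5.0):
    print(f"t={t:4.1f} s  h={stefan_height(t, h0, F, mu, R)*1e3:.3f} mm")

K0, Ea = 2.0e-2, 4.0e4                    # hypothetical polymer constants
for T in (450.0, 500.0, 550.0):           # typical melt temperatures, K
    print(f"T={T:.0f} K  K={consistency_index(T, K0, Ea):.1f}")
```

The inverse dependence of squeeze rate on h³ in Stefan's law is why the gap collapses quickly at first and then very slowly, a behavior the single asperity and rectangular plate solutions share.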
During experimentation to determine the accuracy of the power law fluid model, observations showed that modeling slow squeeze flow generated inaccurate power law constants (K and n) when measured with a standard viscometer, and fast squeeze flow demonstrated that polymers may exhibit better lubrication than current constitutive models predict. The current empirical model for power law fluids is relatively accurate for modeling inelastic flows, but certain kinematic flow assumptions and an incomplete understanding of polymeric lubrication properties tend to produce inaccurate modeling of power law fluids. Bingham fluid Bingham fluids exhibit uncommon characteristics during squeeze flow. While undergoing compression, Bingham fluids should fail to move and act as a solid until achieving a yield stress; however, as the parallel plates move closer together, the fluid shows some radial movement. One study proposes a "biviscosity" model in which the Bingham fluid retains some unyielded regions that maintain solid-like properties, while other regions yield and allow for some compression and outward movement. The model is written in terms of the known viscosity of the Bingham fluid, a "paradoxical" viscosity of the solid-like state, and a biviscosity region stress, which is determined from the yield stress and a dimensionless viscosity ratio. If the two viscosities coincide, the fluid exhibits Newtonian behavior; in the limit of a very large viscosity ratio, the Bingham model applies. Applications Squeeze flow application is prevalent in several science and engineering fields. Modeling and experimentation assist with understanding the complexities of squeeze flow during processes such as rheological testing, hot plate welding, and composite material joining. Rheological testing Squeeze flow rheometry allows for evaluation of polymers under wide ranges of temperatures, shear rates, and flow indexes. Parallel plate plastometers provide analysis for high viscosity materials such as rubber and glass, cure times for epoxy resins, and fiber-filled suspension flows. While viscometers provide useful results for squeeze flow measurements, testing conditions such as applied rotation rates, material composition, and fluid flow behaviors under shear may require the use of rheometers or other novel setups to obtain accurate data. Hot plate welding During conventional hot plate welding, a successful joining phase depends on proper maintenance of squeeze flow to ensure that pressure and temperature create an ideal weld. Excessive pressure causes squeeze-out of valuable material and weakens the bond due to fiber realignment in the melt layer, while failure to allow cooling to room temperature creates weak, brittle welds that crack or break completely during use. Composite material joining Prevalent in the aerospace and automotive industries, composites serve as expensive, yet mechanically strong, materials in the construction of several types of aircraft and vehicles. While aircraft parts are typically composed of thermosetting polymers, thermoplastics may become an analog to permit increased manufacturing of these stronger materials through their melting abilities and relatively inexpensive raw materials. Characterization and testing of thermoplastic composites experiencing squeeze flow allow for study of fiber orientations within the melt and final products to determine weld strength. 
Fiber strand length and size show significant effects on material strength, and squeeze flow causes fibers to orient along the load direction while being perpendicular to the joining direction to achieve the same final properties as thermosetting composites. References Welding Plastics Materials science Molding processes
Squeeze flow
[ "Physics", "Materials_science", "Engineering" ]
1,338
[ "Applied and interdisciplinary physics", "Welding", "Unsolved problems in physics", "Materials science", "nan", "Mechanical engineering", "Amorphous solids", "Plastics" ]
59,604,285
https://en.wikipedia.org/wiki/Dounce%20homogenizer
Invented by and named for Alexander Dounce, a Dounce homogenizer or "Douncer" is a cylindrical glass tube, closed at one end, with two glass pestles of carefully specified outer diameters, intended for the gentle homogenization of eukaryotic cells (e.g. mammalian cells). Dounce homogenizers are still commonly used today to isolate cellular organelles. The two Dounce homogenizer pestles (known as the "loose" or "A" and "tight" or "B" pestles) have carefully specified outer diameters relative to the inner diameter of the cylinder. The "A" (loose) pestle has a clearance from the cylinder wall of about 0.0025–0.0055 in., while the "B" (tight) pestle has a clearance of about 0.0005–0.0025 in. This allows tissue and cells to be lysed by shear stress with minimal (if any) heating, thereby leaving extracted organelles or heat-sensitive enzyme complexes largely intact. Typically, a soft tissue (e.g. mammalian liver) is cut or broken into smaller pieces and placed into the glass cylinder, alongside a suitable volume of an appropriate lysis buffer. Homogenization is performed by a defined number of "passes" of the pestles, first with the loose pestle, then with the tight pestle, up and down the cylinder. Five to ten passes are typical. Dounce homogenizers are typically produced from borosilicate glass, but are still fragile, and should be used with care. Especially hard or tough tissues should be pre-homogenized before use in a Dounce homogenizer. Eukaryotic cells with tough cell walls, such as Saccharomyces cerevisiae, cannot be directly lysed with a Dounce homogenizer unless the cell wall is first broken down (e.g. with lyticase or zymolyase in the case of S. cerevisiae). References Cell biology Biochemistry Laboratory glassware
Dounce homogenizer
[ "Chemistry", "Biology" ]
444
[ "Biochemistry", "Cell biology", "nan" ]
59,609,098
https://en.wikipedia.org/wiki/Optical%20baffle
An optical baffle is an opto-mechanical construction designed to prevent unwanted light from a source shining into the front of an optical system from reaching the image. Principles Optical systems with stringent requirements on stray light levels often need optical baffles. There are many designs, depending on the desired goals. Generic optical baffle designs and their advantages for stray light control can be classified as reflective, absorbing, or refractive, and as reimaging or nonreimaging systems. References Optical devices
Optical baffle
[ "Materials_science", "Engineering" ]
102
[ "Glass engineering and science", "Optical devices" ]
47,720,961
https://en.wikipedia.org/wiki/HWCG%20LLC
HWCG LLC is a non-profit consortium of deepwater oil and gas companies. HWCG maintains a comprehensive deepwater well containment response model that can be activated immediately in the event of a subsea blowout in the US Gulf of Mexico. It comprises oil and gas companies operating in the Gulf and incorporates the consortium's generic well containment plan. HWCG also has a strong mutual aid component, whereby HWCG members respond to and support another member's incident. History After the Deepwater Horizon oil spill, President Obama announced a drilling moratorium on new permits for offshore wells, and exploration in the Gulf of Mexico came to a standstill. In response to the suspension, twenty-four deepwater operators came together to establish well containment resources and plans according to the guidelines set forth in BOEMRE's (BSEE's) Notice To Lessees No. 2010-N10. These offshore oil and gas companies formed HWCG LLC with the common goal of establishing and maintaining the capability to quickly and comprehensively respond to a subsea blowout. Response System Capabilities The consortium's response system builds on the equipment proven effective in the containment of the Deepwater Horizon blowout, including the Helix Fast Response System with the Q4000 intervention vessel and Helix Producer 1 from Helix Energy Solutions Group. HWCG's core equipment includes two dual-ram capping stacks capable of operating in water depths through 10,000 feet. These capping stacks can effectively shut in and contain a subsea well. If flow and capture is required, the system is capable of processing 130,000 barrels of oily fluids per day and 220 million cubic feet of gas per day. HWCG maintains contracts and operating agreements with over thirty service providers to leverage additional expertise, assistance and equipment. This integrated response solution yielded a tested deployment response time of less than seven days to cap a deepwater well, compared to the nearly 90 days needed to contain the Deepwater Horizon blowout. HWCG continues to run annual subsea incident response drills and collaborates with members, service sector companies and regulators in order to continually test and improve its response plan. References Petroleum organizations Petroleum industry in the United States Trade associations based in the United States
HWCG LLC
[ "Chemistry", "Engineering" ]
454
[ "Petroleum", "Petroleum organizations", "Energy organizations" ]
53,413,131
https://en.wikipedia.org/wiki/Planigon
In geometry, a planigon is a convex polygon that can fill the plane with only copies of itself (isotopic to the fundamental units of monohedral tessellations). In the Euclidean plane there are 3 regular planigons (the equilateral triangle, the square, and the regular hexagon), 8 semiregular planigons, and 4 demiregular planigons, which can tile the plane only together with other planigons. All angles of a planigon are whole divisors of 360°. Tilings are made by edge-to-edge connections by perpendicular bisectors of the edges of the original uniform lattice, or centroids along common edges (they coincide). Tilings made from planigons can be seen as dual tilings to the regular, semiregular, and demiregular tilings of the plane by regular polygons. History In the 1987 book Tilings and patterns, Branko Grünbaum calls the vertex-uniform tilings Archimedean, in parallel to the Archimedean solids. Their dual tilings are called Laves tilings in honor of the crystallographer Fritz Laves. They are also called Shubnikov–Laves tilings after Aleksei Vasilyevich Shubnikov. John Conway calls the uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra. The Laves tilings have vertices at the centers of the regular polygons, and edges connecting centers of regular polygons that share an edge. The tiles of the Laves tilings are called planigons. This includes the 3 regular tiles (triangle, square and hexagon) and 8 irregular ones. Each vertex has edges evenly spaced around it. Three-dimensional analogues of the planigons are called stereohedrons. These tilings are listed by their face configuration, the number of faces at each vertex of a face. For example, V4.8.8 (or V4.8²) means isosceles triangle tiles with one corner where four triangles meet, and two corners where eight triangles meet. Construction The Conway operation of dual interchanges faces and vertices. In Archimedean solids and k-uniform tilings alike, the new vertex coincides with the center of each regular face, or the centroid. In the Euclidean (plane) case, in order to make new faces around each original vertex, the centroids must be connected by new edges, each of which must intersect exactly one of the original edges. Since regular polygons have dihedral symmetry, these new centroid-centroid edges must be perpendicular bisectors of the common original edges (e.g. the centroid lies on all edge perpendicular bisectors of a regular polygon). Thus, the edges of k-dual-uniform tilings coincide with centroid-to-edge-midpoint line segments of all regular polygons in the k-uniform tilings. Using the 12/5 Dodecagram (Above) All 14 usable regular-vertex planigons (VRPs) also hail from the {12/5} dodecagram, in which each segment subtends 5π/6 radians, or 150 degrees. The incircle of this dodecagram demonstrates that all 14 VRPs are cocyclic, as alternatively shown by circle packings. The ratio of the incircle to the circumcircle is cos(5π/12), and the convex hull is precisely the regular dodecagons in the k-uniform tiling. The equilateral triangle, square, regular hexagon, and regular dodecagon are shown above with the VRPs. In fact, any group of planigons can be constructed from the edges of a suitable polygram, whose segments intersect at angles determined by the numbers of sides of the regular polygons adjacent to each involved vertex figure. This is because the circumradius of any regular polygon (from the vertex to the centroid) is the same as the distance from the center of the polygram to the line segments intersecting at the corresponding angle, since all polygrams admit incircles of inradii tangent to all their sides.
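This gives a quick numeric sanity check: a planigon corner placed at the centroid of a regular n-gon has interior angle 360/n degrees, since n planigon corners meet around that centroid, so the corner angles of the planigon dual to a vertex configuration (n1.n2...nk) must sum to 180(k − 2) degrees, the interior-angle total of a convex k-gon. A minimal Python sketch of the check (an illustration, not part of the source article):

```python
# Corner angles of the planigon dual to a given vertex configuration.
# A corner at the centroid of a regular n-gon has angle 360/n degrees,
# and the k corner angles of a convex k-gon must total 180*(k - 2).

def planigon_angles(config):
    """config: polygon side counts around a vertex, e.g. [4, 8, 8] for V4.8.8."""
    angles = [360.0 / n for n in config]
    k = len(config)
    assert abs(sum(angles) - 180.0 * (k - 2)) < 1e-9, "not a valid planigon"
    return angles

for config in ([4, 8, 8], [3, 12, 12], [3, 4, 6, 4], [3, 3, 3, 3, 3, 3]):
    print(config, planigon_angles(config))
# [4, 8, 8] gives [90.0, 45.0, 45.0]: the isosceles right triangle V4.8.8
```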
Regular Vertices In Tilings and Patterns, Grünbaum also constructed the Laves tilings using monohedral tiles with regular vertices. A vertex is regular if all angles emanating from it are equal. In other words: all vertices are regular, and all Laves planigons are congruent. In this way, all Laves tilings are unique except for the square tiling (1 degree of freedom), the barn pentagonal tiling (1 degree of freedom), and the hexagonal tiling (2 degrees of freedom): When applied to higher dual co-uniform tilings, all dual coregular planigons can be distorted except for the triangles (AAA similarity), with examples below: Derivation of all possible planigons For edge-to-edge Euclidean tilings, the interior angles of the convex polygons meeting at a vertex must add to 360 degrees. A regular n-gon has internal angle 180(n − 2)/n degrees. There are seventeen combinations of regular polygons whose internal angles add up to 360 degrees, each being referred to as a species of vertex; in four cases there are two distinct cyclic orders of the polygons, yielding twenty-one types of vertex. In fact, with the vertex (interior) angles written in degrees, we can find all combinations of admissible corner angles according to the following rules: every vertex has at least degree 3 (a degree-2 vertex must have two straight angles or one reflex angle); if the vertex has degree d, the smallest d − 1 polygon vertex angles must sum to over 180° (since every convex interior angle is under 180°); and the vertex angles must add to 360° and must be angles of regular polygons with a positive integer number of sides (of the sequence 60°, 90°, 108°, 120°, ..., 180(n − 2)/n°, ...). Using the rules generates the list below; a brute-force enumeration implementing these rules is sketched at the end of this passage. *Such vertices cannot coexist with any other vertex types. The solution to Challenge Problem 9.46 of Geometry (Rusczyk) is in the Degree 3 Vertex column above. (A triangle with a hendecagon (11-gon) yields a 13.2-gon, a square with a heptagon (7-gon) yields a 9.3333-gon, and a pentagon with a hexagon yields a 7.5-gon.) Hence there are 21 combinations of regular polygons which meet at a vertex. Planigons in the plane Only eleven of these angle combinations can occur in a Laves tiling of planigons. In particular, if three polygons meet at a vertex and one has an odd number of sides, the other two polygons must be the same. If they were not, they would have to alternate around the first polygon, which is impossible if its number of sides is odd. By that restriction these six cannot appear in any tiling of regular polygons: On the other hand, these four can be used in k-dual-uniform tilings: Finally, assuming unit side length, all regular polygons and usable planigons have side lengths and areas as shown below in the table: Number of Dual Uniform Tilings Every dual uniform tiling is in a 1:1 correspondence with the corresponding uniform tiling, by construction of the planigons above and superimposition. Such periodic tilings may be classified by the number of orbits of vertices, edges and tiles. If there are k orbits of planigons, a tiling is known as k-dual-uniform or k-isohedral; if there are t orbits of dual vertices, as t-isogonal; if there are e orbits of edges, as e-isotoxal. k-dual-uniform tilings with the same vertex faces can be further identified by their wallpaper group symmetry, which is identical to that of the corresponding k-uniform tiling. 1-dual-uniform tilings include the 3 regular tilings and the 8 Laves tilings with 2 or more types of regular-degree vertices. There are 20 2-dual-uniform tilings, 61 3-dual-uniform tilings, 151 4-dual-uniform tilings, 332 5-dual-uniform tilings and 673 6-dual-uniform tilings.
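The three rules in the derivation above translate directly into a short exhaustive search. The following Python sketch is illustrative (the upper bound of 42 sides is justified by the extreme 3.7.42 species); it recovers the seventeen species of vertex as multisets of regular polygons whose interior angles sum exactly to 360 degrees:

```python
from fractions import Fraction

def interior(n):
    """Exact interior angle, in degrees, of a regular n-gon."""
    return Fraction(180 * (n - 2), n)

def search(n_min, remaining, partial, out):
    # Extend the multiset with polygons of n >= n_min sides (non-decreasing order).
    for n in range(n_min, 43):            # the 42-gon (species 3.7.42) is the largest possible
        angle = interior(n)
        if angle > remaining:
            break                         # interior angles only grow with n
        if angle == remaining and len(partial) >= 2:
            out.append(partial + [n])     # completed a vertex of degree >= 3
        elif remaining - angle >= interior(3):
            search(n, remaining - angle, partial + [n], out)

species = []
search(3, Fraction(360), [], species)
print(len(species))                       # 17 species, as stated above
for s in species:
    print(s)
```

Counting the distinct cyclic orders of these seventeen multisets then yields the twenty-one types of vertex mentioned in the derivation.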
Each can be grouped by the number m of distinct vertex figures, which are also called m-Archimedean tilings. Finally, if the number of types of planigons is the same as the uniformity (m = k below), then the tiling is said to be dual Krotenheerdt. In general, the uniformity is greater than or equal to the number of types of planigons (k ≥ m), as different types of planigons necessarily have different orbits, but not vice versa. Setting m = n = k, there are 11 such dual tilings for n = 1; 20 such dual tilings for n = 2; 39 such dual tilings for n = 3; 33 such dual tilings for n = 4; 15 such dual tilings for n = 5; 10 such dual tilings for n = 6; and 7 such dual tilings for n = 7. Regular and Laves tilings The 3 regular and 8 semiregular Laves tilings are shown, with planigons colored according to area as in the construction: Higher Dual Uniform Tilings Insets of Dual Planigons into Higher Degree Vertices A degree-six vertex can be replaced by a center regular hexagon and six edges emanating thereof; a degree-twelve vertex can be replaced by six deltoids (a center deltoidal hexagon) and twelve edges emanating thereof; a degree-twelve vertex can also be replaced by six Cairo pentagons, a center hexagon, and twelve edges emanating thereof (by dissecting the degree-6 vertex in the center of the previous example). This is done above for the dual of the 3-4-6-12 tiling. The corresponding uniform process is dissection, and is shown here. 2-Dual-Uniform There are 20 tilings made from 2 types of planigons, the duals of the 2-uniform tilings (Krotenheerdt duals): 3-Dual-Uniform There are 39 tilings made from 3 types of planigons (Krotenheerdt duals): 4-Dual-Uniform There are 33 tilings made from 4 types of planigons (Krotenheerdt duals): 5-Dual-Uniform There are 15 5-uniform dual tilings with 5 unique planigons: Krotenheerdt duals with six planigons There are 10 6-uniform dual tilings with 6 unique planigons: Krotenheerdt duals with seven planigons There are 7 7-uniform dual tilings with 7 unique planigons: The last two dual uniform-7 tilings have the same vertex types, even though they look nothing alike! From n = 8 onward, there are no uniform-n tilings with n vertex types, and no uniform-n duals with n distinct (semi)planigons. Fractalizing Dual k-Uniform Tilings There are many ways of generating new k-dual-uniform tilings from other k-uniform tilings; three ways, each scaling the tiling by a fixed factor, are shown below: Large Fractalization To enlarge the planigons V3².4.12 and V3.4.3.12 using the truncated trihexagonal method, a suitably larger scale factor must be applied: Big Fractalization In two 9-uniform tilings below, a big fractalization is achieved by a scale factor of 3 in all planigons. In the case of s,C,B,H its own planigon is in the exact center: The two 9-uniform tilings are shown below, fractalizations of the demiregulars DC and DB, and a general example on S2TC: Miscellaneous Centroid-Centroid Construction Dual co-uniform tilings (red) along with the originals (blue) of selected tilings, generated by centroid-edge midpoint construction with polygon-centroid-vertex detection, rounding the angle of each co-edge to the nearest 15 degrees. Since the unit size of the tilings varies from 15 to 18 pixels and every regular polygon slightly differs, there is some overlap or breaking of dual edges (an 18-pixel size generator incorrectly generates co-edges from five 15-pixel size tilings, classifying some squares as triangles). Other Edge-Edge Construction Comparisons Other edge-edge construction comparisons.
Affine Linear Expansions Below are affine linear expansions of other uniform tilings, from the original to the dual and back: The first 12-uniform tiling contains all planigons with three types of vertices, and the second 12-uniform tiling contains all types of edges. Optimized Tilings If a k-m tiling means a k-dual-uniform, m-Catalaves tiling, then there exist an 11-9 tiling, a 13-10 tiling, a 15-11 tiling, a 19-12 tiling, two 22-13 tilings, and a 24-14 tiling. There also exist a 13-8 slab tiling and a 14-10 non-clock tiling. Finally, there are 7-5 tilings using all clock planigons: Circle Packing Each uniform tiling corresponds to a circle packing, in which circles of diameter 1 are placed at all vertex points, corresponding to the planigons. Below are the circle packings of the Optimized Tilings and the all-edge tiling: 5-dual-uniform 4-Catalaves tilings A slideshow of all 94 5-dual-uniform tilings with 4 distinct planigons. Clock Tilings All tilings with regular dodecagons are shown below, alternating between uniform and dual co-uniform: 65 k-Uniform Tilings A comparison of 65 k-uniform tilings and their dual uniform tilings. The two lower rows coincide and are to scale: References Planigon tessellation cellular automata, Alexander Korobov, 30 September 1999 B. N. Delone, “Theory of planigons”, Izv. Akad. Nauk SSSR Ser. Mat., 23:3 (1959), 365–386 Types of polygons Euclidean tilings
Planigon
[ "Physics", "Mathematics" ]
3,001
[ "Tessellation", "Euclidean plane geometry", "Euclidean tilings", "Planes (geometry)", "Symmetry" ]
53,413,559
https://en.wikipedia.org/wiki/Bosanquet%20equation
In the theory of capillarity, the Bosanquet equation is an improved modification of the simpler Lucas–Washburn theory for the motion of a liquid in a thin capillary tube or a porous material that can be approximated as a large collection of capillaries. In the Lucas–Washburn model, the inertia of the fluid is ignored, leading to the assumption that flow is continuous under constant viscous laminar Poiseuille flow conditions, without considering the effects of mass transport undergoing acceleration at the start of flow and at points of changing internal capillary geometry. The Bosanquet equation is a differential equation that is second-order in the time derivative, similar to Newton's second law, and therefore takes into account the fluid inertia. Equations of motion, like Washburn's equation, that attempt to explain a velocity (instead of acceleration) as proportional to a driving force are often described with the term Aristotelian mechanics. Definition Using μ for the dynamic viscosity, θ for the liquid-solid contact angle, γ for the surface tension, ρ for the fluid density, t for time, r for the cross-sectional radius of the capillary and x for the distance the fluid has advanced, the Bosanquet equation of motion is d/dt(x·dx/dt) + (8μ/(ρr²))·x·dx/dt = 2γcosθ/(ρr), assuming that the motion is completely driven by surface tension, with no applied pressure at either end of the capillary tube. Solution The solution of the Bosanquet equation can be split into two timescales. Integrating once with x(0) = 0 gives the form x²(t) = (2a/b)·[t − (1/b)(1 − e^(−bt))], where a = 2γcosθ/(ρr) and b = 8μ/(ρr²). In the limit of time approaching 0, this reduces to x(t) ≈ √a·t, a meniscus front position proportional to time rather than to the Lucas–Washburn square root of time, and the independence from viscosity demonstrates plug flow. As time increases after the initial period of acceleration, the solution decays to the familiar Lucas–Washburn form, x²(t) ≈ (2a/b)t, dependent on viscosity and the square root of time. See also Hagen–Poiseuille equation Washburn's equation References Equations of fluid dynamics Porous media
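To make the two timescales concrete, the equation can also be integrated numerically. The sketch below uses assumed, water-like parameter values chosen purely for illustration; it starts from the inertial (plug flow) initial condition and shows the late-time convergence to the Lucas–Washburn behaviour:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative values: water in a capillary of radius 100 um (SI units)
mu, gamma, rho, theta, r = 1.0e-3, 0.072, 1000.0, 0.0, 100e-6
a = 2 * gamma * np.cos(theta) / (rho * r)  # driving term, a = 2*gamma*cos(theta)/(rho*r)
b = 8 * mu / (rho * r**2)                  # viscous rate,  b = 8*mu/(rho*r^2)

# Bosanquet: d(x*v)/dt + b*x*v = a, i.e. x*dv/dt = a - v**2 - b*x*v
def rhs(t, y):
    x, v = y
    return [v, (a - v**2 - b * x * v) / max(x, 1e-12)]

t = np.linspace(1e-6, 0.05, 200)
sol = solve_ivp(rhs, (t[0], t[-1]), [1e-9, np.sqrt(a)], t_eval=t, rtol=1e-8)

x_inertial = np.sqrt(a) * t                # short-time plug flow: x proportional to t
x_washburn = np.sqrt(2 * a * t / b)        # long-time Lucas-Washburn: x proportional to sqrt(t)
print(sol.y[0][-1], x_washburn[-1])        # nearly equal: the Washburn regime is reached
```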
Bosanquet equation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
430
[ "Equations of fluid dynamics", "Equations of physics", "Porous media", "Materials science", "Fluid dynamics" ]
54,686,533
https://en.wikipedia.org/wiki/Cross%20slip
In materials science, cross slip is the process by which a screw dislocation moves from one slip plane to another due to local stresses. It allows non-planar movement of screw dislocations. Non-planar movement of edge dislocations is achieved through climb. Since the Burgers vector of a perfect screw dislocation is parallel to the dislocation line, it has an infinite number of possible slip planes (planes containing the dislocation line and the Burgers vector), unlike an edge or mixed dislocation, which has a unique slip plane. Therefore, a screw dislocation can glide or slip along any plane that contains its Burgers vector. During cross slip, the screw dislocation switches from gliding along one slip plane to gliding along a different slip plane, called the cross-slip plane. The cross slip of moving dislocations can be seen by transmission electron microscopy. Mechanisms The possible cross-slip planes are determined by the crystal system. In body centered cubic (BCC) metals, a screw dislocation with b = 0.5<111> can glide on {110} planes or {211} planes. In face centered cubic (FCC) metals, screw dislocations can cross-slip from one {111} type plane to another. However, in FCC metals, pure screw dislocations dissociate into two mixed partial dislocations on a {111} plane, and the extended screw dislocation can only glide on the plane containing the two partial dislocations. The Friedel-Escaig mechanism and the Fleischer mechanism have been proposed to explain the cross-slip of partial dislocations in FCC metals. In the Friedel-Escaig mechanism, the two partial dislocations constrict to a point, forming a perfect screw dislocation on their original glide plane, and then re-dissociate on the cross-slip plane creating two different partial dislocations. Shear stresses may then drive the dislocation to extend and move onto the cross-slip plane. Atomic simulations have confirmed the Friedel-Escaig mechanism. Alternatively, in the Fleischer mechanism, one partial dislocation is emitted onto the cross-slip plane, and then the two partial dislocations constrict on the cross-slip plane, creating a stair-rod dislocation. Then the other partial dislocation combines with the stair-rod dislocation so that both partial dislocations are on the cross-slip plane. Since the stair rod and the new partial dislocations are high energy, this mechanism would require very high stresses. Role in plasticity Cross-slip is important to plasticity, since it allows additional slip planes to become active and allows screw dislocations to bypass obstacles. Screw dislocations can move around obstacles in their primary slip plane (the plane with the highest resolved shear stress). A screw dislocation may glide onto a different slip plane until it has passed the obstacle, and then can return to the primary slip plane. Screw dislocations can then avoid obstacles through conservative motion (without requiring atomic diffusion), unlike edge dislocations which must climb to move around obstacles. Therefore, some methods of increasing the yield stress of a material, such as solid solution strengthening, are less effective because, due to cross slip, they do not block the motion of screw dislocations. At high strain rates (during stage II work hardening), discrete dislocation dynamics (DD) simulations have suggested that cross-slip promotes the generation of dislocations and increases dislocation velocity in a way that is dependent on strain rate, which has the effect of decreasing flow stress and work hardening.
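Because the cross-slip plane must contain the Burgers vector, the candidate planes for a BCC screw dislocation can be enumerated and ranked by resolved shear stress. The Python sketch below is illustrative only; the [001] tensile axis and the particular plane list are assumptions, not taken from the article. It verifies that each listed {110} and {211} plane contains the 0.5<111> slip direction and computes the Schmid factor for each:

```python
import numpy as np

b = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # [111] slip direction, unit vector
load = np.array([0.0, 0.0, 1.0])             # assumed uniaxial tensile axis [001]

# {110} and {211} plane normals that contain the [111] Burgers vector
planes = [(1, -1, 0), (0, 1, -1), (1, 0, -1),
          (2, -1, -1), (-1, 2, -1), (-1, -1, 2)]

for hkl in planes:
    n = np.array(hkl, dtype=float)
    n /= np.linalg.norm(n)
    assert abs(n @ b) < 1e-9                 # the plane contains b, as cross slip requires
    schmid = abs((load @ n) * (load @ b))    # m = cos(phi) * cos(lambda)
    print(hkl, round(schmid, 3))             # higher m = higher resolved shear stress
```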
Cross slip also plays an important role in dynamic recovery (stage III work hardening) by promoting annihilation of screw dislocations and then movement of screw dislocations into a lower energy arrangement. See also Slip Plastic deformation Miller indices References Materials science Crystallographic defects
Cross slip
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
837
[ "Applied and interdisciplinary physics", "Crystallographic defects", "Materials science", "Crystallography", "nan", "Materials degradation" ]
76,204,084
https://en.wikipedia.org/wiki/Californium%28III%29%20nitrate
Californium(III) nitrate is an inorganic compound of californium with the formula Cf(NO3)3. It can be used as a precursor to other californium compounds. References Californium compounds Nitrates
Californium(III) nitrate
[ "Chemistry" ]
49
[ "Inorganic compounds", "Nitrates", "Salts", "Oxidizing agents", "Inorganic compound stubs" ]
76,206,157
https://en.wikipedia.org/wiki/Birds%20of%20the%20World
Birds of the World (BoW) is an online database of ornithological data adapted from the Handbook of the Birds of the World and contemporary reference works, including Birds of North America, Neotropical Birds Online, and Bird Families of the World. The database is published and maintained by the Cornell Lab of Ornithology and collects data on bird observations through integration with eBird. The database requires a subscription to access the majority of its entries, but offers institutional access to many libraries and birding-related organizations, participating in the National Information Standards Organization's Shared E-Resource Understanding practice as a publisher. The database is frequently cited in regional checklists and distribution map studies, either as a point of comparison or a source of data. History Birds of the World was originally developed in the early 1990s through collaboration between the American Ornithologists' Union, the Cornell Lab of Ornithology, and the Academy of Natural Sciences of Drexel University. The goal of the project was to produce an illustrated guide to all of the birds of the world; its first iteration was in the 17-volume Handbook of the Birds of the World, published by Lynx Edicions over the course of 22 years, from 1992 to 2014. After the Cornell Lab of Ornithology acquired the rights to the contents of the Handbook of the Birds of the World, the online database was launched in March of 2020. A significant portion of the audiovisual content available in Birds of the World is collected through citizen science data collection as provided by eBird, but content is also included from the Macaulay Library, as it was gathered in the Internet Bird Collection by Josep del Hoyo, the initial founder of Lynx Edicions, and his colleagues in 2002. Description Birds of the World is a subscription-access database that aims to describe comprehensive life history information on birds. This includes: Species accounts Details on taxonomy, habitat, breeding, diet, and behaviors Family accounts Hybrid and subspecies descriptions and photos Migration and range maps IUCN Conservation Status Literature cited Common names in multiple languages Free resources Birds of the World provides various resources other than those provided with an institutional or individual subscription to the service. James A. Jobling's Dictionary of Scientific Bird Names, which would be published by Lynx Edicions as the HBW Alive Key to Scientific Names In Ornithology, is accessible as a searchable database on the Birds of the World website, allowing for free access to the definitions of the various scientific names of birds. The HBW Alive Key has been the underpinning for developments between the Cornell Lab and BirdLife International to produce a unified checklist of the birds of the world, and is currently used to form the list of bird species on the IUCN Red List. References External links Birds of the World website The Key to Scientific Names on Birds of the World 2020 introductions Biodiversity databases Birdwatching Citizen science Cornell University Ornithological citizen science
Birds of the World
[ "Biology", "Environmental_science" ]
595
[ "Biodiversity databases", "Environmental science databases", "Biodiversity" ]
76,212,886
https://en.wikipedia.org/wiki/Acromelic%20acid%20A
Acromelic acid A (ACRO A) is a toxic compound belonging to a group known as kainoids, characterized by a structure bearing a pyrrolidine dicarboxylic acid, represented by kainic acid. Acromelic acid A has the molecular formula . It has been isolated from a Japanese poisonous mushroom, Clitocybe acromelalga. Acromelic acid is responsible for the poisonous aspects of the mushroom because of its potent neuroexcitatory and neurotoxic properties. Ingestion of Clitocybe acromelalga causes allodynia which can continue for over a month. The systemic administration of acromelic acid A in rats results in selective loss of interneurons in the lower spinal cord, without causing neuronal damage in the hippocampus and other regions. Structure and isoforms Acromelic acids represent a group of compounds found in various forms. Five distinct molecules have been identified, including two isoforms designated acromelic acids A and B. Acromelic acids C–E are recognized toxic analogs. Acromelic acid A, characterized by its pyrrolidine carboxylic acid (L-proline), tricarboxylic acid, and pyridone composition, resembles kainic acid in its chemical makeup. Acromelic acid A was the first to be isolated from Clitocybe acromelalga, leading to extensive investigation of this type. Comparative studies reveal acromelic acid B, an isoform of A, to exhibit reduced allodynia effects in mouse models. Conversely, limited information exists regarding ACROs C, D, and E, though their analogous structure suggests similar functionalities to varying extents. Further research into these compounds is needed, but not without challenges: the synthesis of acromelic acid A presents difficulties for the large-scale production required for comprehensive biological studies. Synthesis Acromelic acid A can be produced through the synthesis of L-alpha-kainic acid. However, this process involves multiple complicated steps. One route, as outlined by Katsuhiro Konno et al. (1986), begins with the successive protection of the imino and carboxyl groups of L-alpha-kainic acid, followed by a reduction and silylation. Subsequently, the oxidation of the methyl group via epoxidation occurs. To form the pyridine nucleus, 1,4-addition by thiophenol, a Horner–Emmons reaction, and a Pummerer reaction are necessary. Following several rearrangements, an unstable compound is obtained, which promptly cyclizes. Treatment with various compounds transforms this compound into a pyridone carboxylic acid derivative. The final steps involve the deprotection of various groups, resulting in the production of acromelic acid A. The yield of this elaborate synthesis is notably low, as expected given the numerous synthetic steps, which in turn also hinders large-scale biological studies of acromelic acid A. Alternatively, another synthesis route involves the condensation of L-glutamic acid with pyridones. This method, too, entails numerous steps, leading to a yield of only 9%. The construction of the pyridone ring is achieved from a catechol through an oxidative cleavage recyclization strategy, akin to the previously described method. Researchers attempted a similar approach to synthesize acromelic acid B, which proved challenging but feasible. In a more recent development, a 13-step synthesis with a yield of 36% has been described. Acromelic acids A and B were synthesized from 2,6-dichloropyridine, with the pyrrolidine ring constructed via Ni-catalyzed asymmetric conjugate addition, followed by intramolecular reductive amination.
This represents an advancement over previous synthesis methods, offering a higher yield and fewer steps. Mechanism of action Following absorption, acromelic acid A induces abnormal behavioral symptoms in rats, and tactile allodynia in mice. Administration of this toxin causes selective degeneration specifically in lower spinal interneurons. In the late 20th century, acromelic acid A was initially believed to act as a glutamate receptor agonist, specifically targeting AMPA receptors. This would explain the observed increase in intracellular Ca2+ concentration after administration. However, over the years, a new type of non-NMDA receptor came to be thought of as the target of acromelic acid A, as the observed effects could not be completely explained by AMPA binding. This idea was established through comparative studies with kainic acid, another glutamate receptor agonist, which revealed remarkable differences in behavioral and pathological effects. Therefore, the proposed mechanism suggests binding of acromelic acid A to a (yet unidentified) non-NMDA receptor. Binding to the target receptor leads to depolarization of the postsynaptic cell. This depolarization subsequently activates NMDA receptors, which in turn become permeable to Ca2+. The increase in intracellular Ca2+ concentration triggers a cascade of downstream signaling events, including activation of various intracellular enzymes. Consequently, neuronal damage and sustained neuronal excitability occur, particularly in spinal cord neurons. Although researchers know the resulting pathological symptoms and some molecular conditions after administration of acromelic acid A, they have still not been able to unravel the exact mechanism of action of this neurotoxic compound. Therefore, further investigation into the mechanism of action of acromelic acid A is required to better understand its toxic effects. Toxicity Research has revealed that the lethal dose (LD50) ranges between 5 and 5.5 mg/kg in rats when acromelic acid A is administered intravenously. Effects on rats Multiple studies were performed in which rats were injected with acromelic acid A intravenously. Kwak et al. (1991) conducted experiments involving the injection of both 2 mg/kg and the lethal dose (5 mg/kg) of acromelic acid A in rats. The results demonstrated a series of behavioral changes. 30 minutes after injection: All rats began to bite and moved their tails like snails. Their hindlimbs became gradually extended and their backs slowly bent forwards, which led to occasional falls. Further, the rats suffered attacks of intermittent hindlimb cramp, which became tonic over time. 1 hour after injection: Rats were seized by tonic-clonic convulsions. The rats which received the lethal dose died during these seizures. The surviving rats developed complete flaccid paraplegia which carried on for 2 hours. Days after injection: Rats developed paraparesis with severe spasticity. They were able to move using their forelimbs. Effects on mice Intrathecal administration of acromelic acid A provoked tactile allodynia in mice. At an extremely low dose of 1 fg/mouse, allodynia was already provoked and persisted for over a month. Furthermore, at a higher dose of 500 ng/kg, injection of acromelic acid A induced strong spontaneous agitation, scratching, jumping and tonic convulsion, and caused death within 15 min. Effects on humans The effects of acromelic acid A on humans have not been studied yet.
However, after accidental ingestion of Clitocybe acromelalga, violent pain and marked reddish edema in the hands and feet were observed after several days and continued for a month. There is, however, no direct evidence that these symptoms were caused by acromelic acid A; findings from experiments in rats and mice suggest a potential association between acromelic acid A and the observed symptoms. References Pyrrolidines Carboxylic acids Pyridines
Acromelic acid A
[ "Chemistry" ]
1,645
[ "Carboxylic acids", "Functional groups" ]
76,215,026
https://en.wikipedia.org/wiki/Einstein%E2%80%93Oppenheimer%20relationship
Albert Einstein and J. Robert Oppenheimer were twentieth-century physicists who made pioneering contributions to physics. From 1947 to 1955 they were colleagues at the Institute for Advanced Study (IAS). Belonging to different generations, Einstein and Oppenheimer became representative figures for the relationship between "science and power", as well as for "contemplation and utility" in science. Overview In 1919, after the gravitational bending of light from faraway stars passing near the sun, predicted earlier by Einstein's theory of gravity, was verified observationally, Albert Einstein was acclaimed as "the most revolutionary innovator in physics" since Isaac Newton. J. Robert Oppenheimer, called the American physics community's "boy-wonder" in the 1930s, became a popular figure from 1945 onwards after overseeing the first ever successful test of nuclear weapons. Both Einstein and Oppenheimer were born into nonobservant Jewish families. Belonging to different generations, with the full development of quantum mechanics by 1925 marking the delineation, Einstein (1879–1955) and Oppenheimer (1904–1967) represented the shift toward being either a theoretical physicist or an experimental physicist; from the mid-1920s, being both became rare due to the division of labor. Einstein and Oppenheimer, who took different approaches in their achievements, became emblematic of the relationship between "science and power", as well as of "contemplation and utility" in science. When the first ever nuclear weapons were successfully tested in 1945, Oppenheimer was acknowledged for bringing forth to the world the astounding "instrumental power of science". Einstein, after facing criticism for having "participated" in the creation of the atomic bomb, answered in 1950 that, when he contemplated the relationship between mass and energy in 1905, he had no idea that it could be used for military purposes in any way, and maintained that he had always been a "convinced pacifist". While Einstein engaged in the pursuit of what he called "Unity" in the complex phenomena of the Universe, Oppenheimer engaged in the establishment of a "unified" framework at the Institute for Advanced Study, which would comprise all the academic disciplines of knowledge that can be pursued. Einstein was markedly individualistic in his approach to physics. He had only a few students, and was disinterested if not adversarial in his relation with formal institutions and politics. Oppenheimer was more collaborative and embraced collective scientific work. He was a more successful teacher and more immersed in political and institutional realms. Oppenheimer emerged as a powerful political 'insider', a role that Einstein never embraced and one that led Einstein to wonder why Oppenheimer desired such power. Despite their differences in stances, both Oppenheimer and Einstein were regarded as "deeply suspicious" figures by the authorities, specifically by J. Edgar Hoover. With the advent of modern physics in the twentieth century changing the world radically, both Einstein and Oppenheimer grappled with metaphysics that could provide an ethical framework for human actions. Einstein turned to the philosophical works of Spinoza and Schopenhauer, along with an attachment to the European enlightenment heritage.
Oppenheimer became engrossed in Eastern philosophy, with a particular interest in the Bhagavad Gita, and an affinity with the American philosophical tradition of pragmatism. Association with each other Oppenheimer met Einstein for the first time in January 1932, when the latter visited Caltech as part of his round-the-world trip during 1931–32. In 1939, Einstein published a paper that argued against the existence of black holes, using his own general theory of relativity to arrive at this conclusion. A few months after Einstein rejected the existence of black holes, Oppenheimer and his student Hartland Snyder published a paper that revealed, for the first time, using Einstein's general theory of relativity, how black holes would form. Though Oppenheimer and Einstein later met, there is no record of them having discussed black holes. When, in 1939, the general public became aware of the Einstein–Szilard letter that urged the US government to initiate the Manhattan Project for the development of nuclear weapons, Einstein was credited with foreseeing the destructive power of the atom through his mass–energy equivalence formula. Einstein played an active role in the development of US nuclear weapons by being an advisor to the research that ensued; this was in contrast to the common belief that his role was limited to signing a letter. During this time, the public linked Einstein with Oppenheimer, who was then the scientific director of the Manhattan Project. In 1945, when Oppenheimer and Pauli were being considered for a professorial position at an institute, Einstein and Hermann Weyl wrote a letter that recommended Pauli over Oppenheimer. After the end of World War II, both Einstein and Oppenheimer lived and worked in Princeton at the Institute for Advanced Study; Einstein was a professor there, while Oppenheimer was its director and a professor of physics from 1947 to 1966. They had their offices down the hall from each other. Einstein and Oppenheimer became colleagues and conversed with each other occasionally. They saw each other socially, with Einstein once attending dinner at the Oppenheimers' in 1948. At the Institute, Oppenheimer considered general relativity to be an area of physics that would not be of much benefit to the efforts of physicists, partly due to a lack of observational data and due to conceptual and technical difficulties. He actively prohibited people from taking up these problems at the Institute. Furthermore, he forbade Institute members from having contact with Einstein. For one of Einstein's birthdays, Oppenheimer gifted him a new FM radio and had an antenna installed on his house so that he could listen to New York Philharmonic concerts from Manhattan, about 50 miles away from Princeton. Oppenheimer did not provide an article to the July 1949 issue of Reviews of Modern Physics, which was dedicated to the seventieth birthday of Einstein. In October 1954, when an honorary doctorate was to be conferred on Einstein at Princeton, Oppenheimer made himself unavailable at the last moment (despite being "begged" to attend the event); he informed the convocation committee that he had to be out of town on the day of the convocation.
Earlier, in May 1954, when the Emergency Civil Liberties Committee decided to honour Einstein on his seventy-fifth birthday, the American Committee for Cultural Freedom, concerned about the Communist ties of the honouring committee, requested Oppenheimer to stop Einstein from attending the event, lest it cause people to associate Judaism with Communism and to think of scientists as naive about politics. Oppenheimer, who was then busy with his security clearance hearings, persuaded Einstein to dissociate from the honouring committee. Views about each other In January 1935, Oppenheimer visited Princeton University as a visiting faculty member on an invitation. After staying there and interacting with Einstein, Oppenheimer wrote to his brother Frank Oppenheimer in a letter thus: "Princeton is a madhouse: its solipsistic luminaries shining in separate & helpless desolation. Einstein is completely cuckoo. ... I could be of absolutely no use at such a place, but it took a lot of conversation & arm waving to get Weyl to take a no". Oppenheimer's initial harsh assessment was attributed to the fact that he found Einstein highly skeptical about quantum field theory. Einstein never accepted the quantum theory; in 1945 he said: "The quantum theory is without a doubt a useful theory, but it does not reach to the bottom of things. I never believed that it constitutes the true conception of nature". Oppenheimer also noted that Einstein became very much a loner in his working style. After the death of Einstein in April 1955, in a public eulogy Oppenheimer wrote that "physicists lost their greatest colleague". He noted that of all the great accomplishments in physics, the theory of general relativity is the work of one man, and it would have remained undiscovered for a long time had it not been for the work of Einstein. He affirmed that the public image of Einstein as a simple and kindhearted man "with warm humor,... wholly without pretense" was indeed right, and remembered what Einstein once said to him before his death: "You know, when it once has been given to a man to do something sensible, afterwards life is a little strange." Oppenheimer wrote that it was given to Einstein to do "something reasonable". He stated that the general theory of relativity is "perhaps the single greatest theoretical synthesis in the whole of science". Oppenheimer wrote that, more than anything, the one special quality that made Einstein unique was "his faith that there exists in the natural world an order and a harmony and that this may be apprehended by the mind of man", and that Einstein had given not just evidence of that faith, but also its heritage. Oppenheimer was less gracious about Einstein in private. He said Einstein had no interest in, or did not understand, modern physics and wasted his time in trying to unify gravity and electromagnetism. He stated that Einstein's methods in his final years had in "a certain sense failed him". Einstein in his last twenty-five years of life focused solely on working out the unified field theory, without considering its reliability or questioning his own approach. This led him to lose connection with the wider physics community. Einstein's urge to find unity had been constant throughout his life. In 1900, while still a student at ETH, he wrote in a letter to his friend Marcel Grossmann that, "It is a glorious feeling to recognize the unity of a complex of phenomena, which appear to direct sense perceptions as quite distinct things."
In 1932, when questioned about the goal of his work, Einstein replied, "The real goal of my research has always been the simplification and unification of the system of theoretical physics. I attained this goal satisfactorily for macroscopic phenomena, but not for the phenomena of quanta and atomic structure." And he added, "I believe that despite considerable success, the modern quantum theory is still far from a satisfactory solution of the latter group of problems." Einstein was never convinced by quantum field theory, which Oppenheimer advocated. Oppenheimer noted that Einstein tried in vain to prove the existence of inconsistencies in quantum field theory, but there were none. In the 1960s Oppenheimer became skeptical of Einstein's general theory of relativity as the correct theory of gravitation; he thought the Brans–Dicke theory to be a better theory. Oppenheimer also complained that Einstein did not leave any papers to the Institute (IAS) in his will, despite the support he had received from it for twenty-five years. All of Einstein's papers went to Israel. In December 1965, Oppenheimer visited Paris on an invitation from UNESCO to speak at the tenth anniversary of Einstein's death. He spoke on the first day of the commemoration, as he had known Einstein for more than thirty years and at the IAS they "were close colleagues and something of friends". Oppenheimer made his critical views of Einstein public there. He also praised Einstein for his stand against violence and described his attitude towards humanity by the Sanskrit word "Ahimsa". The speech received considerable media attention; The New York Times reported the story under the headline "Oppenheimer View of Einstein Warm But Not Uncritical". After the speech, as part of an effort to mend any misunderstandings, in an interview with the French magazine L'Express, Oppenheimer said, "During all the end of his life, Einstein did no good. He worked all alone with an assistant who was there to correct his calculations... He turned his back on experiments, he even tried to rid himself of the facts that he himself had contributed to establish ... He wanted to realize the unity of knowledge. At all cost. In our days, this is impossible." But nevertheless, Oppenheimer said he was "convinced that still today, as in Einstein's time, a solitary researcher can effect a startling discovery. He will only need more strength of character". The interviewer concluded by asking Oppenheimer if he had any longing or nostalgia, to which he replied, "Of course, I would have liked to be the young Einstein. This goes without saying." Einstein appreciated Oppenheimer for his role in the drafting and advocacy of the Acheson–Lilienthal Report, and for his subsequent work to contain the nuclear arms race between the United States and the Soviet Union. At the IAS, Einstein acquired profound respect for Oppenheimer's administrative skills, and described him as an "unusually capable man of many-sided education". In popular culture A semifictional account of the relationship between Albert Einstein and J. Robert Oppenheimer is portrayed in the feature film Oppenheimer, directed by Christopher Nolan. Notes See also Einstein versus Oppenheimer References Citations Sources Quantum physicists Theory of relativity History of physics
Einstein–Oppenheimer relationship
[ "Physics" ]
2,684
[ "Quantum physicists", "Quantum mechanics", "Theory of relativity" ]
76,217,482
https://en.wikipedia.org/wiki/Thallide
Thallides are compounds containing anions composed of thallium. The thallium occurs as clusters of several atoms; it does not occur as a single Tl− ion in thallides. They are a subclass of trielides, which also includes gallides and indides. A more general classification is polar intermetallics, as the clusters contain delocalized multicentre bonds. Thallides were discovered by Eduard Zintl in 1932. Mixed anion compounds with thallides include halides (bromides and chlorides), oxides, and tetrelates (silicate, germanate). Production Thallide compounds can be produced by melting metals together in a tantalum crucible under an inert argon atmosphere. However, if arsenic is included in the mix, it can react with the crucible wall. A low temperature production route is to dissolve an alkali metal in liquid ammonia, and use that to reduce a thallium salt, like thallium iodide. Properties Thallide compounds are dense, opaque to X-rays, and usually metallic grey or black in appearance. Thallide clusters mostly do not follow Wade-Mingos rules or the Zintl–Klemm concept, as they have too small a negative charge. They can be called "hypoelectronic". Reactions In liquid ammonia, oxidation occurs, yielding metal amides and thallium metal. Thallides react with water and air. List References Thallium compounds Anions
Thallide
[ "Physics", "Chemistry" ]
318
[ "Ions", "Matter", "Anions" ]
61,048,240
https://en.wikipedia.org/wiki/Noa%20Marom
Noa Marom is an Israeli materials scientist and computational physicist at Carnegie Mellon University. She was awarded the International Union of Pure and Applied Physics Young Scientist Prize. Early life and education Marom studied materials engineering at Technion – Israel Institute of Technology and earned her bachelor's degree in 2003. After graduating, she worked as an application engineer in the Process and Control Division. She joined the Weizmann Institute of Science for her doctoral studies, earning a PhD under the supervision of Leeor Kronik in 2010. Marom won the Shimon Reich Memorial Prize for her PhD thesis. Her doctoral work considered the prediction of dispersion interactions and electronic structure using computational chemistry. She worked on molecules including copper phthalocyanine, azabenzenes and hexagonal boron nitride. Research and career Marom joined the University of Texas at Austin as a postdoctoral researcher in 2010. She moved to Tulane University as an assistant professor in physics in 2013. In 2016 Marom was appointed as an assistant professor at Carnegie Mellon University. She is a member of the Pittsburgh Quantum Institute. Her work considers molecular crystals that are bound by van der Waals interactions. As van der Waals interactions are weak, the molecules can adopt a range of crystal structures. These are known as polymorphs, and can be predicted using computational simulations. The chemical and physical properties of these systems are determined by their crystal structure. Marom develops genetic algorithms that predict the structure of molecular crystals using the principles of survival of the fittest. Marom's work uses density functional theory and many-body perturbation theory to study complex atomic systems. She has investigated the GW approximation for molecules. The materials investigated by Marom can be used for dye-sensitized solar cells. References Israeli women academics Israeli women scientists Technion – Israel Institute of Technology alumni Weizmann Institute of Science alumni Carnegie Mellon University faculty Computational chemists Year of birth missing (living people) Living people Recipients of the IUPAP Early Career Scientist Prize
Noa Marom
[ "Chemistry" ]
434
[ "Computational chemistry", "Theoretical chemists", "Computational chemists" ]
68,892,195
https://en.wikipedia.org/wiki/Photo%20response%20non-uniformity
Photo response non-uniformity, pixel response non-uniformity, or PRNU, is a form of fixed-pattern noise related to digital image sensors, as used in cameras and optical instruments. Both CCD and CMOS sensors are two-dimensional arrays of photosensitive cells, each broadly corresponding to an image pixel. Due to the non-uniformity of image sensors, each cell responds with a different voltage level when illuminated with a uniform light source, and this leads to luminance inaccuracy at the pixel level. High-end and metrology camera vendors tend to characterise this non-uniformity during instrument manufacture. The sensor is illuminated with a standardized light source and a two-dimensional table of correction factors is generated. This table is either carried in camera non-volatile memory and dynamically applied to the image on each capture, or ships with the camera to be applied by an external image processing and correcting pipeline. See also Color balance Color correction Flat-field correction Image sensor Digital photography Image sensors Optics
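A minimal sketch of how such a correction table might be generated and applied is shown below. It assumes idealized uniform-illumination (flat-field) frames and ignores dark current, defective pixels and nonlinearity, which real calibration pipelines must also handle:

```python
import numpy as np

def build_prnu_gain(flat_frames):
    """Per-pixel correction factors from a stack of uniform-illumination frames."""
    mean_flat = np.mean(np.stack(flat_frames), axis=0)  # average out temporal noise
    return mean_flat.mean() / mean_flat                 # normalize to the mean response

def correct(raw, gain, dark=None):
    """Apply the PRNU correction table (optionally dark-subtracted) to a capture."""
    signal = raw.astype(np.float64) - (0.0 if dark is None else dark)
    return signal * gain

# Hypothetical 4x4 sensor with ~2% pixel-to-pixel response variation
rng = np.random.default_rng(0)
true_response = 1 + 0.02 * rng.standard_normal((4, 4))
flats = [1000 * true_response + rng.normal(0, 3, (4, 4)) for _ in range(32)]
gain_table = build_prnu_gain(flats)
print(correct(500 * true_response, gain_table).round(1))  # nearly uniform output
```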
Photo response non-uniformity
[ "Physics", "Chemistry" ]
210
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
68,892,243
https://en.wikipedia.org/wiki/Phosphanide
Phosphanides are chemicals containing the [PH2]− anion, also known as the phosphino anion or phosphido ligand. The IUPAC name can also be dihydridophosphate(1−). It can occur as the phosphanyl group -PH2 in organic compounds, or as a ligand called phosphanido or dihydridophosphato(1−). A related species is the phosphanediide anion, PH2−. Phosphinidene (PH) has phosphorus in a −1 oxidation state. As a ligand, PH2 can either bond to one atom or form a μ2-bridge across two metal atoms. With transition metals and actinides, bridging is likely unless the metal atom is mostly enclosed in a ligand. In phosphanides, phosphorus is in the −3 oxidation state. When phosphanide is oxidised, the first step is phosphinite ([H2PO]−). Further oxidation yields phosphonite ([HPO2]2−) and phosphite ([PO3]3−). The study of phosphine derivatives is unpopular because they are unstable, poisonous and malodorous. Formation Alkali metal phosphanides can be made from phosphine and the metal dissolved in liquid ammonia. Sodium phosphanide can also be made from phosphine and triphenylmethyl sodium. Lithium phosphanide can be made from phosphine and butyl lithium or phenyl lithium. Another way to produce -PH2 complexes is by cleavage of a -P(SiMe3)2 compound with an alcohol, such as methanol. Yet another way is to remove a hydrogen atom from the phosphine in a phosphine complex by using a strong base. Properties When calcium phosphanide is heated, it decomposes by releasing phosphine and yielding the phosphanediide CaPH. With further heating a binary calcium phosphide is formed. Other compounds may also lose hydrogen as well as phosphine. Phosphanides can react with CCl4 to substitute Cl for H, giving a -PCl2 compound. Similarly, CBr4 can produce -PBr2. Also, AgBF4 can react to yield -PF2. Sodium phosphanide can react with ethyl alcohol in a diethyl carbonate solution to yield sodium 2-phosphaethynolate (NaOCP). Na(DME)2OCP is also formed from NaPH2 when reacted with CO in a dimethoxyethane (DME) solution under pressure. List Derivatives Some derivatives of phosphanides have also been studied in which hydrogen is substituted by another group. They include bis(trimethylsilyl)phosphanide, bis(triisopropylsilyl)phosphanide, and diphenylphosphanide. References Phosphines Ligands
Phosphanide
[ "Chemistry" ]
659
[ "Ligands", "Coordination chemistry" ]
68,901,004
https://en.wikipedia.org/wiki/Experimental%20Techniques
Experimental Techniques is an official journal of the Society for Experimental Mechanics and was established in 1975. The journal is published by Springer Nature and the editor-in-chief is Bonnie Antoun (Sandia National Laboratories). Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.167. References External links English-language journals Materials science journals Springer Science+Business Media academic journals Academic journals established in 1975 Bimonthly journals
Experimental Techniques
[ "Materials_science", "Engineering" ]
104
[ "Materials science journals", "Materials science" ]
68,901,022
https://en.wikipedia.org/wiki/Journal%20of%20Dynamic%20Behavior%20of%20Materials
The Journal of Dynamic Behavior of Materials is a quarterly peer-reviewed scientific journal published by Springer Science+Business Media on behalf of the Society for Experimental Mechanics. Jennifer L. Jordan (Los Alamos National Laboratory) has been the editor-in-chief since 2020. The journal was established in 2015 with Eric N. Brown as the inaugural editor-in-chief. Abstracting and indexing The journal is abstracted and indexed in: Astrophysics Data System Ei Compendex Emerging Sources Citation Index ProQuest databases Scopus References External links English-language journals Materials science journals Springer Science+Business Media academic journals Academic journals established in 2015 Quarterly journals Hybrid open access journals
Journal of Dynamic Behavior of Materials
[ "Materials_science", "Engineering" ]
136
[ "Materials science journals", "Materials science" ]
68,904,876
https://en.wikipedia.org/wiki/Ovarian%20culture
Ovarian culture is an in-vitro process that allows for the investigation of the development, toxicology and pathology of the ovary. This technique can also be used to study possible applications of fertility treatments, e.g. isolating oocytes from primordial ovarian follicles that could be used for fertilisation. Culture methods using mouse ovarian tissue There are several culture systems which can be employed to investigate ovarian and follicular growth and development. Whole ovarian culture The culture of intact ovaries supports the formation and development of primordial follicles. Ovaries are dissected from neonatal mouse pups and placed into ovarian culture medium containing bovine serum albumin (BSA) dissolved in α-Minimal Essential Medium (αMEM). The cultures are maintained in a 37°C, 5% CO2 incubator and the ovaries are then frozen or fixed to facilitate further study. Follicle culture Individual This method of culturing supports the growth of individual follicles from the late pre-antral to the pre-ovulatory stage. This system allows follicle growth and hormone production to be studied. The ovaries of young mice (19–23 days) are removed and halved, and follicles are identified under a microscope. Late pre-antral follicles are identified as having a diameter of 180–200 μm and containing 2–3 layers of granulosa cells. Follicles are manually dissected and then examined for suitability for culture. Follicles are chosen for culture only if they are healthy (diameter of 190 ± 10 μm; translucent; without dark atretic areas; intact basal lamina). Wells containing follicle culture medium (α-Minimal Essential Medium, recombinant human follicle stimulating hormone, ascorbic acid and adult female mouse serum) are overlaid with sterilised silicone oil, which prevents medium evaporation. A follicle is placed at the bottom of each well and maintained in a 37°C, 5% CO2 incubator, being moved into a well containing fresh medium, for up to 6 days. If growth measurements are being taken visually, the distortion due to the oil layer must be accounted for. Follicles are frozen or fixed so further analysis can be performed. Paired By culturing 2 follicles in close proximity, follicle-follicle interactions can be examined. The follicles may grow together to form a two-follicle unit. The follicles are dissected from the ovaries as above, then placed in contact with each other in pairs, in a well with follicle culture medium and sterilised silicone oil. Follicles from different genetic sources can be co-cultured so that tissue origins can be differentiated within the co-culture. The medium is replaced every 2 days and after 6 days the culture is fixed or frozen for further processing. Follicle-ovary co-culture This method allows follicle-ovary interactions to be studied. The ovaries and follicles are dissected as above and then one follicle is placed in contact with one pole of a neonatal ovary on a plate. The follicle-ovary plate is cultured in follicle culture medium at 37 °C, 5% CO2 for up to 5 days. At this point the co-culture is frozen or fixed before further processing. To facilitate differentiation between tissue origins, the ovary and the follicle should be from different genetic sources. Uses of ovarian culture techniques Toxicological studies At present, research within the field of reproductive toxicology is principally carried out in vivo; however, new culture methods have been developed with the aim of allowing ovarian follicles to be grown in vitro.
These new methods allow us to culture isolated ovarian follicles, embryos, ovaries (the whole organ or only part of the tissue), and embryonic stem cells. Ovarian cultures are useful to research as they can allow us to replicate systematic follicle development, periodical ovulation, and follicle atresia in an environment with modulated culture conditions. The ability of in vitro ovarian cultures to detect damage to the ovary and its specialised structures, the follicles and oocytes, allows for faster screening of potential developmental and/or reproductive toxicants. Therefore, ovarian culture systems have become increasingly widely used in reproductive biology and toxicology. Culture of the whole ovary or ovarian fragments allows evaluation of various parameters in a controlled way and, therefore, has the potential for more complete reproductive toxicity studies. A big advantage of ovarian culture is the ability to evaluate the effect of drugs on the pool of primordial follicles that make up the ovarian reserve. However, this strategy is restricted regarding the duration of culture time, as short periods may not be sufficient to ensure follicular development. Conversely, cells may be negatively affected by longer periods of culture. Most in vitro toxicology studies use female mouse and rat models. These species have been selected to assess the adverse effects of drugs on reproductive function and fertility due to ease of handling and small size. Additionally, these species have been well characterised anatomically, physiologically, and genetically. Their short life cycles make it convenient to assess gestation, breastfeeding, and puberty. The relevance of animal studies for toxicological risk assessment in heterogeneous human populations remains undetermined, as it is unknown if the results obtained can be extrapolated to humans. Fertility treatment The use of in vitro maturation in ovarian culture would eliminate the risk of ovarian hyperstimulation syndrome during IVF in patients with polycystic ovary syndrome (PCOS). For those without PCOS, in vitro maturation still has advantages, as the process is less intense because superovulation is not required. Principles of ovarian culture can be applied to women who are resistant to FSH or who have oestrogen-sensitive tumours. In comparison to IVF, cells used in in vitro maturation are harvested at a smaller size, immature and arrested at the Metaphase I stage of meiosis. Once in the lab they undergo maturation to Metaphase II. Fertility preservation Ovarian tissue can be harvested before ovary-damaging treatments and re-implanted at a later stage using cryopreservation. However, this method is associated with the recurrence of malignancy in those with ovarian cancer and leukaemia. In theory, ovarian tissue culture is a safer method to produce mature oocytes for fertilisation in these patients. References External links In vitro fertilisation Cell culture Reproduction Fertility medicine Environmental toxicology
Ovarian culture
[ "Biology", "Environmental_science" ]
1,411
[ "Behavior", "Toxicology", "Reproduction", "Biological interactions", "Environmental toxicology", "Model organisms", "Cell culture" ]
71,882,510
https://en.wikipedia.org/wiki/Gerard%20Kraus
Gerard Kraus (February 25, 1920 – 1990) was a Phillips Petroleum scientist known for developing testing standards for carbon black surface area. Education Kraus was born in Prague, Czechoslovakia, the son of a pathologist and professor of medicine. He came to the United States in 1940, following his graduation in 1938 from the State High School in Prague. In 1943, he completed his Bachelor of Science degree with High Honors from Southern Methodist University. He presented work entitled "Supercharging Diesels" at the ASME convention that year. In 1947, he received the doctoral degree in polymer chemistry working under W. B. Reynolds at the University of Cincinnati, under a fellowship funded by the Inland Division of General Motors Corporation. He studied adhesion of rubber-to-metal interfaces with application to the manufacture of tank track treads. Career From 1947 to 1953, Kraus was employed on the faculty at the University of Cincinnati, first as an instructor, then later as an assistant professor. He joined the Research and Development department at Phillips Petroleum Company in 1953. By 1963, he was managing a group responsible for exploratory work in carbon black, filler reinforcement, and properties of elastomers. In 1968 his title was Senior Scientist. Kraus' most cited work is an account of the swelling behavior of filler-reinforced, vulcanized rubbers. He established a relationship on the assumption that, at the filler interface, swelling is completely restricted due to adhesion. He is also known for a model of the Payne effect. Awards 1990 - Melvin Mooney Distinguished Technology Award 1996 - elected to the International Rubber Science Hall of Fame References 1920 births 1990 deaths Polymer scientists and engineers 20th-century American engineers People from Bartlesville, Oklahoma
Gerard Kraus
[ "Chemistry", "Materials_science" ]
355
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
71,886,336
https://en.wikipedia.org/wiki/Nucleon%20magnetic%20moment
The nucleon magnetic moments are the intrinsic magnetic dipole moments of the proton and neutron, symbols μp and μn. The nucleus of an atom comprises protons and neutrons, both nucleons that behave as small magnets. Their magnetic strengths are measured by their magnetic moments. The nucleons interact with normal matter through either the nuclear force or their magnetic moments, with the charged proton also interacting by the Coulomb force. The proton's magnetic moment was directly measured in 1933 by a team led by Otto Stern at the University of Hamburg. While the neutron was determined to have a magnetic moment by indirect methods in the mid-1930s, Luis Alvarez and Felix Bloch made the first accurate, direct measurement of the neutron's magnetic moment in 1940. The proton's magnetic moment is exploited to make measurements of molecules by proton nuclear magnetic resonance. The neutron's magnetic moment is exploited to probe the atomic structure of materials using scattering methods and to manipulate the properties of neutron beams in particle accelerators. The existence of the neutron's magnetic moment and the large value for the proton magnetic moment indicate that nucleons are not elementary particles. For an elementary particle to have an intrinsic magnetic moment, it must have both spin and electric charge. The nucleons have spin ħ/2, but the neutron has no net charge. Their magnetic moments were puzzling and defied a valid explanation until the quark model for hadron particles was developed in the 1960s. The nucleons are composed of three quarks, and the magnetic moments of these elementary particles combine to give the nucleons their magnetic moments. Description The CODATA recommended value for the magnetic moment of the proton is μp = 2.79284734 μN. The best available measurement for the value of the magnetic moment of the neutron is μn = −1.91304273 μN. Here, μN is the nuclear magneton, a standard unit for the magnetic moments of nuclear components, and μB is the Bohr magneton, both being physical constants. In SI units, these values are μp ≈ 1.410607×10−26 J/T and μn ≈ −9.662365×10−27 J/T. A magnetic moment is a vector quantity, and the direction of the nucleon's magnetic moment is determined by its spin. The torque on the neutron that results from an external magnetic field is towards aligning the neutron's spin vector opposite to the magnetic field vector. The nuclear magneton is the spin magnetic moment of a Dirac particle, a charged, spin-1/2 elementary particle with the proton's mass mp, in which anomalous corrections are ignored. The nuclear magneton is μN = eħ / (2mp), where e is the elementary charge and ħ is the reduced Planck constant. The magnetic moment of such a particle is parallel to its spin. Since the neutron has no charge, it should have no magnetic moment by the analogous expression. The non-zero magnetic moment of the neutron thus indicates that it is not an elementary particle. The sign of the neutron's magnetic moment is that of a negatively charged particle. Similarly, that the magnetic moment of the proton is not almost equal to 1 μN indicates that it too is not an elementary particle. Protons and neutrons are composed of quarks, and the magnetic moments of the quarks can be used to compute the magnetic moments of the nucleons. Although the nucleons interact with normal matter through magnetic forces, the magnetic interactions are many orders of magnitude weaker than the nuclear interactions. The influence of the neutron's magnetic moment is therefore only apparent for low energy, or slow, neutrons.
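As a quick numerical check of the relations above, the following Python sketch (an editorial illustration, not part of the original text; it assumes only the fundamental constants bundled with SciPy and the measured moment ratios quoted above) computes the nuclear magneton from e, ħ, and mp and converts the nucleon moments to SI units:

```python
# A minimal check of mu_N = e*hbar/(2*m_p) and the SI values quoted above.
from scipy.constants import e, hbar, proton_mass, electron_mass

mu_N = e * hbar / (2 * proton_mass)       # nuclear magneton, J/T
mu_B = e * hbar / (2 * electron_mass)     # Bohr magneton, J/T

mu_p = 2.79284734 * mu_N                  # proton moment (CODATA ratio)
mu_n = -1.91304273 * mu_N                 # neutron moment (measured ratio)

print(f"mu_N = {mu_N:.6e} J/T")           # ~5.0508e-27 J/T
print(f"mu_p = {mu_p:.6e} J/T")           # ~1.4106e-26 J/T
print(f"mu_n = {mu_n:.6e} J/T")           # ~-9.6624e-27 J/T
print(f"mu_B/mu_N = {mu_B / mu_N:.1f}")   # ~1836.2, the proton-to-electron mass ratio
```

The final ratio is the proton-to-electron mass ratio, which underlies the comparison of magneton sizes in the next paragraph.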
Because the value for the magnetic moment is inversely proportional to particle mass, the nuclear magneton is about 1/2000 as large as the Bohr magneton. The magnetic moment of the electron is therefore roughly 1000 times larger than those of the nucleons. The magnetic moments of the antiproton and antineutron have the same magnitudes as those of the proton and neutron, but they have opposite sign. Measurement Proton The magnetic moment of the proton was discovered in 1933 by Otto Stern, Otto Robert Frisch and Immanuel Estermann at the University of Hamburg. The proton's magnetic moment was determined by measuring the deflection of a beam of molecular hydrogen by a magnetic field. Stern won the Nobel Prize in Physics in 1943 for this discovery. Neutron The neutron was discovered in 1932, and since it had no charge, it was assumed to have no magnetic moment. Indirect evidence soon suggested that the neutron had a non-zero value for its magnetic moment, however, and the question was not settled until direct measurements of the neutron's magnetic moment in 1940 resolved the issue. Values for the magnetic moment of the neutron were independently determined by R. Bacher at the University of Michigan at Ann Arbor (1933) and I. Y. Tamm and S. A. Altshuler in the Soviet Union (1934) from studies of the hyperfine structure of atomic spectra. Although Tamm and Altshuler's estimate had the correct sign and order of magnitude (about −0.5 μN), the result was met with skepticism. By 1934 groups led by Stern, now at the Carnegie Institute of Technology in Pittsburgh, and I. I. Rabi at Columbia University in New York had independently measured the magnetic moments of the proton and deuteron. The measured values for these particles were only in rough agreement between the groups, but the Rabi group confirmed the earlier Stern measurements that the magnetic moment for the proton was unexpectedly large. Since a deuteron is composed of a proton and a neutron with aligned spins, the neutron's magnetic moment could be inferred by subtracting the proton's magnetic moment from the deuteron's. The resulting value was not zero and had a sign opposite to that of the proton. By the late 1930s, accurate values for the magnetic moment of the neutron had been deduced by the Rabi group using measurements employing newly developed nuclear magnetic resonance techniques. The value for the neutron's magnetic moment was first directly measured by L. Alvarez and F. Bloch at the University of California at Berkeley in 1940. Using an extension of the magnetic resonance methods developed by Rabi, Alvarez and Bloch determined the magnetic moment of the neutron to be approximately −1.93 μN. By directly measuring the magnetic moment of free neutrons, or individual neutrons free of the nucleus, Alvarez and Bloch resolved all doubts and ambiguities about this anomalous property of neutrons. Unexpected consequences The large value for the proton's magnetic moment and the inferred negative value for the neutron's magnetic moment were unexpected and could not be explained. The unexpected values for the magnetic moments of the nucleons would remain a puzzle until the quark model was developed in the 1960s. The refinement and evolution of the Rabi measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group.
The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. Rabi was awarded the Nobel Prize in 1944 for his resonance method for recording the magnetic properties of atomic nuclei. Nucleon gyromagnetic ratios The magnetic moment of a nucleon is sometimes expressed in terms of its g-factor, a dimensionless scalar. The convention defining the g-factor for composite particles, such as the neutron or proton, is μ = g μN I / ħ, where μ is the intrinsic magnetic moment, I is the spin angular momentum, and g is the effective g-factor. While the g-factor is dimensionless, for composite particles it is defined relative to the nuclear magneton. For the neutron, I is ħ/2, so the neutron's g-factor is gn = −3.82608545, while the proton's g-factor is gp = 5.58569469. The gyromagnetic ratio, symbol γ, of a particle or system is the ratio of its magnetic moment to its spin angular momentum, or γ = μ / I. For nucleons, the ratio is conventionally written in terms of the proton mass and charge, by the formula γ = g μN / ħ = g e / (2 mp). The neutron's gyromagnetic ratio is γn = −1.8324717×108 rad⋅s−1⋅T−1. The proton's gyromagnetic ratio is γp = 2.6752219×108 rad⋅s−1⋅T−1. The gyromagnetic ratio is also the ratio between the observed angular frequency of Larmor precession and the strength of the magnetic field in nuclear magnetic resonance applications, such as in MRI imaging. For this reason, the quantity γ/2π, called "gamma bar", expressed in the unit MHz/T, is often given. The quantities γn/2π = −29.165 MHz/T and γp/2π = 42.577 MHz/T are therefore convenient. Physical significance Larmor precession When a nucleon is put into a magnetic field produced by an external source, it is subject to a torque tending to orient its magnetic moment parallel to the field (in the case of the neutron, its spin is antiparallel to the field). As with any magnet, this torque is proportional to the product of the magnetic moment and the external magnetic field strength. Since the nucleons have spin angular momentum, this torque will cause them to precess with a well-defined frequency, called the Larmor frequency. It is this phenomenon that enables the measurement of nuclear properties through nuclear magnetic resonance. The Larmor frequency can be determined from the product of the gyromagnetic ratio with the magnetic field strength. Since for the neutron the sign of γn is negative, the neutron's spin angular momentum precesses counterclockwise about the direction of the external magnetic field. Proton nuclear magnetic resonance The magnetic moment of the proton is exploited in nuclear magnetic resonance (NMR) spectroscopy. Since hydrogen-1 nuclei are within the molecules of many substances, NMR can determine the structure of those molecules. Determination of neutron spin The interaction of the neutron's magnetic moment with an external magnetic field was exploited to determine the spin of the neutron. In 1949, D. Hughes and M. Burgy measured neutrons reflected from a ferromagnetic mirror and found that the angular distribution of the reflections was consistent with spin 1/2. In 1954, J. Sherwood, T. Stephenson, and S. Bernstein employed neutrons in a Stern–Gerlach experiment that used a magnetic field to separate the neutron spin states. They observed two such spin states, consistent with a spin-1/2 particle. Until these measurements, the possibility that the neutron was a spin-3/2 particle could not have been ruled out.
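The Larmor relation described above lends itself to a short numerical illustration. The following Python sketch (added for illustration; it hard-codes the gyromagnetic ratios quoted earlier in this section) gives the precession frequencies in a 1 T field:

```python
import math

# Gyromagnetic ratios quoted above (rad s^-1 T^-1)
gamma_p = 2.6752219e8   # proton
gamma_n = -1.8324717e8  # neutron

B = 1.0  # external field strength in tesla

# Larmor precession frequency: f = |gamma| * B / (2*pi)
f_p = abs(gamma_p) * B / (2 * math.pi)
f_n = abs(gamma_n) * B / (2 * math.pi)

print(f"proton:  {f_p / 1e6:.3f} MHz")   # ~42.577 MHz, the familiar proton NMR frequency at 1 T
print(f"neutron: {f_n / 1e6:.3f} MHz")   # ~29.165 MHz; the negative gamma reverses the sense of precession
```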
Neutrons used to probe material properties Since neutrons are neutral particles, they do not have to overcome Coulomb repulsion as they approach charged targets, unlike protons and alpha particles. Neutrons can deeply penetrate matter. The magnetic moment of the neutron has therefore been exploited to probe the properties of matter using scattering or diffraction techniques. These methods provide information that is complementary to X-ray spectroscopy. In particular, the magnetic moment of the neutron is used to determine magnetic properties of materials at length scales of 1–100 Å using cold or thermal neutrons. B. Brockhouse and C. Shull won the Nobel Prize in Physics in 1994 for developing these scattering techniques. Control of neutron beams by magnetism As neutrons carry no electric charge, neutron beams cannot be controlled by the conventional electromagnetic methods employed in particle accelerators. The magnetic moment of the neutron allows some control of neutrons using magnetic fields, however, including the formation of polarized neutron beams. One technique employs the fact that cold neutrons will reflect from some magnetic materials at great efficiency when scattered at small grazing angles. The reflection preferentially selects particular spin states, thus polarizing the neutrons. Neutron magnetic mirrors and guides use this total internal reflection phenomenon to control beams of slow neutrons. Nuclear magnetic moments Since an atomic nucleus consists of a bound state of protons and neutrons, the magnetic moments of the nucleons contribute to the nuclear magnetic moment, or the magnetic moment for the nucleus as a whole. The nuclear magnetic moment also includes contributions from the orbital motion of the charged protons. The deuteron, consisting of a proton and a neutron, provides the simplest example of a nuclear magnetic moment. The sum of the proton and neutron magnetic moments gives 0.879 μN, which is within 3% of the measured value of 0.857 μN. In this calculation, the spins of the nucleons are aligned, but their magnetic moments partially cancel because of the neutron's negative magnetic moment. Nature of the nucleon magnetic moments A magnetic dipole moment can be generated by two possible mechanisms. One way is by a small loop of electric current, called an "Ampèrian" magnetic dipole. Another way is by a pair of magnetic monopoles of opposite magnetic charge, bound together in some way, called a "Gilbertian" magnetic dipole. Elementary magnetic monopoles remain hypothetical and unobserved, however. Throughout the 1930s and 1940s it was not readily apparent which of these two mechanisms caused the nucleon intrinsic magnetic moments. In 1930, Enrico Fermi showed that the magnetic moments of nuclei (including the proton) are Ampèrian. The two kinds of magnetic moments experience different forces in a magnetic field. Based on Fermi's arguments, the intrinsic magnetic moments of elementary particles, including the nucleons, have been shown to be Ampèrian. The arguments are based on basic electromagnetism, elementary quantum mechanics, and the hyperfine structure of atomic s-state energy levels. In the case of the neutron, the theoretical possibilities were resolved by laboratory measurements of the scattering of slow neutrons from ferromagnetic materials in 1951.
Anomalous magnetic moments and meson physics The anomalous values for the magnetic moments of the nucleons presented a theoretical quandary for the 30 years from the time of their discovery in the early 1930s to the development of the quark model in the 1960s. Considerable theoretical efforts were expended in trying to understand the origins of these magnetic moments, but the failures of these theories were glaring. Much of the theoretical focus was on developing a nuclear-force equivalent to the remarkably successful theory explaining the small anomalous magnetic moment of the electron. The problem of the origins of the magnetic moments of nucleons was recognized as early as 1935. G. C. Wick suggested that the magnetic moments could be caused by the quantum-mechanical fluctuations of these particles in accordance with Fermi's 1934 theory of beta decay. By this theory, a neutron is partly, regularly and briefly, dissociated into a proton, an electron, and a neutrino as a natural consequence of beta decay. By this idea, the magnetic moment of the neutron was caused by the fleeting existence of the large magnetic moment of the electron in the course of these quantum-mechanical fluctuations, with the value of the magnetic moment determined by the length of time the virtual electron was in existence. The theory proved to be untenable, however, when H. Bethe and R. Bacher showed that it predicted values for the magnetic moment that were either much too small or much too large, depending on speculative assumptions. Similar considerations for the electron proved to be much more successful. In quantum electrodynamics (QED), the anomalous magnetic moment of a particle stems from the small contributions of quantum mechanical fluctuations to the magnetic moment of that particle. The g-factor for a "Dirac" magnetic moment is predicted to be g = −2 for a negatively charged, spin-1/2 particle. For particles such as the electron, this "classical" result differs from the observed value by around 0.1%; the difference compared to the classical value is the anomalous magnetic moment. The g-factor for the electron is measured to be −2.00231930436. QED is the theory of the mediation of the electromagnetic force by photons. The physical picture is that the effective magnetic moment of the electron results from the contributions of the "bare" electron, which is the Dirac particle, and the cloud of "virtual", short-lived electron–positron pairs and photons that surround this particle as a consequence of QED. The effects of these quantum mechanical fluctuations can be computed theoretically using Feynman diagrams with loops. The one-loop contribution to the anomalous magnetic moment of the electron, corresponding to the first-order and largest correction in QED, is found by calculating the vertex function. The calculation was first carried out by J. Schwinger in 1948, who found the leading correction to be α/2π. Computed to fourth order, the QED prediction for the electron's anomalous magnetic moment agrees with the experimentally measured value to more than 10 significant figures, making the magnetic moment of the electron one of the most accurately verified predictions in the history of physics. Compared to the electron, the anomalous magnetic moments of the nucleons are enormous. The g-factor for the proton is 5.6, and the chargeless neutron, which should have no magnetic moment at all, has a g-factor of −3.8.
Note, however, that the anomalous magnetic moments of the nucleons, that is, their magnetic moments with the expected Dirac particle magnetic moments subtracted, are roughly equal but of opposite sign: for the proton, μp − 1 μN ≈ +1.79 μN, but for the neutron, μn − 0 ≈ −1.91 μN. The Yukawa interaction for nucleons was discovered in the mid-1930s, and this nuclear force is mediated by pion mesons. In parallel with the theory for the electron, the hypothesis was that higher-order loops involving nucleons and pions may generate the anomalous magnetic moments of the nucleons. The physical picture was that the effective magnetic moment of the neutron arose from the combined contributions of the "bare" neutron, which is zero, and the cloud of "virtual" pions and photons that surround this particle as a consequence of the nuclear and electromagnetic forces. The corresponding Feynman diagram is roughly the first-order diagram, with the role of the virtual particles played by pions. As noted by A. Pais, "between late 1948 and the middle of 1949 at least six papers appeared reporting on second-order calculations of nucleon moments". These theories were also, as noted by Pais, "a flop"; they gave results that grossly disagreed with observation. Nevertheless, serious efforts continued along these lines for the next couple of decades, to little success. These theoretical approaches were incorrect because the nucleons are composite particles with their magnetic moments arising from their elementary components, quarks. Quark model of nucleon magnetic moments In the quark model for hadrons, the neutron is composed of one up quark (charge +2/3 e) and two down quarks (charge −1/3 e), while the proton is composed of one down quark (charge −1/3 e) and two up quarks (charge +2/3 e). The magnetic moment of the nucleons can be modeled as a sum of the magnetic moments of the constituent quarks, although this simple model belies the complexities of the Standard Model of particle physics. The calculation assumes that the quarks behave like pointlike Dirac particles, each having their own magnetic moment, as computed using an expression similar to the one above for the nuclear magneton: μq = eq ħ / (2 mq), where the q-subscripted variables refer to the quark's magnetic moment, charge, or mass. Simplistically, the magnetic moment of a nucleon can be viewed as resulting from the vector sum of the three quark magnetic moments, plus the orbital magnetic moments caused by the movement of the three charged quarks within it. In one of the early successes of the Standard Model (SU(6) theory), in 1964 M. Beg, B. Lee, and A. Pais theoretically calculated the ratio of proton-to-neutron magnetic moments to be −3/2, which agrees with the experimental value to within 3%. The measured value for this ratio is −1.45989805. A contradiction of the quantum mechanical basis of this calculation with the Pauli exclusion principle led to the discovery of the color charge for quarks by O. Greenberg in 1964. From the nonrelativistic quantum-mechanical wave function for baryons composed of three quarks, a straightforward calculation gives fairly accurate estimates for the magnetic moments of neutrons, protons, and other baryons. For a neutron, the magnetic moment is given by μn = (4 μd − μu) / 3, where μd and μu are the magnetic moments for the down and up quarks, respectively. This result combines the intrinsic magnetic moments of the quarks with their orbital magnetic moments and assumes that the three quarks are in a particular, dominant quantum state. The results of this calculation are encouraging, but the masses of the up or down quarks were assumed to be one-third the mass of a nucleon.
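Before turning to the limitations of this estimate, the calculation just described can be sketched numerically. The following Python fragment (an illustration, not from the original text) evaluates the SU(6) expressions assuming pointlike Dirac quarks with constituent masses of one-third the nucleon mass:

```python
# Naive constituent-quark estimate of the nucleon magnetic moments.
# Assumption (for illustration): pointlike Dirac quarks with effective
# mass m_q = m_p / 3, so mu_q = e_q / m_q in units of the nuclear magneton.

M_Q = 1.0 / 3.0  # quark mass in units of the proton mass

def quark_moment(charge):
    """Dirac moment in nuclear magnetons: mu = charge / m_q (natural units)."""
    return charge / M_Q

mu_u = quark_moment(+2.0 / 3.0)   # up quark:   +2 mu_N
mu_d = quark_moment(-1.0 / 3.0)   # down quark: -1 mu_N

# SU(6) spin-flavor wave function gives:
mu_p = (4 * mu_u - mu_d) / 3      # proton
mu_n = (4 * mu_d - mu_u) / 3      # neutron

print(f"mu_p = {mu_p:+.3f} mu_N (measured +2.793)")   # +3.000
print(f"mu_n = {mu_n:+.3f} mu_N (measured -1.913)")   # -2.000
print(f"mu_p/mu_n = {mu_p / mu_n:.3f} (SU(6): -1.5; measured -1.460)")
```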
The masses of the quarks are actually only about 1% that of a nucleon. The discrepancy stems from the complexity of the Standard Model for nucleons, where most of their mass originates in the gluon fields, virtual particles, and their associated energy that are essential aspects of the strong force. Furthermore, the complex system of quarks and gluons that constitute a nucleon requires a relativistic treatment. Nucleon magnetic moments have been successfully computed from first principles using lattice quantum chromodynamics, a calculation requiring significant computing resources. See also Aharonov–Casher effect LARMOR neutron microscope Neutron electric dipole moment Neutron triple-axis spectrometry References Bibliography S. W. Lovesey (1986). Theory of Neutron Scattering from Condensed Matter. Oxford University Press. Donald H. Perkins (1982). Introduction to High Energy Physics. Reading, Massachusetts: Addison Wesley. John S. Rigden (1987). Rabi, Scientist and Citizen. New York: Basic Books, Inc. Sergei Vonsovsky (1975). Magnetism of Elementary Particles. Moscow: Mir Publishers. External links Electric and magnetic fields in matter Magnetic moment Magnetism Magnetostatics Physical quantities
Nucleon magnetic moment
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,431
[ "Physical phenomena", "Physical quantities", "Quantity", "Electric and magnetic fields in matter", "Materials science", "Magnetic moment", "Condensed matter physics", "Physical properties", "Moment (physics)" ]
71,889,057
https://en.wikipedia.org/wiki/Chronology%20of%20Haile%20Selassie
This is a chronology of the lifetime of Ethiopian Emperor Haile Selassie (reigned from 1930 to 1974). 1892–1930 23 July 1892 – Haile Selassie (as Ras Tafari) was born to Ras Mekonnen Woldemikael and Woizero Yeshimebet Ali Abba Jifar. 1 November 1905 – Tafari was given the title Dejazmach at the age of 13. 1906 – His father Ras Mekonnen died at Kulibi. 1906 – Tafari assumed nominal governorship of Selale, which enabled him to continue his studies. 1907 – He was appointed as governor over part of the province of Sidamo. 1907 – Following the death of his brother Yelma, the governorate of Harar was left vacant and was given to Menelik's loyal general Balcha Safo. 1910/1911 – Tafari was appointed governor of Harar. 3 August 1911 – Tafari married Menen Asfaw from Ambassel, the niece of the heir to the throne, Lij Iyasu. 1916 – Tafari was made Ras by Empress Zewditu and became heir apparent and Crown Prince. 11 February 1917 – At her coronation, Zewditu pledged that her Regent, Tafari, would rule fairly. 1924 – Ras Tafari toured Europe and the Middle East, visiting Jerusalem, Alexandria, Paris, Luxembourg, Brussels, Amsterdam, Stockholm, London, Geneva, and Athens. 1928 – When Dejazmach Balcha Safo came to Addis Ababa with a sizeable force, Tafari consolidated his hold over the provinces, where many of Menelik's appointees had refused to abide by the new regulations. 18 February 1928 – As Balcha Safo went to Addis Ababa, Tafari had Ras Kassa Haile Darge buy off his army and arranged to have him displaced as the Shum of Sidamo Province by Birru Wolde Gabriel, who was himself later replaced by Desta Damtew. 2 August 1928 – the Italo-Ethiopian Treaty was signed to foster favorable relations between the two countries. 7 October 1928 – Empress Zewditu crowned Tafari as Negus. 31 March 1930 – Gugsa Welle was defeated by forces loyal to Tafari at the Battle of Anchem. 2 April 1930 – Death of Zewditu; Tafari rose to power as Emperor of Ethiopia. 2 November 1930 – Ras Tafari was crowned Haile Selassie I at Addis Ababa's St. George's Cathedral. 1930–1974 16 July 1931 – Emperor Haile Selassie introduced the first Constitution of Ethiopia, providing for a bicameral legislature. 5 December 1934 – Italian and Ethiopian forces clashed at Welwel, in the Ogaden, in the incident that became the pretext for invasion; Haile Selassie's armies later set up headquarters at Dessie in Wollo Province. 3 October 1935 – the Second Italo-Ethiopian War began. 19 October 1935 – Haile Selassie gave more precise orders for his army to his Commander-in-Chief, Ras Kassa. 2 May 1936 – Haile Selassie appointed Ras Imru Haile Selassie as Prince Regent in his absence, departing with his family for French Somaliland. 30 June 1936 – Haile Selassie appealed to the League of Nations to address the invasion. 1936–1941 – Haile Selassie lived in Bath, England, in Fairfield House, which he bought. 18 January 1941 – during the East African Campaign of World War II, Haile Selassie crossed the border between Sudan and Ethiopia near the village of Um Idda. 5 May 1941 – Haile Selassie entered Addis Ababa and reclaimed his throne, five years after the start of the Italian occupation, and addressed the Ethiopian populace. 27 August 1942 – Haile Selassie abolished slavery in Ethiopia. 1942 – Haile Selassie attempted to institute a progressive tax scheme. 2 December 1950 – the UN General Assembly adopted Resolution 390 (V), federating the former Italian colony of Eritrea with Ethiopia. 4 November 1955 – the revised 1955 Constitution of Ethiopia was adopted, providing for a unitary parliamentary constitutional monarchy.
1958 – famine struck Tigray; it was revealed to the Ministry of the Interior only two years later and caused significant loss of life. 1959 – Haile Selassie played a role in the autocephaly of the Ethiopian Orthodox Tewahedo Church from the Coptic Orthodox Church. 13 December 1960 – a coup d'état was attempted against Haile Selassie while he was on a state visit to Brazil; it was suppressed by forces loyal to him. 1961 – the Eritrean War of Independence began; the federation established under UN Resolution 390 (V) was subsequently dissolved and the Eritrean parliament closed. 25 May 1963 – Haile Selassie formed the Organization of African Unity (OAU), headquartered in Addis Ababa. 1964 – Haile Selassie initiated the concept of the United States of Africa, a proposition later taken up by Muammar Gaddafi. 1966 – Haile Selassie attempted to replace the historical tax system with a single progressive income tax, which weakened the nobility, who had previously avoided paying taxes. 1960s–1970s – a Marxist student movement took hold among educated people with radical and left-wing sentiments, opposing Haile Selassie's feudal administration. 1972–1974 – the Wollo–Tigray famine killed about 40,000 to 80,000 Ethiopians. Haile Selassie was criticized for not reporting these famines. 12 January 1974 – the Ethiopian Revolution began when Ethiopian soldiers mutinied at Negele Borena. 27 February 1974 – Prime Minister Aklilu Habte-Wold resigned as a result of the mutiny. Haile Selassie installed the liberal aristocrat Endelkachew Mekonnen as the new Prime Minister. June 1974 – the Coordinating Committee of the Armed Forces, also known as the Derg, was formed; it moved to topple Haile Selassie's government. 12 September 1974 – Haile Selassie was deposed by the Derg's General Aman Andom at the age of 82. He was subsequently imprisoned at the National Palace in Addis Ababa. 27 August 1975 – Haile Selassie died; his death was announced on state media one day later, on 28 August, and attributed to "respiratory failure" following complications from a prostate examination and subsequent prostate operation. 17 February 1992 – after the fall of the Derg in 1991, Haile Selassie's bones were found under a concrete slab on the palace grounds. 5 November 2000 – the state funeral of Haile Selassie took place at Holy Trinity Cathedral in Addis Ababa. References Haile Selassie Selassie, Haile
Chronology of Haile Selassie
[ "Physics" ]
1,368
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
67,521,729
https://en.wikipedia.org/wiki/Nirmatrelvir
Nirmatrelvir is an antiviral medication developed by Pfizer which acts as an orally active 3C-like protease inhibitor. It is part of a nirmatrelvir/ritonavir combination used to treat COVID-19 and sold under the brand name Paxlovid. Development Pharmaceutical Coronaviral proteases cleave multiple sites in the viral polyprotein, usually after glutamine residues. Early work on related human rhinoviruses showed that the flexible glutamine side chain in inhibitors could be replaced by a rigid pyrrolidone. These drugs had been further developed prior to the COVID-19 pandemic for other diseases, including SARS. The utility of targeting the 3CL protease in a real-world setting was first demonstrated in 2018, when GC376 (a prodrug of GC373) was used to treat the previously 100% lethal cat coronavirus disease, feline infectious peritonitis, caused by feline coronavirus. Nirmatrelvir and GC373 are both peptidomimetics, share the aforementioned pyrrolidone in the P1 position, and are competitive inhibitors. They use a nitrile and an aldehyde, respectively, to bind the catalytic cysteine. Pfizer investigated two series of compounds, with nitrile and benzothiazol-2-yl ketone as the reactive group, respectively, and in the end settled on using the nitrile. Nirmatrelvir was developed by modification of the earlier clinical candidate lufotrelvir, which is also a covalent protease inhibitor, but whose active element is a phosphate prodrug of a hydroxyketone. Lufotrelvir needs to be administered intravenously, limiting its use to a hospital setting. Stepwise modification of the tripeptide peptidomimetic led to nirmatrelvir, which is suitable for oral administration. Key changes include a reduction in the number of hydrogen bond donors and in the number of rotatable bonds, achieved by introducing a rigid bicyclic non-canonical amino acid (specifically, a "fused cyclopropyl ring with two methyl groups"), which mimics the leucine residue found in earlier inhibitors. This residue had previously been used in the synthesis of boceprevir. Tert-leucine (abbreviation: Tle), used in the P3 position of nirmatrelvir, was first identified as the optimal non-canonical amino acid for a potential drug targeting the SARS-CoV-2 3C-like protease using combinatorial chemistry (hybrid combinatorial substrate library technology). The leucine-like residue resulted in the loss of a nearby contact with a glutamine on the 3C-like protease. To compensate, Pfizer tried adding methane sulfonamide, acetamide and trifluoroacetamide, discovering that, of the three, trifluoroacetamide resulted in superior oral bioavailability. Chemistry and pharmacology Full details of the synthesis of nirmatrelvir were first published by scientists from Pfizer. In the penultimate step a synthetic homochiral amino acid is coupled with a homochiral amino amide using the water-soluble carbodiimide EDCI as a coupling agent. The resulting intermediate is then treated with Burgess reagent, which dehydrates the amide group to the nitrile of the product. Nirmatrelvir is a covalent inhibitor, binding directly to the catalytic cysteine (Cys145) residue of the cysteine protease enzyme. In the co-packaged medication nirmatrelvir/ritonavir, ritonavir serves to slow the metabolism of nirmatrelvir via cytochrome enzyme inhibition, thereby increasing the circulating concentration of the main drug. This effect is also used in HIV therapy, where ritonavir is combined with another protease inhibitor to similarly enhance its pharmacokinetics.
Society and culture Licensing In November 2021, Pfizer signed a license agreement with the United Nations–backed Medicines Patent Pool to allow nirmatrelvir to be manufactured and sold in 95 countries. Pfizer stated that the agreement will allow local medicine manufacturers to produce the pill "with the goal of facilitating greater access to the global population". The deal excludes several countries with major COVID-19 outbreaks including Brazil, China, Russia, Argentina, and Thailand. Names Nirmatrelvir is the international nonproprietary name (INN). Research The research that led to nirmatrelvir began in March 2020, when Pfizer formally launched a project at its Cambridge, Massachusetts site to develop antiviral drugs for treating COVID-19. In July 2020, Pfizer chemists were able to synthesize nirmatrelvir for the first time. In September 2020, Pfizer completed a pharmacokinetic study in rats which suggested that nirmatrelvir could be administered orally. The actual synthesis of the drug for laboratory research and for clinical trials was carried out at Pfizer's Groton, Connecticut site. In February 2021, Pfizer launched the company's first phase I trial of PF-07321332 (nirmatrelvir) at its clinical research unit in New Haven, Connecticut. A study published in March 2023 reported that treatment with nirmatrelvir within five days of initial infection reduced the risk of long COVID relative to patients who did not receive Paxlovid. A 2024 study found that "the time to sustained alleviation of all signs and symptoms of Covid-19 did not differ significantly between participants who received nirmatrelvir–ritonavir and those who received placebo." References Amides COVID-19 drug development Cyclopropanes Nitriles Drugs developed by Pfizer Pyrrolidones SARS-CoV-2 main protease inhibitors Trifluoromethyl compounds
Nirmatrelvir
[ "Chemistry" ]
1,272
[ "Amides", "Drug discovery", "Functional groups", "COVID-19 drug development", "Nitriles" ]
67,523,190
https://en.wikipedia.org/wiki/Qalculate%21
Qalculate! is an arbitrary precision cross-platform software calculator. It supports complex mathematical operations and concepts such as differentiation, integration, data plotting, and unit conversion. It is free and open-source software released under the GPL v2. Features Qalculate! supports common mathematical functions and operations, multiple bases, autocompletion, complex numbers, infinite numbers, arrays and matrices, variables, mathematical and physical constants, user-defined functions, symbolic differentiation and integration, solving of equations involving unknowns, uncertainty propagation using interval arithmetic, plotting using Gnuplot, unit and currency conversion and dimensional analysis, and provides a periodic table of elements, as well as several functions for computer science, such as character encoding and bitwise operations. It provides four interfaces: two GUIs, one using GTK (qalculate-gtk) and another using Qt (qalculate-qt), a library for use in other programs (libqalculate), and a CLI program for use in a terminal (qalc). Qalculate! (GTK+ GUI): qalculate-gtk Qalculate! (Qt GUI): qalculate-qt Qalculate! (CLI): qalc (usually provided by the libqalculate package) Qalculate! (Library): libqalculate Use in academic research Bartel, Alexandre. "DOS Software Security: Is there Anyone Left to Patch a 25-year old Vulnerability?." "In our example of Figure 7, we choose to execute /usr/bin/qalculate-gtk, a calculator. Since the stack of the DOSBox process is non-executable, we cannot directly inject our shellcode on it." "The Gnome calculator was used to perform these calculations and the results were verified using the Qalculate! calculator and WolframAlpha (15) since spreadsheets are unable to perform these calculations." See also Mathematical software List of arbitrary-precision arithmetic software Comparison of software calculators References External links Qalculate! - the ultimate desktop calculator at GitHub Qalculate! - downloads at GitHub Qalculate/qalculate-gtk GUI at GitHub Qalculate! Manual at GitHub QALC man page at GitHub Ubuntu – Details of package qalculate in bionic Ubuntu – Details of package qalculate in focal Qalculate! code review by PVS-Studio Free educational software GNOME Applications Software calculators Free software programmed in C++ Software using the GNU General Public License
Qalculate!
[ "Mathematics" ]
582
[ "Software calculators", "Mathematical software" ]
67,525,376
https://en.wikipedia.org/wiki/Representative%20layer%20theory
The concept of the representative layer came about through the work of Donald Dahm, with the assistance of Kevin Dahm and Karl Norris, to describe spectroscopic properties of particulate samples, especially as applied to near-infrared spectroscopy. A representative layer has the same void fraction as the sample it represents, and each particle type in the sample has the same volume fraction and surface area fraction as does the sample as a whole. The spectroscopic properties of a representative layer can be derived from the spectroscopic properties of particles, which may be determined in a wide variety of ways. While a representative layer could be used in any theory that relies on the mathematics of plane parallel layers, there is a set of definitions and mathematics, some old and some new, which have become part of representative layer theory. Representative layer theory can be used to determine the spectroscopic properties of an assembly of particles from those of the individual particles in the assembly. The sample is modeled as a series of layers, each of which is parallel to the others and perpendicular to the incident beam. The mathematics of plane parallel layers is then used to extract the desired properties from the data, most notably the linear absorption coefficient, which behaves in the manner of the coefficient in Beer's law. The representative layer theory gives a way of performing the calculations for new sample properties by changing the properties of a single layer of the particles, which does not require reworking the mathematics for the sample as a whole. History The first attempt to account for transmission and reflection of a layered material was carried out by George G. Stokes in about 1860 and led to some very useful relationships. John W. Strutt (Lord Rayleigh) and Gustav Mie developed the theory of single scatter to a high degree, but Arthur Schuster was the first to consider multiple scatter. He was concerned with the cloudy atmospheres of stars, and developed a plane-parallel layer model in which the radiation field was divided into forward and backward components. This same model was used much later by Paul Kubelka and Franz Munk, whose names are usually attached to it by spectroscopists. Following WWII, the field of reflectance spectroscopy was heavily researched, both theoretically and experimentally. The remission function, F(R∞) = (1 − R∞)² / (2R∞), following Kubelka-Munk theory, was the leading contender as the metric of absorption analogous to the absorbance function in transmission absorption spectroscopy. The form of the K-M solution was originally F(R∞) = K/S, but it was rewritten in terms of linear coefficients by some authors, becoming f(R∞) = k/s, taking k and s as being equivalent to the linear absorption and scattering coefficients as they appear in the Bouguer-Lambert law, even though the sources who derived the equations preferred the symbolism K and S and usually emphasized that S was a remission or back-scattering parameter, which for the case of diffuse scatter should properly be taken as an integral. In 1966, in a book entitled Reflectance Spectroscopy, Harry Hecht had pointed out that the formulation led to log f(R∞) = log k − log s, which enabled plotting the logarithm of the remission function "against the wavelength or wave-number for a particular sample", giving a curve corresponding "to the real absorption determined by transmission measurements, except for a displacement by −log s in the ordinate direction." However, in data presented, "the marked deviation in the remission function ... in the region of large extinction is obvious."
He listed various reasons given by other authors for this "failure ... to remain valid in strongly absorbing materials", including: "incomplete diffusion in the scattering process"; failure to use "diffuse illumination"; and "increased proportion of regular reflection"; but concluded that "notwithstanding the above mentioned difficulties, ... the remission function should be a linear function of the concentration at a given wavelength for a constant particle size", though stating that "this discussion has been restricted entirely to the reflectance of homogeneous powder layers", though "equation systems for combination of inhomogeneous layers cannot be solved for the scattering and absorbing properties even in the simple case of a dual combination of sublayers. ... This means that the (Kubelka-Munk) theory fails to include, in an explicit manner, any dependence of reflection on particle size or shape or refractive index". The field of near-infrared spectroscopy (NIR) got its start in 1968, when Karl Norris and co-workers at the Instrumentation Research Lab of the U.S. Department of Agriculture first applied the technology to agricultural products. The USDA discovered how to use NIR empirically, based on available sources, gratings, and detector materials. Even the wavelength range of NIR was empirically set based on the operational range of a PbS detector. Consequently, it was not seen as a rigorous science: it had not evolved in the usual way, from research institutions to general usage. Even though the Kubelka-Munk theory provided a remission function that could have been used as the absorption metric, Norris selected log(1/R) for convenience. He believed that the problem of non-linearity between the metric and concentration was due to particle size (a theoretical concern) and stray light (an instrumental effect). In qualitative terms, he would explain differences in spectra of different particle size as changes in the effective path length that the light traveled through the sample. In 1976, Hecht published an exhaustive evaluation of the various theories which were considered to be fairly general. In it, he presented his derivation of the Hecht finite difference formula, obtained by replacing the fundamental differential equations of the Kubelka-Munk theory with finite difference equations. He noted that plots of the remission function against absorption are well known to deviate from linearity for high values of absorption, and that his equation "can be used to explain the deviations in part" and "represents an improvement in the range of validity and shows the need to consider the particulate nature of scattering media in developing a more precise theory by which absolute absorptivities can be determined." In 1982, Gerry Birth convened a meeting of experts in several areas that impacted NIR spectroscopy, with emphasis on diffuse reflectance spectroscopy, no matter which portion of the electromagnetic spectrum might be used. This was the beginning of the International Diffuse Reflectance Conference. At this meeting was Harry Hecht, who may at the time have been the world's most knowledgeable person in the theory of diffuse reflectance. Gerry himself took many photographs illustrating various aspects of diffuse reflectance, many of which were not explainable with the best available theories. In 1987, Birth and Hecht wrote a joint article in a new handbook, which pointed a direction for future theoretical work.
In 1994, Donald and Kevin Dahm began using numerical techniques to calculate remission and transmission from samples of varying numbers of plane parallel layers from absorption and remission fractions for a single layer. Using this entirely independent approach, they found a function that was independent of the number of layers of the sample. This function, called the Absorption/Remission function and nicknamed the ART function, is defined as A(R,T) = ((1 − R)² − T²) / R. Besides the relationships displayed here, the formulas obtained for the general case are entirely consistent with the Stokes formulas, the equations of Benford, and Hecht's finite difference formula. For the special cases of infinitesimal or infinitely dilute particles, it gives results consistent with the Schuster equation for isotropic scattering and the Kubelka–Munk equation. These equations are all for plane parallel layers using two light streams. This cumulative mathematics was tested on data collected using directed radiation on plastic sheets, a system that precisely matches the physical model of a series of plane parallel layers, and found to conform. The mathematics provided: 1) a method to use plane parallel mathematics to separate absorption and remission coefficients for a sample; 2) an Absorption/Remission function that is constant for all sample thicknesses; and 3) equations relating the absorption and remission of one thickness of sample to that of any other thickness. Mathematics of plane parallel layers in absorption spectroscopy Using simplifying assumptions, the spectroscopic parameters (absorption, remission, and transmission fractions) of a plane parallel layer can be built from the refractive index of the material making up the layer, the linear absorption coefficient (absorbing power) of the material, and the thickness of the layer. While other assumptions could be made, those most often used are those of normal incidence of a directed beam of light, with internal and external reflection from the surface being the same. Determining the A, R, T fractions for a surface For the special case where the incident radiation is normal (perpendicular) to a surface and the absorption is negligible, the fractions of the reflected and transmitted beams can be calculated from the refractive indices η1 and η2 of the two media. With R0 the fraction of the incident light reflected and T0 the fraction transmitted: R0 = ((η2 − η1) / (η2 + η1))², T0 = 1 − R0, with the fraction absorbed taken as zero (A0 = 0). Illustration For a beam of light traveling in air with an approximate index of refraction of 1.0, and encountering the surface of a material having an index of refraction of 1.5: R0 = ((1.5 − 1.0) / (1.5 + 1.0))² = 0.04, T0 = 0.96. Determining the A, R, T fractions for a sheet There is a simplified special case for the spectroscopic parameters of a sheet. This sheet consists of three plane parallel layers (1: front surface, 2: interior, 3: rear surface) in which the surfaces both have the same remission fraction when illuminated from either direction, regardless of the relative refractive indices of the two media on either side of the surface. For the case of zero absorption in the interior, the total remission and transmission from the layer can be determined from the infinite series of internal reflections, where r is the remission from a single surface: R = r + r(1 − r)²(1 + r² + r⁴ + ...) = 2r / (1 + r), T = (1 − r)²(1 + r² + r⁴ + ...) = (1 − r) / (1 + r). These formulas can be modified to account for absorption. Alternatively, the spectroscopic parameters of a sheet (or slab) can be built up from the spectroscopic parameters of the individual pieces that compose the layer: surface, interior, surface.
This can be done using an approach developed by Kubelka for treatment of inhomogeneous layers. Using the example from the previous section, each surface has {A, R, T} = {0, 0.04, 0.96}. We will assume the interior of the sheet is composed of a material that has a Napierian absorption coefficient of 0.5 cm−1, and that the sheet is 1 mm thick (d = 0.1 cm). For this case, on a single trip through the interior, according to the Bouguer-Lambert law, T = exp(−0.5 × 0.1) ≈ 0.951, which according to our assumptions yields A ≈ 0.049 and R = 0. Thus the interior has {A, R, T} = {0.049, 0, 0.951}. Then one of Benford's equations can be applied. If Ax, Rx, and Tx are known for layer x, and Ay, Ry, and Ty are known for layer y, the ART fractions for a sample composed of layer x followed by layer y are: T(x+y) = Tx Ty / (1 − R(−x) Ry), R(x+y) = Rx + Tx² Ry / (1 − R(−x) Ry), A(x+y) = 1 − T(x+y) − R(x+y). (The symbol R(−x) means the reflectance of layer x when the direction of illumination is antiparallel to that of the incident beam. The difference in direction is important when dealing with inhomogeneous layers. This consideration was added by Paul Kubelka in 1954. He also pointed out that transmission was independent of the direction of illumination, but absorption and remission were not.) Illustration Step 1: We take layer 1 (the front surface) as x, and layer 2 (the interior) as y. By our assumptions, in this case {A, R, T}(x+y) ≈ {0.047, 0.040, 0.913}. Step 2: We take the result from step 1 as the value for new x [ x is old x+y; (−x) is old y+x ], and the value for layer 3 (the rear surface) as new y, giving {A, R, T} ≈ {0.049, 0.073, 0.878} for the sheet as a whole. Dahm has shown that for this special case, the total amount of light absorbed by the interior of the sheet (considering surface remission) is the same as that absorbed in a single trip (independent of surface remission). This is borne out by the calculations: the absorbed fraction of the sheet, 0.049, matches the single-trip value. The decadic absorbance of the sheet is then given by A10 = −log10(R + T). Determining the A, R, T fractions for n layers The Stokes formulas can be used to calculate the ART fractions for any number of layers. Alternatively, they can be calculated by successive application of Benford's equation for "one more layer". If A1, R1, and T1 are known for the representative layer of a sample, and An, Rn, and Tn are known for a layer composed of n representative layers, the ART fractions for a layer with thickness of n + 1 are: T(n+1) = Tn T1 / (1 − Rn R1), R(n+1) = Rn + Tn² R1 / (1 − Rn R1), A(n+1) = 1 − T(n+1) − R(n+1). Illustration In the above example, {A1, R1, T1} ≈ {0.049, 0.073, 0.878}. The Table shows the results of repeated application of the above formulas. Absorbing Power: The Scatter Corrected Absorbance of a sample Within a homogeneous medium such as a solution, there is no scatter. For this case, the absorbance function −log10(T) is linear with both the concentration of the absorbing species and the path-length. Additionally, the contributions of individual absorbing species are additive. For samples which scatter light, absorbance is defined as "the negative logarithm of one minus absorptance (absorption fraction: A) as measured on a uniform sample". For decadic absorbance, this may be symbolized as A10 = −log10(1 − A). Even though this absorbance function is useful with scattering samples, the function does not have the same desirable characteristics as it does for non-scattering samples. There is, however, a property called absorbing power which may be estimated for these samples. The absorbing power of a single unit thickness of material making up a scattering sample is the same as the absorbance of the same thickness of the material in the absence of scatter. Illustration Suppose that we have a sample consisting of 14 of the sheets described above, each one of which has an absorbance of 0.0222. If we are able to estimate the absorbing power (the absorbance of a sample of the same thickness, but having no scatter) from the sample without knowing how many sheets are in the sample (as would be the general case), it would have the desirable property of being proportional to the thickness.
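The layer arithmetic above is easy to mechanize. The following Python sketch (an editorial illustration, not part of the original description; the function names are arbitrary and the single-sheet inputs are the rounded values from the sheet example, so the outputs differ slightly from the worked numbers quoted in the illustration that follows) applies Benford's adding equations to build a multi-sheet stack, then uses the halving relations introduced next to extrapolate the scatter-corrected absorbing power:

```python
import math

def add_layer(Rx, Tx, Ry, Ty, Rmx=None):
    """Benford's adding equations: combine layer x (illuminated first) with layer y.
    Rmx is the remission of layer x for light travelling in the reverse direction;
    for a symmetric layer, Rmx equals Rx."""
    if Rmx is None:
        Rmx = Rx
    denom = 1.0 - Rmx * Ry
    T = Tx * Ty / denom
    R = Rx + Tx * Tx * Ry / denom
    return R, T

def halve_layer(R, T):
    """Benford's halving relations for a symmetric layer of thickness d,
    returning R and T for thickness d/2."""
    r = R / (1.0 + T)
    t = math.sqrt(T * (1.0 - r * r))
    return r, t

# Rounded single-sheet values from the sheet example above.
R1, T1 = 0.073, 0.878

# Build a 14-sheet sample one sheet at a time.
R, T = R1, T1
for _ in range(13):
    R, T = add_layer(R, T, R1, T1)

print(f"14 sheets: R = {R:.4f}, T = {T:.4f}")
print(f"measured absorbance = {-math.log10(R + T):.4f}")  # depressed by scatter

# Successive halving: 2**k times the absorbance of the (1/2**k)-thickness sample
# converges on the absorbing power (the scatter-corrected absorbance).
r, t, scale = R, T, 1
for _ in range(25):
    r, t = halve_layer(r, t)
    scale *= 2
print(f"absorbing power estimate = {scale * -math.log10(r + t):.4f}")  # ~0.30 with these rounded inputs
```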
In this case, we know that the absorbing power (scatter corrected absorbance) should be 14 times the absorbance of a single sheet, approximately 0.312. This is the value we should have for the sample if the absorbance is to follow the law of Bouguer (often referred to as Beer's law). In the Table below, we see that the sample has the A, R, T values for the case of 14 sheets in the Table above. Because of the presence of scatter, the measured absorbance of the sample, −log10(R + T), falls well short of this value. Then we calculate this for the half sample thickness using another of Benford's equations. If A, R, and T are known for a layer with thickness d, the ART fractions for a layer with thickness d/2 are: R(d/2) = R / (1 + T), T(d/2) = √(T (1 − R(d/2)²)), A(d/2) = 1 − R(d/2) − T(d/2). In the line for the half sample [S/2], we see the values which are the same as those for 7 layers in the Table above, as we expect. Note that the absorbance of the half sample is more than half the absorbance of the full sample. We desire to have the absorbance be linear with sample thickness, but we find that when we multiply this value by 2, we get a result which is a significant departure from the previous estimate for the absorbing power. The next iteration of the formula produces the estimate of A, R, T for a quarter sample. Note that this time the calculation corresponds to three and a half layers, a thickness of sample that cannot exist physically. Continuing for the sequentially higher powers of two, we see a monotonically increasing estimate. Eventually the numbers will start jumping with round-off error, but one can stop when getting a constant value to a specified number of significant figures. In this case, the estimate becomes constant to 4 significant figures at 0.3105, which is our estimate for the absorbing power of the sample. This corresponds to our target value of 0.312 determined above. Expressing particulate mixtures as layers If one wants to use a theory based on plane parallel layers, optimally the samples would be describable as layers. But a particulate sample often looks like a jumbled maze of particles of various sizes and shapes, showing no structured pattern of any kind, and certainly not literally divided into distinct, identical layers. Even so, it is a tenet of Representative Layer Theory that for spectroscopic purposes, we may treat the complex sample as if it were a series of layers, each one representative of the sample as a whole. Definition of a representative layer To be representative, the layer must meet the following criteria: • The volume fraction of each type of particle is the same in the representative layer as in the sample as a whole. • The surface area fraction of each type of particle is the same in the representative layer as in the sample as a whole. • The void fraction of the representative layer is the same as in the sample. • The representative layer is nowhere more than one particle thick. Note this means the "thickness" of the representative layer is not uniform. This criterion is imposed so that we can assume that a given photon of light has only one interaction with the layer. It might be transmitted, remitted, or absorbed as a result of this interaction, but it is assumed not to interact with a second particle within the same layer. In the above discussion, when we talk about a "type" of particle, we must clearly distinguish between particles of different composition. In addition, however, we must distinguish between particles of different sizes. Recall that scattering is envisioned as a surface phenomenon and absorption is envisioned as occurring at the molecular level throughout the particle.
Consequently, our expectation is that the contribution of a "type" of particle to absorption will be proportional to the volume fraction of that particle in the sample, and the contribution of a "type" of particle to scattering will be proportional to the surface area fraction of that particle in the sample. This is why our "representative layer" criteria above incorporate both volume fraction and surface area fraction. Since small particles have larger surface-area-to-volume ratios than large particles, it is necessary to distinguish between them. Determining spectroscopic properties of a representative layer Under these criteria, we can propose a model for the fractions of incident light that are absorbed (A), remitted (R), and transmitted (T) by one representative layer, expressed as sums over particle types in terms of the following quantities: • Si, the fraction of cross-sectional surface area that is occupied by particles of type i. • ki, the effective absorption coefficient for particles of type i. • bi, the remission coefficient for particles of type i. • di, the thickness of a particle of type i in the direction of the incident beam. The summation is carried out over all of the distinct "types" of particle. In effect, Si represents the fraction of light that will interact with a particle of type i, and ki and bi quantify the likelihood of that interaction resulting in absorption and remission, respectively. Surface area fractions and volume fractions for each type of particle can be defined in terms of the following quantities: • wi, the mass fraction of particles of type i in the sample. • vi, the fraction of occupied volume composed of particles of type i. • si, the fraction of particle surface area that is composed of particles of type i. • Vi, the fraction of total volume composed of particles of type i. • Si, the fraction of cross-sectional surface area that is composed of particles of type i. • ρi, the density of particles of type i. • ε, the void fraction of the sample. This is a logical way of relating the spectroscopic behavior of a "representative layer" to the properties of the individual particles that make up the layer. The values of the absorption and remission coefficients represent a challenge in this modeling approach. Absorption is calculated from the fraction of light striking each type of particle and a "Beer's law"-type calculation of the absorption by each type of particle, so the values of ki used should ideally model the ability of the particle to absorb light, independent of other processes (scattering, remission) that also occur. We referred to this as the absorbing power in the section above. List of principal symbols used Where a given letter is used in both capital and lower case form (for example A, a and R, r), the capital letter refers to the macroscopic observable and the lower case letter to the corresponding variable for an individual particle or layer of the material. Greek symbols are used for properties of a single particle. a – absorption fraction of a single layer r – remission fraction of a single layer t – transmission fraction of a single layer An, Rn, Tn – the absorption, remission, and transmission fractions for a sample composed of n layers α – absorption fraction of a particle β – back-scattering from a particle σ – isotropic scattering from a particle k – absorption coefficient, defined as the fraction of incident light absorbed by a very thin layer divided by the thickness of that layer s – scattering coefficient, defined as the fraction of incident light scattered by a very thin layer divided by the thickness of that layer References Spectroscopy
Representative layer theory
[ "Physics", "Chemistry" ]
4,197
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
67,525,955
https://en.wikipedia.org/wiki/Stable%20massive%20particles
Stable massive particles (SMPs) are hypothetical particles that are long-lived and have appreciable mass. The precise definition varies depending on the different experimental or observational searches. SMPs may be defined as being at least as massive as electrons, and as not decaying during their passage through a detector. They can be neutral, charged, or carry a fractional charge, and interact with matter through the gravitational force, strong force, weak force, electromagnetic force, or an unknown force. If new SMPs are ever discovered, several questions related to the origin and constituents of dark matter, and to the unification of the four fundamental forces, may be answered.
Collider experiments
Heavy, exotic particles that interact with matter and can be directly detected through collider experiments are termed stable massive particles, or SMPs. More specifically, an SMP is defined as a particle that can pass through a detector without decaying and can undergo electromagnetic or strong interaction with matter. Searches for SMPs have been carried out across a spectrum of collision experiments, such as lepton–hadron, hadron–hadron, and electron–positron. Although none of these experiments has detected an SMP, they have placed substantial constraints on the nature of SMPs.
ATLAS Experiment
In proton–proton collisions at a center-of-mass energy of 13 TeV, the ATLAS experiment carried out a search for charged SMPs. In this case, SMPs were defined as particles with mass significantly greater than that of Standard Model particles, sufficient lifetime to reach the ATLAS hadronic calorimeter, and measurable electric charge while passing through the tracking chambers.
MoEDAL experiment
The MoEDAL experiment searches for, among other things, highly ionizing SMPs and pseudo-SMPs.
Non-collider experiments
In the case of non-collider experiments, SMPs are defined as sufficiently long-lived particles that exist either as relics of the Big Bang singularity or as products of secondary collisions, and that are beyond the range of any conceivable accelerator experiment. References Hypothetical elementary particles Particle physics Exotic matter Hypothetical particles
Stable massive particles
[ "Physics" ]
424
[ "Hypothetical particles", "Matter", "Unsolved problems in physics", "Particle physics", "Exotic matter", "Hypothetical elementary particles", "Physics beyond the Standard Model", "Subatomic particles" ]
67,526,365
https://en.wikipedia.org/wiki/Gravitational%20contact%20terms
In quantum field theory, a contact term is a radiatively induced point-like interaction. These typically occur when the vertex for the emission of a massless particle, such as a photon, a graviton, or a gluon, is proportional to q² (the invariant momentum squared of the radiated particle). This factor cancels the 1/q² of the Feynman propagator, and causes the exchange of the massless particle to produce a point-like δ-function effective interaction, rather than the usual long-range potential. A notable example occurs in the weak interactions, where a W-boson radiative correction to a gluon vertex produces a q²-dependent term, leading to what is known as a "penguin" interaction. The contact term then generates a correction to the full action of the theory.
Contact terms occur in gravity when there are non-minimal interactions, such as a coupling of the form F(φ)R, or, in Brans–Dicke theory, φR. The non-minimal couplings are quantum equivalent to an "Einstein frame," with a pure Einstein–Hilbert action, owing to gravitational contact terms. These arise classically from graviton exchange interactions. The contact terms are an essential, yet hidden, part of the action and, if they are ignored, the Feynman diagram loops in different frames yield different results. At leading order, including the contact terms is equivalent to performing a Weyl transformation to remove the non-minimal couplings, taking the theory to the Einstein–Hilbert form. In this sense, the Einstein–Hilbert form of the action is unique and "frame ambiguities" in loop calculations do not exist.
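As a schematic illustration of the frame equivalence described above (a hedged sketch of the standard Weyl rescaling, not a derivation taken from this article's references): starting from a non-minimally coupled action

S = \int d^4x \, \sqrt{-g} \left[ F(\phi) R - \tfrac{1}{2} g^{\mu\nu} \partial_\mu \phi \, \partial_\nu \phi - V(\phi) \right],

the rescaling g_{\mu\nu} \to \tilde{g}_{\mu\nu} = \Omega^2(\phi) g_{\mu\nu}, with \Omega^2(\phi) = 2 F(\phi)/M_{\rm Pl}^2, brings the gravitational part to the Einstein–Hilbert form,

S = \int d^4x \, \sqrt{-\tilde{g}} \left[ \tfrac{M_{\rm Pl}^2}{2} \tilde{R} - \tfrac{1}{2} K(\phi) \, \tilde{g}^{\mu\nu} \partial_\mu \phi \, \partial_\nu \phi - \frac{V(\phi)}{\Omega^4(\phi)} \right],

where K(φ) collects the field-dependent kinetic factors generated by the rescaling. This is the sense in which, at leading order, including the gravitational contact terms is equivalent to working directly in the Einstein frame.

References Concepts in physics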
Gravitational contact terms
[ "Physics" ]
331
[ "nan" ]
67,527,532
https://en.wikipedia.org/wiki/Photonic%20crystal%20sensor
Photonic crystal sensors use photonic crystals: nanostructures composed of periodic arrangements of dielectric materials that interact with light according to their particular structure, reflecting light of specific wavelengths at specific angles. Any change in the periodicity or refractive index of the structure can give rise to a change in the reflected color, that is, the color perceived by an observer or a spectrometer. This simple principle makes them useful, intuitive colorimetric sensors for different applications including, but not limited to, environmental analysis, temperature sensing, magnetic sensing, biosensing, diagnostics, food quality control, security, and mechanical sensing. Many animals in nature, such as fish or beetles, employ responsive photonic crystals for camouflage, for signaling, or to bait their prey. The variety of materials usable in such structures, ranging from inorganic and organic materials to plasmonic metal nanoparticles, makes these structures highly customizable and versatile. In the case of inorganic materials, variation of the refractive index is the most commonly exploited effect in sensing, while periodicity change is more commonly exhibited in polymer-based sensors. Besides their small size, current developments in manufacturing technologies have made them easy and cheap to fabricate at larger scale, making them mass-producible and practical.
Types and structures
Biosensors and integrated lab-on-a-chip
Because properly designed photonic crystals exhibit high sensitivity, selectivity, and stability, and can operate without electricity if needed, they have become highly researched portable biological sensors. Developments in analysis, device miniaturization, fluidic design, and integration have catapulted the development of integrated photonic crystal sensors into what are known as lab-on-a-chip devices of high sensitivity, low limit of detection, fast response time, and low cost. A large range of analytes of biological interest, such as proteins, DNA, cancer cells, glucose, and antibodies, can be detected with this kind of sensor, providing fast, cheap, and accurate diagnostic and health-monitoring tools that can detect concentrations as low as 15 nM. Certain chemical or biological target molecules can be integrated within the structure to provide specificity.
Chemical sensors
As chemical analytes have their own specific refractive indices, they can fill porous photonic structures, altering their effective index and consequently their color in a fingerprint-like manner. Alternatively, they can alter the volume of polymer-based structures, resulting in a change in the periodicity that leads to a similar end effect. In ion-containing hydrogels, selective swelling provides specificity. Applications in gaseous and aqueous environments have been studied to detect concentrations of chemical species, solvents, vapors, ions, pH, and humidity. The specificity and sensitivity can be controlled by the appropriate choice of materials and their interaction with the analytes, even enabling label-free sensors. The concentration of chemical species in vapor or liquid phases, as well as in more complex mixtures, can be determined with high confidence.
Mechanical sensors
Different mechanical signals such as pressure, strain, torsion, and bending can be detected with photonic crystal sensors.
Commonly, they are based on the deformation-induced change in the lattice constants of flexible materials such as elastomeric composites or colloidal crystals, causing a mechanochromic effect as they stretch or contract.
3D photonic crystals
Synthetic opals are three-dimensional photonic crystals usually made of self-assembled nanospheres with diameters on the order of hundreds of nanometers, where the high-refractive-index material is that of the spheres and the low-index material is air or another filler. Inverse opals, on the other hand, are structures in which the interstitial space between the spheres is filled with another material and the spheres are subsequently removed, providing a larger free volume for faster diffusion of chemical species.
Photonic crystal fibers
Photonic crystal fibers are a special type of optical fiber that contains air holes distributed in specific patterns around a solid or hollow core. Due to their high sensitivity, inherent flexibility, and small diameters, they can be used in a variety of situations requiring high robustness and portability. Compared to traditional optical fibers, they are highly birefringent, with tailorable dispersion, limited loss, and endlessly single-mode propagation over a wide range of wavelengths, and they have a very fast sensing response.
2D gratings and slabs
One-dimensional slabs with two-dimensional order, created by selectively removing material to form a pattern of holes or grooves in an otherwise homogeneous material, are a popular photonic crystal structure used in sensing.
Fabry–Pérot mirrors
Fabry–Pérot mirrors are planar photonic crystals in which the periodicity is maintained only in the z-dimension. Sputtered porous inorganic sensors, spin-coated polymer sensors, and self-assembled block copolymers are a few of the commonly used planar 1D structures.
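The color readout at the heart of these sensors can be estimated with the first-order Bragg relation at normal incidence, λ = 2·n_eff·d. The sketch below is a minimal illustration with hypothetical numbers; it is not a calibration of any particular device.

def bragg_peak_nm(n_eff, period_nm, order=1):
    # Bragg reflection wavelength at normal incidence for a given order.
    return 2.0 * n_eff * period_nm / order

# Hypothetical porous opal: analyte infiltration raises the effective index.
before = bragg_peak_nm(n_eff=1.33, period_nm=200.0)  # pores filled with water
after = bragg_peak_nm(n_eff=1.36, period_nm=200.0)   # pores filled with analyte
print(f"{before:.0f} nm -> {after:.0f} nm, shift {after - before:.0f} nm")

A shift of this size (here about 12 nm) is what a spectrometer, or in larger cases the eye, registers as the color change described above.

References Photonics Sensors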
Photonic crystal sensor
[ "Technology", "Engineering" ]
1,008
[ "Sensors", "Measuring instruments" ]
67,527,587
https://en.wikipedia.org/wiki/TestOps
TestOps (or test operations) refers to the discipline of managing the operational aspects of testing within the software delivery lifecycle. Software testing is an evolving discipline that includes both functional and non-functional testing. The term is thought to have been coined in March 2019 by Ditmir Hasani, CEO of a QA technology consultancy. Increasingly, software testing, especially in agile development processes, is shifting toward a continuous testing process in which software developers, quality engineers, manual testers, product owners, and others are involved in the quality process. As more people have become involved in the testing process and testing projects have grown, so too has the need to advance the discipline of software testing management and to manage software quality processes, people, systems, and tests. TestOps helps teams scale their staff, tests, and quality processes efficiently and effectively.
Elements of test operations (TestOps)
TestOps involves several important disciplines that can be broken down into:
Planning — Helps identify how the software is going to be tested. What are the priorities for testing? How will it be tested? Who will do the testing? In addition, the planning phase should identify the environment for the tests. Will they be run in a test environment, or in production? Production data can be valuable to identify real user flows that help prioritize test cases. The outputs include identifying the types of tests to use, the test automation tools, the timing of the testing, the ownership of the testing at different phases, and the design and outputs of the tests.
Management — Test management includes the organization and governance of the team, the tools, the test environment, and the tests themselves. Tests follow a lifecycle and must be managed through stages such as draft, active, and quarantine. TestOps helps ensure that the testing processes are efficient and scalable.
Control — As teams and tests grow in number and diversity, they naturally increase complexity. Change control processes such as pull requests, approvals on merges, collaboration tools, and ownership labeling can help ensure that changes to tests are properly approved.
Insights — The data and information derived from test automation systems should help inform operational as well as process transformation decisions. Operational information about testing activities includes test results (pass/fail), release readiness criteria, failure diagnostics, team testing productivity, test stability, and more. Information that can inform process improvements includes team and individual performance data, failure type trend analysis, test coverage mapping, and risk coverage analysis.
TestOps Features
DevOps integration — TestOps exists to ensure that the product development pipeline has all the testing frameworks and tools needed. It is common for QA engineers to rely on the pipelines that IT puts together without much input. TestOps changes this by owning test activities related to DevOps, allowing QA engineers and developers to have full ownership and visibility of the development pipeline, so they can tailor it to meet their needs.
Cloud integration — Integrating tests and test runs with cloud providers.
Enhanced test planning — Automation is not effective if the entire codebase has to be tested every time a line of code is changed.
TestOps provides a centralized platform that makes it easier for testers and developers to plan what tests to write, as well as to identify what tests to run and when.
Test lifecycle management — Automated tests follow a lifecycle including creation, evaluation, active use, and quarantine or removal. The status of a test should affect how it is treated in build automation systems like CI/CD (a minimal sketch of lifecycle-aware test selection is given below).
Test version control — Processes that help ensure that changes to tests are properly reviewed and approved through capabilities like pull requests in code.
Real-time dashboards — Real-time results and status help test teams understand the state of software releases and the work that needs to be done to create, approve, or run more tests.
TestOps vs. DevOps
DevOps is a broader, more inclusive concept that includes software feature planning, code development, testing, deployment, configuration, monitoring, and feedback. It was an attempt to integrate previously disconnected toolchains. Testing is included in the broader DevOps methodology. TestOps is not simply additional emphasis on testing: it is focused on the operational aspects of testing that are necessary to ensure that testing, whether performed in development, in production, or in its own testing phase, is well planned, managed, and controlled, and that it provides insights to enable continuous improvement. TestOps aims to remove the siloed working mindset from activities such as continuous delivery, software testing (manual and automated), environment setup, infrastructure and log management, and built-in security enforcement.
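As a minimal sketch of the lifecycle-aware test selection described above (the class, states, and component names are hypothetical illustrations, not features of any specific TestOps tool):

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    state: str        # lifecycle state: "draft", "active", or "quarantine"
    components: set   # product components this test covers

def select_for_ci(tests, changed_components):
    # Gate the build on active tests that touch the changed components;
    # draft and quarantined tests are excluded from the gating run.
    return [t for t in tests
            if t.state == "active" and t.components & changed_components]

tests = [
    TestCase("login_smoke", "active", {"auth"}),
    TestCase("checkout_flaky", "quarantine", {"payments"}),
    TestCase("search_new", "draft", {"search"}),
]
print([t.name for t in select_for_ci(tests, {"auth", "payments"})])  # ['login_smoke']

References Software testing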
TestOps
[ "Engineering" ]
948
[ "Software engineering", "Software testing" ]
62,279,536
https://en.wikipedia.org/wiki/Electrostatic%20septum
An electrostatic septum is a dipolar electric field device used in particle accelerators to inject or extract a particle beam into or from a synchrotron. In an electrostatic septum, essentially an electric-field septum, two separate regions can be identified: one with an electric field, and a field-free region. The two regions are separated by a physical wall that is called the septum. An important feature of septa is to have a homogeneous field in the gap and no field in the region of the circulating beam.
The basic principle
Electrostatic septa provide an electric field in the direction of extraction by applying a voltage between the septum foil and an electrode. The septum foil is very thin so that it interacts as little as possible with the beam while it is slowly extracted; slowly here means over millions of turns of the particles in the synchrotron. The orbiting beam generally passes through the hollow support of the septum foil, which ensures a field-free region, so as not to affect the circulating beam. The field-free region is achieved by using the hollow support of the septum and the septum foil itself as a Faraday cage. The extracted beam passes just on the other side of the septum, where the electric field changes the direction of the beam to be extracted. The septum separates the gap field between the electrode and the foil from the field-free region for the circulating beam. Electrostatic septa always sit in a vacuum tank to allow high electric fields, since the vacuum works as an insulator between the septum and the high-voltage electrode. To allow precise matching of the septum position with the circulating beam trajectory, the septum is often fitted with a displacement system, which allows parallel and angular displacement with respect to the circulating beam. Great difficulty lies in the choice of materials and the manufacturing techniques of the different components. In the figure, a typical cross section of an electrostatic septum is shown. The septum foil and its support are marked in blue, while the electrode is marked in red. In the lower part of the figure, the electric field E is shown as it could be measured on the axis indicated as a dotted line in the cross section. The field-free region is inside the support of the septum foil. The electric field E in the gap between the septum foil and the electrode is homogeneous on the axis and is equal to
E = V / d,
where V is the voltage applied to the electrode and d is the distance between the septum foil and the electrode (a numerical check of typical values is given below).
Typical technical specifications
Typical device specifications are listed below.
Electrode length: 500 – 3000 mm
Gap width: variable between 10 – 35 mm
Septum thickness: 0.1 mm
Vacuum: (10−9 to 10−12 mbar range)
Electric field strength: up to 15 MV/m
Voltage: up to 300 kV
Septum materials: Molybdenum foil, Tungsten Rhenium alloy wires, Tungsten Rhenium alloy ribbons
Electrode materials: stainless steel, anodised aluminium or titanium for ultra-high-vacuum applications
Bakeable up to 200 °C for ultra-high-vacuum applications
Power supplied by a high-voltage Cockcroft–Walton generator
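A back-of-the-envelope check makes the specifications above concrete. The sketch below assumes the standard small-angle kick formula θ ≈ E·L/(β·(pc/e)) for a relativistic particle, which is textbook accelerator physics but not stated in this article; the beam momentum is a hypothetical example.

def gap_field(voltage_v, gap_m):
    # Homogeneous gap field, E = V / d.
    return voltage_v / gap_m

def kick_mrad(field_v_per_m, length_m, pc_ev, beta=1.0):
    # Small-angle deflection theta ~ e*E*L / (beta * p * c), in milliradians;
    # expressing pc in eV lets the elementary charge cancel.
    return field_v_per_m * length_m / (beta * pc_ev) * 1e3

E = gap_field(voltage_v=200e3, gap_m=0.020)            # 200 kV across a 20 mm gap
print(E / 1e6, "MV/m")                                  # 10.0 MV/m, below the 15 MV/m limit
print(kick_mrad(E, length_m=3.0, pc_ev=26e9), "mrad")   # ~1.15 mrad for a 26 GeV/c beam

References Electrostatic Septum Accelerator physics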
Electrostatic septum
[ "Physics" ]
639
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
62,280,237
https://en.wikipedia.org/wiki/Electronics%20prototyping
In electronics, prototyping means building an actual circuit to a theoretical design to verify that it works, and to provide a physical platform for debugging it if it does not. The prototype is often constructed using techniques such as wire wrapping or using a breadboard, stripboard or perfboard, with the result being a circuit that is electrically identical to the design but not physically identical to the final product. Open-source tools like Fritzing exist to document electronic prototypes (especially the breadboard-based ones) and move toward physical production. Prototyping platforms such as Arduino also simplify the task of programming and interacting with a microcontroller. The developer can choose to deploy their invention as-is using the prototyping platform, or replace it with only the microcontroller chip and the circuitry that is relevant to their product. A technician can quickly build a prototype (and make additions and modifications) using these techniques, but for volume production it is much faster and usually cheaper to mass-produce custom printed circuit boards than to produce these other kinds of prototype boards. The proliferation of quick-turn PCB fabrication and assembly companies has enabled the concepts of rapid prototyping to be applied to electronic circuit design. It is now possible, even with the smallest passive components and largest fine-pitch packages, to have boards fabricated, assembled, and even tested in a matter of days. Boards Breadboard Perfboard Stripboard References Prototypes
Electronics prototyping
[ "Engineering" ]
300
[ "Electronic engineering", "Electronic circuits" ]
62,285,602
https://en.wikipedia.org/wiki/Multi-agent%20reinforcement%20learning
Multi-agent reinforcement learning (MARL) is a sub-field of reinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment. Each agent is motivated by its own rewards and takes actions to advance its own interests; in some environments these interests are opposed to the interests of other agents, resulting in complex group dynamics.
Multi-agent reinforcement learning is closely related to game theory, especially repeated games, as well as to multi-agent systems. Its study combines the pursuit of finding ideal algorithms that maximize rewards with a more sociological set of concepts. While research in single-agent reinforcement learning is concerned with finding the algorithm that collects the largest possible reward for one agent, research in multi-agent reinforcement learning evaluates and quantifies social metrics, such as cooperation, reciprocity, equity, social influence, language, and discrimination.
Definition
Similarly to single-agent reinforcement learning, multi-agent reinforcement learning is modeled as some form of a Markov decision process (MDP). For example:
• a set S of environment states;
• a set of actions A_i for each of the agents i = 1, …, N;
• P(s′ | s, a), the probability of transition (at time t) from state s to state s′ under joint action a = (a_1, …, a_N);
• R(s, a, s′), the immediate joint reward after the transition from s to s′ with joint action a.
In settings with perfect information, such as the games of chess and Go, the MDP would be fully observable. In settings with imperfect information, especially in real-world applications like self-driving cars, each agent would access an observation that only has part of the information about the current state. In the partially observable setting, the core model is the partially observable stochastic game in the general case, and the decentralized POMDP in the cooperative case.
Cooperation vs. competition
When multiple agents act in a shared environment, their interests might be aligned or misaligned. MARL allows exploring all the different alignments and how they affect the agents' behavior:
In pure competition settings, the agents' rewards are exactly opposite to each other, and therefore they are playing against each other.
Pure cooperation settings are the other extreme, in which agents get the exact same rewards, and therefore they are playing with each other.
Mixed-sum settings cover all the games that combine elements of both cooperation and competition.
Pure competition settings
When two agents are playing a zero-sum game, they are in pure competition with each other. Many traditional games such as chess and Go fall under this category, as do two-player variants of modern games like StarCraft. Because each agent can only win at the expense of the other agent, many complexities are stripped away. There is no prospect of communication or social dilemmas, as neither agent is incentivized to take actions that benefit its opponent. The Deep Blue and AlphaGo projects demonstrate how to optimize the performance of agents in pure competition settings.
One complexity that is not stripped away in pure competition settings is autocurricula. As the agents' policies improve through self-play, multiple layers of learning may occur.
Pure cooperation settings
MARL is used to explore how separate agents with identical interests can communicate and work together. Pure cooperation settings are explored in recreational cooperative games such as Overcooked, as well as in real-world scenarios in robotics.
In pure cooperation settings, all the agents get identical rewards, which means that social dilemmas do not occur.
In pure cooperation settings there are often many possible coordination strategies, and agents converge to specific "conventions" when coordinating with each other. The notion of conventions has been studied in language and has also been alluded to in more general multi-agent collaborative tasks.
Mixed-sum settings
Most real-world scenarios involving multiple agents have elements of both cooperation and competition. For example, when multiple self-driving cars are planning their respective paths, each of them has interests that are diverging but not exclusive: each car is minimizing the amount of time it takes to reach its destination, but all cars have the shared interest of avoiding a traffic collision.
Zero-sum settings with three or more agents often exhibit similar properties to mixed-sum settings, since each pair of agents might have a non-zero utility sum between them.
Mixed-sum settings can be explored using classic matrix games such as prisoner's dilemma, more complex sequential social dilemmas, and recreational games such as Among Us, Diplomacy and StarCraft II.
Mixed-sum settings can give rise to communication and social dilemmas.
Social dilemmas
As in game theory, much of the research in MARL revolves around social dilemmas, such as prisoner's dilemma, chicken and stag hunt. While game theory research might focus on Nash equilibria and what an ideal policy for an agent would be, MARL research focuses on how the agents would learn these ideal policies through a trial-and-error process (a minimal sketch of such learning in a repeated matrix game is given below). The reinforcement learning algorithms used to train the agents maximize each agent's own reward; the conflict between the needs of the agents and the needs of the group is a subject of active research.
Various techniques have been explored in order to induce cooperation in agents: modifying the environment rules, adding intrinsic rewards, and more.
Sequential social dilemmas
Social dilemmas like prisoner's dilemma, chicken and stag hunt are "matrix games". Each agent takes only one action from a choice of two possible actions, and a simple 2x2 matrix is used to describe the reward that each agent will get, given the actions that each agent took. In humans and other living creatures, social dilemmas tend to be more complex. Agents take multiple actions over time, and the distinction between cooperating and defecting is not as clear-cut as in matrix games. The concept of a sequential social dilemma (SSD) was introduced in 2017 as an attempt to model that complexity. There is ongoing research into defining different kinds of SSDs and showing cooperative behavior in the agents that act in them.
Autocurricula
An autocurriculum (plural: autocurricula) is a reinforcement learning concept that is salient in multi-agent experiments. As agents improve their performance, they change their environment; this change in the environment affects them and the other agents. The feedback loop results in several distinct phases of learning, each depending on the previous one. The stacked layers of learning are called an autocurriculum. Autocurricula are especially apparent in adversarial settings, where each group of agents is racing to counter the current strategy of the opposing group.
The Hide and Seek game is an accessible example of an autocurriculum occurring in an adversarial setting. In this experiment, a team of seekers is competing against a team of hiders.
Whenever one of the teams learns a new strategy, the opposing team adapts its strategy to give the best possible counter. When the hiders learn to use boxes to build a shelter, the seekers respond by learning to use a ramp to break into that shelter. The hiders respond by locking the ramps, making them unavailable for the seekers to use. The seekers then respond by "box surfing", exploiting a glitch in the game to penetrate the shelter. Each "level" of learning is an emergent phenomenon, with the previous level as its premise. This results in a stack of behaviors, each dependent on its predecessor.
Autocurricula in reinforcement learning experiments are compared to the stages of the evolution of life on Earth and the development of human culture. A major stage in evolution happened 2–3 billion years ago, when photosynthesizing life forms started to produce massive amounts of oxygen, changing the balance of gases in the atmosphere. In the next stages of evolution, oxygen-breathing life forms evolved, eventually leading up to land mammals and human beings. These later stages could only happen after the photosynthesis stage made oxygen widely available. Similarly, human culture could not have gone through the Industrial Revolution in the 18th century without the resources and insights gained by the agricultural revolution at around 10,000 BC.
Applications
Multi-agent reinforcement learning has been applied to a variety of use cases in science and industry.
AI alignment
Multi-agent reinforcement learning has been used in research into AI alignment. The relationship between the different agents in a MARL setting can be compared to the relationship between a human and an AI agent. Research efforts at the intersection of these two fields attempt to simulate possible conflicts between a human's intentions and an AI agent's actions, and then explore which variables could be changed to prevent these conflicts.
Limitations
There are some inherent difficulties with multi-agent deep reinforcement learning. The environment is no longer stationary, so the Markov property is violated: transitions and rewards do not depend only on the current state of an agent.
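As a minimal sketch of the trial-and-error learning in a matrix game referenced above: two independent Q-learners repeatedly play a prisoner's dilemma with epsilon-greedy exploration. The payoff values and hyperparameters here are hypothetical illustrations, not taken from the literature cited in this article.

import random

# Payoffs for (cooperate = 0, defect = 1); PAYOFF[(a1, a2)] = (reward 1, reward 2).
PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

def act(q, eps):
    # Epsilon-greedy selection over the two actions.
    if random.random() < eps:
        return random.randrange(2)
    return 0 if q[0] >= q[1] else 1

def train(episodes=20000, alpha=0.1, eps=0.1):
    q1, q2 = [0.0, 0.0], [0.0, 0.0]      # stateless action values for each agent
    for _ in range(episodes):
        a1, a2 = act(q1, eps), act(q2, eps)
        r1, r2 = PAYOFF[(a1, a2)]
        q1[a1] += alpha * (r1 - q1[a1])  # each agent updates toward its own reward
        q2[a2] += alpha * (r2 - q2[a2])
    return q1, q2

print(train())  # defection (action 1) typically ends up valued highest for both

Because each learner treats the other as part of its environment, the environment each one faces is nonstationary, which is precisely the limitation noted above.

Further reading
Stefano V. Albrecht, Filippos Christianos, Lukas Schäfer. Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press, 2024. https://www.marl-book.com
Kaiqing Zhang, Zhuoran Yang, Tamer Basar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Studies in Systems, Decision and Control, Handbook on RL and Control, 2021.
References Reinforcement learning Multi-agent systems Deep learning Game theory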
Multi-agent reinforcement learning
[ "Engineering" ]
1,888
[ "Artificial intelligence engineering", "Multi-agent systems" ]
62,286,468
https://en.wikipedia.org/wiki/GraphBLAS
GraphBLAS () is an API specification that defines standard building blocks for graph algorithms in the language of linear algebra. GraphBLAS is built upon the notion that a sparse matrix can be used to represent graphs as either an adjacency matrix or an incidence matrix. The GraphBLAS specification describes how graph operations (e.g. traversing and transforming graphs) can be efficiently implemented via linear algebraic methods (e.g. matrix multiplication) over different semirings. The development of GraphBLAS and its various implementations is an ongoing community effort, including representatives from industry, academia, and government research labs.
Background
Graph algorithms have long taken advantage of the idea that a graph can be represented as a matrix, and graph operations can be performed as linear transformations and other linear algebraic operations on sparse matrices. For example, matrix–vector multiplication can be used to perform a step in a breadth-first search. The GraphBLAS specification (and the various libraries that implement it) provides data structures and functions to compute these linear algebraic operations. In particular, GraphBLAS specifies sparse matrix objects which map well to graphs where vertices are likely connected to relatively few neighbors (i.e. the degree of a vertex is significantly smaller than the total number of vertices in the graph). The specification also allows for the use of different semirings to accomplish operations in a variety of mathematical contexts.
Originally motivated by the need for standardization in graph analytics, similar to its namesake BLAS, the GraphBLAS standard has also begun to interest people outside the graph community, including researchers in machine learning and bioinformatics. GraphBLAS implementations have also been used in high-performance graph database applications such as RedisGraph.
Specification
The GraphBLAS specification has been in development since 2013, and has reached version 2.1.0 as of December 2023. While formally a specification for the C programming language, a variety of programming languages have been used to develop implementations in the spirit of GraphBLAS, including C++, Java, and Nvidia CUDA.
Compliant implementations and language bindings
There are currently two fully-compliant reference implementations of the GraphBLAS specification. Bindings assuming a compliant specification exist for the Python, MATLAB, and Julia programming languages.
Linear algebraic foundations
The mathematical foundations of GraphBLAS are based in linear algebra and the duality between matrices and graphs. Each graph operation in GraphBLAS operates on a semiring, which is made up of the following elements:
• a scalar addition operator (⊕);
• a scalar multiplication operator (⊗);
• a set (or domain) of permitted values.
Note that the zero element (i.e. the element that represents the absence of an edge in the graph) can also be reinterpreted. For example, the following algebras can be implemented in GraphBLAS:
• the standard arithmetic semiring, with ⊕ = +, ⊗ = ×, over the real numbers, with zero element 0;
• the min-plus (tropical) semiring, with ⊕ = min, ⊗ = +, over the reals extended with +∞, with zero element +∞;
• the max-plus semiring, with ⊕ = max, ⊗ = +, over the reals extended with −∞, with zero element −∞;
• the logical semiring, with ⊕ = ∨ (or), ⊗ = ∧ (and), over the Booleans, with zero element false.
All the examples above satisfy the following two conditions in their respective domains, where 0 denotes the semiring's zero element:
• additive identity: 0 ⊕ x = x;
• multiplicative annihilation: 0 ⊗ x = 0.
For instance, a user can specify the min-plus algebra over the domain of double-precision floating point numbers with GrB_Semiring_new(&min_plus_semiring, GrB_MIN_FP64, GrB_PLUS_FP64).
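To make the semiring idea concrete, the following small sketch (written in Python for brevity rather than the C API, and not part of the GraphBLAS specification) shows a matrix–vector product over the min-plus semiring, where a single "multiplication" performs one single-source shortest-path relaxation step:

INF = float("inf")   # the "zero" element of the min-plus semiring

def min_plus_vxm(v, A):
    # w[j] = min over i of (v[i] + A[i][j]): plus plays the role of times,
    # min plays the role of plus.
    n = len(A)
    return [min(v[i] + A[i][j] for i in range(n)) for j in range(n)]

# A[i][j] is the weight of edge i -> j; INF means no edge; 0 on the diagonal.
A = [[0,   2,   INF],
     [INF, 0,   1  ],
     [INF, INF, 0  ]]
v = [0, INF, INF]        # distances from source vertex 0
v = min_plus_vxm(v, A)   # [0, 2, inf]
v = min_plus_vxm(v, A)   # [0, 2, 3]: shortest path costs from vertex 0
print(v)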
Functionality
While the GraphBLAS specification generally allows significant flexibility in implementation, some functionality and implementation details are explicitly described:
• GraphBLAS objects, including matrices and vectors, are opaque data structures.
• Non-blocking execution mode, which permits lazy or asynchronous evaluation of certain operations.
• Masked assignment, denoted C⟨M⟩ = A, which assigns elements of matrix A to matrix C only in positions where the mask matrix M is non-zero.
The GraphBLAS specification also prescribes that library implementations be thread-safe.
Example code
The following is a GraphBLAS 2.1-compliant example of a breadth-first search in the C programming language.

#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include "GraphBLAS.h"

/*
 * Given a boolean n x n adjacency matrix A and a source vertex s, performs a BFS traversal
 * of the graph and sets v[i] to the level in which vertex i is visited (v[s] == 1).
 * If i is not reachable from s, then v[i] has no stored element (an implicit zero).
 * Vector v should be uninitialized on input.
 */
GrB_Info BFS(GrB_Vector *v, GrB_Matrix A, GrB_Index s)
{
  GrB_Index n;
  GrB_Matrix_nrows(&n, A);                   // n = # of rows of A

  GrB_Vector_new(v, GrB_INT32, n);           // Vector<int32_t> v(n)

  GrB_Vector q;                              // vertices visited in each level
  GrB_Vector_new(&q, GrB_BOOL, n);           // Vector<bool> q(n)
  GrB_Vector_setElement(q, (bool)true, s);   // q[s] = true, false everywhere else

  /*
   * BFS traversal and label the vertices.
   */
  int32_t level = 0;                         // level = depth in BFS traversal
  GrB_Index nvals;
  do {
    ++level;                                 // next level (start with 1)
    GrB_apply(*v, GrB_NULL, GrB_PLUS_INT32, GrB_SECOND_INT32, q, level, GrB_NULL);  // v[q] = level
    GrB_vxm(q, *v, GrB_NULL, GrB_LOR_LAND_SEMIRING_BOOL, q, A, GrB_DESC_RC);        // q[!v] = q ||.&& A; finds all the
                                                                                    // unvisited successors from current q
    GrB_Vector_nvals(&nvals, q);
  } while (nvals);                           // if there is no successor in q, we are done.

  GrB_free(&q);                              // q vector no longer needed
  return GrB_SUCCESS;
}

See also
Basic Linear Algebra Subprograms (BLAS)
LEMON Graph Library
References
External links
GraphBLAS Forum
Numerical linear algebra Numerical software Graph description languages
GraphBLAS
[ "Mathematics" ]
1,349
[ "Graph theory", "Mathematical relations", "Numerical software", "Graph description languages", "Mathematical software" ]
62,287,040
https://en.wikipedia.org/wiki/Design%20Cities%20%28UNESCO%29
UNESCO's Design Cities project is part of the wider Creative Cities Network. The Network launched in 2004, and has member cities in seven creative fields. The other fields are: Crafts and Folk Art, Music, Film, Gastronomy, Literature, and Media Arts.
Criteria for UNESCO Design Cities
To be approved as a Design City, cities need to meet a number of criteria set by UNESCO. Designated UNESCO Design Cities share similar characteristics, such as having an established design industry; a cultural landscape maintained by design and the built environment (architecture, urban planning, public spaces, monuments, transportation); design schools and design research centers; practicing groups of designers with continuous activity at a local and national level; experience in hosting fairs, events and exhibits dedicated to design; opportunities for local designers and urban planners to take advantage of local materials and urban/natural conditions; and design-driven creative industries such as architecture and interiors, fashion and textiles, jewelry and accessories, interaction design, urban design, and sustainable design.
There are 40 Cities of Design.
See also
City of Crafts and Folk Arts
City of Film
City of Gastronomy
City of Literature
City of Music
City of Media Arts
References UNESCO Design Lists of cities
Design Cities (UNESCO)
[ "Engineering" ]
239
[ "Design" ]
70,436,716
https://en.wikipedia.org/wiki/Citrate%E2%80%93malate%20shuttle
The citrate-malate shuttle is a series of chemical reactions, commonly referred to as a biochemical cycle or system, that transports acetyl-CoA from the mitochondrial matrix across the inner and outer mitochondrial membranes for fatty acid synthesis. Mitochondria are enclosed in a double membrane. As the inner mitochondrial membrane is impermeable to acetyl-CoA, the shuttle system is essential to fatty acid synthesis in the cytosol. It plays an important role in the generation of lipids in the liver (hepatic lipogenesis). The name of the citrate-malate shuttle is derived from the two intermediates – short-lived chemicals that are generated in a reaction step and consumed entirely in the next – citrate and malate, which carry the acetyl group across the mitochondrial double membrane. The citrate–malate shuttle is present in humans and other higher eukaryotic organisms and is closely related to the Krebs cycle. The system is responsible for the transportation of malate into the mitochondrial matrix to serve as an intermediate in the Krebs cycle, and for the transportation of citrate into the cytosol for secretion in Aspergillus niger, a fungus used in the commercial production of citric acid.
Mechanism
Structure of mitochondria
All cells need energy to survive. The mitochondrion is a double-membrane organelle in the cell that generates and transports essential metabolic products. The three layers of this structure are the outer membrane, the intermembrane space, and the inner membrane. The space inside the mitochondrion is called the mitochondrial matrix, while the region outside is the cytosol. The outer membrane allows most small molecules to pass through. In contrast, the inner membrane is impermeable to many substances and transports only specific molecules. Therefore, a shuttle is required for the transportation of molecules across the inner membrane; it acts as a pump, driving substances across the inner membrane to the outside.
Components of the system
Membranes contain many proteins, some of which are involved in recognition, attachment, or transport. The citrate-malate shuttle system consists of the citrate shuttle and the malate shuttle, which are carrier proteins present in the mitochondrial membranes. They transport different molecules across these membranes. In this system, the substances being transported are malate and citrate. The starting material is acetyl-CoA, a molecule that is involved in ATP synthesis, protein metabolism, and lipid metabolism. As the inner membrane is not permeable to this molecule, acetyl-CoA needs to be converted into other products for effective transport; this conversion is also the first step of the cycle.
Movement of citrate and malate
The process occurs in two cellular locations: the cytosol and the mitochondrial matrix. The system forms a cycle, ensuring that the conversion between acetyl-CoA, oxaloacetate, citrate, and malate can continue without the addition of foreign molecules. It involves six major steps:
Step 1
An acetyl group of acetyl-CoA combines with oxaloacetate to form citrate, releasing the coenzyme group (CoA) in the mitochondrial matrix.
Step 2
The citrate binds to citrate transporters. The shuttle carries the citrate from the inner membrane to the intermembrane space. There is a net movement of citrate from the intermembrane space to the cytosol across the outer membrane, following the concentration gradient.
Step 3
Using ATP as energy, citrate is broken down into an acetyl group and oxaloacetate.
The acetyl group joins the coenzyme in the cytosol, forming acetyl-CoA.
Step 4
Oxaloacetate is reduced by NADH to malate in the cytosol, regenerating NAD+.
Step 5
The malate is transported by the malate shuttle, moving from the cytosol to the matrix.
Step 6
The malate is oxidized by NAD+ (the oxidizing agent) back to oxaloacetate, releasing NADH. This replenishes oxaloacetate, which can react with acetyl-CoA in the first step again, completing the cycle (the net result is summarized below).
Function
The citrate-malate shuttle allows the cell to produce fatty acids from excess acetyl-CoA for storage. The principle is similar to that of insulin-stimulated glycogen storage, in which excess glucose in the body is turned into glycogen for storage in the liver cells and skeletal muscles, so that when energy intake is lacking, the body can still provide itself with glucose by breaking down glycogen. The citrate-malate shuttle enables more compact storage of chemical energy in the body in the form of fatty acids by transporting acetyl-CoA into the cytosol for fatty acid and cholesterol synthesis. The lipids produced can then be stored for future use.
Acetyl-CoA is generated in the mitochondrial matrix from two sources: pyruvate decarboxylation following glycolysis, and the breakdown of fatty acids through β-oxidation, both of which are essential pathways of energy production in humans. Pyruvate decarboxylation is the step that connects glycolysis and the Krebs cycle and is regulated by the pyruvate dehydrogenase complex when blood glucose levels are high. Otherwise, fatty acid β-oxidation occurs, and acetyl-CoA is required to generate ATP through the Krebs cycle.
In a subject with a defective citrate-malate shuttle, acetyl-CoA in mitochondria cannot exit into the cytosol. Fatty acid synthesis is hence hindered, and the body cannot store excess energy as efficiently as a normal subject. In addition, improper functioning of the citrate-malate shuttle can result in disruption of the Krebs cycle.
Linkage to Krebs cycle
The Krebs cycle, also known as the TCA cycle or citric acid cycle, is a biochemical pathway that facilitates the breakdown of glucose in a cell. Both citrate and malate, the intermediates of the citrate–malate shuttle, are necessary intermediates of the Krebs cycle. Usually, oxaloacetate in the Krebs cycle is generated from the carboxylation of pyruvate in the mitochondrion; however, malate generated in the cytosol can also enter the mitochondrion through the transport protein located in the inner mitochondrial membrane to join the Krebs cycle directly. The mitochondrial transport proteins are encoded by the SLC25 gene family in humans and facilitate the transportation of various metabolites, including citrate and malate, in the Krebs cycle. These transport proteins control the flow of metabolites in and out of the inner mitochondrial membrane, which is impermeable to most molecules. They connect the carbohydrate metabolism of the Krebs cycle to fatty acid synthesis in lipogenesis by catalyzing the transportation of acetyl-CoA out of the mitochondrial matrix into the cytosol, in the form of citrate export from the mitochondria to the cytosol. Cytosolic citrate, meaning citrate in the cytosol, is a key substrate for the generation of energy. It releases acetyl-CoA and provides NADPH for fatty acid synthesis and, in subsequent pathways, generates NAD+ for glycolysis. Citrate also activates acetyl-CoA carboxylase, an enzyme that is essential in the fatty acid synthesis pathway.
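Taken together, steps 1–6 above amount to the following net transfer per turn of the cycle. This is a simplified summary derived from the steps as described, ignoring transporter stoichiometry and proton balance:

acetyl-CoA (matrix) + ATP + NADH (cytosol) + NAD+ (matrix) → acetyl-CoA (cytosol) + ADP + Pi + NAD+ (cytosol) + NADH (matrix)

In words: exporting each acetyl group costs one ATP and effectively moves one pair of reducing equivalents from the cytosol into the matrix.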
The citrate–malate shuttle might partly or completely replace the function of the Krebs cycle in cancer cell metabolism.
Association with cancer
Alternate metabolic pathway in cancer cells
A recent study proposed that the citrate–malate shuttle may contribute to sustaining cancer cells through a β-oxidation–citrate–malate shuttle metabolic pathway. In normal cells, β-oxidation produces acetyl-CoA, which enters the Krebs cycle to produce ATP, and β-oxidation cannot continue if the Krebs cycle is impaired and acetyl-CoA accumulates. However, cancer cells may carry out continuous β-oxidation by connecting it to the citrate–malate shuttle. The new metabolic pathway consists of mitochondrial transport proteins and several enzymes, including ATP-citrate lyase (ACLY) and malate dehydrogenases 1 and 2 (MDH1 and MDH2). The proposed metabolic pathway may explain the Warburg effect – that cancer cells produce energy through a suboptimal pathway – and hypoxia in cancer. The energy efficiency of this pathway is about 3.76 times lower than that of the normal β-oxidation–Krebs cycle pathway, producing only 26 moles of ATP instead of 98 moles from 1 mole of palmitate. It is still uncertain whether this pathway exists in cancer cells; factors that may prevent it include the lipotoxicity of palmitate and stearate.
Liver cancer
Role of liver
The liver contains metabolically active tissue, as it is responsible for detoxification and for protein and carbohydrate metabolism. It therefore needs a lot of energy to function and contains abundant mitochondria, and any abnormality in mitochondria would affect liver metabolism. If the liver does not work properly, it may produce excess metabolites, leading to their accumulation; conversely, it may also fail to produce certain chemicals. The resulting imbalance of metabolites may lead to liver cancer development, i.e. hepatocarcinogenesis.
Cancer cells
The growth and development of normal cells follow a cycle in a controlled and ordered manner. When they are damaged, they die through a process called apoptosis. However, apoptosis is disrupted in cancer cells, allowing them to divide and grow uncontrollably, potentially invading other tissues or organs, and they do not undergo the normal death process of body cells. Hepatocellular carcinoma (HCC) is a prevalent type of liver cancer that accounts for over 80% of cases. It is a lethal cancer due to its remarkable drug tolerance, metastatic potential, and high chance of relapse. Much research has been carried out to identify risk factors for HCC progression.
Risk factors
Metabolic disorder is one of the causes of liver cancer. Mitochondria are responsible for oxidation using NAD+, which is regenerated in Step 4 of the citrate–malate shuttle. In patients with obesity or insulin resistance (diabetes), the body contains large amounts of fatty acids, and the shuttle system might not regenerate sufficient NAD+ to metabolize the fat efficiently; such patients also exhibit low NAD+ levels. Thus, patients with obesity or diabetes are more likely to develop liver cancer. Moreover, mitochondrial overloading may occur, increasing the level of reactive oxygen species in the liver. These highly reactive species attack liver cells and can damage DNA strands; cells with DNA damage may divide abnormally and grow into cancer cells, resulting in HCC. Another risk factor is mutation combined with an overexpressed citrate–malate shuttle.
The Ras oncogene, a gene frequently mutated in a wide range of cancers, is closely associated with HCC. Many HCC patients carry mutations in this gene, and they also show abnormal citrate–malate shuttle activity. Research from Dalian Medical University shows a noticeable increase in citrate and malate levels in HCC patients, suggesting the possibility of higher citrate–malate shuttle activity. This mechanism is effective when TCA cycle activity is low. The shuttle also supports the production of fatty acids and lactic acid. In liver cancer cells, the TCA cycle is blocked, causing excess pyruvate to accumulate; this normally acts as a defense signal, and cancer cells would die under high pyruvate levels. However, an overexpressed citrate–malate shuttle can remove the excess pyruvate; in this situation, the natural death of liver tumor cells does not occur, and the cancer cells keep growing. In addition, high shuttle activity is linked to increased fatty acid generation, which is also a risk factor for HCC.
Genetics and evolution
Mitochondrial diseases
Mitochondrial diseases are usually caused by mutations in mitochondrial DNA. These genes regulate the synthesis of different proteins, including carrier proteins and certain enzymes. Mitochondrial DNA replicates during mitochondrial binary fission, in which one set of genes is copied into two. The mitochondrial genes of children are inherited from their mother only; any genetic defects or mutations in the mother's mitochondrial DNA are therefore inherited by the children. If those genetic changes cause mitochondrial diseases, the children have a 100% possibility of acquiring the diseases. Four major genes are involved in the malate–oxaloacetate shuttle: PMDH1, MDH, PMDH2, and mMDH1. PMDH1 and PMDH2 encode two different enzymes that provide NAD+ for the oxidation of malate, while MDH and mMDH1 encode enzymes that directly oxidize malate.
Importance
SLC25 is a gene family that is essential for the synthesis of a wide range of mitochondrial transporters, such as the citrate shuttle. Mutations in these genes can result in dysfunctional mitochondria, leading to a significant decrease in the energy production of body cells and causing severe metabolic diseases. Severe symptoms appear in organs or tissues that have a high energy demand, such as the liver, brain, heart, and kidneys, which require abundant functional mitochondria to function. Mitochondrial disorders caused by defective or reduced SLC25 gene expression can cause diseases such as CAC deficiency, HHH syndrome, AGC2 deficiency (CTLN2/NICCD), congenital Amish microcephaly, early epileptic encephalopathy, AAC1 deficiency, PiC (isoform A) deficiency, AGC1 deficiency, neuropathy with striatal necrosis, and congenital sideroblastic anaemia.
In addition, the SLC25 family is crucial for the survival of organisms, as reflected by its high frequency in the genomes of different organisms. This indicates that these genes are favourable for the survival of a species in response to its environment, so they have been preserved and passed down the generations; in other words, they have been positively selected during evolution. The SLC25 family is found not only in humans but also in other animals and even in microorganisms. This shows that the genes are conserved among different species, which provides evidence for their significance and essentiality in the survival of organisms.
References Biochemical reactions Citric acid cycle Metabolism
Citrate–malate shuttle
[ "Chemistry", "Biology" ]
3,122
[ "Carbohydrate metabolism", "Biochemical reactions", "Cellular processes", "Biochemistry", "Metabolism", "Citric acid cycle" ]
70,441,044
https://en.wikipedia.org/wiki/Macro-creatine%20kinase
Macro-creatine kinase (macro-CK) is a macroenzyme, an enzyme of high molecular weight and prolonged half-life found in human serum. It is one of the most common macroenzymes. Macro-CK type 1 is a complex formed by one of the creatine kinase isoenzyme types, typically CK-BB, and immunoglobulins: typically IgG, sometimes IgA, and rarely IgM. Macro-CK type 2 is formed from polymers of mitochondrial CK. Macro-CK type 1 has been associated with autoimmune and other chronic conditions, while macro-CK type 2 has been associated with malignancy. Macro-CK has been implicated as a source of interference in the interpretation of laboratory tests. References Blood tests Chemical pathology
Macro-creatine kinase
[ "Chemistry", "Biology" ]
156
[ "Biochemistry", "Blood tests", "Chemical pathology" ]
73,294,620
https://en.wikipedia.org/wiki/Indium%20perchlorate
Indium perchlorate is the inorganic compound with the chemical formula In(ClO4)3. The compound is an indium salt of perchloric acid.
Synthesis
Indium perchlorate can be prepared by dissolving indium hydroxide in perchloric acid:
In(OH)3 + 3 HClO4 → In(ClO4)3 + 3 H2O
Physical properties
Indium(III) perchlorate forms colorless crystals. It is soluble in water and ethanol. The compound forms an octahydrate, In(ClO4)3·8H2O, which melts in its own water of crystallization at 80 °C. The octahydrate is easily soluble in ethanol and acetic acid. References Perchlorates Oxidizing agents Indium compounds
Indium perchlorate
[ "Chemistry" ]
112
[ "Perchlorates", "Redox", "Oxidizing agents", "Salts" ]
73,305,281
https://en.wikipedia.org/wiki/Green%20solvent
Green solvents are environmentally friendly chemical solvents that are used as a part of green chemistry. They came to prominence in 2015, when the UN defined a new sustainability-focused development plan based on 17 sustainable development goals, recognizing the need for green chemistry and green solvents for a more sustainable future. Green solvents are developed as more environmentally friendly alternatives to petrochemical solvents, derived from the processing of agricultural crops or by other sustainable methods. Expected characteristics of green solvents include ease of recycling, ease of biodegradation, and low toxicity.
Examples
Water
Although not an organic solvent, water is an attractive solvent because it is non-toxic and renewable. It is a useful solvent in many industrial processes, and traditional organic solvents can sometimes be replaced by aqueous preparations. Water-based coatings have largely replaced standard petroleum-based paints in the construction industry; however, solvent-based anti-corrosion paints remain among the most used today.
Supercritical water (SCW) is obtained above a temperature of 374.2 °C and a pressure of 22.05 MPa. It behaves as a dense gas with a dissolving power equivalent to that of organic solvents of low polarity. However, the solubility of inorganic salts in SCW is radically reduced. SCW is used as a reaction medium, especially in oxidation processes for the destruction of toxic substances such as those found in industrial aqueous effluents. The use of supercritical water faces two main technical challenges, namely corrosion and salt deposition.
Supercritical carbon dioxide
Supercritical carbon dioxide (CO2) is the most commonly used supercritical fluid because it is relatively easy to use. Temperatures above 31 °C and pressures above 7.38 MPa are sufficient to obtain supercriticality, at which point it behaves as a good nonpolar solvent.
Alcohols and esters
Ethanol is used in toiletries, cosmetics, some cleaners, and coatings. Bioethanol, made industrially by fermentation of sugars, starch, and cellulose, is widely available. Biobutanol (butyl alcohol, various isomers) is also produced by fermentation of sugars. Tetrahydrofurfuryl alcohol (THFA) is a specialty solvent that may be obtained from hemicellulose. Ethyl lactate, made from lactic acid obtained from corn starch, is notably used as a mixture with other solvents in some paint strippers and cleaners. Ethyl lactate has replaced solvents such as toluene, acetone, and xylene in some applications.
Lipid-derived solvents
Lipids (triglycerides) themselves can be used as solvents, but are mostly hydrolyzed to fatty acids and glycerol (glycerin). Fatty acids can be esterified with an alcohol to give fatty acid esters, e.g., FAMEs (fatty acid methyl esters) if the esterification is performed with methanol. Usually derived from natural gas or petroleum, the methanol used to produce FAMEs can also be obtained by other routes, including gasification of biomass and household hazardous waste. Glycerol from lipid hydrolysis can be used as a solvent in synthetic chemistry, as can some of its derivatives.
Deep eutectic solvents
Deep eutectic solvents (DES) have low melting points and can be cheap, safe, and useful in industry. Their properties are composition-dependent: for example, octylammonium bromide/decanoic acid (1:2 molar ratio) has a density of 0.8889 g·cm−3, lower than that of water, while choline chloride/trifluoroacetamide (1:2) reaches 1.4851 g·cm−3. Their miscibility is also composition-dependent.
A mixture whose melting point is lower than that of its constituents is called a eutectic mixture. Many such mixtures can be used as solvents, especially when the melting-point depression is very large, hence the term deep eutectic solvent (DES). One of the most commonly used substances to obtain a DES is the ammonium salt choline chloride. Smith, Abbott, and Ryder report that a mixture of urea (melting point: 133 °C) and choline chloride (melting point: 302 °C) in a 2:1 molar ratio has a melting point of 12 °C. Natural deep eutectic solvents (NADES) are also a research area relevant to green chemistry, being easy to produce from two low-cost components of well-characterized ecotoxicity: a hydrogen-bond acceptor and a hydrogen-bond donor.
Terpenes
Solvents in a diverse class of natural substances called terpenes are obtained by extraction from certain parts of plants. All terpenes are structurally multiples of isoprene, with the general formula (C5H8)n. D-Limonene, a monoterpene, is one of the best-known solvents in this class, as is turpentine. D-Limonene is extracted from citrus peels, while turpentine is obtained from pine trees (sap, stump) and as a by-product of the Kraft paper-making process (Sell, 2006). Turpentine is a mixture of terpenes whose composition varies according to its origin and production method. In Canada and the United States, mass concentrations of 40 to 65% α-pinene, 20 to 35% β-pinene, and 2 to 20% d-limonene are found. α-Pinene can replace n-hexane for the extraction of vegetable oil, and serve as a substitute solvent for extracting molecules such as carotenoids used as food additives. Turpentine, formerly used as a solvent in organic coatings, has now largely been replaced by petroleum hydrocarbons. Nowadays, it is mainly used as a source of its constituents, including α-pinene and β-pinene.
Ionic liquids
Ionic liquids are molten organic salts that are generally fluid at room temperature. Frequently used cations include imidazolium, pyridinium, ammonium, and phosphonium; frequently used anions include halides, tetrafluoroborate, hexafluorophosphate, and nitrate. Bubalo et al. (2015) argue that ionic liquids are non-flammable and chemically, electrochemically, and thermally stable. These properties allow ionic liquids to be used as green solvents, as their low volatility limits VOC emissions compared to conventional solvents. The ecotoxicity and poor degradability of ionic liquids have been recognized in the past, and the resources typically used for their production are non-renewable, as is the case for imidazole and halogenated alkanes (derived from petroleum). Ionic liquids produced from renewable and biodegradable materials have recently emerged, but their availability is low because of high production costs.
Switchable solvents
Bubbling CO2 into water or an organic solvent results in changes to certain properties of the liquid, such as its polarity, ionic strength, and hydrophilicity. This allows an organic solvent to form a homogeneous mixture with otherwise immiscible water. The process is reversible and was developed by Jessop et al. (2012) for potential uses in synthetic chemistry and in the extraction and separation of various substances. How green a switchable solvent is can be measured by the energy and material savings it provides; thus, one of the advantages of switchable solvents is the potential reuse of solvent and water in post-process applications.
Solvents from waste materials
First-generation biorefineries exploit food-based feedstocks such as starch and vegetable oils; for example, corn grain is used to make ethanol. Second-generation biorefineries instead use residues or wastes generated by various industries as feedstock for solvent manufacture. 2-Methyltetrahydrofuran, derived from lignocellulosic waste, has the potential to replace tetrahydrofuran, toluene, DCM, and diethyl ether in some applications, while levulinic acid esters from the same source could replace DCM in paint cleaners and strippers. Used cooking oils can be used to produce FAMEs. Glycerol, obtained as a byproduct of FAME synthesis, can in turn be converted into various solvents such as 2,2-dimethyl-1,3-dioxolane-4-methanol, usable as a solvent in the formulation of inks and cleaners. Fusel oil, an isomeric mixture of amyl alcohols, is a byproduct of ethanol production from sugars; green solvents such as isoamyl acetate or isoamyl methyl carbonate can be obtained from it. When these green solvents are used to manufacture nail polishes, VOC emissions are reduced by at least 68% compared with those caused by traditional solvents.

Petrochemical solvents with green characteristics
Because of the high price of new sustainable solvents, Clark et al. in 2017 listed twenty-five solvents that are currently considered acceptable replacements for hazardous solvents, even though they are derived from petrochemicals. These include propylene carbonate and dibasic esters (DBEs), both of which have been the subject of monographs on solvent substitution. Propylene carbonate and two DBEs are considered green in the manufacturer GlaxoSmithKline's (GSK) Solvent Sustainability Guide, which is used in the pharmaceutical industry. Propylene carbonate can be produced from renewable resources, but the DBEs that have appeared on the market in recent years are obtained as by-products of the synthesis of polyamides, derived from petroleum.

Other petrochemical solvents are variously referred to as green solvents, such as halogenated hydrocarbons like parachlorobenzotrifluoride, which has been used since the early 1990s in paints to replace smog-forming solvents. Siloxanes (R-SiO-R') are known in industry in the form of polymers (silicones) for their thermal stability and their elastic and non-stick properties. The early 1990s saw the emergence of low-molecular-weight siloxanes (methylsiloxanes), which can be used as solvents in precision cleaning, replacing stratospheric ozone-depleting solvents.

A final category of petrochemical solvents that qualify as green comprises polymeric solvents. The International Union of Pure and Applied Chemistry defines the term "polymer solvent" as "a polymer that acts as a solvent for low-molecular weight compounds". In industrial chemistry, polyethylene glycols (PEGs, H(OCH2CH2)nOH) are one of the most widely used families of polymeric solvents. PEGs with molecular weights below 600 Da are viscous liquids at room temperature, while heavier PEGs are waxy solids. Soluble in water and readily biodegradable, liquid PEGs have the advantage of negligible volatility (< 0.01 mmHg, or < 1.3 Pa, at 20 °C). PEGs are synthesized from ethylene glycol and ethylene oxide, both petrochemical-derived molecules, though ethylene glycol from renewable sources (cellulose) is commercially available.
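Since PEGs follow the repeat formula H(OCH2CH2)nOH and the liquid/solid boundary quoted above sits near 600 Da, the room-temperature state of a given grade can be estimated from its chain length. A small illustrative calculation in Python (the function name and the sample chain lengths are assumptions for illustration, not from the source; the masses are standard atomic weights):

```python
# Estimate the molar mass of PEG, H(OCH2CH2)nOH, and its likely state at
# room temperature, using the ~600 Da liquid/solid threshold cited above.

def peg_molar_mass(n_repeat_units: int) -> float:
    """Molar mass (g/mol) of H(OCH2CH2)nOH for n repeat units."""
    repeat_unit = 44.05   # -OCH2CH2- (C2H4O)
    end_groups = 18.02    # terminal H- and -OH together (one H2O equivalent)
    return end_groups + n_repeat_units * repeat_unit

for n in (4, 9, 13, 20):
    m = peg_molar_mass(n)
    state = "viscous liquid" if m < 600 else "waxy solid"
    print(f"n = {n:2d}: ~{m:4.0f} g/mol -> {state}")
```

Running this reproduces the pattern described above: short chains (n of roughly 4 to 13, about 200 to 590 g/mol) come out as liquids, while longer chains are waxy solids.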
Physical properties
The physical properties of solvents are important when selecting a solvent for given reaction conditions. In particular, dissolution properties make it possible to assess whether a particular solvent suits a chemical operation such as an extraction or a wash. Evaporation behavior is also important to consider, as it is indicative of potential volatile organic compound (VOC) emissions. Selected properties for several categories of green solvent are summarized below; some of these properties constrain their use in particular applications.

Fatty acid methyl esters have been investigated and compared with fossil diesel. At 20 °C or 40 °C, these solvents have a lower density than water at 4 °C (the temperature at which water is densest): at 40 °C, densities range from 0.9079 g·cm−3 (acetate) to 0.8488 g·cm−3 (arachidate), and at 20 °C from 0.9338 g·cm−3 (acetate) to 0.8663 g·cm−3 (pentadecanoate). Their kinematic viscosity depends on whether they are saturated or unsaturated and on the temperature: at 40 °C it ranges, for saturated FAMEs, from 0.340 mm²/s (acetate) to 6.39 mm²/s (nonadecanoate), and for unsaturated FAMEs from 5.61 mm²/s for the stearate to 7.21 mm²/s for the erucate. Their dielectric constant at 40 °C (ε40) decreases as the alkyl chain grows longer: the acetate, with the shortest chain, has ε40 = 6.852, against ε40 = 2.982 for the nonadecanoate.

The behavior of switchable solvents is governed by the pKa of their conjugate acid and by their octanol-water partition coefficient Kow. To be switchable, a solvent must have a pKa above 9.5, so that it can be protonated by carbonated water, and a log(Kow) between 1.2 and 2.5, so that the neutral form is hydrophobic while the protonated salt is hydrophilic; outside this window the solvent remains permanently hydrophilic or hydrophobic. These properties also depend on the volumetric ratio of the compound to water: for example, the switchable solvent N,N,N-tributylpentanamidine has log(Kow) = 5.99 at a 2:1 compound-to-water volumetric ratio, which is higher than 2.5.

Ionic liquids with low melting points are associated with asymmetric cations, while high melting points are associated with symmetric cations; branched alkyl chains also raise the melting point. Ionic liquids are denser than water, ranging from 1.05 to 1.64 g·cm−3 at 20 °C and from 1.01 to 1.57 g·cm−3 at 90 °C.

Applications
Some green solvents, in addition to being more sustainable, have been found to offer better physicochemical performance or reaction yields than traditional solvents. However, such results are mostly observations from experiments on particular green solvents and cannot be generalized. The effectiveness of a green solvent is quantified by calculating the "E factor", the ratio of the mass of waste material to the mass of desired product produced by a process.
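Expressed as a formula, with a worked example using hypothetical figures (the numbers below are illustrative, not from the source):

$$E = \frac{m_{\mathrm{waste}}}{m_{\mathrm{product}}}$$

A process that generates 50 kg of waste while delivering 20 kg of product therefore has E = 50/20 = 2.5; the lower the E factor, the cleaner the process. This is one reason recyclable solvents score well: solvent that is recovered and reused does not count toward the waste stream.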
Organic synthesis
Green solvent efficiency has mainly been demonstrated in extractions and separations, in comparison with traditional solvents. Supercritical CO2 is widely used in the food industry as an extraction solvent. Besides processes such as the extraction of flavoring agents, fragrances, essential oils, and lipids from plants, sc-CO2 is a green substitute for dichloromethane in coffee decaffeination, avoiding both a hazardous solvent and additional synthesis steps. Sc-CO2 can also be applied to polymerization reactions, specifically in PTFE formation, where it allows monomers to be handled safely and avoids explosive reactions of peroxide with dioxygen. Although the original process involves water, itself a green solvent, sc-CO2 generates less waste material.

With deep eutectic solvents, observations indicate that the more hydrophobic the solvent, the higher the extraction efficiency of neonicotinoids from aqueous solutions, although an exact trend has not yet been established; a hydrophobic DES also makes a biphasic system easier to achieve. In 2015, several hydrophobic DES were reported that pair a highly hydrophobic hydrogen-bond donor, such as decanoic acid or another fatty acid, with a quaternary ammonium salt as the hydrogen-bond acceptor.

The pharmaceutical industry intends to substitute greener options for its solvents, a significant step given how extensively solvents are used in the synthesis of active substances; however, this works against solvents with high boiling points, since a solvent must generally be evaporated at the end of a chemical reaction, and low-boiling solvents minimize the energy required for removal by distillation.

Industrial chemistry
Ethyl lactate is used for cleaning metal surfaces and for removing greases, oils, adhesives, and solid fuels; it is included in aqueous preparations for industrial degreasing, coatings, adhesives, and inks. Fatty acid methyl esters (FAMEs) have been used as a reactive diluent in coatings for continuous metal strip coating (e.g., the interior coating of food cans), reducing the amount of volatile solvent in this type of coating and lowering its overall toxicity. Tetrahydrofurfuryl alcohol (THFA) mixtures with other green solvents are studied for their cleaning properties; as an example, a mixture of THFA with FAME and ethyl lactate has been patented as a paint stripper. Ionic liquids have notable applications in electrodeposition, and their relevance as green solvents is further enhanced by the emergence of production methods based on renewable and biodegradable resources. Solvent manufacturers also provide industrial companies with databases that propose green alternative solvent mixtures of similar efficiency and reaction yield to those originally used in industrial processes; however, environmental and safety requirements are not always considered in these suggestions.

Safety
Green solvents are increasingly preferred because of their lower environmental impact, but they can still present dangers for human health and the environment, and for a number of them the impact remains unclear or uncategorized. Listed here is selected information from the safety data sheets of common green solvents.

Solvents derived from carbohydrates
For ethanol, the American Conference of Governmental Industrial Hygienists (ACGIH) advises a short-term exposure limit of 1000 ppm to avoid irritating the respiratory tract. The French National Agency for Food, Environmental, and Occupational Health Safety (ANSES) has recommended a short-term occupational exposure limit of 100 mg/m3 for butan-1-ol, a solvent used in paints, cleaners, and degreasers, to prevent irritation of the mucous membranes of the eyes and upper airways. Since 1998, the ACGIH has suggested an 8-hour exposure limit value (ELV) of 20 ppm of butan-1-ol to prevent irritation of the upper respiratory tract and eyes.
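The two butan-1-ol limits are quoted in different units; they can be compared by converting ppm (by volume) to mg/m3 with the molar volume of an ideal gas. A small sketch, assuming 25 °C and 101.325 kPa (about 24.45 L/mol); the function name is an illustrative assumption, not from the source:

```python
# Convert an airborne concentration from ppm (by volume) to mg/m3,
# assuming ideal-gas behavior at 25 C and 101.325 kPa.
MOLAR_VOLUME_L_PER_MOL = 24.45  # L/mol at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm: float, molar_mass_g_per_mol: float) -> float:
    return ppm * molar_mass_g_per_mol / MOLAR_VOLUME_L_PER_MOL

butanol = 74.12  # g/mol, butan-1-ol
print(ppm_to_mg_per_m3(20, butanol))           # ACGIH 8-h ELV: ~60.6 mg/m3
print(100 * MOLAR_VOLUME_L_PER_MOL / butanol)  # ANSES 100 mg/m3: ~33 ppm
```

On this basis the ANSES value corresponds to roughly 33 ppm, though the two limits are still not directly comparable, since one covers short-term peaks and the other an 8-hour average.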
THFA causes reproductive toxicity in male rats and also affects embryonic and fetal development in rats. In 1993, the American Industrial Hygiene Association suggested an ELV of 2 ppm for THFA to prevent testicular degeneration, based on the no-observed-effect levels of two subchronic studies in rats and dogs.

Deep eutectic solvents
DES components, according to Wazeer, Hayyan, and Hadj-Kali, are typically non-toxic and biodegradable. However, according to Hayyan et al., the DES they investigated were more harmful to the small crustacean Artemia than each of their individual components, which could be attributed to synergy. The abbreviation NADES refers to DES that contain only materials sourced from renewable resources; compared with other DES, these are typically expected to be less hazardous.

Legislation
Because green solvents are a recent development, few laws specifically regulating them have been enacted beyond the standard workplace safety precautions already in place, and laws enforcing the use of green solvents are not widespread.

References

Green chemistry
Green solvent
[ "Chemistry", "Engineering", "Environmental_science" ]
4,118
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "nan" ]