Dataset columns: id (int64, 39–79M) · url (string, 32–168 chars) · text (string, 7–145k chars) · source (string, 2–105 chars) · categories (list, 1–6 items) · token_count (int64, 3–32.2k) · subcategories (list, 0–27 items)
54,940,860
https://en.wikipedia.org/wiki/Integrin-like%20receptors
Integrin-like receptors (ILRs) are found in plants and carry functional properties similar to those of true integrin proteins. True homologs of integrins exist in mammals, invertebrates, and some fungi but not in plant cells. Mammalian integrins are heterodimeric transmembrane proteins that play a large role in bidirectional signal transduction. As transmembrane proteins, integrins connect the extracellular matrix (ECM) to the plasma membrane of the animal cell. The extracellular matrix of plant cells, fungi, and some protists is referred to as the cell wall. The plant cell wall is composed of a tough cellulose polysaccharide rather than the collagen fibers of the animal ECM. Even with these differences, research indicates that similar proteins involved in the interaction between the ECM and animal cells are also involved in the interaction of the cell wall and plant cells. Integrin-like receptors and integrin-linked kinases together have been implicated in surface adhesion, immune response, and ion accumulation in plant cells in a manner akin to the family of integrin proteins. Structure ILRs contain a transmembrane region with a large extracellular portion and a smaller intracellular section. Most commonly, ILRs resemble the β1 subunit found in integrin proteins. This structural similarity between ILRs and integrins was determined through various imaging techniques, SDS-PAGE, western blotting, and kinetic studies. These proteins are around 55 to 110 kDa, and some studies have found them to react with animal anti-β1 antibodies, suggesting structural similarity between animal integrins and these plant integrin-like receptors. Some ILRs mimic the α-subunit of integrin proteins, containing the ligand-binding region known as the I-domain. The I-domain functions primarily in the recognition and binding of a ligand. Conformational changes in the I-domain lead to ILR activation and are dependent on metal ion interaction at metal-ion-dependent adhesion sites (MIDAS). Activation of these sites occurs in the presence of Mg2+, Mn2+, and Ca2+. The extracellular domain of most ILRs contains the highly conserved tripeptide sequence Arg-Gly-Asp (RGD). This sequence is commonly found in integrins and other molecules that attach to the extracellular matrix for cell adhesion. The discovery of the RGD sequence in many proteins suggests the same adhesive ability. While the RGD sequence is the most common, some ILRs have been found with sequences that are similar but differ in one amino acid. A plant protein with structural similarity to integrins contains the amino acid sequence Asn-Gly-Asp (NGD). Function Plants ILRs play a role in protein–protein interactions and are found in the plasma membrane of plant cells in the leaf, root and vasculature of plants. Plants produce a physiological response that is dependent on information obtained from the environment. The majority of this information is received through mechanical signals, which include touch, sound, and gravity. Therefore, the interaction between the ECM and the internal cell response is incredibly important for receiving and interpreting information. The specific functionality of ILRs in plants is not well characterized, but in addition to mechanical signal transduction, they are believed to have some role in plant immune response, osmotic stress sensitivity, and ion regulation within the cell.
Surface adhesion Some β1 integrin-like receptors on the root caps of tobacco plants have been found to play a role in the plant's ability to detect gravitational pull and aid in root elongation in a process known as gravitropism. ILRs are found on the cellular membrane of plant protoplasts. The dispersion of the ILRs on these protoplasts can vary from species to species. The variation in ILR surface placement has been correlated with species growth behavior. For example, Rubus fruticosus cells have a uniform distribution of ILRs on their cellular membrane while Arabidopsis thaliana contains ILRs that cluster, resulting in cell growth clusters. Immunology Integrin-like receptors have the capability to relay messages from inside the cell to the outside of the cell and vice versa. This is an important factor in the initiation and sustaining of an immunological response. A substantial body of research has found ILR proteins that model the glycoproteins vitronectin and fibronectin, two important molecules in membrane stability and homeostasis. These vitronectin-like and fibronectin-like proteins provide further support that compounds in the cell membrane of plant cells have important regulatory functions in the immune response, such as the activation of immune cells. The non-race-specific disease resistance-1 (NDR1) protein was primarily discovered to have a large function in plant immune response. This protein shares functional homology with mammalian integrins in that it connects the ECM to the intracellular matrix to both stabilize the cell structure and allow for signal exchange. NDR1 is also believed to be involved in cell wall adhesion to the plasma membrane and fluid retention of the cell. Fungi In addition to adhesive properties, integrin-like receptors with RGD-binding sites have special functions in fungi. Using peptides that inhibit the activity of proteins with RGD activation, ILRs were discovered in Magnaporthe oryzae to initiate fungal conidial adhesion and the appressorium formation needed for host infection. Candida albicans is an opportunistic fungus with an integrin-like receptor protein known as αInt1p. This protein maintains structural similarity and sequence homology to the α-subunits of human leukocyte integrins. The αInt1p protein contains an RGD extracellular binding site and allows the organism to attach to epithelial cells in the host organism to begin the infection process. Once bound, the protein then assists in the morphogenesis of the fungus into a tube-like structure. Invertebrates In invertebrates, protein structures with the RGD-binding sequence assist in an array of different functions such as the repairing of wounds and cell adhesion. Integrin-like receptors are found in mollusks and have a part in the spreading of hemocytes to damaged locations in the cellular system. Studies that block the RGD-binding site of these integrin-like receptors indicate a reduction in hemocyte aggregation and spreading, suggesting the RGD-binding site on integrin-like receptors is a necessary component in organismal immune response. Further support for this claim shows that RGD-binding inhibition reduces nodule formation and encapsulation in invertebrate immune response. References Transmembrane receptors
Integrin-like receptors
[ "Chemistry" ]
1,421
[ "Transmembrane receptors", "Signal transduction" ]
76,438,868
https://en.wikipedia.org/wiki/Michelson%E2%80%93Sivashinsky%20equation
In combustion, the Michelson–Sivashinsky equation describes the evolution of a premixed flame front, subjected to the Darrieus–Landau instability, in the small heat release approximation. The equation was derived by Gregory Sivashinsky in 1977, who, along with Daniel M. Michelson, presented the numerical solutions of the equation in the same year. Let the planar flame front, in a suitable frame of reference, be on the -plane; then the evolution of this planar front is described by the amplitude function (where ) describing the deviation from the planar shape. The Michelson–Sivashinsky equation reads as where is a constant. Incorporating also the Rayleigh–Taylor instability of the flame, one obtains the Rakib–Sivashinsky equation (named after Z. Rakib and Gregory Sivashinsky), where denotes the spatial average of , which is a time-dependent function, and is another constant. N-pole solution The equations, in the absence of gravity, admit an explicit solution, which is called the N-pole solution since the equation admits a pole decomposition, as shown by Olivier Thual, Uriel Frisch and Michel Hénon in 1988. Consider the 1d equation where is the Fourier transform of . This has a solution of the form where (which appear in complex conjugate pairs) are poles in the complex plane. In the case of a periodic solution with periodicity , it is sufficient to consider poles whose real parts lie between the interval and . In this case, we have These poles are interesting because in physical space, they correspond to locations of the cusps forming in the flame front. Dold–Joulin equation In 1995, John W. Dold and Guy Joulin generalised the Michelson–Sivashinsky equation by introducing a second-order time derivative, which is consistent with the quadratic nature of the dispersion relation for the Darrieus–Landau instability. The Dold–Joulin equation is given by where corresponds to the non-local integral operator. Joulin–Cambray equation In 1992, Guy Joulin and Pierre Cambray extended the Michelson–Sivashinsky equation to include higher-order correction terms, following an earlier incorrect attempt to derive such an equation by Gregory Sivashinsky and Paul Clavin. The Joulin–Cambray equation, in dimensional form, reads as See also Kuramoto–Sivashinsky equation References Differential equations Fluid dynamics Combustion 1977 in science
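The displayed formulas in the entry above were lost in extraction. As a reference point, a commonly quoted form of the Michelson–Sivashinsky equation is sketched below in LaTeX; the symbols (amplitude phi, constant nu, nonlocal operator I) are assumed here rather than recovered from the article.

```latex
% One common form of the Michelson--Sivashinsky equation (notation assumed):
\frac{\partial \phi}{\partial t}
  + \frac{1}{2}\left(\frac{\partial \phi}{\partial x}\right)^{2}
  = \nu\,\frac{\partial^{2}\phi}{\partial x^{2}} + I[\phi],
\qquad I\!\left[e^{ikx}\right] = |k|\,e^{ikx},
% i.e. I acts in Fourier space as multiplication by |k| (the Darrieus--Landau operator),
% and \nu > 0 is a constant measuring the stabilising influence of flame curvature.
```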
Michelson–Sivashinsky equation
[ "Chemistry", "Mathematics", "Engineering" ]
518
[ "Chemical engineering", "Mathematical objects", "Differential equations", "Equations", "Combustion", "Piping", "Fluid dynamics" ]
61,235,210
https://en.wikipedia.org/wiki/Techniques%20to%20isolate%20haematopoietic%20stem%20cells
Since haematopoietic stem cells cannot be isolated as a pure population, it is not possible to identify them under a microscope. Therefore, there are many techniques to isolate haematopoietic stem cells (HSCs). HSCs can be identified or isolated by the use of flow cytometry, where the combination of several different cell surface markers is used to separate the rare HSCs from the surrounding blood cells. HSCs lack expression of mature blood cell markers and are thus called Lin−. Lack of expression of lineage markers is used in combination with detection of several positive cell-surface markers to isolate HSCs. In addition, HSCs are characterized by their small size and low staining with vital dyes such as rhodamine 123 (rhodamine lo) or Hoechst 33342 (side population). CD34+ cells can be isolated from peripheral blood samples by four different techniques: by magnetic beads with MACS; by FACS; by labelled antibodies; or manually by culture, since CD34+ cells remain in suspension while almost all other cells in PBMC adhere, so CD34+ cells can be isolated through this process. Cluster of differentiation and other markers The classical marker of human HSC is CD34, first described independently by Civin et al. and Tindle et al. It is used to isolate HSC for reconstitution of patients who are haematologically incompetent as a result of chemotherapy or disease. Many markers belong to the cluster of differentiation series, like: CD34, CD38, CD90, CD133, CD105, CD45, and also c-kit – the receptor for stem cell factor. There are many differences between the human and murine hematopoietic cell markers for the commonly accepted type of hematopoietic stem cells. Mouse HSC: EMCN+, CD34lo/−, SCA-1+, Thy1.1+/lo, CD38+, C-kit+, lin− Human HSC: EMCN+, CD34+, CD59+, Thy1/CD90+, CD38lo/−, C-kit/CD117+, lin− However, not all stem cells are covered by these combinations that, nonetheless, have become popular. In fact, even in humans, there are hematopoietic stem cells that are CD34−/CD38−. Also some later studies suggested that the earliest stem cells may lack c-kit on the cell surface. For human HSCs the use of CD133 was one step ahead, as both CD34+ and CD34− HSCs were CD133+. Traditional purification methods used to yield a reasonable purity level of mouse hematopoietic stem cells generally require a large (~10–12) battery of markers, most of which are surrogate markers with little functional significance, and which thus only partially overlap with the stem cell populations and sometimes with other closely related cells that are not stem cells. Also, some of these markers (e.g., Thy1) are not conserved across mouse species, and use of markers like CD34− for HSC purification requires mice to be at least 8 weeks old. SLAM code Alternative methods that could give rise to a similar or better harvest of stem cells are an active area of research, and are presently emerging. One such method uses a signature of SLAM family cell surface molecules. The SLAM (signaling lymphocyte activation molecule) family is a group of more than 10 molecules whose genes are located mostly tandemly in a single locus on chromosome 1 (mouse), all belonging to a subset of the immunoglobulin gene superfamily, and originally thought to be involved in T-cell stimulation. This family includes CD48, CD150, CD244, etc., CD150 being the founding member, and, thus, also known as slamF1, i.e., SLAM family member 1.
The signature SLAM codes for the hemopoietic hierarchy are: Hematopoietic stem cells (HSC): CD150+CD48−CD244− Multipotent progenitor cells (MPPs): CD150−CD48−CD244+ Lineage-restricted progenitor cells (LRPs): CD150−CD48+CD244+ Common myeloid progenitor (CMP): lin−SCA-1−c-kit+CD34+CD16/32mid Granulocyte-macrophage progenitor (GMP): lin−SCA-1−c-kit+CD34+CD16/32hi Megakaryocyte-erythroid progenitor (MEP): lin−SCA-1−c-kit+CD34−CD16/32low For HSCs, CD150+CD48− was sufficient instead of CD150+CD48−CD244− because CD48 is a ligand for CD244, and both would be positive only in the activated lineage-restricted progenitors. This code appears to be more efficient than the more tedious earlier sets of large numbers of markers, and is also conserved across mouse strains; however, recent work has shown that this method excludes a large number of HSCs and includes an equally large number of non-stem cells. CD150+CD48− gave stem cell purity comparable to Thy1loSCA-1+lin−c-kit+ in mice. LT-HSC/ST-HSC/early MPP/late MPP Irving Weissman's group at Stanford University was the first to isolate mouse hematopoietic stem cells in 1986 and was also the first to work out the markers that distinguish the mouse long-term (LT-HSC) and short-term (ST-HSC) hematopoietic stem cells (self-renewal-capable) and the multipotent progenitors (MPP, low or no self-renewal capability – the later the developmental stage of the MPP, the lesser the self-renewal ability and the more of some of the markers like CD4 and CD135): LT-HSC: CD34−, CD38−, SCA-1+, Thy1.1+/lo, C-kit+, lin−, CD135−, Slamf1/CD150+ ST-HSC: CD34+, CD38+, SCA-1+, Thy1.1+/lo, C-kit+, lin−, CD135−, Slamf1/CD150+, Mac-1 (CD11b)lo Early MPP: CD34+, SCA-1+, Thy1.1−, C-kit+, lin−, CD135+, Slamf1/CD150−, Mac-1 (CD11b)lo, CD4lo Late MPP: CD34+, SCA-1+, Thy1.1−, C-kit+, lin−, CD135high, Slamf1/CD150−, Mac-1 (CD11b)lo, CD4lo References Stem cell research
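To make the combinatorial marker logic above explicit, here is a minimal, purely illustrative Python sketch; the population names and marker states mirror the SLAM codes listed above, and the classify helper is an assumption for illustration, not an actual gating tool.

```python
# Illustrative lookup of the SLAM-code signatures listed above.
SLAM_SIGNATURES = {
    "HSC": {"CD150": "+", "CD48": "-", "CD244": "-"},
    "MPP": {"CD150": "-", "CD48": "-", "CD244": "+"},
    "LRP": {"CD150": "-", "CD48": "+", "CD244": "+"},
}

def classify(cell_markers):
    """Return the population whose SLAM signature matches the measured marker states."""
    for population, signature in SLAM_SIGNATURES.items():
        if all(cell_markers.get(m) == state for m, state in signature.items()):
            return population
    return "unclassified"

print(classify({"CD150": "+", "CD48": "-", "CD244": "-"}))  # -> HSC
```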
Techniques to isolate haematopoietic stem cells
[ "Chemistry", "Biology" ]
1,506
[ "Translational medicine", "Tissue engineering", "Stem cell research" ]
58,125,639
https://en.wikipedia.org/wiki/Acetylthiocholine
Acetylthiocholine is an acetylcholine analog used in scientific research. References Thiocholine esters Thioesters Quaternary ammonium compounds Acetate esters
Acetylthiocholine
[ "Chemistry" ]
42
[ "Thioesters", "Functional groups" ]
58,125,800
https://en.wikipedia.org/wiki/R.%20Cengiz%20Ertekin
R. Cengiz Ertekin is a professor of Marine Hydrodynamics and Ocean Engineering. He currently holds a guest professor position at Harbin Engineering University of China. He is best known for his contributions to the development of nonlinear water wave theories, hydroelasticity of very large floating structures (VLFS), wave energy, and tsunami and storm impact on coastal bridges. He is also the co-developer, along with Professor H. Ronald Riggs of the University of Hawaiʻi, of the computer program HYDRAN for solving linear fluid-structure interaction problems of floating and fixed bodies. Early life and education R. Cengiz Ertekin was born and raised in Turkey. He received a B.Sc. degree in Naval Architecture and Marine Engineering from Istanbul Technical University, the top technical university of Turkey, in 1977. Following the encouragement of his advisor, Prof. M. Cengiz Dokmeci, he moved to the Department of Naval Architecture and Offshore Engineering of the University of California, Berkeley, United States, for higher education. He received his M.Sc. and Ph.D. degrees in 1980 and 1984, respectively. His M.Sc. advisors were Professors Marshall P. Tulin and William C. Webster. His Ph.D. advisor was Professor John V. Wehausen. Cengiz was the last student of Prof. John V. Wehausen before his retirement. After graduation, Professor Wehausen offered Cengiz a postdoctoral research assistant position for 18 months at U.C. Berkeley. Professional career Most of Ertekin's professional career has been dedicated to academic work; however, he also has several years of experience working in industry. In 1985, Ertekin joined the Research Center of Shell Development Company in Houston, Texas. He took a faculty position (hired at the associate professor level) at the Department of Ocean Engineering of the University of Hawaiʻi at Mānoa in 1986, received tenure within four years, and was promoted to Professor in 1994. The Ocean Engineering Department of UH was established by Professor Charles Bretschneider in 1966 and is one of the first of its kind in the US. At the University of Hawaiʻi, Ertekin led and contributed immensely to the success of the School of Ocean and Earth Science and Technology and the Department of Ocean and Resources Engineering (ORE, formerly Ocean Engineering). In the era of PCs, for example, Professor Ertekin played a key role in transforming the department from one focusing mostly on field and experimental studies into one that is also a leading institute in modern and computational hydrodynamics. The department was the host of some internationally leading conferences, workshops and meetings (details given below), mostly organized and chaired by Cengiz. After almost 30 years, he retired from the University of Hawaiʻi in September 2015. Starting in March 2014, he became a guest professor at the College of Shipbuilding Engineering of Harbin Engineering University in China. Teaching and advising Ertekin has taught numerous courses on hydrodynamics and ocean engineering at the University of Hawaiʻi at Mānoa, and at the University of California, Berkeley. At the Ocean Engineering Department of the University of Hawaiʻi, Ertekin developed and taught several courses including Nonlinear Water Wave Theories (ORE 707), Hydrodynamics of Fluid-Body Interaction (ORE 609), Buoyancy and Stability (ORE 411), and Marine Renewable Energy (ORE 677), to name a few. At the University of California, Berkeley, he taught Ship Statics (NAOE 151) and Ship Resistance and Propulsion (NAOE 152A).
At the University of Hawaiʻi, Ertekin advised and mentored over 50 graduate students. Research Ertekin's research on Marine Hydrodynamics and Ocean Engineering has extended over a period of about forty years. His work covers both basic and applied research through analytical, computational and experimental approaches. Below are examples of his pioneering contributions. Other topics of significant research contribution by Ertekin include ship resistance, marine energy, and oil spills. The Green-Naghdi water wave theory The Green-Naghdi (GN) equations are nonlinear water wave equations that were originally developed by British mathematician Albert E. Green and Iranian-American mechanical engineer Paul M. Naghdi in the 1970s. The original equations, namely the Level I GN equations, are mostly applicable to the propagation of long waves in shallow waters. However, high-level GN equations have also been developed which are applicable to deep-water waves. The equations differ from the classical water wave theories (e.g. Boussinesq equations) in that the flow need not be irrotational, and in that no perturbation is used in deriving the equations. Hence, the GN equations satisfy the nonlinear boundary conditions exactly, and postulate the integrated conservation laws. Although the GN equations were developed relatively recently (compared to other wave theories), they are well known and fairly well understood by the research and scientific community. Ertekin's Ph.D. advisor and dissertation committee chair was Professor Wehausen. Others on his Ph.D. committee were Professor William Webster and Professor Paul M. Naghdi. Working under the close guidance of his advisors, he was one of the first to use the nonlinear equations (which had been introduced just a couple of years earlier by Profs. Green and Naghdi). In his Ph.D. dissertation, Ertekin was the first to give the equations in what is now a familiar form to the hydrodynamics community by providing closed-form relations for the pressures. He named the equations the Green-Naghdi equations. Upon completion of his Ph.D., Ertekin continued research on the GN equations. He has patiently introduced the GN equations to his graduate students and postdoctoral researchers and has guided many of them to perform basic and applied research on or by use of the GN equations. Along with his research assistants and postdocs, he developed the Irrotational GN (IGN) equations and high-level GN equations. They have solved some classical and challenging hydrodynamics problems by use of the GN equations, including nonlinear wave diffraction and refraction, nonlinear wave loads on vertical cylinders, wave interaction with elastic bodies and VLFS, wave loads on coastal bridges, and wave interaction with wave energy devices, among many others. Hydroelasticity and VLFS The Mobile Offshore Base (MOB) project of the USA and the Mega-Float project of Japan are two examples of Very Large Floating Structures (VLFS). These are very large floating platforms consisting of interconnected modules whose length can extend to several kilometers. Due to the unprecedentedly long length, displacement and associated hydroelastic response of VLFS, the state-of-the-art analysis and design approaches that were used for smaller floating platforms were not adequate. It quickly became obvious that new approaches had to be developed to tackle the complex problems associated with the dynamics and response of VLFS.
Starting in the 1990s, Ertekin pioneered research on the hydroelasticity of VLFS. He and H. Ronald Riggs of the Civil Engineering Department at the University of Hawaii coined the term VLFS. They have solved the hydroelasticity problem of VLFS by use of both linear and nonlinear approaches, in two and three dimensions. Ertekin has also introduced new approaches and equations to study this topic, including the use of nonlinear water wave models to analyse the hydroelastic response of VLFS of mat type. His work and research on the hydroelasticity of VLFS have opened a new era for these topics and given more confidence in understanding the dynamics and response of the structures. Wave loads on coastal bridges Some recent tsunamis and hurricanes, such as the Tohoku tsunami in Japan (2011) and Hurricane Katrina in the United States (2005), caused significant damage to the decks of coastal bridges and structures. Interaction of surface waves with coastal bridges is a complex problem, involving fluid-structure interaction, multi-phase fluids, wave breaking, and overtopping. These are of course in addition to the difficulties associated with the structural analysis. Ertekin and his students studied bridge failure mechanisms and possible mitigating solutions. They developed models used to assess the vulnerability of coastal bridges in the US to tsunamis, storm surge and waves. Publications and professional services Ertekin has over 150 peer-reviewed publications. He has been on the editorial board of more than ten internationally leading journals since the early 1990s, and editor of several special issues in various journals, for example the Renewable Energy: Leveraging Ocean and Waterways special issue of the Applied Ocean Research journal (2009). He was the co-editor-in-chief of Elsevier's Ocean Engineering journal (2006–2010), and he is the founding editor-in-chief of Springer's Journal of Ocean Engineering and Marine Energy. Ertekin has been a keynote speaker at several leading meetings and conferences. References 1954 births Living people Fluid dynamicists Scientific journal editors Shell plc people People from Turgutlu Istanbul Technical University alumni University of California, Berkeley alumni University of Hawaiʻi at Mānoa faculty UC Berkeley College of Engineering faculty Academic staff of Harbin Engineering University
R. Cengiz Ertekin
[ "Chemistry" ]
1,981
[ "Fluid dynamicists", "Fluid dynamics" ]
58,128,377
https://en.wikipedia.org/wiki/Cranial%20evolutionary%20allometry
Cranial evolutionary allometry (CREA) is a scientific theory regarding trends in the shape of mammalian skulls during the course of evolution in accordance with body size (i.e., allometry). Specifically, the theory posits that there is a propensity among closely related mammalian groups for the skulls of the smaller species to be short and those of the larger species to be long. This propensity appears to hold true for placental as well as non-placental mammals, and is highly robust. Examples of groups which exhibit this characteristic include antelopes, fruit bats, mongooses, squirrels and kangaroos as well as felids. It is believed that the reason for this trend has to do with size-related constraints on the formation and development of the mammalian skull. Facial length is one of the best known examples of heterochrony. However, biomechanical principles relating to bite force might also be a major driver of the pattern among species that share similar diets. Because the hardness of a bite is a product of muscle force and leverage, larger species with bigger jaw muscles can bite a given food item with a longer face, but to bite into the same food, smaller species often need to have a shorter face to increase leverage as compensation for their weaker jaw muscles. References Branches of biology
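The bite-force argument above can be summarised with a simple lever relation; the notation below is an illustrative assumption and is not taken from the article.

```latex
% Lever sketch of the bite-force argument (symbols assumed for illustration):
F_{\text{bite}} \approx F_{\text{muscle}} \times \frac{L_{\text{in}}}{L_{\text{out}}},
% where L_in is the in-lever (jaw joint to muscle insertion) and L_out the out-lever
% (jaw joint to bite point, which lengthens with the face). To reach the same F_bite
% on the same food, a smaller species with weaker F_muscle must reduce L_out,
% i.e. evolve a shorter face, while a larger species can afford a longer one.
```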
Cranial evolutionary allometry
[ "Biology" ]
274
[ "nan" ]
58,128,495
https://en.wikipedia.org/wiki/SEC%20classification%20of%20goods%20and%20services
Economists and marketers use the Search, Experience, Credence (SEC) classification of goods and services, which is based on the ease or difficulty with which consumers can evaluate or obtain information. These days most economists and marketers treat the three classes of goods as a continuum. Archetypal goods are (Harsh V. Verma, Services Marketing: Text and Cases, 2nd ed., India, Dorling Kindersley, 2012, pp. 261–264): Search goods: those with attributes that can be evaluated prior to purchase or consumption. Consumers rely on prior experience, direct product inspection and other information search activities to locate information that assists in the evaluation process. Most products fall into the search goods category (e.g. clothing, office stationery, home furnishings). Experience goods: those that can be accurately evaluated only after the product has been purchased and experienced. Many personal services fall into this category (e.g. restaurant, hairdresser, beauty salon, theme park, travel, holiday). Credence goods: those that are difficult or impossible to evaluate even after consumption has occurred. Evaluation difficulties may arise because the consumer lacks the knowledge or technical expertise to make a realistic evaluation or, alternatively, because the cost of information-acquisition may outweigh the value of the information available. Many professional services fall into this category (e.g. accountant, legal services, medical diagnosis/treatment, cosmetic surgery). Search good A search good is a product or service with features and characteristics easily evaluated before purchase. In a distinction originally due to Philip Nelson, a search good is contrasted with an experience good. Search goods are more subject to substitution and price competition, as consumers can easily verify the price of the product and alternatives at other outlets and make sure that the products are comparable. Branding and detailed product specifications act to transform a product from an experience good into a search good. Experience good An experience good is a product or service where product characteristics, such as quality or price, are difficult to observe in advance, but these characteristics can be ascertained upon consumption. The concept is originally due to Philip Nelson, who contrasted an experience good with a search good. Experience goods pose difficulties for consumers in accurately making consumption choices. In service areas, such as healthcare, they reward reputation and create inertia. Experience goods typically have lower price elasticity than search goods, as consumers fear that lower prices may be due to unobservable problems or quality issues. Credence good A credence good (or post-experience good) is a good whose utility impact is difficult or impossible for the consumer to ascertain. In contrast to experience goods, the utility gain or loss of credence goods is difficult to measure after consumption as well. The seller of the good knows the utility impact of the good, creating a situation of asymmetric information. Examples of credence goods include: vitamin supplements, education, car repairs, many forms of medical treatment, home maintenance services such as plumbing and electricity, transactional legal services, and psychology. Credence goods may display a direct (rather than inverse) relationship between price and demand—similar to Veblen goods, when price is the only possible indicator of quality.
A consumer might avoid the least expensive products to avoid suspected fraud and poor quality. So a restaurant customer may avoid the cheapest wine on the menu, but instead purchase something slightly more expensive. However, even after drinking it the buyer is unable to evaluate its relative value compared to all the wines they have not tried (unless they are a wine expert). This course of action—buying the second cheapest option—is observable by the restaurateur, who can manipulate the pricing on the menu to maximize their margin, i.e. ensuring that the second cheapest wine is actually the least costly to the restaurant. Another practical application of this principle would be for competing job applicants not to propose too low a wage when asked, lest the employer think that the employee has something to hide or does not have the necessary qualifications for the job. In an unregulated market, prices of credence goods tend to converge, i.e. the same flat rate is charged for high and low value goods. The reason is that suppliers of credence goods tend to overcharge for low value goods, since the customers are not aware of the low value, while competitive pressures force down the price of high value goods. Another reason for price convergence is that customers become aware of the possibility of being overcharged, and compensate by favoring more expensive goods over cheaper ones. For example, a customer may ask for a complete replacement of a broken car part with a new one, irrespective of whether the damage is small or large (which the customer doesn't know). In this case the new part is "proof" that the customer hasn't been overcharged. A 2020 study into credence goods within the medical sector also showed connections between socioeconomic status (SES) and the likelihood of overtreatment in the dental industry. Results showed that when a patient portrays a higher SES, practitioners are less likely to offer treatment that is more invasive and expensive. This goes against intuition, as the wealthier would be expected to be more likely to experience overcharging compared to people of a lower SES. Recent studies show that consumer decision making varies in the context of search, experience and credence services. Consumers spend more time searching for information about credence services and the least time when buying search services. References Bibliography Search Luis M. B. Cabral: Introduction to Industrial Organisation, Massachusetts Institute of Technology Press, 2000, page 223. Philip Nelson, "Information and Consumer Behavior", 78 Journal of Political Economy 311, 312 (1970). Credence Experience Luis M. B. Cabral: Introduction to Industrial Organization, Massachusetts Institute of Technology Press, 2000, page 223. Philip Nelson, "Information and Consumer Behavior", 78(2) Journal of Political Economy 311-329 (1970). Aidan R. Vining and David L. Weimer, "Information Asymmetry Favoring Sellers: A Policy Framework," 21(4) Policy Sciences 281–303 (1988). Goods (economics)
SEC classification of goods and services
[ "Physics" ]
1,265
[ "Materials", "Goods (economics)", "Matter" ]
64,719,459
https://en.wikipedia.org/wiki/Reciprocity%20%28electrical%20networks%29
Reciprocity in electrical networks is a property of a circuit that relates voltages and currents at two points. The reciprocity theorem states that the current at one point in a circuit due to a voltage at a second point is the same as the current at the second point due to the same voltage at the first. The reciprocity theorem is valid for almost all passive networks. The reciprocity theorem is a feature of a more general principle of reciprocity in electromagnetism. Description If a current, I, injected into port A produces a voltage, V, at port B, and I injected into port B produces V at port A, then the network is said to be reciprocal. Equivalently, reciprocity can be defined by the dual situation: applying a voltage, V, at port A producing a current, I, at port B, and the same voltage at port B producing the same current at port A. In general, passive networks are reciprocal. Any network that consists entirely of ideal capacitances, inductances (including mutual inductances), and resistances, that is, elements that are linear and bilateral, will be reciprocal. However, passive components that are non-reciprocal do exist. Any component containing ferromagnetic material is likely to be non-reciprocal. Examples of passive components deliberately designed to be non-reciprocal include circulators and isolators. The parameter matrix of a reciprocal network is symmetrical about the main diagonal if expressed in terms of z-parameters, y-parameters, or s-parameters. A non-symmetrical matrix implies a non-reciprocal network. A symmetric matrix does not imply a symmetric network. In some parameterisations of networks, the representative matrix is not symmetrical even for reciprocal networks. Common examples are h-parameters and ABCD-parameters, but they all have some other condition for reciprocity that can be calculated from the parameters. For h-parameters the condition is h12 = −h21, and for the ABCD parameters it is AD − BC = 1. These representations mix voltages and currents in the same column vector and therefore do not even have matching units in transposed elements. Example An example of reciprocity can be demonstrated using an asymmetrical resistive attenuator. An asymmetrical network is chosen as the example because a symmetrical network is self-evidently reciprocal. Injecting 6 amperes into port 1 of this network produces 24 volts at port 2. Injecting 6 amperes into port 2 produces 24 volts at port 1. Hence, the network is reciprocal. In this example, the port that is not injecting current is left open circuit. This is because a current generator applying zero current is an open circuit. If, on the other hand, one wished to apply voltages and measure the resulting current, then the port to which the voltage is not applied would be made short circuit. This is because a voltage generator applying zero volts is a short circuit. Proof Reciprocity of electrical networks is a special case of Lorentz reciprocity, but it can also be proven more directly from network theorems. This proof shows reciprocity for a two-node network in terms of its admittance matrix, and then shows reciprocity for a network with an arbitrary number of nodes by an induction argument. A linear network can be represented as a set of linear equations through nodal analysis.
For a network consisting of n+1 nodes (one being a reference node) where, in general, an admittance is connected between each pair of nodes and where a current is injected in each node (provided by an ideal current source connected between the node and the reference node), these equations can be expressed in the form of an admittance matrix equation, [I] = [Y][V], where Ik is the current injected into node k by a generator (which amounts to zero if no current source is connected to node k), Vk is the voltage at node k with respect to the reference node (one could also say, it is the electric potential at node k), Yjk (j ≠ k) is the negative of the admittance directly connecting nodes j and k (if any), and Ykk is the sum of the admittances connected to node k (regardless of the other node the admittance is connected to). This representation corresponds to the one obtained by nodal analysis. If we further require that the network is made up of passive, bilateral elements, then Yjk = Ykj, since the admittance connected between nodes j and k is the same element as the admittance connected between nodes k and j. The matrix is therefore symmetrical. For the case of two nodes the matrix equation reduces to I1 = Y11V1 + Y12V2 and I2 = Y21V1 + Y22V2. From this it can be seen that I2/V1 = Y21 when V2 = 0, and I1/V2 = Y12 when V1 = 0. But since Y12 = Y21, it follows that I2/V1 = I1/V2, which is synonymous with the condition for reciprocity. In words, the ratio of the current at one port to the voltage at another is the same ratio if the ports being driven and measured are interchanged. Thus reciprocity is proven for the two-node case. For the case of a matrix of arbitrary size, the order of the matrix can be reduced through node elimination. After eliminating the sth node, the new admittance matrix will have the form Y′jk = Yjk − YjsYsk/Yss. It can be seen that this new matrix is also symmetrical. Nodes can continue to be eliminated in this way until only a 2×2 symmetrical matrix remains involving the two nodes of interest. Since this matrix is symmetrical it is proved that reciprocity applies to a matrix of arbitrary size when one node is driven by a voltage and the current is measured at another. A similar process using the impedance matrix from mesh analysis demonstrates reciprocity where one node is driven by a current and the voltage is measured at another. References Bibliography Bakshi, U.A.; Bakshi, A.V., Electrical Networks, Technical Publications, 2008. Guillemin, Ernst A., Introductory Circuit Theory, New York: John Wiley & Sons, 1953. Kumar, K. S. Suresh, Electric Circuits and Networks, Pearson Education India, 2008. Harris, Vincent G., "Microwave ferrites and applications", ch. 14 in Mailadil T. Sebastian, Rick Ubic, Heli Jantunen, Microwave Materials and Applications, John Wiley & Sons, 2017. Zhang, Kequian; Li, Dejie, Electromagnetic Theory for Microwaves and Optoelectronics, Springer Science & Business Media, 2013. Circuit theorems Linear electronic circuits
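A small numerical sketch of the worked example above. The article's attenuator figure is not reproduced here, so the resistor values R1, R2, R3 below are assumptions chosen only to reproduce the quoted 6 A / 24 V figures; reciprocity holds for any passive, bilateral choice of values.

```python
import numpy as np

# Asymmetrical T-attenuator: port1 --R1-- node --R2-- port2, with R3 from node to ground.
# R1, R2, R3 are hypothetical values assumed for this sketch.
R1, R2, R3 = 2.0, 1.0, 4.0
G1, G2, G3 = 1 / R1, 1 / R2, 1 / R3

# Nodal admittance matrix for nodes [port1, port2, internal] (ground is the reference node).
Y = np.array([
    [ G1,   0.0, -G1],
    [ 0.0,  G2,  -G2],
    [-G1,  -G2,   G1 + G2 + G3],
])

def node_voltages(i_port1, i_port2):
    """Solve Y·V = I with currents injected only at the two ports (other port left open)."""
    return np.linalg.solve(Y, np.array([i_port1, i_port2, 0.0]))

v_at_2 = node_voltages(6.0, 0.0)[1]   # drive port 1 with 6 A, read port 2
v_at_1 = node_voltages(0.0, 6.0)[0]   # drive port 2 with 6 A, read port 1
print(v_at_2, v_at_1)                 # both 24.0 V here; their equality is what reciprocity requires
```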
Reciprocity (electrical networks)
[ "Physics" ]
1,286
[ "Equations of physics", "Circuit theorems", "Physics theorems" ]
64,723,653
https://en.wikipedia.org/wiki/Pro-gastrin-releasing-peptide
Pro-gastrin-releasing-peptide, also known as Pro-GRP, is the precursor of gastrin-releasing peptide (GRP), a neurotransmitter that belongs to the bombesin-related neuromedin B family. GRP stimulates the secretion of gastrin in order to increase the acidity of the gastric acid. Pro-GRP is a peptide composed of 125 amino acids, expressed in the nervous system and digestive tract. It is different from progastrin, an 80-amino-acid precursor of gastrin in its intracellular version and an oncogene in its extracellular version (hPG80). The presence of GRP in lung cancer samples was identified in 1983. In pathological situations, GRP has mitogenic activity in vitro in many cancers including pancreatic cancer, small cell lung carcinoma, prostate cancer, kidney cancer, breast and colorectal cancer. GRP could operate as an autocrine growth factor. In cancers, GRP induces cell growth and inhibits apoptosis by shutting down the endoplasmic reticulum stress pathway. The mechanisms of the impacted signal pathways have not been established. As early as 1994, research on Pro-GRP as a biomarker for small-cell lung carcinoma began. Because of the very short half-life of GRP (2 minutes), Pro-GRP is used for measurements and analysis. Since then, Pro-GRP has been used as a tumor marker for patients with small-cell lung carcinoma in limited and extended stages. References Molecular biology Cell biology Neurotransmitters
Pro-gastrin-releasing-peptide
[ "Chemistry", "Biology" ]
349
[ "Cell biology", "Neurotransmitters", "Molecular biology", "Biochemistry", "Neurochemistry" ]
64,723,881
https://en.wikipedia.org/wiki/Luisa%20Torsi
Luisa Torsi (born 1964) is an Italian chemist who is a professor at the Università degli Studi di Bari. She was the first woman to serve as President of the European Materials Research Society (E-MRS). In 2019 she was named by the International Union of Pure and Applied Chemistry as one of the world's most Distinguished Women in Chemistry. Early life and education Torsi was born in Bari. She earned her undergraduate degree in physics at the University of Bari. Torsi has said that she realised that she wanted to be a researcher during her master's research project. She remained at Bari for her graduate studies, switching specialties to chemical sciences. In 1994 Torsi moved to the United States, where she joined Bell Labs as a postdoctoral researcher. During her postdoctoral research Torsi investigated organic field-effect transistors. Research and career She returned to Italy in 1993, when she was made Assistant Professor in the Department of Chemistry. In 2005 Torsi was made a full Professor of Chemistry. Her research considers organic semiconductors and their application in electronic devices. She has developed single-molecule transistors that are capable of label-free disease detection. The device was capable of detecting zeptomolar concentrations. The discovery launched the Horizon 2020 project SiMBiT, a bio-electronic system that looks to achieve single-molecule detection of biomarkers for point-of-care testing. Torsi looks to use nanoparticle-based sensors to detect toxic gases. In 2015 Torsi delivered a TED talk at TEDxBari, where she discussed resilience. Torsi serves on the editorial board of ACS Omega. Awards and honours 2010 Heinrich Emanuel Merck international award 2015 Global-Women Inventors and Innovators Network Platinum Prize 2016 Elected President of the European Materials Research Society 2017 Elected Fellow of the Materials Research Society 2018 Italian Chemical Society SCI Silver Medal 2019 Alto Riconoscimento "Virtù e Conoscenza" 2019 European Chemical Society Robert Kellner Lecturer 2019 Elected to the Board of Directors of the Leonardo Foundation 2021 Wilhelm Exner Medal Selected publications References 1964 births Living people University of Bari alumni Academic staff of the University of Bari 20th-century Italian chemists Scientists at Bell Labs Organic semiconductors
Luisa Torsi
[ "Chemistry" ]
453
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
64,731,118
https://en.wikipedia.org/wiki/List%20of%20SysML%20tools
This article compares SysML tools. SysML tools are software applications which support some functions of the Systems Modeling Language. General Features References Technical communication Software comparisons Diagramming software Computing-related lists
List of SysML tools
[ "Technology", "Engineering" ]
41
[ "Systems engineering", "Computing-related lists", "Computing comparisons", "Systems Modeling Language", "Software comparisons" ]
64,733,330
https://en.wikipedia.org/wiki/Potassium%20octacyanomolybdate%28IV%29
Potassium octacyanomolybdate(IV) is the inorganic salt with the formula K4[Mo(CN)8]. A yellow light-sensitive solid, it is the potassium salt of the cyanometalate with the coordination number eight. The complex anion consists of a Mo(IV) center bound to eight cyanide ligands resulting in an overall charge of −4, which is balanced with four potassium cations. The salt is often prepared as its dihydrate K4[Mo(CN)8].(H2O)2. Preparation The dihydrate K4[Mo(CN)8] · 2 H2O can be prepared by the reduction of molybdate (MoO42-) with potassium borohydride (KBH4) in a solution with potassium cyanide and acetic acid. Yields of 70% are typical and the method is suited for scale-up. 4MoO42- + 32CN− + BH4− + 25H+ → 4 [Mo(CN)8]4- + 13H2O + H3BO3 An alternative route starts from MoCl4(Et2O)2 avoiding the need for reductants. The yield of this route is typically around 70%. This synthesis is convenient for lower batch sizes than the earlier method but the MoCl4(Et2O)2 is typically less available than the molybdate. MoCl4(Et2O)2 + 8KCN → K4[Mo(CN)8] + 4KCl + 2Et2O Reactions Octacyanomolybdate(IV) can be oxidized to the paramagnetic octacyanomolybdate(V). The cyanide ligands in [Mo(CN)8]4- remain basic. Strong acids lead to the hydrogen isocyanide complex [Mo(CNH)8]4+, in common with many cyanometalate complexes. These ligands can be substituted by others, for example H2O. The cyanide ligands also bind to other metals, leading to cages. References Cyano complexes Coordination complexes Molybdenum(IV) compounds Potassium compounds Cyanometallates
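As a quick illustration of the stoichiometry given above, here is a small Python sketch that checks atom and charge balance for the borohydride reduction; the species dictionaries are transcribed by hand from the equation and are illustrative only.

```python
from collections import Counter

def side(*species):
    """Sum element counts and charge over (atoms, charge, coefficient) triples."""
    atoms_total, charge_total = Counter(), 0
    for atoms, charge, coeff in species:
        for element, count in atoms.items():
            atoms_total[element] += count * coeff
        charge_total += charge * coeff
    return atoms_total, charge_total

left = side(({"Mo": 1, "O": 4}, -2, 4),            # 4 MoO4^2-
            ({"C": 1, "N": 1}, -1, 32),            # 32 CN-
            ({"B": 1, "H": 4}, -1, 1),             # BH4-
            ({"H": 1}, +1, 25))                    # 25 H+
right = side(({"Mo": 1, "C": 8, "N": 8}, -4, 4),   # 4 [Mo(CN)8]^4-
             ({"H": 2, "O": 1}, 0, 13),            # 13 H2O
             ({"H": 3, "B": 1, "O": 3}, 0, 1))     # H3BO3

print(left == right)  # True: atoms and charge balance on both sides
```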
Potassium octacyanomolybdate(IV)
[ "Chemistry" ]
478
[ "Coordination chemistry", "Coordination complexes" ]
64,740,557
https://en.wikipedia.org/wiki/Lyth%20bound
In cosmological inflation, within the slow-roll paradigm, the Lyth argument places a theoretical upper bound on the amount of gravitational waves produced during inflation, given the amount of departure from the homogeneity of the cosmic microwave background (CMB). Summary During slow-roll inflation, the ratio of gravitational waves to inhomogeneities of the CMB is correlated with the steepness of the inflationary potential. Temperature inhomogeneities of the CMB have been successfully and accurately measured. There are current CMB polarization experiments (see, for instance, the overview of gravitational wave observatories) aimed at measuring the primordial gravitational wave signature in the CMB. However, to date, a significant signal of primordial gravitational waves has not been detected. Thus the ratio cannot exceed a certain value, and the steepness of the inflationary potential is therefore bounded. Detail The argument was first introduced by David H. Lyth in his 1997 paper "What Would We Learn by Detecting a Gravitational Wave Signal in the Cosmic Microwave Background Anisotropy?" The detailed argument is as follows: The power spectrum for curvature perturbations is given by , whereas the power spectrum for tensor perturbations is given by , in which is the Hubble parameter, is the wave number, is the Planck mass and is the first slow-roll parameter, given by . Thus the ratio of tensor to scalar power spectra at a certain wave number , denoted as the so-called tensor-to-scalar ratio , is given by . While strictly speaking this ratio is a function of the wave number, during slow-roll inflation it is understood to change very mildly, so it is customary to simply omit the wavenumber dependence. Additionally, the numeric pre-factor is susceptible to slight changes owing to more detailed calculations but is usually between . Although the slow-roll parameter is given as above, it was shown that in the slow-roll limit, this parameter can be given by the slope of the inflationary potential such that , in which is the inflationary potential over a scalar field . Thus, the upper bound on the tensor-to-scalar ratio placed by CMB measurements and the lack of a gravitational wave signal is translated to an upper bound on the steepness of the inflationary potential. Acceptance and significance Although the Lyth bound argument was adopted relatively slowly, it has been used in many subsequent theoretical works. The original argument deals only with the inflationary time period that is reflected in the CMB signature, which at the time was about 5 e-folds, as opposed to about 8 e-folds to date. However, an effort was made to generalize this argument to the entire span of physical inflation, which corresponds to the order of 50 to 60 e-folds. On the basis of these generalized arguments, an unnecessarily constraining view arose, which preferred realizations of inflation based on large-field models, as opposed to small-field models. This view was prevalent until the last decade, which saw a revival in small-field model prevalence due to theoretical works that pointed to likely small-field model candidates. The likelihood of these models was further developed and numerically demonstrated. References Inflation (cosmology) Gravitational waves
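The displayed formulas in the entry above were lost in extraction. For reference, the standard slow-roll expressions in one common convention (reduced Planck mass M_P; the article's prefactors may differ) are sketched below.

```latex
% Standard slow-roll power spectra, tensor-to-scalar ratio, and the Lyth bound
% (one common convention; assumed, not recovered from the article):
\mathcal{P}_{\zeta} = \frac{H^{2}}{8\pi^{2}\,\epsilon\,M_{P}^{2}}, \qquad
\mathcal{P}_{t} = \frac{2H^{2}}{\pi^{2} M_{P}^{2}}, \qquad
r \equiv \frac{\mathcal{P}_{t}}{\mathcal{P}_{\zeta}} = 16\,\epsilon, \qquad
\epsilon \simeq \frac{M_{P}^{2}}{2}\left(\frac{V'}{V}\right)^{2},
% so a bound on r bounds the steepness V'/V; integrating d\phi/dN = M_{P}\sqrt{2\epsilon}
% over the e-folds gives the Lyth bound  \Delta\phi \gtrsim M_{P}\sqrt{r/8}\,\Delta N.
```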
Lyth bound
[ "Physics" ]
664
[ "Waves", "Physical phenomena", "Gravitational waves" ]
64,740,708
https://en.wikipedia.org/wiki/FICD
FIC domain protein adenylyltransferase (FICD) is an enzyme in metazoans possessing adenylylation and deadenylylation activity (also known as (de)AMPylation), and is a member of the Fic (filamentation induced by cAMP) domain family of proteins. AMPylation is a reversible post-translational modification that FICD performs on target cellular protein substrates. FICD is the only known Fic domain encoded by the metazoan genome, and is located on chromosome 12 in humans. Catalytic activity is reliant on the enzyme's Fic domain, which catalyzes the addition of an AMP (adenylyl group) moiety to the substrate. FICD has been linked to many cellular pathways, most notably the ATF6 and PERK branches of the UPR (unfolded protein response) pathway regulating ER homeostasis. FICD is present at very low basal levels in most cell types in humans, and its expression is highly regulated. Examples of FICD include HYPE (Huntingtin Yeast Interacting Partner E) in humans, Fic-1 in C. elegans, and dfic in D. melanogaster. Structure The structure of FICD proteins consists of different regions: the SS/TM (signal sequence/transmembrane) domain, the TPR (tetratricopeptide repeat) domain and the fic (filamentation induced by cAMP) domain. The secondary structure is primarily composed of nine α-helices. All FICD proteins share the same catalytic motif in their fic domain, consisting of the amino acid sequence HxFx(D/E)(G/A)N(G/K)R1xxR2, located at the C terminus of the protein. At the N terminus of the protein is an inhibitory α-helix, composed of the motif (S/T)xxxE(G/N). Interaction between the glutamate of the inhibitory α-helix and the second arginine of the fic motif prevents ATP from entering the catalytic cleft of the protein and participating in AMPylation. This auto-inhibition leads to very low activity of FICD proteins in vitro. Mutants of FICD have been created which lead to different activity levels in vitro. A mutation of the catalytic histidine in the fic motif to an alanine leads to a complete loss of AMPylation activity by the enzyme. Conversely, mutation of the glutamate in the inhibitory α-helix to a glycine abolishes any auto-inhibition, creating a constitutively active enzyme. FICD proteins can exist as either a dimer or a monomer, although they generally exist as an asymmetric dimer in solution. Linkage occurs between the fic domains of the two monomers to form the dimer. New crystal structures have recently been published to model the structure of the HYPE monomer. Mechanism FICD's AMPylation activity is dependent on the sequence of its fic motif (HxFx(D/E)(G/A)N(G/K)R1xxR2). The basic mechanism of AMPylation involves the addition of an AMP group to a substrate residue containing a hydroxyl group, where the AMP group is taken from an ATP molecule. In the case of FICD proteins, the catalytic histidine acts as a general base, drawing a proton away from the hydroxyl group of the substrate (usually located on a threonine or serine residue). The hydroxyl group, now a nucleophile, will then attack the α-phosphate of ATP, thus attaching the AMP group to the target residue's hydroxyl group. This mechanism requires both the presence of the catalytic histidine and the correct orientation of ATP in the ATP-binding pocket of the FICD protein. Interactions between the secondary arginine (HxFx(D/E)GN(G/K)R1xxR2) and the γ-phosphate of ATP orient ATP correctly in the pocket so that the proton transfer between the hydroxyl group of the substrate and the α-phosphate of ATP can take place.
One mechanism by which FICD proteins are self-regulated is through interactions of the inhibitory α-helix and the catalytic fic domain. The glutamate found in the inhibitory α-helix motif ((S/T)xxxE(G/N)) interacts with the secondary arginine in the fic motif, which in turn prevents interactions of the γ-phosphate of ATP at that site. Because ATP is unable to orient properly in the ATP-binding pocket, AMPylation cannot occur. FICD proteins are also capable of de-AMPylation activity, the complement to AMPylation. Research suggests that a switch between the dimerization states of FICD may be responsible for the switch between AMPylation and de-AMPylation activity, in which FICD in its monomeric form is responsible for AMPylation, while FICD as a dimer is responsible for de-AMPylation. Function FIC proteins are known for their general function of carrying out post-translational modifications on target proteins that are a part of the cell signaling system. The conserved fic domain is involved in the addition of phosphate-containing compounds including AMP, GTP and other nucleoside monophosphates and phosphates. Fic proteins play a vital role in the mediation of post-translational modifications in host cell proteins that interfere with cytoskeletal, trafficking, signaling or translation pathways such as the UPR pathway. The UPR (unfolded protein response) pathway depends on the activation of transmembrane transducers during ER stress to promote specific downstream effects. In the event of fluctuating levels of unfolded proteins the UPR pathway becomes activated, and this includes the activation of the Hsp70 chaperone BiP. This process incorporates inactive oligomers and reversible AMPylation and de-AMPylation activities. BiP/GRP78 is adenylated by HYPE, which induces UPR activation, reportedly at the specific sites Thr366 and Thr518 in vitro, and thereby is able to assist in carrying out the modifications to target proteins in order to maintain ER homeostasis. The expression of HYPE is regulated depending on the magnitude of ER stress. Expression Expression of FICD in most species and cell types occurs at very low levels, with human FICD (HYPE) being expressed between 2-20 NX of RNA in human cell lines. In humans, HYPE's under-expression has been linked to a decreased ability of the UPR to maintain ER homeostasis. In Drosophila, knockdown of FICD (dfic in D. melanogaster) causes blindness in flies. This phenotype could be rescued by the expression of wild-type dfic in the glial cells of those flies. In C. elegans, FICD (FIC-1 in C. elegans) is also expressed at a basal level throughout all cell types in the worm body. FIC-1 also shows no change in level of expression throughout the worm's lifetime. FIC-1 expression has been linked to immunity to P. aeruginosa, as mutants with inactive FIC-1 showed increased susceptibility to the bacteria. Localization The region of localization depends on the type of organism that different FIC proteins reside in. In many cases FICD is situated in the lumen of the endoplasmic reticulum, where it adenylates the Hsp70 chaperone binding immunoglobulin protein (BiP) at Thr-366 and Thr-518. HYPE localizes in the ER via its hydrophobic N-terminus. Drosophila CG9523, or dFic, can be found in the cytosolic region but is also transcriptionally activated during ER stress. HYPE has two sites of N-glycosylation at Asn275 and Asn446. FICD is a type II transmembrane protein with the Fic domain facing the ER lumen.
Similar results have been shown for dFic through in vitro translation in the presence of microsomes, which indicated N-glycosylation at site Asn288. Clinical Significance While FICD (HYPE in humans) has not been directly linked to any disease pathways, some of its substrates are known participants in human diseases. BiP (GRP78/HSPA5), a validated substrate of HYPE, has been substantially linked to increased rates of cancer cell survival under proapoptotic conditions. α-syn, a putative substrate for HYPE, is a known factor in the development of Parkinson's disease through the formation of protein aggregates in the brain. HYPE also has other potential roles in the fields of neurodevelopment and neurodegeneration. References Enzymes Post-translational modification
FICD
[ "Chemistry" ]
1,910
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
64,743,403
https://en.wikipedia.org/wiki/Mixed%20Hodge%20structure
In algebraic geometry, a mixed Hodge structure is an algebraic structure containing information about the cohomology of general algebraic varieties. It is a generalization of a Hodge structure, which is used to study smooth projective varieties. In mixed Hodge theory, where the decomposition of a cohomology group may have subspaces of different weights, i.e. as a direct sum of Hodge structures where each of the Hodge structures have weight . One of the early hints that such structures should exist comes from the long exact sequence associated to a pair of smooth projective varieties . This sequence suggests that the cohomology groups (for ) should have differing weights coming from both and . Motivation Originally, Hodge structures were introduced as a tool for keeping track of abstract Hodge decompositions on the cohomology groups of smooth projective algebraic varieties. These structures gave geometers new tools for studying algebraic curves, such as the Torelli theorem, Abelian varieties, and the cohomology of smooth projective varieties. One of the chief results for computing Hodge structures is an explicit decomposition of the cohomology groups of smooth hypersurfaces using the relation between the Jacobian ideal and the Hodge decomposition of a smooth projective hypersurface through Griffith's residue theorem. Porting this language to smooth non-projective varieties and singular varieties requires the concept of mixed Hodge structures. Definition A mixed Hodge structure (MHS) is a triple such that is a -module of finite type is an increasing -filtration on , is a decreasing -filtration on , where the induced filtration of on the graded piecesare pure Hodge structures of weight . Remark on filtrations Note that similar to Hodge structures, mixed Hodge structures use a filtration instead of a direct sum decomposition since the cohomology groups with anti-holomorphic terms, where , don't vary holomorphically. But, the filtrations can vary holomorphically, giving a better defined structure. Morphisms of mixed Hodge structures Morphisms of mixed Hodge structures are defined by maps of abelian groupssuch thatand the induced map of -vector spaces has the property Further definitions and properties Hodge numbers The Hodge numbers of a MHS are defined as the dimensionssince is a weight Hodge structure, andis the -component of a weight Hodge structure. Homological properties There is an Abelian category of mixed Hodge structures which has vanishing -groups whenever the cohomological degree is greater than : that is, given mixed hodge structures the groupsfor pg 83. Mixed Hodge structures on bi-filtered complexes Many mixed Hodge structures can be constructed from a bifiltered complex. This includes complements of smooth varieties defined by the complement of a normal crossing variety. Given a complex of sheaves of abelian groups and filtrations of the complex, meaningThere is an induced mixed Hodge structure on the hyperhomology groupsfrom the bi-filtered complex . Such a bi-filtered complex is called a mixed Hodge complex Logarithmic complex Given a smooth variety where is a normal crossing divisor (meaning all intersections of components are complete intersections), there are filtrations on the logarithmic de Rham complex given byIt turns out these filtrations define a natural mixed Hodge structure on the cohomology group from the mixed Hodge complex defined on the logarithmic complex . 
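The displayed filtration formulas in the logarithmic complex section above did not survive extraction. As a hedged sketch, the following LaTeX block records the weight and Hodge filtrations on the logarithmic de Rham complex in the form they usually take in the literature (with X the smooth compactification and D the normal crossing divisor); conventions can differ slightly between sources, so this should be read as a standard reference form rather than the article's exact expressions.

```latex
% Standard filtrations on the logarithmic de Rham complex \Omega_X^\bullet(\log D);
% a sketch of the usual conventions, which may differ from the cited source.
W_m \Omega_X^p(\log D) =
  \begin{cases}
    0 & m < 0, \\
    \Omega_X^{p-m} \wedge \Omega_X^m(\log D) & 0 \le m \le p, \\
    \Omega_X^p(\log D) & m \ge p,
  \end{cases}
\qquad
F^p \Omega_X^\bullet(\log D) = \Omega_X^{\ge p}(\log D).
```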
Smooth compactifications The above construction of the logarithmic complex extends to every smooth variety, and the mixed Hodge structure is isomorphic under any such compactification. Note a smooth compactification of a smooth variety is defined as a smooth variety and an embedding such that is a normal crossing divisor. That is, given compactifications with boundary divisors there is an isomorphism of mixed Hodge structures showing the mixed Hodge structure is invariant under smooth compactification. Example For example, on a genus plane curve the logarithmic cohomology of with the normal crossing divisor with can be easily computed, since the terms of the complex, equal to , are both acyclic. Then, the hypercohomology is just ; the first vector space consists of just the constant sections, hence the differential is the zero map. The second vector space is isomorphic to the vector space spanned by . Then has a weight mixed Hodge structure and has a weight mixed Hodge structure. Examples Complement of a smooth projective variety by a closed subvariety Given a smooth projective variety of dimension and a closed subvariety there is a long exact sequence in cohomology (pg 7–8) coming from the distinguished triangle of constructible sheaves. There is another long exact sequence from the distinguished triangle whenever is smooth. Note the homology groups are called Borel–Moore homology, which are dual to cohomology for general spaces, and the means tensoring with the Tate structure, which adds weight to the weight filtration. The smoothness hypothesis is required because Verdier duality implies , and whenever is smooth. Also, the dualizing complex for has weight , hence . Also, the maps from Borel–Moore homology must be twisted by up to weight in order for it to have a map to . Also, there is the perfect duality pairing giving an isomorphism of the two groups. Algebraic torus A one-dimensional algebraic torus is isomorphic to the variety , hence its cohomology groups are isomorphic to . The long exact sequence then reads . Since and , this gives the exact sequence . Since there is a twisting of weights for well-defined maps of mixed Hodge structures, there is the isomorphism Quartic K3 surface minus a genus 3 curve Given a quartic K3 surface , and a genus 3 curve defined by the vanishing locus of a generic section of , hence isomorphic to a degree plane curve, which has genus 3. Then, the Gysin sequence gives the long exact sequence . But it is a result that the maps take a Hodge class of type to a Hodge class of type . The Hodge structures for both the K3 surface and the curve are well known, and can be computed using the Jacobian ideal. In the case of the curve there are two zero maps, hence contains the weight one pieces . Because has dimension , but the Lefschetz class is killed off by the map sending the class in to the class in . Then the primitive cohomology group is the weight 2 piece of . Therefore, the induced filtrations on these graded pieces are the Hodge filtrations coming from each cohomology group. See also Motive (algebraic geometry) Jacobian ideal Milnor fiber Mixed Hodge module References Examples A Naive Guide to Mixed Hodge Theory Introduction to Limit Mixed Hodge Structures Deligne's Mixed Hodge Structure for Projective Varieties with only Normal Crossing Singularities In Mirror Symmetry Local B-Model and Mixed Hodge Structure Algebraic geometry Homological algebra Hodge theory
Mixed Hodge structure
[ "Mathematics", "Engineering" ]
1,401
[ "Mathematical structures", "Tensors", "Hodge theory", "Differential forms", "Fields of abstract algebra", "Category theory", "Algebraic geometry", "Homological algebra" ]
74,869,096
https://en.wikipedia.org/wiki/Compartmentalisation%20dam
A compartmentalisation dam is a dam that divides a body of water into two parts. A typical use of such a dam is the regulation of water levels separately in different sections of a basin. One application of a compartmentalisation dam is to facilitate closures of areas with multiple tidal inlets, such as in the case of the Delta Works. Compartmentalisation dams employed as a watershed Compartmentalisation dams have been deployed in scenarios where there is a significant disparity in water quality across different basins, where separation is used to address undesirable conditions. Such structures play a crucial role in water management by creating physical barriers between bodies of water with differing qualities. Noteworthy examples include the following compartmentalisation dams in the Netherlands: Volkerakdam: This dam was constructed to prevent saltwater intrusion into the freshwater Haringvliet, and to protect the relatively pristine Oosterschelde area from being contaminated by the polluted waters of the Rhine. Houtribdijk: Initially built as the northern boundary for the Markerwaard, it now functions to delineate the waters between the Markermeer and IJsselmeer. Furthermore, the Houtribdijk mitigates the impact of wind fetch under certain wind conditions, which in turn reduces wave formation and wind setup within the basin. Oesterdam: Erected to facilitate a tide-free navigational route from Antwerp to Rotterdam, the Oesterdam also narrows the tidal basin of the Oosterschelde. This constriction ensures that the tidal range at Yerseke and Zierikzee remains significant, even following the construction of the Oosterscheldekering. Role of compartmentalisation dams in Dutch closure projects In regions with tidal influences where closure dams are essential and multiple tidal inlets are present, the establishment of a compartmentalisation dam becomes crucial. Without such a structure, a closure dam at one sea inlet (Dam A) would require the basin to be entirely filled through the other sea inlet (B). This scenario could lead to a significant increase in flow rate at the inlet, causing the channel to widen and deepen, thereby complicating or outright preventing closure. Constructing a compartmentalisation dam is notably easier in areas over a wantij, a Dutch term denoting a shallow part or tidal divide in a delta system where two tidal currents meet. At these junctures, the converging tides neutralise each other, creating an area with minimal current, facilitating easier dam construction despite the rapid movement of adjacent waters. The wantij serves as a critical navigational feature, offering shelter from strong currents or presenting challenges for vessels with deeper drafts. After the storm surge of 1953, it was decided to close the main inlets in the south-west of the Netherlands: the Oosterschelde, the Brouwershavense Gat, and the Haringvliet. As these basins are connected to each other, and it is not possible to simply close them one by one, prior separation is required. Some notable examples of compartmentalisation dams used to implement such separation as part of the Delta Works project in the Netherlands include: Volkerakdam: separates the Haringvliet basin from the Volkerak. Grevelingendam: separates the Oosterschelde basin from the Grevelingen. Zandkreekdam: separates the Oosterschelde basin from the Veerse Gat. 
The Deltacommissie (English: Delta Commission), a governmental expert panel convened to advise on measures to avert disasters like the 1953 flood, described these structures as side dams (Dutch: ), rather than compartmentalisation dams. Following the completion of these dams, the original Delta Plan was adapted. Instead of constructing a closure at the Oosterschelde estuary, the plan was revised to include a storm surge barrier, the Oosterscheldekering. This shift affected the dams' intended functions. Initially, the Oosterschelde dam was to transform the area into a vast freshwater body, dubbed the Zeeland Lake, to ensure a tide-free route from Antwerp to Rotterdam. This would have involved lock complexes at both the Volkerakdam and Kreekrakdam. Due to the non-completion of the Oosterschelde closure and the deferred decision to build a storm surge barrier, the Volkerak remained open, maintaining significant flow rates through the Zijpe channel. This persistent flow led to erosion, necessitating additional protective measures until the completion of the Oosterscheldekering. See also Delta Works Flood control in the Netherlands Rijkswaterstaat Johan van Veen References Dams in South Holland Dams in North Brabant Delta Works Hydraulic engineering Civil engineering
Compartmentalisation dam
[ "Physics", "Engineering", "Environmental_science" ]
967
[ "Hydrology", "Physical systems", "Construction", "Hydraulics", "Delta Works", "Civil engineering", "Hydraulic engineering" ]
73,502,548
https://en.wikipedia.org/wiki/Marine%20restoration
Marine restoration involves actions taken to restore the marine environment to its state prior to anthropogenic damage. The ocean is currently suffering from the impacts of human damage including pollution, acidification, species loss, and more. This could prove particularly catastrophic given that the ocean takes up the largest part of the planet and serves as the home to many organisms, including the algae that provides around half of the oxygen on Earth. Efforts have been made by various agencies to help alleviate these issues. Methods Carbon dioxide removal The ocean has long helped get rid of excess carbon on Earth through its role in the carbon cycle. However, excess emissions and warming temperatures resulting from climate change may reduce the ocean's ability to cycle carbon efficiently. This has made it necessary to augment natural processes to increase the amount of carbon dioxide removal (CDR). Electrochemical approaches remain the most popular. Current methods involve applying voltage to a membrane stack to split the stream of water. A team of professors from MIT has hypothesized ways to make these methods cheaper and more efficient. Efforts to improve natural carbon capture by preserving seagrass meadows have been undertaken by various British conservation groups. Coral restoration Coral reefs provide a vital part of the ocean ecosystem, serving as the habitat for many species and protecting the coastline from erosion and storms. At this time, thirty to fifty percent of Earth's coral reefs have already been lost. Coral has been threatened by pollution, overfishing, and unsafe fishing techniques. Methods to restore coral reefs have involved the gardening and transplanting of coral developed at different sites to locations previously inhabited by coral, as well as the use of green engineering methods to help the coral become more suited to the changing environment. Mangrove regrowth Mangrove forests, like coral reefs, are essential for protecting the coastlines and providing a habitat for various aquatic creatures. They too have been severely threatened, mostly by wood harvesting and fish farms. Wetland scientist Robin Lewis has spent the past few decades restoring mangrove forests. Previously, attempts to restore mangrove environments were made by replanting mangrove seedlings grown elsewhere, but this proved to be ineffective. Lewis took to moving dirt and relying on tide systems, which proved more effective. There are currently multiple mangrove restoration organizations across the world working to help protect biodiversity. Removing pollutants Plastic, oil, and other pollutants have detrimental impacts on marine environments. As a result, various organizations have developed technology to physically remove pollutants. There has been some criticism of clean-up methods, such as The Ocean Cleanup, over concerns that the methods, similar to fish trawling, may harm marine life and reflect a misunderstanding of the nature of plastic pollution in the ocean. At this point, the more effective method of combating ocean pollution is keeping it from reaching the ocean in the first place, through waste management and legislation to prevent further pollution. Some research has shown that certain bacteria could naturally consume plastics, but more studies are needed. 
Legislation In order to enforce protection of the marine environment, various pieces of legislation have been passed. The Clean Water Act (CWA) was passed by the US Congress in 1972 to protect national waters. The United Nations Environment Programme (UNEP) created the Global Programme of Action for the Protection of the Marine Environment from Land-based Activities to protect marine ecosystems. The International Convention for the Prevention of Pollution from Ships was also adopted in 1973 to prevent ship-based pollution of the ocean. The United Nations Convention on the Law of the Sea (UNCLOS) was updated from its original intention of the Law of the Sea to include protecting wildlife and marine ecosystems, and was agreed upon by countries from all inhabited continents. Various cities, states, and countries have placed bans on single-use plastic, with marine protection often cited as a primary reason. The Break Free From Plastic Act reached the US Senate in 2021 and is still being decided. Organizations Ocean Visions Running Tide Oceana Woods Hole Oceanographic Institution (WHOI) National Oceanic and Atmospheric Administration (NOAA) Coral Restoration Foundation Oceanic Preservation Society See also Climate restoration References Wikipedia Student Program Ecological restoration
Marine restoration
[ "Chemistry", "Engineering" ]
864
[ "Ecological restoration", "Environmental engineering" ]
73,504,338
https://en.wikipedia.org/wiki/Harvest%20now%2C%20decrypt%20later
Harvest now, decrypt later, also known as store now, decrypt later, steal now, decrypt later or retrospective decryption, is a surveillance strategy that relies on the acquisition and long-term storage of currently unreadable encrypted data, awaiting possible breakthroughs in decryption technology that would render it readable in the future; the hypothetical date at which this becomes possible is referred to as Y2Q (a reference to Y2K) or Q-Day. The most common concern is the prospect of developments in quantum computing which would allow current strong encryption algorithms to be broken at some time in the future, making it possible to decrypt any stored material that had been encrypted using those algorithms. However, the improvement in decryption technology need not be due to a quantum-cryptographic advance; any other form of attack capable of enabling decryption would be sufficient. The existence of this strategy has led to concerns about the need to urgently deploy post-quantum cryptography, even though no practical quantum attacks yet exist, as some data stored now may still remain sensitive even decades into the future. The U.S. federal government has proposed a roadmap for organizations to start migrating toward quantum-cryptography-resistant algorithms to mitigate these threats. References See also Communications interception (disambiguation) Indiscriminate monitoring Mass surveillance Perfect forward secrecy Cryptography Espionage techniques Mass surveillance Computer data storage Privacy
Harvest now, decrypt later
[ "Mathematics", "Engineering" ]
295
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
73,507,031
https://en.wikipedia.org/wiki/Burn%3A%20The%20Misunderstood%20Science%20of%20Metabolism
Burn: The Misunderstood Science of Metabolism is a 2022 book written by Herman Pontzer in which he discusses metabolism, human health and use of energy in the human body. The book examines research and proposes a constrained approach to total energy expenditure. References 2022 non-fiction books Metabolism
Burn: The Misunderstood Science of Metabolism
[ "Chemistry", "Biology" ]
58
[ "Biochemistry", "Metabolism", "Cellular processes" ]
73,510,470
https://en.wikipedia.org/wiki/Engineered%20CAR%20T%20cell%20delivery
Engineered chimeric antigen receptor (CAR)-T cell delivery is the methodology by which clinicians introduce the cancer-targeting therapeutic system of the CAR-T cell to the human body. CAR-T cells, which utilizes genetic modification of human T-cells to contain antigen binding sequences in addition to the receptor systems CD4 or CD8, are useful in direct targeting and elimination of cancer cells through cytotoxicity. CAR-T cell delivery involves many varying modalities for implementation, spurring innovative biomedical research to address these modalities. These delivery mechanisms serve to address the limitations of CAR-T cells in translational experimentation and clinical trials, including shelf-life, off-target effects, and tumor infiltration. As of April 2023, six CAR-T cell therapies are clinically approved by the FDA, all of which target hematologic (blood-based) cancers, including multiple myeloma and B-cell leukemias. Novel engineered compound-based delivery methods, some of which are in clinical trials, aim to address limitations related to CAR-T cell delivery with the focus to target non-blood based cancers. Systemic and intravenous delivery The classic method of administration of CAR-T cells to cancers within the human body is through intravenous (IV) central line infusion. This infusion allows the CAR-T cells to enter the body’s cardiovascular system, entering the circulation (systemically) amongst developing hematologic cancers. This facilitates the final step in generation and implementation of both autologous and allogeneic CAR-T cell therapy. While this delivery method is reliable for hematologic cancers, as demonstrated by successful clinical trials and FDA regulation, systemic delivery may result in an increase in autoimmune overload, leading to toxic disorders such as cytokine release syndrome (CRS). Discrimination between healthy and malignant cancer cells may additionally result in aplasia, or extremely low or absent amounts of healthy blood cells. Thus, clinically recommended dosage amounts are in place for current CAR-T cell therapies. Current methods exploring ways to improve such complications have been introduced recently by researchers, including “off-switches” to turn off CAR-T cells after initial therapies and further genetic modification to avoid immune rejection. While systemic delivery is important for targeting hematologic cancers, it remains inefficient at targeting solid, or non-circulatory, cancerous tumors. Therefore, regional, or localized targeting strategies utilizing CAR-T cells have arisen in pre-clinical research. Localized delivery mechanisms Solid tumors, which typically take the form of neoplasms in epithelial cells or in bones, tissue, or adipose (fat), are different than hematologic cancers in that they form a mass of cells, thereby maintaining multiple layers of protection. Because CAR-T cells attack cancerous cells at a surface level, this leaves the CAR-T cells vulnerable to cancerous cell resistance, which renders the CAR-T cell inefficient. In recent years, cellular and genetic engineering methods have been explored by researchers to overcome layered protection of solid tumors, in addition to other challenges that have been presented in the advent of CAR-T cell delivery such as in-situ editing and manufacturing, negative immune responses, and biocompatibility of delivery structures. 
Some of the methodologies used to suspend and deliver CAR-T cells include hydrogel and polymeric gel-based delivery systems, thin polymeric films, and microneedle patches. Most of these devices, currently still in the pre-clinical phase, are intended to be injected or surgically inserted directly into the solid tumor mass. While initial clinical trials have been unsuccessful due to relatively inefficient delivery as compared to direct injection and high immunosuppression, recent research has shown promise in overcoming these barriers. Gel-based delivery Gel-based delivery of CAR-T cells involves implantation or injection of a hydrogel or polymer gel into the target solid tumor. These gels suspend CAR-T cells in various ways through manipulation of cell-specific chemistry or by fixing the cells in a polymeric matrix. One strength of gel-based delivery is that these systems are functionally biodegradable, so once the CAR-T cells have been administered, the depot does not stay in the body, reducing immunosuppressive conditions or tumor resistance. One of the earliest examples of this methodology developed by Luo et al. in 2020 consisted of a layered hydrogel microchip that facilitated CAR-T survival from immunosuppressive elements of the tumor system which could be injected into the tumor. In addition, this system was able to take advantage of the hypoxic (low oxygen) conditions of the tumor by additionally containing oxygen-releasing agents that when released with interleukin-15 (IL-15) cytokines could cause tumor cell death. Two recent examples utilizing gel-based delivery towards specific cancers include targeted fibrin gel-based delivery to glioblastoma and modified CAR-T cell hydrogel complexes to retinoblastoma. In the glioblastoma-targeting system, Ogunnaike et al. used CAR-T cells loaded in a fibrin matrix, which undergoes in-situ polymerization upon mixture of fibrinogen and thrombin (as part of the coagulation cascade). In addition, the CAR-T cells were modified to target the B7-H3 antigen, present on some forms of cancer including glioblastoma. In the retinoblastoma-targeting system, Wang et al. fabricated CAR-T cells specific to the GD2 ganglioside, specific to retinoblastoma, and suspended them into an IL-15/chitosan-polyethylene glycol (PEG) hydrogel suspension, effectively targeting the tumor via injection. Additionally, researchers have developed hydrogel-based systems that limit further growth and proliferation of tumors, such as the hyaluronic acid hydrogel system developed by Hu et al. This system, which targets melanoma through targeting the CSPG4 antigen, was shown to slowly but efficiently release IL-15 and CAR-T cells from a cross-linked hyaluronic acid matrix and poly(lactic-co-glycolic acid) (PLGA) nanoparticle suspension intratumorally. In addition, the system utilized anti-PDL1 antibody delivery, increasing platelet release of programmed cell death molecules onto the tumors, killing them. Sustained delivery and long-term retention of CAR-T cells through hydrogel systems has also been developed, as shown by Grosskopf et al., to address controlled release of CAR-T cells onto tumors. In this system, the researchers crosslinked a combination of dodecyl-modified hydroxypropyl methylcellulose (HPMC-C12) and PEG-poly(lactic acid) (PLA) nanoparticles to form the hydrogel, then mixed CAR-T cells and IL-15 into the matrix. The sustained release profiles showed promise in tumor suppression treatments due to the slow release duration profiles. 
Codelivery of CAR-T cells with other agonists, such as stimulator of interferon (IFN) genes (STING), remain a popular new method of administration of CAR-T cells to solid tumors. Smith et al. developed a silica nanoparticle based coating conjugated with lipid film to suspend CAR-T cells and select antibodies onto an alginate polymer scaffold. This system was tested in mouse pancreatic cancer and melanoma models, and showed expansion of CAR-T cells at the tumor site, increasing the therapeutic efficiency. There are still challenges with these delivery systems, however, as they have limitations on CAR-T cell suspension amounts, host-specific immunogenic responses, and have been performed primarily in mouse models. Regardless, hydrogel-based CAR-T delivery systems have shown promise in translational experimentations towards solid tumor targeting. Thin Film Delivery Thin-film targeting has emerged as an alternative method of CAR-T cell delivery to solid tumors, utilizing metal-based microfilms made of nitinol, an alloy of nickel and titanium, a composite commonly used in stents. One such example using these nitinol scaffolds was developed by Coon et al., whereby the researchers suspended CAR-T cells into pores laden throughout the film, combined with fibrin. Upon implantation, the CAR-T cells would release from the film, but remain within the site of the tumor due to engineered antibody attraction of the film. This caused increased localization and duration of CAR-T therapy to the tumor site as compared to intravenous or local injection of CAR-T cells. Microneedle Delivery Microneedle patches have been used previously as a minimally-invasive, distributive system to release drugs across certain body systems, most notably on the skin. Microneedles patches have been used recently as a method of CAR-T cell release to solid tumor cancers such as melanoma and pancreatic cancer. Using a porous PLGA microneedle-shaped scaffold, Li et al. developed a system to suspend and release CAR-T cells. An advantage of this method is that the patch could be inserted on the surface of the tumor, allowing for surface-targeting of CAR-T cells upon release. Additionally, the needles would penetrate into the solid tumor, allowing for a distribution of CAR-T cells release along the interior axis of the tumor. In-situ generation of CAR-T cells Production of CAR-T cells involve removal of T cells via an extraction process known as leukapheresis, followed by cell culture with viral vectors containing ingredients needed to construct the chimeric system. These methodologies, while important, have been shown to be expensive and time-consuming. Recent advances in research have introduced “scaffold factories” to program, produce, and release autologous CAR-T cells into the body after implantation. These “scaffold factories” have shown to function in targeting hematologic cancers and show promise in future research to target solid tumors. One method of in-situ generation involves the injection of lentiviral vectors, containing genetic information relating to targeting of CD4 antigens directly to lymphocytes in conjugation with the CAR gene, allowing for the construction of this system within the body, as shown by Agarwal et al. Another method of in-situ generation involves the manufacture polymeric nanocarriers to carry genes involved in the development of the CAR-T cell. Smith et al. 
used poly(β-amino ester) (PBAE) particles laden with polyglutamic acid (PGA), nuclear localization signals, and T-cell targeting fragments to deliver CAR genes to T cells for translation into CAR-T cells. These advances have led to the investigation and development of a novel system for manufacturing CAR-T cells known as Multifunctional Alginate Scaffold for T Cell Engineering and Release (MASTER). These alginate scaffolds, which are highly porous, are embedded with azide, cyclooctyne-conjugated targeting antibodies, CAR-based retroviral vectors, and mononuclear blood cells. The azide and cyclooctyne-conjugated particles undergo a rapid click chemistry reaction, whereby when implanted, alongside introduction of the blood cells, the scaffold will generate CAR-T cells. This system was shown to be long-lasting and biodegradable, factors that are important for long suppression of cancers. See also CAR T cell Cellular adoptive immunotherapy Cancer immunotherapy Bioinstructive material Cell encapsulation Drug delivery devices References
Engineered CAR T cell delivery
[ "Chemistry" ]
2,458
[ "Pharmacology", "Drug delivery devices" ]
77,840,102
https://en.wikipedia.org/wiki/Ignacy%20Daszy%C5%84ski%20Monument
The Ignacy Daszyński Monument (Polish: Pomnik Ignacego Daszyńskiego) is a bronze statue in Warsaw, Poland, placed in the Crossroads Square, at the intersection of Szucha Avenue and People's Army Avenue. It is dedicated to Ignacy Daszyński, a socialist and politician who was the first prime minister of Poland in 1918. The monument was designed by Jacek Kucaba and unveiled on 11 November 2018. History In 2012, Bronisław Komorowski, the president of Poland, gave his support to the long-proposed idea of erecting a monument dedicated to Ignacy Daszyński (1866–1936), a socialist and politician who was the first prime minister of Poland in 1918. The monument was designed by sculptor Jacek Kucaba. It was unveiled on 11 November 2018, on the 100th National Independence Day of Poland. The ceremony was attended, among others, by the deputy prime minister and minister of culture and national heritage Piotr Gliński, the Deputy Marshal of the Senate Bogdan Borusewicz, former President of Poland Aleksander Kwaśniewski, and the chairperson of the All-Poland Alliance of Trade Unions, Jan Guz. There were also representatives of the Democratic Left Alliance, Polish People's Party, Labour Union, Greens, Left Together, and the Ignacy Daszyński Centre. During the ceremony, a letter from President of Poland Andrzej Duda was read. Characteristics The monument is located in the Crossroads Square, at the intersection of Szucha Avenue and People's Army Avenue. It consists of a bronze statue of Ignacy Daszyński standing behind a small lectern. It has a height of 4.5 m. References Monuments and memorials in Warsaw 2018 establishments in Poland Buildings and structures completed in 2018 2018 sculptures Outdoor sculptures in Warsaw Statues of men in Poland Statues of prime ministers Bronze sculptures in Poland Śródmieście Południowe Colossal statues
Ignacy Daszyński Monument
[ "Physics", "Mathematics" ]
402
[ "Quantity", "Colossal statues", "Physical quantities", "Size" ]
77,849,243
https://en.wikipedia.org/wiki/What%20Is%20Real%3F
What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics is a book on quantum physics by American astronomer Adam Becker. It was first published in 2018. Background Becker had been a member of the California Quantum Interpretation Network, "a research collaboration among faculty and staff at multiple UC campuses and other universities across California, focusing on the interpretation of quantum physics." In 2016, he received a grant from the Alfred P. Sloan Foundation to research and publish a written work concerning the history, development, and controversy surrounding the study and development of the mysticized field of Quantum Foundations. The resulting work, What is Real? (2018), focused on the question of what exactly quantum physics says about the nature of reality. Becker, stated the motivation for the book as follows: Themes The book deals with the personalities behind the competing interpretations of quantum physics as well as the historical factors that influenced the debate—factors such as military spending on physics research due to World War II, the Cold War ethos that caused the eschewing of physicists thought to be Marxist, the assumed infallibility of John von Neumann, the sexism that quashed the work of Grete Hermann (the female mathematician who first spotted von Neumann's error), and the sway of prominent philosophical schools of the period, like the logical positivists of the Vienna Circle. Niels Bohr appears in the book as the charismatic figure whose stature and obtuse writing style made it hard for alternate interpretations to be voiced. The book also challenges the popular portrayal of Albert Einstein as a behind-the-times thinker who couldn't accept the new paradigm. Becker argues that Einstein's thought experiments aimed at quantum dynamics are not stodgy quibbles with the seeming randomness of quantum physics, as characterized by the popularity of the quote that "God does not play dice". Rather, Einstein's thought experiments are apt critiques of violations of the principle of locality. Reception "What is Real?" was given mostly positive reviews by lay and expert audiences alike, including the New York Times, Publishers Weekly, the Wall Street Journal, and New Scientist, among others,. In Physics Today, philosopher David Wallace called the book "a superb contribution both to popular understanding of quantum theory and to ongoing debates among experts." And in the journal Nature, Ramin Skibba said "What Is Real? is an argument for keeping an open mind. Becker reminds us that we need humility as we investigate the myriad interpretations and narratives that explain the same data." The journal Science explained, "What Is Real? offers an engaging and accessible overview of the debates surrounding the interpretation of quantum mechanics,". Philosopher of science, Tim Maudlin said, "There is no more reliable, careful, and readable account of the whole history of quantum theory in all its scandalous detail." Physicist Sheldon Glashow wrote a critical review, saying, "I found it distasteful to find a trained astrophysicist invoking a conspiracy by physicists and physics teachers to foist the Copenhagen interpretation upon naive students of quantum mechanics". A review in the journal Science declared the project to be the sporadically accurate presentation of an "oversimplified" summary of either imaginary or merely ostensible conflicts between very complex schools of thought. 
Reviews in Science News and the American Journal of Physics were also negative, similarly criticizing the book for numerous historical inaccuracies and philosophical oversimplifications. The book was nominated for the PEN/E. O. Wilson Literary Science Writing Award and Physics World Magazine's Book of the Year Award. References 2018 non-fiction books English-language non-fiction books Popular physics books Interpretations of quantum mechanics Basic Books books
What Is Real?
[ "Physics" ]
760
[ "Interpretations of quantum mechanics", "Quantum mechanics", "Works about quantum mechanics" ]
77,850,423
https://en.wikipedia.org/wiki/You%20Only%20Look%20Once
You Only Look Once (YOLO) is a series of real-time object detection systems based on convolutional neural networks. First introduced by Joseph Redmon et al. in 2015, YOLO has undergone several iterations and improvements, becoming one of the most popular object detection frameworks. The name "You Only Look Once" refers to the fact that the algorithm requires only one forward propagation pass through the neural network to make predictions, unlike previous region proposal-based techniques like R-CNN that require thousands of passes for a single image. Overview Compared to previous methods like R-CNN and OverFeat, instead of applying the model to an image at multiple locations and scales, YOLO applies a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities. OverFeat OverFeat was an early influential model for simultaneous object classification and localization. Its architecture is as follows: Train a neural network for image classification only ("classification-trained network"). This could be a network like AlexNet. The last layer of the trained network is removed, and for every possible object class, initialize a network module at the last layer ("regression network"). The base network has its parameters frozen. The regression network is trained to predict the coordinates of two corners of the object's bounding box. During inference time, the classification-trained network is run over the same image over many different zoom levels and croppings. For each, it outputs a class label and a probability for that class label. Each output is then processed by the regression network of the corresponding class. This results in thousands of bounding boxes with class labels and probabilities. These boxes are merged until only one single box with a single class label remains. Versions There are two parts to the YOLO series. The original part contained YOLOv1, v2, and v3, all released on a website maintained by Joseph Redmon. YOLOv1 The original YOLO algorithm, introduced in 2015, divides the image into an $S \times S$ grid of cells. If the center of an object's bounding box falls into a grid cell, that cell is said to "contain" that object. Each grid cell predicts $B$ bounding boxes and confidence scores for those boxes. These confidence scores reflect how confident the model is that the box contains an object and how accurate it thinks the predicted box is. In more detail, the network performs the same convolutional operation over each of the $S \times S$ patches. The output of the network on each patch is a tuple of the form $(p_1, \dots, p_C, x_1, y_1, w_1, h_1, c_1, \dots, x_B, y_B, w_B, h_B, c_B)$, where $p_i$ is the conditional probability that the cell contains an object of class $i$, conditional on the cell containing at least one object; $(x_j, y_j, w_j, h_j)$ are the center coordinates, width, and height of the $j$-th predicted bounding box that is centered in the cell; and $c_j$ is the predicted intersection over union (IoU) of the $j$-th bounding box with its corresponding ground truth. Multiple bounding boxes are predicted to allow each prediction to specialize in one kind of bounding box. For example, slender objects might be predicted by box $1$ while stout objects might be predicted by box $2$. The network architecture has 24 convolutional layers followed by 2 fully connected layers. During training, for each cell, if it contains a ground truth bounding box, then only the predicted bounding box with the highest IoU with the ground truth bounding box is used for gradient descent. 
Concretely, let $(x_j, y_j, w_j, h_j, c_j)$ be that predicted bounding box, and let $k$ be the ground truth class label; then $(x_j, y_j, w_j, h_j, c_j)$ are trained by gradient descent to approach the ground truth (with $c_j$ trained towards the true IoU), $p_k$ is trained towards $1$, and the other $p_i$ are trained towards zero. If a cell contains no ground truth, then only the confidence scores $c_1, \dots, c_B$ are trained by gradient descent to approach zero. YOLOv2 Released in 2016, YOLOv2 (also known as YOLO9000) improved upon the original model by incorporating batch normalization, a higher resolution classifier, and using anchor boxes to predict bounding boxes. It could detect over 9000 object categories. It was also released on GitHub under the Apache 2.0 license. YOLOv3 YOLOv3, introduced in 2018, contained only "incremental" improvements, including the use of a more complex backbone network, multiple scales for detection, and a more sophisticated loss function. YOLOv4 and beyond Subsequent versions of YOLO (v4, v5, etc.) have been developed by different researchers, further improving performance and introducing new features. These versions are not officially associated with the original YOLO authors but build upon their work. Versions up to at least YOLOv8 have since been released. See also Computer vision Object detection Convolutional neural network R-CNN SqueezeNet MobileNet EfficientNet References External links Official YOLO website YOLO implementation in Darknet Computer vision Deep learning Neural networks
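To make the YOLOv1 grid-cell encoding described above concrete, here is a small, hypothetical NumPy sketch of how a raw S × S × (5B + C) output tensor could be decoded into detections. The tensor layout (per cell: B boxes of (x, y, w, h, confidence) followed by C class probabilities), the grid and class counts, and the threshold are assumptions for illustration; real implementations such as Darknet may order the output differently and add post-processing such as non-maximum suppression.

```python
# Hypothetical sketch of decoding a YOLOv1-style output tensor with NumPy.
# Layout assumption: each cell stores B boxes as (x, y, w, h, confidence),
# followed by C conditional class probabilities shared by the whole cell.
import numpy as np

S, B, C = 7, 2, 20  # grid size, boxes per cell, number of classes (paper's values)

def decode_yolo_v1(output, conf_threshold=0.2):
    """Turn an (S, S, B*5 + C) network output into a list of detections."""
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            boxes = cell[:B * 5].reshape(B, 5)   # B rows of (x, y, w, h, conf)
            class_probs = cell[B * 5:]           # Pr(class_i | object in cell)
            for x, y, w, h, conf in boxes:
                # class-specific score = Pr(class | object) * predicted objectness/IoU
                scores = class_probs * conf
                best = int(np.argmax(scores))
                if scores[best] >= conf_threshold:
                    # x, y are offsets within the cell; convert to image-relative coords
                    cx, cy = (col + x) / S, (row + y) / S
                    detections.append((best, float(scores[best]),
                                       (cx, cy, float(w), float(h))))
    return detections

# Demo with random numbers standing in for a trained network's output:
pred = np.random.rand(S, S, B * 5 + C)
print(len(decode_yolo_v1(pred, conf_threshold=0.9)))
```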
You Only Look Once
[ "Engineering" ]
994
[ "Artificial intelligence engineering", "Packaging machinery", "Neural networks", "Computer vision" ]
77,850,743
https://en.wikipedia.org/wiki/Crosswall%20construction
Crosswall construction is a building technique that uses prefabricated concrete modules with load-bearing walls that transmit the entire weight of the building to its foundation. References See also Curtain wall (architecture) Types of wall Building engineering Construction Architectural elements
Crosswall construction
[ "Technology", "Engineering" ]
52
[ "Structural engineering", "Building engineering", "Construction", "Types of wall", "Architectural elements", "Civil engineering", "Architecture stubs", "Components", "Architecture" ]
62,480,458
https://en.wikipedia.org/wiki/Sidorenko%27s%20conjecture
Sidorenko's conjecture is a major conjecture in the field of extremal graph theory, posed by Alexander Sidorenko in 1986. Roughly speaking, the conjecture states that for any bipartite graph and graph on vertices with average degree , there are at least labeled copies of in , up to a small error term. Formally, it provides an intuitive inequality about graph homomorphism densities in graphons. The conjectured inequality can be interpreted as a statement that the density of copies of in a graph is asymptotically minimized by a random graph, as one would expect a fraction of possible subgraphs to be a copy of if each edge exists with probability . Statement Let be a graph. Then is said to have Sidorenko's property if, for all graphons , the inequality is true, where is the homomorphism density of in . Sidorenko's conjecture (1986) states that every bipartite graph has Sidorenko's property. If is a graph , this means that the probability of a uniform random mapping from to being a homomorphism is at least the product over each edge in of the probability of that edge being mapped to an edge in . This roughly means that a randomly chosen graph with fixed number of vertices and average degree has the minimum number of labeled copies of . This is not a surprising conjecture because the right hand side of the inequality is the probability of the mapping being a homomorphism if each edge map is independent. So one should expect the two sides to be at least of the same order. The natural extension to graphons would follow from the fact that every graphon is the limit point of some sequence of graphs. The requirement that is bipartite to have Sidorenko's property is necessary — if is a bipartite graph, then since is triangle-free. But is twice the number of edges in , so Sidorenko's property does not hold for . A similar argument shows that no graph with an odd cycle has Sidorenko's property. Since a graph is bipartite if and only if it has no odd cycles, this implies that the only possible graphs that can have Sidorenko's property are bipartite graphs. Equivalent formulation Sidorenko's property is equivalent to the following reformulation: For all graphs , if has vertices and an average degree of , then . This is equivalent because the number of homomorphisms from to is twice the number of edges in , and the inequality only needs to be checked when is a graph as previously mentioned. In this formulation, since the number of non-injective homomorphisms from to is at most a constant times , Sidorenko's property would imply that there are at least labeled copies of in . Examples As previously noted, to prove Sidorenko's property it suffices to demonstrate the inequality for all graphs . Throughout this section, is a graph on vertices with average degree . The quantity refers to the number of homomorphisms from to . This quantity is the same as . Elementary proofs of Sidorenko's property for some graphs follow from the Cauchy–Schwarz inequality or Hölder's inequality. Others can be done by using spectral graph theory, especially noting the observation that the number of closed paths of length from vertex to vertex in is the component in the th row and th column of the matrix , where is the adjacency matrix of . Cauchy–Schwarz: The 4-cycle C4 By fixing two vertices and of , each copy of that have and on opposite ends can be identified by choosing two (not necessarily distinct) common neighbors of and . Letting denote the codegree of and (i.e. 
the number of common neighbors), this implies: by the Cauchy–Schwarz inequality. The sum has now become a count of all pairs of vertices and their common neighbors, which is the same as the count of all vertices and pairs of their neighbors. So: by Cauchy–Schwarz again. So: as desired. Spectral graph theory: The 2k-cycle C2k Although the Cauchy–Schwarz approach for is elegant and elementary, it does not immediately generalize to all even cycles. However, one can apply spectral graph theory to prove that all even cycles have Sidorenko's property. Note that odd cycles are not accounted for in Sidorenko's conjecture because they are not bipartite. Using the observation about closed paths, it follows that is the sum of the diagonal entries in . This is equal to the trace of , which in turn is equal to the sum of the th powers of the eigenvalues of . If are the eigenvalues of , then the min-max theorem implies that: where is the vector with components, all of which are . But then: because the eigenvalues of a real symmetric matrix are real. So: as desired. Entropy: Paths of length 3 J.L. Xiang Li and Balázs Szegedy (2011) introduced the idea of using entropy to prove some cases of Sidorenko's conjecture. Szegedy (2015) later applied the ideas further to prove that an even wider class of bipartite graphs have Sidorenko's property. While Szegedy's proof wound up being abstract and technical, Tim Gowers and Jason Long reduced the argument to a simpler one for specific cases such as paths of length . In essence, the proof chooses a nice probability distribution of choosing the vertices in the path and applies Jensen's inequality (i.e. convexity) to deduce the inequality. Partial results Here is a list of some bipartite graphs which have been shown to have Sidorenko's property. Let have bipartition . Paths have Sidorenko's property, as shown by Mulholland and Smith in 1959 (before Sidorenko formulated the conjecture). Trees have Sidorenko's property, generalizing paths. This was shown by Sidorenko in a 1991 paper. Cycles of even length have Sidorenko's property as previously shown. Sidorenko also demonstrated this in his 1991 paper. Complete bipartite graphs have Sidorenko's property. This was also shown in Sidorenko's 1991 paper. Bipartite graphs with have Sidorenko's property. This was also shown in Sidorenko's 1991 paper. Hypercube graphs (generalizations of ) have Sidorenko's property, as shown by Hatami in 2008. More generally, norming graphs (as introduced by Hatami) have Sidorenko's property. If there is a vertex in that is neighbors with every vertex in (or vice versa), then has Sidorenko's property as shown by Conlon, Fox, and Sudakov in 2010. This proof used the dependent random choice method. For all bipartite graphs , there is some positive integer such that the -blow-up of has Sidorenko's property. Here, the -blow-up of is formed by replacing each vertex in with copies of itself, each connected with its original neighbors in . This was shown by Conlon and Lee in 2018. Some recursive approaches have been attempted, which take a collection of graphs that have Sidorenko's property to create a new graph that has Sidorenko's property. The main progress in this manner was done by Sidorenko in his 1991 paper, Li and Szegedy in 2011, and Kim, Lee, and Lee in 2013. Li and Szegedy's paper also used entropy methods to prove the property for a class of graphs called "reflection trees." 
Kim, Lee, and Lee's paper extended this idea to a class of graphs with a tree-like substructure called "tree-arrangeable graphs." However, there are graphs for which Sidorenko's conjecture is still open. An example is the "Möbius strip" graph , formed by removing a -cycle from the complete bipartite graph with parts of size . László Lovász proved a local version of Sidorenko's conjecture, i.e. for graphs that are "close" to random graphs in a sense of cut norm. Forcing conjecture A sequence of graphs is called quasi-random with density for some density if for every graph : The sequence of graphs would thus have properties of the Erdős–Rényi random graph . If the edge density is fixed at , then the condition implies that the sequence of graphs is near the equality case in Sidorenko's property for every graph . From Chung, Graham, and Wilson's 1989 paper about quasi-random graphs, it suffices for the count to match what would be expected of a random graph (i.e. the condition holds for ). The paper also asks which graphs have this property besides . Such graphs are called forcing graphs as their count controls the quasi-randomness of a sequence of graphs. The forcing conjecture states the following: A graph is forcing if and only if it is bipartite and not a tree. It is straightforward to see that if is forcing, then it is bipartite and not a tree. Some examples of forcing graphs are even cycles (shown by Chung, Graham, and Wilson). Skokan and Thoma showed that all complete bipartite graphs that are not trees are forcing. Sidorenko's conjecture for graphs of density follows from the forcing conjecture. Furthermore, the forcing conjecture would show that graphs that are close to equality in Sidorenko's property must satisfy quasi-randomness conditions. See also Common graph References Graph theory Conjectures
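Because the displayed formulas in the statement section above did not survive extraction, the following LaTeX block restates the conjectured inequality as a sketch in the standard notation (homomorphism density t, edge count e(H), vertex count v(H)), matching the article's graphon statement and its equivalent finite-graph reformulation rather than introducing anything new.

```latex
% Sidorenko's property for a bipartite graph H, in graphon form and in the
% equivalent finite-graph form used in the reformulation above.
t(H, W) \;\ge\; t(K_2, W)^{e(H)}
  \quad \text{for every graphon } W,
\qquad
\hom(H, G) \;\ge\; p^{\,e(H)}\, n^{\,v(H)}
  \quad \text{for every graph } G \text{ on } n \text{ vertices with average degree } pn.
```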
Sidorenko's conjecture
[ "Mathematics" ]
1,988
[ "Unsolved problems in mathematics", "Graph theory", "Conjectures", "Mathematical relations", "Statements in graph theory", "Mathematical problems" ]
62,480,574
https://en.wikipedia.org/wiki/Alon%E2%80%93Boppana%20bound
In spectral graph theory, the Alon–Boppana bound provides a lower bound on the second-largest eigenvalue of the adjacency matrix of a -regular graph, meaning a graph in which every vertex has degree . The reason for the interest in the second-largest eigenvalue is that the largest eigenvalue is guaranteed to be due to -regularity, with the all-ones vector being the associated eigenvector. The graphs that come close to meeting this bound are Ramanujan graphs, which are examples of the best possible expander graphs. Its discoverers are Noga Alon and Ravi Boppana. Theorem statement Let be a -regular graph on vertices with diameter , and let be its adjacency matrix. Let be its eigenvalues. Then The above statement is the original one proved by Noga Alon. Some slightly weaker variants exist to improve the ease of proof or improve intuition. Two of these are shown in the proofs below. Intuition The intuition for the number comes from considering the infinite -regular tree. This graph is a universal cover of -regular graphs, and it has spectral radius Saturation A graph that essentially saturates the Alon–Boppana bound is called a Ramanujan graph. More precisely, a Ramanujan graph is a -regular graph such that A theorem by Friedman shows that, for every and and for sufficiently large , a random -regular graph on vertices satisfies with high probability. This means that a random -vertex -regular graph is typically "almost Ramanujan." First proof (slightly weaker statement) We will prove a slightly weaker statement, namely dropping the specificity on the second term and simply asserting Here, the term refers to the asymptotic behavior as grows without bound while remains fixed. Let the vertex set be By the min-max theorem, it suffices to construct a nonzero vector such that and Pick some value For each vertex in define a vector as follows. Each component will be indexed by a vertex in the graph. For each if the distance between and is then the -component of is if and if We claim that any such vector satisfies To prove this, let denote the set of all vertices that have a distance of exactly from First, note that Second, note that where the last term on the right comes from a possible overcounting of terms in the initial expression. The above then implies which, when combined with the fact that for any yields The combination of the above results proves the desired inequality. 
For convenience, define the -ball of a vertex to be the set of vertices with a distance of at most from Notice that the entry of corresponding to a vertex is nonzero if and only if lies in the -ball of The number of vertices within distance of a given vertex is at most Therefore, if then there exist vertices with distance at least Let and It then follows that because there is no vertex that lies in the -balls of both and It is also true that because no vertex in the -ball of can be adjacent to a vertex in the -ball of Now, there exists some constant such that satisfies Then, since Finally, letting grow without bound while ensuring that (this can be done by letting grow sublogarithmically as a function of ) makes the error term in Second proof (slightly modified statement) This proof will demonstrate a slightly modified result, but it provides better intuition for the source of the number Rather than showing that we will show that First, pick some value Notice that the number of closed walks of length is However, it is also true that the number of closed walks of length starting at a fixed vertex in a -regular graph is at least the number of such walks in an infinite -regular tree, because an infinite -regular tree can be used to cover the graph. By the definition of the Catalan numbers, this number is at least where is the Catalan number. It follows that Letting grow without bound and letting grow without bound but sublogarithmically in yields References Algebraic graph theory Spectral theory
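The displayed bounds in this article were also lost in extraction. As a hedged summary, the following LaTeX block records the quantities the text refers to in their common asymptotic form; the sharper finite-diameter version of the bound is omitted, and some sources define Ramanujan graphs by bounding all non-trivial eigenvalues in absolute value rather than just the second-largest one.

```latex
% Key quantities for the Alon–Boppana bound on a d-regular graph with
% adjacency eigenvalues lambda_1 >= lambda_2 >= ... >= lambda_n (sketch).
\lambda_1 = d,
\qquad
\lambda_2 \;\ge\; 2\sqrt{d-1} - o(1) \quad (n \to \infty,\ d \text{ fixed}),
\qquad
\rho(T_d) = 2\sqrt{d-1} \ \ \text{(spectral radius of the infinite $d$-regular tree)},
\qquad
\text{Ramanujan condition: } \lambda_2 \le 2\sqrt{d-1}.
```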
Alon–Boppana bound
[ "Mathematics" ]
836
[ "Mathematical relations", "Graph theory", "Algebra", "Algebraic graph theory" ]
62,480,647
https://en.wikipedia.org/wiki/Mining%20sludge
Mining sludge is the waste product of alluvial mining, and in particular hydraulic sluicing. It was particularly prominent in the gold fields of Australia and California in the nineteenth century. In the 1840s in California and the 1850s in Australia, methods for extracting alluvial gold were developed which involved washing soil and gravel through sluice boxes using diverted streams and other water sources. The waste, or tailings, was released into the waterways, forming large deposits of highly mobile sediment. This 'sludge', as it was generally termed, blocked the stream channels, causing flooding and burial of land downstream. The cyanide process also involved releasing sediment contaminated with cyanide, while other sludge deposits contain a variety of contaminants used in the mining process. Large areas of land were affected by sludge, particularly in Victoria, where a Royal Commission was established in 1858–9 to investigate and manage the problem. This resulted in a number of regulations and the construction of large stone-lined sludge channels to concentrate and divert the sludge away from settled areas and buildings. The towns of Bendigo, Ballarat, Castlemaine, Creswick and Maryborough have channelized streams running through them as a result. Ultimately, hydraulic sluicing was banned in 1904 as a result of the continuing environmental damage caused to waterways in places such as Omeo, and a Sludge Abatement Board was established to regulate and repair the problem. References History of mining in Australia Surface mining Hydraulic engineering Australian gold rushes
Mining sludge
[ "Physics", "Engineering", "Environmental_science" ]
304
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
62,485,335
https://en.wikipedia.org/wiki/Doppler%20parameter
The Doppler parameter, or Doppler broadening parameter, usually denoted as $b$, is a parameter commonly used in astrophysics to characterize the width of observed spectral lines of astronomical objects. It is defined as $b = \sqrt{2}\,\sigma$, where $\sigma$ is the one-dimensional velocity dispersion. Given this parameter, the velocity distribution of the line-emitting/absorbing atoms and ions, approximated by a Gaussian, can be rewritten as $P(v)\,\mathrm{d}v = \frac{1}{\sqrt{\pi}\,b}\exp\left(-\frac{v^{2}}{b^{2}}\right)\mathrm{d}v$, where $P(v)\,\mathrm{d}v$ is the probability of the velocity along the line of sight being in the interval $(v, v + \mathrm{d}v)$. The line width is also often specified in terms of the FWHM (full width at half maximum), which is $\mathrm{FWHM} = 2\sqrt{\ln 2}\;b \approx 1.665\,b$. Distribution The Doppler parameters of Lyman-alpha forest absorption lines are in the range 10–100 km s−1, with a median value that decreases with redshift. Analyses of the HST/COS dataset of low-redshift quasars give a median parameter of around (, ). See also Doppler broadening Doppler spectroscopy References Doppler effects Astronomical spectroscopy
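A minimal numerical illustration of the relations above (assuming only the numpy library; the example dispersion of 30 km/s is an arbitrary choice, not a value from the text):

```python
import numpy as np

def b_from_sigma(sigma):
    """Doppler parameter b = sqrt(2) * sigma, in the same units as sigma."""
    return np.sqrt(2.0) * sigma

def fwhm_from_b(b):
    """FWHM of the Gaussian exp(-v^2 / b^2): FWHM = 2 * sqrt(ln 2) * b."""
    return 2.0 * np.sqrt(np.log(2.0)) * b

sigma = 30.0                      # one-dimensional velocity dispersion in km/s (illustrative)
b = b_from_sigma(sigma)           # ~42.4 km/s
print(b, fwhm_from_b(b))          # FWHM ~ 70.6 km/s
```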
Doppler parameter
[ "Physics", "Chemistry" ]
215
[ "Physical phenomena", "Spectrum (physical sciences)", "Astrophysics", "Astronomical spectroscopy", "Doppler effects", "Spectroscopy" ]
62,485,831
https://en.wikipedia.org/wiki/Ted%20Janssen
Theo Willem Jan Marie Janssen (13 August 1936 – 29 September 2017), better known as Ted Janssen, was a Dutch physicist and Full Professor of Theoretical Physics at the Radboud University Nijmegen. Together with Pim de Wolff and Aloysio Janner, he was one of the founding fathers of the N-dimensional superspace approach in crystal structure analysis for the description of quasiperiodic crystals and modulated structures. For this work he received the Aminoff Prize of the Royal Swedish Academy of Sciences (together with de Wolff and Janner) in 1998 and the Ewald Prize of the International Union of Crystallography (with Janner) in 2014. These achievements were a testament to his unique talent, combining a deep knowledge of physics with a rigorous mathematical approach. Their theoretical description of the structure and symmetry of incommensurate crystals using higher-dimensional superspace groups also included the quasicrystals that were discovered in 1982 by Dan Shechtman, who received the Nobel Prize in Chemistry in 2011. The Swedish Academy of Sciences explicitly mentioned their work on this occasion. Early life and education Ted Janssen was born on August 13, 1936, in Vught, near 's-Hertogenbosch in the Netherlands. Already as a young boy he was fascinated by the sciences. He built radios, set up a chemistry lab in the attic of his parental home, was an avid bird watcher and built his own telescopes. He remembered high school as ‘not very inspiring’ and he passed all exams without much effort, but viewed it as a time that truly formed him. Instead of spending time on homework he studied the history and philosophy of science and was very interested in astronomy and astrophysics. During his high school years he also developed a deep appreciation of literature and music. Later he added the visual arts, ballet, and architecture to that list. The enjoyment of the arts was vital to Ted. He called them essential components of life. He started playing the piano, harpsichord and cello in his early twenties. It was too late to become an accomplished musician, but it brought him great joy. In 1954 he started college in Utrecht, studying mathematics and physics with minors in chemistry and astronomy. He again showed his interest in a wide variety of topics by attending lectures in ethics, philosophy, music and sculpture. After his candidate degree he concentrated on theoretical physics, but always included a deep understanding of mathematics in his work. After studying theoretical physics at Utrecht University, Ted graduated under Leon van Hove with his doctoral dissertation ‘The classical limit of quantum mechanical diagram expansions’ and was offered the opportunity to present it at an international conference in Utrecht on ‘Many-body Problems’. No fewer than six Nobel laureates (Yang, Lee, Prigogine, Anderson, Cooper and Schrieffer) were in the audience for Ted’s first presentation, which led to his first publication as well: ’On the classical limit of the diagram expansion in quantum statistics’ by T.W.J.M. Janssen. All his later publications were published as T. Janssen or Ted Janssen. After his doctoral exam in 1960 Ted worked for several years with professors Theo Ruijgrok, Tini Veltman, and John Tjon in Utrecht. Earlier he had developed a friendship with co-student and co-worker Geert Fast. Geert’s promotor van Hove moved from Utrecht to CERN in Geneva, and Geert asked Ted to keep an eye on his little sister, Loes Fast, who was studying veterinary medicine in Utrecht. 
Ted quickly developed strong feelings for Loes and in 1965 they got married. In 1965, he became the first PhD student of Aloysio Janner at the Catholic University Nijmegen and started on the work that resulted in his PhD thesis, Crystallographic Groups in Space and Time, in 1968, thereby already providing the theoretical basis of what would become the superspace approach. Career After his promotion Ted Janssen got a position in Nijmegen at the department of Theoretical Solid State Physics. He immediately was given teaching responsibilities. In the years that followed Ted taught many classes, including electrodynamics, classical mechanics, quantum mechanics, complex functions, crystallographic groups, group theory for physicists, chaos theory, soft modes and solid state physics. Ted was always interested in international collaboration and taught ‘crystallographic groups’ for 6 months in Leuven in 1969. In 1971 Ted accepted an invite from professor Baltensberger to come to the ETH in Zürich for one year. Baltensberger organized weekly meetings between theoretical and experimental physicists. Ted ever since made it a habit to bring theoretical and experimental physicists together on a regular basis. Back in Nijmegen Ted was promoted to associate professor in 1972 and he continued working with Aloysio Janner and Li Ching Chen on space-time symmetry of electromagnetic fields and independently on PUA (projective unitary/anti-unitary) representations. In 1972 Aloysio and Ted also started their long collaboration with Pim de Wolff. Together with Aloysio Janner and Pim de Wolff he was one of the founders of the higher dimensional superspace approach in crystal structure analysis for the description of quasiperiodic crystals and modulated structures. This collaboration and its results received international recognition in 1998 with the Aminoff Prize from the Swedish Academy of Science. The award ceremony was followed by a symposium and the speakers were Aloysio Janner, Ted Janssen, Gervais Chapuis, Mike Glazer, Borje Johansson, Sander van Smaalen, Vaclav Petrcek and Reine Wallenberg. In 1973 and 1975 Ted and Aloysio organized conferences on ‘Group Theoretical Methods in Physics’ in Nijmegen. These are small conferences that attract both mathematicians and physicists. The series still exists. In 1993 Ted was appointed as professor at Utrecht University and in 1994 he took Aloysio’s position in Nijmegen after Aloysio retired. Also in 1994 Ted organized the conference Dyproso 1994 (Dynamic Properties of Solids) in Lunteren. In 1987 Ted joined the board of EMF (European Meeting on Ferro- electricity) and a few years later also the IMF (International Meeting on Ferroelectricity). Ted organized EMF-8 in 1995, in Nijmegen. In 1997 he joined the board of Aperiodic (Modulated Structures, Polytypes and Quasicrystals) and he organized Aperiodic-2000, again in Nijmegen. Ted also was a board member of ICQ, NVK (Nederlandse Vereniging voor Kristallografie – Dutch Union of Crystallography), LOTN (Collaboration of Dutch Institutes for Theoretical Physics), and the Dutch organization of Fundamental Research in Solid State Physics. Ted attended many conferences and was often traveling. In the earlier years his wife Loes worked as a veterinarian and took care of the children, but once all children had left the house Loes would join Ted on many of his travels. 
Ted spent time as a visiting lecturer or professor in Leuven (1969), Zürich (1971-1972), Dijon (1987), Paris, Orsay, Palaiseau (1992), Gif-sur-Yvette (1993), Grenoble (1986 and 1990), Marseille (2001), Nagoya (1992), Lausanne (2003), Beer Sheva (2003) and Sendai (2004-2005 and 2013). In 2014 Aloysio and Ted received a second award, the Ewald Prize of the International Union of Crystallography, one of the most prestigious prizes in crystallography, during the IUCr conference in Montreal. Death Ted Janssen died in Groesbeek, Netherlands, on September 29, 2017, after a short and devastating struggle with leukemia. He did, however, work until the last day, finishing his edits for the second edition of the book "Aperiodic structures: from modulated structures to quasicrystals" that was published in 2018. Selected bibliography Books Papers References 1936 births Theoretical physicists 2017 deaths 20th-century Dutch physicists Crystallographers Mathematical chemistry Academic staff of Radboud University Nijmegen Utrecht University alumni Radboud University Nijmegen alumni
Ted Janssen
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,701
[ "Drug discovery", "Applied mathematics", "Theoretical physics", "Theoretical chemistry", "Crystallography", "Mathematical chemistry", "Molecular modelling", "Crystallographers", "Theoretical physicists" ]
67,700,026
https://en.wikipedia.org/wiki/SPIRE1
SPIRE1 is a protein that interacts with actin monomers and actin nucleating formin proteins. SPIRE1 was first identified in Drosophila melanogaster. SPIRE1 contains an N-terminal KIND domain which binds formins and four actin-binding WH2 domains which nucleate actin filaments. References Proteins
SPIRE1
[ "Chemistry" ]
74
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
67,703,573
https://en.wikipedia.org/wiki/Organotechnetium%20chemistry
Organotechnetium chemistry is the science of describing the physical properties, synthesis, and reactions of organotechnetium compounds, which are organometallic compounds containing carbon-to-technetium chemical bonds. The most common organotechnetium compounds are coordination complexes used as radiopharmaceutical imaging agents. In general, organotechnetium compounds are not typically used in chemical reactions or catalysis due to their radioactivity. Research on technetium chemistry is often done in conjunction with rhenium as an isoelectronic, non-radioactive alternative to technetium. Brief history Technetium was first used as a radiopharmaceutical in 1961. Of the radiopharmaceuticals in clinical use for SPECT (single photon emission computed tomography), a majority of the compounds are 99mTc complexes. Three generations of technetium radiopharmaceuticals currently exist and are used. First-generation agents do not localize specifically and are considered perfusion agents. Second-generation agents have a peptide-based targeting portion. The third generation of technetium radiopharmaceuticals features organotechnetium compounds that can localize in the body in a biomimetic manner. Examples A vast majority of technetium compounds used in radiopharmaceutical imaging and diagnosis are inorganic coordination complexes. However, a number of “classical” organometallic organotechnetium compounds, specifically containing carbon–technetium bonds, are in clinical use. These organotechnetium compounds are mostly seen as technetium tricarbonyl compounds and technetium cyclopentadienyl compounds. One of the most prominent radiopharmaceutical compounds in clinical use is Cardiolite®, also known as 99mTc-sestamibi. This organotechnetium compound is applied for myocardial imaging. The d6 electron configuration is highly stable due to its low oxidation state. The Tc(I) complex is further stabilized by the high reducing potential of the isonitrile ligands. The above piano-stool organotechnetium complex is a third-generation radiopharmaceutical. The cyclopentadienyl ligand acts as a bioisostere for a phenyl group in the amino acid phenylalanine. Synthesis Radioactive 99mTc is obtained in the pertechnetate form in dilute aqueous solution from 99Mo/99mTc generators. Pertechnetate can then be made into more useful carbonyl and hydrate precursors for subsequent synthesis into technetate complexes. As the starting radiometals are most available in aqueous solution due to the method of isolation, the chemistry for synthesis of technetate compounds must be done in aqueous solution. The study of technetium compounds is typically done in conjunction with rhenium as an isoelectronic and non-radioactive alternative to technetium. Precursors For 99mTc and 188Re, the synthesis of compounds starts with pertechnetate or perrhenate in saline at low concentration, obtained from 99Mo/99mTc and 188W/188Re generators. The aquo tricarbonyl precursors are useful for accessing Tc and Re complexes. The metals have a d6 low-spin electronic configuration, providing high kinetic stability, and highly stable M-C bonds. Consequently, the three CO ligands always remain coordinated, while ligands readily replace the three water molecules. Typical organotechnetium compounds thus feature the tricarbonyl motif. Typical methods of organometallic compound synthesis are difficult to utilize. To be useful as a radiopharmaceutical, the reaction should be done in an aqueous saline solution that can be injected into the body intravenously. 
Double Ligand Transfer A double ligand transfer (DLT) reaction was developed by Martin Wenzel for the synthesis of organotechnetium/organorhenium complexes. The reaction features the synthesis of organotechnetium piano-stool compounds from ferrocene. The reaction was further studied and optimized by Katzenellenbogen. Unfortunately, the utility of this method in the synthesis of radiopharmaceuticals is limited by the use of organic solvent. Mechanism This mechanism is proposed to proceed by ring slippage. First, reduction and carbonylation of the pertechnetate/perrhenate with CrCl3 and/or Cr(CO)6 forms the six-coordinate intermediate. Subsequent reaction with the substituted ferrocene through ring-slipped, bridged intermediates then gives the product. The transfer of the more electron-deficient ring is favored by the stabilization of the transition state of the η5–η3 ring slip of ferrocene. Metal-Mediated Retro Diels-Alder Aqueous synthesis enables the development of medically relevant radiopharmaceuticals. The first aqueous synthesis of fac-[99mTc(η5 -Cp-C(O)CH3)(CO)3], described by the Alberto lab, utilized a metal-mediated retro Diels-Alder reaction to synthesize the organotechnetium complexes. Mechanism In a step-wise manner, the carboxylate first coordinates to technetium, followed by coordination to the adjacent cyclopentadiene (Path A). The reaction is thermodynamically driven, given a strong electronic interaction between [99mTc(CO)3]+ and the cyclopentadiene. The favorable formation of the {(η5-Cp)Tc} unit as a driving force for formation of the product 2 prompted the use of the Diels-Alder dimer (HCp-COOH)2 (Thiele’s acid) as a precursor to the cyclopentadiene. Thermal cracking of 3 typically requires T >160 °C. Reaction of 3 and 1 at 95 °C for 30 min in buffer gave quantitative formation of 2. As no free HCp-COOH was observed, in situ retro Diels-Alder reaction and subsequent entry into path A was excluded. Examples The metal-mediated retro Diels-Alder reaction suggests a general approach to [(Cp-R)99mTc(CO)3], enabling access to a variety of R groups on the Cp ring. With the development of this retro Diels-Alder method for the synthesis of 99mTc and Re complexes in aqueous media by the Alberto lab, the labeling of biomolecules with piano-stool-like complexes is now possible, enabling the development of novel radiopharmaceuticals. Reactivity Technetium has been shown to react similarly to osmium, being able to catalyze a cis-dihydroxylation. References Organometallic compounds
Organotechnetium chemistry
[ "Chemistry" ]
1,410
[ "Organic compounds", "Organometallic compounds", "Organometallic chemistry", "Inorganic compounds" ]
59,785,705
https://en.wikipedia.org/wiki/Elisabeth%20Larsson%20%28scientific%20computing%29
Elisabeth Larsson (born December 30, 1971) is a Swedish applied mathematician and numerical analyst. She is a professor in the Department of Information Technology of Uppsala University, and the director of the Uppsala Multidisciplinary Centre for Advanced Computational Science. Research Larsson's research involves the applications of radial basis functions to scientific computing. It has included work on the propagation of sound waves through water, pricing of financial options, and simulation of the Earth's climate. Education and career Larsson was born on December 30, 1971 in Ljusdal, and went to high school in Ljusdal. She earned a master's degree in engineering physics at Uppsala University in 1994, and completed a Ph.D. in numerical analysis at Uppsala University in 2000. Her dissertation was Domain Decomposition and Preconditioned Iterative Methods for the Helmholtz Equation. Her doctorate was supervised by Kurt Otto, with as outside examiner. She became a junior researcher in the Department of Information Technology at Uppsala University in 2001, and an assistant professor in 2007. She was promoted to senior lecturer (associate professor) in 2011 and professor in 2020. Recognition In 2007, Larsson was one of two winners of the Göran Gustafsson Award for outstanding young Swedish scientists. References External links Home page 1971 births Living people 20th-century Swedish mathematicians 21st-century Swedish mathematicians Applied mathematicians Numerical analysts Academic staff of Uppsala University 20th-century women mathematicians 21st-century women mathematicians Swedish women mathematicians Scientific computing researchers 20th-century Swedish women
Elisabeth Larsson (scientific computing)
[ "Mathematics" ]
304
[ "Applied mathematics", "Applied mathematicians" ]
59,786,049
https://en.wikipedia.org/wiki/Jacobo%20Bielak
Jacobo Bielak is a Mexican-born earthquake engineer. Bielak was raised in Mexico City and earned his bachelor of science degree from the National Autonomous University of Mexico in 1963. He subsequently attended Rice University in the United States, completing his master's degree in 1966, followed by a doctorate from the California Institute of Technology, graduating in 1971. Bielak taught at Carnegie Mellon University, where he was named Hamerschlag University Professor of Civil and Environmental Engineering, and was granted emeritus status upon retirement in 2018. In 2018, he and his collaborators completed the Quake Project, which helped predict how earthquakes impact urban infrastructure. His early work was the basis for the soil-structure seismic provisions within the National Earthquake Hazards Reduction Program (NEHRP). Bielak was awarded the Gordon Bell Prize in 2003 for computational research on the effects of future earthquakes in Los Angeles. He was elected to membership of the United States National Academy of Engineering in 2010 "for advancing knowledge and methods in earthquake engineering and in regional-scale seismic motion simulation." References 20th-century American engineers 21st-century American engineers Mexican civil engineers American structural engineers Environmental engineers National Autonomous University of Mexico alumni Rice University alumni Mexican emigrants to the United States California Institute of Technology alumni Engineers from Mexico City Members of the United States National Academy of Engineering Year of birth missing (living people) Living people
Jacobo Bielak
[ "Chemistry", "Engineering" ]
278
[ "Environmental engineers", "Environmental engineering" ]
54,942,000
https://en.wikipedia.org/wiki/MAPK%20networks
Mitogen-activated protein kinase (MAPK) networks are the pathways and signaling of MAPK, which is a protein kinase that phosphorylates the amino acids serine and threonine. MAPK pathways carry out both positive and negative regulation in plants. Positive regulation by MAPK networks helps the plant cope with stresses from the environment. Negative regulation by MAPK networks pertains to a high quantity of reactive oxygen species (ROS) in the plant. MAPK networks Mitogen-activated protein kinase (MAPK) networks can be found in eukaryotic cells. MAPK pathways in plants are known to regulate cell growth, cell development, cell death, and cell responses to environmental stimuli. Only a few of the MAPK mechanism components are known and have been studied. Components such as the Arabidopsis MAPKKKs YODA, ANP2/ANP3, and MP3K6/MP3K7 function in the development of the cell. MEKK1 and ANP1 function in the response to environmental stress. Unfortunately, only eight out of the twenty mitogen-activated protein kinases have been studied. The most commonly studied MAPKs are MPK3, MPK4, and MPK6, which are activated by a diversity of stimuli including abiotic stresses, pathogens, and oxidative stressors. MPK4 negatively regulates biotic stress signaling, while MPK3 and MPK6 function as positive mediators of defense responses. The plant needs these positive and negative mediators for normal plant growth and development, as shown by the severely dwarfed phenotype of mpk4 and the embryo-lethal phenotype of mpk3 and mpk6 mutants. Positive regulation pathways in plants Plants have many protection mechanisms to cope with stresses from the environment, which include ultraviolet light, cold or hot weather, windy days, and mechanical wounding. There are multiple pathways, but one pathway that plants have been able to develop is a self-defense mechanism based on recognizing pathogens through pathogen-associated molecular patterns (PAMPs) via cell surface-located pathogen-recognition receptors. These receptors induce intracellular signal pathways within the plant cells, while also resulting in PAMP-triggered immunity. Responses to PAMPs target broadly instead of specifically. This immunity requires downstream components via the MAPK cascade to activate the MAP kinases. The flagellin peptide flg22 triggers a rapid and strong activation of MPK3, MPK4, and MPK6. MPK4 and MPK6 can be activated by harpin proteins. MPK3 and MPK6 are very similar proteins and function as regulators in abscission, stomatal development, signaling of various abiotic stresses, and defense responses to certain pathogens. Experimentation has proposed that the MAPK module MEKK1-MKK4/MKK5-MPK3/MPK6 may be responsible for flg22 signal transmission. All of the proposed modules appear to be correct except for MEKK1, because plants with mekk1 mutated have a compromised flg22-triggered activation of MPK4, yet they have normal activation of MPK3 and MPK6. Data have shown that the MAPK cascade is composed of MKK4/MKK5 and MPK3/MPK6 in response to fungal pathogens. The observation shows that the activation of MPK3/MPK6 in conditional gain-of-function plants for MKK4/MKK5 or MEKK1/MKKKa is sufficient to induce camalexin, which is a major phytoalexin in Arabidopsis. The stomata are considered to be the entry point for pathogenic invaders because microbial invaders enter the plant at the stomata. A recent study has shown that MAPK cascades play a role in abiotic and biotic stress responses. 
The main pathways in stomatal development and dynamics are MPK3 and MPK6. During a drought, the stomata close; this closure is believed to be mediated by the phytohormone abscisic acid and involves MKK1, MPK3, and MPK6. Another way of closing the stomata is through a process called pathogen-induced closure, which is an innate response of the plant. Xanthomonas campestris (Xcc) excretes a chemical that reverts stomatal closure that was caused by bacteria and abscisic acid (ABA). Most stomata close in the presence of ABA, but some are unresponsive to bacteria. In Arabidopsis, Xcc does not revert bacteria-induced or ABA-induced stomatal closure. Scientists are not certain if MAPK cascades are responsible for the signaling, so further investigation is needed. Negative regulation The identification of MEKK1-MKK1/2-MPK4 in pathogen signaling has been a tremendous finding. mekk1, mkk1/mkk2 double and mpk4 mutants are dwarfed and accumulate too many reactive oxygen species. These phenotypes are considered to result from enhanced SA levels and are partially reversed by bacterial SA hydrolase. mekk1, mkk1/mkk2 double and mpk4 mutants show spontaneous cell death, expression of pathogenesis-related genes, and increased resistance to pathogens. MEKK1 appears to act in deregulation pathways that are unknown and do not involve MKK1/MKK2 or MPK4. MEKK1 interacts with WRKY53, which is responsible for part of the mekk1 gene set, and alters the activity of WRKY53 in a short portion of MAPK signaling. Substrates of MPK4 include three proteins: WRKY33, WRKY25, and MKS1. Ternary MKS1-MPK4-WRKY33 complexes have been identified in nuclear extracts. Recruitment of WRKY33 depends on the phosphorylation of MPK4. Once activated, MPK4 phosphorylates MKS1, which releases WRKY33 from the ternary complex. The free WRKY33 is believed to induce transcription of target genes, allowing for a negative regulation by MPK4. Pathogens have developed mechanisms that inactivate PAMP-induced signaling pathways through the MAPK networks. Andrea Pitzschke and her colleagues claim “AvrPto and AvrPtoB interact with the FLS2 receptor and its co-receptor BAK1. AvrPtoB catalyzes the polyubiquitination and subsequent proteasome-dependent degradation of FLS2” (Pitzschke 3). AvrPto interacts with BAK1 and interrupts the binding of FLS2. Pseudomonas syringae has HopAI1, a phosphothreonine lyase, which dephosphorylates the threonine residue at the upstream MAPKKs. HopAI1 interacts with MPK3 and MPK6, allowing for flg22 activation to occur. Certain soil-borne pathogens carry flagellin variants that cannot be detected by FLS2, but an innate immune response is still triggered. This immune response has been shown to come from the EF-Tu protein. The receptors for flg22 and elf18, FLS2 and EFR, are in the same subfamily of LRR-RLKs, LRRXII. This means that elf18 and flg22 induce extracellular alkalization, rapid activation of MAPKs, and gene responses that are similar to each other. Although there appears to be an important relationship between MAPKs and EF-Tu-triggered defense, the evidence remains unclear. The reason for this unclear relationship is Agrobacterium tumefaciens, which inserts segments of its DNA into plant DNA. EFR1 mutants do not recognize EF-Tu, but there is no research on MAPK activities and efr1. Initiation of defense signaling can be a positive effect for plant pathogens because activating MPK3 in response to flg22 causes phosphorylation and translocation of VirE2-interacting protein 1 (VIP1). 
VIP1 serves as a shuttle for the pathogenic T-DNA, but the induction of defense genes can occur as well. This allows for the spreading and cessation of the pathogen in the plant, but the pathogen can overcome the problem by attacking VIP1 for proteasome degradation by VirF, which is a virulence factor that encodes an F-box protein. References Signal transduction
MAPK networks
[ "Chemistry", "Biology" ]
1,785
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
54,948,509
https://en.wikipedia.org/wiki/Electromagnetic%20vortex%20intensifier%20with%20ferromagnetic%20particles
Electromagnetic vortex intensifier with ferromagnetic particles (vortex layer device, electromagnetic mill) consists of an operating chamber (pipeline) with a diameter of 60–330 mm, located inside an inductor with a rotating electromagnetic field. The operating chamber contains cylindrical ferromagnetic particles 0.5–5 mm in diameter and 5–60 mm in length, ranging from tens to several thousand pieces (0.05–20 kg), depending on the dimensions of the operating chamber of the intensifier. History of electromagnetic vortex intensification Electromagnetic devices with a vortex layer were proposed in 1967 by D.D. Logvinenko and O.P. Shelyakov. The monograph "intensification of technological processes on devices with a vortex layer", written by these authors, showed the effective use of these devices in: mixing of liquids and gases mixing of loose materials dry grinding of solids (micro-resin) grinding and dispersion of solids in liquid media activation of substance surface implementation of chemical reactions changes in the physical and chemical properties of substances Following this research, these intensifiers found their application in many researches and developments. Physical processes in electromagnetic vortex intensifiers Intensification of technological processes and chemical reactions is achieved due to intensive mixing and dispersion, acoustic and electromagnetic treatment, high local pressure and electrolysis of processed components. Electromagnetic devices with a vortex layer with ferromagnetic elements accelerate the reactions 1.5-2 times; reduce the consumption of reagents and electricity by 20%. The grinding effect is achieved by the motion of ferromagnetic particles and their free collision with each other, and a constrained collision between the particles and a body. The degree of grinding is 0.5 μm (with an initial size of 20 mm). At present, the electromagnetic devices with a vortex layer with ferromagnetic elements actually exist (D.D. Logvinenko himself designed and produced more than 2000 pieces), their principle is also implemented in some technological lines. Industrial application of electromagnetic vortex intensifiers Examples of industrial applications of these devices for intensifying processes are: preparation of food emulsions preparation of multicomponent suspensions with vulcanizing and gelling agents (sulfur, zinc oxide, soot, kaolin, sodium silicofluoride) in latex sponge production; Obtaining suspensions of titanium dioxide used as matting agent for chemical fibers wastewater treatment from acids, alkalis, hexavalent chromium compounds, nickel, iron, zinc, copper, cadmium, other heavy metals, cyanide compounds, and other contaminants production of greases and emulsions drilling fluid preparation preparation of kerosene in water emulsions, silicone, rubber, latex, etc. Electromagnetic vortex intensifier grinds and regrinds coal, alumina-containing slag, quartz sand, technical diamonds, cellulose, chalk, wood flour, fluoroplastics, etc. Also, it can be used for decontamination of agricultural animal waste. 
Issues of electromechanics and device design The main parameters that characterize the rotating magnetic field created by a three-phase inductor in the working area of the apparatus in the absence of ferromagnetic particles include the number of pairs of magnetic poles, the angular speed of their rotation, and the magnitude and speed of rotation of the magnetic induction vector hodograph, which in real devices is an ellipse with eccentricity increasing when approaching the surface of the working chamber. It is advisable to characterize the magnetic properties of the vortex layer by volume-averaged values; a convenient parameter for energy control of the operation of the vortex layer is its power density. Devices AVS-100, AVS-150, etc. are designed for uniform distribution of ferromagnetic particles throughout the working area and have a bipolar inductor. When developing an inductor for these devices, the salient-pole design of induction stirrers for liquid steel was chosen as an analogue. The choice of the salient-pole inductor design was associated mainly with simplified manufacturing technology and ease of operation, repair and cooling. In the central part of the working area of these devices, the magnetic field in the absence of ferromagnetic particles is close to uniform: the hodograph of the magnetic induction vector in this area is close to a circle, coinciding with it in the center of the working area; the modulus of the magnetic induction vector is approximately 0.12 T (in various devices from 0.1 to 0.15 T); and the angular speed of its rotation is 314 radians per second, which corresponds to a rotation speed of 3000 rpm. In a working vortex layer, the modulus of the averaged magnetic induction vector reaches values of 0.2 T and lags behind the external field strength by a certain phase angle. The specific power of the vortex layer in various modes for these devices ranges from 0.1 to 1.5 kW per cubic decimeter of the working area. The devices have dual-circuit oil-water cooling and power capacitors to compensate for the reactive power of the inductor, and are powered from a 380 V, 50 Hz network. Other design features of the devices are described in detail in the monograph. Subsequently, the design of these and similar devices was mastered, modified and expanded by other manufacturers and developers. Currently, devices use both salient-pole inductors and inductors with distributed windings, similar to the stators of electric motors; different types of cooling and power capacitors are used. If necessary, the device includes power converters of voltage and frequency from the supply network. Methods for monitoring and controlling the operation of the vortex layer and technological lines based on it are also being improved. In scientific and technical developments related to the electromechanics of devices of the class under consideration, computer modelling of the inductor and of the behavior of ferromagnetic particles is sometimes used. An analytical model of the force effect of a circular rotating magnetic field on a magnetic particle in devices with an external electric inductor with a different number of magnetic poles is considered in the literature. 
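The 314 rad/s and 3000 rpm figures quoted above follow directly from the 50 Hz supply and the bipolar (one pole-pair) inductor. A minimal sketch of that relation (plain Python; the function and variable names are illustrative, not taken from the text):

```python
import math

def synchronous_speed(f_hz, pole_pairs):
    """Synchronous speed of a rotating field: n = 60*f/p in rpm, omega = 2*pi*f/p in rad/s."""
    rpm = 60.0 * f_hz / pole_pairs
    omega = 2.0 * math.pi * f_hz / pole_pairs
    return rpm, omega

rpm, omega = synchronous_speed(50.0, 1)   # bipolar inductor = one pole pair, 50 Hz mains
print(rpm, omega)                         # 3000.0 rpm, ~314.16 rad/s
```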
Recently, a method has been proposed, and examples have been given of rapid engineering evaluation and analysis of the characteristics of the working rotating magnetic field of various cylindrical inductors with a longitudinal winding. References Electromagnetic components Electrochemical engineering Fluid dynamics Fluid technology Ferromagnetism Drilling fluid Electrolysis Chemical reactions
Electromagnetic vortex intensifier with ferromagnetic particles
[ "Chemistry", "Materials_science", "Engineering" ]
1,349
[ "Chemical engineering", "Electrochemical engineering", "Fluid technology", "Magnetic ordering", "Electrochemistry", "Ferromagnetism", "Mechanical engineering by discipline", "nan", "Electrolysis", "Piping", "Electrical engineering", "Fluid dynamics" ]
66,219,272
https://en.wikipedia.org/wiki/Weld%20tests%20for%20friction%20welding
Quality requirements for welded joints depend on the application; for example, in the space or aviation industry weld defects are not allowed. Researchers therefore aim to obtain good-quality welds. There are many scientific articles describing weld tests, e.g. hardness and tensile tests. The weld structure can be examined by optical microscopy and scanning electron microscopy. The computer-based finite element method (FEM) is used to predict the shape of the flash and the interface, among other features, not only for rotary friction welding (RFW) but also for friction stir welding (FSW), linear friction welding (LFW), FRIEX, and others. Temperature measurements are also carried out for scientific purposes, e.g. by using thermocouples or sometimes thermography; mentions of such measurements are generally found in research materials and journals. See also Friction welding Temperature Heat-affected zone Weld quality assurance References Nondestructive testing Quality control Welding
Weld tests for friction welding
[ "Materials_science", "Engineering" ]
198
[ "Nondestructive testing", "Materials testing", "Welding", "Mechanical engineering" ]
70,611,659
https://en.wikipedia.org/wiki/Ormanc%C4%B1k%2C%20Savur
Ormancık () is a neighbourhood in the municipality and district of Savur, Mardin Province of Turkey. The village is populated by Kurds of the Dereverî tribe and had a population of 31 in 2021. History On 21 January 1994 it was reportedly attacked with grenades by the PKK. Nineteen people, composed of nine women, six children and four village guards - were killed in what Human Rights Watch described as a "massacre." There is speculation that the event was a chemical attack. References Kurdish settlements in Mardin Province Neighbourhoods in Savur District Massacres in Turkey Massacres in 1994 1994 murders in Turkey Kurdistan Workers' Party attacks Chemical weapons attacks
Ormancık, Savur
[ "Chemistry" ]
139
[ "Chemical weapons attacks", "Chemical weapons" ]
70,615,665
https://en.wikipedia.org/wiki/Alexander%20F.%20Wells
Alexander Frank Wells (2 September 1912 – 28 November 1994), or A. F. Wells, was a British chemist and crystallographer. He is known for his work on structural inorganic chemistry, which includes the description and classification of structural motifs, such as polyhedral coordination environments, in crystals obtained from X-ray crystallography. His work is summarized in a classic reference book, Structural Inorganic Chemistry, which first appeared in 1945 and has since gone through five editions. In addition, his work on crystal structures in terms of nets has been important and inspirational for the field of metal-organic frameworks and related materials. Education and career Wells studied at The Queen's College, University of Oxford, and obtained his BA and MA in 1934 and 1937, respectively. He then moved to the University of Cambridge, where he obtained his PhD in X-ray crystallography in 1939, under the supervision of J. D. Bernal. His PhD thesis was titled The Crystal Structures of Certain Complex Metallic Compounds. He worked as a research scientist at Cambridge from 1937 to 1940 and at the University of Birmingham from 1940 until 1944. He moved to industry afterwards, working as a senior research associate at Imperial Chemical Industries from 1944 to 1968. Wells was not interested in the senior administrative jobs offered to him in industry, so he moved back to academia and became a professor of chemistry at the University of Connecticut in the US from 1968 until his retirement in 1980. Personal life Wells was known to his friends and family as Jumbo. He was an accomplished pianist. He married Ada Squires, then a widow, in 1939. During World War II, Wells worked on developing phosphors to be used in cathode-ray tubes and in helping service people move about in the dark. Bibliography Paper series Other selected papers Books See also Periodic graph (crystallography) Coordination geometry Michael O'Keeffe (chemist) List of books about polyhedra References British chemists 1912 births 1994 deaths British crystallographers University of Connecticut faculty Alumni of the University of Oxford Alumni of the University of Cambridge Imperial Chemical Industries people Crystallographers Inorganic chemists 20th-century British chemists
Alexander F. Wells
[ "Chemistry", "Materials_science" ]
434
[ "Crystallography", "British inorganic chemists", "Inorganic chemists", "Crystallographers" ]
70,620,666
https://en.wikipedia.org/wiki/Waxman-Bahcall%20bound
The Waxman-Bahcall bound is a computed upper limit on the observed flux of high-energy neutrinos based on the observed flux of high-energy cosmic rays. Since the highest-energy neutrinos are produced from interactions of ultra-high-energy cosmic rays, the observed rate of production of the latter places a limit on the former. It is named for John Bahcall and Eli Waxman. Cosmic rays The Waxman-Bahcall limit comes from the analysis of cosmic rays at various energy levels and their respective fluxes. Cosmic rays are high-energy particles, like protons or atomic nuclei, that move at near the speed of light. These rays can come from a variety of sources such as the Sun, the Solar System, the Milky Way galaxy, or even further beyond. Upon entry into our atmosphere, these cosmic rays interact with atoms in the atmosphere, initiating cosmic-ray air showers. These showers are cascades of secondary particles, including muons and neutrinos. These atmospheric neutrinos can be studied, and a general plot of the energy of said neutrinos and their fluxes can be determined and created. Note that the energy spectrum of atmospheric neutrinos is different from the cosmic-ray energy spectrum; also, the Waxman-Bahcall bound does not apply to atmospheric neutrinos, but to (ultra-)high-energy neutrinos from outside of our galaxy. During Waxman's and Bahcall's research and work into neutrinos, there seemed to be a gap of very high-energy neutrinos, past the atmospheric neutrino limit but still below the GZK limit, meaning there exists some extragalactic high-energy neutrino source yet to be detected. Atmospheric neutrinos Atmospheric neutrinos are produced in the atmosphere, about 15 km above the Earth's surface. They are the result of particles, usually protons or light atomic nuclei, hitting other particles in the atmosphere and causing a shower of neutrinos toward the Earth's surface. Atmospheric neutrinos were successfully detected in the 1960s, when experiments were able to find muons that resulted from these neutrinos. From that, they were able to find the energy of the neutrinos and the flux associated with them. Currently, neutrinos are able to be detected by many different experiments, such as the IceCube Lab, allowing for more accurate measurements of their energy and fluxes. GZK limit The GZK limit exists as a limit on the highest possible cosmic-ray energy that can travel without interaction through the universe, and cosmic rays above around 5 × 10^19 eV can reach Earth only from the nearby universe. The limit exists because at these higher energies, and at travel distances further than 50 Mpc, interactions of cosmic rays with the CMB photons increase. With these interactions, the new cosmic-ray product particles have lower and lower energy, and cosmic rays above a few 10^20 eV do not reach Earth (unless their source is very close). Important in this context is that the GZK interactions also produce neutrinos, called cosmogenic neutrinos. Their energy is typically one order of magnitude below the energy per nucleon of the cosmic-ray particle (e.g., a 10^20 eV proton would lead to 10^19 eV neutrinos, but a 10^20 eV iron nucleus with 56 nucleons would lead to neutrinos of 56 times lower energy than in the proton case). Waxman-Bahcall upper bound The Waxman-Bahcall upper bound is derived from a problem where neutrinos were discovered to have a higher energy than the atmospheric limit but still below the GZK limit discussed above. 
Unsure about what possible source could be the cause of these neutrinos, Waxman and Bahcall worked to rule out other possible explanations, such as assistance from magnetic fields, redshift corrections, and sources of high energy outside the Milky Way Galaxy. The current upper bound on the muon neutrino intensity is denoted Imax, with the expected neutrino intensity being 1/2 Imax. Redshift losses Initially, in the derivation of the muon neutrino intensity above, redshift factors were ignored. However, if a correction factor is included, it could also be found that the neutrinos detected started out at higher energies and were detected at a lower energy due to redshift. It is known, however, that if redshift is to be the prime factor in the limit, the proton would have had to originate at a redshift z of less than 1. If the particle started from outside this range, as dictated by the GZK limit, other interactions would take place during the particle's travel and make it so that the neutrinos detected would be far below the threshold discussed. A correction factor multiplying Imax was derived to account for this. Working with nearby galaxies and clusters, it was found that there is no significant change in the limit from the redshift correction, and that any neutrinos exceeding the expected limit would have to come from some other external source. Magnetic fields Neutrino source Another factor to consider was the addition of magnetic fields at the source of the neutrinos and how they might allow for increased energy of an incoming charged particle from a cosmic ray. If protons can be prevented from leaving the source by a magnetic field, then only neutrinos would be allowed to go through, meaning we would be able to see higher-energy neutrinos. Bahcall and Waxman quickly ruled this out as a permanent option: when there is a photo-meson interaction, a charged pion is created, but a proton is then also turned into a neutron. The neutron will not be affected by the field in any way and will travel about 100 kpc at high energies. This makes it impossible to exceed the upper bound found earlier by Waxman and Bahcall. Intergalactic magnetic field Another theory is that the intergalactic magnetic field would be able to change the direction of the protons on their way to Earth, while the neutrinos arrive along a relatively straight line. To examine this theory, Waxman and Bahcall started with a proton traveling with energy E in a magnetic field B with correlation length λ. If the proton travels a distance of λ, the resulting angle of deflection is of order λ/RL, where RL is the Larmor radius. If the angle is kept small, the deflection accumulated while propagating a distance l grows as the square root of the number of correlation lengths crossed. Plugging in values for time, which would give us a maximum propagation distance that the particle could travel in that time, we find that the existence of a uniformly distributed intergalactic magnetic field would have no effect on the limit. Possible causes Active galactic nuclei jet models When looking out into the galaxy and starting to think about what could have caused such high-energy neutrinos to appear, it was thought that jets from Active Galactic Nuclei (AGN) were the main cause. Looking further into the details, Waxman and Bahcall saw that the intensities for jets from AGNs are two times higher in magnitude than the limit discussed above. 
Initially, it was thought that the photons and protons were accelerated into the jets thanks to Fermi acceleration with an energy spectrum: for both protons and photons (simply plug in the values for photons or protons for either quantity). This implies the optical depth is related to Ep and assuming a small optical depth allows us to have the neutrino spectrum of: Later, it was realized that the decay of neutral pions, which are created along with charged pions, cause a high energy gamma ray emission. It was then found that the large energies being seen was not a result of Compton scattering of protons and photon, but of neutral pion decay. Once this emission was fixed, the intensity of the neutrinos found from AGN was under the max limit discussed above, and AGN then became a valid cause for these higher energy neutrinos if the area was optically thin and the energy burst was cause by a single interaction of a decaying neutral pion. Gamma ray bursts The Gamma-Ray Bursts (GRB) fireball model has also been another candidate for the reasoning behind higher energy neutrinos. The high energy neutrino model already took multiple variables into account and was a match for the limit discussed above. Similar to AGN's, the GRB's are optically thin, however, unlike AGN's which needed some more assumptions to be made on how the energy was being expelled and reached to match the flux calculations, the GRB model was able to correctly match this limit. The fireball model works by having the initial burst of the GRB, but then has another shock later on which goes onto explain the afterglow associated with GRB's. This second shock continues to push particles away and allows them to reach detectors on Earth within the limits discussed earlier. Bibliography Alemany, R.; Burrage, C.; et al. "(2019). Summary Report of Physics Beyond Colliders at CERN". Gives a sense of the colliders at CERN that can help with this data set Bradascio, F.. (2019). "Search for high-energy neutrinos from AGN cores". Proceedings of Science. Another peer reviewed article, this source allows for further information into parts of AGN that is included in the initial article. Kachelriess, M.. (2022). "Extragalactic cosmic rays". FOS: Physical Science A journal reviewed article describing extra-galactic cosmic rays Kachelriess, M.; Semikoz, D.V. (2019). "Cosmic ray models". Progress in Particle and Nuclear Physics, 109, 103710. Peer reviewed journal that gives a broad understanding of cosmic ray models and different experiments Kajita T. Atmospheric neutrinos and discovery of neutrino oscillations. Proc Jpn Acad Ser B Phys Biol Sci. Gives information about neutrinos - journal reviewed document Kimura, S.. (2022). "Neutrinos from Gamma-ray Bursts". FOS: Physical Sciences. The peer reviewed journal article above works to give a further insight in to high energy neutrinos from GRB's, which is discussed in the original article by Bahcall and Waxman and gives a further insight into the paper Letessier-Selvon, A. (2001). "Establishing the GZK cutoff with ultra high energy tau neutrinos". AIP Conference Proceedings. This is a peer reviewed source, and it works to add details about other factors and components necessary for our analysis Piran, T. (1999). Gamma-ray bursts and the fireball model. Physics Reports, 314(6), 575–667. journal reviewed article that gives a brief insight into how the fireball method works. Kronberg, P. (2003). "Intergalactic Magnetic Fields". 
Physics Today, 55 The journal reviewed article above gives more information on the intergalactic magnetic fields mentioned above in the paper Waxman, E; Bahcall, J (1998). "High energy neutrinos from astrophysical sources: An upper bound". Physical Review D, 59(2). This paper is a peer reviewed paper made by the authors Waxman and Bachall and gives a good overview of the overall topic chosen References Neutrinos Physical quantities Cosmic rays
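To make the Larmor-radius estimate used in the intergalactic magnetic field discussion above concrete, here is a minimal sketch (assuming only the numpy library; the 1 nG field strength, 1 Mpc coherence length, and the proton energies are illustrative assumptions, not values taken from the article, and only the single-coherence-length deflection λ/RL is computed):

```python
import numpy as np

EV = 1.602176634e-19        # joules per electronvolt
E_CHARGE = 1.602176634e-19  # elementary charge, coulombs
C = 2.99792458e8            # speed of light, m/s
MPC = 3.0857e22             # metres per megaparsec
NANOGAUSS = 1e-13           # tesla (1 G = 1e-4 T, so 1 nG = 1e-13 T)

def larmor_radius(E_eV, B_nG, Z=1):
    """Larmor radius R_L = E / (Z e B c) of an ultra-relativistic charged particle, in metres."""
    return (E_eV * EV) / (Z * E_CHARGE * B_nG * NANOGAUSS * C)

def deflection_per_cell(E_eV, B_nG, coherence_Mpc, Z=1):
    """Deflection accumulated over one field coherence length, theta ~ lambda / R_L, in radians."""
    return coherence_Mpc * MPC / larmor_radius(E_eV, B_nG, Z)

print(larmor_radius(1e18, 1.0) / MPC)                   # ~1.1 Mpc for a 1e18 eV proton in a 1 nG field
print(np.degrees(deflection_per_cell(1e20, 1.0, 1.0)))  # ~0.5 degrees per Mpc cell at 1e20 eV
```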
Waxman-Bahcall bound
[ "Physics", "Mathematics" ]
2,467
[ "Physical phenomena", "Physical quantities", "Quantity", "Astrophysics", "Radiation", "Physical properties", "Cosmic rays" ]
70,621,516
https://en.wikipedia.org/wiki/Cytokine%20delivery%20systems
Cytokines are polypeptides or glycoproteins that help immune cells communicate to each other to induce proliferation, activation, differentiation, and inflammatory or anti-inflammatory signals in various cell types. Studies utilizing cytokines for antitumor therapies has increased significantly since 2000, and different cytokines provide unique antitumor activities. Cytokines hinder tumor cell development mostly through antiproliferative or proapoptotic pathways but can also interrupt development indirectly by eliciting immune cells to have cytotoxic effects against tumor cells. Even though there are FDA-approved cytokine therapies, there are two main challenges associated with cytokine delivery. The first is that cytokines have a short half-life, so frequent administration of high doses is required for therapeutic effect. The second is that systemic toxicity could occur if the cytokines delivered cause an intense immune response, known as a cytokine storm. Pegylated cytokines Pegylation is the process of covalently binding polyethylene glycol (PEG) to proteins. Pegylation prolongs the half-life of the bound protein, leading to sustained delivery. This is advantageous because lower, less frequent dosing will be needed to have the same therapeutic effect in the patient, which will limit the cytotoxicity of the delivery system. Pegfilgrastim is a successful example of this delivery system. Pegfilgrastim is the pegylated form of the granulocyte-colony stimulating factor (G-CSF) filgrastim. Pegfilgrastim stimulates production and release of neutrophils in patients who experience bone marrow toxicity after receiving myelosuppressive anticancer drugs or radiation. Filgrastim has a half-life of 3-4 h, while Pegfilgrastim has a half-life of 45 h. This is much more convenient for patients, as they will only need one dose of Pegfilgrastim instead of multiple doses of Filgrastim drawn out over a long period. Ciliary neurotrophic factor (CNTF) is a cytokine used to combat diabetes symptoms such as appetite reduction and weight loss. A pegylated version of CNTF retained biological activity in vitro and had enhanced pharmacokinetics. The pegylated CNTF also reduced glycemia in diet-induced obese animals, with a dose 10-fold higher than unmodified CNTF. These studies demonstrate that pegylated cytokines can be used for sustained delivery of cytokines, increasing the therapeutic window of these treatments. A major problem caused by pegylation is that the cytokine may change its molecular conformation, activity, and bioavailability upon PEG binding. Cytokines are advantageous due to their small sizes, which allow them to reach intracellular targets. PEG binding to cytokines could cause them to become too bulky to reach their specific targets, which should be taken into consideration when designing pegylated cytokines. GAG-based biomaterials Glycosaminoglycans (GAGs) are naturally derived polysaccharides with distinct sequences of disaccharides. GAGs bind to cytokines and regulate cell recruitment, inflammation, and tissue remodeling by delivering cytokines to the extracellular matrix. Binding studies suggest GAGs have a natural affinity for cytokines, and cytokine binding to GAGs is mediated by nonspecific electrostatic interactions between positively charged domains on cytokines and negatively charged sulfate and carboxylic acid residues on GAGs. The GAG heparin has been explored extensively for cytokine delivery. 
Heparin-based hydrogels have shown to provide sustained delivery of IL-4 for more than 2 weeks, leading to a greater anti-inflammatory response than IL-4 alone. Another study utilized a starPEG-heparin hydrogel system to deliver vascular endothelial growth factor (VEGF) and fibroblast growth factor (FGF-2). The large concentration of heparin allowed loading and release of the cytokines to be independent of each other. The codelivery of the cytokines from the hydrogels led to pro-angiogenic effects both in vitro and in vivo, with the effect being much greater than administration of the single growth factors Immunocytokines Antibody conjugation is another promising method for cytokine delivery. Antibody conjugation to cytokines can be used to improve site-specific delivery and prolong the cytokine half-life. Immunocytokines are delivered systemically but can specifically target the tumor through overexpressed or unique tumor antigens, cryptic extracellular matrix epitopes found only in tumors, or neovasculature markers indicating tumor angiogenesis. Cergutuzumab amunaleukin (CEA-IL2v) is an antibody-cytokine conjugate that links the cytokine IL-2 with an antibody targeting the carcinoembryonic antigen (CEA). This conjugate preferentially targets the tumor microenvironment to increase local delivery of IL-2 and minimize off-target toxicity. CEA-IL2v is ongoing phase I clinical trials in treating solid malignancies expressing CEA. Another immunocytokine is huBC1-IL12, which was developed to target the ED-B domain of fibronectin. This domain is overexpressed in tumor tissues but undetectable in almost all normal adult tissues. Systemic administration of huBC1-IL12 eliminated experimental PC3 metastases and suppressed the growth of multiple human tumor lines in immunocompromised mice more effectively than IL-12 alone. A Phase I trial studied the safety of weekly infusions of huBC1-IL12 in renal carcinoma and malignant melanoma patients. The maximum tolerated dose was found to be 15 ug/kg, which is 30 times higher than the maximum tolerated dose of IL-12 alone Systemically administered immunocytokines are likely to significantly reduce cytokine-related cytotoxicity, but not eliminate it. Immunocytokines still interact with immune cells to induce signaling outside of the tumor, and there are problems with non-specific binding in non-target tissues that could disrupt regular immune functions in the body. Since immunocytokines are foreign to the body, they could also cause an immune reaction that produces anti-immunocytokine antibodies leading to pharmacological abrogation, therapeutic alteration, or hypersensitivity reactions. Nonviral nanoparticles Nanoparticle delivery systems are popular due to their ability to encapsulate compounds without affecting bioactivity, and they exhibit controlled and sustained release of the encapsulated compounds to target tissues. Nanoparticles can be made of organic or inorganic agents and possess the ability to stabilize cytokines in vivo, enhance activity at the target site, improve aqueous solubility, and reduce systemic toxicity. Solvent, pH, temperature, charge, and size are many parameters that influence the encapsulation efficiency, nanoparticle toxicity, and cytokine stability. Cytokines can be encapsulated, adsorbed, or conjugated into nanoparticle systems for their delivery, and there are multiple nanoparticle systems that can be utilized for cytokine delivery Polymeric nanoparticles are biocompatible, have low toxicity, and are biodegradable. 
They can be used to improve circulation times, stability, and encapsulation capacity compared to other nanoparticle systems. Poly(lactic-co-glycolic acid) (PLGA) is the most popular polymer for nanoparticle delivery systems due to its simple synthesis using the oil-in-water method, high stability, easy encapsulation or adsorption of both hydrophobic and hydrophilic molecules, and easy surface modification. PLGA-PEG nanoparticles encapsulating IL-10 were created to prevent plaque formation in advanced atherosclerosis lesions. PLGA was charged at its terminal portions for better electrostatic interaction with IL-10, while PEG was used to functionalize the nanoparticle. These PLGA-PEG-IL-10 nanoparticles allowed for greater stability and prolonged systemic circulation of IL-10 and did not show any in vitro or in vivo cytotoxicity. Liposomes are another widely studied type of nanoparticle delivery system. Liposomes can easily cross lipid bilayers and cell membranes but usually get rapidly eliminated in vivo unless stabilized with PEG or another polymer. Formation of liposomes can also present issues, as toxic solvents, high temperatures, and low pH can decrease the biosafety of the nanoparticles or denature the cytokine being delivered. Conjugating cytokines to liposomal surfaces is a useful approach because it allows cytokines to bind to their respective target cell receptors and allows multiple drugs to be delivered to potentiate the desired effect. Conjugating cytokines to the surface of lipids is typically done using the layer-by-layer technique, which involves layering polymer materials to create a thin film that can regulate material properties of the carrier. Advantages of this nanoparticle design include multiple drug compartments for sequential cargo release, the ability to tailor surface chemistry with polymer layers to affect targeting and biodistribution, and improved pharmacokinetics. Rationally engineered layer-by-layer nanoparticles demonstrated high loading and release of active IL-12, localization of nanoparticles on the surface of tumor cells allowing IL-12 to be available to membrane receptors, and decreased systemic exposure. These layer-by-layer lipid nanoparticles significantly reduced IL-12 toxicity and demonstrated antitumor activity against colorectal and ovarian tumors at doses that were not tolerated with free IL-12 delivery. Gold nanoparticles are becoming increasingly popular as they exhibit a high surface-to-volume ratio, can easily travel to target cells, and support a high drug load. They can also easily be functionalized and synthesized. Gold-bound tumor necrosis factor (TNF) is in phase I trials for treatment of solid tumors. The trial found that gold-bound TNF has a tolerable dose that is three times higher than the tolerable dose for unmodified TNF. There was also higher drug concentration in the tumor tissue, indicating increased local delivery to the tumor microenvironment. Gold nanoparticles can elicit an immune response that hinders their efficacy, so it is important to evaluate cellular responses such as cytokine production and reactive oxygen species production. Silica nanoparticles have also been evaluated for cytokine delivery due to their high colloidal stability, extensive surface functionalization, and the possibility to control both structure and pore size. However, silica nanoparticles present limitations for cytokine delivery due to the low internalization efficiency for larger biomolecules. 
This challenge can be overcome by developing mesoporous silica nanoparticles with extra-large pores. These nanoparticles were used to deliver IL-4 to induce M2 macrophage polarization for anti-inflammatory and tissue homeostasis therapies. These nanoparticles increased IL-4 half-life, showed minimal toxicity, efficiently loaded and delivered IL-4, and stimulated M2 macrophage polarization. Plasmid nanoparticles Nucleic acids are much easier to produce, purify, and manipulate than recombinant cytokines and offer a method to deliver cytokines locally and sustainably. Plasmid nanoparticles expressing cytokines, coupled with electroporation, are currently being explored for cytokine delivery. Electroporation temporarily increases the permeability of cell membranes without damaging the membrane structure. IL-12 is a cytokine known to have antitumor properties but shows severe dose-related toxicities in many patients. Intratumoral delivery of a plasmid encoding IL-12 followed by electroporation in a murine melanoma model resulted in a 47% cure rate. A phase II trial examined the safety and activity of a plasmid encoding IL-12 followed by electroporation to treat stage III/IV unresectable melanoma. The median survival was 3.7 months with an objective overall response rate of 29.8%, including two complete responses. There were no grade IV events reported, and adverse events were rare. Other DNA complexes being evaluated to enhance cytokine delivery include lipoplexes, polyplexes, and lipopolyplexes, which are complexes of lipids, polymers, and lipids with polymers, respectively. Polyethyleneimine (PEI) is a highly cationic polymer that complexes with negatively charged DNA. PEI protects DNA from degradation in vivo, promotes interaction with negatively charged cell membranes, and enhances release from lysosomes by acting as a proton sponge. PEI:IL-12 complexes were shown to transfect lung tissue following delivery via nebulization, leading to production of IL-12 in the lungs. Weekly or twice-weekly administration of PEI:IL-12 was found to suppress or eliminate pulmonary metastases of SAOS-2 human osteosarcomas in athymic nude mice. In a different study, nanoparticles were created with a liposomal shell and two DNA strands encoding complementary sequences as the core. The liposomal shell was designed to be degraded by phospholipase A2 (PLA2), which is overexpressed by various tumors. The cytokine TRAIL was loaded onto the Ni2+-modified DNA cores. Upon interaction with PLA2, the DNA nanoparticles transformed into nanofibers to deliver TRAIL to death receptors on the cancer cell membrane. Delivery of TRAIL amplified apoptotic signaling with reduced TRAIL internalization to enhance antitumor efficacy. Viral systems Oncolytic viruses preferentially infect malignant cells, inducing immunogenic cell death. Oncolytic viruses include adenoviruses, herpes simplex viruses, Semliki Forest viruses, and poxviruses, among others. They can be altered to optimize distribution and facilitate delivery to target tissues. Cytokine-loaded oncolytic viruses have shown activity in murine models, with several clinical trials under investigation. IMLYGIC (talimogene laherparepvec) was the first FDA-approved oncolytic virus for cancer treatment. It is a modified herpes simplex virus-1 expressing granulocyte-macrophage colony-stimulating factor (GM-CSF) for recurrent melanoma. It is currently being studied for soft-tissue sarcoma and liver cancer and for combination with immunotherapies. Adenoviruses are widely studied for cytokine delivery. 
In preclinical trials, intratumoral injections of adenoviruses encoding IL-12 (Ad-IL-12) mediated regression of murine colorectal carcinomas, breast carcinomas, prostate carcinomas, gliomas, bladder carcinomas, fibrosarcomas, laryngeal squamous cell carcinoma, hepatomas and hepatocellular carcinomas, medullary thyroid carcinomas, thyroid follicular cancer, and Ewing's sarcoma. The antitumor immune response of Ad-IL-12 is primarily mediated by CD8+ T cells. There are many limitations associated with virus-based delivery systems. Anti-viral antibodies could be produced against viral cytokine delivery systems, leading to delayed-type hypersensitivity responses, which may prevent repeated dosing. There is also a large disparity in patients' susceptibility to viral infections, since viral delivery of cytokines requires transfection of host cells, which is highly variable from patient to patient. There has also been significant off-target transgene expression seen in clinical trials. Small volumes of intratumoral injection of adenoviruses were shown to cause significant transgene expression in the liver, intestine, spleen, kidney, and brain. Activity-on-Target Cytokines Activity-on-Target Cytokines, known as AcTakines, are mutated cytokines that have reduced binding affinity for their native receptor complex and enhanced binding affinity for a specific tumor cell receptor. This causes the cytokine to be inactive in circulation, limiting systemic toxicity. The cytokine is activated upon binding to its tumor-specific antigen, allowing for local delivery. The cytokine tumor necrosis factor (TNF) causes rapid hemorrhagic tumor necrosis in both animal models and patients but is associated with high systemic toxicities. TNF is known to exert its antitumor effect through stromal cells in the tumor microenvironment. A TNF AcTakine was created to improve localized delivery of TNF and decrease systemic toxicity by changing its antitumor pathway to target endothelial cells. This mutated TNF cytokine was shown to target only endothelial cells of the tumor vasculature, allowing for a safe and effective delivery system. This TNF-based AcTakine resulted in a 100-fold increase in targeting efficiency, and when the AcTakine was targeted to CD13 expressed on endothelial cells of the tumor vasculature, it demonstrated selective activation of tumor neovasculature without any detectable toxicity in vivo. When administered with CAR-T cells, this therapy was shown to enhance T cell infiltration to control solid tumors, while combination with a CD8-targeted type II interferon AcTakine led to eradication of solid tumors. Cytokine factories Cytokine factories are cell-based systems that generate and locally deliver a cytokine of interest, offering spatial and temporal control of dosing. The homing capacity and tumor tropism capabilities of mesenchymal stem/stromal cells (MSCs) make them ideal drug delivery vehicles. MSCs also have reduced immunogenicity due to their limited expression of costimulatory molecules. Using MSCs to express cytokines provides greater cytokine delivery to the tumor tissue, which increases the therapeutic efficacy of the treatment. IL-2 gene-engineered MSCs (MSC-IL-2) have been studied as a potential antitumor therapy since IL-2 is an immunogenic cytokine. Intratumoral injection of MSC-IL-2 was shown to significantly regress glioma tumor growth and improve the overall survival of rats with glioma. 
Similarly, subcutaneous injections of bone marrow MSC-IFN-α inhibited tumor growth in vivo and increased overall survival in a multiple myeloma mouse model. Antitumor effects were attributed to increased apoptosis of tumor cells, decreased microvessel density, and ischemic necrosis. TRAIL is an immunogenic cytokine that selectively targets tumor cells for apoptosis, reducing systemic toxicity. MSCs engineered to overexpress TRAIL have shown promising antitumor effects in xenograft models through apoptotic pathways. Numerous studies of MSC-TRAIL systems are ongoing, including treatments for neuroblastoma, non-small-cell lung carcinoma, breast cancer, pancreatic cancer, glioblastoma, and multiple myeloma, among others. Human retinal pigmented epithelial (RPE) cells can also be engineered to express a cytokine of interest. RPE cells are ideal cytokine delivery systems because they are nontumorigenic, display contact inhibition, are amenable to genetic modification, have been previously used in human trials for therapeutic delivery systems, and are safe to use. RPE cells engineered to produce different cytokines were encapsulated in alginate-based microparticles. The encapsulated cells were still viable after encapsulation, did not divide within the capsules, produced the cytokine of interest, and persisted longer in vivo than unencapsulated cells. The IL-2-producing RPE cells eradicated peritoneal tumors in ovarian and colorectal mouse models, and computational modeling of pharmacokinetics predicts clinical translation to humans, indicating potential success in future human clinical trials. References Cytokines Immunotherapy
Cytokine delivery systems
[ "Chemistry" ]
4,313
[ "Cytokines", "Signal transduction" ]
70,621,532
https://en.wikipedia.org/wiki/Underground%20World%20Home
The Underground World Home was an exhibit at the 1964 New York World's Fair of a partially underground house which doubled as a bomb shelter. Designed by architect Jay Swayze, who made a specialty of underground homes, it was situated on the campus of the expo beside the Hall of Science and north of the expo's heliport in Flushing Meadows–Corona Park in Queens. History The home/bomb shelter was designed by architect Jay Swayze. Swayze, a proponent of underground living, constructed and lived in his own underground bunker-house in Plainview, Texas, which he named Atomitat. Built during the Cold War only two years after the Cuban Missile Crisis, it was a promotion of the company "Underground World Homes", owned by Avon investor and millionaire Girard B. Henderson, who remained convinced that tensions between the U.S. and the U.S.S.R. would eventually escalate to WWIII. (In addition to the prototypical underground home/bomb shelter, there was a companion chthonic exhibit sponsored by Henderson: "Why Live Underground?") The brochure for the Underground World Home touted its comfort, luxury, interior design and safety. However, the $1.00 (adults) and 50¢ admission on top of the expo's entry fee, plus the expo's numerous, much more glamorous exhibits, deterred many potential tourists. A May 1964 LIFE magazine cover story on the exposition did not so much as mention the Underground World Home. Exhibits were contractually required to be dismantled and removed after the fair. Swayze eventually wrote a book, Underground Gardens & Homes: The Best of Two Worlds, Above and Below, but the building's fate was not mentioned. The New York Public Library held archives on the expo, however, and in 2017 it was found that the demolition of the home had been completed on March 15, 1966. Only its foundations, if anything, remain. Design The ten-room home featured backlit murals to create the illusion of outdoor space and preclude claustrophobia. The murals were painted by Texas-based artist Mrs. Glenn Smith. Swayze cited research to convince fairgoers that people did not look out their windows 80% of the time, and that when people did look out their windows, half the time what they saw was undesirable. He stated that he could give people better views with selected murals. The home was touted as peeping-Tom proof, less expensive than normal homes (sic), secure from intruders, and a way to save space above ground. The home was . The walls were of steel and concrete, and the roof supported by steel beams rated for a load of of soil (which provided the insulation). There were three bedrooms; the ceilings were of gypsum. There was a "snorkel-like system" for air conditioning, an apparatus which purportedly meant the home needed dusting only monthly. The foyer was , the kitchen/dining room , the living room (with a television set and a wood-burning fireplace) , and three bedrooms of , , and , respectively, connected by a hallway wide . The model home also had a terrace area simulating outdoor space next to the living room of . Reception In a 1964 New York Times piece, science fiction author Isaac Asimov speculated on what the 2014 World's Fair would look like. He deemed the Underground World Home a "sign of the future" with controlled temperatures which allowed occupants to live free from the weather. The home was not a draw, however, and would scarcely appear in popular memory. At a price of $80,000 (approximately four times the cost of an average home that year), none were commissioned. 
Popular culture The LP record The Best of the Johnny Mann Singers: Underground at the Fair played in the background of the exhibit; it did not sell well. This record was the home's only appearance in pop culture (save in the niche mythos of urban exploration, and as one of the oddities of architecture) until its interior was reproduced in the 2009 CSI: NY episode Manhattanhenge as the anachronistic lair of a mad killer, the structure supposedly having simply had soil layered on top of it and been abandoned. The set was complex and impressive. See also 1964 New York World's Fair pavilions References External links Underground Dream World Underground World Home Brochure 1964 introductions 1964 New York World's Fair Air raid shelters in the United States Cold War sites Nuclear fallout Radiation protection Survivalism Flushing Meadows–Corona Park
Underground World Home
[ "Chemistry", "Technology" ]
927
[ "Nuclear fallout", "Environmental impact of nuclear power", "Radioactive contamination" ]
70,628,267
https://en.wikipedia.org/wiki/Kahn%E2%80%93Kalai%20conjecture
The Kahn–Kalai conjecture, also known as the expectation threshold conjecture or, more recently, the Park–Pham theorem, was a conjecture in the field of graph theory and statistical mechanics, proposed by Jeff Kahn and Gil Kalai in 2006. It was proven in a paper published in 2024. Background This conjecture concerns the general problem of estimating when phase transitions occur in systems. For example, in a random network with n nodes, where each edge is included with probability p, it is unlikely for the graph to contain a Hamiltonian cycle if p is less than a threshold value, but highly likely if p exceeds that threshold. Threshold values are often difficult to calculate, but a lower bound for the threshold, the "expectation threshold", is generally easier to calculate. The Kahn–Kalai conjecture is that the two values are generally close together in a precisely defined way, namely that there is a universal constant K for which the ratio between the two is less than K log ℓ(F), where ℓ(F) is the size of a largest minimal element of an increasing family F of subsets of a power set. Proof Jinyoung Park and Huy Tuan Pham announced a proof of the conjecture in 2022; it was published in 2024. References See also Percolation theory Conjectures 21st century in mathematics Graph theory Statistical mechanics 2006 in science
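Written out formally (a sketch using the notation commonly attached to the Park–Pham paper; the symbols p_c, q, K, and ℓ do not appear elsewhere in this entry and are introduced here only for illustration):

```latex
% For an increasing family F of subsets of a finite set, let p_c(F) denote the
% threshold and q(F) the expectation threshold. The Kahn--Kalai conjecture
% (Park--Pham theorem) asserts that there is a universal constant K with
\[
  p_c(\mathcal{F}) \;\le\; K \, q(\mathcal{F}) \, \log \ell(\mathcal{F}),
\]
% where \ell(\mathcal{F}) is the size of a largest minimal element of F.
```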
Kahn–Kalai conjecture
[ "Physics", "Mathematics" ]
260
[ "Unsolved problems in mathematics", "Graph theory", "Statements in graph theory", "Conjectures", "Mathematical relations", "Statistical mechanics", "Mathematical problems" ]
72,105,261
https://en.wikipedia.org/wiki/Bow-tie%20diagram
A bow-tie diagram is a graphic tool used to describe a possible damage process in terms of the mechanisms that may initiate an event in which energy is released, creating possible outcomes, which themselves produce adverse consequences such as injury and damage. The diagram is centred on the (generally unintended) event, with credible initiating mechanisms on the left (where reading of the diagram starts) and resulting outcomes and associated consequences (such as injury, loss of property, damage to the environment, etc.) on the right. Needed control measures, or barriers, can be identified for each possible path from mechanisms to the final consequences. The shape of the diagram resembles a bow tie, after which it is named. A bow-tie diagram can be considered as a simplified, linear, and qualitative representation of a fault tree (analyzing the cause of an event) combined with an event tree (analyzing the consequences), although it can maintain the quantitative, probabilistic aspects of the fault and event tree when it is used in the context of quantified risk assessments. Bow-tie analysis is used to display and communicate information about risks in situations where an event has a range of possible causes and consequences. A bow tie is used when assessing controls to check that each pathway from cause to event and event to consequence has effective controls, and that factors that could cause controls to fail (including management systems failures) are recognized. It can be used proactively to consider potential events and also retrospectively to model events that have already occurred, such as in an accident analysis. The diagram follows the same basic principles as those on which fault tree analysis and event tree analysis are based, but, in being far less complex than these, is attractive as a means of rapidly establishing an overall scope of risk concerns for an organisation, only some few of which may justify those more rigorous and logical methods. Bow-tie diagrams are used in several industries, such as oil and gas production, the process industries, aviation, and finance. History It has been commonly noted that the earliest mention of the bow-tie methodology appeared in the Imperial Chemical Industries (ICI) course notes of a lecture on hazard analysis given at the University of Queensland, Australia in 1979. Other sources point to Derek Viner (in the same year) at the then Ballarat College of Advanced Education (now the Federation University of Australia), who drew it as an aid to visualization of his generalized time sequence model (GTSM) for damage processes. The more complex risk analysis tools of fault tree analysis and event tree analysis use the same principle: things go wrong for a reason and lead to a result, with the result generating the adverse consequences. The bow-tie diagram introduces the concept of a central energy-based event (the "bow tie knot") in which the damaging properties of the energy are no longer under control, so that they result in outcomes and consequences. Royal Dutch Shell is considered to be the first major company to successfully integrate bow-tie diagrams into their business practices, at least since the early 1990s. Logic and structure of the diagrams Bow-tie diagrams contribute to the identification, description and understanding of the different types of hazards that can arise in a given situation, facility or production process. They also help identify the relevant risk control measures (barriers) for a given hazard. 
The fact that scientific effort benefits greatly from a focus on the process giving rise to the phenomenon of interest is well known in several scientific domains, as noted by William Haddon. The generalized time sequence model (GTSM) was developed in the 1970s by Viner as a process model suited to understanding the process that gives rise to the phenomenon of unwanted damage. Bow-tie diagrams are a simplified extract of this, conceived of (and then named by students) during a lecture to assist explanation. Bow-tie diagrams are centred on a central event in which the energy necessary to bring about the ultimate undesired consequences is released. In William Rowe's seminal work, which explained half of the process of damage, the event of interest is defined as what produces outcomes and consequences of interest, and outcomes as what results from an event. Derek Viner resolved this circularity by defining the event as "the point in time when control is lost of the potentially damaging properties of the energy source of interest." This is sometimes referred to as the top event (a fault-tree term) or the critical event. Thus, a bow-tie analysis is centred on an energy-based event. The need for energy sources in any damage process had been noted by Lewis DeBlois as early as 1926, as well as by Gibson and Haddon in the decade prior to the introduction of the bow-tie diagram. It is evident that any central event may be originated by more than one mechanism and that, following the release of energy, a number of different outcomes may result. As Rowe made clear, it is these various unwanted outcomes that produce the adverse consequences of injury, damage etc. Credible initiating mechanisms (which some call causes, triggers, threats, etc.) are shown on the left of the central event, and its ultimate outcomes and consequences, such as injury, loss of property, damage to the environment, etc., on the right. This left-to-right flow of the process is also a time axis. Control barriers, either hard/engineered or administrative/procedural, are identified for each path from the mechanisms to the final outcomes. For example, pressure in a process vessel is a form of energy that can be released if containment is breached (the central event). Possible mechanisms for breach of containment, shown to the left, include structural degradation (abrasion, corrosion, fatigue), spurious pressurization above design limits, inadvertent opening, etc. Shown to the right of the central event are the results/outcomes of the release (e.g., noise, blast overpressure propagation, flying debris, loss of fluid, etc.) When mechanisms and outcomes, and subsequently routes to adverse consequences, are understood, the analyst can ensure that control measures (often now called barriers) exist to stop the initiating mechanisms from resulting in the central event and the central event from leading to the ultimate unwanted outcomes and consequences. Left-hand side (mechanism) control measures are, in this example, external and internal surface coatings, vessel inspection (internal and external), wall thickness measurements, pressure safety valves, etc. While some are relevant to design and commissioning, others relate to maintenance and condition monitoring. Outcome (right-hand side) control measures in this example would include nearby structures designed to withstand modelled blast overpressure. 
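To illustrate how the pressure-vessel example above can carry the quantitative, probabilistic content of a fault/event tree, the sketch below encodes a bow-tie as a small data structure and combines assumed, purely illustrative frequencies and barrier failure probabilities (none of the numbers come from the article or any standard) into indicative outcome frequencies.

```python
# Minimal bow-tie quantification sketch; every name and number is an
# illustrative assumption, not a value taken from the article or a standard.
threats = {
    # initiating mechanism: (annual frequency, probability that all of its
    # left-hand barriers fail, e.g. coatings, inspection, relief valves)
    "structural degradation (corrosion, fatigue)": (0.10, 0.02),
    "spurious pressurization above design limits": (0.05, 0.01),
    "inadvertent opening": (0.02, 0.05),
}
outcomes = {
    # outcome: probability that its right-hand (mitigation) barriers fail
    "blast overpressure reaches occupied structures": 0.10,
    "loss of process fluid to the environment": 0.30,
}

# Frequency of the central (top) event: sum over the left-hand pathways.
top_event_frequency = sum(freq * p_fail for freq, p_fail in threats.values())
print(f"Estimated top-event frequency: {top_event_frequency:.4f} per year")

# Indicative frequency of each consequence pathway.
for outcome, p_mitigation_fails in outcomes.items():
    print(f"  {outcome}: {top_event_frequency * p_mitigation_fails:.5f} per year")
```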
Bow-tie diagrams are typically a qualitative tool, used for simple damage process analysis as well as for illustrative purposes, such as in training courses for plant operators and in support of safety cases. However, a different type of bow-tie diagram exists that is better suited to supporting quantified risk analysis. This diagram is essentially the combination of a fault tree and an event tree and maintains the Boolean and probabilistic features of those approaches. Use in various domains Bow-tie diagrams are used in various disciplines and domains, including for example: Occupational safety and health (OSH) Process safety Aviation safety Information security and cyber security risks Finance Several software packages are available in the market for bow-tie diagram creation and management. References Accident analysis Diagrams Process safety Safety engineering
Bow-tie diagram
[ "Chemistry", "Engineering" ]
1,494
[ "Chemical process engineering", "Systems engineering", "Safety engineering", "Process safety" ]
72,112,048
https://en.wikipedia.org/wiki/Ddbar%20lemma
In complex geometry, the lemma (pronounced ddbar lemma) is a mathematical lemma about the de Rham cohomology class of a complex differential form. The -lemma is a result of Hodge theory and the Kähler identities on a compact Kähler manifold. Sometimes it is also known as the -lemma, due to the use of a related operator , with the relation between the two operators being and so . Statement The lemma asserts that if is a compact Kähler manifold and is a complex differential form of bidegree (p,q) (with ) whose class is zero in de Rham cohomology, then there exists a form of bidegree (p-1,q-1) such that where and are the Dolbeault operators of the complex manifold . ddbar potential The form is called the -potential of . The inclusion of the factor ensures that is a real differential operator, that is if is a differential form with real coefficients, then so is . This lemma should be compared to the notion of an exact differential form in de Rham cohomology. In particular if is a closed differential k-form (on any smooth manifold) whose class is zero in de Rham cohomology, then for some differential (k-1)-form called the -potential (or just potential) of , where is the exterior derivative. Indeed, since the Dolbeault operators sum to give the exterior derivative and square to give zero , the -lemma implies that , refining the -potential to the -potential in the setting of compact Kähler manifolds. Proof The -lemma is a consequence of Hodge theory applied to a compact Kähler manifold. The Hodge theorem for an elliptic complex may be applied to any of the operators and respectively to their Laplace operators . To these operators one can define spaces of harmonic differential forms given by the kernels: The Hodge decomposition theorem asserts that there are three orthogonal decompositions associated to these spaces of harmonic forms, given by where are the formal adjoints of with respect to the Riemannian metric of the Kähler manifold, respectively. These decompositions hold separately on any compact complex manifold. The importance of the manifold being Kähler is that there is a relationship between the Laplacians of and hence of the orthogonal decompositions above. In particular on a compact Kähler manifold which implies an orthogonal decomposition where there are the further relations relating the spaces of and -harmonic forms. As a result of the above decompositions, one can prove the following lemma. The proof is as follows. Let be a closed (p,q)-form on a compact Kähler manifold . It follows quickly that (d) implies (a), (b), and (c). Moreover, the orthogonal decompositions above imply that any of (a), (b), or (c) imply (e). Therefore, the main difficulty is to show that (e) implies (d). To that end, suppose that is orthogonal to the subspace . Then . Since is -closed and , it is also -closed (that is ). If where and is contained in then since this sum is from an orthogonal decomposition with respect to the inner product induced by the Riemannian metric, or in other words and . Thus it is the case that . This allows us to write for some differential form . Applying the Hodge decomposition for to , where is -harmonic, and . The equality implies that is also -harmonic and therefore . Thus . However, since is -closed, it is also -closed. Then using a similar trick to above, also applying the Kähler identity that . Thus and setting produces the -potential. Local version A local version of the -lemma holds and can be proven without the need to appeal to the Hodge decomposition theorem. 
It is the analogue of the Poincaré lemma or Dolbeault–Grothendieck lemma for the operator. The local -lemma holds over any domain on which the aforementioned lemmas hold. The proof follows quickly from the aforementioned lemmas. Firstly observe that if is locally of the form for some then because , , and . On the other hand, suppose is -closed. Then by the Poincaré lemma there exists an open neighbourhood of any point and a form such that . Now writing for and note that and comparing the bidegrees of the forms in implies that and and that . After possibly shrinking the size of the open neighbourhood , the Dolbeault–Grothendieck lemma may be applied to and (the latter because ) to obtain local forms such that and . Noting then that this completes the proof as where . Bott–Chern cohomology The Bott–Chern cohomology is a cohomology theory for compact complex manifolds which depends on the operators and , and measures the extent to which the -lemma fails to hold. In particular when a compact complex manifold is a Kähler manifold, the Bott–Chern cohomology is isomorphic to the Dolbeault cohomology, but in general it contains more information. The Bott–Chern cohomology groups of a compact complex manifold are defined by Since a differential form which is both and -closed is -closed, there is a natural map from Bott–Chern cohomology groups to de Rham cohomology groups. There are also maps to the and Dolbeault cohomology groups . When the manifold satisfies the -lemma, for example if it is a compact Kähler manifold, then the above maps from Bott–Chern cohomology to Dolbeault cohomology are isomorphisms, and furthermore the map from Bott–Chern cohomology to de Rham cohomology is injective. As a consequence, there is an isomorphism whenever satisfies the -lemma. In this way, the kernel of the maps above measure the failure of the manifold to satisfy the lemma, and in particular measure the failure of to be a Kähler manifold. Consequences for bidegree (1,1) The most significant consequence of the -lemma occurs when the complex differential form has bidegree (1,1). In this case the lemma states that an exact differential form has a -potential given by a smooth function : In particular this occurs in the case where is a Kähler form restricted to a small open subset of a Kähler manifold (this case follows from the local version of the lemma), where the aforementioned Poincaré lemma ensures that it is an exact differential form. This leads to the notion of a Kähler potential, a locally defined function which completely specifies the Kähler form. Another important case is when is the difference of two Kähler forms which are in the same de Rham cohomology class . In this case in de Rham cohomology so the -lemma applies. By allowing (differences of) Kähler forms to be completely described using a single function, which is automatically a plurisubharmonic function, the study of compact Kähler manifolds can be undertaken using techniques of pluripotential theory, for which many analytical tools are available. For example, the -lemma is used to rephrase the Kähler–Einstein equation in terms of potentials, transforming it into a complex Monge–Ampère equation for the Kähler potential. ddbar manifolds Complex manifolds which are not necessarily Kähler but still happen to satisfy the -lemma are known as -manifolds. For example, compact complex manifolds which are Fujiki class C satisfy the -lemma but are not necessarily Kähler. 
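Because the inline formulas in this entry did not survive extraction, the two central displays are reproduced below in standard notation (a sketch of the usual formulations; the symbol choices α, β, and γ are mine and do not come from the source text).

```latex
% ddbar lemma: X a compact Kähler manifold, alpha a (p,q)-form whose class
% is zero in de Rham cohomology. Then alpha admits a ddbar-potential beta:
\[
  \alpha \in \Omega^{p,q}(X),\quad d\alpha = 0,\quad \alpha = d\gamma
  \;\Longrightarrow\;
  \alpha = i\,\partial\bar{\partial}\beta
  \quad\text{for some } \beta \in \Omega^{p-1,q-1}(X).
\]
% Bott–Chern cohomology, which measures the failure of the lemma:
\[
  H^{p,q}_{BC}(X)
  \;=\;
  \frac{\{\alpha \in \Omega^{p,q}(X) \;:\; \partial\alpha = 0,\ \bar{\partial}\alpha = 0\}}
       {\{\, i\,\partial\bar{\partial}\beta \;:\; \beta \in \Omega^{p-1,q-1}(X) \,\}}.
\]
```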
See also Poincaré lemma Dolbeault–Grothendieck lemma References External links Hodge theory Complex manifolds
Ddbar lemma
[ "Engineering" ]
1,615
[ "Tensors", "Differential forms", "Hodge theory" ]
72,114,025
https://en.wikipedia.org/wiki/Cl6b
Cl6b (μ-THTX-Cl6b) is a peptide toxin from the venom of the spider Cyriopagopus longipes. It acts as a sodium channel blocker: Cl6b significantly and persistently reduces currents through the tetrodotoxin-sensitive sodium channels NaV1.2-1.4, NaV1.6, and NaV1.7. Structure The Cl6b peptide has a molecular weight of 3708.9 Da. It contains 33 amino acid residues, among which are six cysteines that engage in three disulfide bonds to form a structural motif known as an inhibitor cystine knot (ICK). This structure grants stability to the toxin and has been identified previously in other spider peptide toxins that share high sequence similarity to Cl6b. Family Simultaneously with the isolation of Cl6b, another peptide toxin known as Cl6a was characterized from the same spider species. The two Cl6 peptides share a sequence identity of 78.8%, including the six cysteines that make both peptides adopt the ICK motif. Target Cl6b acts as a selective sodium channel blocker. Source in nature Cl6b has been isolated from Cyriopagopus longipes, an Asian spider mainly found in Thailand, Cambodia, Laos, and China. Activity mechanism Cl6b significantly reduces currents through the tetrodotoxin-sensitive sodium channels NaV1.2, NaV1.3, NaV1.4, NaV1.6, and NaV1.7, with no effect on the tetrodotoxin-resistant sodium channels NaV1.5, NaV1.8, and NaV1.9. Cl6b exhibits a particularly high affinity for NaV1.7 channels, which are present in great numbers in nociceptors (pain neurons) located at the dorsal root ganglion. The activity of Cl6b on NaV1.7 has similar characteristics to previously reported NaV1.7 peptide inhibitors, such as HWTX-IV, as Cl6b binds to segments three and four of domain II, which are part of the domain's voltage sensor. The binding is high-affinity (half-maximal inhibitory concentration (IC50) 18.80 ± 2.4 nM). It is also irreversible, which positions Cl6b as a candidate for the development of long-term in-vivo analgesia. References Neurotoxins Spider toxins Sodium channel blockers Ion channel toxins
Cl6b
[ "Chemistry" ]
531
[ "Neurochemistry", "Neurotoxins" ]
72,114,296
https://en.wikipedia.org/wiki/Thermal%20Science
Thermal Science is a peer-reviewed open-access scientific journal founded in 1997 and published by the Vinča Institute of Nuclear Sciences. The journal is focused on physics and chemistry, and aims to amplify recent scientific results accomplished in Serbia and Southeast Europe. The editor-in-chief is Vukman Bakić (Vinča Institute of Nuclear Sciences, Serbia) and the editor-in-chief emeritus is Prof. Simeon Oka (University of Belgrade, Serbia). Since the beginning of 2021, authors have needed to pay article processing charges, and they retain unrestricted copyrights and publishing rights. Abstracting and indexing Since 2007, the journal has been abstracted and indexed in Scopus and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.971. References External links Engineering journals Academic journals established in 1997 English-language journals Creative Commons Attribution-licensed journals 5 times per year journals
Thermal Science
[ "Physics", "Chemistry" ]
193
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamics" ]
61,240,448
https://en.wikipedia.org/wiki/C23H34O2
{{DISPLAYTITLE:C23H34O2}} The molecular formula C23H34O2 (molar mass: 342.51 g/mol, exact mass: 342.2559 u) may refer to: Cannabidiol dimethyl ether Cardenolide Tetrahydrocannabiphorol Molecular formulas
C23H34O2
[ "Physics", "Chemistry" ]
76
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,243,379
https://en.wikipedia.org/wiki/Synthetic%20biopolymer
Synthetic biopolymers are human-made copies of biopolymers obtained by abiotic chemical routes. Synthetic biopolymers of different chemical natures have been obtained, including polysaccharides, glycoproteins, peptides and proteins, polyhydroxyalkanoates, and polyisoprenes. Synthesis of biopolymers The high molecular weight of biopolymers makes their synthesis inherently laborious. Further challenges can arise from the specific spatial arrangement adopted by the natural biopolymer, which may be vital for its properties/activity but not easily reproducible in the synthetic copy. Despite this, chemical approaches to obtain biopolymers are highly desirable to overcome issues arising from the low abundance of the target biopolymer in Nature, the need for cumbersome isolation processes, or the high batch-to-batch variability or inhomogeneity of the naturally-sourced species. Examples of synthetic biopolymers obtained by chemical routes cis-1,4-polyisoprene (synthetic analogue of rubber) and trans-1,4-polyisoprene (synthetic analogue of gutta percha) are obtained by coordination polymerisation using suitable Ziegler-Natta catalysts. Polyhydroxyalkanoates such as poly(3-hydroxybutyrate), poly(hydroxyvaleric acid) etc. obtained by polycondensation and polyaddition. Low-molecular-weight polylactide and other polyglycolides can also be obtained by chemical synthesis. Oligonucleotides and polynucleotides (DNA or RNA) can be obtained by chemical synthesis through a variety of established approaches. A variety of proteins have been obtained by chemical synthesis. A successful approach relies on native chemical ligation, which achieves the synthesis of proteins by linking shorter unprotected peptides. This strategy has allowed the synthesis of, amongst many others, proteins such as insulin-like growth factor 1, the precursor of Aequorea green fluorescent protein and the influenza A virus M2 membrane protein. Examples of biopolymers obtained by chemoenzymatic routes Polyhydroxyalkanoates and polyesters obtained by enzyme-assisted esterification using lipases. Heparin, heparan sulfate and other glycosaminoglycans and plant glycans. Polysaccharides such as cellulose, amylose, chitin and derivatives Natural and non-natural polynucleotides can be successfully obtained by enzyme-assisted synthesis using ligase- or polymerase-based approaches and template-assisted polymerisation. Human-made biopolymers obtained through approaches that involve genetic engineering or recombinant DNA technology are different from synthetic biopolymers and should be referred to as artificial biopolymers (e.g., artificial protein, artificial polynucleotide, etc.). Applications of synthetic biopolymers Like their natural analogues, synthetic biopolymers find applications in numerous fields, including materials for commodities, drug delivery, tissue engineering, and therapeutic and diagnostic applications. References Biomolecules Polymers
Synthetic biopolymer
[ "Chemistry", "Materials_science", "Biology" ]
650
[ "Natural products", "Biochemistry", "Organic compounds", "Structural biology", "Molecular biology", "Polymer chemistry", "Polymers", "Biomolecules" ]
61,245,307
https://en.wikipedia.org/wiki/Protein%20quinary%20structure
Protein quinary structure refers to the features of protein surfaces that are shaped by evolutionary adaptation to the physiological context of living cells. Quinary structure is thus the fifth level of protein complexity, additional to protein primary, secondary, tertiary and quaternary structures. As opposed to the first four levels of protein structure, which are relevant to isolated proteins in dilute conditions, quinary structure emerges from the crowdedness of the cellular context, in which transient encounters among macromolecules are constantly occurring. In order to perform their functions, proteins often need to find a specific counterpart to which they will bind in a relatively long encounter. In a very crowded cytosol, in which proteins engage in a vast and complex network of attracting and repelling interactions, such a search becomes challenging, because it involves sampling a huge space of possible partners, of which very few will be productive. A solution to this challenge requires that proteins spend as little time as possible on each encounter, so that they can explore a larger number of surfaces, while simultaneously making this interaction as intimate as possible, so that if they do come across the right partner, they will not miss it. In this sense, quinary structure is the result of a series of adaptations present in protein surfaces, which allow proteins to navigate the complexity of the cellular environment. Early observations With the sense with which it is used today, the term quinary structure first appeared in the work of McConkey, in 1989. In his work, McConkey ran 2D electrophoresis gels on the total protein content of hamster (CHO) and human (HeLa) cells. In a 2D electrophoresis gel experiment, the coordinates of a protein depend on its molecular weight and its isoelectric point. Given the evolutionary distance between humans and hamsters, and considering evolutionary rates typical of mammals, one would expect a large number of substitutions to have occurred between hamsters and humans, a fraction of which would involve acidic (aspartate and glutamate) and basic (arginine and lysine) residues, resulting in changes in the isoelectric point of many proteins. Strikingly, hamster and human cells yielded almost identical fingerprints in the experiment, implying that many fewer of those substitutions actually took place. McConkey suggested in that paper that the reason why the proteins of humans and hamsters had not diverged as much as he anticipated was that an additional selective pressure must have been related to the many non-specific “interactions that are inherently transient” experienced by proteins in the cytoplasm and which “constitute the fifth level of protein organization”. Protein interactions and quinary structure Despite the crudeness of McConkey's experiment, his interpretation of the results has proved to be accurate. Rather than simply being hydrophilic, protein surfaces must have been carefully modulated by evolution and adapted to this network of weak interactions, often called quinary interactions. It is important to note that protein-protein interactions responsible for the emergence of quinary structure are fundamentally different from specific protein encounters. 
The latter are the result of relatively high-stability binding, often linked to functionally meaningful events – many of which have already been described – while the former are often interpreted as some background noise of physiologically unproductive misinteractions that complicate the interpretation of protein networks and need to be avoided, so that normal cellular functions can proceed. The transient nature of these protein encounters complicates the study of quinary structure. Indeed, the interactions responsible for this upper level of protein organisation are weak and short-lived, and hence would not produce protein-protein complexes that could be isolated by conventional biochemical methods. Therefore, quinary structure can only be understood in vivo. In-cell NMR and quinary structure In-cell NMR is an experimental technique prominent in the research field of protein quinary structure. The physical principle of in-cell NMR measurements is identical to that of conventional protein NMR, but the experiments rely on expressing high concentrations of the probe protein, which should remain soluble and contained in the cellular space; this introduces additional difficulties and limitations. However, these experiments provide critical insights about the cross-talk between a probe protein and the intracellular environment. Early attempts at using in-cell NMR to study protein quinary structure were hindered by a limitation caused by the very phenomenon they were trying to understand. Many probe proteins tested in these experiments turned out to produce broad signals, near the detection limit of the method, when measured inside cells of Escherichia coli. In particular, these proteins seemed to tumble as if they had molecular weights much larger than those corresponding to their size. These observations seemed to indicate that the proteins were sticking to other macromolecules, which would have led to poor relaxation properties. Other in-cell NMR experiments showed that single amino acid changes of surface residues could be used to consistently modulate the tumbling of three different proteins inside bacterial cells. Charged and hydrophobic residues were shown to have the largest impact on protein intracellular mobility. In particular, more negatively charged proteins would tumble faster in comparison with near-neutral or positively charged proteins. In contrast, the presence of many hydrophobic residues on the protein surface would slow down protein intracellular tumbling. Protein dipole moment, a measure of charge separation across the protein, was shown to have a significant contribution to protein mobility, where high dipole moments would correlate with slower tumbling. References Molecular biology Protein structure
Protein quinary structure
[ "Chemistry", "Biology" ]
1,112
[ "Biochemistry", "Protein structure", "Structural biology", "Molecular biology" ]
61,245,573
https://en.wikipedia.org/wiki/ASME%20QME-1
ASME QME-1 is a standard maintained by the American Society of Mechanical Engineers that provides the requirements and guidelines for the qualification of active mechanical equipment (QME) whose function is required to ensure the safe operation or safe shutdown of a nuclear facility. Organization of QME-1 The 2017 edition of QME-1 is organized by the following major sections: Section QR: General Requirements Section QDR: Qualification of Dynamic Restraints Section QP: Qualification of Active Pump Assemblies Section QV: Qualification Requirements for Active Valve Assemblies for Nuclear Facilities Standards Committee on Qualification of Mechanical Equipment Used in Nuclear Facilities (QME) ASME QME-1 is maintained and revised by QME and its associated sub-tier groups using the ASME standards development process. Work activities are delegated to specific subcommittees, as per their established charters. QME Subcommittee on General Requirements QME Subcommittee on Qualification of Active Dynamic Restraints QME Subcommittee on Qualification of Pump Assemblies QME Subcommittee on Qualification of Valve Assemblies References External links QME Subcommittee on General Requirements QME Subcommittee on Qualification of Active Dynamic Restraints QME Subcommittee on Qualification of Pump Assemblies QME Subcommittee on Qualification of Valve Assemblies Mechanical standards ASME standards
ASME QME-1
[ "Engineering" ]
241
[ "Mechanical standards", "Mechanical engineering" ]
61,248,076
https://en.wikipedia.org/wiki/C55H70MgN4O6
{{DISPLAYTITLE:C55H70MgN4O6}} The molecular formula C55H70MgN4O6 (molar mass: 907.49 g/mol, exact mass: 906.5146 u) may refer to: Chlorophyll_b Chlorophyll_f Molecular formulas
C55H70MgN4O6
[ "Physics", "Chemistry" ]
75
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,248,470
https://en.wikipedia.org/wiki/C13H13N3O3
{{DISPLAYTITLE:C13H13N3O3}} The molecular formula C13H13N3O3 (molar mass: 259.27 g/mol, exact mass: 259.0957 u) may refer to: Ciclobendazole Lenalidomide
C13H13N3O3
[ "Chemistry" ]
65
[ "Isomerism", "Set index articles on molecular formulas" ]
61,249,402
https://en.wikipedia.org/wiki/Quantum%20telescope
A quantum telescope is a concept for a telescope aimed at overcoming the diffraction limit of traditional telescopes by exploiting some properties of quantum mechanics, such as entanglement and photon cloning. References Proposed telescopes Telescope types
Quantum telescope
[ "Physics" ]
46
[ "Quantum mechanics", "Quantum physics stubs" ]
76,450,769
https://en.wikipedia.org/wiki/Mobile%20DevOps
Mobile DevOps is a set of practices that applies the principles of DevOps specifically to the development of mobile applications. Traditional DevOps focuses on streamlining the software development process in general, but mobile development has its own unique challenges that require a tailored approach. Mobile DevOps is not simply a branch of DevOps specific to mobile app development, but rather an extension and reinterpretation of the DevOps philosophy driven by the very specific requirements of the mobile world. Rationale The traditional DevOps approach formed around 2007-2008, close to the dates when the iOS and Android mobile operating systems were released to the public. It primarily evolved to meet the changing needs of the software development world with the paradigm shift towards continuous and rapid development and deployment (such as in web development, where interpreted languages are more prevalent than compiled languages). While traditional DevOps embraced agility and flexibility, mobile operating system providers steered towards a walled-garden approach built around compiled apps, with tight controls over how they can be distributed and installed on a mobile device. This difference from what the traditional DevOps approach advocates is amplified further by the need to deploy mobile applications to a large number of varying devices and operating systems. Eventually, the concept of Mobile DevOps took off as a trend around 2014-2015, in line with the fast growth of the number of applications in mobile app stores. As individuals and corporations alike develop and publish more and more mobile applications, the need for efficiency and shorter release cycles has increased; this need is addressed by the continuous feedback and continuous development approach of DevOps, while requiring a significant level of adaptation and extension of the traditional DevOps practices. Mindset shift from traditional DevOps to mobile DevOps Mobile DevOps has a unique set of challenges and constraints, which underlines the fact that it needs to be approached as a separate discipline. These challenges, illustrated by the pipeline sketch that follows this list, can be outlined as follows: Platform-specific requirements and tight controls by mobile operating system providers, where for instance a macOS device is mandatory for iOS application development and release. The walled-garden approach of distributing mobile apps, specifically applying to iOS applications, which comes with app review and app release delays that would not be needed in web development, for instance. Code signing requirements that come with the walled-garden approach, which introduce additional processes in the mobile application build pipeline along with new security concerns. An entire deployment cycle is re-run even for the slightest code change due to how applications are compiled and delivered to the users. The final product is to be deployed to a wide variety of mobile devices worldwide, which requires extensive testing and user feedback. Monitoring mobile applications requires additional tools and approaches to be able to get data from an application running on a mobile device while respecting user privacy. Frequent operating system updates by mobile platforms can require rapid adaptation of apps, introducing further complexity to the development and maintenance cycles. 
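The pipeline sketch referenced in the list above is given here. It is illustrative only: the stage names, script paths, and the `run` helper are assumptions invented for the example and do not correspond to any particular vendor's CI/CD API; by default it only prints the stages rather than executing them.

```python
# Illustrative mobile CI/CD pipeline skeleton. All stage names and script
# paths are assumptions for the sake of the example, not a real vendor API.
import subprocess

def run(description: str, command: list[str], execute: bool = False) -> None:
    """Print one pipeline stage (or actually run it when execute=True)."""
    print(f"==> {description}: {' '.join(command)}")
    if execute:
        subprocess.run(command, check=True)  # fail the whole build on error

def mobile_pipeline(platform: str) -> None:
    run("Fetch dependencies", ["./scripts/bootstrap.sh"])
    run("Static analysis / lint", ["./scripts/lint.sh"])
    run("Unit tests", ["./scripts/unit_tests.sh"])
    if platform == "ios":
        # Code signing and a macOS build host are needed for iOS (see challenges above).
        run("Import signing certificates", ["./scripts/import_certs.sh"])
        run("Build and sign IPA", ["./scripts/build_ios.sh"])
    else:
        run("Build and sign APK/AAB", ["./scripts/build_android.sh"])
    # A device farm stands in for the wide variety of target devices.
    run("UI tests on device matrix", ["./scripts/device_tests.sh", platform])
    run("Upload to store / beta channel", ["./scripts/distribute.sh", platform])

if __name__ == "__main__":
    for target in ("ios", "android"):
        mobile_pipeline(target)
```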
Benefits of mobile DevOps Mobile DevOps is not an abstract concept and offers a range of benefits that can help improve the efficiency and effectiveness of the mobile app development process. These benefits can even be quantified by collecting data within the mobile application development lifecycle. The benefits can be categorized into the following areas: Faster Release Cycles: By automating tasks and streamlining the development process, mobile DevOps enables teams to deliver new features and updates more frequently. Improved Quality: Automated testing and continuous monitoring help to identify and fix bugs earlier in the development cycle, leading to higher quality apps. Optimized Resource Utilization: Mobile DevOps promotes optimized resource utilization by automating tasks and streamlining workflows. Furthermore, mobile DevOps practices like containerization can help to create more efficient and scalable development environments. Increased Agility: Mobile DevOps allows teams to be more responsive to changes in the market and user feedback. List of Dedicated Mobile DevOps Platforms Even though it is possible to run a mobile DevOps cycle with most CI/CD platforms, doing so may require significant effort compared to non-mobile CI/CD (e.g. you need to bring your own infrastructure, or it may require "reinventing the wheel" on commonly-used platforms like Jenkins). To overcome the mobile-specific challenges outlined above, there are certain platforms that are dedicated to the lifecycle of mobile applications. These platforms focus exclusively on DevOps processes for mobile app development and are also referred to as mobile CI/CD platforms. Appcircle (Multiplatform | Cloud-based & On-premise) Visual Studio App Center (Multiplatform | Cloud-based) Xcode Cloud (Apple platforms only | Cloud-based) See also Continuous integration Continuous delivery Continuous deployment Mobile app development List of mobile app distribution platforms Mobile-device testing References External links Android Developer Documentation Apple Developer Documentation Flutter Developer Documentation What is mobile DevOps? by Microsoft Mobile software development Software development process Mobile applications Software development Software engineering
Mobile DevOps
[ "Technology", "Engineering" ]
991
[ "Systems engineering", "Computer engineering", "Computer occupations", "Software engineering", "Information technology", "Software development" ]
76,451,701
https://en.wikipedia.org/wiki/Bridge%20protection%20systems
Bridge protection systems prevent ship collision damage to a bridge by either deflecting an aberrant ship from striking the piers of a bridge, or sustaining and absorbing the impact. History Protecting bridges against ship collisions attracted the attention of architects and regulators in the last third of the 20th century due to a marked increase in the frequency of collision accidents: worldwide, 30 major bridges collapsed in the 1960-1998 timeframe after being rammed by ships or barges, and 321 persons were killed. The rate of smaller accidents is much higher: there were 811 serious accidents that did not cause a collapse just in the United States between 1970 and 1974, with 14 persons killed. Minor collisions are routine: the US Coast Guard gets 35 reports per day. In the US, the turning point was the collapse of the Sunshine Skyway Bridge in 1980. Since then, A "Committee on Ship/Barge Collision" appointed by the National Research Council issued a report on the history of ship collisions with bridges (1983); The Louisiana Department of Transportation and Development issued vessel collision protection criteria for the bridge piers in Louisiana (1984); Eleven states together with the Federal Highway Administration commissioned guidelines for bridge protection design in the United States (1988); The American Association of State Highway and Transportation Officials (AASHTO) issued a Vessel Collision Design Guide Specification in February 1991, based on the previous study; The International Association for Bridge and Structural Engineering published its "Ship Collision with Bridges" guide in 1993; AASHTO adopted the LRFD bridge design specifications with provisions for bridge protection (1994). Designs There are several types of bridge protection systems used: Fender systems attached to the pier with the goal of absorbing the vessel impact. Their ability to withstand a typical ship collision is low. Fenders are built using a variety of materials: thin-walled concrete box; thin-walled steel membrane; rubber. artificial islands built with a sand and rock core that is protected by riprap. The islands are quite effective in protecting the pier by pushing the ship away, but cause environmental damage to the river bottom and, while settling, might shift the bridge piers; dolphins are made of piles driven into the river bottom in a group, with the space in between sometimes filled with rocks and capped with concrete. The collision is absorbed via deformations of the structure; pile-supported systems on dedicated piles that are driven into the bottom either vertically or at an angle ("batter piles"). The piles are connected together with rigid or flexible links, can be attached to the pier, and are sometimes fitted with fenders; floating systems (cable nets and pontoons) have multiple problems, from low efficiency to high construction and maintenance costs and environmental impacts, and are therefore used as a last resort, when the location of the bridge precludes the use of other designs. Starlings are widenings of the bridge piers near their base, typically extending some distance above water level, providing some degree of reinforcement of the pier against impact. Alternatives Physical bridge protection systems designed to prevent catastrophic collisions are expensive and represent a "significant" share of overall construction costs. 
Therefore, alternatives are typically considered during the design phase: fortifying the piers and superstructure to the point where they will be able to handle the impact, either on their own or with the help of a fender system; increasing the span length, so that the piers are away from the fairway and thus protected by the shallow water around them; and improving the navigational aids to reduce the probability of a catastrophic impact (60-85% of collisions are due to pilot error). Regulations Highway designs in the US are subject to the AASHTO specifications, but the specification text does not contain specific procedures and recommendations. Railway bridges are built according to the "Manual for Railway Engineering" published by the American Railway Engineering and Maintenance-of-Way Association (AREMA). In Australia, the subject is covered in the Australian standard AS 5100.2:2017, "Bridge design, Part 2: Design loads". References Sources Bridge design
Bridge protection systems
[ "Engineering" ]
809
[ "Structural engineering", "Bridge design", "Architecture" ]
76,454,282
https://en.wikipedia.org/wiki/Confocal%20endoscopy
Confocal endoscopy, or confocal laser endomicroscopy (CLE), is a modern imaging technique that allows real-time examination of microscopic and histological features inside the body. In the word "endomicroscopy", endo- means "within" and -skopein means "to view or observe". CLE, also known as "optical biopsy", can analyse the histological and cytological features of a tissue, which is otherwise only possible by tissue biopsy. As in confocal microscopy, the laser in CLE is filtered by a pinhole and excites the fluorescent dye after passing through a beam splitter and objective lens. The fluorescent emission then follows a similar path into the detector. A pinhole is used to select emissions from the desired focal plane. Two categories of CLE exist, namely probe-based CLE (pCLE) and the less common endoscope-based CLE (eCLE). CLE can be introduced into the body to study the gastrointestinal (GI) tract and accessory digestive organs with a fluorescent dye. A variety of diseases, including inflammatory bowel disease (IBD) and Barrett's oesophagus, can be diagnosed by the magnified and in-depth view in combination with traditional endoscopy. Significance CLE can identify lesions at a small depth beneath the tissue surface, in contrast to the surface-level view of conventional endoscopy. It also allows clinicians to discriminate between benign and malignant lesions through real-time histological diagnosis by revealing the properties of the lamina at a cellular level. An example is Whipple's disease. Conventional endoscopy presents a whitish-patterned duodenal mucosa. CLE, in comparison, generates two images: the superficial images show capillary leak in the duodenal mucosa, while the deep images show cells of the duodenal mucosa, including goblet cells and foamy macrophages in the lamina propria. Compared to histological examination of the same duodenal site after periodic acid-Schiff staining, CLE identifies similar patterns of goblet cells and foamy macrophages. Types Two types of CLE have been invented, namely probe-based CLE (pCLE) and endoscope-based CLE (eCLE). Probe-based CLE pCLE, developed by Mauna Kea Technologies, is a fibre bundle passed through the 2.8 mm working channel (the hollow channel) of a standard endoscope into the GI tract. With a fixed plane of imaging, each fibre acts as a pinhole to filter unwanted noise. The frame rate lies between 9 and 12 images/second. Endoscope-based CLE eCLE, developed by Pentax, is a confocal microscope fixed at the end of the endoscopic tube. The integrated eCLE device is larger in diameter than the pCLE probe, making endoscopic intubation of the GI tract more difficult. eCLE is no longer commercially available due to the camera's inflexibility. Medical uses Oesophagus CLE is effective in detecting premalignant lesions, including Barrett's oesophagus, and malignant (cancerous) lesions in the upper GI tract. Mucosal changes that serve in histopathology as an index of malignancy, such as high-grade dysplasia, can be identified under CLE. CLE can also inform the treatment of Barrett's oesophagus by measuring the lateral extent of neoplasia. The Miami classification is the most popular system in oesophageal CLE diagnosis. Stomach and duodenum Similar to its use in the oesophagus, CLE is able to detect early gastric cancer, as well as premalignant conditions such as gastritis and intestinal metaplasia.
CLE can detect and distinguish gastric pit patterns to identify disease in accordance with the Miami classification, which was refined in 2016 to include both pit patterns and the architecture of blood vessels. The refined classification allows clinicians to differentiate between neoplastic and non-neoplastic lesions. The presence of Helicobacter pylori can also be identified using CLE by viewing the morphological changes in tissues. Lower Gastrointestinal Tract CLE reveals a "soccer ball-like" pattern of narrowed capillaries in malignant lymphomas; distorted architecture and fluorescein leakage from the lumen in colonic adenocarcinoma; and blunted villi and crypts with increased intraepithelial lymphocytes in coeliac disease. CLE can be utilized to identify adenoma and neoplasia in colorectal polyps and lesions. The Miami classification provides guidelines for clinicians to differentiate neoplastic and non-neoplastic lesions. Inflammatory Bowel Disease (IBD) CLE can be used for the identification of IBD and its subtypes (Crohn's disease and ulcerative colitis) based on the observation of morphological characteristics, such as architectural distortion, lowered crypt density, crypt irregularity and an abnormally high density of epithelial gaps. The prediction of IBD progression from non-inflamed epithelium is also achievable, making way for a novel "treat-to-target" therapeutic approach. Pancreas Incorporating endoscopic ultrasound (EUS), CLE can accurately diagnose pancreatic cystic lesions, including mucinous and non-mucinous lesions. Special needles are used to collect fluid and cyst wall tissues for testing. Pancreatic ductal adenocarcinoma (PDAC) can also be viewed by CLE. By observing cystic lesions and PDAC, clinicians can identify early chronic pancreatitis and determine the malignancy of lesions. Biliary duct Biliary strictures can be viewed by CLE. The Miami and Paris classifications can be adapted to differentiate cancerous and inflammatory causes. Others The discrimination of inflammation from malignant tumours in the lung and the urinary system may also be possible using CLE and is currently under research. Other uses, such as the diagnosis of oral and other head-and-neck cancers, have been proposed. Molecular imaging Antibodies against molecular targets are used to diagnose GI diseases by histology. CLE captures the fluorescence produced by specific antibodies binding to vascular endothelial growth factor (VEGF). By comparing the difference in fluorescence intensity, clinicians can differentiate normal and neoplastic tissue. Molecular imaging with antibodies may be applied to CLE as a diagnostic benchmark due to its high correlation with ex vivo microscopy. The molecular imaging technique can be used in a similar manner in the examination of head and neck cancer using CLE, though the diagnostic targets may be different from those in the gastrointestinal tract. Mechanism Basic mechanism The laser emitted by CLE through a pinhole is reflected by a beam splitter or a dichroic mirror and focused by an objective lens. The fluorescent dye in the targeted tissue is excited and emits light at a specific wavelength. The emission from the focal plane of the tissue is then collected by the objective lens and the beam splitter. The emitted light is finally filtered by a pinhole, to reduce out-of-focus noise, before entering the detector or photomultiplier tube. Topical dyes Cresyl violet and acriflavine can be used as topical dyes. Cresyl violet is a common stain in histology used for light microscopy sections, especially brain sections.
In CLE, it can enhance the viewing of the cytoplasm, yet its tissue penetration is limited and it does not reveal the vasculature. Acriflavine is an antiseptic and dye. In CLE, it can stain the nuclei of GI surface epithelial cells. It is, however, cytotoxic and mutagenic, in addition to commonly causing irritation. Intravenous dyes Fluorescein is the most popular IV dye for CLE. Fluorescein is an FDA-cleared dye that is used routinely in ophthalmology clinics, as it appears green under cobalt blue light. It is commonly applied topically to identify corneal diseases, including corneal abrasions, ulcers, and infections, with slit lamp microscopes; or intravenously to identify retinal diseases, including macular degeneration and diabetic retinopathy, with angiography. In CLE, it is usually administered intravenously immediately before the intubation of the endoscopic tube. The fluorescence is reported to be most prominent from a few seconds to 8 minutes after injection. Fluorescein is slowly eliminated; thus the fluorescence slowly decays to a minimally detectable level after about 1 hour, giving clinicians a time window to investigate. Recognition and optical flow algorithm CLE's narrow field of vision makes it difficult for clinicians to identify the location and path of the probe, making it challenging to match the image obtained to the location and direction of the lesion. Research has proposed a crypt recognition algorithm, which predicts pixel displacement from the angle and distance of probe movement. By reconstructing the exploration path of CLE, clinicians can locate sites of interest and improve diagnostic efficiency. Image quality assessment Research has proposed a new assessment method for filtering images yielded from CLE. CLE images often suffer from distortion, degradation of image quality and loss of image information, which ultimately increase the difficulty of accurate diagnosis. A new image quality assessment (IQA) method utilising Weber's law and local descriptors assesses image quality and filters out images with low diagnostic value. Limitations The variety of pathological conditions identifiable by CLE is limited. Histological diagnosis is limited to cancerous lesions and inflammation, and the number of specific diseases identifiable is not large. Moreover, operating CLE and correctly interpreting CLE images require specific training, skills that are rarely practised even by experts in endoscopy. Owing to the narrow field of view, the applications of CLE might be restricted. Computer-aided diagnosis with AI technology may be beneficial in diagnosing CLE images. Fluorescein is the only approved dye regarded as safe, while cresyl violet and acriflavine are also commonly used agents. The lack of choice of contrast agents may limit the application of CLE. For instance, patients allergic to fluorescein should never undergo CLE that involves the use of this intravenous dye. The optical system consists of complex microscopic optical instruments, which are difficult to manufacture and assemble; the tool is therefore expensive. CLE is mostly used in combination with other techniques instead of replacing conventional endoscopy with biopsy. CLE can only serve as a complement to traditional biopsy. By sharing the same working channel, conventional biopsy and CLE can be performed alternately in a single intubation. Adverse effects The allergic properties of fluorescein, the common intravenous fluorescent dye for CLE, are the major cause of the mild adverse events.
Intubation CLE, similar to other diagnostic endoscopic techniques, may give rise to pancreatitis when used to examine the pancreas. The likelihood of pancreatitis is especially high in needle-based CLE. The incidence can be minimized by shortening the inspection time and avoiding excessive needle movement within the pancreatic cyst wall. Fluorescein Mild side effects, which are rare, include nausea, vomiting with mild epigastric pain, rash at the injection site, transient hypotension without shock, and diffuse erythema. These effects are manageable unless patients have experienced them previously. Cases of anaphylaxis have been reported with ophthalmological uses of fluorescein. Prophylactic use of antihistamines can reduce the chance of allergic reactions, and skin prick tests can identify patients at risk of allergic reactions. Acriflavine Acriflavine, another contrast agent for CLE, is potentially carcinogenic to humans due to its known mutagenic ability. The dye is therefore not approved by the FDA. History CLE is a modern, in vivo adaptation of confocal microscopy, the microscopic technique invented by Marvin Minsky in 1957. Since 2004, CLE has been used for observing histopathological changes in gastrointestinal tissues. See also Confocal microscopy Endoscopy Contrast agent References Medicine
Confocal endoscopy
[ "Biology" ]
2,587
[ "Medicine" ]
76,454,827
https://en.wikipedia.org/wiki/Temporins
Temporins are a family of peptides originally isolated from the skin secretion of the European red frog, Rana temporaria. Peptides belonging to the temporin family have also been isolated from closely related North American frogs, such as Rana sphenocephala. Discovery In 1996, a cDNA library constructed from the skin of Rana temporaria was screened, revealing three peptide precursors, known as Temporin B, Temporin G, and Temporin H. Once discovered, the three peptides, along with other temporins found in the sample, were separated from secretions of the frog's skin. Biological assays were performed, revealing that all temporins, with the exception of temporins D and H, exhibited antibacterial activity. It was soon discovered that other species of frog also possess temporins, resulting in the discovery of more than 150 peptides from the temporin family. Some of these frog genera include Amolops, Hylarana, Lithobates, Odorrana, Pelophylax, and Rana. Clinical uses Temporins are a family of antimicrobial peptides (AMPs) that preferentially target Gram-positive bacteria, penetrating the cell wall and damaging the inner membrane of the bacteria. These peptides are also capable of killing specific cancer cells when locally administered to a tumor, making them candidate anticancer drugs. References Peptides
Temporins
[ "Chemistry" ]
301
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
76,455,081
https://en.wikipedia.org/wiki/Microneedles
Microneedles (MNs) are medical tools used for microneedling, primarily in drug delivery, disease diagnosis, and collagen induction therapy. Known for their minimally invasive and precise nature, MNs consist of arrays of micro-sized needles ranging from 25μm to 2000μm in length. Although the concept of microneedling was first introduced in the 1970s, its popularity has surged due to its effectiveness in drug delivery and its cosmetic benefits. Since the 2000s, there have been discoveries of new fabrication materials for MNs, such as silicon, metal and polymer. Alongside materials, a variety of MN types (solid, hollow, coated, hydrogel) have also been developed to serve different functions. The research on MNs has led to improvements in different aspects, including instruments and techniques, yet adverse events are possible in MN users. Microneedle patches or microarray patches are micron-scale medical devices used to administer vaccines, drugs, and other therapeutic agents. While microneedles were initially explored for transdermal drug delivery applications, their use has been extended to the intraocular, vaginal, transungual, cardiac, vascular, gastrointestinal, and intracochlear delivery of drugs. Microneedles are constructed through various methods, usually involving photolithographic processes or micromolding. These methods involve etching microscopic structures into resin or silicon in order to cast microneedles. Microneedles are made from a variety of materials, including silicon, titanium, stainless steel, and polymers. Some microneedles are made of a drug to be delivered to the body but are shaped into a needle so they will penetrate the skin. The microneedles range in size, shape, and function but are all used as an alternative to other delivery methods like the conventional hypodermic needle or other injection apparatus. Stimuli-responsive microneedles are advanced devices that respond to environmental triggers such as temperature, pH, or light to release therapeutic agents. Microneedles are usually applied either as a single needle or in small arrays. The arrays used are a collection of microneedles, ranging from only a few microneedles to several hundred, attached to an applicator, sometimes a patch or other solid stamping device. The arrays are applied to the skin of patients and are given time to allow for the effective administration of drugs. Microneedles are an easier method for physicians, as they require less training to apply and are not as hazardous as other needles, making the administration of drugs to patients safer and less painful while also avoiding some of the drawbacks of other forms of drug delivery, such as the risk of infection, the production of hazardous waste, and cost. History The concept of microneedles was first derived from the use of large hypodermic needles in the 1970s, but it only became prominent in the 1990s as microfabrication manufacturing technology developed. The concept of MNs finally came into experimentation in 1994, when Orentreich found that the insertion of tri-beveled needles into the skin could stimulate the release of fibrous strands. The investigation of MNs' potential to improve transdermal drug delivery gradually raised public awareness of MNs. Since then, extensive research has been conducted on MNs, contributing to the development of different materials, types, and fabrication methods of MNs, and exploring their applications and adverse events. In the 2000s, clinical trials on MNs' use in drug delivery began.
Microneedles were first mentioned in a 1998 paper by the research group headed by Mark Prausnitz at the Georgia Institute of Technology, which demonstrated that microneedles could penetrate the uppermost layer (stratum corneum) of the human skin and were therefore suitable for the transdermal delivery of therapeutic agents. Subsequent research into microneedle drug delivery has explored the medical and cosmetic applications of this technology through its design. This early paper sought to explore the possibility of using microneedles in the future for vaccination. Since then, researchers have studied microneedle delivery of insulin, vaccines, anti-inflammatories, and other pharmaceuticals. In dermatology, microneedles are used for the treatment of scarring with skin rollers. As mentioned above, microneedles have also been explored for locally targeted drug delivery at other sites, such as the gastrointestinal, ocular and vascular routes; of these, the ocular, vaginal and gastrointestinal routes have shown increasingly convincing outcomes, serving as more efficient, localised drug delivery systems without the drawbacks of systemic exposure or toxicity. The major goal of any microneedle design is to penetrate the skin's outermost layer, the stratum corneum (10-15μm). Microneedles are long enough to cross the stratum corneum but not so long that they stimulate nerves, which are located deeper in the tissue, and therefore cause little to no pain. Research has shown that there is a limit on the type of drugs that can be delivered through intact skin. Only compounds with a relatively low molecular weight, like the common allergen nickel (130 Da), can penetrate the skin. Compounds that weigh more than 500 Da cannot penetrate the skin. Materials of microneedles Microneedles (MNs) consist of arrays of micro-sized needles made of various materials; these materials exhibit different characteristics and are suited to the fabrication of different types of MNs. The selection of materials for the formation of MNs depends greatly on the strength required for skin penetration, the manufacturing method, and the rate of drug release. Silicon was the first material used for the production of MNs. While the flexible nature of silicon allows easy manufacture of MNs of different sizes and types, silicon MNs can easily fracture during insertion into the skin. In contrast, MNs made of metals like stainless steel, titanium, and aluminum are non-toxic and possess the mechanical strength to penetrate the skin without breakage. Nevertheless, metal MNs may cause allergic effects in some patients and they create non-biodegradable waste. Polymer is also regarded as a promising material for MNs due to its good biocompatibility and low toxicity. Water-soluble polymers are the most commonly used within this group, although MN tip breakage is more likely than with MNs made of silicon or metal. Therefore, polymer is a more suitable material for dissolving MNs or hydrogel-forming MNs. Types of microneedles Since their conceptualization in 1998, several advances have been made in terms of the variety of types of microneedles that can be fabricated. The 5 main types of microneedles are solid, hollow, coated, dissolvable/dissolving, and hydrogel-forming. The distinct characteristics of each type of MN allow a variety of clinical applications, including diagnosis and treatment. The micro-sized needles in a microneedle (MN) device can range from 25μm to 2000μm in length depending on their type.
Solid microneedles Solid MNs were the first type of MNs fabricated and are the most commonly used. Hard solid MNs have sharp tips that pierce through and form pores in the stratum corneum. A drug patch is then applied to the skin so that the drug is absorbed slowly and passively through the numerous micropores. This type of array is designed as a two-part system; the microneedle array is first applied to the skin to create microscopic wells just deep enough to penetrate the outermost layer of skin, and then the drug is applied via a transdermal patch. Solid microneedles are already used by dermatologists in collagen induction therapy, a method which uses repeated puncturing of the skin with microneedles to induce the expression and deposition of the proteins collagen and elastin in the skin. Solid MNs help increase the permeability and absorption of drugs. Hollow microneedles Hollow MNs are designed with a hole at the tip and a hollow reservoir that stores the drug. Upon insertion of the MNs, the stored drug is directly injected into the dermis, which effectively facilitates the absorption of large-molecule or large-dose drugs. However, a portion of the drug can leak or clog the needle, which may hinder overall drug administration. Since the delivery of the drug depends on the flow rate through the microneedle, this type of array can become clogged by excessive swelling or flawed design. This design also increases the likelihood of buckling under pressure and therefore failing to deliver any drugs. Coated microneedles Coated MNs are fabricated by coating a drug solution over solid MNs, and the thickness of the drug layer can be adjusted depending on the amount of drug to be administered. A benefit of coated MNs is that a smaller amount of drug is needed compared to other drug administration routes. This is because the layer of drug quickly dissolves and is delivered into the systemic circulation directly across the skin. The solid MNs, which are removed afterwards, may be contaminated by leftover drug, and the reuse of those MNs raises the concern of cross-infection between patients. Coated microneedles are often covered in other surfactants or thickening agents to ensure that the drug is delivered properly. Some of the chemicals used on coated microneedles are known irritants. While there is a risk of local inflammation in the area where the array was applied, the array can be removed immediately with no harm to the patient. Dissolving microneedles Dissolving MNs are mostly composed of water-soluble drugs, which enables the dissolution of the MN tips when inserted into the skin. This is a one-step approach which does not require the removal of MNs and is convenient for long-term therapy. However, incomplete insertion and delayed dissolution have been observed with the use of dissolving MNs. This polymer allows the drug to be delivered into the skin and can be broken down once inside the body. Pharmaceutical companies and researchers have begun to study and implement polymers such as fibroin, a silk-based protein that can be molded into structures like microneedles and dissolved once in the body. Hydrogel-forming microneedles The primary material for the fabrication of hydrogel-forming microneedles (HFMs) is a hydrophilic polymer that encloses the drug. This material draws water from the interstitial fluid in the stratum corneum, resulting in polymer swelling and release of the drug. In addition, the hydrophilic nature of HFMs allows ready uptake of interstitial fluid, which could be used for disease diagnosis.
Application and principle Transdermal drug delivery The most common transdermal drug administration routes currently are hypodermic needles, transdermal patches, and topical creams. However, these routes have limited therapeutic effects because the stratum corneum serves as a barrier that reduces the entry of drug molecules into the systemic circulation and target tissues. The invention of MNs has retained the benefits of both hypodermic needles and transdermal patches while minimizing their drawbacks. Compared to hypodermic needles, MNs provide pain-free administration. MNs are able to penetrate through the epidermis, but not deep enough to compress nerve endings and produce pain responses. The superficial penetration also lessens the infection risk. Compared to transdermal patches, MNs are proven to be effective in producing micropores in the epidermis. In in-vitro skin models, the micropores increase the absorption of large molecules, like calcein and insulin, by 4 times. In addition, MNs' direct drug delivery to the systemic circulation avoids the first-pass effect in the liver, significantly increasing drug bioavailability, and the fast absorption into the systemic circulation also allows a fast onset of action. Therefore, MNs could benefit diabetes treatment, as common oral delivery would lead to a significant loss of insulin from degradation in the liver (first-pass effect) and insulin molecules are too large to be absorbed using common transdermal patches. Furthermore, the high precision of MNs also allows drugs to reach localized tissues precisely, for instance the intradermal layers for cancer or the eye for ophthalmic disorders. Vaccination MNs are suitable for vaccination because of their capability to deliver macromolecules and to maintain a slow and sustained release of vaccine agents using both coated and dissolving MNs. In addition, MNs' biodegradability minimizes biohazardous waste, unlike hypodermic needles. The application of MNs in vaccination would benefit people who avoid vaccination due to trypanophobia (fear of needles in medical settings). As of 2024, microneedle vaccination has been found to generate an immune response similar to injection of the measles and rubella vaccine. Disease diagnosis and monitoring Disease diagnosis and monitoring of therapeutic efficacy are possible by detecting biomarkers in body fluid. However, current tissue fluid extraction methods are pain-inducing, and it may take up to hours or days for samples to be analyzed in medical laboratories. MNs can collect body fluid in an almost painless manner, and they can provide immediate diagnosis when combined with a sensor. MNs penetrate through the epidermis but are not long enough to compress nerves in deeper layers, and thus they are minimally invasive and almost painless. MNs' precision also allows the extraction of fluid surrounding diseased tissues, which may contain higher concentrations of biomarkers as well as specific biomarkers that are not present in the systemic circulation. These fluids provide more clinically significant and accurate values than those extracted from the systemic circulation, subsequently lowering the chance of underestimating disease severity, especially for localized diseases. Furthermore, MNs are capable of providing (near) real-time diagnosis and are easily administered with simple procedures. Thus, MNs are potential candidates for point-of-care (PoC) testing, which could be conducted at the bedside.
Hollow MNs and hydrogel MNs could be used to diagnose and monitor several diseases, including cataracts, diabetes, cancer, and Alzheimer's disease. For instance, hollow glass MNs and hydrogel MNs could extract skin interstitial fluid for the detection of glucose levels. Collagen induction therapy In the field of dermatology, microneedling with MNs is more commonly known as collagen induction therapy. The therapy induces dermis regeneration via repetitive perforation of the skin using sterilized MNs. The repetitive penetration through the stratum corneum forms micropores, and these physical traumas to the skin sequentially stimulate the wound-healing cascade and the expression of collagen and elastin in the dermis. By making use of the body's natural regenerative properties, microneedling can be used alone to treat scars and wrinkles and to rejuvenate the skin, or in combination therapy with topical tretinoin and vitamin C for an enhanced effect. Recent research has expanded the possibilities of MNs to treat pigmentation disorders and actinic keratosis, and to promote hair growth in patients with androgenetic alopecia and alopecia areata. MNs have diverged into different forms, including the Dermapen and dermarollers. Dermarollers are hand-held rollers equipped with a total of 192 solid steel micro-sized needles arranged into 24 arrays, with lengths ranging from 0.5-1.5mm. With the growing popularity of microneedling, MNs have also been commodified into home care dermarollers, which are similar to medical dermarollers except that the needles are shorter (0.15mm). This is a more budget-friendly device that allows individuals to perform microneedling at home.
This leaves a large window of opportunity for harmful bacteria to enter into the skin. Microneedles only damage the skin to a depth of 10-15μm, making it difficult for bacteria to enter the bloodstream and giving the body a smaller wound to repair. Further research is required to determine the types of bacteria able to breach the shallow puncture site of microneedles. Disadvantages There are some concerns about how physicians can be sure that all of the drug or vaccine has entered the skin when microneedles are applied. Hollow and coated microneedles both possess the risk that the drug will not properly enter the skin and will not be effective. Both of these types of microneedles can leak onto a person's skin either by damage of the microneedle or incorrect application by the physician. This is why it is essential that physicians are trained how to properly apply the arrays. Another concern is that incorrectly applied arrays could leave foreign material in the body. Although there is a lower risk of infection associated with microneedles, the arrays are more fragile than a typical hypodermic needle due to their small size and thus have a chance of breaking off and remaining in the skin. Some of the material used to construct the microneedles, such as titanium, cannot be absorbed by the body and any fragments of the needles would cause irritation. There is a limited amount of literature available on the subject of microneedle drug delivery, as current research is still exploring how to make effective needles. In terms of design and manufacture, low drug loading is a key barrier towards reaching the clinics. Safety profile Apart from procedural pain, some common post-treatment adverse events (AEs) of MNs include temporary discomfort, erythema (skin redness), and edema. Pinpoint bleeding, itching, irritation, and bruising are also possible in some cases. However, most of the adverse side effects are not long-lasting and could be resolved spontaneously within 24 hours after the treatment, making MNs a rather safe tool. Photoprotection and minimal exposure to chemicals irritants are often advised for an effective recovery and lowered chance of skin inflammation. Severe risks may be possible if there are technical errors during the procedure. For example, the usage of non-sterile tools might result in post-inflammatory hyperpigmentation, systemic hypersensitivity, local infections, etc. Moreover, if excess pressure is used over a bony prominence, it could lead to “Tram-track scarring”. But this could be avoided by using smaller needles and prevent over-pressurizing on top of these areas. In addition, if the patient is allergic to the either the drug used or the material of MNs, contact dermatitis is possible. Therefore, clinicians should be cautious towards patients with high risks of allergy. References Further reading External links Microneedles: a new way to deliver vaccines, Dawn Connelly, The Pharmaceutical Journal, 2021 Medical equipment Drug delivery devices
Microneedles
[ "Chemistry", "Biology" ]
4,178
[ "Pharmacology", "Drug delivery devices", "Medical equipment", "Medical technology" ]
64,753,670
https://en.wikipedia.org/wiki/Wastewater%20surveillance
Wastewater surveillance is the process of monitoring wastewater for contaminants. Amongst other uses, it can be used for biosurveillance, to detect the presence of pathogens in local populations, and to detect the presence of psychoactive drugs. One example of this is the use of wastewater monitoring to detect the presence of the SARS-CoV-2 virus in populations during the COVID-19 pandemic. In one study, wastewater surveillance showed signs of SARS-CoV-2 RNA before any cases were detected in the local population. Later in the pandemic, wastewater surveillance was demonstrated to be one technique to detect SARS-CoV-2 variants and to monitor their prevalence over time. Comparison between case-based epidemiological records and deep-sequenced wastewater samples validated that the composition of the virus population in the wastewater is in strong agreement with the virus variants circulating in the infected population. Following the 2022-23 reopening surge of COVID-19 cases in China, airplane wastewater surveillance began to be employed as a less intrusive method of monitoring for potential variants of concern arising within specific countries and regions. At the request of the US Centers for Disease Control, the National Academies of Sciences, Engineering, and Medicine revealed, in a January 2023 report, its vision for a national wastewater surveillance system. Such a system would, according to the report committee, remove geographical inequities in the identification of future SARS-CoV-2 variants, influenza strains, antibiotic-resistant bacteria, and other potential threats. Nationwide wastewater surveillance would likewise be combined with wastewater collection at other early-warning "sentinel sites", such as zoos and international airports. The European Union identified wastewater surveillance as "a cost effective, rapid and reliable source of information on the spread of SARS-CoV-2 in the population and that it can form a valuable part of an increased genomic and epidemiological surveillance" and proposes to extend urban wastewater surveillance to poliovirus, influenza, emerging pathogens, contaminants of emerging concern, antimicrobial resistance and any other public health parameters that are considered relevant by the Member States. See also Wastewater-based epidemiology References Epidemiology Sewerage Surveillance
Wastewater surveillance
[ "Chemistry", "Engineering", "Environmental_science" ]
464
[ "Water pollution", "Sewerage", "Environmental engineering", "Epidemiology", "Environmental social science" ]
69,124,573
https://en.wikipedia.org/wiki/Optogenetic%20methods%20to%20record%20cellular%20activity
Optogenetics began with methods to alter neuronal activity with light, using e.g. channelrhodopsins. In a broader sense, optogenetic approaches also include the use of genetically encoded biosensors to monitor the activity of neurons or other cell types by measuring fluorescence or bioluminescence. Genetically encoded calcium indicators (GECIs) are used frequently to monitor neuronal activity, but other cellular parameters such as membrane voltage or second messenger activity can also be recorded optically. The use of optogenetic sensors is not restricted to neuroscience, but plays increasingly important roles in immunology, cardiology and cancer research. History The first experiments to measure intracellular calcium levels via protein expression were based on aequorin, a bioluminescent protein from the jellyfish Aequorea. To produce light, however, this enzyme needs the 'fuel' compound coelenterazine, which has to be added to the preparation. This is not practical in intact animals, and in addition, the temporal resolution of bioluminescence imaging is relatively poor (seconds to minutes). The first genetically encoded fluorescent calcium indicator (GECI) to be used to image activity in an animal was cameleon, designed by Atsushi Miyawaki, Roger Tsien and coworkers in 1997. Cameleon was first used successfully in an animal by Rex Kerr, William Schafer and coworkers to record from neurons and muscle cells of the nematode C. elegans. Cameleon was subsequently used to record neural activity in flies and zebrafish. In mammals, the first GECI to be used in vivo was GCaMP, first developed by Junichi Nakai and coworkers in 2001. GCaMP has undergone numerous improvements, notably by a team of scientists at the Janelia Farm Research Campus (GENIE project, HHMI), and GCaMP6 in particular has become widely used in neuroscience. Very recently, G protein-coupled receptors have been harnessed to generate a series of highly specific indicators for various neurotransmitters. Design principles Genetically encoded sensors are fusion proteins, consisting of a ligand binding domain (sensor) and a fluorescent protein, attached by a short linker (a flexible peptide). When the sensor domain binds the correct ligand, it changes conformation. This movement is transferred to the fluorescent protein, and the resulting deformation leads to a change in fluorescence. The efficiency of this process depends critically on the length of the linker region, which has to be optimized in a labor-intensive process. The fluorescent protein is often circularly permuted, i.e. new C-terminal and N-terminal ends are created. Single-wavelength sensors are easy to use for qualitative measurements, but difficult to calibrate for quantitative measurements of ligand concentration. A second class of sensors relies on Förster resonance energy transfer (FRET) between two fluorescent proteins (FPs) of different colors. The shorter-wavelength FP (donor) is excited with blue light from a laser or LED. If the second FP (acceptor) is very close, the energy is transferred to the acceptor, resulting in yellow or red fluorescence. When the acceptor FP moves further away, the donor emits green fluorescence. The sensor domain is typically spliced between the two FPs, resulting in a hinge-type movement upon ligand binding that changes the distance between donor and acceptor. The imaging procedure is more complex for FRET sensors, but the fluorescence ratio can be calibrated to measure the absolute concentration of a ligand.
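As a brief quantitative sketch (these are the standard Förster relations from general FRET theory, not values taken from this article or tied to any particular sensor), the transfer efficiency E falls off steeply with the donor-acceptor distance r, which is why the hinge-type conformational change can be read out as a change in the acceptor/donor emission ratio or in the donor lifetime:

E(r) = \frac{1}{1 + (r/R_0)^6}, \qquad E = 1 - \frac{\tau_{DA}}{\tau_{D}}

Here R_0 is the Förster radius of the FP pair (on the order of a few nanometres for typical donor-acceptor pairs), and \tau_{DA} and \tau_{D} are the donor fluorescence lifetimes measured with and without the acceptor, respectively.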
Read-out via fluorescence lifetime imaging (FLIM) of donor fluorescence is also possible, as the FRET process speeds up the fluorescence decay. Advantages Optogenetic sensors can be targeted to specific classes of cells (e.g. astrocytes or pyramidal cells), which allows for optical read-out without spatial resolution, e.g. fiber photometry from deep brain areas. They can be targeted to sub-cellular compartments (e.g. synapses, organelles, nucleus) by fusing the indicator protein with specific anchoring domains, retention signals or intrabodies. They work in a variety of species (nematodes, insects, fish, mammals) and in cell culture systems (FLIPR assay), can be delivered by viral vectors (e.g. rAAV), and can be used to record the activity of thousands of neurons at the same time. Drawbacks, limitations Optogenetic sensors buffer the measured ion or protein, potentially interfering with cellular signaling. They are subject to photobleaching, compromising long-term measurements, and can be toxic when expressed at very high concentration. They require highly sensitive cameras or laser scanning microscopes. Some GPCR-based sensors are sensitive to polarization, and most indicators are green fluorescent, making it difficult to measure several cellular parameters simultaneously (multiplexing). Classes of genetically encoded indicators Indicators have been designed to measure ion concentrations, membrane potential, neurotransmitters, and various intracellular signaling molecules. The following list provides only examples for each class; many more have been published. Intracellular signaling Genetically encoded calcium indicators (GECIs): a large class of tools, based on natural calcium binding proteins (calmodulin, troponin), with different affinities, kinetics, and colors (green, red) available; read-out via fluorescence intensity (single wavelength indicators), FRET or BRET; have been targeted to various organelles; current version: jGCaMP8. Genetically encoded chloride indicators: Clomeleon. Genetically encoded potassium indicators: GINKO2. Genetically encoded indicators for intracellular pH (GEPhI): CypHer. Genetically encoded voltage indicators (GEVI): ArcLight. Genetically encoded vesicle fusion sensors: Synapto-pHluorin, Synaptophysin-pHluorin. Genetically encoded cAMP sensors: EPAC. Genetically encoded ATP sensors: QUEEN-37C. Genetically encoded kinase activity sensors: CaMui, SmURFP. Genetically encoded small G-protein sensors: FRas. Neurotransmitters and other extracellular signals Genetically encoded glutamate sensors: GluSnFR. Genetically encoded GABA sensors: iGABASnFR. Genetically encoded dopamine sensors: dLight1, GRAB-DA. Genetically encoded serotonin sensors: GRAB5-HT, sDarken, iSeroSnFR. Genetically encoded norepinephrine sensors: GRABNE. Genetically encoded sensor for endocannabinoid activity: GRABeCB2.0. Genetically encoded sensor for orexin/hypocretin neuropeptides: OxLight1. Genetically encoded sensor for lactate: eLACCO1.1. Further reading A recent review of GPCR-based genetically encoded fluorescent indicators for neuromodulators External links Fluorescent Biosensor Database, a fairly complete searchable list of published sensors and their basic properties, maintained by Jin Zhang's lab at UCSD. Fluorescent Biosensors available on Addgene, a nonprofit plasmid repository. References Neuroscience Biological techniques and tools Optics
Optogenetic methods to record cellular activity
[ "Physics", "Chemistry", "Biology" ]
1,445
[ "Neuroscience", "Applied and interdisciplinary physics", "Optics", " molecular", "nan", "Atomic", " and optical physics" ]
69,125,797
https://en.wikipedia.org/wiki/Colossal%20Biosciences
Colossal Biosciences Inc. is an American biotechnology and genetic engineering company working to de-extinct the woolly mammoth, the Tasmanian tiger, the northern white rhinoceros, and the dodo. In 2023, it stated that it wants to have woolly mammoth hybrid calves by 2028, and wants to reintroduce them to the Arctic tundra habitat. Likewise, it launched the Tasmanian Thylacine Advisory Committee, a thylacine research project to release Tasmanian tiger joeys back to their original Tasmanian and broader Australian habitat after a period of observation in captivity. The company develops genetic engineering and reproductive technology for conservation biology. It was founded in 2021 by George Church and Ben Lamm, and is based in Dallas, Texas. History Foundation In a 2008 interview with The New York Times, George Church first expressed his interest in engineering a hybrid Asian elephant-mammoth by sequencing the woolly mammoth genome. In 2012, Church was part of a team that pioneered the CRISPR-Cas9 gene editing tool, through which the potential for altering genetic code to engineer the envisioned “mammophant" surfaced. Church presented a talk at the National Geographic Society in 2013, where he mapped out the idea of Colossal. Church and his genetics team used CRISPR to copy mammoth genes into the genome of an Asian elephant in 2015. That same year, Church's lab integrated mammoth genes into the DNA of elephant skin cells; the lab zeroed in on 60 genes that experiments hypothesized as being important to the distinctive traits of mammoths, such as a high-domed skull, ability to hold oxygen at low temperatures, and fatty tissue. Church's lab reported in 2017 that it had successfully added 45 genes to the genome of an Asian elephant. In 2019, Ben Lamm, a serial entrepreneur, contacted Church to meet at his lab in Boston. Lamm was intrigued by press reports of Church's de-extinction idea. Launch Colossal was officially launched on September 13, 2021. The launch included a $15 million seed round led by Thomas Tull, Tim Draper, Tony Robbins, Winklevoss Capital Management, Breyer Capital and Richard Garriott. In addition to the de-extinction of the woolly mammoth, Colossal, in partnership with Dr. Paul Ling of Baylor College of Medicine, hopes to synthesize the elephant endotheliotropic herpesvirus, a virus which infects and kills many young Asian elephants. Colossal also announced that the company's mission was to preserve endangered animals through gene-editing technology and use those same animals to reshape Arctic ecosystems to combat climate change. The company's genomic modeling software development could potentially bring forth advancements in disease treatment, multiplexed genetic engineering, synthetic biology, and biotechnology. Colossal recruited mammoth and modern elephant experts Michael Hofreiter and Fritz Vollrath, as well as bioethicists R. Alta Charo and S. Matthew Liao for their consultation. Other scientific advisory board members include: Carolyn Bertozzi, Austin Gallagher, Kenneth Lacovara, Beth Shapiro, Helen Hobbs, David Haussler, Elazar Edelman, Joseph DeSimone, Erez Lieberman Aiden, Christopher E. Mason, and Doris Taylor. In October 2021, Colossal announced its partnership with VGP; through this collaboration, Colossal will provide funding for VGP to sequence and assemble Asian, African bush, and African forest elephant genomes for preservation purposes. These genomes were made publicly accessible for research without use restrictions in July 2022 and May 2023, respectively. 
In March 2022, Colossal raised $60 million in a Series A funding round led by Thomas Tull with participation from Animoca Brands, Paris Hilton, Charles Hoskinson, bringing the total funding to $75 million. Colossal spun out its software platform, Form Bio, in September 2022 with $30 million in funding. Lamm has stated that Colossal is run like a software company, and monetization will stem from technologies developed by the company. Form Bio has developed an AI-based software platform designed to help scientists manage large and complicated datasets. In January 2023, Colossal completed a Series B funding round, raising an additional $150 million and putting the company's valuation at over $1 billion. The same month, Colossal launched its Conservation Advisory Board, which includes Forrest Galante, Iain Douglas-Hamilton, Mead Treadwell, and Aurelia Skipwith as its members. In October 2024, Colossal announced $50 million in funding for its launch of the Colossal Foundation. Science and development Because the woolly mammoth and Asian elephant share 99.6% of the same DNA, Colossal aims to develop a proxy species by swapping enough key mammoth genes into the Asian elephant genome. Key mammoth genealogical traits include: a 10-centimeter layer of insulating fat, five different types of shaggy hair, and smaller ears to help the hybrid tolerate cold weather. Colossal's lab will pair CRISPR/Cas9 with other DNA-editing enzymes, such as integrases, recombinases, and deaminases, to splice woolly mammoth genes into the Asian elephant. The company plans on sequencing both elephant and mammoth samples in order to identify key genes in both species to promote population diversification. By doing so, Colossal hopes to prevent any rogue mutations within the hybrid herd. Colossal set a goal for the company to grow a woolly mammoth calf by 2028. The company plans to use African and Asian elephants as potential surrogates and largely plans to develop artificial elephant wombs lined with uterine tissue as a parallel path to gestation. Colossal scientists plan on creating these embryos by taking skin cells from Asian elephants and reprogramming them into induced pluripotent stem cells which carry mammoth DNA. Lamm stated that Colossal will use both induced pluripotent stem cells (iPSC) as well as somatic cell nuclear transfer in the process. In July 2022, VGP and Colossal announced that they successfully sequenced the entire Asian elephant genome; this is the first time that mammalian genetic code has been fully sequenced to this degree since the Human Genome Project was completed in the early 2000s. In August 2022, Colossal announced that they would launch a thylacine research project, in hopes of "de-extincting" the Tasmanian tiger. Colossal plans to reintroduce the thylacine proxy to selected areas in Tasmania and broader Australia and claims that, by doing so, this will re-balance ecosystems that have suffered biodiversity loss and degradation since the species disappeared. A successful thylacine proxy birth could also introduce new marsupial-assisted reproductive technology which can aid in other marsupial conservation efforts. Colossal is partnering with the University of Melbourne, and the project is led by Andrew Pask. The Tasmanian Thylacine Advisory Committee was launched in December 2023. In January 2023, Colossal announced the formation of its Avian Genomics Group, which will be dedicated to reconstructing the DNA of the dodo bird, which went extinct in the 1600s. 
Led by Beth Shapiro, who serves as Chief Science Officer to Colossal, this research group aims to create a hybrid composed of specific traits most commonly associated with the dodo and plans to reintroduce these hybrids into their respective environments. Colossal will be working with primordial germ cells to pair dodo DNA with the genome of the Nicobar pigeon, the extinct dodo's closest living relative. Breaking, a plastic degradation and synthetic biology startup, was launched in April 2024. Gestated at Colossal, Breaking discovered X-32, a microbe that is capable of breaking down various plastics in as little as 22 months while leaving behind carbon dioxide, water and biomass. It was reported in 2024 that Colossal successfully produced the first-ever elephant and dunnart iPSCs. In October 2024, the company announced that it had rebuilt a 99.9% accurate genome of the thylacine, using a "pickled" 110-year-old fossilized Tasmanian tiger skull. This marks "the most complete ancient genome of any species known to date" and provides a full DNA blueprint to potentially bring back the Tasmanian tiger. Three months later, in January 2025, the company sequenced the complete genome of the Tasmanian tiger, bringing the species closer to de-extinction. Future de-extinction projects Outside of their first four projects, Colossal Biosciences has stated that they have a "long list" of species that they want to revive and reintroduce to appropriate ecosystems. Such species include Castoroides, Arctodus, and Steller's sea cow. Ben Lamm has stated that he and his company want to revive Steller's sea cow once they have developed an artificial animal womb, as there are no adequate living relatives of the extinct sea cow to act as a surrogate species. Colossal has also done genetic research for species such as the Irish elk, great auk, bluebuck, ground sloth, moas, and woolly rhinoceros with the intent to potentially revive them in the future. Conservation In October 2022, Colossal announced that it was developing a vaccine for elephant endotheliotropic herpesvirus (EEHV), in partnership with the Baylor College of Medicine. In May 2023, Colossal partnered with the Vertebrate Genomes Project to successfully generate the first high-quality reference genome of an African elephant. This sequencing work is part of a long-term conservation effort for the endangered elephant species. In September 2023, Colossal partnered with BioRescue to help save the northern white rhino from extinction by using reproduction technology and stem cell technology, as the subspecies is functionally extinct with only two known infertile female members left. Colossal and Zoos Victoria began a conservation project in October 2023 to preserve the Victorian Grassland Earless Dragon as well as sequence its genome. In November 2023, Colossal announced a research partnership with Save the Elephants to track African elephants in the Samburu National Reserve. Save the Elephants has already tracked over 900 elephants in the area, using drones equipped with high-resolution infrared cameras. Colossal plans to use pose estimation to develop algorithms for labeling elephants and automatically identifying individual and collective social behavior. Colossal also began a partnership with the Mauritian Wildlife Foundation in November 2023.
Under this collaboration, the organizations will work to restore "critical ecosystems through invasive species removal, revegetation, and community awareness efforts." Additionally, Colossal will focus on the rewilding of the dodo bird as well as the genetic rescue of the pink pigeon. In March 2024, Colossal and Re:wild partnered to establish a "10-year conservation strategy" to accelerate efforts to "save species on the brink of extinction, search for lost species, and restore key habitats for species recovery and rewilding." In May 2024, Colossal and the University of Melbourne announced the successful engineering of cane toad toxin resistance in marsupial cells, as part of conservation efforts for the northern quoll. Colossal's refined resistance engineering strategy can create over 6,000-fold increased resistance with just one edit to the genome. The first-ever mRNA vaccine for elephant endotheliotropic herpesvirus (EEHV), developed by Colossal, the Houston Zoo, and the Baylor College of Medicine, was administered to an elephant in July 2024. In October 2024, Colossal announced its launch of the Colossal Foundation, a non-profit initiative that utilizes Colossal-developed science and technology methods for partner-led conservation efforts. Included in its conservation agenda is the Colossal Biovault, "the world's largest distributed biobanking initiative." The Colossal Biovault collects tissue samples of endangered species in the hope of making cell lines accessible and storing them in domestic partner facilities. Its first projects include the Sumatran rhinoceros, red wolf, northern quoll, pink pigeon, tooth-billed pigeon, Victorian grassland earless dragon, vaquita, and ivory-billed woodpecker. In December 2024, Colossal and the University of Melbourne began research into engineering immunity to chytridiomycosis, a lethal fungal disease responsible for many extinctions and declines in amphibians, including the golden toad and Rabbs' fringe-limbed tree frog. Reception In 2022, Colossal was listed as one of the World Economic Forum's Technology Pioneers and was named Genomics Innovation of the Year by the BioTech Breakthrough Awards. Colossal was included in Time's 100 Most Influential Companies 2023 list. Colossal was voted one of the best places to work in Dallas, Texas, U.S. by Builtin in 2025. References Biotechnology companies established in 2021 Genome projects Conservation biology Biotechnology companies Companies based in Dallas Privately held companies based in Texas 2021 establishments in Texas
Colossal Biosciences
[ "Engineering", "Biology" ]
2,629
[ "Colossal Biosciences", "Genome projects", "Biotechnology companies", "Biotechnology organizations", "Conservation biology" ]
69,131,064
https://en.wikipedia.org/wiki/Space%20Concordia
Space Concordia, commonly referred to as SC, is a student organisation at Concordia University in Montreal, Canada, dedicated to the development of space technology and teaching students about space-related sciences. Over 150 members are organized in four divisions. Space Concordia's Rocketry division is currently competing in the Base 11 Space Challenge, developing a liquid-fuelled rocket with the goal of crossing the Kármán line. The development and most of the manufacturing are done in-house by students, including the construction of the mobile engine test stand Trailer Tom. History Space Concordia was founded in 2010 by assistant professor Scott Gleason at Concordia University with the purpose of competing in the Canadian Satellite Design Challenge (CSDC). The team had fewer than ten members at the time. Their entry in the competition, the satellite Consat-1, won first place. The team working on satellites later became the Spacecraft division of Space Concordia. In 2012, the Rocketry and Robotics divisions were founded. Robotics builds rovers and Rocketry is dedicated to the construction of rockets. The Rocketry division's first rocket, Arcturus, was awarded 2nd place in the payload category of the 10th Intercollegiate Rocket Engineering Competition (IREC) in 2015. The solid rocket reached a height of . In 2016, Space Concordia won 2nd place in the basic category of the 11th IREC. In 2018, the rocket Supersonice reached a height of and a top speed of Mach 1.8. Space Concordia's first supersonic rocket won first place in the Spaceport America Cup. Later in 2018, Rocketry started development of a liquid-fuelled rocket to compete in the Base 11 Space Challenge. Space Concordia won second place in the design phase and first place in the Critical Design Review (CDR). On June 18, 2021, they successfully completed a hot-fire test of the engine. Space Concordia fired the most powerful liquid-fuelled student rocket engine, producing an average thrust of 35 kN at ground level. Divisions Space Concordia is organised in four divisions: Robotics, Rocketry, Spacecraft and Space Health. Rocketry Division Space Concordia Rocketry Division, commonly abbreviated as SCRD, focuses on the design, development and testing of liquid rockets. Since 2018, the Space Concordia Rocketry Division has been developing a liquid rocket known as "StarSailor". This rocket is designed to fly to the Kármán line. The rocket is powered by a pressure-fed kerosene–liquid oxygen engine, which currently holds the Canadian record for the highest thrust produced by an amateur rocketry team. Facilities Space Concordia’s club room, commonly referred to as 'Space Lab', is located on the 9th floor of the Henry F. Hall Building on Concordia’s downtown campus. Space Concordia’s members also have access to the 'Cage', an area in the basement of the Henry F. Hall Building, used by different clubs and capstone students. The Cage is used for manufacturing, and for storing equipment and parts. Concordia’s Engineering Design and Manufacturing Lab (EDML) is used by members to manufacture parts that require machining. The EDML is also located in the basement of the Henry F. Hall Building. It is supervised by university staff members, who also assist and train students. 
Awards by division Spacecraft Division 2012 1st place in the first Canadian Satellite Design Challenge (CSDC) 2016 1st place in the third Canadian Satellite Design Challenge (CSDC) Rocketry Division 2016 2nd place in the basic category of the 11th IREC 2018 1st place in the 30k ft category of the Spaceport America Cup 2019 2nd place in the design review of the Base 11 Space Challenge 2021 1st place in the Critical Design Review (CDR) of the Base 11 Space Challenge Rockets Space Concordia has built and launched four rockets so far; a fifth, StarSailor, is in development. Arcturus (2015) Aurelius (2016) Maurice (2017) Supersonice (2018) StarSailor (2018–present) See also Concordia University Delft Aerospace Rocket Engineering Spaceport America Cup Liquid-propellant rocket WARR (TUM) References External links Space Concordia - Space & Aerospace Student Organization Experimental Sounding Rocket Association Base 11 Space Challenge Concordia University Rocketry
Space Concordia
[ "Engineering" ]
843
[ "Rocketry", "Aerospace engineering" ]
74,883,647
https://en.wikipedia.org/wiki/Conley%20conjecture
The Conley conjecture, named after mathematician Charles Conley, is a mathematical conjecture in the field of symplectic geometry, a branch of differential geometry. Background Let be a compact symplectic manifold. A vector field on is called a Hamiltonian vector field if the 1-form is exact (i.e., is equal to the differential of a function). A Hamiltonian diffeomorphism is the integration of a 1-parameter family of Hamiltonian vector fields . In dynamical systems one would like to understand the distribution of fixed points or periodic points. A periodic point of a Hamiltonian diffeomorphism (of period ) is a point such that . A feature of Hamiltonian dynamics is that Hamiltonian diffeomorphisms tend to have infinitely many periodic points. Conley first made such a conjecture for the case that is a torus. The Conley conjecture is false in many simple cases. For example, a rotation of a round sphere by an angle equal to an irrational multiple of , which is a Hamiltonian diffeomorphism, has only 2 geometrically different periodic points. On the other hand, it has been proved for various types of symplectic manifolds. History of studies The Conley conjecture was proved by Franks and Handel for surfaces with positive genus. The case of the higher-dimensional torus was proved by Hingston. Hingston's proof inspired Ginzburg's proof of the Conley conjecture for symplectically aspherical manifolds. Later, Ginzburg–Gurel and Hein proved the Conley conjecture for manifolds whose first Chern class vanishes on spherical classes. Finally, Ginzburg–Gurel proved the Conley conjecture for negatively monotone symplectic manifolds. References Symplectic geometry Conjectures
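The inline formulas in the Background section above were lost in extraction. As a reference point, the following LaTeX snippet restates the standard conventions behind those definitions; the notation ($(M,\omega)$, $X_H$, $\varphi_H$) is assumed here rather than recovered from the original article.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard (assumed) notation for the objects described above.
% (M, \omega) denotes a compact symplectic manifold.
\[
  \iota_{X_H}\omega = dH
  \quad\text{defines the Hamiltonian vector field $X_H$ of $H\colon M\to\mathbb{R}$,}
\]
\[
  \varphi_H := \varphi^1,\qquad
  \tfrac{d}{dt}\varphi^t = X_{H_t}\circ\varphi^t,\quad \varphi^0 = \mathrm{id},
  \quad\text{defines a Hamiltonian diffeomorphism,}
\]
\[
  \varphi_H^{\,k}(x) = x
  \quad\text{says that $x$ is a periodic point of period $k$.}
\]
\end{document}
```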
Conley conjecture
[ "Mathematics" ]
369
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures" ]
74,889,407
https://en.wikipedia.org/wiki/Quantum%20Mechanics%20%28book%29
Quantum Mechanics (), often called the Cohen-Tannoudji, is a series of standard undergraduate-level quantum mechanics textbooks written originally in French in 1973 by Nobel laureate in Physics Claude Cohen-Tannoudji, Bernard Diu, and Franck Laloë. The first edition was published by Collection Enseignement des Sciences in Paris, and was translated into English by Wiley. The book was originally divided into two volumes. A third volume was published in 2017. The book's structure is notable for having an extensive set of complementary chapters, introduced along with a "reader's guide", at the end of each main chapter. Table of contents Vol. 1 I. Waves and particles. Introduction to the ideas of quantum mechanics II. Mathematical tools of quantum mechanics III. The postulates of quantum mechanics IV. Applications of the postulates to simple cases: Spin-1/2 and two-level systems V. The one-dimensional harmonic oscillator VI. General properties of angular momentum in quantum mechanics VII. Particle in a central potential: the hydrogen atom Vol. 2 VIII. An elementary approach to the quantum theory of scattering by a potential IX. Electron spin X. Addition of angular momenta XI. Stationary perturbation theory XII. An application of perturbation theory: The fine and hyperfine structure of the hydrogen atom XIII. Approximation methods for time-dependent problems XIV. Systems of identical particles Appendices Vol. 3 XV. Creation and annihilation operators for identical particles XVI. Field operator XVII. Paired states of identical particles XVIII. Review of classical electrodynamics XIX. Quantization of electromagnetic radiation XX. Absorption, emission and scattering of photons by atoms XXI. Quantum entanglement, measurements, Bell's inequalities Appendices Reception Bernd Crasemann, writing for the American Journal of Physics, praised the book for its clarity and its unusual structure that introduces the reader to intermediate topics. According to him, the "gems" of the book are the complements related to atomic, molecular, and optical physics; condensed matter physics; and nuclear physics. The book has also been suggested as a complement to simplified introductory books on quantum mechanics. Experimental physicist and 2022 Nobel laureate in Physics Alain Aspect has frequently mentioned that the book was a revelation early in his career, helping him better understand the research papers of quantum mechanics and the work of John Stewart Bell. See also Introduction to Quantum Mechanics, an undergraduate text by David J. Griffiths Modern Quantum Mechanics, a textbook by J. J. Sakurai List of textbooks on classical mechanics and quantum mechanics References Physics textbooks 1985 non-fiction books 1994 non-fiction books 2020 non-fiction books Quantum mechanics
Quantum Mechanics (book)
[ "Physics" ]
544
[ "Quantum mechanics", "Works about quantum mechanics" ]
74,890,992
https://en.wikipedia.org/wiki/Triiron%20ditin%20intermetallic
The compound with empirical formula Fe3Sn2 is the first known kagome magnet. It is an intermetallic compound composed of iron (Fe) and tin (Sn), with alternating planes of Fe3Sn and Sn. Preparation The iron-tin intermetallic forms at around and naturally assumes a kagome structure. Quenching in an ice bath then cools the material to room temperature without disrupting the atomic structure. Electronic structure The compound's band structure exhibits a double Dirac cone, enabling Dirac fermions. A 30 meV gap separates the cones, which indicates the quantum Hall effect and massive Dirac fermions. Close measurement of the Fermi surface via the de Haas-van Alphen effect suggests that the massive fermions also exhibit Kane-Mele-type spin-orbit coupling. Fe3Sn2 can also host magnetic skyrmions, but these typically require high magnetic fields to nucleate. For samples with a small (but nonzero) thickness gradient, only a small-amplitude (5-10 mT), direction-variant magnetic field suffices to nucleate the quasiparticles. References Intermetallics Ferrous alloys Tin alloys 2018 in science
Triiron ditin intermetallic
[ "Physics", "Chemistry", "Materials_science" ]
262
[ "Ferrous alloys", "Inorganic compounds", "Metallurgy", "Tin alloys", "Alloys", "Intermetallics", "Condensed matter physics" ]
77,851,730
https://en.wikipedia.org/wiki/Ophelia%20Tsui
Ophelia Kwan Chui Tsui (, born 1967) is a Chinese physicist who studies experimental polymer science, particularly focusing on the physical properties and glass transition behavior of thin films of polymers, and their study using atomic force microscopy and dielectric spectroscopy. She is a professor of physics at Hong Kong University of Science and Technology, where she directs the William Mong Institute of Nano Science and Technology. Education and career Tsui has a bachelor's degree from the University of Hong Kong and a Ph.D. from Princeton University, completed in 1996. After postdoctoral research at the Massachusetts Institute of Technology and University of Massachusetts Amherst, she became an assistant professor of physics at Hong Kong University of Science and Technology (HKUST) in 1998. She moved to Boston University from 2007 until 2017, when she returned to HKUST. Recognition Tsui was elected as a Fellow of the American Physical Society (APS) in 2011, after a nomination from the APS Division of Polymer Physics, "for outstanding contributions on the dynamics of thin polymer films". References External links Home page 1967 births Living people Chinese physicists Chinese women physicists Polymer scientists and engineers Alumni of the University of Hong Kong Princeton University alumni Academic staff of the Hong Kong University of Science and Technology Boston University faculty Fellows of the American Physical Society
Ophelia Tsui
[ "Chemistry", "Materials_science" ]
263
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
77,852,725
https://en.wikipedia.org/wiki/Osserman%20manifold
In mathematics, particularly in differential geometry, an Osserman manifold is a Riemannian manifold in which the characteristic polynomial of the Jacobi operator of unit tangent vectors is a constant on the unit tangent bundle. It is named after American mathematician Robert Osserman. Definition Let be a Riemannian manifold. For a point and a unit vector , the Jacobi operator is defined by , where is the Riemann curvature tensor. A manifold is called pointwise Osserman if, for every , the spectrum of the Jacobi operator does not depend on the choice of the unit vector . The manifold is called globally Osserman if the spectrum depends neither on nor on . All two-point homogeneous spaces are globally Osserman, including Euclidean spaces , real projective spaces , spheres , hyperbolic spaces , complex projective spaces , complex hyperbolic spaces , quaternionic projective spaces , quaternionic hyperbolic spaces , the Cayley projective plane , and the Cayley hyperbolic plane . Properties Clifford structures are fundamental in studying Osserman manifolds. An algebraic curvature tensor in has a -structure if it can be expressed as where are skew-symmetric orthogonal operators satisfying the Hurwitz relations . A Riemannian manifold is said to have -structure if its curvature tensor also does. These structures naturally arise from unitary representations of Clifford algebras and provide a way to construct examples of Osserman manifolds. The study of Osserman manifolds has connections to isospectral geometry, Einstein manifolds, curvature operators in differential geometry, and the classification of symmetric spaces. Osserman conjecture The Osserman conjecture asks whether every Osserman manifold is either a flat manifold or locally a rank-one symmetric space. Considerable progress has been made on this conjecture, with proofs established for manifolds of dimension where is not divisible by 4 or . For pointwise Osserman manifolds, the conjecture holds in dimensions not divisible by 4. The case of manifolds with exactly two eigenvalues of the Jacobi operator has been extensively studied, with the conjecture proven except for specific cases in dimension 16. See also Clifford algebra Jacobi operator List of unsolved problems in mathematics References Riemannian manifolds Differential geometry
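The inline formulas in this article were also lost in extraction. The LaTeX below restates the usual definition of the Jacobi operator and the Osserman conditions as a reference; the symbols ($R$, $\mathcal{J}_v$, $T_pM$) and the sign convention are the common ones in this literature and are assumed rather than taken from the original text.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Standard (assumed) notation: (M, g) is a Riemannian manifold with curvature tensor R.
\[
  \mathcal{J}_v(w) := R(w, v)\,v,
  \qquad p \in M,\quad v \in T_pM,\quad \lVert v \rVert = 1 .
\]
\[
  \textbf{Pointwise Osserman: } \operatorname{Spec}\mathcal{J}_v \text{ is independent of the unit vector } v \in T_pM .
\]
\[
  \textbf{Globally Osserman: } \operatorname{Spec}\mathcal{J}_v \text{ is independent of both } p \text{ and } v .
\]
\end{document}
```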
Osserman manifold
[ "Mathematics" ]
459
[ "Riemannian manifolds", "Space (mathematics)", "Metric spaces" ]
77,853,900
https://en.wikipedia.org/wiki/Ammonium%20hexafluorouranate
Ammonium hexafluorouranate is an inorganic chemical compound with the chemical formula . Synthesis Ammonia reduces uranium hexafluoride at room temperature to produce the compound. Physical properties Ammonium hexafluorouranate exists in four crystal modifications. References Fluoro complexes Uranates Ammonium compounds Fluorometallates Hexafluorides
Ammonium hexafluorouranate
[ "Chemistry" ]
76
[ "Ammonium compounds", "Salts" ]
77,854,547
https://en.wikipedia.org/wiki/4-Fluoroephedrine
4-Fluoroephedrine (4-FEP) is a "novel psychoactive substance" and substituted β-hydroxyamphetamine derivative related to ephedrine. Pharmacology Similarly to other amphetamines, 4-fluoroephedrine acts as a monoamine reuptake inhibitor and monoamine releasing agent. It specifically acts as a selective norepinephrine releasing agent. In contrast to many other amphetamines, but similarly to most cathinones, 4-fluoroephedrine lacks affinity for the human trace amine-associated receptor 1 (hTAAR1). Chemistry 4-Fluoroephedrine, also known as 4-fluoro-β-hydroxy-N-methylamphetamine, is a substituted phenethylamine, amphetamine, and β-hydroxyamphetamine derivative. It is the 4-fluoro analogue of ephedrine. The synthesis of 4-fluoroephedrine has been described. It can serve as a precursor in the synthesis of 4-fluoromethamphetamine (4-FMA). The predicted log P (XLogP3) of 4-fluoroephedrine is 1.0. For comparison, the predicted log P of ephedrine is 0.9. History 4-Fluoroephedrine was first described in the scientific literature by 1991. The next mention of it in the literature was in 2013, when it was identified as a "novel psychoactive substance". The pharmacology of 4-fluoroephedrine was characterized in 2015. Other drugs 4-Fluoroephedrine is known to be a metabolite of 4-fluoromethcathinone (4-FMC; flephedrone). References 4-Fluorophenyl compounds Beta-Hydroxyamphetamines Designer drugs Drugs acting on the cardiovascular system Drugs acting on the nervous system Human drug metabolites Methamphetamines Norepinephrine releasing agents Peripherally selective drugs Stimulants Sympathomimetics
4-Fluoroephedrine
[ "Chemistry" ]
438
[ "Chemicals in medicine", "Human drug metabolites" ]
77,857,410
https://en.wikipedia.org/wiki/Fosgonimeton
Fosgonimeton is an investigational new drug that is being evaluated to treat neurodegenerative diseases such as Alzheimer's and Parkinson's disease. It is a pro-drug of the active metabolite dihexa. Dihexa in turn binds to the hepatocyte growth factor (HGF) and potentiates its activity at its receptor, c-Met. References Antidementia agents Antiparkinsonian agents Amides Amino acids Benzyl compounds Organophosphates Sec-Butyl compounds Carboxamides
Fosgonimeton
[ "Chemistry" ]
118
[ "Biomolecules by chemical classification", "Pharmacology", "Functional groups", "Medicinal chemistry stubs", "Amino acids", "Pharmacology stubs", "Amides" ]
77,857,583
https://en.wikipedia.org/wiki/Somantadine
Somantadine (; developmental code name PR 741-976), or somantadine hydrochloride () in the case of the hydrochloride salt, is an experimental antiviral drug of the adamantane family related to amantadine and rimantadine that was never marketed. It was first described by 1978. References Abandoned drugs Adamantanes Amines Antiviral drugs
Somantadine
[ "Chemistry", "Biology" ]
86
[ "Pharmacology", "Antiviral drugs", "Drug safety", "Medicinal chemistry stubs", "Amines", "Functional groups", "Pharmacology stubs", "Biocides", "Bases (chemistry)", "Abandoned drugs" ]
77,858,923
https://en.wikipedia.org/wiki/Ethofumesate
Ethofumesate is a pre- and post-emergence herbicide used on sugar beets to control weeds, notably blackgrass. UK registration for pre-emergence use on wheat, as an auxiliary component of a tank mix, was planned for 2016. In Australia, ethofumesate is used to control wintergrass in turfgrass, along fencelines and in tree plantations. Young weeds absorb ethofumesate through their roots and shoots, and the herbicide inhibits respiration and photosynthesis. Ethofumesate is a Group J (Australia), K3 (global), Group 15 (numeric) resistance-class herbicide. In soil, ethofumesate is biodegraded by soil microorganisms. In soils with over 1% organic matter content, ethofumesate does not leach. The half-life in soil is 5–14 weeks, and residual herbicide activity can last four to eight months. Nortron is an ethofumesate emulsifiable concentrate from Nor-Am. References Links Herbicides Benzofurans Sulfonate esters
Ethofumesate
[ "Chemistry", "Biology" ]
235
[ "Herbicides", "Sulfonate esters", "Biocides", "Functional groups" ]
77,863,076
https://en.wikipedia.org/wiki/Ocrelizumab/hyaluronidase
Ocrelizumab/hyaluronidase, sold under the brand name Ocrevus Zunovo, is a fixed-dose combination medication used for the treatment of multiple sclerosis. It contains ocrelizumab, a recombinant humanized monoclonal antibody directed at CD20; and hyaluronidase (human recombinant), an endoglycosidase. It is taken by subcutaneous injection. Ocrelizumab/hyaluronidase was approved for medical use in the United States in September 2024. Medical uses Ocrelizumab/hyaluronidase is indicated for the treatment of relapsing forms of multiple sclerosis, to include clinically isolated syndrome, relapsing-remitting disease, and active secondary progressive disease; and primary progressive multiple sclerosis. References External links Combination drugs Drugs developed by Genentech Drugs developed by Hoffmann-La Roche Monoclonal antibodies
Ocrelizumab/hyaluronidase
[ "Chemistry" ]
210
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
77,867,014
https://en.wikipedia.org/wiki/Tolebrutinib
Tolebrutinib is an investigational new drug that is being evaluated to treat multiple sclerosis. It is a Bruton's tyrosine kinase (BTK) inhibitor. References Anti-inflammatory agents Tyrosine kinase inhibitors Acrylamides Amines Imidazoles Ethers Piperidines Pyridines Imidazopyridines Phenol ethers Ureas
Tolebrutinib
[ "Chemistry" ]
86
[ "Pharmacology", "Functional groups", "Medicinal chemistry stubs", "Amines", "Organic compounds", "Ethers", "Pharmacology stubs", "Bases (chemistry)", "Ureas" ]
56,428,519
https://en.wikipedia.org/wiki/C17orf53
C17orf53 is a gene in humans that encodes a protein known as C17orf53, uncharacterized protein C17orf53. It has been shown to target the nucleus, with minor localization in the cytoplasm. Based on current findings, C17orf53 is predicted to perform functions of transport; however, further research into the protein could provide more specific evidence regarding its function. Gene Location C17orf53 is located on the long arm of chromosome 17, and is 16,727 bp long. C17orf53 spans from 44,145,203 to 44,161,929, and is located on the positive strand. Gene neighborhood Neighboring genes of C17orf53 are RNU6-131, RNA U6 Small Nuclear 131 Pseudogene, and ASB16, Ankyrin Repeat And SOCS Box Containing 16. Expression C17orf53 has been observed to be expressed ubiquitously across almost all tissue types of the body. Expression levels for C17orf53 are observed to be significantly high in tissue types including the pons, the thalamus, the superior cervical ganglion, the testis, the heart, cardiac myocytes, and multiple types of lymphoma. Furthermore, based on in situ hybridization data, the hypothalamus, responsible for relaying sensory information, exhibits high expression of C17orf53, in contrast to the low expression of C17orf53 in the mesencephalon region of the brain, responsible for vision, hearing, and motor control. Transcript variants The coding region of C17orf53 consists of 2,699 base pairs and encodes a protein that is 647 amino acids long. Per NCBI AceView, the transcription of C17orf53 produces nine alternatively spliced mRNAs and 17 distinct gt-ag introns. From these nine alternatively spliced variants, four distinct protein products are formed. Homology Paralogs No paralogs of C17orf53 exist. There are also no known gene duplications. Orthologs Listed in the table to the right is a selection of C17orf53 orthologs of varying relatedness levels. Orthologs of the human protein C17orf53 are listed in descending order based on date of divergence and percent sequence identity. Evolutionary history Based on the ortholog table and phylogenetic tree listed to the right, the C17orf53 gene diverged around 1624 MYA, when eukaryotes broke off from prokaryotes and archaea. Since then the gene has diverged rapidly, at a rate comparable to that of fibrinogen. Furthermore, the most distantly related ortholog, Tarenaya hassleriana, has three distinct isoforms of its own. Homologous domains C17orf53 falls within two distinct families: DUF4539, a domain of unknown function, and the PRR18 superfamily, which consists of a proline-rich family found in eukaryotes. The proline-rich family 18 domain as well as the domain of unknown function are conserved in all known orthologs of C17orf53. Protein General properties The molecular weight of C17orf53 is 69 kilodaltons. The isoelectric point is 5.85. The protein sequence of C17orf53 is rich in both proline and glutamine, while low in tyrosine. Aside from proline, glutamine, and tyrosine, there is a relatively even distribution of amino acids in the protein product of C17orf53. Additionally, the most distantly related orthologs display the most variance in amino acid composition. Post-translational modifications Phosphorylation: A large number of predicted phosphorylation sites could indicate that the protein is turned on and off by regulatory proteins. This is especially important in the proline-rich family as well as the domain of unknown function. 
Glycation: The protein product of C17orf53 also displays a decent amount of glycation throughout the protein product. This may indicate that this protein is the start to a pathway that leads to advanced glycation end products, which have been found to be implicated in many chronic diseases such as cardiovascular problems. This aligns with previous research indicating that there is high expression of this protein in the heart. o-glycosylation: As noted by the conceptual translation there are some o-glycosylation sites located around the proline rich family 18 domain. O-glycosylation indicates the attachment of a sugar molecule to an oxygen atom in the amino acid sequence. This is said to occur in the cytoplasm, a minor predicted localization for the C17orf53 protein product. NES Sites: Nuclear export signals are located in the protein product. This indicates that the protein is likely involved in the transport out of the cell nucleus and into the cytoplasm. Secondary structure The secondary structure of C17orf53 consists of all three structure types; Alpha helix, Beta sheet, and random coils, with the majority of its structure taking on a random coil form. Tertiary Structure Shown in the figure to the right is the predicted tertiary structure of protein C17orf53. This predicted tertiary structure has been found to be 92.7% similar to 3IXZ, also known as Pig gastric H+/K+-ATPase complexed with aluminium fluoride, which is an ATP proton pump involved in creating a proton gradient across the gastric membrane. Furthermore, the tertiary structure of C17orf53 has also been shown to be 90.6% similar to that of 3B8EC, a sodium potassium pump. These findings support the prediction that C17orf53 is a protein involved in transportation mechanisms. Subcellular localization The protein product of C17orf53 has been shown to target the nucleus, with minor localization in the cytoplasm. Interacting proteins Listed in the table below are interacting proteins of C17orf53, and their known functions in the human body. As noted by the table below and the visual representation of interacting proteins, similar to the post translational modifications and tertiary structure, C17orf53 is likely linked to pathways involved in the transfer out of the nucleus into the cytoplasm as indicated by Expo1 and TRIM33. References Proteins
C17orf53
[ "Chemistry" ]
1,304
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
56,429,585
https://en.wikipedia.org/wiki/Ethosome
Ethosomes are phospholipid nanovesicles used for dermal and transdermal delivery of molecules. Ethosomes were developed by Touitou et al. in 1997 as additional novel lipid carriers composed of ethanol, phospholipids, and water. They are reported to improve the skin delivery of various drugs. Ethanol is an efficient permeation enhancer that is believed to act by affecting the intercellular region of the stratum corneum. Ethosomes are soft, malleable vesicles composed mainly of phospholipids, ethanol (in relatively high concentration), and water. These soft vesicles represent novel vesicular carriers for enhanced delivery through the skin. The size of ethosome vesicles can be modulated from tens of nanometers to microns. Structure and composition Ethosomes are mainly composed of multiple, concentric layers of flexible phospholipid bilayers, with a relatively high concentration of ethanol (20-45%), glycols and water. Their overall structure has been confirmed by 31P-NMR, EM and DSC. They penetrate the horny layer of the skin readily, which enhances the permeation of encapsulated drugs. The mechanism of permeation enhancement is attributed to the overall properties of the system. Applications Because of their unique structure, ethosomes are able to efficiently encapsulate and deliver into the skin highly lipophilic molecules such as testosterone, cannabinoids and ibuprofen, as well as hydrophilic drugs such as clindamycin phosphate and buspirone hydrochloride. They have been studied for the transdermal and intradermal delivery of peptides, steroids, antibiotics, prostaglandins, antivirals and antipyretics. The components used to make ethosomes are already approved for pharmaceutical and cosmetic use, and the formulated vesicles are stable when stored. They can be incorporated into various pharmaceutical formulations such as gels, creams, emulsions and sprays. They are consequently being developed for pharmaceutical and cosmeceutical products. Ethosomal systems compare favourably with alternative carriers in terms of the quantity and depth of molecule delivery. References Membrane biology Drug delivery devices
Ethosome
[ "Chemistry" ]
487
[ "Pharmacology", "Membrane biology", "Drug delivery devices", "Molecular biology" ]
63,401,284
https://en.wikipedia.org/wiki/Rayleigh%20theorem%20for%20eigenvalues
In mathematics, the Rayleigh theorem for eigenvalues pertains to the behavior of the solutions of an eigenvalue equation as the number of basis functions employed in its resolution increases. Rayleigh, Lord Rayleigh, and 3rd Baron Rayleigh are the titles of John William Strutt, assumed after the death of his father, the 2nd Baron Rayleigh. Lord Rayleigh made contributions not just to both theoretical and experimental physics, but also to applied mathematics. The Rayleigh theorem for eigenvalues, as discussed below, enables the energy minimization that is required in many self-consistent calculations of electronic and related properties of materials, from atoms, molecules, and nanostructures to semiconductors, insulators, and metals. Except for metals, most of these other materials have an energy or a band gap, i.e., the difference between the lowest, unoccupied energy and the highest, occupied energy. For crystals, the energy spectrum is in bands, and one speaks of a band gap, if any, rather than an energy gap. Given the diverse contributions of Lord Rayleigh, his name is associated with other theorems, including Parseval's theorem. For this reason, keeping the full name of "Rayleigh theorem for eigenvalues" avoids confusion. Statement of the theorem The theorem, as indicated above, applies to the resolution of equations called eigenvalue equations, i.e., those of the form HΨ = λΨ, where H is an operator, Ψ is a function and λ is a number called the eigenvalue. To solve problems of this type, we expand the unknown function Ψ in terms of known functions. The number of these known functions is the size of the basis set. The expansion coefficients are also numbers. The number of known functions included in the expansion, the same as that of coefficients, is the dimension of the Hamiltonian matrix that will be generated. The statement of the theorem follows. Let an eigenvalue equation be solved by linearly expanding the unknown function in terms of N known functions. Let the resulting eigenvalues be ordered from the smallest (lowest), λ1, to the largest (highest), λN. Let the same eigenvalue equation be solved using a basis set of dimension N + 1 that comprises the previous N functions plus an additional one. Let the resulting eigenvalues be ordered from the smallest, λ′1, to the largest, λ′N+1. Then, the Rayleigh theorem for eigenvalues states that λ′i ≤ λi for i = 1, 2, ..., N. A subtle point about the above statement is that the smaller of the two sets of functions must be a subset of the larger one. The above inequality does not hold otherwise. Self-consistent calculations In quantum mechanics, where the operator H is the Hamiltonian, the lowest eigenvalues are occupied (by electrons) up to the applicable number of electrons; the remaining eigenvalues, not occupied by electrons, are empty energy levels. The energy content of the Hamiltonian is the sum of the occupied eigenvalues. The Rayleigh theorem for eigenvalues is extensively utilized in calculations of electronic and related properties of materials. The electronic energies of materials are obtained through calculations said to be self-consistent, as explained below. In density functional theory (DFT) calculations of electronic energies of materials, the eigenvalue equation, HΨ = λΨ, has a companion equation that gives the electronic charge density of the material in terms of the wave functions of the occupied energies. To be reliable, these calculations have to be self-consistent, as explained below. 
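The theorem stated above can be checked numerically. The short Python sketch below is not from the original article; it assumes an orthonormal basis, so the projected ("Ritz") eigenvalues are simply eigenvalues of a principal submatrix, and it uses a random symmetric matrix as a stand-in Hamiltonian. It shows that enlarging the basis by one function can only lower, or leave unchanged, each of the first N eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "Hamiltonian": any real symmetric (Hermitian) matrix will do.
dim = 12
A = rng.standard_normal((dim, dim))
H = (A + A.T) / 2.0

def ritz_values(H, n):
    """Eigenvalues of H projected onto the first n (orthonormal) basis vectors,
    i.e. the leading n x n principal submatrix."""
    return np.linalg.eigvalsh(H[:n, :n])

N = 6
lam = ritz_values(H, N)        # eigenvalues with N basis functions
lam_p = ritz_values(H, N + 1)  # eigenvalues with the augmented basis (N + 1 functions)

# Rayleigh theorem: each of the first N eigenvalues can only go down (or stay equal)
# when the basis set is enlarged by one function.
assert np.all(lam_p[:N] <= lam + 1e-12)
for i, (new, old) in enumerate(zip(lam_p, lam), start=1):
    print(f"lambda'_{i} = {new: .4f}  <=  lambda_{i} = {old: .4f}")
```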
The process of obtaining the electronic energies of material begins with the selection of an initial set of known functions (and related coefficients) in terms of which one expands the unknown function  Ѱ . Using the known functions for the occupied states, one constructs an initial charge density for the material. For density functional theory calculations, once the charge density is known, the potential, the Hamiltonian, and the eigenvalue equation are generated. Solving this equation leads to eigenvalues (occupied or unoccupied) and their corresponding wave functions (in terms of the known functions and new coefficients of expansion). Using only the new wave functions of the occupied energies, one repeats the cycle of constructing the charge density and of generating the potential and the Hamiltonian. Then, using all the new wave functions (for occupied and empty states), one regenerates the eigenvalue equation and solves it. Each one of these cycles is called an iteration. The calculations are complete when the difference between the potentials generated in Iteration n + 1 and the one immediately preceding it (i.e., n) is 10−5 or less. The iterations are then said to have converged and the outcomes of the last iteration are the self-consistent results that are reliable. The basis set conundrum of self-consistent calculations The characteristics and number of the known functions utilized in the expansion of Ѱ naturally have a bearing on the quality of the final, self-consistent results. The selection of atomic orbitals that include exponential or Gaussian functions, in additional to polynomial and angular features that apply, practically ensures the high quality of self-consistent results, except for the effects of the size and of attendant characteristics (features) of the basis set. These characteristics include the polynomial and angular functions that are inherent to the description of s, p, d, and f states for an atom. While the s functions are spherically symmetric, the others are not; they are often called polarization orbitals or functions. The conundrum is the following. Density functional theory is for the description of the ground state of materials, i.e., the state of lowest energy. The second theorem of DFT states that the energy functional for the Hamiltonian [i.e., the energy content of the Hamiltonian] reaches its minimum value (i.e., the ground state) if the charge density employed in the calculation is that of the ground state. We described above the selection of an initial basis set in order to perform self-consistent calculations. A priori, there is no known mechanism for selecting a single basis set so that, after self consistency, the charge density it generates is that of the ground state. Self consistency with a given basis set leads to the reliable energy content of the Hamiltonian for that basis set. As per the Rayleigh theorem for eigenvalues, upon augmenting that initial basis set, the ensuing self consistent calculations lead to an energy content of the Hamiltonian that is lower than or equal to that obtained with the initial basis set. We recall that the reliable, self-consistent energy content of the Hamiltonian obtained with a basis set, after self consistency, is relative to that basis set. A larger basis set that contains the first one generally leads self consistent eigenvalues that are lower than or equal to their corresponding values from the previous calculation. One may paraphrase the issue as follows. 
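As an illustration of the cycle just described, here is a deliberately simplified Python sketch (again, not from the article): the "potential" is an invented mean-field term built from the occupied-state density, with an arbitrary coupling strength and simple linear mixing, so it mimics the iterate-until-the-potential-changes-by-less-than-10⁻⁵ loop rather than any real DFT functional.

```python
import numpy as np

# Toy self-consistent cycle in a fixed orthonormal basis (illustrative only: the
# "potential" is an invented mean-field term, not a real exchange-correlation functional).
n_basis, n_occ = 8, 3
rng = np.random.default_rng(1)
A = rng.standard_normal((n_basis, n_basis))
H0 = (A + A.T) / 2.0          # fixed one-body part of the Hamiltonian
coupling = 0.5                # assumed interaction strength (arbitrary)

def density(eigvecs):
    """Charge density built from the wave functions of the occupied energies."""
    occ = eigvecs[:, :n_occ]
    return np.sum(occ**2, axis=1)

potential = np.zeros(n_basis)
for iteration in range(1, 201):
    H = H0 + coupling * np.diag(potential)          # regenerate the Hamiltonian
    eigvals, eigvecs = np.linalg.eigh(H)            # solve the eigenvalue equation
    new_potential = density(eigvecs)                # charge density -> new potential
    mixed = 0.5 * potential + 0.5 * new_potential   # simple mixing for stability
    change = np.max(np.abs(mixed - potential))
    potential = mixed
    if change <= 1e-5:                              # convergence threshold from the text
        break

print(f"converged after {iteration} iterations")
print("occupied energies:", np.round(eigvals[:n_occ], 5))
```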
Several basis sets of different sizes, upon the attainment of self-consistency, lead to stationary (converged) solutions. There exists an infinite number of such stationary solutions. The conundrum stems from the fact that, a priori, one has no means to determine the basis set, if any, after self consistency, leads to the ground state charge density of the material, and, according to the second DFT theorem, to the ground state energy of the material under study. Resolution of the basis set conundrum with the Rayleigh theorem for eigenvalues Let us first recall that a self-consistent density functional theory calculation, with a single basis set, produces a stationary solution which cannot be claimed to be that of the ground state. To find the DFT ground state of a material, one has to vary the basis set (in size and attendant features) in order to minimize the energy content of the Hamiltonian, while keeping the number of particles constant. Hohenberg and Kohn, specifically stated that the energy content of the Hamiltonian "has a minimum at the 'correct' ground state Ψ, relative to arbitrary variations of Ψ in which the total number of particles is kept constant." Hence, the trial basis set is to be varied in order to minimize the energy. The Rayleigh theorem for eigenvalues shows how to perform such a minimization with successive augmentation of the basis set. The first trial basis set has to be a small one that accounts for all the electrons in the system. After performing a self consistent calculation (following many iterations) with this initial basis set, one augments it with one atomic orbital . Depending on the s, p, d, or f character of this orbital, the size of the new basis set (and the dimension of the Hamiltonian matrix) will be larger than that of the initial one by 2, 6, 10, or 14, respectively, taking the spin into account. Given that the initial, trial basis set was deliberately selected to be small, the resulting self consistent results cannot be assumed to describe the ground state of the material. Upon performing self-consistent calculations with the augmented basis set, one compares the occupied energies from Calculations I and II, after setting the Fermi level to zero. Invariably, the occupied energies from Calculation II are lower than or equal to their corresponding values from Calculation I. Naturally, one cannot affirm that the results from Calculation II describe the ground state of the material, given the absence of any proof that the occupied energies cannot be lowered further. Hence, one continues the process of augmenting the basis set with one orbital and of performing the next self-consistent calculation. The process is complete when three consecutive calculations yield the same occupied energies. One can affirm that the occupied energies from these three calculations represent the ground state of the material. Indeed, while two consecutive calculations can produce the same occupied energies, these energies may be for a local minimum energy content of the Hamiltonian as opposed to the absolute minimum. To have three consecutive calculations produce the same occupied energies is the robust criterion for the attainment of the ground state of a material (i.e., the state where the occupied energies have their absolute minimal values). 
This paragraph described how successive augmentation of the basis set solves one aspect of the conundrum, i.e., a generalized minimization of the energy content of the Hamiltonian to reach the ground state of the system under study. Even though the paragraph above shows how the Rayleigh theorem enables the generalized minimization of the energy content of the Hamiltonian, to reach the ground state, we are still left with the fact that three different calculations produced this ground state. Let the respective numbers of these calculations be N, (N+1), and (N+2). While the occupied energies from these calculations are the same (i.e., the ground state), the unoccupied energies are not identical. Indeed, the general trend is that the unoccupied energies from the calculations are in the reverse order of the sizes of the basis sets for these calculations. In other words, for a given unoccupied eigenvalue (say the lowest one of the unoccupied energies), the result from Calculation (N+2) is smaller than or equal to that from Calculation (N+1). The latter, in turn, is smaller than or equal to the result from Calculation N. In the case of semiconductors, the lowest-lying unoccupied energies from the three calculations are generally the same, up to 6 to 10 eV or above, depending on the material, if the sizes of the basis sets of the three calculations are not vastly different. Still, for higher, unoccupied energies, the Rayleigh theorem for eigenvalues applies. This paragraph poses the question as to which one of the three, consecutive, self-consistent calculations leading to the ground state energy provides the true DFT description of the material – given the differences between some of their unoccupied energies. There are two distinct ways of determining the calculation providing the DFT description of the material. The first one starts by recalling that self-consistency requires the performance of iterations to obtain the reliable energy; the number of iterations may vary with the size of the basis set. With the generalized minimization made possible by the Rayleigh theorem, with successively augmented size and attendant features (i.e., polynomial and angular ones) of the basis set, the Hamiltonian changes from one calculation to the next, up to Calculation N. Calculations N + 1 and N + 2 reproduce the result from Calculation N for the occupied energies. The charge density changes from one calculation to the next, up to Calculation N. Afterwards, it does not change in Calculations N + 1 and N + 2 or higher, nor does the Hamiltonian change from its value in Calculation N. When the Hamiltonian does not change, a change in an unoccupied eigenvalue cannot be due to a physical interaction. Therefore, any change of an unoccupied eigenvalue, from its value in Calculation N, is an artifact of the Rayleigh theorem for eigenvalues. Calculation N is therefore the only one that provides the DFT description of the material. The second way of determining the calculation that provides the DFT description of the material follows. The first DFT theorem states that the external potential is a unique functional of the charge density, except for an additive constant. The first corollary of this theorem is that the energy content of the Hamiltonian is also a unique functional of the charge density. The second corollary to the first DFT theorem is that the spectrum of the Hamiltonian is a unique functional of the charge density. 
Consequently, given that the charge density and the Hamiltonian do not change from their respective values in Calculation N following an augmentation of the basis set, any unoccupied eigenvalue obtained in Calculations N + 1, N + 2, or higher that is different from (lower than) its corresponding value in Calculation N no longer belongs to the physically meaningful spectrum of the Hamiltonian, a unique functional of the charge density, given by the output of Calculation N. Hence, Calculation N is the one whose outputs possess the full physical content of DFT; this Calculation N provides the DFT solution. The value of the above determination of the physically meaningful calculation is that it avoids the consideration of basis sets that are larger than that of Calculation N and are therefore over-complete for the description of the ground state of the material. In the current literature, the only calculations that have reproduced or predicted the correct electronic properties of semiconductors have been the ones that (1) searched for and reached the true ground state of materials and (2) avoided the utilization of over-complete basis sets as described above. These accurate DFT calculations did not invoke the self-interaction correction (SIC) or the derivative discontinuity employed extensively in the literature to explain the woeful underestimation of the band gaps of semiconductors and insulators. In light of the content of the two points above, an alternative, plausible explanation of the energy and band gap underestimation in the literature is the use of over-complete basis sets that lead to an unphysical lowering of some unoccupied energies, including some of the lowest-lying ones. References Linear algebra Mathematical physics
Rayleigh theorem for eigenvalues
[ "Physics", "Mathematics" ]
3,181
[ "Applied mathematics", "Theoretical physics", "Linear algebra", "Mathematical physics", "Algebra" ]
73,517,901
https://en.wikipedia.org/wiki/Polyesteramide
Polyesteramides are a class of synthetic polymers connected by ester and amide bonds. Types Common polyesteramides can be separated into two different types. Nylon-type According to Rainer Höfer, nylon-type polyesteramides can be synthesized through the polymerisation of caprolactam or caprolactone, or through polycondensation of synthetic alcohols like 1,4-butanediol. Nylon-type polyesteramides have been investigated for their use in drug delivery systems and smart materials. Oil-based Höfer described oil-based polyesteramides as "products of a fatty acid alkanolamide with a dicarboxylic acid (anhydride) such as terephthalic acid or phthalic acid anhydride". These polyesteramides are often manufactured from regional vegetable oils including neem oil. See also Polyester Polyamide Self-healing material References Polymers
Polyesteramide
[ "Chemistry", "Materials_science" ]
197
[ "Polymers", "Polymer chemistry" ]
73,521,325
https://en.wikipedia.org/wiki/Wave%20overtopping
Wave overtopping is the time-averaged amount of water that is discharged (in liters per second) per structure length (in meters) by waves over a structure such as a breakwater, revetment or dike which has a crest height above still water level. When waves break over a dike, it causes water to flow onto the land behind it. Excessive overtopping is undesirable because it can compromise the integrity of the structure or result in a safety hazard, particularly when the structure is in an area where people, infrastructure or vehicles are present, such as in the case of a dike fronting an esplanade or densely populated area. Wave overtopping typically transpires during extreme weather events, such as intense storms, which often elevate water levels beyond average due to wind setup. These effects may be further intensified when the storm coincides with a high spring tide. Excessive overtopping may cause damage to the inner slope of the dike, potentially leading to failure and inundation of the land behind the dike, or create water-related issues on the inside of the dike due to excess water pressure and inadequate drainage. The process is highly stochastic, and the amount of overtopping depends on factors including the freeboard, wave height, wave period, the geometry of the structure, and slope of the dike. Overtopping factors and influences Overtopping can transpire through various combinations of water levels and wave heights, wherein a low water level accompanied by high waves may yield an equivalent overtopping outcome to that of a higher water level with lower waves. This phenomenon is inconsequential when water levels and wave heights exhibit correlation; however, it poses difficulties in river systems where these factors are uncorrelated. In such instances, a probabilistic calculation is necessary. The freeboard is the height of the dike's crest above the still water level, which usually corresponds to the determining storm surge level or river water level. Overtopping is typically expressed in litres per second per metre of dike length (L/s/m), as an average value. Overtopping follows the cyclical nature of waves, resulting in a large amount of water flowing over a structure, followed by a period with no water. The official website of the EurOtop Manual, which is widely used in the design of coastal engineering structures, features a number of visualisations of wave overtopping. In the case of overtopping at rubble-mound breakwaters, recent research using numerical models indicates that overtopping is strongly dependent on the slope angle. Since present design guidelines for non-breaking waves do not include the effect of the slope angle, modified guidelines have also been proposed. Whilst these observed slope effects are too large to be ignored, they still need to be verified by tests using physical models. Overtopping behaviour is also influenced by the geometry and layout of different coastal structures. For example, seawalls (which are typically vertical, or near-vertical, as opposed to sloping breakwaters or revetments), are often situated behind natural beaches. Scour at the base of these structures during storms can have a direct impact on wave energy dissipation along their frontage, thus influencing wave overtopping. This phenomenon assumes critical importance when storms occur in such quick succession that the beach doesn't have sufficient time for sediments removed by the storm to be re-established. 
Experimental results show that, for near-vertical structures at the back of a beach, there is an increase in wave overtopping volume for a storm that starts from an eroded beach configuration, rather than a simple slope. Calculation of overtopping Wave overtopping predominantly depends on the respective heights of individual waves compared to the crest level of the coastal structure involved. This overtopping doesn't occur continuously; rather, it's a sporadic event that takes place when particularly high waves within a storm impact the structure. The extent of wave overtopping is quantified by the volume of water that overflows onto the adjacent land. This can be measured either as the volume of water per wave for each unit length of the seawall, or as the average rate of overtopped water volume per unit length during the storm wave period. Much research into overtopping has been carried out, ranging from laboratory experiments to full-scale testing and the use of simulators. In 1971, Jurjen Battjes developed a theoretically accurate equation for determining the average overtopping. However, the formula's complexity, involving error functions, has limited its widespread adoption in practical applications. Consequently, an alternative empirical relationship has been established: in which is the dimensionless overtopping, and is the dimensionless freeboard: in which: is the water depth is the freeboard is the overtopping discharge (in m³/s) is the significant wave height at the toe of the structure is the deep water wavelength is the inclination of the slope (of e.g. the breakwater or revetment) is the Iribarren number is a resistance term. The values of and depend on the type of breaking wave, as shown in the table below: {|class="wikitable" style="float:left;" ! Type of wave || Value for || Value for |- |breaking (plunging)|| 0.067 || 4.3 |- |non-breaking (surging)|| 0.2 || 2.3 |} The resistance term has a value between approximately 0.5 (for two layers of loosely dumped armourstone) and 1.0 (for a smooth slope). The effect of a berm and obliquely incident waves is also taken into account through the resistance term. This is determined in the same way as when calculating wave run-up. Special revetment blocks that reduce wave run-up (e.g., Hillblock, Quattroblock) also reduce wave overtopping. Since the governing overtopping is the boundary condition, this means that the use of such elements allows for a slightly lower flood barrier. Research for the EurOtop manual has provided much additional data, and based on this, the formula has been slightly modified to: with a maximum of: It turns out that this formula is also a perfect rational approximation of the original Battjes formula. In certain applications, it may also be necessary to calculate individual overtopping quantities, i.e. the overtopping per wave. The volumes of individual overtopping waves are Weibull distributed. The overtopping volume per wave for a given probability of exceedance is given by: in which is the probability of exceedance of the calculated volume, is the probability of overtopping waves, and is the crest height. Calculation and measurement of overtopping at rock revetment crests In terms of revetments, the overtopping discussed in the EurOtop manual refers to the overtopping measured at the seaward edge of the revetment crest. The formulas above describe the wave overtopping occurring at the sea-side edge of the crest. 
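The formulas in the paragraph above lost their symbols in extraction. As a hedged illustration, the Python sketch below implements the widely cited TAW/EurOtop-type mean-overtopping formulation that uses the coefficient pairs from the table above, 0.067 and 4.3 for breaking (plunging) waves and 0.2 and 2.3 for non-breaking (surging) waves; the function name, the single combined reduction factor, and the example input values are assumptions for illustration, not values from the original text.

```python
import math

def mean_overtopping(Hm0, Rc, slope, s0, gamma=1.0, g=9.81):
    """Mean wave overtopping discharge q (m^3/s per m) for a sloping structure,
    following a commonly used TAW/EurOtop-type formulation with the coefficients
    (0.067, 4.3) for breaking waves and (0.2, 2.3) for non-breaking waves.

    Hm0   : significant wave height at the toe (m)
    Rc    : crest freeboard (m)
    slope : structure slope tan(alpha)
    s0    : deep-water wave steepness Hm0 / L0
    gamma : combined reduction (resistance) factor for roughness, berms, obliquity
    """
    xi = slope / math.sqrt(s0)  # Iribarren (breaker) parameter

    # Breaking (plunging) waves:
    q_breaking = (0.067 / math.sqrt(slope)) * gamma * xi * math.exp(
        -4.3 * Rc / (xi * Hm0 * gamma)
    )
    # Upper bound for non-breaking (surging) waves:
    q_max = 0.2 * math.exp(-2.3 * Rc / (Hm0 * gamma))

    return min(q_breaking, q_max) * math.sqrt(g * Hm0**3)

# Illustrative (made-up) input: 2 m waves, 3 m freeboard, 1:4 smooth slope, s0 = 0.04.
q = mean_overtopping(Hm0=2.0, Rc=3.0, slope=1/4, s0=0.04, gamma=1.0)
print(f"q ≈ {q*1000:.1f} L/s per metre")
```

With the illustrative inputs shown, the formula returns a discharge on the order of 10 L/s per metre, the range discussed in the tolerable-overtopping guidance below.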
In scenarios where the crest is impermeable (for example, a road surface or a clay layer), the volume of water overtopping the inland side of the crest would roughly equal that on the seaside. However, in the case of a rock armour breakwater with a more permeable crest, a large part of the overtopping water will seep into the crest, thus providing less overtopping on the inside of it. To analyse this effect, reduction coefficient can be used. This factor can be multiplied by 0.5 for a standard crest, with a width of about three rocks. This can result in a significant reduction in overtopping, and thus in the required crest height. If, behind the crest at a lower level, a permeable rock armour layer is installed with width , the amount of overtopping on the landside of this layer decreases still further. In that case, the reduction term (not to be confused with the reduction co-efficient ) can be multiplied by , in which is the crest width. Berm breakwaters The circumstances surrounding overtopping at berm-type breakwaters differ slightly from those of dikes. Minor wave overtopping may occur as splashes from waves striking individual rocks. However, significant overtopping typically results in a horizontal flow across the crest, similar to what happens with dikes. The primary distinction lies in the wave heights used for designing these structures. Dikes rarely face wave heights exceeding 3 metres, while berm breakwaters are often designed to withstand wave heights of around 5 metres. This difference impacts the overtopping behaviour when dealing with smaller overtopping discharges. Tolerable overtopping An understanding wave overtopping involves a combination of empirical data, physical modelling, and numerical simulations to predict and mitigate its impacts on coastal structures and safety. Traditionally, permissible average overtopping discharge has been utilised as a standard for designing coastal structures. It is necessary to restrict the average overtopping discharge to guarantee both the structural integrity of the structure, as well as the protection of individuals, vehicles, and properties situated behind it. Design handbooks often stipulate the thresholds for the maximum individual overtopping volumes, necessitating the examination of wave overtopping on a wave-per-wave basis. Often, to ensure a more dependable level of safety for pedestrians and vehicles, or to evaluate the stability of the inner slope of a revetment, it is necessary to consider the peak velocity and thickness of the overtopping flow. The tolerable overtopping is the overtopping which the design accepts may occur during a design storm condition. It is dependent on a number of factors including the intended use of the dike or coastal structure, and the quality of the revetment. Tolerable overtopping volumes are site-specific and depend on various factors, including the size and usage of the receiving area, the dimensions and capacity of drainage ditches, damage versus inundation curves, and return period. For coastal defences safeguarding the lives and well-being of residents, workers, and recreational users, designers and overseeing authorities must also address the direct hazards posed by overtopping. This necessitates evaluating the level of hazard and its likelihood of occurrence, thereby enabling the development of suitable action plans to mitigate risks associated with overtopping events. 
For rubble mound breakwaters (e.g., in harbour breakwaters) and a significant wave height greater than 5m on the outside, a heavy rubble mound revetment on the inside is required for overtopping of 10-30 L/s per metre. For overtopping of 5-20 L/s per metre, there is a high risk of damage to the crest. {| class="wikitable" style="float:left;" ! Situation on the slope || (L/s per metre) |- |Quarry stone in waves > 5m, and some damage|| 1 |- |Quarry stone in waves > 5m, and some damage(*)|| 5 - 10 |- |Good grass cover between 1m and 3m|| 5 |- |Poor grass cover between 0.5m and 3m|| 0.1 |- |Poor grass cover < 1m|| 5 - 10 |- |Poor grass cover < 0.3 m|| Unlimited |- | colspan="2" |(*)and inner slope designed for overtopping |} For regular grass, an average overtopping of 5 L/s per metre of dike is considered permissible. For very good grass cover, without special elements or street furniture such as stairs, sign poles, or fences, 10 L/s per metre is allowed. Overtopping tests with a wave overtopping simulator have shown that for an undamaged grass cover, without special elements, 50L/s per metre often causes no damage. The problem is not so much the strength of the grass cover, but the presence of other elements such as gates, stairs and fences. It should be considered that, for example, 5 L/s per metre can occur due to high waves and a high freeboard, or low waves with a low freeboard. In the first case, there are not many overtopping waves, but when one overtops, it creates a high flow velocity on the inner slope. In the second case, there are many overtopping waves, but they create relatively low flow velocities. As a result, the requirements for overtopping over river dikes are different from those for sea dikes. A good sea dike with a continuous grass cover can easily handle 10 L/s per metre without problems, assuming good drainage is provided at the foot of the inner slope. Without adequate drainage, the amount of water that could potentially enter properties at the foot of the inner slope would be unacceptable, which is why such dikes are designed for a lower overtopping amount. Since it has been found that a grass cover does not fail due to the average overtopping, but rather due to the frequent occurrence of high flow velocities, coastal authorities such as Rijkswaterstaat in the Netherlands have decided (since 2015) to no longer test grass slopes on the inner side of the dike for average overtopping discharge, but rather for the frequency of high flow velocities during overtopping. Research has shown that grass roots can contribute to improving the shear strength of soil used in dike construction, providing that the grass is properly maintained. Developing a grass cover takes time and requires a suitable substrate, such as lean and reasonably compacted clay. Firmly compacted clay soil is initially unsuitable for colonisation by grass plants. However, after a frost or winter period, the top layer of such a compacted clay layer is sufficiently open for the establishment of grass. To function properly, grass cover formation must begin well before winter. Research in The Netherlands has found that dikes with a well-compacted and flat clay lining can withstand a limited wave height or limited wave overtopping, such as in the majority of river areas, during the first winter after construction even without a grass cover, for many days without significant damage. 
If the wave load in the river area is higher, no damage that threatens safety will occur if the clay lining is thick enough (0.8 metres or more) and adequately compacted throughout its entire thickness. An immature grass cover can be temporarily protected against hydraulic loads with stapled geotextile mats. {| class="wikitable" ! Category || Significant wave height (m) || Overtopping (L/s per metre) |- |Pedestrians with a view of the sea|| 3 || 0.3 |- |Pedestrians with a view of the sea|| 2 || 1 |- |Pedestrians with a view of the sea|| 1 || 10–20 |- |Pedestrians with a view of the sea|| <0.5 || Unlimited |- |Cars, trains|| 3 || <5 |- |Cars, trains|| 2 || 10–20 |- |Cars, trains|| 1 || <75 |} For damage to ships in harbours or marinas, the following figures can be used: {| class="wikitable" ! Category || Significant wave height (m) || Overtopping (L/s per metre) |- |Sinking of large ships|| >5 || >10 |- |Sinking of large ships|| 3 - 5 || >20 |- |Damage to small ships|| 3 - 5 || >5 |- |Safe for large ships|| > 5 || <5 |- |Safe for small ships|| 3 - 5 || <1 |- |Damage to buildings|| 1 - 3 || <1 |- |Damage to equipment and street furniture|| || <1 |} These values provide guidance on the expected impact of overtopping on ships in marinas or harbours, and on nearby buildings and other infrastructure, depending on the significant wave height and overtopping rate (in L/s per metre). This information then helps to inform the appropriate design, the required protection measures, and response plans for different scenarios. Wave transmission When there is water on both sides of a barrier (such as in the case of a harbour dam, breakwater or closure dam), wave overtopping over the dam will also generate waves on the other side of the dam. This is called wave transmission. To determine the amount of wave transmission, it is not necessary to determine the amount of overtopping. The transmission depends only on the wave height on the outer side, the freeboard, and the roughness of the slope. For a smooth slope, the transmission coefficient (the ratio between the wave height on the inside of the dam and that of the incoming wave) is given by an empirical formula in which ξ0p is the Iribarren number based on the peak period of the waves and β is the angle of incidence of the waves. Overtopping simulation In order to assess the safety and resilience of dikes, as well as the robustness of the grass lining on their crests and landward slopes, a wave overtopping simulator can be employed. The most onerous wave conditions for which a dike is designed occur relatively rarely, so using a wave overtopping simulator enables in-situ replication of anticipated conditions on the dike itself. This allows the responsible organisation overseeing the structure to evaluate its capacity to withstand predicted wave overtopping during specific extreme scenarios. During these tests, the wave overtopping simulator is positioned on the dike's crest and continuously filled with water. The device features valves at its base that can be opened to release varying volumes of water, thereby simulating a wide range of wave overtopping events. This approach helps ensure that the dike's integrity is accurately and effectively assessed. In the case of dikes with grass slopes, another test method is to use a sod puller to determine the tensile strength of the sod, which can then be translated into strength under the load caused by wave overtopping. In addition to simulating wave overtopping, the simulation of wave impacts and wave run-up is possible with a specially developed generator and simulator. 
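The empirical transmission relation itself is not reproduced in the text above. A commonly cited form for smooth low-crested slopes, assumed here to be of the d'Angremond/Van der Meer type rather than taken from this article, combines the relative freeboard Rc/Hs with the Iribarren number ξ0p and the angle of incidence β. The Python sketch below shows how such a relation can be evaluated; the coefficients and the 0.075–0.8 bounds are assumptions drawn from that general literature.

```python
import math

def transmission_coefficient(hs, rc, xi_0p, beta_deg):
    """Estimate the wave transmission coefficient Kt for a smooth low-crested slope.

    hs       : incident significant wave height on the outer side (m)
    rc       : crest freeboard, i.e. crest level minus still-water level (m)
    xi_0p    : Iribarren (surf-similarity) number based on the peak period
    beta_deg : angle of wave incidence in degrees (0 = waves approaching head-on)
    """
    beta = math.radians(beta_deg)
    kt = (-0.3 * rc / hs + 0.75 * (1.0 - math.exp(-0.5 * xi_0p))) * math.cos(beta) ** (2.0 / 3.0)
    # Empirical relations of this type are usually bounded to a physically plausible range.
    return min(max(kt, 0.075), 0.8)

# Example: 2 m incident waves, 1 m freeboard, xi_0p = 2.5, waves approaching at 20 degrees.
print(round(transmission_coefficient(hs=2.0, rc=1.0, xi_0p=2.5, beta_deg=20.0), 2))
```

A transmission coefficient of roughly 0.4, as in this example, means the waves behind the dam are roughly 40% as high as the incoming waves.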
See also Coastal Engineering References External links EurOtop Manual on wave overtopping of sea defences and related structures An overtopping manual largely based on European research, but for worldwide application: Second Edition 2018 Coastal engineering Civil engineering Hydraulic engineering
Wave overtopping
[ "Physics", "Engineering", "Environmental_science" ]
3,862
[ "Hydrology", "Coastal engineering", "Physical systems", "Construction", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
67,718,862
https://en.wikipedia.org/wiki/Urbach%20tail
In the solid-state physics of semiconductors, the Urbach tail is an exponential part of the energy dependence of the absorption coefficient. This tail appears near the optical band edge in amorphous, disordered and crystalline materials. History Researchers began questioning the nature of "tail states" in disordered semiconductors in the 1950s. It was found that such tails arise from strains sufficient to push localized states past the band edges. In 1953, the Austrian-American physicist Franz Urbach (1902–1969) found that such tails decay exponentially into the gap. Later, photoemission experiments informed absorption models that revealed the temperature dependence of the tail. A variety of amorphous and crystalline solids exhibit exponential band edges in optical absorption. The universality of this feature suggested a common cause. Several attempts were made to explain the phenomenon, but these could not connect specific topological units to the electronic structure. See also Tauc plot References Crystallography
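The exponential tail is commonly written in the parametrised form α(E) = α0·exp((E − E0)/EU), where EU, the Urbach energy, sets how quickly absorption decays into the gap. This standard form and the numerical values below are assumptions used purely for illustration rather than material taken from the article; the sketch evaluates the relation and recovers EU from the slope of ln α versus photon energy.

```python
import numpy as np

def urbach_absorption(photon_energy_ev, alpha0, e0_ev, urbach_energy_ev):
    """Absorption coefficient in the exponential (Urbach) tail below the band edge.

    Assumes the common parametrisation alpha(E) = alpha0 * exp((E - E0) / E_U),
    where E_U (the Urbach energy) sets how quickly absorption decays into the gap.
    """
    return alpha0 * np.exp((photon_energy_ev - e0_ev) / urbach_energy_ev)

# Illustrative numbers only: a 50 meV Urbach energy and a 1.6 eV reference edge.
energies = np.linspace(1.2, 1.6, 5)
alpha = urbach_absorption(energies, alpha0=1e4, e0_ev=1.6, urbach_energy_ev=0.05)

# The Urbach energy can be recovered from the inverse slope of ln(alpha) vs E.
slope = np.polyfit(energies, np.log(alpha), 1)[0]
print(f"Recovered Urbach energy: {1.0 / slope * 1000:.1f} meV")
```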
Urbach tail
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
189
[ "Crystallography", "Condensed matter physics", "Materials science" ]
67,720,106
https://en.wikipedia.org/wiki/Bacteroides%20thetaiotaomicron%20sRNA
The Bacteroides thetaiotaomicron genome contains hundreds of small RNAs (sRNAs), discovered through RNA sequencing. These include canonical housekeeping RNA species such as the 6S RNA (SsrS), tmRNA (SsrA), M1 RNA (RnpB) and 4.5S RNA (Ffs), as well as several hundred cis- and trans-encoded small RNAs. More than 20 candidates have been validated with northern blots, and the structures of several members have been characterized through in silico analyses and chemical probing experiments. Two B. thetaiotaomicron sRNAs that have been functionally characterized are RteR and GibS. RteR is a 78 nucleotide (nt) long sRNA that is conserved in closely related species and likely serves as a repressor of a transposon operon. Analyses based on secondary structure conservation, taking nucleotide covariation into consideration, together with in vitro chemical probing, have revealed a structure that consists of a 5’ hairpin and a Rho-independent terminator separated by an 8 nt sequence. GibS is a 145 nt long sRNA that is also conserved in several closely related species within the phylum Bacteroidota and has been hypothesized to play a role in carbon metabolism. Structural analyses have revealed this sRNA to possess an extended 5’ single-stranded region (38 nt) followed by two meta-stable hairpins and a Rho-independent terminator at the 3’ end. It is maximally expressed when B. thetaiotaomicron is grown with N-acetyl-D-glucosamine as the sole carbon source and has been shown to both induce and repress target mRNAs involved in metabolic regulation. The B. thetaiotaomicron genome also contains a large subset of antisense sRNAs that bear resemblance to the B. fragilis DonS RNA. The members of this family of 78 to 128 nt long sRNAs are encoded antisense to several of their target genes, which are members of PULs (Polysaccharide Utilization Loci). See also Bacillus subtilis sRNAs Bacterial small RNA Brucella sRNA Caenorhabditis elegans sRNA Escherichia coli sRNA Pseudomonas sRNA References RNA Bacteroidia Genomics RNA sequencing Nucleic acids
Bacteroides thetaiotaomicron sRNA
[ "Chemistry", "Biology" ]
489
[ "Genetics techniques", "Biomolecules by chemical classification", "RNA sequencing", "Molecular biology techniques", "Nucleic acids" ]
54,971,608
https://en.wikipedia.org/wiki/Theseus1
Theseus1 (THE1) is a transmembrane receptor-like kinase (RLK) that is found in plant cells. It was originally discovered in Arabidopsis thaliana as part of a family of 17 related proteins, commonly referred to as the Theseus1/Feronia family or the CrRLK family. So far, THE1 and 5 other members in the same family of RLKs have been found to play key roles in cell elongation during vegetative growth through interacting mostly with the cell wall. Though the exact mechanism for this process is still unknown, it is thought to be very similar to, and even partially regulated by, the brassinosteroid pathway. In addition, Theseus1 has the ability to detect changes in cell wall integrity and could possibly even recognize pathogenic sequences. While the workings of THE1 and other members of the CrRLK family are understood on a general level, research of the specific interactions between them has yet to be published. Discovery Theseus1 was discovered, along with the other members in its family of RLKs, while researchers were attempting to describe a pathway of monitoring cell-wall stability in plant cells. It was first characterised from its interaction with a Procuste1 mutant (prc1-1). This Procuste1 mutant produces less cellulose because of alterations to the cellulose synthase site, resulting in drastically decreased cell wall elongation. When THE1 was also mutated in the presence of the prc1-1 mutant, the rate of cell elongation was increased to half-way between the normal growth and the prc1-1 only growth rate. Because of this interaction, it was named after Theseus, the mythological founder of Athens and killer of Procrustes. Theseus 1 was originally found in, and is still most commonly obtained from, Arabidopsis thaliana. Other members of the same RLK family are named according to other mythological figures, such as Feronia, Anxur, and Hercules. Structure Theseus1 is an 855 residue, type I transmembrane protein that has an extracellular N-terminus and an intracellular C-terminus. The serine/threonine kinase domain that is typical of RLKs is present on the intracellular C-terminal along with an adjacent binding site for ATP. There are also two internal phosphorylation sites that could possibly act as molecular switching sites for THE1 activation/suppression. The N-terminal contains a roughly 19 residue long dissociable sequence that is thought to be used for signaling about an issue in the cell wall. Additionally, there are a few regions on the extracellular N-terminus of Theseus1 that closely resemble the structure of ML domains in other proteins, suggesting that it could have an additional function of monitoring pathogenic response in the cell wall. Activity Theseus1 is normally expressed in all cells, with increased expression in tissues that are expanding. The enzymatic activity of THE1 can be described on its own, but most of its actions happen in coordination with other members of the CrRLK family, most of which have yet to be described let alone given a proven mechanism. Sensing Changes in Cell Wall and Pathogen Response Theseus1 is commonly believed to be able to detect changes in the cell wall and respond to perturbations. This thought has been applied to a few different scenarios. First, it has been considered that THE1 detects fragments of cell walls, and then signals for the inhibition of cell elongation. Another proposition is that THE1 responds to changes in the cell wall composition before signalling for the inhibition of cell elongation. 
Both fragmentation and altered composition of the cell wall are commonly due to the presence of pathogens. The final commonly supported idea is that THE1 could directly identify the presence of pathogens themselves through the use of its ML domain-like regions. All of these ideas support the theory that THE1 is part of the cell's response to pathogenic activity. Also, THE1 has been shown to upregulate the same genes that are regulated by pattern recognition receptors (PRRs) and code for defense-related proteins, which further suggests that THE1 plays a role in pathogen response. Vital for Cellular Elongation Theseus1 and other members of the CrRLK family are important to cellular elongation. In particular, Theseus1 and Hercules1 (HERK1) have been shown to perform similar roles in the process of elongation. Arabidopsis thaliana plants with a loss-of-function mutation in only one of these proteins will maintain a growth rate similar to the wild-type plant. However, if both proteins are mutated, the plant displays a greatly inhibited growth rate. Though the specific mechanism that causes this is unknown, it can be seen that these proteins are redundant but necessary for regular vegetative growth. Additionally, both THE1 and HERK1 function in coordination with the brassinosteroid pathway, with a slight regulatory overlap between the two. Regulation by Rate of Cellulose Synthesis Theseus1 has been shown to regulate cell elongation in response to decreased cellulose synthesis. The most commonly described example of this is in coordination with the procuste1 mutant for decreased cellulose synthase activity (prc1-1). When prc1-1 is present, THE1 greatly inhibits cellular elongation; however, when a less-functional mutant of theseus1 was used in combination with prc1-1, the growth rate increased to somewhere between the natural growth rate and the THE1-repressed growth rate. This shows that while a cell is able to expand further with decreased cellulose levels, THE1 represses elongation because of the change in the rate of cellulose production. This is also thought to be another method of pathogenic response, as some pathogens inhibit cellulose production. References Transmembrane receptors Arabidopsis thaliana Plant hormones
Theseus1
[ "Chemistry" ]
1,233
[ "Transmembrane receptors", "Signal transduction" ]
54,972,969
https://en.wikipedia.org/wiki/Vasudeva%20Krishnamurthy
Vasudeva Krishnamurthy (1921–2014), nicknamed Prof. V.K., was an Indian algologist. He established the Krishnamurthy Institute of Algology at Chennai to promote the study of algology. Krishnamurthy, son of Sanskrit professor R. Vasudeva Sharma, was born on 14 August 1921 at Valavanur, Viluppuram district. He died in Tamil Nadu on 9 May 2014. Education Krishnamurthy acquired a B.A. (1940, at St. Joseph's College, affiliated to Madras University) and a B.Sc. (Hons.) degree (1942, Presidency College, Madras University) at the University of Madras. The University of Madras awarded him the Pulny Gold Medal. He also gained an M.A. (1947, University of Madras), an M.Sc. (1952, Presidency College, Madras University) and a Ph.D. (1957, University of Manchester, England). At the age of 21, Krishnamurthy became a research scholar at the botany laboratory of the University of Madras and worked under the father of Indian algology, Prof. M.O.P. Iyengar. Career Krishnamurthy was a reader in botany from 1943 to 1960. In 1960 he joined Thanjavur Medical College as a professor of biology, and served as professor of microbiology and bacteriology in the Department of Public Health Engineering in the College of Engineering. In 1961 he joined the Central Salts and Marine Chemicals Research Institute (CSIR), Bhavnagar, Gujarat, as a scientist. After working in CSIR's laboratories for a decade (1961–1971), he returned to Tamil Nadu and served as professor of botany in various colleges. When he retired in 1979, he was principal at Arignar Anna Government Arts College for Men, Namakkal. Contribution to algology He started the Seaweed Research and Utilization Association to encourage research on seaweed in India. On behalf of this association he published the journal Seaweed Research Utilization. He was the founding president of the association and served as president until he died. He founded the Krishnamurthy Institute of Algology (KIA), Chennai, India. KIA has the largest library in Tamil Nadu on algal studies, and it is fully equipped for algal research. Prof. V. K. worked in this laboratory until his death. Some publications Krishnamurthy, V. and M. Baluswami 1982 Some species of Ectocarpaceae new to India. Seaw. Res. & Util. 5(2):102–112. Krishnamurthy, V. and M. Baluswami 1983 Some species of Ectocarpaceae new to India. Seaw. Res. & Util. 6(1):47–48. Krishnamurthy, V. and M. Baluswami 1984 The species of Porphyra from the Indian region. Seaw. Res. & Util. 7(1):31–38. Krishnamurthy, V. and M. Baluswami 1986 On Mesospora schmidtii Weber van Bosse (Ralfsiaceae, Phaeophyceae) from the Andaman Islands. Curr. Sci. 55(12):571–572. Krishnamurthy, V., A. Balasundaram, M. Baluswami and K. Varadarajan 1990 Vertical distribution of marine algae on the east coast of South India. In: Ed. V.N. Raja Rao, Perspectives in Phycology, Today & Tomorrow's Printers and Publishers, New Delhi, pp. 267–281. Krishnamurthy, V. and M. Baluswami 1988 A new species of Sphacelaria Lyngbye from South India. Seaw. Res. & Util. 11(1):67–69. Baluswami, M. and M. Rajasekaran 2000 Morphology of Draparnaldiopsis krishnamurthyi Baluswami and Rajasekaran from Kambakkam, Andhra Pradesh. Ind. Hydrobiol. 3: 39–42. Krishnamurthy, V. and M. Baluswami 2000 Some new species of algae from India. Ind. Hydrobiol. 3(1):45–48. Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2002 Short term chemical analysis of Grateloupia lithophila Boergesen from Kovalam, near Chennai. Ind. Hydrobiol. 5(2): 155–161. Rajasulochana, N., M. Baluswami, M.D. 
Vijaya Parthasarathy and V. Krishnamurthy 2002 Chemical analysis of Grateloupia lithophila Boergesen. Seaw. Res. & Utiln. 24(1): 79–82. Angelin, T. Sylvia, M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2004 Physico-chemical properties of carrageenans extracted from Sarconema filiforme and Hypnea valentiae. Seaw. Res. & Utiln. 26(1&2): 197–207. Kanthimathi, V., M. Baluswami and V. Krishnamurthy 2004 Pithophora polymorpha Wittrock from Mahabalipuram near Chennai. Seaw. Res. Utiln. 26 (Special issue):33–37. Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2005 Seasonal variation in bio-chemical constituents of Grateloupia lithophila Boergesen. Seaw. Res. & Utiln. 27(1&2):53–56. Sylvia, S., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2005 Effect of liquid seaweed fertilizers extracted from Gracilaria edulis (Gmel.) Silva, Sargassum wightii Greville and Ulva lactuca Linn. on the growth and yield of Abelmoschus esculentus (L.) Moench. Indian Hydrobiology, 7 (Supplement): 69–88. Babu, B. and M. Baluswami 2005 Tuomeya americana (Kuetzing) Papenfuss, a fresh-water red alga, new to India. Indian Hydrobiology, 8(1):1–4. Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2006 Seasonal variation in major metabolic products of some marine Rhodophyceae from the south east coast of Tamil Nadu. Ind. Hydrobiol. 9(2):317–321. Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2007 Diversity of phycocolloids in selected members of Rhodomelaceae. Ind. Hydrobiol. 10(1):145–151. Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2008a FT-IR spectroscopic investigations on the agars from Centroceras clavulatum and Spyridia hypnoides. Int. J. Phycol. Phycochem. 4(2):125–130. Rajasulochana, N., M. Baluswami, M.D. Vijaya Parthasarathy and V. Krishnamurthy 2008b Seasonal variation in cell wall polysaccharides of Grateloupia filicina and Grateloupia lithophila. Seaweed Res. Utiln. 30(1&2):161–169. References Algae biomass producers Indian phycologists 20th-century Indian botanists 1921 births 2014 deaths
Vasudeva Krishnamurthy
[ "Engineering", "Biology" ]
1,715
[ "Synthetic biology", "Algae biomass producers", "Genetic engineering" ]
59,798,889
https://en.wikipedia.org/wiki/Computational%20materials%20science
Computational materials science and engineering uses modeling, simulation, theory, and informatics to understand materials. The main goals include discovering new materials, determining material behavior and mechanisms, explaining experiments, and exploring materials theories. It is analogous to computational chemistry and computational biology as an increasingly important subfield of materials science. Introduction Just as materials science spans all length scales, from electrons to components, so do its computational sub-disciplines. While many methods and variations have been and continue to be developed, seven main simulation techniques, or motifs, have emerged. These computer simulation methods use underlying models and approximations to understand material behavior in more complex scenarios than pure theory generally allows, and with more detail and precision than is often possible from experiments. Each method can be used independently to predict materials properties and mechanisms, to feed information to other simulation methods run separately or concurrently, or to directly compare or contrast with experimental results. One notable sub-field of computational materials science is integrated computational materials engineering (ICME), which seeks to use computational results and methods in conjunction with experiments, with a focus on industrial and commercial application. Major current themes in the field include uncertainty quantification and propagation throughout simulations for eventual decision making, data infrastructure for sharing simulation inputs and results, high-throughput materials design and discovery, and new approaches given significant increases in computing power and the continuing history of supercomputing. Materials simulation methods Electronic structure Electronic structure methods solve the Schrödinger equation to calculate the energy of a system of electrons and atoms, the fundamental units of condensed matter. Many variations of electronic structure methods exist, of varying computational complexity, with a range of trade-offs between speed and accuracy. Density functional theory Due to its balance of computational cost and predictive capability, density functional theory (DFT) has the most significant use in materials science. DFT most often refers to the calculation of the lowest energy state of the system; however, molecular dynamics (atomic motion through time) can be run with DFT computing the forces between atoms. While DFT and many other electronic structure methods are described as ab initio, there are still approximations and inputs. Within DFT there are increasingly complex, accurate, and slow approximations underlying the simulation, because the exact exchange-correlation functional is not known. The simplest model is the local-density approximation (LDA), becoming more complex with the generalized-gradient approximation (GGA) and beyond. An additional common approximation is to use a pseudopotential in place of core electrons, significantly speeding up simulations. Atomistic methods This section discusses the two major atomistic simulation methods in materials science. Other particle-based methods include the material point method and particle-in-cell, most often used for solid mechanics and plasma physics, respectively. Molecular dynamics The term molecular dynamics (MD) is the historical name used to classify simulations of classical atomic motion through time. 
Typically, interactions between atoms are defined and fit to both experimental and electronic structure data with a wide variety of models, called interatomic potentials. With the interactions prescribed (forces), Newtonian motion is numerically integrated. The forces for MD can also be calculated using electronic structure methods based on either the Born–Oppenheimer approximation or Car–Parrinello approaches. The simplest models include only van der Waals type attractions and steep repulsion to keep atoms apart; the nature of these models is derived from dispersion forces. Increasingly complex models include effects due to Coulomb interactions (e.g. ionic charges in ceramics), covalent bonds and angles (e.g. polymers), and electronic charge density (e.g. metals). Some models use fixed bonds, defined at the start of the simulation, while others have dynamic bonding. More recent efforts strive for robust, transferable models with generic functional forms: spherical harmonics, Gaussian kernels, and neural networks. In addition, MD can be used to simulate groupings of atoms within generic particles, called coarse-grained modeling, e.g. creating one particle per monomer within a polymer. Kinetic Monte Carlo Monte Carlo in the context of materials science most often refers to atomistic simulations relying on rates. In kinetic Monte Carlo (kMC), rates for all possible changes within the system are defined and probabilistically evaluated. Because the method is not restricted to directly integrating motion (as in molecular dynamics), kMC methods are able to simulate significantly different problems with much longer timescales. Mesoscale methods The methods listed here are among the most common and the most directly tied to materials science specifically, whereas atomistic and electronic structure calculations are also widely used in computational chemistry and computational biology, and continuum level simulations are common in a wide array of computational science application domains. Other methods within materials science include cellular automata for solidification and grain growth, Potts model approaches for grain evolution and other Monte Carlo techniques, as well as direct simulation of grain structures analogous to dislocation dynamics. Dislocation dynamics Plastic deformation in metals is dominated by the movement of dislocations, which are crystalline defects with line-type character. Rather than simulating the movement of tens of billions of atoms to model plastic deformation, which would be prohibitively computationally expensive, discrete dislocation dynamics (DDD) simulates the movement of dislocation lines. The overall goal of dislocation dynamics is to determine the movement of a set of dislocations given their initial positions, the external load, and the interacting microstructure. From this, macroscale deformation behavior can be extracted from the movement of individual dislocations by theories of plasticity. A typical DDD simulation goes as follows. A dislocation line can be modelled as a set of nodes connected by segments, similar to a mesh used in finite element modelling. Then, the forces on each of the nodes of the dislocation are calculated. These forces include any externally applied forces, forces due to the dislocation interacting with itself or other dislocations, forces from obstacles such as solutes or precipitates, and the drag force on the dislocation due to its motion, which is proportional to its velocity. 
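As a concrete illustration of the molecular dynamics workflow described above, the following toy sketch (an illustrative example under simple assumptions, not code from any particular MD package) advances two particles interacting through a Lennard-Jones potential with the velocity Verlet integrator, in reduced units.

```python
import numpy as np

def lj_forces(positions, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces (reduced units) for a small set of particles."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            r = np.linalg.norm(rij)
            # Force magnitude, -dU/dr, for U = 4*eps*[(sigma/r)^12 - (sigma/r)^6]
            f_mag = 24.0 * epsilon * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            fij = f_mag * rij / r
            forces[i] += fij
            forces[j] -= fij
    return forces

def velocity_verlet(positions, velocities, dt=0.005, steps=1000, mass=1.0):
    """Integrate Newton's equations of motion with the velocity Verlet scheme."""
    forces = lj_forces(positions)
    for _ in range(steps):
        positions = positions + velocities * dt + 0.5 * (forces / mass) * dt ** 2
        new_forces = lj_forces(positions)
        velocities = velocities + 0.5 * (forces + new_forces) / mass * dt
        forces = new_forces
    return positions, velocities

# Two particles displaced slightly beyond the LJ minimum (about 1.12 sigma apart).
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel)
print(pos)
```

Real production codes add neighbour lists, periodic boundary conditions and thermostats, but the defining loop of evaluating forces from a potential and integrating the motion is the same.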
The general method behind a DDD simulation is to calculate the forces on a dislocation at each of its nodes, from which the velocity of the dislocation at its nodes can be extracted. Then, the dislocation is moved forward according to this velocity and a given timestep. This procedure is then repeated. Over time, the dislocation may encounter enough obstacles that it can no longer move and its velocity is near zero, at which point the simulation can be stopped and a new experiment can be conducted with this new dislocation arrangement. Both small-scale and large-scale dislocation simulations exist. For example, 2D dislocation models have been used to model the glide of a dislocation through a single plane as it interacts with various obstacles, such as precipitates. This further captures phenomena such as shearing and bowing of precipitates. The drawback of 2D DDD simulations is that phenomena involving movement out of a glide plane, such as cross slip and climb, cannot be captured, although such simulations are easier to run computationally. Small 3D DDD simulations have been used to simulate phenomena such as dislocation multiplication at Frank-Read sources, and larger simulations can capture work hardening in a metal with many dislocations, which interact with each other and can multiply. A number of 3D DDD codes exist, such as ParaDiS, microMegas, and MDDP, among others. Other methods for simulating dislocation motion range from full molecular dynamics simulations to continuum dislocation dynamics and phase field models. Phase field Phase field methods are focused on phenomena dependent on interfaces and interfacial motion. Both the free energy function and the kinetics (mobilities) are defined in order to propagate the interfaces within the system through time. Crystal plasticity Crystal plasticity simulates the effects of atomistic dislocation motion without directly resolving either the atoms or the dislocations. Instead, the crystal orientations are updated through time with elasticity theory, plasticity through yield surfaces, and hardening laws. In this way, the stress-strain behavior of a material can be determined. Continuum simulation Finite element method Finite element methods divide systems in space and solve the relevant physical equations throughout that decomposition. The equations solved range across thermal, mechanical, electromagnetic, and other physical phenomena. It is important to note from a materials science perspective that continuum methods generally ignore material heterogeneity and assume local materials properties to be identical throughout the system. Materials modeling methods All of the simulation methods described above contain models of materials behavior. The exchange-correlation functional for density functional theory, the interatomic potential for molecular dynamics, and the free energy functional for phase field simulations are examples. The degree to which each simulation method is sensitive to changes in the underlying model can be drastically different. Models themselves are often directly useful for materials science and engineering, not only for running a given simulation. CALPHAD Phase diagrams are integral to materials science, and the development of computational phase diagrams stands as one of the most important and successful examples of ICME. 
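The force-to-velocity-to-position loop described for DDD can be sketched in a deliberately simplified form. The example below is a toy model that assumes an overdamped, linear drag law v = F/B for the nodal velocities; it is illustrative only and is not taken from the cited codes such as ParaDiS.

```python
import numpy as np

def advance_dislocation(nodes, node_force, drag_coefficient=1.0, dt=1e-3, steps=100):
    """Move dislocation nodes with an overdamped mobility law v = F / B.

    nodes      : (n, 3) array of node positions along the dislocation line
    node_force : callable returning an (n, 3) array of forces on the nodes
    """
    for _ in range(steps):
        forces = node_force(nodes)
        velocities = forces / drag_coefficient   # linear drag: velocity proportional to force
        nodes = nodes + velocities * dt          # explicit (forward Euler) position update
        if np.linalg.norm(velocities) < 1e-8:    # dislocation pinned by obstacles: stop early
            break
    return nodes

# Toy example: a straight line segment pulled in +y by a uniform applied force.
line = np.column_stack([np.linspace(0.0, 1.0, 11), np.zeros(11), np.zeros(11)])
uniform_pull = lambda n: np.tile([0.0, 1.0, 0.0], (len(n), 1))
print(advance_dislocation(line, uniform_pull)[:3])
```

In a full DDD code the force callable would sum the applied stress, the elastic interaction between all segments, and obstacle forces, and the mesh of nodes would be re-discretised as the line bows out.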
The Calculation of PHase Diagram (CALPHAD) method does not, generally speaking, constitute a simulation; instead, the models and optimizations result in phase diagrams that predict phase stability, which is extremely useful in materials design and materials process optimization. Comparison of methods For each material simulation method, there is a fundamental unit, a characteristic length and time scale, and associated model(s). Multi-scale simulation Many of the methods described can be combined, either running simultaneously or separately, feeding information between length scales or accuracy levels. Concurrent multi-scale Concurrent simulations in this context mean methods used directly together, within the same code, with the same time step, and with direct mapping between the respective fundamental units. One type of concurrent multiscale simulation is quantum mechanics/molecular mechanics (QM/MM). This involves running a small portion (often a molecule or protein of interest) with a more accurate electronic structure calculation and surrounding it with a larger region of fast-running, less accurate classical molecular dynamics. Many other methods exist, such as atomistic-continuum simulations, similar to QM/MM except using molecular dynamics and the finite element method as the fine (high-fidelity) and coarse (low-fidelity) methods, respectively. Hierarchical multi-scale Hierarchical simulation refers to methods that directly exchange information between one another but are run in separate codes, with differences in length and/or time scales handled through statistical or interpolative techniques. A common method of accounting for crystal orientation effects together with geometry embeds crystal plasticity within finite element simulations. Model development Building a materials model at one scale often requires information from another, lower scale. Some examples are included here. The most common scenario for classical molecular dynamics simulations is to develop the interatomic model directly using density functional theory, most often electronic structure calculations. Classical MD can therefore be considered a hierarchical multi-scale technique, as well as a coarse-grained method (ignoring electrons). Similarly, coarse-grained molecular dynamics are reduced or simplified particle simulations directly trained from all-atom MD simulations. These particles can represent anything from carbon-hydrogen pseudo-atoms and entire polymer monomers to powder particles. Density functional theory is also often used to train and develop CALPHAD-based phase diagrams. Software and tools Each modeling and simulation method has a combination of commercial, open-source, and lab-based codes. Open source software is becoming increasingly common, as are community codes which combine development efforts. Examples include Quantum ESPRESSO (DFT), LAMMPS (MD), ParaDiS (DD), FiPy (phase field), and MOOSE (continuum). In addition, open software from other communities is often useful for materials science, e.g. GROMACS, developed within computational biology. Conferences All major materials science conferences include computational research. Focusing entirely on computational efforts, the TMS ICME World Congress meets biannually. The Gordon Research Conference on Computational Materials Science and Engineering began in 2020. Many other smaller, method-specific conferences are also regularly organized. Journals Many materials science journals, as well as those from related disciplines, welcome computational materials research. 
Those dedicated to the field include Computational Materials Science, Modelling and Simulation in Materials Science and Engineering, and npj Computational Materials. Related fields Computational materials science is one sub-discipline of both computational science and computational engineering, containing significant overlap with computational chemistry and computational physics. In addition, many atomistic methods are common between computational chemistry, computational biology, and CMSE; similarly, many continuum methods overlap with many other fields of computational engineering. See also References External links TMS World Congress on Integrated Computational Materials Engineering (ICME) nanoHUB computational materials resources Computational science Computational physics
Computational materials science
[ "Physics", "Mathematics" ]
2,619
[ "Computational science", "Applied mathematics", "Computational physics" ]
59,803,219
https://en.wikipedia.org/wiki/Chernobyl%20groundwater%20contamination
The Chernobyl disaster remains the largest and most detrimental nuclear catastrophe, one which altered the radioactive background of the Northern Hemisphere. It occurred in April 1986 on the territory of the former Soviet Union (modern Ukraine). The catastrophe increased radiation levels in some parts of Europe and North America by nearly one million times compared to the pre-disaster state. Air, water, soils, vegetation and animals were contaminated to varying degrees. Apart from Ukraine and Belarus, the worst-hit areas, adversely affected countries included Russia, Austria, Finland and Sweden. The full impact on aquatic systems, primarily the adjacent valleys of the Pripyat and Dnieper rivers, is still unexplored. Substantial groundwater contamination is one of the gravest environmental impacts caused by the Chernobyl disaster. As a part of the overall freshwater damage, it relates to so-called “secondary” contamination, caused by the delivery of radioactive materials through unconfined aquifers to the groundwater network. It proved to be particularly challenging because groundwater basins, especially deep-lying aquifers, were traditionally considered invulnerable to diverse extraneous contaminants. To the surprise of scientists, radionuclides of Chernobyl origin were found even in deep-lying waters with formation periods of several hundred years. History Subsurface water was especially affected by radioactivity in the 30-km zone of evacuation (the so-called “exclusion zone”) surrounding the Chernobyl Nuclear Power Plant, or CNPP (Kovar & Herbert, 1998). The major and most hazardous contaminant from the perspective of hydrological spread was Strontium-90. This nuclide showed the highest mobility in subsurface waters; its rapid migration through the groundwater aquifer was first discovered in 1988–1989. Other perilous nuclear isotopes included Cesium-137, Cesium-134, Ruthenium-106, Plutonium-239, Plutonium-240 and Americium-241. The primary source of contamination was the damaged 4th reactor, which had actually been the crash site and where the concentration of Strontium-90 initially exceeded the admissible levels for drinking water by three to four orders of magnitude. The reactor remained an epicenter of irradiation even after emergency personnel built the “Sarcophagus”, or “Shelter”, a protective structure intended to isolate it from the environment. The structure proved to be non-hermetic, permeable to rainfall, snow and dew over many parts of a 1,000 m2 area. Additionally, high amounts of cesium, tritium and plutonium were delivered to groundwater due to leakage of enriched water from the 4th reactor while construction of the “Shelter” was in progress. As a result, considerable amounts of water condensed inside the “Shelter” and absorbed radioactivity from nuclide-containing dust and fuel. Although most of this water evaporated, some portion of it leaked to groundwater from the surface layers under the reactor chambers. Other sources of groundwater contamination included radioactive waste dumps on the territory of the “exclusion zone”; cooling-water reservoirs connected to the aquifer; the initial radioactive fallout in the first hours after the accident; and forest fires that accelerated the spread of contaminated particles onto soils of the surrounding area. On the whole, researchers estimated that nearly 30% of the overall surface contamination may have accumulated in the underground rock medium. 
This discovery demonstrates the hazardous scale of underground radionuclide migration on the one hand, and the important function of igneous rock as a protective shield against the further spread of contaminants on the other. Recent revelations of facts concealed by the Soviets show that the problem of radioactive groundwater contamination in the Chernobyl zone existed long before the disaster itself. Analyses conducted in 1983–1985 showed radioactivity exceeding standards by 1.5–2 times, as a result of earlier accidental malfunctions of the CNPP in 1982. When the catastrophe occurred, groundwater irradiation was caused by contamination of lands in the area of the wrecked fourth reactor. Furthermore, subsurface water was contaminated through the unconfined aquifer in proportion to the contamination of soil by isotopes of Strontium and Caesium. The upper groundwater aquifer and most of the artesian aquifers were damaged in the first place due to massive surface contamination with the radioactive isotopes Strontium-90 and Cesium-137. At the same time, considerable levels of radioactive content were recorded on the periphery of the exclusion zone, including part of the potable water delivery system. This finding proved that radioactive contaminants were migrating through the groundwater aquifers. After the disaster, the Soviet government took delayed and inefficient measures aimed at neutralizing the consequences of the accident. The issue of groundwater contamination was improperly addressed in the first several months after the disaster, leading to colossal financial expenses with negligible results. At the same time, proper monitoring of the situation was mostly absent. The primary efforts of disaster relief workers were directed at preventing contamination of surface waters. Large-scale radionuclide content in the underground water was monitored and detected only in April–May 1987, almost a year after the disaster. Migration pathways of contamination Unfortunately, hydrological and geological conditions in the Chernobyl area promoted rapid radionuclide migration into the subsurface water network. These factors include flat terrain, abundant precipitation and highly permeable sandy sediments. The main natural factors of nuclide migration in the region can be divided into four groups: weather- and climate-related (evaporation and precipitation frequency, intensity and distribution); geological (sediment permeability, drainage regimes, forms of vegetation); soil-borne (physical, hydrological and mechanical properties of lands); and lithological (terrain structures and types of rock). In meliorated areas, migration processes are additionally influenced by anthropogenic drivers related to agricultural activities. In particular, the parameters and type of drainage regime, melioration practices, water control and sprinkling can substantially accelerate the natural pace of contaminant migration. For example, artificial drainage leads to a substantial increase in absorption and flushing rates. These technological factors are particularly significant for the regions along the Pripyat and Dnieper rivers, which are almost entirely subject to artificial irrigation and drainage within the network of constructed reservoirs and dams. At the same time, the relative importance of natural and artificial migration factors differs between contaminants. 
The primary pathway for Strontium-90 transport to the groundwater is infiltration from contaminated soils and subsequent transition through the porous surfaces of the unconfined aquifer. Researchers also identified two additional migration routes for this radionuclide. The first is “technogenic” transition, caused by poor construction of water-withdrawal wells or the insufficient quality of materials used for their casings. During electric pumping of deep-lying artesian water, the stream passes unprotected through contaminated layers of the upper aquifers and absorbs radioactive particles before entering the well. This contamination route was experimentally verified at the Kiev water intake wells. Another unusual route of radionuclide migration is through weak zones of crystalline rock. Research by the Center of Radio-ecological Studies of the National Academy of Sciences of Ukraine showed that the crustal surface has unconsolidated zones characterized by increased electrical conductivity, as well as higher moisture and emanation capacity. As to Cesium-137, this nuclide demonstrates lower migration potential in Chernobyl soils and aquifers. Its mobility is hampered by such factors as clay minerals, which fix radionuclides in the rock; absorption and neutralization of isotopes through ion exchange with other chemical components of the water; partial neutralization by vegetation metabolic cycles; and overall radioactive decay. Heavy isotopes of Plutonium and Americium have even lower transport capacity both inside and outside the exclusion zone. However, their hazardous potential should not be discounted, considering their extremely long half-lives and unpredictable geochemical behavior. Agricultural damage Groundwater transport of radionuclides is among the key pathways of contamination of lands engaged in agricultural production. In particular, owing to vertical migration as water levels rise, radioactive particles infiltrate soils and subsequently enter plants through their root systems. This leads to internal irradiation of animals and people who consume contaminated vegetables. This situation is aggravated by the predominantly rural type of settlement in the Chernobyl area, with most of the population engaged in active agricultural production. It forces the authorities either to withdraw the contaminated areas near Chernobyl from agricultural use or to spend funds on excavation and treatment of the surface layers. This damage to previously intact soils puts a heavy burden on the Ukrainian and especially the Belarusian economy. Nearly one-quarter of the entire territory of Belarus was seriously contaminated with isotopes of Cesium. The authorities have been obliged to exclude nearly 265 thousand hectares of cultivated land from agricultural use to the present day. Although complex chemical and agro-technological measures led to a limited decrease in radionuclide content in food produced on contaminated territories, the problem remains largely unresolved. Apart from the economic damage, agricultural contamination via groundwater pathways is detrimental to the biophysical security of the population. Consumption of food containing radionuclides became the major source of radioactive exposure for people in the region. Thus agricultural damage ultimately represents a direct and long-lasting threat to public health. Health risks The health impacts of groundwater contamination for the population of Ukraine, Belarus and bordering states are usually perceived as extremely negative. 
The Ukrainian government initially implemented a costly and sophisticated remediation program. However, in view of limited financial resources and other, more urgent health problems caused by the disaster, these plans were abandoned. Not least, this decision was due to the research results of domestic scholars showing that groundwater contamination does not contribute substantially to the overall health risks compared with other active pathways of radioactive exposure in the “exclusion zone”. In particular, radioactive contamination of the unconfined aquifer, which is usually considered a serious threat, has a smaller economic and health impact at Chernobyl because subsurface water in the “exclusion zone” is not used for household and drinking needs. Use of this water by local residents is excluded by the special status of the Chernobyl area and the relevant administrative prohibitions. The only group directly and inevitably exposed to health threats is emergency workers engaged in water drainage work related to the deactivation of the Chernobyl Nuclear Power Plant reactors and waste disposal operations. As for contamination of the confined aquifer, which is a source of technical and household water supply for the city of Pripyat (the largest city in the Chernobyl area), it likewise does not pose an immediate health threat owing to permanent monitoring of the water delivery system. If any indices of radioactive content exceed the norm, withdrawal of water from local boreholes will be suspended. Yet such a situation poses a certain economic risk because of the high expenditure required to provide an alternative water supply system. At the same time, lethal levels of radiation in the unconfined aquifer retain substantial prospective danger because of their considerable capacity to migrate to the confined aquifer and subsequently to surface water, primarily the Pripyat River. This water can furthermore enter tributaries of the Dnieper River and the Kiev Reservoir. In this way, the number of animals and people using contaminated water for domestic purposes can drastically increase. Considering that the Dnieper is one of the key water arteries of Ukraine, a breach of the integrity of the “Shelter” or of the long-lived waste repositories, and the resulting extensive spill of radionuclides into groundwater, could reach the scale of a national emergency. According to the official position of the monitoring staff, such a scenario is unlikely because, before reaching the Dnieper, the Strontium-90 content is usually considerably diluted in the Pripyat River and the Kiev Reservoir. Yet this assessment is considered inaccurate by some experts because of the imperfect evaluation model used. Thus groundwater contamination has led to a paradoxical situation in the realm of public health: direct exposure to radiation from using contaminated subsurface water for household purposes is incomparably less than the indirect impact caused by nuclide migration to cultivated lands. In this regard, on-site and off-site health risks from contaminants in the groundwater network of the exclusion zone can be distinguished. Low on-site risks are produced by direct water withdrawal for drinking and domestic needs. It was calculated that even if hypothetical residents used water on the territory of the radioactive waste dumps, the risks would be far below admissible levels. 
Such results can be explained by the purification of underground water during its hydrological transport and by dilution in surface waters, rain and snowmelt. The primary health risks are off-site, posed by radionuclide contamination of agricultural lands and caused, among other factors, by groundwater migration through the unconfined aquifer. This process eventually leads to internal irradiation of people using food from the contaminated areas. Water protection measures The urgency of taking immediate measures for underground water protection in the Chernobyl and Pripyat region arose from the perceived danger of radionuclide transport to the Dnieper River, which would contaminate Kiev, the capital of Ukraine, and affect 9 million other water users downstream. In this regard, on May 30, 1986, the government adopted the Decree on groundwater protection policy and launched a costly program of water remediation. However, these measures proved insufficient, as they were grounded upon incomplete data and lacked efficient monitoring. Without credible information, emergency staff adopted a “worst case” scenario, expecting maximum contamination density and minimal retardation of migration. When updated survey information showed negligible risks of excessive nuclide migration, the remediation program was stopped. However, by that point Ukraine had already spent nearly 20 million dollars on the project and had exposed relief workers to needless danger of irradiation. In the 1990s and 2000s, the focus of protective measures shifted from remediation to the construction of protective systems for the complete isolation of the contaminated areas along the Pripyat River and around the Chernobyl Nuclear Power Plant from the rest of the region. Once this was done, local authorities were advised to concentrate their efforts on permanent monitoring of the situation. The decay of radionuclides was left to run its course under so-called “observed natural attenuation”. Monitoring measures In the face of the persistent disintegration of radioactive materials and the highly unfavorable radiation background in the “exclusion zone”, permanent monitoring was and remains crucial both for de-escalating environmental degradation and for preventing humanitarian catastrophes among neighboring communities. Monitoring also makes it possible to reduce parameter uncertainties and improve assessment models, leading to a more realistic picture of the problem and its scale. Until the late 1990s, methods of data collection for groundwater quality monitoring were of low efficiency and reliability. During the installation of monitoring boreholes, the wells were contaminated with “hot fuel” particles from the surface ground, which made the initial data inaccurate. Decontamination of boreholes from such extraneous pollutants could take 1.5–2 years. Another problem was insufficient purging of monitoring wells before sampling. This procedure, necessary to replace stale water inside the boreholes with fresh water from the aquifer, was introduced by monitoring personnel only in 1992. The importance of purging was immediately demonstrated by the substantial growth of Strontium-90 indices in the samples. The quality of the data was further worsened by corrosion of the steel components of the monitoring wells. Corrosion particles substantially altered the radioactive background of the aquifer. In particular, the excessive content of iron compounds in the water entered into compensatory reactions with Strontium, leading to deceptively low Strontium-90 indices in the samples. 
In some cases, inappropriate design of the well cages also impeded monitoring accuracy. The well designs implemented by Chernobyl Nuclear Power Plant personnel in the early 1990s had 12-metre-long screening sections, allowing only vertically averaged sampling. Such samples are hard to interpret, as an aquifer usually has an unequal vertical distribution of contaminants. Since 1994, the quality of groundwater observation in the Chernobyl zone has improved substantially. New monitoring wells are constructed with polyvinyl chloride materials instead of steel, with shortened screening sections of 1–2 m. Additionally, in 1999–2012 an experimental monitoring site was created close to the radioactive waste dump area west of the Chernobyl Nuclear Power Plant, called the “Chernobyl Red Forest”. The elements of the new monitoring system include a laboratory module, a station for unsaturated zone monitoring, a network of monitoring boreholes and a meteorological station. Its primary objectives include monitoring such processes as the release of radionuclides from “hot fuel particles” (HFP) dispersed in the surface layer; their subsequent transition through the unsaturated zone; and the condition of the phreatic (saturated) zone. HFP are particles which emerged from burnt wood and concrete during the initial explosion and the subsequent fire in the “exclusion zone”. The unsaturated zone is equipped with a water and soil sampler, water content sensors and tensiometers. The work of the experimental site allows real-time surveillance of Strontium-90 migration and its condition in the aquifer, yet simultaneously raises new questions. The monitoring staff noticed that fluctuations of water levels directly influence the release of radionuclides from sediments, while the accumulation of organic matter in the sediment correlates with the geochemical parameters of the aquifer. Additionally, for the first time the researchers detected Plutonium in deep-lying groundwater, which means that this contaminant also has the capacity to migrate into the confined aquifer. However, the specific means of this migration still remain unknown. The researchers forecast that, provided the protection of the nuclear waste dumps in the exclusion zone remains intact, the concentration of Strontium-90 in subsurface water up to 2020 will remain much lower than the admissible maximum levels. Also, contamination of the Pripyat River, the most vulnerable surface water route, by underground tributaries is unlikely in the next 50 years. At the same time, the number of monitoring wells is still insufficient, and the network needs expansion and modification. The boreholes are also distributed unevenly within the exclusion zone, without consideration of the hydrological and radioactive specifics of the area (Kovar & Herbert, 1998). Lessons learned The Chernobyl accident revealed the complete unpreparedness of the local authorities for resolving the environmental issues of a nuclear disaster. Groundwater management is no exception. Without accurate real-time data and suitable emergency management plans, the government spent enormous funds on groundwater remediation, which later proved to be needless. At the same time, truly crucial top-priority measures, such as reliable isolation of the damaged 4th reactor, were performed poorly. 
If the “Shelter” had been constructed without deficiencies, as a completely hermetic structure isolating the 4th reactor from contact with the external air, soil and groundwater, it would have made a much greater contribution to preventing nuclides from entering and migrating throughout the groundwater network. Taking these failures into account, the following lessons for groundwater management have been learned from the Chernobyl tragedy: The necessity of a consistent and technologically reliable monitoring system capable of producing high-quality real-time data; Accurate monitoring data as the primary basis for any remedial practices and melioration policies; The criteria and purposes of groundwater management activities, be they remediation, construction works or agricultural restrictions, are to be identified at the analysis stage and prior to any practical realization; Problems of groundwater contamination must be regarded in a wider perspective, in close correlation with other pathways and forms of contamination, because they are all interconnected and mutually influential; It is always highly advisable to engage international experts and leading scholars in the peer review of designed action plans; Groundwater management in areas of radioactive contamination must be based on an integrated ecosystem approach, i.e. one that considers its influence on local and global ecosystems, the well-being of local communities and long-lasting environmental impacts. References Ground Water pollution Radioactive contamination
Chernobyl groundwater contamination
[ "Chemistry", "Technology", "Environmental_science" ]
4,057
[ "Aftermath of the Chernobyl disaster", "Environmental impact of nuclear power", "Radioactive contamination", "Water pollution" ]
59,809,245
https://en.wikipedia.org/wiki/Transpiration%20cooling
Transpiration cooling is a thermodynamic process in which cooling is achieved by moving a liquid or a gas through the wall of a structure to absorb some portion of the heat energy from the structure while simultaneously reducing the convective and radiative heat flux entering the structure from the surrounding space. One approach to transpiration cooling is to move liquid through small pores in the outer wall of a body, leading to evaporation of the liquid into a gas via the physical mechanism of evaporative cooling. Other approaches are possible. Applications Transpiration cooling is used in the aerospace industry, in jet and rocket engines. In 2018, researchers at the University of Oxford were experimentally testing transpiration cooling as a thermal protection system for hypersonic vehicles such as rockets or spaceplanes. Transpiration cooling is one of a variety of cooling techniques that may be used to reduce regenerative cooling loads in rocket engines and subsequently reduce propellant requirements. Other techniques exist, such as film cooling, ablative cooling, radiative cooling, heat sink cooling and dump cooling. Transpiration cooling is being considered for use in space vehicles reentering the Earth's atmosphere at hypersonic velocities, where a transpirationally cooled outer skin could serve as part of the thermal protection system of the reentering spacecraft. SpaceX publicly mentioned such a system in 2019 for use on their Starship reusable second stage and orbital spacecraft to mitigate the harsh conditions of reentry. The design concept envisioned a double stainless-steel skin, with active coolant flowing between the two layers, with some areas additionally containing multiple small pores that would allow for transpiration cooling. After design and testing in terrestrial labs, SpaceX subsequently stated that although an alternative heat mitigation approach, using low-cost ceramic tiles on the windward side of Starship, was being developed, transpiration cooling could be used in some areas. Few details on the design are expected to be publicly released, as US law prevents SpaceX from releasing such information. See also Evapotranspiration References Thermodynamic processes Spaceflight technology
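The energy balance behind the evaporative approach described above can be illustrated with a back-of-the-envelope sketch. The property values and the simple sensible-plus-latent-heat balance below are assumptions for illustration only; the estimate also ignores the additional benefit of the transpired gas film blocking part of the incoming heat flux, so it tends to overestimate the required coolant flow.

```python
def coolant_mass_flux(heat_flux_w_m2, cp_liquid=4187.0, latent_heat=2.26e6,
                      t_in=300.0, t_boil=373.0):
    """Rough coolant mass flux (kg/s per m^2 of wall) to absorb a given heat flux.

    Energy balance: each kilogram of liquid absorbs sensible heat while warming
    from t_in to t_boil plus the latent heat of vaporisation when it evaporates.
    Property values here are illustrative (water at atmospheric pressure).
    """
    energy_per_kg = cp_liquid * (t_boil - t_in) + latent_heat
    return heat_flux_w_m2 / energy_per_kg

# Example: a 1 MW/m^2 heat load absorbed by a water-like coolant.
print(f"{coolant_mass_flux(1.0e6):.3f} kg/s per square metre")
```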
Transpiration cooling
[ "Physics", "Chemistry" ]
435
[ "Thermodynamic processes", "Thermodynamics" ]
59,811,433
https://en.wikipedia.org/wiki/Verastem%20Oncology
Verastem, Inc., doing business as Verastem Oncology, is an American pharmaceutical company that develops medicines to treat certain cancers. Headquartered and founded in Boston, Massachusetts, the firm is a member of the NASDAQ Biotechnology Index. History Verastem Oncology (Verastem Inc) was co-founded in 2010 by entrepreneur Christoph H. Westphal and venture capitalist Michelle Dipp, who provided seed funding and initial office space in Cambridge, MA. The company was formed to commercialize the work of the three other co-founders, MIT biologists Robert A. Weinberg, Eric S. Lander and Piyush Gupta, by discovering and developing drugs to treat cancer by targeting cancer stem cells. The company raised $16 million in the initial Series A financing. Westphal served as CEO and chairman of the board from 2010 to 2013. Under his leadership, the company raised $55 million through an IPO in 2012. Robert Forrester succeeded Christoph Westphal as Verastem's president and CEO in 2013. In July 2019, Brian Stuglik was appointed chief executive officer (CEO) of Verastem Oncology. Pipeline The company's leading investigational drug, defactinib (VS-6063), is a small-molecule focal adhesion kinase (FAK) inhibitor designed to kill cancer stem cells, intended for the treatment of malignant pleural mesothelioma. In October 2015, the company announced the premature termination of its late-stage clinical trial for defactinib after data analysis of the Phase II COMMAND trial found no significant differences in efficacy versus placebo. Following the failure of the study, the company had to cut 50% of its workforce. In November 2016, Verastem Oncology licensed global rights from Infinity Pharmaceuticals to duvelisib (IPI-145), a novel inhibitor of PI3K delta and gamma. In April 2018, Verastem filed a New Drug Application (NDA) for duvelisib for the treatment of relapsed or refractory chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL) and accelerated approval for relapsed or refractory follicular lymphoma (FL). The results of the clinical study DUO were published in the journal Blood. Verastem Oncology received FDA approval for duvelisib on September 24, 2018, as a treatment for adults with 3rd-line chronic lymphocytic leukemia/small lymphocytic lymphoma, and an accelerated approval as a 3rd-line treatment for follicular lymphoma, contingent on the results of a confirmatory trial. The drug label carries a black box warning due to the risk of potentially fatal or serious toxicities: infections, diarrhea or colitis, cutaneous reactions and pneumonitis. In July 2019, Verastem Oncology signed an exclusive agreement with Sanofi for the commercialization of duvelisib in Russia and the CIS, Turkey, the Middle East and Africa. References External links Pharmaceutical companies of the United States Companies listed on the Nasdaq Pharmaceutical companies established in 2010 Life sciences industry Specialty drugs
Verastem Oncology
[ "Biology" ]
670
[ "Specialty drugs", "Life sciences industry" ]
70,629,248
https://en.wikipedia.org/wiki/Carbide%20bromide
Carbide bromides are mixed anion compounds containing bromide and carbide anions. Many carbide bromides are cluster compounds, containing one, two or more carbon atoms in a core, surrounded by a layer of metal atoms, encased in a shell of bromide ions. These ions may be shared between clusters to form chains, double chains or layers. The great majority of these carbide bromide compounds contain rare earth elements. Since these elements have similar properties, similar structures can be made by substituting the elements. R2CBr2 forms a structure with layers of R6C clusters that contain one carbon atom. Each layer has bromide coating the top and bottom. Very similar is R2C2Br2, which has layers of R6C2 clusters containing pairs of carbon atoms. This dicarbon unit is an anion (C24−) and contains a double bond. Layers have bromide on both sides, and so they are only weakly held together by van der Waals forces. If these layers are aligned with each other, a 1T form results, with a small c parameter for the unit cell. In some compounds the layers are not quite aligned, but repeat after three layers, giving a 3R form with a larger c unit cell height. Where the layers align, the crystal system is trigonal. But if the layers never quite align at any height, a monoclinic crystal results. The C2 unit sits at an angle to the layers, and thus reduces symmetry compared to compounds with single carbon atoms in the cluster. In R2CBr there are layers of R6C that share bromide between layers. List References Bromides Carbides Mixed anion compounds
Carbide bromide
[ "Physics", "Chemistry" ]
350
[ "Matter", "Mixed anion compounds", "Salts", "Bromides", "Ions" ]
70,631,248
https://en.wikipedia.org/wiki/Thermally%20induced%20shape-memory%20effect%20%28polymers%29
The thermally induced unidirectional shape-memory effect is an effect classified within the new class of so-called smart materials. Polymers with a thermally induced shape-memory effect are new materials whose applications are now being studied in different fields of science (e.g., medicine), communications and entertainment. There are currently reported and commercially used systems. However, the possibility of programming other polymers is present, due to the number of copolymers that can be designed: the possibilities are almost endless. General information Polymers with a thermally induced shape-memory effect are those polymers that respond to external stimuli and because of this have the ability to change their shape. The thermally induced shape-memory effect results from a combination of proper processing and programming of the system. This effect can be observed in polymers with very different chemical compositions, which opens up a great range of possible applications. Description of the effect on polymers In the first step the polymers are processed by means of common techniques, such as injection molding, extrusion or thermoforming, at a temperature (THigh) at which the polymer melts, obtaining a final shape which is called the "permanent" shape. The next step is called system programming and involves heating the sample to a transition temperature (TTrans). At that temperature the polymer is deformed, reaching a shape called the "temporary" shape. Immediately afterwards the temperature of the sample is lowered. The final step of the effect involves the recovery of the permanent shape. The sample is heated to the transition temperature (TTrans) and within a short time the recovery of the permanent shape is observed. This effect is not a natural property of the polymer, but results from proper programming of the system with the appropriate chemistry. For a polymer to exhibit this effect, it must have two components at the molecular level: bonds (chemical or physical) to determine the permanent shape and "trigger" segments with a TTrans to fix the temporary shape. Characteristics of the effect on polymers Metals exhibit a bidirectional shape-memory effect, maintaining one shape at each temperature. Polymers recover their shape only once. Polymers can change their shape with elongations up to 200% while metals have a maximum of 8-10% elongation. Recovery in metals and ceramics involves a change in crystal structure, while recovery in polymers is due to the action of entropic forces and anchor points. Polymers can be designed according to the desired application: they can be biodegradable, act as drug delivery systems (medicinal), be antibacterial, etc. The transition temperature is designed with "trigger" segments, which makes temperature adjustment easier than in metallic alloys and ceramics, since those depend on equiatomic compositions. Functioning It should first be noted that the primary inelastic mechanism of these polymers is the mobility of the chains and the conformational rearrangement of the groups. The effect on semi-crystalline and amorphous polymers must then be distinguished. In both cases, anchor points must be created that act as "triggers" for the effect. In the case of amorphous polymers, these will be the knots or "tangles" of the chains, and in the case of semi-crystalline polymers, the crystals themselves will form these anchor points.
By modifying the shape of the material under a minimal critical stress, the chains slide and a metastable structure is created, which increases the organization and order of the chains (lower entropy). When the deformation load is removed, the anchor points provide a storage mechanism for macroscopic stresses in the form of small localized stresses and decreased entropy. In the glassy state the rotational motions of the molecules are frozen and impeded; as the temperature increases and the glass transition is passed, these motions thaw, rotations and relaxations occur, and the molecules take the form that is entropically most favorable to them, the one with the lowest energy. These movements are called the relaxation process, and the formation of "random coils" to eliminate stresses is called shape-memory loss. A polymer will exhibit the shape-memory effect if it is susceptible to being stabilized in a given state of deformation, preventing the molecules from slipping and regaining their higher entropy (lower energy) form. This can be achieved almost entirely by creating crosslinks or by vulcanization; these new bonds act as anchors and prevent the relaxation of the chains. The anchor points can be physical or chemical. Comparison with metals and ceramics The unidirectional shape-memory effect was first observed by Chang and Read in 1951 in a gold-cadmium alloy, and in 1963 Buehler described this effect for nitinol, which is an equiatomic nickel-titanium alloy. This effect in metals and ceramics is based on a change in the crystal structure, called the martensitic phase transition. The disadvantage of these materials is that they are equiatomic alloys, and deviations of 1% in the composition modify the transition temperature by approximately 100 K. Some metals and ceramics present the effect bidirectionally, which means that at a certain temperature there is one shape and this can be changed by changing the temperature, but if the first temperature is restored, the first shape is also recovered. This is achieved by training the material for each shape at each temperature. Metals and ceramics with a thermally induced bidirectional shape-memory effect have had great application in medical implants, sensors, transducers, etc. Many, however, present a risk due to their high toxicity. Phases in the system To obtain the effect, it is necessary to achieve a phase separation. One of these phases works as the trigger for the temporary shape, using a transition temperature that can be Tm or Tg and that in this context is called TTrans. A second phase has the higher transition temperature, and above this temperature the polymer melts and is processed by conventional methods. The ratio of the elements forming the phase separation largely regulates the TTrans transition temperature; this is much easier to control than in metallic alloys. An example of this is the poly(ethylene oxide-ethylene terephthalate) or EOET copolymer. The poly(ethylene terephthalate) (PET) segment has a relatively high Tg and Tm and is commonly referred to as the "hard" segment, whereas the poly(ethylene oxide) (PEO) segment has a relatively low Tm and Tg and is referred to as the "soft" segment. In the final polymer these segments separate into two phases in the solid state. PET has a high degree of crystallinity, and the formation of these crystals provides for the flow and rearrangement of the PEO chains as they are stretched at temperatures higher than their Tm.
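The programming and recovery cycle described above is commonly quantified by two figures of merit: the shape fixity ratio (how well the temporary shape is retained after unloading) and the shape recovery ratio (how completely the permanent shape returns on reheating). The minimal Python sketch below computes them from invented strain values for a few consecutive cycles; the definitions follow the convention widely used in the shape-memory polymer literature rather than anything prescribed in this article.

```python
# Hypothetical strain data from a thermomechanical programming cycle, used to
# illustrate the two figures of merit usually quoted for shape-memory polymers:
# shape fixity R_f and shape recovery R_r.

def shape_fixity(eps_loaded, eps_fixed):
    """R_f = strain retained after cooling and unloading / strain imposed at T_trans."""
    return 100.0 * eps_fixed / eps_loaded

def shape_recovery(eps_loaded, eps_residual_prev, eps_residual):
    """R_r for one cycle, relative to the residual strain of the previous cycle."""
    return 100.0 * (eps_loaded - eps_residual) / (eps_loaded - eps_residual_prev)

# Invented numbers for three consecutive cycles (strains in %):
eps_m = 100.0                      # strain imposed above T_trans in every cycle
eps_u = [98.0, 97.5, 97.0]         # strain retained after cooling and unloading
eps_p = [0.0, 2.0, 3.5, 4.5]       # residual strain after each recovery (eps_p[0]: virgin sample)

for n in range(3):
    rf = shape_fixity(eps_m, eps_u[n])
    rr = shape_recovery(eps_m, eps_p[n], eps_p[n + 1])
    print(f"cycle {n + 1}: R_f = {rf:.1f} %, R_r = {rr:.1f} %")
```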
Experimentation Achieving the effect A commercial, high-purity (non-recycled) polymer sample with known molecular mass distribution can be obtained or synthesized according to standard procedures. Common properties such as elastic modulus, tan δ, crystallinity, viscosity and density should be characterized. Anchor points, physical or chemical (chain entanglement, crystallinity or vulcanization), must be decided upon. If crosslinking with slight vulcanization is desired, standardized methods for each polymer must be taken into account. Polycyclooctene (PCO), for example, is a polymer without a shape-memory effect because it does not present a clear "plateau", but the addition of a minimal amount of peroxide (~1%) provides PCO with all the requirements to present this effect. A permanent stress-free shape with known dimensions is prepared by conventional methods. The system is programmed, i.e. it is heated up to TTrans and at that temperature the shape is modified by applying pressure or stress. Then the material is cooled and finally the pressure or stress is removed. After heating the sample again to TTrans, the stresses are released and the permanent shape is recovered. Some polymers fatigue quickly, so each system can be evaluated with a simple experiment that consists of programming the system 10 or 20 times in a row and measuring the recovery (in %) and the recovery time. Crystallizable polymers Polymers that can crystallize are (with the exception of PP) practically guaranteed to exhibit this effect, mainly due to their ordering capacity, which is reflected in the crystallinity; the crystals have affinity for their constituent elements and form new bonds, which provide anchoring forces that give stability to the temporary shape. Crystallization, vulcanization, and final properties To analyze the behavior of the crystals in this type of polymer, the WAXS and DSC techniques are used; these techniques help to determine what percentage of the polymer is crystalline and how the crystals are organized. This is important because the crystallinity decreases as the crosslinking increases, since the chains lose the ability to arrange themselves, and order is essential to achieve crystallinity. A second problem present when crosslinking molecules is melting, since an excess of crosslinking modifies the molecule in such a way that it stops melting (similar to a thermoset) and therefore the temporary shape cannot be obtained. The control of curing, either by electromagnetic waves or with peroxides, is very important since it increases the TTrans and decreases the crystallinity, which are determining factors in the shape-memory effect. In the case of biocompatible semicrystalline systems such as poly(ε-caprolactone) and poly(n-butyl acrylate) crosslinked by photopolymerization, it has been reported that the crystallization behavior is affected by the cooling rate, as in any other semicrystalline polymer, but the heat of crystallization remains independent of the cooling rate. The influence of the crosslinking of the molecules, the cooling rate and the crystallization behavior are specific to each system and impossible to enumerate, since the synthesis possibilities are almost infinite. Crystallizable polymers such as oligo(ε-caprolactone) can have amorphous segments such as poly(n-butyl acrylate), and the molecular mass ratio of the two determines the behavior of the system in programming the temporary shape and in recovery of the permanent shape. Factors influencing the effect Molecular mass of the crosslinked polymer.
Molecular weight of the crystallizable polymer. Degree of crosslinking. Phase separation. Moduli of the original polymers and their proportion in the copolymer. Moisture (in polymers susceptible to moisture degradation). Cooling speed. Amorphous polymers If the polymeric system is amorphous, then the anchor points of the crystalline structure are not available and the only physical way to ensure the stability of the temporary shape is through chain entanglements (physical entanglements rather than chemical crosslinks), in addition to the possibility of chemical crosslinking. Relaxation processes In the glassy state, the movements of the long chain segments are frozen. These movements depend on an activation temperature that brings the polymer to a softened and elastic state; the rotation about the carbon bonds and the movements of the chains then no longer have strong impediments, and the chains accommodate and acquire the conformation that requires less energy. The chains "unravel", forming random coils, without order and therefore with higher entropy. If a polymer sample is stretched for a short time in the elastic range, the sample will recover its original shape when the load is removed, but if the load remains for a sufficiently long period, the chains rearrange and the original shape is not recovered; the result is an irreversible deformation, also called a relaxation process (in this case: creep). In order for a polymer to exhibit the thermally induced shape-memory effect, it is necessary to fix the chains with anchor points to avoid these relaxation processes that inelastically modify the system. Glass transition Amorphous polymers do not have a melting temperature (Tm) like semi-crystalline polymers and have only a glass transition temperature (Tg). This has a decisive influence on the behavior of shape-memory polymer systems. Even a crystalline copolymer system can end up losing its crystallinity and becoming practically amorphous after treatment with a crosslinker. An amorphous polymer depends on the level of crosslinking or the degree of polymerization to exhibit this effect. Poly(norbornene), for example, is a linear, amorphous polymer with a content of 70 to 80% trans bonds in commercial products, a molecular mass of approximately 3×106 g mol−1 and a Tg of approximately 35 to 45°C. Because it achieves an unusually high degree of polymerization, chain entanglements can be relied upon as anchor points to achieve the thermally induced shape-memory effect. Therefore, this polymer relies solely on physical anchor points. When heated up to Tg, the material abruptly changes from a rigid state to a softened, rubbery state. To achieve the effect, the shape must be changed rapidly to avoid rearrangement of the segments of the polymer chains, and the material must then immediately be cooled, also very rapidly, below Tg. Reheating the material back to Tg will show the recovery of the original shape. Influence of chemical structure In designing copolymers for the thermally induced shape-memory effect it is very important to keep in mind that a slight change in chemical structure (cis/trans ratios, tacticity, molecular mass, etc.) produces a significant change in the shape-memory polymer. An example is the copolymer of poly(methyl methacrylate-co-methacrylic acid) or poly(MAA-co-MMA) compared to poly(MAA-co-MMA)-PEG, where PEG is short for poly(ethylene glycol), which forms complexes in the copolymer.
Changes in the morphology of the material upon including PEG provide the shape-memory effect to the copolymer, showing two phases: the three-dimensional network provides a stable phase, and the reversible phase is formed by the amorphous part of the PEG-PMAA complexes. The complexes show a high storage modulus, so when a PEG of higher molecular mass is introduced into the copolymer, an increase in the elastic modulus, a higher modulus in the glassy state and faster recovery are observed. Its properties can be studied with differential scanning calorimetry (DSC), wide-angle X-ray diffraction (WAXD) and dynamic mechanical analysis (DMA) techniques to determine its physicochemical arrangement. Overview For a polymer to exhibit the thermally induced shape-memory effect, it must have anchor points for the temporary and permanent shapes. These can be physical (chain entanglements, crystals) or chemical (chemical crosslinking, curing, vulcanization). This effect in polymers depends on entropic forces and not on martensitic transitions as in metals. The most important physical properties are: elastic modulus, recovery speed, temporary shape stability. The transition temperature TTrans can be Tm or Tg or a mixture of both. All crystalline polymers (except for PP) can exhibit the thermally induced shape-memory effect. Inelastic mechanisms that decrease the effect are: moisture degradation (for moisture-sensitive polymers, e.g. polyurethanes), unraveling of the chains, and degradation of the bonds that fix the permanent or temporary shape. Applications Most of the applications of polymers with this effect are only suggestions for now; many possibilities have been proposed, but so far only a few have been used, the most important being medical devices and automotive elements, although the greatest success has been achieved with heat-shrinkable polyethylene, which is also an exception in the programming step, since it is processed in a different way. Healthcare applications Orthodontic items, such as wires and foams for endovascular procedures. Microelements for intelligent suturing. Intravenous needles that soften in the body and laparoscopy devices. Drug delivery systems. In-body degradable implants for minimally invasive surgeries. Inner soles of orthopedic or special-needs shoes and utensils for people with disabilities. Intravenous catheters. Everyday life applications Seals for adjustable pipes and fittings, shrinkable or adjustable pipes. Braille reprintable boards and reprintable advertisements. Adjustable anti-corrosion films. Hair for dolls, toys, hair styling items. New items packaged in smaller volume that change their shape upon first use. Protections for automobiles, fenders, etc. Artificial nails. Smart textiles. See also Shape-memory polymer Shape-memory alloy Polymer Copolymer Smart material Bibliographical references Charlesby A. Atomic Radiation and Polymers. Pergamon Press, Oxford, pp. 198–257 (1960). Gall, K; Dunn, M; Liu, Y. Internal stress storage in shape-memory polymer nanocomposites. Applied Physics Letters, 85 (Jul 2004). Jeong, Han Mo; Song H, Chi W. Shape-memory effect of poly(methylene-1,3-cyclopentane) and its copolymer with polyethylene. Polymer International, 51: 275-280 (2002). Kawate, K. Creep Recovery of Acrylate Urethane Oligomer/Acrylate Networks. Creep recovery, shape memory. Journal of Polymer Science, 35. Kim B K, Lee S Y, Xu M. Polyurethanes having shape-memory effects. Polymer 37: 5781–93 (1998). Langer, R; Tirrell, D. A. 
Designing materials for biology and medicine. Nature 428 (Apr 2004). Lendlein, A; Kelch, S; Kratz, K. Shape-memory Polymers. Encyclopedia of Materials: Science and Technology. 1–9 (2005). Lendlein, A; Langer, R. Biodegradable, elastic shape-memory polymers for potential biomedical applications. Science 296, 1673–1676 (2002). Lendlein, A; Kelch, S. Shape-Memory Polymers. Angew. Chem. Int. Ed. 41: 2034–2057 (2002). Lendlein, A; Schmidt, A M; Langer R. AB-polymer networks based on oligo(ε-caprolactone) segments showing shape-memory properties. Proc. Natl. Acad. Sci. USA 98(3): 842–7 (2001). Li F, Chen Y, Zhu W, Zhang X, Xu M. Shape-memory effects of polyethylene/nylon 6 graft copolymers. Polymer 39(26): 6929–6934 (1998). Liu, Chun; Mather. Chemically Cross-Linked Polycyclooctene: Synthesis, Characterization, and Shape Memory Behavior. Macromolecules 35: 9868–9874 (2002). Nakasima A, Hu J, Ichinosa M, Shimada H. Potential application of shape-memory plastic as elastic material in clinical orthodontics. Eur. J. Orthodontics 13: 179–86 (1991). Ortega, Alicia M; Gall, Ken. The Effect of Crosslink Density on the Thermo-Mechanical Response of Shape Memory Polymers. Peng P; Wang W; Xuesi C; Jing X. Poly(ε-caprolactone) Polyurethane and Its Shape-Memory Property. Biomacromolecules 6: 587–592 (2005). Wang, M; Zhang, L. Recovery as a Measure of Oriented Crystalline Structure in Poly(ether ester)s Based on Poly(ethylene oxide) and Poly(ethylene terephthalate) Used as Shape Memory Polymers. Journal of Polymer Science: Part B: Polymer Physics, 37: 101–112 (1999). Yiping C; Ying G; Juan D; Juan L; Yuxing P; Albert S. Hydrogen-bonded polymer network—poly(ethylene glycol) complexes with shape-memory effect. Journal of Materials Chemistry 12: 2957–2960 (2002). Katime I, Katime O, Katime D. "Los materiales inteligentes de este Milenio: los hidrogeles polímeros". Editorial de la Universidad del País Vasco, Bilbao 2004. ISBN 84-8373-637-3. Katime I, Katime O y Katime D. "Introducción a la Ciencia de los materiales polímeros: Síntesis y caracterización". Servicio Editorial de la Universidad del País Vasco, Bilbao 2010. ISBN 978-84-9860-356-9 Polymer chemistry Polymer physics Polymers
Thermally induced shape-memory effect (polymers)
[ "Chemistry", "Materials_science", "Engineering" ]
4,337
[ "Polymer physics", "Polymers", "Materials science", "Polymer chemistry" ]
70,636,825
https://en.wikipedia.org/wiki/Thomas%E2%80%93Yau%20conjecture
In mathematics, and especially symplectic geometry, the Thomas–Yau conjecture asks for the existence of a stability condition, similar to those which appear in algebraic geometry, which guarantees the existence of a solution to the special Lagrangian equation inside a Hamiltonian isotopy class of Lagrangian submanifolds. In particular the conjecture contains two difficulties: first it asks what a suitable stability condition might be, and secondly whether one can prove that an isotopy class is stable if and only if it contains a special Lagrangian representative. The Thomas–Yau conjecture was proposed by Richard Thomas and Shing-Tung Yau in 2001, and was motivated by similar theorems in algebraic geometry relating existence of solutions to geometric partial differential equations and stability conditions, especially the Kobayashi–Hitchin correspondence relating slope stable vector bundles to Hermitian Yang–Mills metrics. The conjecture is intimately related to mirror symmetry, a conjecture in string theory and mathematical physics which predicts that mirror to a symplectic manifold (which is a Calabi–Yau manifold) there should be another Calabi–Yau manifold for which the symplectic structure is interchanged with the complex structure. In particular mirror symmetry predicts that special Lagrangians, which are the Type IIA string theory model of BPS D-branes, should be interchanged with the same structures in the Type IIB model, which are given either by stable vector bundles or vector bundles admitting Hermitian Yang–Mills or possibly deformed Hermitian Yang–Mills metrics. Motivated by this, Dominic Joyce rephrased the Thomas–Yau conjecture in 2014, predicting that the stability condition may be understood using the theory of Bridgeland stability conditions defined on the Fukaya category of the Calabi–Yau manifold, which is a triangulated category appearing in Kontsevich's homological mirror symmetry conjecture. Statement The statement of the Thomas–Yau conjecture is not completely precise, as the particular stability condition is not yet known. In the work of Thomas and Thomas–Yau, the stability condition was given in terms of the Lagrangian mean curvature flow inside the Hamiltonian isotopy class of the Lagrangian, but Joyce's reinterpretation of the conjecture predicts that this stability condition can be given a categorical or algebraic form in terms of Bridgeland stability conditions. Special Lagrangian submanifolds Consider a Calabi–Yau manifold $X$ of complex dimension $n$, which is in particular a real symplectic manifold of dimension $2n$. Then a Lagrangian submanifold $L \subset X$ is a real $n$-dimensional submanifold such that the symplectic form $\omega$ is identically zero when restricted to $L$, that is $\omega|_L = 0$. The holomorphic volume form $\Omega$, when restricted to a Lagrangian submanifold, becomes a top degree differential form. If the Lagrangian is oriented, then there exists a volume form $\mathrm{vol}_L$ on $L$ and one may compare this volume form to the restriction of the holomorphic volume form: $\Omega|_L = f\, \mathrm{vol}_L$ for some complex-valued function $f$. The condition that $X$ is a Calabi–Yau manifold implies that the function $f$ has norm 1, so we have $\Omega|_L = e^{i\theta}\, \mathrm{vol}_L$, where $\theta$ is the phase angle of the function $f$. In principle this phase function is only locally continuous, and its value may jump. A graded Lagrangian is a Lagrangian together with a lifting $\theta : L \to \mathbb{R}$ of the phase angle to $\mathbb{R}$, which satisfies $e^{i\theta(x)} = f(x)$ everywhere on $L$. An oriented, graded Lagrangian is said to be a special Lagrangian submanifold if the phase angle function $\theta$ is constant on $L$.
The average value of this function, denoted $\phi(L)$, may be computed using the volume form as $\int_L \Omega|_L = \int_L e^{i\theta}\, \mathrm{vol}_L = e^{i\phi(L)} \left| \int_L \Omega|_L \right|$ and only depends on the Hamiltonian isotopy class of $L$. Using this average value, the condition that $\theta$ is constant may be written in the following form, which commonly occurs in the literature. This is the definition of a special Lagrangian submanifold: $\operatorname{Im}\!\left(e^{-i\phi(L)}\, \Omega\right)\big|_L = 0$. Hamiltonian isotopy classes The condition of being a special Lagrangian is not satisfied for all Lagrangians, but the geometric and especially physical properties of Lagrangian submanifolds in string theory are predicted to only depend on the Hamiltonian isotopy class of the Lagrangian submanifold. An isotopy is a transformation of a submanifold inside an ambient manifold which is a homotopy by embeddings. On a symplectic manifold, a symplectic isotopy requires that these embeddings are by symplectomorphisms, and a Hamiltonian isotopy is a symplectic isotopy for which the symplectomorphisms are generated by Hamiltonian functions. Given a Lagrangian submanifold $L$, the condition of being a Lagrangian is preserved under Hamiltonian (in fact symplectic) isotopies, and the collection of all Lagrangian submanifolds which are Hamiltonian isotopic to $L$ is denoted $[L]$, called the Hamiltonian isotopy class of $L$. Lagrangian mean curvature flow and stability condition Given a Riemannian manifold $(M, g)$ and a submanifold $N \subset M$, the mean curvature flow is a differential equation satisfied for a one-parameter family of embeddings $F_t : N \to M$ defined for $t$ in some interval $[0, T)$ with images denoted $N_t = F_t(N)$, where $N_0 = N$. Namely, the family satisfies mean curvature flow if $\partial F_t / \partial t = H_{N_t}$, where $H_{N_t}$ is the mean curvature of the submanifold $N_t$. This flow is the gradient flow of the volume functional on submanifolds of the Riemannian manifold $(M, g)$, and solutions starting from a given submanifold $N$ always exist for a short time. On a Calabi–Yau manifold, if $L$ is a Lagrangian, the condition of being a Lagrangian is preserved when studying the mean curvature flow of $L$ with respect to the Calabi–Yau metric. This is therefore called the Lagrangian mean curvature flow (Lmcf). Furthermore, for a graded Lagrangian $L$, Lmcf preserves the Hamiltonian isotopy class, so $L_t \in [L]$ for all $t$ where the Lmcf is defined. Thomas introduced a conjectural stability condition defined in terms of gradings when splitting into Lagrangian connected sums. Namely a graded Lagrangian $L$ is called stable if whenever it may be written as a graded Lagrangian connected sum $L = L_1 \# L_2$, the average phases satisfy the inequality $\phi(L_1) < \phi(L_2)$. In the later language of Joyce using the notion of a Bridgeland stability condition, this was further explained as follows. An almost-calibrated Lagrangian (which means the lifted phase $\theta$ is taken to lie in the interval $(-\pi/2, \pi/2)$, or some integer shift of this interval) which splits as a graded connected sum of almost-calibrated Lagrangians corresponds to a distinguished triangle $L_1 \to L \to L_2 \to L_1[1]$ in the Fukaya category. The Lagrangian $L$ is stable if for any such distinguished triangle, the above angle inequality holds.
Statement of the conjecture The conjecture as originally proposed by Thomas is as follows: Conjecture: An oriented, graded, almost-calibrated Lagrangian $L$ admits a special Lagrangian representative in its Hamiltonian isotopy class $[L]$ if and only if it is stable in the above sense. Following this, in the work of Thomas–Yau, the behaviour of the Lagrangian mean curvature flow was also predicted. Conjecture (Thomas–Yau): If an oriented, graded, almost-calibrated Lagrangian $L$ is stable, then the Lagrangian mean curvature flow exists for all time and converges to a special Lagrangian representative in the Hamiltonian isotopy class $[L]$. This conjecture was enhanced by Joyce, who provided a more subtle analysis of what behaviour is expected of the Lagrangian mean curvature flow. In particular Joyce described the types of finite-time singularity formation which are expected to occur in the Lagrangian mean curvature flow, and proposed expanding the class of Lagrangians studied to include singular or immersed Lagrangian submanifolds, which should appear in the full Fukaya category of the Calabi–Yau. Conjecture (Thomas–Yau–Joyce): An oriented, graded, almost-calibrated Lagrangian $L$ splits as a graded Lagrangian connected sum $L = L_1 \# \cdots \# L_k$ of special Lagrangian submanifolds $L_i$ with decreasing phase angles $\phi_1 > \cdots > \phi_k$, given by the convergence of the Lagrangian mean curvature flow with surgeries to remove singularities at a sequence of finite times $T_1 < T_2 < \cdots$. At these surgery points, the Lagrangian may change its Hamiltonian isotopy class but preserves its class in the Fukaya category. In the language of Joyce's formulation of the conjecture, the decomposition is a symplectic analogue of the Harder–Narasimhan filtration of a vector bundle, and using Joyce's interpretation of the conjecture in the Fukaya category with respect to a Bridgeland stability condition, the central charge is given by $Z(L) = \int_L \Omega$, the heart of the t-structure defining the stability condition is conjectured to be given by those Lagrangians in the Fukaya category with phase lying in a fixed half-open interval of length $\pi$, and the Thomas–Yau–Joyce conjecture predicts that the Lagrangian mean curvature flow produces the Harder–Narasimhan filtration condition which is required to prove that the data defines a genuine Bridgeland stability condition on the Fukaya category. References Symplectic geometry Conjectures
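For convenience, the main formulas entering the conjecture can be collected in a single display. This is only a summary under one common choice of sign and normalisation conventions; other references normalise the phase angle and the central charge differently.

```latex
% Summary of the objects entering the Thomas-Yau conjecture, under one common
% choice of conventions (signs and phase intervals vary between references).
\begin{align*}
  \Omega|_L &= e^{i\theta}\,\mathrm{vol}_L
    && \text{(phase function of an oriented Lagrangian } L\text{)}\\
  \operatorname{Im}\!\bigl(e^{-i\phi(L)}\,\Omega\bigr)\big|_L &= 0
    && \text{(special Lagrangian condition)}\\
  Z(L) &= \int_L \Omega
    && \text{(central charge of the conjectural stability condition)}\\
  \phi(L_1) &< \phi(L_2)
    && \text{(stability for every graded connected sum } L = L_1 \# L_2\text{)}
\end{align*}
```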
Thomas–Yau conjecture
[ "Mathematics" ]
1,903
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures" ]
70,638,612
https://en.wikipedia.org/wiki/David%20M.%20Brink
David Maurice Brink (20 July 1930, Hobart, Tasmania, Australia – 8 March 2021, Oxford, UK) was an Australian-British nuclear physicist. He is known for the Axel-Brink hypothesis. Education and career Brink matriculated in 1947 at the University of Tasmania, where he graduated with a B.Sc. in physics in 1951. As a Rhodes Scholar he became a graduate student in physics at Magdalen College, Oxford, where he received his PhD in 1955. His doctoral dissertation Some aspects of the interactions of light with matter was supervised by Maurice Pryce. From 1954 to 1958 Brink was a Rutherford Scholar of the Royal Society. For the academic year 1957–1958 he was an instructor at the Massachusetts Institute of Technology (MIT). From 1958 to 1993 he was a Fellow of Balliol College, Oxford. At the University of Oxford he was from 1958 to 1988 a university lecturer and from 1988 to 1993 a Moseley Reader. In 1993 he moved to Trento, Italy. There from 1993 to 1998 he was the vice-director of the European Centre for Theoretical Studies in Nuclear Physics (under the auspices of the European Centre of Technology), as well as, at the University of Trento a professor of the history of physics. Brink was a visiting scientist at Copenhagen's Niels Bohr Institute in 1964. He has been a visiting professor at the Institut de physique nucléaire d'Orsay (1969 and 1981–1982), the University of British Columbia (1975), the Technical University of Munich (1982), the University of Trento (1988), the University of Catania (1988), and Michigan State University (1988–1989). As a theoretical physicist he did important research on "the study of nuclear structure via the shell model and effective interactions, and nuclear reactions via statistical methods." He was elected in 1981 a Fellow of the Royal Society. He received in 1982 the Rutherford Medal of the Institute of Physics. He was made in 1992 a Foreign Member of the Royal Society of Sciences in Uppsala. In 2006 he received the Lise Meitner Prize "for his many contributions to the theory of nuclear structure and nuclear reactions over several decades, including his seminal work on the theory of nuclear masses using Skyrme effective interactions, nuclear giant resonances, clustering in nuclei and quantum and semi-classical theories of heavy-ion scattering and reactions." Selected publications Articles (over 650 citations) (over 950 citations) (over 3050 citations) Books with George Raymond Satchler: Angular Momentum 1962. 2nd edition 1971. 3rd edition. Clarendon Press, Oxford 1993, ISBN 0-19-851759-9 Nuclear Forces, Pergamon Press 1965 German translation: Kernkräfte, WTB Texte, 1971 Semi-classical methods in nucleus-nucleus scattering, Cambridge University Press 1985; 2009 edition as editor with Feodor Karpechine, F. Bary Malik, João Da Providência: with Ricardo A. Broglia: Nuclear superfluidity: pairing in finite systems, Cambridge University Press 2005; e-book ; hbk References 1930 births 2021 deaths 20th-century Australian physicists 21st-century Australian physicists 20th-century British physicists 21st-century British physicists University of Tasmania alumni Alumni of Magdalen College, Oxford Fellows of Balliol College, Oxford Academic staff of the University of Trento Fellows of the Royal Society Nuclear physicists Theoretical physicists People from Hobart
David M. Brink
[ "Physics" ]
702
[ "Theoretical physics", "Theoretical physicists" ]
66,228,158
https://en.wikipedia.org/wiki/Numerical%20analytic%20continuation
In many-body physics, the problem of analytic continuation is that of numerically extracting the spectral density of a Green function given its values on the imaginary axis. It is a necessary post-processing step for calculating dynamical properties of physical systems from Quantum Monte Carlo simulations, which often compute Green function values only at imaginary times or Matsubara frequencies. Mathematically, the problem reduces to solving a Fredholm integral equation of the first kind with an ill-conditioned kernel. As a result, it is an ill-posed inverse problem with no unique solution and where small noise on the input leads to large errors in the unregularized solution. There are different methods for solving this problem, including the maximum entropy method, the average spectrum method and Padé approximation methods. Examples A common analytic continuation problem is obtaining the spectral function $A(\omega)$ at real frequencies from the Green function values at Matsubara frequencies by numerically inverting the integral equation $G(i\omega_n) = \int_{-\infty}^{\infty} d\omega\, \frac{A(\omega)}{i\omega_n - \omega}$, where $\omega_n = (2n+1)\pi/\beta$ for fermionic systems or $\omega_n = 2n\pi/\beta$ for bosonic ones, and $\beta$ is the inverse temperature. This relation is an example of a Kramers–Kronig relation. The spectral function can also be related to the imaginary-time Green function by applying the inverse Fourier transform $G(\tau) = \frac{1}{\beta}\sum_n e^{-i\omega_n \tau}\, G(i\omega_n)$ to the above equation. Evaluating the summation over Matsubara frequencies gives the desired relation $G(\tau) = -\int_{-\infty}^{\infty} d\omega\, \frac{e^{-\tau\omega}}{1 \pm e^{-\beta\omega}}\, A(\omega)$, where the upper sign is for fermionic systems and the lower sign is for bosonic ones. Another example of analytic continuation is calculating the optical conductivity $\sigma(\omega)$ from the current-current correlation function values $\Pi(i\omega_n)$ at Matsubara frequencies. The two are related as follows: $\Pi(i\omega_n) = \int_0^{\infty} \frac{d\omega}{\pi}\, \frac{2\omega^2}{\omega^2 + \omega_n^2}\, \sigma(\omega)$. Software The Maxent Project: Open source utility for performing analytic continuation using the maximum entropy method. Spektra: Free online tool for performing analytic continuation using the average spectrum method. SpM: Sparse modeling tool for analytic continuation of imaginary-time Green's function. See also Analytic continuation Analytic continuation along a curve Fredholm integral equation Green's function Kramers–Kronig relations Quantum Monte Carlo References Mathematical physics Quantum Monte Carlo
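The ill-conditioning mentioned above can be made concrete with a small numerical experiment: discretising the fermionic imaginary-time kernel on modest grids already yields a matrix whose singular values decay so fast that its condition number reaches the limits of double precision. The sketch below is illustrative only; the grids, the inverse temperature and the overall sign convention are arbitrary choices, and it does not use any of the software packages listed in this article.

```python
# Minimal numerical illustration of why analytic continuation is ill-posed:
# the discretised fermionic kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega))
# (taken from the imaginary-time relation above, up to the overall sign) has
# singular values that decay almost exponentially, so inverting it amplifies
# any noise on the input data enormously.
import numpy as np

beta = 10.0                            # inverse temperature (arbitrary choice)
tau = np.linspace(0.0, beta, 51)       # imaginary-time grid
omega = np.linspace(-8.0, 8.0, 201)    # real-frequency grid
domega = omega[1] - omega[0]

# Fermionic kernel matrix; rows index tau, columns index omega.
K = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega))
K *= domega                            # simple quadrature weight

s = np.linalg.svd(K, compute_uv=False)
print("condition number of the kernel matrix: %.3e" % (s[0] / s[-1]))
print("first few singular values:", s[:8])
```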
Numerical analytic continuation
[ "Physics", "Chemistry", "Mathematics" ]
400
[ "Quantum chemistry", "Applied mathematics", "Theoretical physics", "Quantum Monte Carlo", "Mathematical physics" ]
58,156,803
https://en.wikipedia.org/wiki/Bunkers%20%28energy%20in%20transport%29
In energy statistics, marine bunkers and aviation bunkers as defined by the International Energy Agency are the energy consumption of ships and aircraft. Marine and aviation bunkers are reported separately from international bunkers, which represent consumption of ships and aircraft on international routes. International bunkers are subtracted from the energy supplies of a country to calculate its domestic consumption. It is as if international aviation and international shipping did not belong to any country. They are managed by the International Civil Aviation Organization (ICAO) and the International Maritime Organization (IMO). Critics The European Federation for Transport and Environment has only limited confidence in ICAO and IMO's ability to reduce air and sea emissions due to international bunkers and thus to comply with the Paris Climate Agreement. A few figures International marine bunkers amount to 2,466 TWh/a whereas international aviation bunkers amount to 2,163 TWh/a. References See also Bunkering Energy in transport
Bunkers (energy in transport)
[ "Physics" ]
193
[ "Physical systems", "Transport", "Energy in transport" ]
58,158,336
https://en.wikipedia.org/wiki/Brian%20Hibbert
Brian Hibbert is a British engineer. He is best known for his leadership of high-tech, commercial enterprises in the aerospace and defense industry. Hibbert began his career as an engineer with Rolls-Royce Limited where he attained chartered engineer status. From 1974 to 1991, he was an engineer and project manager at Hunting Engineering, and from 1993 to 2001 he was managing director. Hunting Engineering was acquired by INSYS in 2001, where Hibbert continued as Managing Director. He retired as Managing Director from Lockheed Martin in 2007 after they acquired INSYS in 2006. After retirement, Hibbert has spent his time promoting investment in small and medium enterprise. Hibbert is a Chartered Engineer and a Fellow of the Society of Environmental Engineers. In 2004 he was invested as a Commander of the Most Excellent Order of the British Empire. References Living people Environmental engineers Commanders of the Order of the British Empire Fellows of the Society of Environmental Engineers 1947 births
Brian Hibbert
[ "Chemistry", "Engineering" ]
191
[ "Environmental engineers", "Environmental engineering" ]
58,158,412
https://en.wikipedia.org/wiki/Norton%20Core
Norton Core is a discontinued mesh WiFi router that was introduced at the 2017 CES by Symantec (now NortonLifeLock) as a part of their Norton brand. It was marketed as a "Secure WiFi Router," as it protects connected devices by defending the network against online threats and blocking unsafe websites. The network can be controlled through a mobile app where users can view their "security score," set up and manage their router, and manage devices connected to it. It competes with the Bitdefender Box and CUJO AI. TIME rated Norton Core as one of the "25 Best Inventions of 2017." Norton Core faced limited acceptance from the public, and was criticized for requiring an expensive subscription. As a result, the Norton Core router was discontinued on January 31, 2019, with the sale of it ending immediately. Support ended on April 15, 2022, revised from its original January 2021 date. Specifications Norton Core consists of a 2.4 to 5 GHz frequency band, 1 GB of RAM, 4 GB of memory, a 1.7 GHz Qualcomm processor, Bluetooth 4.0 support, four Ethernet ports (1 WAN, 3 LAN), and two USB ports. There were silver and gold color options for the router. Notes External links Website Networking hardware Products introduced in 2017
Norton Core
[ "Engineering" ]
273
[ "Computer networks engineering", "Networking hardware" ]
58,160,311
https://en.wikipedia.org/wiki/Thorium%20monoxide
Thorium monoxide (thorium(II) oxide), is the binary oxide of thorium having chemical formula ThO. In the vapor phase, it is a diatomic molecule. Gaseous (molecular) form Laser ablation of thorium in the presence of oxygen produces vapor-phase thorium monoxide. Thorium monoxide molecules contain a highly polar covalent bond. The effective electric field between the two atoms has been calculated to be about 80 gigavolts per centimeter, one of the largest known internal effective electric fields. Solid form Simple combustion of thorium in air produces thorium dioxide. However, exposure of a thin film of thorium to low-pressure oxygen at medium temperature forms a rapidly growing layer of thorium monoxide under a more-stable surface coating of the dioxide. At extremely high temperatures, thorium dioxide can convert to the monoxide either by a comproportionation reaction (equilibrium with liquid thorium metal) above or by simple dissociation (evolution of oxygen) above . References Oxides Thorium compounds Rock salt crystal structure
Thorium monoxide
[ "Chemistry" ]
220
[ "Inorganic compounds", "Oxides", "Inorganic compound stubs", "Salts" ]
58,162,731
https://en.wikipedia.org/wiki/Bin%20picking
Bin picking (also referred to as random bin picking) is a core problem in computer vision and robotics. The goal is to have a robot with sensors and cameras attached to it pick up known objects with random poses out of a bin using a suction gripper, parallel gripper, or other kind of robot end effector. Early work on bin picking made use of photometric stereo in recovering the shapes of objects and determining their orientation in space. Amazon previously held a competition focused on bin picking referred to as the "Amazon Picking Challenge", which ran from 2015 to 2017. The challenge tasked entrants with building their own robot hardware and software that could attempt simplified versions of the general task of picking and stowing items on shelves. The robots were scored by how many items were picked and stowed in a fixed amount of time. The first Amazon Robotics Challenge was won by a team from TU Berlin in 2015, followed by a team from TU Delft and the Dutch company "Fizyr" in 2016. The last Amazon Robotics Challenge was won by the Australian Centre for Robotic Vision at Queensland University of Technology with their robot named Cartman. The Amazon Robotics/Picking Challenge was discontinued following the 2017 competition. Although there can be some overlap, bin picking is distinct from "each picking" and the bin packing problem. See also 3D pose estimation Bowl feeder References Robotics Robotics engineering
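To make the task concrete, the toy sketch below shows one simple ingredient a bin-picking pipeline might contain: choosing a candidate suction-grasp point from a depth image by looking for the highest, locally flat surface patch. Both the heuristic and the synthetic scene are invented for illustration; this is not how the Amazon challenge entries or commercial systems actually work, which rely on object detection, pose estimation, learned grasp-quality models and collision checking.

```python
# Toy illustration of a made-up suction-grasp heuristic for bin picking:
# pick the point in the bin that is closest to the camera (top of the pile)
# and whose local surface is nearly flat, so a suction cup could seal there.
import numpy as np

def suction_candidate(depth, flatness_thresh=0.005):
    """depth: 2-D array of distances from the camera (smaller = closer/higher).
    Returns (row, col) of a candidate grasp pixel, or None if nothing is flat."""
    gy, gx = np.gradient(depth)                 # local surface slope
    flat = np.hypot(gx, gy) < flatness_thresh   # nearly flat regions
    if not flat.any():
        return None
    # Among flat pixels, choose the one closest to the camera.
    masked = np.where(flat, depth, np.inf)
    return np.unravel_index(np.argmin(masked), depth.shape)

# Synthetic example: a flat bin floor 1 m away with a box whose top is 0.8 m away.
depth = np.full((120, 160), 1.0)
depth[40:80, 60:110] = 0.8
print(suction_candidate(depth))   # a pixel on the box top, e.g. (41, 61)
```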
Bin picking
[ "Technology", "Engineering" ]
281
[ "Robotics", "Automation", "Computer engineering", "Robotics engineering" ]
74,896,529
https://en.wikipedia.org/wiki/ISO/IEC%2019790
ISO/IEC 19790 is an ISO/IEC standard for security requirements for cryptographic modules. It addresses a wide range of issues regarding their implementation, including specifications, interface definitions, authentication, operational and physical security, configuration management, testing, and life-cycle management. The first version of ISO/IEC 19790 was derived from the U.S. government computer security standard FIPS 140-2, Security Requirements for Cryptographic Modules. , the current version of the standard is ISO/IEC 19790:2012. This replaces a previous version, ISO/IEC 19790:2006, which is now obsolete. Use of ISO/IEC 19790 is referenced in the U.S. government standard FIPS 140-3. As an ISO/IEC standard, access to it requires payment, typically on a per-user basis. ISO/IEC 24759 is a related standard for the testing of cryptographic modules, the first version of which derived from NIST's Derived Test Requirements for FIPS PUB 140-2, Security Requirements for Cryptographic Modules. References Cryptography standards Computer security standards
ISO/IEC 19790
[ "Technology", "Engineering" ]
222
[ "Computer security standards", "Computer standards", "Cybersecurity engineering" ]
74,908,046
https://en.wikipedia.org/wiki/Binary%20Black%20Hole%20Grand%20Challenge%20Alliance
The Binary Black Hole Grand Challenge Alliance (BBH Challenge Alliance) was a scientific collaboration of international physics institutes and research groups dedicated to simulating the sources and predicting the waveforms for gravitational waves, in anticipation of gravitational radiation experiments such as LIGO. History The BBH Challenge Alliance was established in 1993. This was an alliance of numerical relativity groups engaged in a friendly competition to tackle the grand challenge of simulating binary black hole collisions for the purpose of understanding gravitational wave signatures that would be detected by experiments such as LIGO. References External links Binary Black Hole Grand Challenge Alliance project page (archived) Gravitational-wave astronomy Astronomy in the United States Organizations based in Texas Organizations established in 1993
Binary Black Hole Grand Challenge Alliance
[ "Physics", "Astronomy" ]
138
[ "Astronomy stubs", "Astronomical sub-disciplines", "Gravitational-wave astronomy", "Astrophysics" ]
64,761,242
https://en.wikipedia.org/wiki/Zolotarev%20polynomials
In mathematics, Zolotarev polynomials are polynomials used in approximation theory. They are sometimes used as an alternative to the Chebyshev polynomials where accuracy of approximation near the origin is of less importance. Zolotarev polynomials differ from the Chebyshev polynomials in that two of the coefficients are fixed in advance rather than allowed to take on any value. The Chebyshev polynomials of the first kind are a special case of Zolotarev polynomials. These polynomials were introduced by Russian mathematician Yegor Ivanovich Zolotarev in 1868. Definition and properties Zolotarev polynomials of degree in are of the form where is a prescribed value for and the are otherwise chosen such that the deviation of from zero is minimum in the interval . A subset of Zolotarev polynomials can be expressed in terms of Chebyshev polynomials of the first kind, . For then For values of greater than the maximum of this range, Zolotarev polynomials can be expressed in terms of elliptic functions. For , the Zolotarev polynomial is identical to the equivalent Chebyshev polynomial. For negative values of , the polynomial can be found from the polynomial of the positive value, The Zolotarev polynomial can be expanded into a sum of Chebyshev polynomials using the relationship In terms of Jacobi elliptic functions The original solution to the approximation problem given by Zolotarev was in terms of Jacobi elliptic functions. Zolotarev gave the general solution where the number of zeroes to the left of the peak value () in the interval is not equal to the number of zeroes to the right of this peak (). The degree of the polynomial is . For many applications, is used and then only need be considered. The general Zolotarev polynomials are defined as where is the Jacobi eta function is the incomplete elliptic integral of the first kind is the quarter-wave complete elliptic integral of the first kind. That is, is the Jacobi elliptic modulus is the Jacobi elliptic sine. The variation of the function within the interval [−1,1] is equiripple except for one peak which is larger than the rest. The position and width of this peak can be set independently. The position of the peak is given by where is the Jacobi elliptic cosine is the Jacobi delta amplitude is the Jacobi zeta function is as defined above. The height of the peak is given by where is the incomplete elliptic integral of the third kind is the position on the left limb of the peak which is the same height as the equiripple peaks. Jacobi eta function The Jacobi eta function can be defined in terms of a Jacobi auxiliary theta function, where, Applications The polynomials were introduced by Yegor Ivanovich Zolotarev in 1868 as a means of uniformly approximating polynomials of degree on the interval [−1,1]. Pafnuty Chebyshev had shown in 1858 that could be approximated in this interval with a polynomial of degree at most with an error of . In 1868, Zolotarev showed that could be approximated with a polynomial of degree at most , two degrees lower. The error in Zolotarev's method is given by, The procedure was further developed by Naum Achieser in 1956. Zolotarev polynomials are used in the design of Achieser-Zolotarev filters. They were first used in this role in 1970 by Ralph Levy in the design of microwave waveguide filters. Achieser-Zolotarev filters are similar to Chebyshev filters in that they have an equal ripple attenuation through the passband, except that the attenuation exceeds the preset ripple for the peak closest to the origin. 
Zolotarev polynomials can be used to synthesise the radiation patterns of linear antenna arrays, first suggested by D.A. McNamara in 1985. The work was based on the filter application with beam angle used as the variable instead of frequency. The Zolotarev beam pattern has equal-level sidelobes. References Bibliography (corrections July 2000). Polynomials Approximation theory
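As a small numerical illustration of the relationship to Chebyshev polynomials noted above, the sketch below checks the equiripple behaviour of a Chebyshev polynomial of the first kind, which corresponds to the degenerate case of a Zolotarev polynomial in which the prescribed coefficient is taken to be zero (an assumption made here for illustration; the general Zolotarev construction with Jacobi elliptic functions is not implemented).

```python
# Numerical check of the equiripple property of T_n on [-1, 1], i.e. the
# Chebyshev special case of the Zolotarev polynomials. The general case with a
# nonzero prescribed coefficient and the larger single peak is not implemented.
import numpy as np
from numpy.polynomial import chebyshev as C

n = 7
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0                       # coefficients of T_7 in the Chebyshev basis

x = np.linspace(-1.0, 1.0, 20001)
y = C.chebval(x, coeffs)

# Interior local extrema of T_n should all have magnitude 1 (equiripple).
interior = (np.diff(np.sign(np.diff(y))) != 0).nonzero()[0] + 1
print("number of interior extrema:", interior.size)          # expect n - 1 = 6
print("extremal values:", np.round(np.abs(y[interior]), 6))  # all close to 1
```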
Zolotarev polynomials
[ "Mathematics" ]
846
[ "Approximation theory", "Polynomials", "Mathematical relations", "Approximations", "Algebra" ]