Dataset columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
69,171,533
https://en.wikipedia.org/wiki/COVID%20Moonshot
The COVID Moonshot is a collaborative open-science project started in March 2020 with the goal of developing an unpatented oral antiviral drug to treat SARS-CoV-2, the virus causing COVID-19. COVID Moonshot researchers are targeting the proteins the virus needs to produce functional new viral proteins. They are particularly interested in proteases such as 3C-like protease (Mpro), a coronavirus nonstructural protein that mediates the cleavage of viral polyproteins, a step required for replication. COVID Moonshot may be the first open-science community effort for the development of an antiviral drug. Hundreds of scientists around the world, from academic and industrial organizations, have shared their expertise, resources, data, and results to more rapidly identify, screen, and test candidate compounds for the treatment of COVID-19. Project history Development of antiviral drugs is a complicated and time-consuming multistage process. The public sharing of information in the early stages of genome identification and protein structure identification accelerated the search for COVID-19 treatments and established a basis for the COVID Moonshot initiative. Genome identification On January 3, 2020, Chinese virologist Yong-Zhen Zhang of Fudan University and the Shanghai Public Health Clinical Center received a test sample from Wuhan, China, where patients had a pneumonia-like illness. By January 5, Zhang and his team had sequenced a virus from the sample and deposited its genome in GenBank, an international research database maintained by the United States National Center for Biotechnology Information. By January 11, 2020, Edward C. Holmes of the University of Sydney had Zhang's permission to publicly release the genome. Protein structures With that information, structural biologists worldwide began examining the virus's protein structures. Investigators from the Center for Structural Genomics of Infectious Diseases (CSGID) and other groups began working to characterize the 3D structure of the proteins, sharing their results via the Protein Data Bank (PDB). Scientists were able to identify a key protein in the virus: 3C-like protease (Mpro). Crucial early X-ray crystallography was done by Zihe Rao and Haitao Yang in Shanghai, China. On January 26, 2020, they submitted a structure of Mpro bound to an inhibitor to the Protein Data Bank. It was released on February 5, 2020. Rao began coordinating with David Stuart and Martin Walsh at Diamond Light Source, the United Kingdom's synchrotron facility. The Diamond group was able to develop and release a high-resolution crystal structure of unbound Mpro. Approaches to accelerating drug development have been suggested, but identification of proteins and drug development commonly take years. It was possible to sequence the virus and characterize key proteins extremely quickly because the new virus was somewhat familiar: it had a 70–80% sequence similarity to the proteins in the SARS-CoV coronavirus that caused the SARS outbreak in 2002. Researchers could therefore build on what was already known about previous coronaviruses. Possible targets Identifying and recreating viral proteins in the lab is a first step to developing drugs to attack them and vaccines to protect against them. The COVID Moonshot initiative follows an approach to structure-based drug design in which researchers attempt to find a molecule that will bind tightly to a drug target and prevent it from carrying out its normal activities. 
In the case of SARS-CoV-2, the coronavirus enters the body and then replicates its genomic RNA, building new copies that are incorporated into new, rapidly spreading viral particles. Protease enzymes, or proteases, are often desirable drug targets because proteases are important in the formation and spread of viral particles. Inhibition of viral proteases can inhibit the virus's ability to replicate itself and spread. 3C-like protease (Mpro), a coronavirus nonstructural protein, is one of the main proteins involved in the replication and transcription of SARS-CoV-2. By understanding Mpro's structure and the ways in which it functions, scientists can identify possible candidates to preemptively bind to Mpro and block its activity. Mpro is not the only possible target for drug design, but it is a highly interesting one. Fragment screening In collaboration with the University of Oxford and the Weizmann Institute of Science in Rehovot, Israel, the facilities at Diamond Light Source were used to develop fragment screens utilizing crystallography and mass spectrometry. Nir London's laboratory at the Weizmann Institute contributed technology for identifying compounds that bind irreversibly to target proteins. Frank von Delft and the Nuffield Department of Medicine at the University of Oxford provided technology for rapid crystallographic fragment screening. Researchers examined thousands of possible fragments from diverse screening libraries and identified at least 71 protein–ligand crystal structures involving chemical fragments with the potential to bind to Mpro. These results were immediately made available online. Designing candidates The open release of the data and its announcement on Twitter on March 7, 2020, marked a critical point in the formation of COVID Moonshot. The scientists shared their information and challenged chemists worldwide to use it to design potential antiviral drug candidates that would be openly available. They expected a couple of hundred submissions. By May 2020 more than 4,600 design submissions for potential inhibitors had been received. By January 2021, the number of unique compound designs had risen to 14,000. In response, those involved began to shift from a spontaneous virtual collaboration to a larger and more organized network of partners with specialized skills and well-articulated goals. The design submissions were stored in Collaborative Drug Discovery's CDD Vault, a database used for large-scale management of chemical structures, experimental protocols, and experimental results. Alpha Lee and Matt Robinson brought computational expertise from PostEra to the project. PostEra used techniques from artificial intelligence and machine learning to develop analysis tools for computational drug discovery, chemical synthesis, and biochemical assays. When COVID Moonshot's appeal resulted in not hundreds but thousands of responses, they built a platform capable of triaging large numbers of compounds and designing routes for their synthesis. Supercomputer access was provided through the COVID-19 High Performance Computing (HPC) Consortium, accelerating the speed at which designs could be examined and compared. The distributed supercomputing initiative Folding@home has carried out multiple sprints to model novel protein structures and target desirable structures as a part of COVID Moonshot. Many of the criteria for selecting drug candidates were determined by the group's goals. An ideal drug candidate would be effective in treating COVID-19. 
It also would be easily and cheaply made, so that as many countries and companies as possible could produce and distribute it. The ingredients to make it should be easy to obtain, and the processes involved should be as simple as possible. A drug should not require special handling (such as refrigeration), and it should be easy to administer (a pill rather than an injection). In a matter of months, researchers were able to identify more than 200 promising crystal structure designs and to begin creating and testing them in the lab. Chris Schofield at the University of Oxford synthesized and tested four of the most promising of the novel designed peptides to demonstrate their ability to block and inhibit Mpro. Freely available data from COVID Moonshot has also been used to assess the predictive ability of docking scores in suggesting the potency of SARS-CoV-2 Mpro inhibitors (a minimal sketch of such an assessment follows this article text). To go beyond the design phase, possible drug candidates must be created and tested for both effectiveness and safety in animal and human trials. The Wellcome Trust has committed key initial funding to support this process. Synthesis of candidates is being carried out in parallel at sites including Ukraine (Enamine), India (Sai Life Sciences), and China (WuXi). Annette von Delft of the University of Oxford and the National Institute for Health Research (NIHR)'s Oxford Biomedical Research Centre (BRC) is leading pre-clinical small molecule research related to COVID Moonshot. Potential for antiviral treatments COVID Moonshot anticipates selecting three pre-clinical candidates by March 2022, to be followed by preclinical safety and toxicology testing and identification of needed chemistry, manufacturing and control (CMC) steps. Based on that data, the most promising candidate will be chosen. Phase-1 clinical trials, the first stage of testing in human subjects, are projected to begin by June 2023. Unlike a vaccine, which increases immunity and protects against catching an infectious disease, an antiviral drug treats someone who is already sick by attacking the virus and countering its effects, potentially lessening both symptoms and further transmission. Mpro is present in other coronaviruses that cause disease, so an antiviral drug that targets Mpro may also be effective against coronaviruses such as SARS and MERS and future pandemics. Mpro does not mutate easily, so it is less likely that variants of the virus will adapt to avoid the effects of such a drug. Open science Among the many participants in the COVID Moonshot project are the University of Oxford, University of Cambridge, Diamond Light Source, Weizmann Institute of Science in Rehovot, Israel, Temple University, Memorial Sloan Kettering Cancer Center, PostEra, University of Johannesburg, and the Drugs for Neglected Diseases initiative (DNDi) in Switzerland. Support for the project has come from a variety of philanthropic sources including the Wellcome Trust, COVID-19 Therapeutics Accelerator (CTA), Bill & Melinda Gates Foundation, LifeArc, and through crowdsourcing. Because COVID Moonshot is based in open science and shared open data, any drug that the project develops can be manufactured and sold by whoever wishes to produce it, worldwide. Countries that are unable to buy or manufacture expensive licensed drugs would therefore have the opportunity to produce their own supplies, and competition between suppliers is likely to result in greater availability and reduced prices for consumers. 
This would circumvent issues around the time needed to vaccinate people worldwide; as of July 2021, it was estimated that at current rates this was likely to take several years. Inequities in distribution increase both the spread of the virus and the risk that new and more dangerous variants will emerge. Supporters of the COVID Moonshot initiative have argued that open-science drug discovery is an essential model for combating both current and future pandemics, and that the prevention of the spread of pandemic diseases is an essential public service. References External links Antiviral drugs Collaborative projects Genome databases International medical and health organizations Open data Open science Proteins SARS-CoV-2 Scientific organisations based in England
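The docking-score assessment mentioned above lends itself to a minimal illustration. The sketch below rank-correlates docking scores against measured potencies; the scores and IC50 values are invented placeholders, not actual COVID Moonshot data, and the only dependency assumed is SciPy's spearmanr.

```python
# Hypothetical sketch: do docking scores track measured potency?
# The numbers below are invented placeholders, not Moonshot results.
from scipy.stats import spearmanr

# More negative docking score = predicted tighter binding;
# lower IC50 (micromolar) = more potent measured inhibition.
docking_scores = [-9.1, -8.4, -7.9, -7.2, -6.5]
ic50_um        = [ 0.3,  1.1,  0.8,  4.0, 12.0]

rho, p_value = spearmanr(docking_scores, ic50_um)
# A strongly positive rho means better (more negative) scores go with
# lower IC50s, i.e. the docking ranking has some predictive value.
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```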
COVID Moonshot
[ "Chemistry", "Biology" ]
2,212
[ "Biomolecules by chemical classification", "Antiviral drugs", "Molecular biology", "Proteins", "Biocides" ]
70,662,126
https://en.wikipedia.org/wiki/Berkelium%28III%29%20chloride
Berkelium(III) chloride, also known as berkelium trichloride, is a chemical compound with the formula BkCl3. It is a water-soluble green salt with a melting point of 603 °C. This compound forms a hexahydrate, BkCl3·6H2O. Preparation and reactions This compound was first prepared in 1970 by reacting hydrogen chloride gas with berkelium(IV) oxide or berkelium(III) oxide at 520 °C: Bk2O3 + 6HCl → 2BkCl3 + 3H2O Berkelium(III) chloride reacts with beryllocene to produce berkelocene (Bk(C5H5)3). It also reacts with oxalic acid to produce berkelium oxalate. This reaction is used to purify the compound, by treating the oxalate with hydrochloric acid. Structure Anhydrous berkelium(III) chloride has a hexagonal crystal structure, is isostructural to uranium trichloride, and has the Pearson symbol hP6. When heated to its melting point, it converts to an orthorhombic phase. The hexahydrate, however, has a monoclinic crystal structure and is isostructural to americium trichloride hexahydrate, with the lattice constants a = 966 pm, b = 654 pm and c = 797 pm. This hexahydrate consists of [BkCl2(OH2)6]+ ions and Cl− ions. Complexes A caesium sodium berkelium chloride with the formula Cs2NaBkCl6 is known; it is produced by the reaction of berkelium(III) hydroxide, hydrochloric acid, and caesium chloride. References Berkelium compounds Chlorides Actinide halides
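The preparation equation above lends itself to a quick mass-balance check. A minimal sketch follows, using approximate atomic masses and taking berkelium as 247 (its most stable isotope) — the numbers are illustrative, not measured data.

```python
# Mass balance for Bk2O3 + 6 HCl -> 2 BkCl3 + 3 H2O,
# using approximate atomic masses (Bk-247 assumed).
MASS = {"Bk": 247.0, "O": 16.00, "H": 1.008, "Cl": 35.45}

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol from an {element: count} dict."""
    return sum(MASS[el] * n for el, n in formula.items())

bk2o3 = molar_mass({"Bk": 2, "O": 3})   # ~542.0 g/mol
hcl   = molar_mass({"H": 1, "Cl": 1})   # ~36.46 g/mol
bkcl3 = molar_mass({"Bk": 1, "Cl": 3})  # ~353.35 g/mol
h2o   = molar_mass({"H": 2, "O": 1})    # ~18.02 g/mol

reactants = bk2o3 + 6 * hcl
products  = 2 * bkcl3 + 3 * h2o
print(f"reactants {reactants:.2f} g/mol, products {products:.2f} g/mol")
assert abs(reactants - products) < 1e-6  # the equation balances
```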
Berkelium(III) chloride
[ "Chemistry" ]
392
[ "Chlorides", "Inorganic compounds", "Salts" ]
70,663,911
https://en.wikipedia.org/wiki/Columbamine
Columbamine is an organic heterotetracyclic compound and a berberine alkaloid; it is also known as dehydroisocorypalmine. It has a molecular weight of 338.4 g/mol and its monoisotopic mass is 339.1470581677 daltons. Columbamine is soluble in DMSO and is a solid at room temperature. Occurrence Columbamine has been found in Plumeria rubra. References Tetracyclic compounds Alkaloids Heterocyclic compounds with 4 rings Methoxy compounds Nitrogen heterocycles Quaternary ammonium compounds
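The quoted monoisotopic mass can be cross-checked with a short calculation. The sketch below assumes the neutral formula C20H21NO4 (an assumption — the article's own formula did not survive extraction, and columbamine is also commonly written as the cation C20H20NO4+) and standard most-abundant isotope masses; with that formula it reproduces the quoted value.

```python
# Monoisotopic mass from a molecular formula, using most-abundant
# isotope masses in daltons. The formula C20H21NO4 is an assumption
# made for illustration.
ISOTOPE_MASS = {
    "C": 12.0,           # carbon-12 (exact by definition)
    "H": 1.0078250319,   # hydrogen-1
    "N": 14.0030740052,  # nitrogen-14
    "O": 15.9949146221,  # oxygen-16
}

def monoisotopic_mass(formula: dict) -> float:
    return sum(ISOTOPE_MASS[el] * n for el, n in formula.items())

mass = monoisotopic_mass({"C": 20, "H": 21, "N": 1, "O": 4})
print(f"{mass:.7f} Da")  # ~339.1470582, matching the quoted 339.1470581677
```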
Columbamine
[ "Chemistry" ]
145
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
70,665,800
https://en.wikipedia.org/wiki/Abikoviromycin
Abikoviromycin is an antiviral antibiotic piperidine alkaloid with the molecular formula C10H11NO, produced by the bacteria Streptomyces abikoensis and Streptomyces rubescens. References Further reading Antibiotics Heterocyclic compounds with 3 rings Tetrahydropyridines Epoxides
Abikoviromycin
[ "Biology" ]
76
[ "Antibiotics", "Biocides", "Biotechnology products" ]
70,673,811
https://en.wikipedia.org/wiki/Alkylammonium
In organic chemistry, alkylammonium refers to cations of the formula [R4−nNHn]+, where R = alkyl and 0 ≤ n ≤ 3. The cations with four alkyl substituents, i.e., [R4N]+ (n = 0), are further classified as quaternary ammonium cations and are discussed more thoroughly in the article with that title. In contrast to quaternary ammonium cations, the other members of the alkylammonium family do not exist appreciably in the presence of strong base because they undergo deprotonation, yielding the parent amine. The alkylammonium cations containing N–H centers are colorless and often hydrophilic. Occurrence Alkylammonium species are pervasive since virtually all alkyl amines protonate near neutral pH. Examples of technologically significant alkylammonium compounds are ethylenediamine dihydroiodide, which is used as a source of iodide in animal feed, and dimethylammonium and diethylammonium, which are generated in the production of dimethyldithiocarbamate and diethyldithiocarbamate: 2R2NH + CS2 → [R2NH2]+[R2NCS2]− References Ammonium compounds Organonitrogen compounds
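The claim that virtually all alkyl amines protonate near neutral pH follows from the Henderson–Hasselbalch equation. A minimal sketch, assuming a conjugate-acid pKa of about 10.6 for methylammonium (a typical literature value, used here as an assumption):

```python
# Fraction of an amine present as the alkylammonium cation at a given pH,
# from the Henderson-Hasselbalch equation:
#   [BH+] / ([BH+] + [B]) = 1 / (1 + 10**(pH - pKa))
# where pKa is that of the conjugate acid BH+. The value 10.6 for
# methylammonium is a typical literature figure, assumed here.

def protonated_fraction(ph: float, pka: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (7.0, 10.6, 13.0):
    f = protonated_fraction(ph, pka=10.6)
    print(f"pH {ph:4.1f}: {100 * f:6.2f}% methylammonium")
# At pH 7 the cation dominates (~99.97%); only in strong base does the
# free amine take over, matching the deprotonation behavior described above.
```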
Alkylammonium
[ "Chemistry" ]
282
[ "Organic compounds", "Organonitrogen compounds", "Ammonium compounds", "Salts" ]
70,675,314
https://en.wikipedia.org/wiki/Gated%20drug%20delivery%20systems
Gated drug delivery systems are a method of controlled drug release that centers on physical molecules that cover the pores of drug carriers until an external stimulus triggers their removal. Gated drug delivery systems are a recent innovation in the field of drug delivery and are a promising candidate for future drug delivery systems that can target certain sites without leakage or off-target effects in normal tissues. This new technology has the potential to be used in a variety of tissues over a wide range of disease states and has the added benefit of protecting healthy tissues and reducing systemic side effects. Uses Gated drug delivery systems are an emerging concept that has drawn a lot of attention for its wide variety of potential applications in the medical field. The abnormal physiological conditions found within the tumor environment provide a breadth of options that could be used for externally stimulating these systems to release cargo. Gated systems in cancer therapy also have the added effect of reducing off-target effects and decreasing leakage and delivery of drug to normal tissues. Another use for this technology could be antibacterial regulation. These systems could be used to limit bacterial resistance as well as accumulation of antibiotics within the body. Antibacterial regulation potentially opens the door to using gated systems in theranostics, in which the system is able to detect an issue and then provide a therapeutic response. There is also potential for inhalable pulmonary drug delivery. With an increase in respiratory disease cases, the need for a drug delivery system that can be targeted to the lungs and provide sustained release is becoming more pressing. This type of system would be applicable to patients experiencing asthma, pneumonia, obstructive pulmonary disease, and a number of other lung-related diseases. History The history of gated drug delivery systems starts in the mid-1960s, when the concept of zero-order controlled drug delivery was first conceived. Researchers raced to find a drug delivery platform that would have perfectly sustained drug release. These efforts were initially on the macroscopic level, with some of the first controlled drug delivery (CDD) devices being an ophthalmic insert, an intrauterine device, and a skin patch. In the 1970s the drug delivery field shifted from macroscopic systems and started to delve into microscopic systems. Ideas such as steroid-loaded poly(lactic-co-glycolic acid) (PLGA) microparticles came into existence. The next major jump came in the 1980s in the form of nanotherapeutics. Some major technological advances allowed this next generation of drug delivery systems to come along: PEGylation, active targeting, and the enhanced permeation and retention (EPR) effect. Some of the issues seen with earlier renditions of nanoparticle drug delivery were off-target effects from drug being delivered to normal tissue, delivery systems that were not highly controllable, and suboptimal accumulation of drug in the targeted area. This is when the development of "smart drug delivery" originated. Encapsulated within the idea of smart drug delivery is the use of gated delivery systems. Researchers discovered that certain materials could be loaded and capped to prevent premature drug release. The caps could subsequently be removed using different external stimuli. 
This created a class of drug delivery systems that were able to solve a number of problems exhibited by normal nanoparticle drug delivery systems. These smart drug delivery systems are able to deliver the drug with minimal leakage, can be actively or passively targeted to different areas within the body, and will only release drug in the presence of certain triggers, creating a sustained local response and accumulation of drug at the disease area. Scaffold fabrication There are many different materials and fabrication methods that can be used to produce gated drug delivery scaffolding. In general, porous materials, such as mesoporous silica nanoparticles, are used because of their expansive surface area, large loading capacity, and porous structures. These characteristics make it possible to load a variety of molecules that vary greatly in size. Mesoporous silica nanoparticles Mesoporous silica nanoparticles (MSNs) are considered to be one of the most widely used systems for drug delivery. MSNs have some of the characteristic features of gated systems, such as being porous and having a high loading capacity, but they also exhibit some special features such as increased biocompatibility and chemical inertness. These delivery systems are composed of two parts: the inorganic scaffold and the molecular gates. In a study conducted by the Kong Lab at Deakin University in Australia, the researchers generated MSNs by adding tetraethyl orthosilicate (TEOS) to aqueous cetyltrimethylammonium bromide (CTAB). The MSNs they created had a surface area of 363 m²/g, an average pore size of 2.59 nm, and a pore volume of 0.33 cm³/g. Mesoporous carbon nanoparticles Mesoporous carbon nanoparticles (MCNs) are similar to MSNs. They have a similar structure and share key physical properties and characteristics. However, it has been found that MCNs can exhibit lower toxicity than MSNs. To date, not much research has been done on MCNs. The Du lab, based in Nanjing, China, made MSN templates using the common method of combining CTAB and TEOS. The researchers then dispersed the MSN templates in a glucose solution, followed by autoclaving the mixture to produce a reaction. The product was then subjected to carbonization at 900 °C to generate the MCNs. The researchers found that the MCNs had a surface area of 1575 m²/g, a pore size of 2.2 nm, and an average diameter of 115 nm. External stimuli There are a number of external triggers that can be used to release cargo from gated delivery systems. Examples include pH-, redox-, enzyme-, light-, temperature-, magnetic-, ultrasound-, and small-molecule-responsive gated systems. pH One of the most common triggers for drug delivery systems is pH. This stimulus is abundantly used in cancer therapies because the tumor microenvironment is acidic: drug can be introduced to the body but not deployed until it encounters the tumor microenvironment, which is likely why pH-triggered systems are so common. There are a few approaches to making these systems. One method is using linkages that dissolve at certain pH levels. As the system enters an acidic environment, the linkages that hold the gates onto the porous scaffold are hydrolyzed and the cargo can be released. Examples of pH-sensitive linkages are imines, amides, esters, and acetals. Another method that can be used is protonation. 
This method relies on electrostatic interactions between the gate molecule and the porous scaffold. The two will be linked together with a certain molecule, for example, acetylated carboxymethyl. When the system reaches an acidic environment, protonation of the molecule is initiated. The protonation causes a disruption in the linkage and the cargo can be released. Redox Redox reactions are also used for gated delivery systems. Within cells and the bloodstream there are several reducing agents that can be used to trigger drug release in gated systems. The most common reducing agent used in gated delivery systems is glutathione (GSH), because GSH has been determined to be the most abundant reducing agent in the body. GSH also has significantly different concentrations between the intracellular and extracellular environments, making it easier to target either environment without the system being triggered by the other. Furthermore, GSH is found in higher concentration within tumor cells. This provides another way to have sustained and local release of drug at tumor sites. There are generally two different mechanisms for this type of gated system. One method is to cleave disulfide bonds. Another method is to cleave bonds through the use of reactive oxygen species (ROS). Bonds that can be cleaved by ROS are generally thioketals, ketals, and diselenides. Enzyme Enzyme-responsive gated materials are another class of gated delivery systems. In these scenarios, enzymes can trigger release of the gates from the scaffolds in drug delivery systems. The mechanism for this type of gate is that certain linkages are used that can be hydrolyzed by select enzymes. The two most popular choices are protease and hyaluronidase. An advantage of using enzyme-responsive triggers is that there is a large amount of substrate specificity, and the enzymes are able to act on their target with high selectivity, even under mild conditions. Another advantage of this system is that enzymes are found throughout the entire body and work on almost all biological processes, so the delivery system could potentially be activated in any part of the body during many points within a single process. One study done by the Martinez-Manez lab in Valencia, Spain aimed to generate MSNs linked to poly-L-glutamic acid (PGA) gates through peptide bonds. The trigger for this system was the presence of a lysosomal proteolytic enzyme (protease), in this case pronase. The researchers found that in the absence of pronase, the system released less than 20% of its cargo in 24 hours; in the presence of pronase, 90% of the cargo was released within 5 hours (a simple kinetic reading of these two numbers is sketched after this article text). Magnetic and temperature Within the topic of gated drug delivery systems, utilizing magnetic forces generally goes hand in hand with temperature stimulus. In the phenomenon of magnetic hyperthermia, superparamagnetic nanoparticles reorient themselves in response to heat generated by an alternating magnetic field (AMF). This concept has been utilized within the drug delivery field, wherein gatekeepers are magnetically linked to the scaffolding and, upon the application of heat, reorient and allow for the release of drug. This particular method has not been researched as heavily, given the drawback that high energy is needed to produce the AMF and uncap the system. However, the Vallet-Regi lab based in Madrid, Spain decided to investigate the possibility of using magnetic gates bound to the scaffold using DNA. 
The lab generated oligonucleotide-modified superparamagnetic mesoporous silica nanoparticles. They capped the scaffolding using iron oxide nanoparticles that carried DNA complementary to the scaffold's oligonucleotide sequence. The lab found that they were able to cap the system because the two DNA strands came together and created a double strand. Upon heating the system using an AMF, the DNA strands detached, the system became uncapped, and the drug was able to be released. Furthermore, the lab found that this linkage was reversible: as the temperature was reduced, the DNA was able to re-link to its complementary half. This study illustrated the possibility of a drug delivery system that could be remotely triggered and exhibit an on-off switch. Electrostatic Researchers started investigating electrostatic gating because some triggered drug delivery systems on the market are not entirely practical; the main complaint is that continual external stimulation is required for the therapy to function. To address this complaint, the Grattoni lab in Houston, Texas started working on a drug delivery system that utilized electrostatic gating. The researchers generated a silicon carbide-coated nanofluidic membrane that would release drug in a controlled fashion when a buried electrode was exposed to low-intensity voltage. The researchers found that their device successfully released drug, in such a way that the release was proportional to the applied voltage. They also found that the device was chemically inert, making it feasible for long-term implantation. References Medicinal chemistry Biophysics
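The pronase experiment described above (under 20% cargo release in 24 hours without the enzyme, about 90% within 5 hours with it) can be framed as first-order release kinetics. The sketch below fits rate constants to those two reported points — a simplifying assumption, since the study did not claim a first-order model.

```python
# First-order release model: fraction released f(t) = 1 - exp(-k t).
# Fitting k to the two points reported for the MSN-PGA system:
#   gate intact (no pronase): f(24 h) ~ 0.20
#   gate opened (pronase):    f(5 h)  ~ 0.90
# Treating these as exact, and the kinetics as first order, is an
# assumption made for illustration only.
import math

k_closed = -math.log(1 - 0.20) / 24  # ~0.0093 per hour (leakage)
k_open   = -math.log(1 - 0.90) / 5   # ~0.46 per hour (triggered)

print(f"k_closed = {k_closed:.4f} /h, k_open = {k_open:.3f} /h")
print(f"rate ratio on triggering: ~{k_open / k_closed:.0f}x")

for t in (1, 5, 24):
    closed = 1 - math.exp(-k_closed * t)
    opened = 1 - math.exp(-k_open * t)
    print(f"t = {t:2d} h: {100*closed:5.1f}% released (closed) "
          f"vs {100*opened:5.1f}% (open)")
```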
Gated drug delivery systems
[ "Physics", "Chemistry", "Biology" ]
2,508
[ "Applied and interdisciplinary physics", "Biophysics", "nan", "Medicinal chemistry", "Biochemistry" ]
72,155,088
https://en.wikipedia.org/wiki/Flowability
Flowability, also known as powder flow, is a property that defines the ability of a powdered material to flow, related to cohesion. Powder flowability depends on many traits: the shape and size of the powder particles (due to intermolecular forces), porosity, electrostatic activity, hygroscopy, bulk density, angle of repose, presence of glidants, oxidation rate (of a metallic powder), and humidity. The ISO 4490:2018 norm (and its predecessor, ISO 4490:2014) standardizes a method for determining the flow rate of metallic powders. It uses a normalized/calibrated funnel, named the Hall flowmeter. See also Fluid mechanics Soil mechanics Cohesion (geology) Angle of repose References Condensed matter physics Intermolecular forces Physical phenomena Mechanics Geotechnical engineering Fluid mechanics
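Two of the listed traits are directly measurable with simple geometry and timing. The sketch below computes the angle of repose from the dimensions of a poured powder cone and a Hall-flowmeter-style flow rate from a timed run; the sample numbers are invented for illustration, and the 50 g convention follows the usual Hall flow reporting.

```python
# Two quick powder-flow metrics. All sample values are invented.
import math

def angle_of_repose(height_mm: float, base_diameter_mm: float) -> float:
    """Angle of repose (degrees) of a conical pile: tan(theta) = h / r."""
    return math.degrees(math.atan(height_mm / (base_diameter_mm / 2)))

def hall_flow_rate(sample_mass_g: float, elapsed_s: float) -> float:
    """Hall flow is conventionally reported as seconds per 50 g of powder."""
    return elapsed_s * 50.0 / sample_mass_g

# A 30 mm tall pile with a 90 mm base -> ~33.7 degrees.
print(f"angle of repose: {angle_of_repose(30, 90):.1f} deg")
# 50 g of powder through the funnel in 28 s -> 28 s/50 g.
print(f"Hall flow: {hall_flow_rate(50, 28):.1f} s/50 g")
```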
Flowability
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
165
[ "Physical phenomena", "Molecular physics", "Phases of matter", "Materials science", "Intermolecular forces", "Geotechnical engineering", "Civil engineering", "Mechanics", "Condensed matter physics", "Mechanical engineering", "Fluid mechanics", "Matter" ]
72,165,192
https://en.wikipedia.org/wiki/Upside-down%20painting
Most paintings are intended to be hung in a precise orientation, defining an upper part and a lower part. Some paintings are displayed upside down, sometimes by mistake, since the image does not represent an easily recognizable oriented subject and lacks a signature, or by a deliberate decision of the exhibitor. Examples An unfinished 1941 version of New York City, a 1942 oil by Piet Mondrian, was hung upside-down in 1945 at the MoMA in New York and since 1980 at the Kunstsammlung Nordrhein-Westfalen. After the mistake was discovered in 2022, the painting's orientation was not corrected, to avoid damage. Le Bateau, a paper-cut by Henri Matisse, depicts a ship reflected in the water. It hung upside down at MoMA for 47 days in 1961. Georgia O'Keeffe's The Lawrence Tree (1929) depicts a tree from its foot. It hung upside down in 1931 and between 1979 and 1989. Her Oriental Poppies hung upside down for 30 years at the Weisman Art Museum of the University of Minnesota. Vincent van Gogh's Long Grass with Butterflies spent two weeks inverted at the National Gallery of London. Salvador Dalí's Four Fishermen's Wives in Cadaquès was upside down at the Metropolitan Museum of New York. Pablo Picasso's 1912 drawing The Fiddler was upside down at the Reina Sofía Museum of Madrid; the representations of the head and the fiddle were confused. Josep Amorós's portrait of Philip V of Spain hangs upside down at the Almodí museum in Xàtiva, Spain; the king ordered the burning of Xàtiva in 1707, during the War of the Spanish Succession. Georg Baselitz used a painting by Louis-Ferdinand von Rayski, Wermsdorf Woods, as a model, in order to paint his first picture with an inverted motif: The Wood On Its Head (1969). By inverting his paintings, the artist is able to emphasize the organisation of colours and form and confront the viewer with the picture's surface rather than the personal content of the image. In this sense, the paintings are empty and not subject to interpretation. Instead, one can only look at them. When both orientations are valid Some works display rotational symmetry or are ambiguous figures that allow both orientations to be meaningful. Giuseppe Arcimboldo painted several works that are still lifes in one orientation and related portraits in the other. See also Spolia (fragments of sculpture and architecture recycled in new buildings) may not be in the original orientation for ideological or pragmatic reasons; an example is the blocks in the shape of a Medusa head reused as column bases in the Basilica Cistern of Constantinople. Pittura infamante, a genre depicting enemies hanging from their feet. 🔝, a symbol to show the top side of an object. Denny Dent, an artist who sometimes painted upside-down portraits on stage before turning the canvas right-side-up for the audience References Rotation Painting Visual arts exhibitions
Upside-down painting
[ "Physics" ]
599
[ "Physical phenomena", "Motion (physics)", "Classical mechanics", "Rotation" ]
59,842,040
https://en.wikipedia.org/wiki/Graph%20removal%20lemma
In graph theory, the graph removal lemma states that when a graph contains few copies of a given subgraph, then all of the copies can be eliminated by removing a small number of edges. The special case in which the subgraph is a triangle is known as the triangle removal lemma. The graph removal lemma can be used to prove Roth's theorem on 3-term arithmetic progressions (a sketch of this deduction follows the article text), and a generalization of it, the hypergraph removal lemma, can be used to prove Szemerédi's theorem. It also has applications to property testing. Formulation Let H be a graph with h vertices. The graph removal lemma states that for any ε > 0, there exists a constant δ = δ(ε, H) > 0 such that for any n-vertex graph G with fewer than δn^h subgraphs isomorphic to H, it is possible to eliminate all copies of H by removing at most εn² edges from G. An alternative way to state this is to say that for any n-vertex graph G with o(n^h) subgraphs isomorphic to H, it is possible to eliminate all copies of H by removing o(n²) edges from G. Here, the o indicates the use of little o notation. In the case when H is a triangle, the resulting lemma is called the triangle removal lemma. History The original motivation for the study of the triangle removal lemma was the Ruzsa–Szemerédi problem. Its initial formulation, due to Imre Z. Ruzsa and Szemerédi from 1978, was slightly weaker than the triangle removal lemma used nowadays and can be roughly stated as follows: every locally linear graph on n vertices contains o(n²) edges. This statement can be quickly deduced from a modern triangle removal lemma. Ruzsa and Szemerédi also provided an alternative proof of Roth's theorem on arithmetic progressions as a simple corollary. In 1986, during their work on generalizations of the Ruzsa–Szemerédi problem to arbitrary r-uniform hypergraphs, Erdős, Frankl, and Rödl provided a statement for general graphs very close to the modern graph removal lemma: if a graph H is a homomorphic image of H′, then any H-free graph G on n vertices can be made H′-free by removing o(n²) edges. The modern formulation of the graph removal lemma was first stated by Füredi in 1994. The proof generalized earlier approaches by Ruzsa and Szemerédi and by Erdős, Frankl, and Rödl, also using the Szemerédi regularity lemma. Graph counting lemma A key component of the proof of the graph removal lemma is the graph counting lemma about counting subgraphs in systems of regular pairs. The graph counting lemma is also very useful on its own. According to Füredi, it is used "in most applications of regularity lemma". Heuristic argument Let H be a graph on h vertices, whose vertex set is V(H) = {1, …, h} and edge set is E(H). Let X1, …, Xh be sets of vertices of some graph G such that, for all (i, j) ∈ E(H), the pair (Xi, Xj) is ε-regular (in the sense of the regularity lemma). Let also d_ij be the density between the sets Xi and Xj. Intuitively, a regular pair (Xi, Xj) with density d_ij should behave like a random Erdős–Rényi-like graph, where every pair of vertices is selected to be an edge independently with probability d_ij. This suggests that the number of copies of H on vertices x1, …, xh such that xi ∈ Xi should be close to the expected number from the Erdős–Rényi model: Π_{(i,j)∈E(H)} d_ij · Π_{i∈V(H)} |Xi|, where E(H) and V(H) are the edge set and the vertex set of H. Precise statement The straightforward formalization of the above heuristic claim is as follows. Let H be a graph on h vertices, whose vertex set is V(H) = {1, …, h} and whose edge set is E(H). Let δ > 0 be arbitrary. Then there exists ε > 0 such that for any X1, …, Xh as above, satisfying d_ij > δ for all (i, j) ∈ E(H), the number of graph homomorphisms from H to G such that vertex i is mapped into Xi is not smaller than (1 − δ) Π_{(i,j)∈E(H)} (d_ij − δ) · Π_{i∈V(H)} |Xi|. Blow-up Lemma One can even find bounded-degree subgraphs of blow-ups of H in a similar setting. 
The following claim appears in the literature under the name of the blow-up lemma and was first proven by Komlós, Sárközy, and Szemerédi. The precise statement here is a slightly simplified version due to Komlós, who referred to it also as the key lemma, as it is used in numerous regularity-based proofs. Let H be an arbitrary graph and let t be a positive integer. Construct H(t) by replacing each vertex i of H by an independent set Vi of size t, and replacing every edge (i, j) of H by the complete bipartite graph on (Vi, Vj). Let δ, d > 0 be arbitrary reals, let N be a positive integer, and let R be a subgraph of H(t) with h vertices and maximum degree Δ. Choose ε0 > 0 sufficiently small in terms of δ, d, and Δ. Finally, let G be a graph and let W1, …, Wk be disjoint sets of vertices of G, each of size N and one for each vertex of H, such that whenever (i, j) is an edge of H, the pair (Wi, Wj) is an ε-regular pair with density at least d. Then if ε ≤ ε0 and t ≤ ε0·N, the number of injective graph homomorphisms from R to G is at least (ε0·N)^h. In fact, one can restrict to counting only those homomorphisms such that any vertex v of R with v ∈ Vi is mapped to a vertex in Wi. Proof We will provide a proof of the counting lemma in the case when H is a triangle (the triangle counting lemma). The proof of the general case, as well as the proof of the blow-up lemma, are very similar and do not require different techniques. Take ε sufficiently small in terms of δ. Let Y1 be the set of those vertices in X1 which have at least (d12 − ε)|X2| neighbors in X2 and at least (d13 − ε)|X3| neighbors in X3. Note that if there were more than ε|X1| vertices in X1 with fewer than (d12 − ε)|X2| neighbors in X2, then these vertices together with the whole of X2 would witness ε-irregularity of the pair (X1, X2). Repeating this argument for X3 shows that we must have |Y1| ≥ (1 − 2ε)|X1|. Now take an arbitrary x ∈ Y1 and define N2 and N3 as the neighbors of x in X2 and X3, respectively. By definition, |N2| ≥ (d12 − ε)|X2| ≥ ε|X2| and |N3| ≥ (d13 − ε)|X3| ≥ ε|X3|, so by the regularity of the pair (X2, X3) we obtain the existence of at least (d23 − ε)|N2||N3| ≥ (d23 − ε)(d12 − ε)(d13 − ε)|X2||X3| triangles containing x. Since x was chosen arbitrarily from the set Y1 of size at least (1 − 2ε)|X1|, we obtain a total of at least (1 − 2ε)(d12 − ε)(d13 − ε)(d23 − ε)|X1||X2||X3| triangles, which finishes the proof as ε was chosen small enough relative to δ. Proof Proof of the triangle removal lemma To prove the triangle removal lemma, consider an ε′-regular partition V1 ∪ … ∪ Vk of the vertex set of G. This exists by the Szemerédi regularity lemma. The idea is to remove all edges between irregular pairs, low-density pairs, and small parts, and prove that if at least one triangle still remains, then many triangles remain. Specifically, remove all edges between parts Vi and Vj if the pair (Vi, Vj) is not ε′-regular, if its density is too low, or if either part is too small; with suitable thresholds, this procedure removes at most εn² edges. If there exists a triangle with vertices in parts Vi, Vj, Vk after these edges are removed, then the triangle counting lemma tells us there are at least c·|Vi||Vj||Vk| triples in Vi × Vj × Vk which form a triangle, for a constant c > 0 depending only on the thresholds. Thus, we may take δ accordingly. Proof of the graph removal lemma The proof of the case of general H is analogous to the triangle case, and uses the graph counting lemma instead of the triangle counting lemma. Induced Graph Removal Lemma A natural generalization of the graph removal lemma is to consider induced subgraphs. In property testing, it is often useful to consider how far a graph is from being induced H-free. A graph G is considered to contain an induced subgraph H if there is an injective map f : V(H) → V(G) such that (f(u), f(v)) is an edge of G if and only if (u, v) is an edge of H. Notice that non-edges are considered as well. G is induced H-free if there is no induced subgraph H. We define G as ε-far from being induced H-free if we cannot add or delete εn² edges to make G induced H-free. Formulation A version of the graph removal lemma for induced subgraphs was proved by Alon, Fischer, Krivelevich, and Szegedy in 2000. It states that for any graph H with h vertices and ε > 0, there exists a constant δ > 0 such that, if an n-vertex graph G has fewer than δn^h induced subgraphs isomorphic to H, then it is possible to eliminate all induced copies of H by adding or removing fewer than εn² edges. 
The problem can be reformulated as follows: given a red-blue coloring of the complete graph Kn (analogous to the graph G on the same vertices where non-edges are blue and edges are red) and a constant ε > 0, there exists a constant δ > 0 such that if a red-blue coloring of Kn has fewer than δn^h subgraphs isomorphic to a fixed two-colored complete graph on h vertices, then it is possible to eliminate all copies of it by changing the colors of fewer than εn² edges. Notice that our previous "cleaning" process, where we remove all edges between irregular pairs, low-density pairs, and small parts, only involves removing edges. Removing edges only corresponds to changing edge colors from red to blue. However, there are situations in the induced case where the optimal edit distance involves changing edge colors from blue to red as well. Thus, the regularity lemma is insufficient to prove the induced graph removal lemma. The proof of the induced graph removal lemma must take advantage of the strong regularity lemma. Proof Strong Regularity Lemma The strong regularity lemma is a strengthened version of Szemerédi's regularity lemma. For any infinite decreasing sequence of constants ε0 ≥ ε1 ≥ … > 0, there exists an integer M such that for any graph G, we can obtain two (equitable) partitions P and Q such that the following properties are satisfied: Q refines P, that is, every part of P is the union of some collection of parts in Q; P is ε0-regular and Q is ε_{|P|}-regular; q(Q) ≤ q(P) + ε0. The function q is defined to be the energy function defined in the Szemerédi regularity lemma. Essentially, we can find a pair of partitions P and Q where Q is regular compared to P, and at the same time P and Q are close to each other. This property is captured in the third condition. Corollary of the Strong Regularity Lemma The following corollary of the strong regularity lemma is used in the proof of the induced graph removal lemma. For any infinite decreasing sequence of constants ε0 ≥ ε1 ≥ … > 0, there exists δ > 0 such that there exists a partition P = {V1, …, Vk} and subsets Wi ⊆ Vi for each i, where the following properties are satisfied: |Wi| ≥ δn; (Wi, Wj) is ε_k-regular for each pair (i, j); |d(Wi, Wj) − d(Vi, Vj)| ≤ ε0 for all but ε0·k² pairs (i, j). The main idea of the proof of this corollary is to start with two partitions P and Q that satisfy the strong regularity lemma (with a suitably chosen sequence of constants). Then for each part Vi of P, we uniformly at random choose some part Wi of Q lying inside Vi. The expected number of irregular pairs is less than 1. Thus, there exists some collection of Wi such that every pair is ε_k-regular! The important aspect of this corollary is that every pair (Wi, Wj) is ε_k-regular! This allows us to consider edges and non-edges when we perform our cleaning argument. Proof Sketch of the Induced Graph Removal Lemma With these results, we are able to prove the induced graph removal lemma. Take any graph G with n vertices that has fewer than δn^h copies of H. The idea is to start with a collection of vertex sets Wi which satisfy the conditions of the corollary of the strong regularity lemma. We then can perform a "cleaning" process where we remove all edges between pairs of parts with low density, and we can add all edges between pairs of parts with high density. We choose the density requirements such that we added/deleted at most εn² edges. If the new graph has no copies of H, then we are done. Suppose the new graph has a copy of H, in which vertex i of H is embedded into the part W_{a_i}. Then if (i, j) is an edge of H, the pair (W_{a_i}, W_{a_j}) does not have low density. (Edges between W_{a_i} and W_{a_j} were not removed in the cleaning process.) Similarly, if (i, j) is not an edge of H, the pair (W_{a_i}, W_{a_j}) does not have high density. (Edges between W_{a_i} and W_{a_j} were not added in the cleaning process.) 
Thus, by a counting argument similar to the proof of the triangle counting lemma (that is, the graph counting lemma), we can show that G has more than δn^h copies of H. Generalizations The graph removal lemma was later extended to directed graphs and to hypergraphs. Quantitative bounds The usage of the regularity lemma in the proof of the graph removal lemma forces δ to be extremely small, bounded above by the reciprocal of a tower function whose height is polynomial in ε⁻¹ (where the tower of twos of height k is defined by T(1) = 2 and T(k) = 2^{T(k−1)}). A tower function of height polynomial in ε⁻¹ is necessary in all regularity proofs, as is implied by results of Gowers on lower bounds in the regularity lemma. However, in 2011, Fox provided a new proof of the graph removal lemma which does not use the regularity lemma, improving the bound to a tower of twos whose height is of order log ε⁻¹ (with the implied constants depending on h, the number of vertices of the removed graph H). His proof, however, uses regularity-related ideas such as energy increment, but with a different notion of energy, related to entropy. This proof can be also rephrased using the Frieze–Kannan weak regularity lemma, as noted by Conlon and Fox. In the special case of bipartite H, it was shown that a polynomial dependence δ = ε^{O(1)} is sufficient. There is a large gap between the available upper and lower bounds for δ in the general case. The current best result true for all graphs is due to Alon and states that, for each nonbipartite H, there exists a constant c > 0 such that δ < ε^{c·log(1/ε)} is necessary for the graph removal lemma to hold, while for bipartite H, the optimal δ has polynomial dependence on ε, which matches the lower bound. The construction for the nonbipartite case is a consequence of the Behrend construction of large Salem–Spencer sets. Indeed, as the triangle removal lemma implies Roth's theorem, the existence of large Salem–Spencer sets may be translated to an upper bound for δ in the triangle removal lemma. This method can be leveraged for arbitrary nonbipartite H to give the aforementioned bound. Applications Additive combinatorics Graph theory Property testing See also Counting lemma Tuza's conjecture References Graph theory
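As a concrete companion to the Roth's theorem application mentioned at the top of this article, here is the classical deduction in outline — a hedged sketch with the routine verifications omitted:

```latex
% Sketch: triangle removal lemma => Roth's theorem.
% Let $A \subseteq [N]$ contain no 3-term arithmetic progression.
% Build a tripartite graph $G$ on parts $X = [N]$, $Y = [2N]$, $Z = [3N]$:
\[
  xy \in E(G) \iff y - x \in A, \qquad
  yz \in E(G) \iff z - y \in A, \qquad
  xz \in E(G) \iff \tfrac{z - x}{2} \in A .
\]
% A triangle $(x, y, z)$ gives $a = y - x$, $b = z - y$, and
% $\tfrac{a+b}{2} = \tfrac{z-x}{2}$, all in $A$; these form the 3-term
% progression $a, \tfrac{a+b}{2}, b$, so AP-freeness forces $a = b$.
% Hence every edge lies in exactly one ("trivial") triangle, and $G$ has
% at most $N|A| \le N^2 = o(n^3)$ triangles on its $n = 6N$ vertices.
% The removal lemma then destroys all triangles by deleting $o(n^2)$
% edges; but the $N|A|$ trivial triangles are pairwise edge-disjoint, so
\[
  N|A| = o(N^2) \quad\Longrightarrow\quad |A| = o(N).
\]
```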
Graph removal lemma
[ "Mathematics" ]
2,712
[ "Theorems in graph theory", "Theorems in discrete mathematics" ]
59,842,050
https://en.wikipedia.org/wiki/Ruzsa%E2%80%93Szemer%C3%A9di%20problem
In combinatorial mathematics and extremal graph theory, the Ruzsa–Szemerédi problem or (6,3)-problem asks for the maximum number of edges in an n-vertex graph in which every edge belongs to a unique triangle. Equivalently it asks for the maximum number of edges in a balanced bipartite graph whose edges can be partitioned into a linear number of induced matchings, or the maximum number of triples one can choose from n points so that every six points contain at most two triples. The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who first proved that its answer is smaller than n² by a slowly-growing (but still unknown) factor. Equivalence between formulations The following questions all have answers that are asymptotically equivalent: they differ by, at most, constant factors from each other. What is the maximum possible number of edges in a graph with n vertices in which every edge belongs to a unique triangle? The graphs with this property are called locally linear graphs or locally matching graphs. What is the maximum possible number of edges in a bipartite graph with n vertices on each side of its bipartition, whose edges can be partitioned into n induced subgraphs that are each matchings? What is the largest possible number of triples of points that one can select from n given points, in such a way that every six points contain at most two of the selected triples? The Ruzsa–Szemerédi problem asks for the answer to these equivalent questions. To convert the bipartite graph induced matching problem into the unique triangle problem, add a third set of n vertices to the graph, one for each induced matching, and add edges from vertices u and v of the bipartite graph to vertex w in this third set whenever bipartite edge uv belongs to induced matching w. The result is a balanced tripartite graph with 3n vertices and the unique triangle property. In the other direction, an arbitrary graph with the unique triangle property can be made into a balanced tripartite graph by choosing a partition of the vertices into three equal sets randomly and keeping only the triangles that respect the partition. This will retain (in expectation) a constant fraction of the triangles and edges. A balanced tripartite graph with the unique triangle property can be made into a partitioned bipartite graph by removing one of its three subsets of vertices, and making an induced matching on the neighbors of each removed vertex. To convert a graph with a unique triangle per edge into a triple system, let the triples be the triangles of the graph. No six points can include three triangles without either two of the three triangles sharing an edge or all three triangles forming a fourth triangle that shares an edge with each of them. In the other direction, to convert a triple system into a graph, first eliminate any sets of four points that contain two triples. These four points cannot participate in any other triples, and so cannot contribute towards a more-than-linear total number of triples. Then, form a graph connecting any pair of points that both belong to any of the remaining triples. Lower bound A nearly-quadratic lower bound on the Ruzsa–Szemerédi problem can be derived from a result of Felix Behrend, according to which the numbers modulo an odd prime number p have large Salem–Spencer sets, subsets A of size p/e^{O(√(log p))} with no three-term arithmetic progressions. Behrend's result can be used to construct tripartite graphs in which each side of the tripartition has p vertices, there are 3p|A| edges, and each edge belongs to a unique triangle. 
Thus, with this construction, n = 3p and the number of edges is n²/e^{O(√(log n))}. To construct a graph of this form from Behrend's arithmetic-progression-free subset A, number the vertices on each side of the tripartition from 0 to p − 1, and construct triangles of the form (x, x + a, x + 2a) modulo p for each x in the range from 0 to p − 1 and each a in A. For example, with p = 3 and A = {1, 2}, the result is a nine-vertex balanced tripartite graph with 18 edges. The graph formed from the union of these triangles has the desired property that every edge belongs to a unique triangle (a small computational check of this construction is sketched after this article text). For, if not, there would be a triangle (x, x + a, x + a + b) with a ≠ b where a, b, and (a + b)/2 all belong to A, violating the assumption that there be no arithmetic progression a, (a + b)/2, b in A. Upper bound The Szemerédi regularity lemma can be used to prove that any solution to the Ruzsa–Szemerédi problem has at most o(n²) edges or triples. A stronger form of the graph removal lemma by Jacob Fox implies that the size of a solution is at most n²/e^{Ω(log* n)}. Here the o and Ω are instances of little o and big Omega notation, and log* denotes the iterated logarithm. Fox proves that, in any n-vertex graph with at most n³/t triangles for some t ≥ 1, one can find a triangle-free subgraph by removing at most n²/e^{Ω(log* t)} edges. In a graph with the unique triangle property, there are (naively) at most n² triangles, so this result applies with t = n. But in this graph, each edge removal eliminates only one triangle, so the number of edges that must be removed to eliminate all triangles is the same as the number of triangles. History The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who studied this problem, in the formulation involving triples of points, in a 1978 publication. However, it had been previously studied by W. G. Brown, Paul Erdős, and Vera T. Sós, in two publications in 1973 which proved that the maximum number of triples can be as large as n^{5/3}, and conjectured that it was o(n²). Ruzsa and Szemerédi provided (unequal) nearly-quadratic upper and lower bounds for the problem, significantly improving the previous lower bound of Brown, Erdős, and Sós, and proving their conjecture. Applications The existence of dense graphs that can be partitioned into large induced matchings has been used to construct efficient tests for whether a Boolean function is linear, a key component of the PCP theorem in computational complexity theory. In the theory of property testing algorithms, the known results on the Ruzsa–Szemerédi problem have been applied to show that it is possible to test whether a graph has no copies of a given subgraph H, with one-sided error in a number of queries polynomial in the error parameter, if and only if H is a bipartite graph. In the theory of streaming algorithms for graph matching (for instance to match internet advertisers with advertising slots), the quality of matching covers (sparse subgraphs that approximately preserve the size of a matching in all vertex subsets) is closely related to the density of bipartite graphs that can be partitioned into induced matchings. This construction uses a modified form of the Ruzsa–Szemerédi problem in which the number of induced matchings can be much smaller than the number of vertices, but each induced matching must cover most of the vertices of the graph. In this version of the problem, it is possible to construct graphs with a non-constant number of linear-sized induced matchings, and this result leads to nearly-tight bounds on the approximation ratio of streaming matching algorithms. 
The subquadratic upper bound on the Ruzsa–Szemerédi problem was also used to provide an o(3^n) bound on the size of cap sets, before stronger bounds of the form c^n for some c < 3 were proven for this problem. It also provides the best known upper bound on tripod packing. References Combinatorial design Extremal graph theory Matching (graph theory)
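The lower-bound construction described above is easy to verify computationally at small scale. The sketch below builds the tripartite triangle system from a progression-free set and checks that every edge lies in a unique triangle; the AP-free set is found greedily here, which is far weaker than Behrend's construction but suffices for the check.

```python
# Small-scale check of the Ruzsa-Szemeredi lower-bound construction:
# triangles (x, x+a, x+2a) mod p over three parts, one per a in an
# AP-free set A. Every X-Y edge should lie in exactly one triangle.
from itertools import combinations

p = 101  # odd prime modulus

def forms_ap(x, a, b, p):
    """True if {x, a, b} contains a 3-term arithmetic progression mod p."""
    return ((x + a) % p == (2 * b) % p or
            (x + b) % p == (2 * a) % p or
            (a + b) % p == (2 * x) % p)

# Greedy 3-AP-free subset of Z_p (much smaller than Behrend's sets).
A = []
for x in range(p):
    if all(not forms_ap(x, a, b, p) for a, b in combinations(A, 2)):
        A.append(x)
A = set(A)

two_inv = pow(2, -1, p)  # inverse of 2 mod p, to test X-Z adjacency
ok = True
for x in range(p):
    for a in A:  # the X-Y edge (x, x+a)
        y = (x + a) % p
        # Triangles on this edge: b in A giving a Y-Z edge (y, y+b) and an
        # X-Z edge, i.e. (z - x)/2 in A. AP-freeness should force b == a.
        count = sum(1 for b in A if ((y + b - x) * two_inv) % p in A)
        ok = ok and count == 1

print(f"p = {p}, |A| = {len(A)}, edges = {3 * p * len(A)}")
print("every X-Y edge lies in a unique triangle:", ok)
```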
Ruzsa–Szemerédi problem
[ "Mathematics" ]
1,545
[ "Combinatorial design", "Graph theory", "Combinatorics", "Mathematical relations", "Extremal graph theory", "Matching (graph theory)" ]
59,850,323
https://en.wikipedia.org/wiki/NGC%205363
NGC 5363 is a lenticular galaxy located in the constellation Virgo. It lies at a distance of about 65 million light years from Earth, which, given its apparent dimensions, means that NGC 5363 is about 100,000 light years across (the size arithmetic is sketched after this article text). It was discovered by William Herschel on January 19, 1784. It is a member of the NGC 5364 Group of galaxies, itself one of the Virgo III Groups strung out to the east of the Virgo Supercluster of galaxies. Characteristics NGC 5363 is characterised by the presence of a dust lane along its minor axis, visible also in mid-infrared maps, and a more extended one with an intermediate orientation. The cold dust in the galaxy, whose total mass has been estimated from infrared observations, extends for 52 arcseconds in the far-infrared. The dust emission appears as a disk with spiral arms and a possible barlike structure, and extends at the outer parts of the galaxy as a fainter, armlike structure along the major axis of the galaxy. The galaxy also features HII emission that forms a spiral disk. The total dust mass is about a factor of 100 larger than predicted if it were created only by the mass lost by evolved stars. The galaxy also has shells, which are evidence of a recent merger in which NGC 5363 accreted another galaxy; it is thus strongly suggested that the interstellar dust is of external origin. It is highly likely that this merger event caused star formation activity in the galaxy, as is evident from the detection of ultraviolet radiation associated with young stars. Based on its spectrum, the nucleus of NGC 5363 has been found to be active and has been categorised as a LINER. In the centre of NGC 5363 lies a supermassive black hole with an estimated mass of 375 million solar masses. NGC 5363 has been found to emit radio waves. The radio source consists of a compact core with a diameter of less than 2 arcseconds and probably an extended component, stretching for about 20 arcseconds. Nearby galaxies NGC 5363 is the foremost galaxy in a galaxy group known as the NGC 5363 group. Other members of the group include NGC 5300, NGC 5348, NGC 5356, NGC 5360, and NGC 5364. NGC 5363 and NGC 5364 lie at a projected distance of 14.5 arcminutes, forming a non-interacting pair. The group is part of the Virgo III Groups, a very obvious chain of galaxy groups on the left side of the Virgo cluster, stretching across 40 million light years of space. See also IC 1459, NGC 3108, NGC 5128, and NGC 5173 – other early-type galaxies with spiral features References External links NGC 5363 on SIMBAD Lenticular galaxies Virgo (constellation) 5363 08847 49547 Astronomical objects discovered in 1784 Discoveries by William Herschel
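The quoted physical size follows from the distance and the galaxy's angular size via the small-angle relation. A quick sketch, treating the article's round numbers as exact:

```python
# Small-angle relation: physical size ~ distance * angular size (radians).
import math

distance_ly = 65e6      # the article's distance to NGC 5363
size_ly     = 100_000   # the article's quoted diameter

# Angular size implied by those two round numbers:
theta_rad    = size_ly / distance_ly
theta_arcmin = math.degrees(theta_rad) * 60
print(f"implied angular size: {theta_arcmin:.1f} arcmin")  # ~5.3'

# Conversely, the 52-arcsecond dust emission corresponds to:
dust_rad = math.radians(52 / 3600)
print(f"dust extent: {distance_ly * dust_rad:,.0f} ly")    # ~16,000 ly
```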
NGC 5363
[ "Astronomy" ]
604
[ "Virgo (constellation)", "Constellations" ]
61,276,647
https://en.wikipedia.org/wiki/List%20of%20textbooks%20on%20classical%20mechanics%20and%20quantum%20mechanics
This is a list of notable textbooks on classical mechanics and quantum mechanics arranged according to level and surnames of the authors in alphabetical order. Undergraduate Classical mechanics Quantum mechanics Advanced undergraduate and graduate Classical mechanics Quantum mechanics Landau, L. D., and Lifshitz, E. M. Course of Theoretical Physics, Volume 3: Quantum Mechanics: Non-Relativistic Theory. Edited by Pitaevskiĭ, L. P. Translated by J. B. Sykes and J. S. Bell. Third edition, revised and enlarged, Pergamon Press, 1977. Schiff, Leonard I. (1968). Quantum Mechanics. McGraw-Hill Education. Davydov, A. S. (1965). Quantum Mechanics. Pergamon. ISBN 9781483172026. Both topics See also List of textbooks in thermodynamics and statistical mechanics List of textbooks in electromagnetism List of books on general relativity Teaching quantum mechanics External links A Physics Book List. John Baez. Department of Mathematics, University of California, Riverside. 1993–1997. Textbooks Lists of science textbooks Mathematics-related lists Physics-related lists Textbooks
List of textbooks on classical mechanics and quantum mechanics
[ "Physics" ]
238
[ "Quantum mechanics", "Mechanics", "Classical mechanics", "Works about quantum mechanics" ]
62,534,177
https://en.wikipedia.org/wiki/Local%20structure
The local structure is a term in nuclear spectroscopy that refers to the structure of the nearest neighbours around an atom in crystals and molecules. For example, in crystals the atoms order in a regular fashion over long ranges, sometimes forming gigantic, highly ordered crystals (as in the Naica Mine). In reality, however, crystals are never perfect and have impurities or defects, meaning that a foreign atom resides on a lattice site or in between lattice sites (interstitials). These small defects and impurities cannot be seen by methods such as X-ray diffraction or neutron diffraction, because these methods by their nature average over a large number of atoms and thus are insensitive to effects in the local structure. Methods in nuclear spectroscopy use specific nuclei as probes. The nucleus of an atom is about 10,000 to 150,000 times smaller than the atom itself. It experiences the electric fields created by the atom's electrons that surround the nucleus. In addition, the electric fields created by neighbouring atoms also influence the fields that the nucleus experiences. The interactions between the nucleus and these fields are called hyperfine interactions, and they influence the nucleus' properties. The nucleus is therefore very sensitive to small changes in its hyperfine structure, which can be measured by methods of nuclear spectroscopy, such as nuclear magnetic resonance, Mössbauer spectroscopy, and perturbed angular correlation. With the same methods, the local magnetic fields in a crystal structure can also be probed and provide a magnetic local structure. This is of great importance for the understanding of defects in magnetic materials, which have a wide range of applications, such as modern magnetic materials or the giant magnetoresistance effect used in the read heads of hard drives. Research on the local structure of materials has become an important tool for understanding the properties of functional materials in particular, such as those used in electronics, chips, batteries, semiconductors, or solar cells. Many of those materials are defect materials whose specific properties are controlled by defects. References Electrostatics Atomic physics Quantum chemistry Electric and magnetic fields in matter
Local structure
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
424
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Quantum mechanics", "Electric and magnetic fields in matter", "Materials science", "Theoretical chemistry", "Condensed matter physics", "Atomic physics", " molecular", "Atomic", "Physical chemistry stubs", " and ...
62,534,251
https://en.wikipedia.org/wiki/Interpolymer%20complexes
Interpolymer complexes (IPC) are the products of non-covalent interactions between complementary, unlike macromolecules in solutions. There are four types of these complexes: Interpolyelectrolyte complexes (IPEC) or polyelectrolyte complexes (PEC) Hydrogen-bonded interpolymer complexes Stereocomplexes Charge-transfer complexes Formation of interpolymer complexes Interpolymer complexes can be prepared either by mixing complementary polymers in solutions or by matrix (template) polymerisation. It is also possible to prepare IPCs at liquid-liquid interfaces or at solid or soft surfaces. The structure of the IPCs formed will usually depend on many factors, including the nature of the interacting polymers, the concentrations of their solutions, the nature of the solvent, and the presence of inorganic ions or organic molecules in the solutions. Mixing dilute polymer solutions usually leads to the formation of IPCs as a colloidal dispersion, whereas more concentrated polymer solutions form IPCs in the form of a gel. Methods to study interpolymer complexes Methods to study interpolymer complexes can be classified into: (1) approaches to demonstrate that complex formation occurs and to determine the composition of IPCs in solutions; (2) approaches to study the structure of the IPCs formed; and (3) methods to characterize IPCs in the solid state. Applications of interpolymer complexes IPCs are finding applications in pharmaceutics in the design of novel dosage forms. They are also increasingly used to form various coatings using the layer-by-layer deposition approach. Some IPCs have been proposed for application as membranes and films. They have also been used for structuring soils to protect them from erosion. Other applications include encapsulation technologies. References Polymer chemistry
Interpolymer complexes
[ "Chemistry", "Materials_science", "Engineering" ]
361
[ "Materials science", "Polymer chemistry" ]
62,536,761
https://en.wikipedia.org/wiki/Freweini%20Mebrahtu
Freweini Mebrahtu is an Ethiopian chemical engineer and inventor who won the 2019 CNN Hero of the Year award for her activism in improving girls' access to education. Early life Freweini was born in Ethiopia and educated in the United States, studying chemical engineering at Prairie View A&M University. In 2005, she patented a reusable menstrual pad that can be used for up to two years with proper care. As of 2019, she employs hundreds of locals in the Tigray Region of Ethiopia and makes more than 700,000 of the reusable pads, which are mainly provided to non-governmental organizations. Contribution Her menstrual product, together with her educational campaign, has helped remove the stigma surrounding menstruation and has kept girls from dropping out of school because of it. The non-profit organization Dignity Period has distributed more than 150,000 free menstrual supplies purchased from Freweini's factory. It was reported that attendance among girls improved by 24% as a result of this effort. References Living people Ethiopian engineers Chemical engineers Ethiopian emigrants to the United States Year of birth missing (living people) Menstrual cycle Prairie View A&M University alumni People from Tigray Region
Freweini Mebrahtu
[ "Chemistry", "Engineering" ]
254
[ "Chemical engineering", "Chemical engineers" ]
62,539,802
https://en.wikipedia.org/wiki/Rotation%20distance
In discrete mathematics and theoretical computer science, the rotation distance between two binary trees with the same number of nodes is the minimum number of tree rotations needed to reconfigure one tree into another. Because of a combinatorial equivalence between binary trees and triangulations of convex polygons, rotation distance is equivalent to the flip distance for triangulations of convex polygons. Rotation distance was first defined by Karel Čulík II and Derick Wood in 1982. Every two n-node binary trees have rotation distance at most 2n − 6 (for all sufficiently large n), and some pairs of trees have exactly this distance. The computational complexity of computing the rotation distance is unknown. Definition A binary tree is a structure consisting of a set of nodes, one of which is designated as the root node, in which each remaining node is either the left child or right child of some other node, its parent, and in which following the parent links from any node eventually leads to the root node. (In some sources, the nodes described here are called "internal nodes", there exists another set of nodes called "external nodes", each internal node is required to have exactly two children, and each external node is required to have zero children. The version described here can be obtained by removing all the external nodes from such a tree.) For any node x in the tree, there is a subtree of the same form, rooted at x and consisting of all the nodes that can reach x by following parent links. Each binary tree has a left-to-right ordering of its nodes, its inorder traversal, obtained by recursively traversing the left subtree (the subtree at the left child of the root, if such a child exists), then listing the root itself, and then recursively traversing the right subtree. In a binary search tree, each node is associated with a search key, and the left-to-right ordering is required to be consistent with the order of the keys. A tree rotation is an operation that changes the structure of a binary tree without changing its left-to-right ordering. Several self-balancing binary search tree data structures use these rotations as a primitive operation in their rebalancing algorithms. A rotation operates on two nodes x and y, where x is the parent of y, and restructures the tree by making y be the parent of x and take the place of x in the tree. To free up one of the child links of y and make room to link x as a child of y, this operation may also need to move one of the children of y to become a child of x. There are two variations of this operation: a right rotation, in which y begins as the left child of x and x ends as the right child of y, and a left rotation, in which y begins as the right child of x and x ends as the left child of y. Any two trees that have the same left-to-right sequence of nodes may be transformed into each other by a sequence of rotations. The rotation distance between the two trees is the number of rotations in the shortest possible sequence of rotations that performs this transformation. It can also be described as the shortest path distance in a rotation graph, a graph that has a vertex for each binary tree on a given left-to-right sequence of nodes and an edge for each rotation between two trees. This rotation graph is exactly the graph of vertices and edges of an associahedron. Equivalence to flip distance Given a family of triangulations of some geometric object, a flip is an operation that transforms one triangulation to another by removing an edge between two triangles and adding the opposite diagonal to the resulting quadrilateral.
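Before moving on to flip distance, the rotation operation just described can be made concrete. The following is a minimal Python sketch (the Node class and function names are illustrative, not taken from any particular library); each function rotates the subtree rooted at the given parent node x, promotes the child y, and returns y as the new subtree root, preserving the inorder sequence:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def rotate_right(x):
    """Right rotation at parent x: its left child y is promoted.
    y's old right subtree is re-attached as x's new left subtree,
    so the left-to-right (inorder) ordering is unchanged."""
    y = x.left
    x.left = y.right
    y.right = x
    return y  # new root of this subtree

def rotate_left(x):
    """Left rotation at parent x: the mirror image of rotate_right."""
    y = x.right
    x.right = y.left
    y.left = x
    return y  # new root of this subtree
```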
The flip distance between two triangulations is the minimum number of flips needed to transform one triangulation into another. It can also be described as the shortest path distance in a flip graph, a graph that has a vertex for each triangulation and an edge for each flip between two triangulations. Flips and flip distances can be defined in this way for several different kinds of triangulations, including triangulations of sets of points in the Euclidean plane, triangulations of polygons, and triangulations of abstract manifolds. There is a one-to-one correspondence between triangulations of a given convex polygon, with a designated root edge, and binary trees, taking triangulations of (n + 2)-sided polygons into binary trees with n nodes. In this correspondence, each triangle of a triangulation corresponds to a node in a binary tree. The root node is the triangle having the designated root edge as one of its sides, and two nodes are linked as parent and child in the tree when the corresponding triangles share a diagonal in the triangulation. Under this correspondence, rotations in binary trees correspond exactly to flips in the corresponding triangulations. Therefore, the rotation distance on n-node trees corresponds exactly to flip distance on triangulations of (n + 2)-sided convex polygons. Maximum value The "right spine" of a binary tree is defined to be the path obtained by starting from the root and following right child links until reaching a node that has no right child. If a tree has the property that not all nodes belong to the right spine, there always exists a right rotation that increases the length of the right spine. For, in this case, there exists at least one node x on the right spine whose left child y is not on the right spine. Performing a right rotation on x and y adds y to the right spine without removing any other node from it. By repeatedly increasing the length of the right spine, any n-node tree can be transformed into the unique tree with the same node order in which all nodes belong to the right spine, in at most n − 1 steps. Given any two trees with the same node order, one can transform one into the other by transforming the first tree into a tree with all nodes on the right spine, and then reversing the same transformation of the second tree, in a total of at most 2n − 2 steps. Therefore, as Čulík and Wood proved, the rotation distance between any two trees is at most 2n − 2. By considering the problem in terms of flips of convex polygons instead of rotations of trees, Sleator, Tarjan, and Thurston were able to show that the rotation distance is at most 2n − 6. In terms of triangulations of convex polygons, the right spine is the sequence of triangles incident to the right endpoint of the root edge, and the tree in which all vertices lie on the spine corresponds to a fan triangulation for this vertex. The main idea of their improvement is to try flipping both given triangulations to a fan triangulation for any vertex, rather than only the one for the right endpoint of the root edge. It is not possible for all of these choices to simultaneously give the worst-case distance from each starting triangulation, giving the improvement. Sleator, Tarjan, and Thurston also used a geometric argument to show that, for infinitely many values of n, the maximum rotation distance is exactly 2n − 6. They again use the interpretation of the problem in terms of flips of triangulations of convex polygons, and they interpret the starting and ending triangulation as the top and bottom faces of a convex polyhedron with the convex polygon itself interpreted as a Hamiltonian circuit in this polyhedron.
Under this interpretation, a sequence of flips from one triangulation to the other can be translated into a collection of tetrahedra that triangulate the given three-dimensional polyhedron. They find a family of polyhedra with the property that (in three-dimensional hyperbolic geometry) the polyhedra have large volume, but all tetrahedra inside them have much smaller volume, implying that many tetrahedra are needed in any triangulation. The binary trees obtained from translating the top and bottom sets of faces of these polyhedra back into trees have high rotation distance, at least 2n − 6. Subsequently, Pournin provided a proof that for all n > 12, the maximum rotation distance is exactly 2n − 6. Pournin's proof is combinatorial, and avoids the use of hyperbolic geometry. Computational complexity As well as defining rotation distance, Čulík and Wood asked for the computational complexity of computing the rotation distance between two given trees. The existence of short rotation sequences between any two trees implies that testing whether the rotation distance is at most a given bound k belongs to the complexity class NP, but it is not known to be NP-complete, nor is it known to be solvable in polynomial time. The rotation distance between any two trees can be lower bounded, in the equivalent view of polygon triangulations, by the number of diagonals that need to be removed from one triangulation and replaced by other diagonals to produce the other triangulation. It can also be upper bounded by twice this number, by partitioning the problem into subproblems along any diagonals shared between both triangulations and then applying the fan-based method described above to each subproblem. This method provides an approximation algorithm for the problem with an approximation ratio of two. A similar approach of partitioning into subproblems along shared diagonals leads to a fixed-parameter tractable algorithm for computing the rotation distance exactly. Determining the complexity of computing the rotation distance exactly without parameterization remains unsolved, and the best algorithms currently known for the problem run in exponential time. Variants Though the complexity of rotation distance is unknown, there exist several variants for which rotation distance can be solved in polynomial time. In abstract algebra, each element in Thompson's group F has a presentation using two generators. Finding the minimum length of such a presentation is equivalent to finding the rotation distance between two binary trees with only rotations on the root node and its right child allowed. Fordham's algorithm computes the rotation distance under this restriction in linear time. The algorithm classifies tree nodes into 7 types and uses a lookup table to find the number of rotations required to transform a node of one type into another. The sum of the costs of all transformations is the rotation distance. In two additional variants, one only allows rotations in which the pivot of the rotation is a non-leaf child of the root and the other child of the root is a leaf, while the other only allows rotations on right-arm nodes (nodes that are on the path from the root to its rightmost leaf). Both variants result in a meet semi-lattice, whose structure is exploited to derive a polynomial-time algorithm. References Binary trees Triangulation (geometry) Reconfiguration
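As a concrete illustration of the right-spine argument above, the following Python fragment (reusing the hypothetical Node and rotate_right from the earlier sketch) turns any tree into the all-right-spine chain while counting rotations; since each tree needs at most n − 1 of them, routing both trees through the chain yields the 2n − 2 bound:

```python
def straighten(root):
    """Right-rotate wherever a spine node has a left child; every
    rotation moves exactly one node onto the right spine, so at
    most n - 1 rotations are performed for an n-node tree."""
    rotations = 0
    dummy = Node(None, right=root)  # dummy parent simplifies re-linking
    parent, node = dummy, root
    while node is not None:
        if node.left is not None:
            node = rotate_right(node)        # promote the left child
            parent.right = node
            rotations += 1
        else:
            parent, node = node, node.right  # walk down the spine
    return dummy.right, rotations

# rotation_distance(t1, t2) <= straighten(t1)[1] + straighten(t2)[1],
# i.e. at most 2n - 2; the 2n - 6 bound needs the sharper fan argument.
```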
Rotation distance
[ "Mathematics" ]
2,136
[ "Triangulation (geometry)", "Reconfiguration", "Planar graphs", "Computational problems", "Planes (geometry)", "Mathematical problems" ]
66,249,878
https://en.wikipedia.org/wiki/Arctic%20Wolf%20Networks
Arctic Wolf Networks is a cybersecurity company that provides security monitoring to detect and respond to cyber threats. The company monitors on-premises computers, networks, and cloud-based information assets for malicious activity such as cybercrime, ransomware, and malicious software attacks. History Founded in 2012, Arctic Wolf focused on providing managed security services to small and mid-market organizations. The company was listed as a Gartner Cool Vendor in security for mid-sized enterprises in June 2018. Acquisitions In December 2018, Arctic Wolf announced the acquisition of the company RootSecure, and subsequently turned the RootSecure product offering into a vulnerability management service. On February 1, 2022, Arctic Wolf acquired Tetra Defense. In October 2023, Arctic Wolf acquired Revelstoke, a cybersecurity company. In December 2024, Arctic Wolf acquired Cylance from BlackBerry Limited. Funding In March 2020, following a $60M Series D round of funding, the company announced moving its headquarters from Sunnyvale, California to Eden Prairie, Minnesota in October 2020. In October 2020, Arctic Wolf announced a $200M Series E round of funding at a valuation of $1.3B. On July 19, 2021, Arctic Wolf secured $150M in Series F funding, tripling its valuation to $4.3B. References External links Software companies established in 2012 Network management Software companies of the United States American companies established in 2012 Computer security companies Information technology companies of the United States Security companies of the United States
Arctic Wolf Networks
[ "Engineering" ]
302
[ "Computer networks engineering", "Network management" ]
66,254,033
https://en.wikipedia.org/wiki/Methylol%20urea
Methylol urea is the organic compound with the formula H2NC(O)NHCH2OH. It is a white, water-soluble solid that decomposes near 110 °C. Methylolurea is the product of the condensation reaction of formaldehyde and urea. As such it is an intermediate in the formation of urea-formaldehyde resins as well as fertilizer compositions such as methylene diurea. It has also been investigated as a corrosion inhibitor. References Ureas
Methylol urea
[ "Chemistry" ]
109
[ "Organic compounds", "Ureas" ]
66,256,557
https://en.wikipedia.org/wiki/Nitrogen%20solubility%20index
The nitrogen solubility index (NSI) is a measure of the solubility of the protein in a substance. It is typically used as a quick measure of the functionality of a protein, for example to predict the ability of the protein to stabilise foams, emulsions, or gels. To determine the NSI, the sample is dried, dispersed in a 0.1 M salt solution, centrifuged, and filtered. The NSI is the amount of nitrogen in this filtered solution divided by the nitrogen in the initial sample, as measured by the Kjeldahl method. The relevance of the NSI rests on the fact that proteins are the major biological source of nitrogen: for various types of protein, there are empirical formulas which correlate the nitrogen content to the protein content. Other related measures of protein solubility are the Protein Solubility Index (PSI) and the Protein Dispersibility Index (PDI). These are based on a specific protein assay rather than a nitrogen assay, and the dispersibility index differs from the solubility index in that the sample is dispersed with a high-shear mixer and then strained through a screen instead of being centrifuged and filtered. References Protein methods Food analysis
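In code, the index is just the ratio of the two Kjeldahl nitrogen measurements, usually expressed as a percentage; the numbers below are hypothetical, purely to illustrate the arithmetic:

```python
def nitrogen_solubility_index(n_filtrate, n_sample):
    """NSI (%) = nitrogen recovered in the filtered extract divided
    by nitrogen in the initial sample, both measured by the Kjeldahl
    method and expressed on the same basis (e.g. g N / 100 g)."""
    return 100.0 * n_filtrate / n_sample

# Hypothetical sample: 1.2 g N/100 g recovered in the filtrate out of
# 1.6 g N/100 g in the initial sample gives an NSI of 75%.
print(nitrogen_solubility_index(1.2, 1.6))  # 75.0
```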
Nitrogen solubility index
[ "Chemistry", "Biology" ]
261
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Food analysis", "Food chemistry" ]
63,440,650
https://en.wikipedia.org/wiki/Equivalents%20of%20the%20Axiom%20of%20Choice
Equivalents of the Axiom of Choice is a book in mathematics, collecting statements in mathematics that are true if and only if the axiom of choice holds. It was written by Herman Rubin and Jean E. Rubin, and published in 1963 by North-Holland as volume 34 of their Studies in Logic and the Foundations of Mathematics series. An updated edition, Equivalents of the Axiom of Choice, II, was published as volume 116 of the same series in 1985. Topics At the time of the book's original publication, it was unknown whether the axiom of choice followed from the other axioms of Zermelo–Fraenkel set theory (ZF), or was independent of them, although it was known to be consistent with them from the work of Kurt Gödel. This book codified the project of classifying theorems of mathematics according to whether the axiom of choice was necessary in their proofs, or whether they could be proven without it. At approximately the same time as the book's publication, Paul Cohen proved that the negation of the axiom of choice is also consistent, implying that the axiom of choice, and all of its equivalent statements in this book, are indeed independent of ZF. The first edition of the book includes over 150 statements in mathematics that are equivalent to the axiom of choice, including some that are novel to the book. This edition is divided into two parts, the first involving notions expressed using sets and the second involving classes instead of sets. Within the first part, the topics are grouped into statements related to the well-ordering principle, the axiom of choice itself, trichotomy (the ability to compare cardinal numbers), and Zorn's lemma and related maximality principles. This section also includes three more chapters, on statements in abstract algebra, statements for cardinal numbers, and a final collection of miscellaneous statements. The second section has four chapters, on topics parallel to four of the first section's chapters. The book includes the history of each statement, and many proofs of their equivalence. Rather than ZF, it uses Von Neumann–Bernays–Gödel set theory for its proofs, mainly in a form called NBG0 that allows urelements (contrary to the axiom of extensionality) and also does not include the axiom of regularity. The second edition adds many additional equivalent statements, more than twice as many as the first edition, with an additional list of over 80 statements that are related to the axiom of choice but not known to be equivalent to it. It includes two added sections, one on equivalent statements that need the axioms of extensionality and regularity in their proofs of equivalence, and another on statements in topology, mathematical analysis, and mathematical logic. It also includes more recent developments on the independence of the axiom of choice, and an improved account of the history of Zorn's lemma. Audience and reception This book is written as a reference for professional mathematicians, especially those working in set theory. Reviewer Chen Chung Chang writes that it "will be useful both to the specialist in the field and to the general working mathematician", and that its presentation of results is "clear and lucid". By the time of the second edition, reviewers J. M. Plotkin and David Pincus both called this "the standard reference" in this area. References External links Equivalents of the Axiom of Choice, II at the Internet Archive Axiom of choice Mathematics books 1963 non-fiction books 1985 non-fiction books
Equivalents of the Axiom of Choice
[ "Mathematics" ]
724
[ "Axiom of choice", "Axioms of set theory", "Mathematical axioms" ]
63,442,098
https://en.wikipedia.org/wiki/Hyperbolastic%20functions
The hyperbolastic functions, also known as hyperbolastic growth models, are mathematical functions that are used in medical statistical modeling. These models were originally developed to capture the growth dynamics of multicellular tumor spheres, and were introduced in 2005 by Mohammad Tabatabai, David Williams, and Zoran Bursac. The precision of hyperbolastic functions in modeling real-world problems is somewhat due to their flexibility in their point of inflection. These functions can be used in a wide variety of modeling problems such as tumor growth, stem cell proliferation, pharmacokinetics, cancer growth, sigmoid activation functions in neural networks, and epidemiological disease progression or regression. The hyperbolastic functions can model both growth and decay curves until the modeled quantity reaches its carrying capacity. Due to their flexibility, these models have diverse applications in the medical field, with the ability to capture disease progression with an intervening treatment. Hyperbolastic functions can fit a sigmoidal curve, with the slowest rate occurring at the early and late stages. In addition to presenting sigmoidal shapes, they can also accommodate biphasic situations where medical interventions slow or reverse disease progression; but when the effect of the treatment vanishes, the disease will begin the second phase of its progression until it reaches its horizontal asymptote. One of the main characteristics of these functions is that they can not only fit sigmoidal shapes but also model biphasic growth patterns that other classical sigmoidal curves cannot adequately model. This distinguishing feature has advantageous applications in various fields including medicine, biology, economics, engineering, agronomy, and computer-aided system theory. Function H1 The hyperbolastic rate equation of type I, denoted H1, is given by where is any real number and is the population size at . The parameter represents carrying capacity, and parameters and jointly represent growth rate. The parameter gives the distance from a symmetric sigmoidal curve. Solving the hyperbolastic rate equation of type I for gives where is the inverse hyperbolic sine function. If one desires to use the initial condition , then can be expressed as . If , then reduces to . In the event that a vertical shift is needed to give a better model fit, one can add the shift parameter , which would result in the following formula . The hyperbolastic function of type I generalizes the logistic function. If the parameters , then it would become a logistic function. This function is a hyperbolastic function of type I. The standard hyperbolastic function of type I is . Function H2 The hyperbolastic rate equation of type II, denoted by H2, is defined as where is the hyperbolic tangent function, is the carrying capacity, and both and jointly determine the growth rate. In addition, the parameter represents acceleration in the time course. Solving the hyperbolastic rate function of type II for gives . If one desires to use initial condition then can be expressed as . If , then reduces to . Similarly, in the event that a vertical shift is needed to give a better fit, one can use the following formula . The standard hyperbolastic function of type II is defined as . Function H3 The hyperbolastic rate equation of type III is denoted by H3 and has the form , where > 0. The parameter represents the carrying capacity, and the parameters and jointly determine the growth rate.
The parameter represents acceleration of the time scale, while the size of represents distance from a symmetric sigmoidal curve. The solution to the differential equation of type III is , with the initial condition we can express as . The hyperbolastic distribution of type III is a three-parameter family of continuous probability distributions with scale parameters > 0, and ≥ 0 and parameter as the shape parameter. When the parameter = 0, the hyperbolastic distribution of type III is reduced to the Weibull distribution. The hyperbolastic cumulative distribution function of type III is given by , and its corresponding probability density function is . The hazard function (or failure rate) is given by The survival function is given by The standard hyperbolastic cumulative distribution function of type III is defined as , and its corresponding probability density function is . Properties If one desires to calculate the point where the population reaches a percentage of its carrying capacity , then one can solve the equation for , where . For instance, the half point can be found by setting . Applications According to stem cell researchers at the McGowan Institute for Regenerative Medicine at the University of Pittsburgh, "a newer model [called the hyperbolastic type III or] H3 is a differential equation that also describes the cell growth. This model allows for much more variation and has been proven to better predict growth." The hyperbolastic growth models H1, H2, and H3 have been applied to analyze the growth of solid Ehrlich carcinoma using a variety of treatments. In animal science, the hyperbolastic functions have been used for modeling broiler chicken growth. The hyperbolastic model of type III was used to determine the size of the recovering wound. In the area of wound healing, the hyperbolastic models accurately represent the time course of healing. Such functions have been used to investigate variations in the healing velocity among different kinds of wounds and at different stages in the healing process, taking into consideration the areas of trace elements, growth factors, diabetic wounds, and nutrition. Another application of hyperbolastic functions is in the area of the stochastic diffusion process, whose mean function is a hyperbolastic curve. The main characteristics of the process are studied and the maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development. A sample path of a diffusion process models the trajectory of a particle embedded in a flowing fluid and subjected to random displacements due to collisions with other particles, which is called Brownian motion. The hyperbolastic function of type III was used to model the proliferation of both adult mesenchymal and embryonic stem cells, and the hyperbolastic mixed model of type II has been used in modeling cervical cancer data. Hyperbolastic curves can be an important tool in analyzing cellular growth, the fitting of biological curves, the growth of phytoplankton, and instantaneous maturity rate. In forest ecology and management, the hyperbolastic models have been applied to model the relationship between diameter at breast height (DBH) and height.
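Returning to the Properties paragraph above: because the displayed formulas were lost in transcription, the sketch below is illustrative only. The h1 function assumes one plausible parameterization of the type I curve (a logistic curve with an extra θ·arcsinh(t) rate term, consistent with the statements that H1 involves the inverse hyperbolic sine and reduces to the logistic when θ = 0; it should be checked against Tabatabai, Williams, and Bursac (2005) before any real use), and half_point finds the time at which P(t) = M/2 numerically, as described in the Properties paragraph:

```python
import numpy as np

def h1(t, M=1.0, alpha=1.0, delta=1.0, theta=0.5):
    # Assumed H1 form -- verify against the original paper: a
    # logistic curve augmented with a theta * arcsinh(t) rate term.
    return M / (1.0 + alpha * np.exp(-delta * t - theta * np.arcsinh(t)))

def half_point(f, M, lo=-50.0, hi=50.0, tol=1e-10):
    """Bisection for the time t at which f(t) = M/2; assumes f is
    increasing on [lo, hi] and that the target is bracketed."""
    target = M / 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(half_point(h1, 1.0))  # ~0 for these defaults, since h1(0) = 1/2
```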
The multivariable hyperbolastic model type III has been used to analyze the growth dynamics of phytoplankton taking into consideration the concentration of nutrients. Hyperbolastic regressions Hyperbolastic regressions are statistical models that utilize standard hyperbolastic functions to model a dichotomous or multinomial outcome variable. The purpose of hyperbolastic regression is to predict an outcome using a set of explanatory (independent) variables. These types of regressions are routinely used in many areas including medical, public health, dental, biomedical, as well as social, behavioral, and engineering sciences. For instance, binary regression analysis has been used to predict endoscopic lesions in iron deficiency anemia. In addition, binary regression was applied to differentiate between malignant and benign adnexal mass prior to surgery. The binary hyperbolastic regression of type I Let be a binary outcome variable which can assume one of two mutually exclusive values, success or failure. If we code success as and failure as , then for parameter , the hyperbolastic success probability of type I with a sample of size as a function of parameter and parameter vector given a -dimensional vector of explanatory variables is defined as , where , is given by . The odds of success is the ratio of the probability of success to the probability of failure. For binary hyperbolastic regression of type I, the odds of success is denoted by and expressed by the equation . The logarithm of is called the logit of binary hyperbolastic regression of type I. The logit transformation is denoted by and can be written as . Shannon information for binary hyperbolastic of type I (H1) The Shannon information for the random variable is defined as where the base of logarithm and . For binary outcome, is equal to . For the binary hyperbolastic regression of type I, the information is given by , where , and is the input data. For a random sample of binary outcomes of size , the average empirical information for hyperbolastic H1 can be estimated by , where , and is the input data for the observation. Information Entropy for hyperbolastic H1 Information entropy measures the loss of information in a transmitted message or signal. In machine learning applications, it is the number of bits necessary to transmit a randomly selected event from a probability distribution. For a discrete random variable , the information entropy is defined as where is the probability mass function for the random variable . The information entropy is the mathematical expectation of with respect to probability mass function . The Information entropy has many applications in machine learning and artificial intelligence such as classification modeling and decision trees. For the hyperbolastic H1, the entropy is equal to The estimated average entropy for hyperbolastic H1 is denoted by and is given by Binary Cross-entropy for hyperbolastic H1 The binary cross-entropy compares the observed with the predicted probabilities. The average binary cross-entropy for hyperbolastic H1 is denoted by and is equal to The binary hyperbolastic regression of type II The hyperbolastic regression of type II is an alternative method for the analysis of binary data with robust properties. 
For the binary outcome variable , the hyperbolastic success probability of type II is a function of a -dimensional vector of explanatory variables given by , For the binary hyperbolastic regression of type II, the odds of success are denoted by and defined as The logit transformation is given by Shannon information for binary hyperbolastic of type II (H2) For the binary hyperbolastic regression H2, the Shannon information is given by where , and is the input data. For a random sample of binary outcomes of size , the average empirical information for hyperbolastic H2 is estimated by where , and is the input data for the observation. Information Entropy for hyperbolastic H2 For the hyperbolastic H2, the information entropy is equal to and the estimated average entropy for hyperbolastic H2 is Binary Cross-entropy for hyperbolastic H2 The average binary cross-entropy for hyperbolastic H2 is Parameter estimation for the binary hyperbolastic regression of type I and II The estimate of the parameter vector can be obtained by maximizing the log-likelihood function where is defined according to one of the two types of hyperbolastic functions used. The multinomial hyperbolastic regression of type I and II The generalization of the binary hyperbolastic regression to multinomial hyperbolastic regression has a response variable for individual with categories (i.e. ). When , this model reduces to a binary hyperbolastic regression. For each , we form indicator variables where , meaning that whenever the response is in category and otherwise. Define parameter vector in a -dimensional Euclidean space and . Using category 1 as a reference and as its corresponding probability function, the multinomial hyperbolastic regression of type I probabilities are defined as and for , Similarly, for the multinomial hyperbolastic regression of type II we have and for , where with and . The choice of is dependent on the choice of hyperbolastic H1 or H2. Shannon Information for multiclass hyperbolastic H1 or H2 For the multiclass , the Shannon information is . For a random sample of size , the empirical multiclass information can be estimated by . Multiclass Entropy in Information Theory For a discrete random variable , the multiclass information entropy is defined as where is the probability mass function for the multiclass random variable . For the hyperbolastic H1 or H2, the multiclass entropy is equal to The estimated average multiclass entropy is equal to Multiclass Cross-entropy for hyperbolastic H1 or H2 Multiclass cross-entropy compares the observed multiclass output with the predicted probabilities. For a random sample of multiclass outcomes of size , the average multiclass cross-entropy for hyperbolastic H1 or H2 can be estimated by The log-odds of membership in category versus the reference category 1, denoted by , is equal to where and . The estimated parameter matrix of multinomial hyperbolastic regression is obtained by maximizing the log-likelihood function. The maximum likelihood estimates of the parameter matrix are References Medical models Population models Special functions
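The information-theoretic quantities used in the regression sections are the standard ones, so they can be computed generically; as a sketch, the average binary cross-entropy between observed outcomes and predicted success probabilities (whether those probabilities come from an H1, H2, or ordinary logistic link) is:

```python
import numpy as np

def binary_cross_entropy(y, p):
    """Average binary cross-entropy between observed outcomes y in
    {0, 1} and predicted success probabilities p. The formula is the
    standard one and does not depend on which link function produced
    the probabilities; clipping avoids log(0)."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0 - 1e-12)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

print(binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.7]))
```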
Hyperbolastic functions
[ "Mathematics" ]
2,619
[ "Special functions", "Combinatorics" ]
63,442,371
https://en.wikipedia.org/wiki/Regulation%20of%20algorithms
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but it is also challenging. Another emerging topic is the regulation of blockchain algorithms (the use of smart contracts must be regulated), which is mentioned along with the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting into the realm of AI algorithms due to technological progress. The motivation for regulating algorithms is the apprehension of losing control over the algorithms, whose impact on human life increases. Multiple countries have already introduced regulations for automated credit score calculation—a right to explanation is mandatory for those algorithms. The IEEE has begun developing a new standard to explicitly address ethical issues and the values of potential future users. Bias, transparency, and ethics concerns have emerged with respect to the use of algorithms in diverse domains ranging from criminal justice to healthcare—many fear that artificial intelligence could replicate existing social inequalities along race, class, gender, and sexuality lines. Regulation of artificial intelligence Public discussion In 2016, Joy Buolamwini founded the Algorithmic Justice League after a personal experience with biased facial detection software in order to raise awareness of the social implications of artificial intelligence through art and research. In 2017, Elon Musk advocated regulation of algorithms in the context of the existential risk from artificial general intelligence. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that artificial intelligence is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest instead developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. One suggestion has been for the development of a global governance board to regulate AI development. In 2020, the European Union published its draft strategy paper for promoting and regulating AI. Algorithmic tacit collusion is a legally dubious antitrust practice committed by means of algorithms, which the courts are not able to prosecute.
This danger concerns scientists and regulators in the EU, the US, and beyond. European Commissioner Margrethe Vestager mentioned an early example of algorithmic tacit collusion in her speech on "Algorithms and Collusion" on March 16, 2017, described as follows: "A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival's price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy." In 2018, the Netherlands employed an algorithmic system, SyRI (Systeem Risico Indicatie), to detect citizens perceived to be at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators. This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR). In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm." This protest was successful and the grades were taken back. Implementation AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national, and international levels and in fields from public service management to law enforcement, the financial sector, robotics, the military, and international law. There are many concerns that there is not enough visibility and monitoring of AI in these sectors. In the United States financial sector, for example, there have been calls for the Consumer Financial Protection Bureau to more closely examine source code and algorithms when conducting audits of financial institutions' non-public data. In the United States, on January 7, 2019, following an Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI. In April 2016, for the first time in more than two decades, the European Parliament adopted a set of comprehensive regulations for the collection, storage, and use of personal information, the General Data Protection Regulation (GDPR). The GDPR's policy on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design. In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation.
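Returning to the pricing anecdote above: the spiral Vestager described is easy to reproduce. The sketch below hard-codes the two rules from her account, one seller matching its rival and the other pricing 27% higher, with made-up starting prices:

```python
# Two-seller pricing feedback loop from the Vestager anecdote.
price_a, price_b = 40.0, 45.0  # hypothetical starting prices
for day in range(30):
    price_a = price_b           # seller A's algorithm matches its rival
    price_b = 1.27 * price_a    # seller B's algorithm prices 27% higher
print(round(price_a, 2), round(price_b, 2))
# With no sanity check, prices grow by 27% per cycle without bound.
```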
In the United States, steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. In 2017, the U.K. Vehicle Technology and Aviation Bill imposed liability on the owner of an uninsured automated vehicle when driving itself and made provisions for cases where the owner has made "unauthorized alterations" to the vehicle or failed to update its software. Further ethical issues arise when, e.g., a self-driving car swerves to avoid a pedestrian and causes a fatal accident. In 2021, the European Commission proposed the Artificial Intelligence Act. Algorithm certification There is a concept of algorithm certification emerging as a method of regulating algorithms. Algorithm certification involves auditing whether the algorithm used during the life cycle 1) conforms to the protocoled requirements (e.g., for correctness, completeness, consistency, and accuracy); 2) satisfies the standards, practices, and conventions; and 3) solves the right problem (e.g., correctly models physical laws), and satisfies the intended use and user needs in the operational environment. Regulation of blockchain algorithms Blockchain systems provide transparent and immutable records of transactions and thereby conflict with the goal of the European GDPR, which is to give individuals full control of their private data. By implementing the Decree on Development of Digital Economy, Belarus became the first-ever country to legalize smart contracts. Belarusian lawyer Denis Aleinikov is considered to be the author of the smart contract legal concept introduced by the decree. There are strong arguments that existing US state laws are already a sound basis for the enforceability of smart contracts; nevertheless, Arizona, Nevada, Ohio, and Tennessee have amended their laws specifically to allow for the enforceability of blockchain-based contracts. Regulation of robots and autonomous algorithms There have been proposals to regulate robots and autonomous algorithms. These include: the South Korean Government's proposal in 2007 of a Robot Ethics Charter; a 2011 proposal from the U.K. Engineering and Physical Sciences Research Council of five ethical "principles for designers, builders, and users of robots"; the Association for Computing Machinery's seven principles for algorithmic transparency and accountability, published in 2017. In popular culture In 1942, author Isaac Asimov addressed regulation of algorithms by introducing the fictional Three Laws of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. The main alternative to regulation is a ban, and the banning of algorithms is presently highly unlikely. However, in Frank Herbert's Dune universe, thinking machines is a collective term for artificial intelligence, which were completely destroyed and banned after a revolt known as the Butlerian Jihad: JIHAD, BUTLERIAN: (see also Great Revolt) — the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as "Thou shalt not make a machine in the likeness of a human mind."
See also Algorithmic transparency Algorithmic accountability Artificial intelligence Artificial intelligence arms race Artificial intelligence in government Ethics of artificial intelligence Government by algorithm Privacy law References Computer law Existential risk from artificial general intelligence Algorithms Blockchains Regulation of technologies Regulation of artificial intelligence
Regulation of algorithms
[ "Mathematics", "Technology" ]
1,983
[ "Existential risk from artificial general intelligence", "Regulation of artificial intelligence", "Algorithms", "Mathematical logic", "Applied mathematics", "Computer law", "Computing and society" ]
74,947,355
https://en.wikipedia.org/wiki/Corydalis%20Alkaloids
Corydalis Alkaloids are categorized as natural products of the isoquinoline alkaloid type. Occurrence Corydalis alkaloids are primarily located within the roots of Corydalis cava and various other Corydalis species. Representatives The representatives of Corydalis alkaloids include d-tetrahydrocoptisine (also known as d- or (+)-stylopine), d-canadine, and hydrohydrastinine. Properties Corydalis alkaloids exhibit certain narcotic and muscle-paralyzing effects. Historically, the powdered rhizomes of Corydalis alkaloid-containing plants enjoyed popularity as a vermifuge and menstrual stimulant. References Alkaloids
Corydalis Alkaloids
[ "Chemistry" ]
163
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
77,921,851
https://en.wikipedia.org/wiki/Disordered%20local%20moment%20picture
The disordered local moment (DLM) picture is a method, in condensed matter physics, for describing the electronic structure of a magnetic material at a finite temperature, where a probability distribution of sizes and orientations of atomic magnetic moments must be considered. It was pioneered, among others, by Balázs Győrffy, Julie Staunton, Malcolm Stocks, and co-workers. The underlying assumption of the DLM picture is similar to the Born–Oppenheimer approximation for the separation of the solution of the ionic and electronic problems in a material. In the disordered local moment picture, it is assumed that the 'local' magnetic moments which form around atoms are sufficiently long-lived that the electronic problem can be solved for an assumed, fixed distribution of magnetic moments. Many such distributions can then be averaged over, appropriately weighted by their probabilities, and a description of the paramagnetic state obtained. (A paramagnetic state is one where the magnetic order parameter is equal to the zero vector.) The picture is typically based on density functional theory (DFT) calculations of the electronic structure of materials. Most frequently, DLM calculations employ either the Korringa–Kohn–Rostoker (KKR) (sometimes referred to as multiple scattering theory) or linearised muffin-tin orbital (LMTO) formulations of DFT, where the coherent potential approximation (CPA) can be used to average over multiple orientations of magnetic moment. However, the picture has also been applied in the context of supercells containing appropriate distributions of magnetic moment orientations. Though originally developed as a means by which to describe the electronic structure of a magnetic material above its magnetic critical temperature (Curie temperature), it has since been applied in a number of other contexts. These include precise calculation of Curie temperatures and magnetic correlation functions for transition metals, rare-earth elements, and transition metal oxides, as well as a description of the temperature dependence of magnetocrystalline anisotropy. The approach has found particular success in describing the temperature dependence of magnetic quantities of interest in rare earth-transition metal permanent magnets such as SmCo5 and Nd2Fe14B, which are of interest for a range of energy generation and conversion technologies. References Condensed matter physics
Disordered local moment picture
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
473
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
77,925,093
https://en.wikipedia.org/wiki/Perovskite%20light-emitting%20diode
Perovskite light-emitting diodes (PeLEDs) are candidates for display and lighting technologies. Researchers have shown interest in PeLEDs owing to their narrow emission bandwidth, adjustable spectrum, high color purity, and suitability for solution-based fabrication. Green PeLEDs PeLEDs have not surpassed the efficiency of commercial organic light-emitting diodes (OLEDs) because specific critical parameters, such as charge carrier transport and optical output coupling efficiency, have not been optimized. The development of efficient green PeLEDs with an external quantum efficiency (EQE) exceeding 30% was reported by Bai and his colleagues on May 29, 2023. This achievement was made possible by adjustments to charge carrier transport and the distribution of near-field light. These optimizations resulted in a light output coupling efficiency of 41.82%. The modified green PeLED structure achieved a record external quantum efficiency of 30.84% at a brightness level of 6514 cd/m2. This work introduced an approach to building ultra-efficient PeLEDs by balancing electron-hole recombination and enhancing light outcoupling. Expanding the effective area of perovskite LEDs can decrease their performance. Sun et al. introduced L-methionine (NVAL) to construct an intermediate phase with low formation enthalpy and COO− coordination. This new intermediate phase altered the crystallization pathway, effectively inhibiting phase segregation. Consequently, high-quality large-area quasi-2D perovskite films were achieved. They further fine-tuned the film's composite dynamics, leading to high-efficiency quasi-2D perovskite green LEDs with an effective area of 9.0 cm2. An external quantum efficiency (EQE) of 16.4% was attained at <n> = 3, making it the most efficient large-area perovskite LED. Moreover, a luminance of 9.1×104 cd/m2 was achieved in the <n> = 10 films. Blue PeLEDs On March 16, 2023, Zhou et al. published a study demonstrating their successful control of ion behavior to create highly efficient sky-blue perovskite light-emitting diodes. They achieved this by utilizing a bifunctional passivator, which consisted of Lewis base benzoic acid anions and alkali metal cations. This passivator had a dual role: it effectively passivated the deficient lead atoms while inhibiting the migration of halide ions. The outcome of this approach was an efficient perovskite LED that emitted light at a stable wavelength of 483 nm. The LED exhibited a commendable external quantum efficiency (EQE) of 16.58%, with a peak EQE reaching 18.65%. Through optical coupling enhancement, the EQE was further boosted to 28.82%. Red PeLEDs One of the most crucial aspects of lighting and display technology is the efficient generation of red emission. Quasi-2D perovskites have demonstrated potential for high emission efficiency due to robust carrier confinement. However, the external quantum efficiencies (EQE) of most red quasi-2D PeLEDs are not optimal due to the presence of different n-value phases within complex quasi-2D perovskite films. To address this challenge, Jiang et al. published their findings in Advanced Materials on July 20, 2022. Their research focused on strategically incorporating large cations to enhance the efficiency of red-light perovskite LEDs.
By introducing phenethylammonium iodide (PEAI)/3-fluorophenylethylammonium iodide (m-F-PEA) and 1-naphthylmethylammonium iodide (NMAI), they achieved precise control over the phase distribution of quasi-2D perovskite materials. This approach effectively reduced the prevalence of smaller n-index phases and concurrently addressed lead and halide defects in the perovskite films. The outcome of this research was the development of perovskite LEDs capable of achieving an EQE of 25.8% at 680 nm, accompanied by a peak brightness of 1300 cd/m2. White PeLEDs High-performance white perovskite LEDs with high light extraction efficiency can be constructed through near-field optical coupling. The near-field optical coupling between a blue perovskite diode and red perovskite nanocrystals was achieved with a carefully designed multi-layer translucent electrode (LiF/Al/Ag/LiF). The red perovskite nanocrystal layer allows the waveguide modes and surface plasmon polariton modes trapped in the blue perovskite diode to be extracted and converted into red light emission, increasing the light extraction efficiency by 50%. At the same time, the complementary emission spectra of blue photons and down-converted red photons contribute to the formation of white LEDs. Finally, the external quantum efficiency exceeds 12% and the brightness exceeds 2000 cd/m2, both records for white PeLEDs. Lifetime Preparing high-quality all-inorganic perovskite films through solution-based methods remains a formidable challenge, primarily attributed to the rapid and uncontrollable crystallization of such materials. The key innovation involved controlling the crystal orientation of the all-inorganic perovskite along the (110) plane through a low-temperature annealing process (35-40 °C). This precise control led to the orderly stacking of crystals, which significantly increased surface coverage and reduced defects within the material. After thorough optimization, the well-oriented CsPbBr3 perovskite LED achieved an external quantum efficiency (EQE) of up to 16.45%, a remarkable brightness of 79,932 cd/m2, and a lifespan of 136 hours at an initial brightness of 100 cd/m2. On September 20, 2021, the team led by Sargent at the University of Toronto published research findings in the Journal of the American Chemical Society (JACS) on bright and stable light-emitting diodes (LEDs) based on perovskite quantum dots within a perovskite matrix. The research reported that perovskite quantum dots remain stable in a perovskite precursor-solution thin film and drive the uniform crystallization of the perovskite matrix, with the strained quantum dots acting as nucleation centers. The type I band alignment ensures that the quantum dots act as charge acceptors and radiative emitters. The new material exhibits suppressed biexciton Auger recombination and bright luminescence even at high excitation (600 W/cm2). The red LEDs based on the new material demonstrate an external quantum efficiency of 18% and maintain high performance at a brightness exceeding 4700 cd/m2. The new material extends the LED's operating half-life to 2400 hours at an initial brightness of 100 cd/m2. References Solid state engineering
Perovskite light-emitting diode
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,472
[ "Electronic engineering", "Solid state engineering", "Condensed matter physics" ]
77,929,507
https://en.wikipedia.org/wiki/Peak%20power
Peak power refers to the maximum of the instantaneous power waveform, which, for a sine wave, is always twice the average power. For other waveforms, the relationship between peak power and average power is the peak-to-average power ratio (PAPR). Because peak power is always higher than the average power figure, it has been tempting to quote it in advertising without context, making it look as though an amplifier has twice the power of its competitors. Peak power is a fundamental concept in electrical engineering, relevant to various types of waveforms, including alternating current (AC) and other signal forms. It represents the maximum instantaneous power level that a system can handle or produce. The peak power of an amplifier is determined by the voltage rails and the maximum amount of current its electronic components can handle for an instant without damage. This characterizes the ability of equipment to handle quickly changing power levels, as many audio signals have a highly dynamic nature. Radio frequency In radio frequency (RF) engineering and telecommunications, peak power is the highest power level that a transmitter can achieve during its operation. Unlike average power, which is the mean power output over a period, peak power represents the maximum power output at any given instant. This distinction is crucial in applications where signal peaks can significantly exceed the average power level, and understanding peak power is essential for designing and operating efficient and effective communication systems. Importance Peak power plays a crucial role in the design and operation of transmitters, affecting signal integrity, system performance, and component reliability. Signal Integrity: High peak power ensures that the transmitted signal can overcome noise and interference, maintaining signal integrity over long distances. System Performance: In systems like radar and communication transmitters, peak power is vital for achieving the desired range and clarity. Component Stress: Understanding peak power helps in designing components that can withstand these power levels without damage. Measurement of Peak Power Measuring peak power involves capturing the highest power level within a specified time frame. This can be done using specialized equipment such as peak power meters, which can accurately track and record these peaks. The measurement process must account for various factors, including signal type and modulation. Applications of Peak Power Radar Systems: In radar systems, peak power determines the maximum range and resolution. Higher peak power allows for better detection and imaging of distant objects. Communication Systems: In communication systems, peak power ensures that signals can be transmitted over long distances without significant loss of quality. Broadcasting: In broadcasting, peak power is crucial for maintaining signal strength and quality, especially in areas with high interference. Challenges and Considerations Heat Dissipation: High peak power levels can generate significant heat, requiring efficient cooling systems to prevent damage.
Challenges and Considerations Heat Dissipation: High peak power levels can generate significant heat, requiring efficient cooling systems to prevent damage. Intermodulation Distortion: Non-linearities in the transmitter can cause intermodulation distortion, affecting signal quality. Proper design and calibration are necessary to minimize these effects. Regulatory Compliance: Transmitters must comply with regulatory limits on peak power to avoid interference with other communication systems. References External links Definition of peak-to-average ratio – ATIS (Alliance for Telecommunications Industry Solutions) Telecom Glossary 2K Definition of crest factor – ATIS (Alliance for Telecommunications Industry Solutions) Telecom Glossary 2K Peak-to-average power ratio (PAPR) of OFDM systems - tutorial Waveforms Power (physics)
Peak power
[ "Physics", "Mathematics" ]
723
[ "Physical phenomena", "Force", "Physical quantities", "Quantity", "Waves", "Energy (physics)", "Power (physics)", "Wikipedia categories named after physical quantities", "Waveforms" ]
67,752,523
https://en.wikipedia.org/wiki/Jury%20theorem
A jury theorem is a mathematical theorem proving that, under certain assumptions, a decision attained using majority voting in a large group is more likely to be correct than a decision attained by a single expert. It serves as a formal argument for the idea of wisdom of the crowd, for the decision of questions of fact by jury trial, and for democracy in general. The first and most famous jury theorem is Condorcet's jury theorem. It assumes that all voters have independent probabilities of voting for the correct alternative, that these probabilities are larger than 1/2, and that they are the same for all voters. Under these assumptions, the probability that the majority decision is correct is strictly larger when the group is larger; and when the group size tends to infinity, the probability that the majority decision is correct tends to 1. There are many other jury theorems, relaxing some or all of these assumptions. Setting The premise of all jury theorems is that there is an objective truth, which is unknown to the voters. Most theorems focus on binary issues (issues with two possible states), for example, whether a certain defendant is guilty or innocent, whether a certain stock is going to rise or fall, etc. There are n voters (or jurors), and their goal is to reveal the truth. Each voter has an opinion about which of the two options is correct. The opinion of each voter is either correct (i.e., equals the true state), or wrong (i.e., differs from the true state). This is in contrast to other settings of voting, in which the opinion of each voter represents his/her subjective preferences and is thus always "correct" for this specific voter. The opinion of a voter can be considered a random variable: for each voter, there is a positive probability that his opinion equals the true state. The group decision is determined by the majority rule. For example, if a majority of voters says "guilty" then the decision is "guilty", while if a majority says "innocent" then the decision is "innocent". To avoid ties, it is often assumed that the number of voters n is odd. Alternatively, if n is even, then ties are broken by tossing a fair coin. Jury theorems concern the probability of correctness: the probability that the majority decision coincides with the objective truth. Typical jury theorems make two kinds of claims on this probability: Growing Reliability: the probability of correctness is larger when the group is larger. Crowd Infallibility: the probability of correctness goes to 1 when the group size goes to infinity. Claim 1 is often called the non-asymptotic part and claim 2 is often called the asymptotic part of the jury theorem. Obviously, these claims are not always true, but they are true under certain assumptions on the voters. Different jury theorems make different assumptions. Independence, competence, and uniformity Condorcet's jury theorem makes the following three assumptions: Unconditional Independence: the voters make up their minds independently. In other words, their opinions are independent random variables. Unconditional Competence: the probability that the opinion of a single voter coincides with the objective truth is larger than 1/2 (i.e., the voter is smarter than a random coin-toss). Uniformity: all voters have the same probability of being correct. The jury theorem of Condorcet says that these three assumptions imply Growing Reliability and Crowd Infallibility.
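Both claims are easy to verify for the binomial model these assumptions induce. The following is a minimal sketch (an illustration, not part of the original article), computing the exact probability that a strict majority of n independent voters, each correct with probability p, reaches the correct decision:

```python
from math import comb

# A sketch of Condorcet's setting: n independent voters, each correct with
# the same probability p > 1/2, simple majority rule, n odd so no ties occur.

def majority_correct(n, p):
    """Exact probability that a strict majority of n voters is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.6
for n in [1, 3, 11, 51, 201]:
    print(f"n = {n:3d}:  P(majority correct) = {majority_correct(n, p):.4f}")
```

With p = 0.6 the printed values increase monotonically (0.6000 for n = 1, 0.6480 for n = 3, and so on toward 1), illustrating both Growing Reliability and Crowd Infallibility.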
Correlated votes: weakening the independence assumption The opinions of different voters are often correlated, so Unconditional Independence may not hold. In this case, the Growing Reliability claim might fail. Example Let p be the probability of a juror voting for the correct alternative and c be the (second-order) correlation coefficient between any two correct votes. If all higher-order correlation coefficients in the Bahadur representation of the joint probability distribution of votes equal zero, and (p, c) is an admissible pair, then the probability of the jury collectively reaching the correct decision under simple majority is given by a closed-form expression in n, p and c involving the regularized incomplete beta function. For example, in a jury of three jurors (n = 3) whose individual competence p is only slightly above 1/2 and whose second-order correlation c is large (but still admissible), the competence of the jury is lower than the competence of a single juror, which equals p; moreover, enlarging the jury by two jurors decreases the jury competence even further. Note that not every pair (p, c) is admissible: for given n and p, there is a maximum admissible value of the second-order correlation coefficient. The above example shows that when the individual competence is low but the correlation is high: The collective competence under simple majority may fall below that of a single juror; Enlarging the jury may decrease its collective competence. The above result is due to Kaniovski and Zaigraev. They also discuss optimal jury design for homogeneous juries with correlated votes. There are several jury theorems that weaken the Independence assumption in various ways. Truth-sensitive independence and competence In binary decision problems, there is often one option that is easier to detect than the other. For example, it may be easier to detect that a defendant is guilty (as there is clear evidence for guilt) than to detect that he is innocent. In this case, the probability that the opinion of a single voter is correct is represented by two different numbers: the probability given that option #1 is correct, and the probability given that option #2 is correct. This also implies that opinions of different voters are correlated. This motivates the following relaxations of the above assumptions: Conditional Independence: for each of the two options, the voters' opinions given that this option is the true one are independent random variables. Conditional Competence: for each of the two options, the probability that a single voter's opinion is correct given that this option is true is larger than 1/2. Conditional Uniformity: for each of the two options, all voters have the same probability of being correct given that this option is true. Growing Reliability and Crowd Infallibility continue to hold under these weaker assumptions. One criticism of Conditional Competence is that it depends on the way the decision question is formulated. For example, instead of asking whether the defendant is guilty or innocent, one can ask whether the defendant is guilty of exactly 10 charges (option A), or guilty of another number of charges (0–9, or 11 or more). This changes the conditions, and hence, the conditional probability. Moreover, if the state is very specific, then the probability of voting correctly might be below 1/2, so Conditional Competence might not hold.
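Returning to the correlated-votes example above, the failure of Crowd Infallibility under correlation can be illustrated with a toy common-cause model (my own construction, deliberately simpler than the Bahadur representation used by Kaniovski and Zaigraev): with probability c all n jurors copy a single shared vote, otherwise they vote independently; each vote is then marginally correct with probability p, and any two votes have correlation exactly c.

```python
from math import comb

# A toy common-cause model (my own construction, not the Bahadur setup
# cited above): with probability c all n jurors copy one shared vote that
# is correct with probability p; otherwise each juror votes independently,
# correct with probability p.

def independent_majority(n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def correlated_majority(n, p, c):
    return c * p + (1 - c) * independent_majority(n, p)

p, c = 0.55, 0.4
for n in [1, 3, 5, 101, 1001]:
    print(f"n = {n:4d}:  P(correct) = {correlated_majority(n, p, c):.4f}")
# The values approach c*p + (1 - c) = 0.82, not 1: Crowd Infallibility fails.
# (This toy model never drops below a single juror's competence; producing
# that stronger effect requires the second-order Bahadur setup cited above.)
```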
Effect of an opinion leader Another cause of correlation between voters is the existence of an opinion leader. Suppose each voter makes an independent decision, but then each voter, with some fixed probability, changes his opinion to match that of the opinion leader. Jury theorems by Boland and by Boland, Proschan and Tong show that, if (and only if) the probability of following the opinion leader is less than 1 − 1/(2p) (where p is the competence level of all voters), then Crowd Infallibility holds.
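This threshold can be checked by simulation. The sketch below is my own Monte Carlo rendering of the opinion-leader setting (the function name and parameters are illustrative):

```python
import numpy as np

# Each of n voters forms an independent opinion, correct with probability p;
# with probability q a voter discards it and copies an opinion leader, whose
# own opinion is correct with probability p. Crowd Infallibility should hold
# iff q < 1 - 1/(2p); for p = 0.6 the threshold is 1 - 1/1.2 = 1/6 ≈ 0.167.

rng = np.random.default_rng(1)

def majority_correct_rate(n, p, q, trials=10_000):
    own = rng.random((trials, n)) < p        # independent opinions
    leader = rng.random((trials, 1)) < p     # leader's opinion, one per trial
    follow = rng.random((trials, n)) < q     # which voters copy the leader
    votes = np.where(follow, leader, own)    # True = this vote is correct
    return (votes.sum(axis=1) > n / 2).mean()

p, n = 0.6, 1001
for q in [0.05, 0.10, 0.20, 0.40]:
    print(f"q = {q:.2f}:  P(majority correct) ≈ {majority_correct_rate(n, p, q):.3f}")
# Below the threshold the probability approaches 1 as n grows; above it, the
# majority follows the leader whenever he errs, so the probability stays near p.
```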
Problem-sensitive independence and competence In addition to the dependence on the true option, there are many other reasons for which voters' opinions may be correlated. For example: Deliberation among voters; Peer pressure; False evidence (e.g. a guilty defendant who excels at pretending to be innocent); External conditions (e.g. poor weather affecting their judgement); Any other common cause of votes. It is possible to weaken the Conditional Independence assumption, and conditionalize on all common causes of the votes (rather than just the state). In other words, the votes are now independent conditioned on the specific decision problem. However, in a specific problem, the Conditional Competence assumption may not be valid. For example, in a specific problem with false evidence, it is likely that most voters will have a wrong opinion. Thus, the two assumptions (conditional independence and conditional competence) are not justifiable simultaneously (under the same conditionalization). A possible solution is to weaken Conditional Competence as follows. For each voter and each problem x, there is a probability p(x) that the voter's opinion is correct in this specific problem. Since x is a random variable, p(x) is a random variable too. Conditional Competence requires that p(x) > 1/2 with probability 1. The weakened assumption is: Tendency to Competence: for each voter, and for each r > 0, the probability that p(x) = 1/2 + r is at least as large as the probability that p(x) = 1/2 − r. A jury theorem by Dietrich and Spiekermann says that Conditional Independence, Tendency to Competence, and Conditional Uniformity together imply Growing Reliability. Note that Crowd Infallibility is not implied; in fact, the probability of correctness tends to a value below 1 if and only if Conditional Competence does not hold. Bounded correlation A jury theorem by Pivato shows that, if the average covariance between voters becomes small as the population becomes large, then Crowd Infallibility holds (for some voting rule). There are other jury theorems that take into account the degree to which votes may be correlated. Other solutions Other ways to cope with voter correlation include causal networks, dependence structures, and interchangeability. Diverse capabilities: weakening the uniformity assumption Different voters often have different competence levels, so the Uniformity assumption does not hold. In this case, both Growing Reliability and Crowd Infallibility may fail to hold. This may happen if new voters have much lower competence than existing voters, so that adding new voters decreases the group's probability of correctness. In some cases, the probability of correctness might converge to 1/2 (i.e., a random decision) rather than to 1. Stronger competence requirements Uniformity can be dismissed if the Competence assumption is strengthened. There are several ways to strengthen it: Strong Competence: for each voter i, the probability of correctness pi is at least 1/2 + e, where e > 0 is fixed for all voters. In other words: the competence is bounded away from a fair coin toss. A jury theorem by Paroush shows that Strong Competence and Conditional Independence together imply Crowd Infallibility (but not Growing Reliability). Average Competence: the average of the individual competence levels of the voters (i.e. the average of their individual probabilities of deciding correctly) is slightly greater than half, or converges to a value above 1/2. Jury theorems by Grofman, Owen and Feld, and by Berend and Paroush, show that Average Competence and Conditional Independence together imply Crowd Infallibility (but not Growing Reliability). Random voter selection: instead of assuming that the voter identity is fixed, one can assume that there is a large pool of potential voters with different competence levels, and the actual voters are selected at random from this pool (as in sortition). A jury theorem by Ben Yashar and Paroush shows that, under certain conditions, the correctness probability of a jury, or of a subset of it chosen at random, is larger than the correctness probability of a single juror selected at random. A more general jury theorem by Berend and Sapir proves that Growing Reliability holds in this setting: the correctness probability of a random committee increases with the committee size. The theorem holds, under certain conditions, even with correlated votes. A jury theorem by Owen, Grofman and Feld analyzes a setting where the competence level is random. They show what distribution of individual competence maximizes or minimizes the probability of correctness. Weighted majority rule When the competence levels of the voters are known, the simple majority rule may not be the best decision rule. There are various works on identifying the optimal decision rule, that is, the rule maximizing the group's correctness probability. Nitzan and Paroush show that, under Unconditional Independence, the optimal decision rule is a weighted majority rule, where the weight of each voter with correctness probability pi is log(pi/(1-pi)), and an alternative is selected if the sum of weights of its supporters is above some threshold (see the sketch below). Grofman and Shapley analyze the effect of interdependencies between voters on the optimal decision rule. Ben-Yashar and Nitzan prove a more general result. Dietrich generalizes this result to a setting that does not require prior probabilities of the 'correctness' of the two alternatives. The only required assumption is Epistemic Monotonicity, which says that, if under a certain profile alternative x is selected, and the profile changes such that x becomes more probable, then x is still selected. Dietrich shows that Epistemic Monotonicity implies that the optimal decision rule is weighted majority with a threshold. In the same paper, he generalizes the optimal decision rule to a setting that does not require the input to be a vote for one of the alternatives. It can be, for example, a subjective degree of belief. Moreover, competence parameters do not need to be known. For example, if the inputs are subjective beliefs x1,...,xn, then the optimal decision rule sums log(xi/(1-xi)) and checks whether the sum is above some threshold. Epistemic Monotonicity is not sufficient for computing the threshold itself; the threshold can be computed by assuming expected-utility maximization and prior probabilities. A general problem with weighted majority rules is that they require knowledge of the competence levels of the different voters, which are usually hard to determine objectively. Baharad, Goldberger, Koppel and Nitzan present an algorithm that solves this problem using statistical machine learning. It requires as input only a list of past votes; it does not need to know whether these votes were correct or not. If the list is sufficiently large, then its probability of correctness converges to 1 even if the individual voters' competence levels are close to 1/2.
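The Nitzan–Paroush rule mentioned above is easy to demonstrate by simulation. The following sketch (assuming equal prior probabilities for the two alternatives, so that the decision threshold is zero) compares simple majority with log-odds-weighted majority for a panel containing one strong expert:

```python
import numpy as np

# Nitzan–Paroush weighted majority: voter i gets weight log(p_i / (1 - p_i)),
# and the group adopts the alternative whose supporters carry the greater
# total weight (threshold zero under equal priors).

rng = np.random.default_rng(0)
p = np.array([0.9, 0.6, 0.6, 0.55, 0.55])   # heterogeneous competences
w = np.log(p / (1 - p))                     # log-odds weights

trials = 200_000
correct = rng.random((trials, len(p))) < p  # True = voter i votes correctly

simple = correct.sum(axis=1) > len(p) / 2   # unweighted majority correct?
score_yes = correct.astype(float) @ w       # weight supporting the truth
score_no = (~correct).astype(float) @ w     # weight supporting the error
weighted = score_yes > score_no             # weighted majority correct?

print(f"simple majority  : {simple.mean():.3f}")   # ≈ 0.755
print(f"weighted majority: {weighted.mean():.3f}") # ≈ 0.900
```

Here the expert's weight log(0.9/0.1) ≈ 2.20 exceeds the combined weight (≈ 1.21) of the four weaker voters, so the weighted rule in effect follows the expert and attains accuracy 0.9, while simple majority attains only about 0.76.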
More than two options Often, decision problems involve three or more options. This critical limitation was in fact recognized by Condorcet (see Condorcet's paradox), and in general it is very difficult to reconcile individual decisions between three or more outcomes (see Arrow's theorem). This limitation may also be overcome by means of a sequence of votes on pairs of alternatives, as is commonly realized via the legislative amendment process. (However, as per Arrow's theorem, this creates a "path dependence" on the exact sequence of pairs of alternatives; e.g., which amendment is proposed first can make a difference in what amendment is ultimately passed, or if the law—with or without amendments—is passed at all.) With three or more options, Conditional Competence can be generalized as follows: Multioption Conditional Competence: for any two options x and y, if x is correct and y is not, then any voter is more likely to vote for x than for y. A jury theorem by List and Goodin shows that Multioption Conditional Competence and Conditional Independence together imply Crowd Infallibility. Dietrich and Spiekermann conjecture that they imply Growing Reliability too. Another related jury theorem is by Everaere, Konieczny and Marquis. When there are more than two options, there are various voting rules that can be used instead of simple majority. The statistical and utilitarian properties of such rules are analyzed e.g. by Pivato. Indirect majority systems Condorcet's theorem considers a direct majority system, in which all votes are counted directly towards the final outcome. Many countries use an indirect majority system, in which the voters are divided into groups. The voters in each group decide on an outcome by an internal majority vote; then, the groups decide on the final outcome by a majority vote among them. For example, suppose there are 15 voters. In a direct majority system, a decision is accepted whenever at least 8 votes support it. Suppose now that the voters are grouped into 3 groups of size 5 each. A decision is accepted whenever at least 2 groups support it, and in each group, a decision is accepted whenever at least 3 voters support it. Therefore, a decision may be accepted even if only 6 voters support it. Boland, Proschan and Tong prove that, when the voters are independent and p > 1/2, a direct majority system, as in Condorcet's theorem, always has a higher chance of accepting the correct decision than any indirect majority system (see the sketch below). Berg and Paroush consider multi-tier voting hierarchies, which may have several levels with different decision-making rules in each level. They study the optimal voting structure, and compare the competence against the benefit of time-saving and other expenses. Goodin and Spiekermann compute the amount by which a small group of experts should be better than the average voters, in order for their decisions to be better.
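For the 15-voter example above, the comparison can be computed exactly (a sketch assuming independent voters with common competence p; the helper maj is illustrative):

```python
from math import comb

# Direct system: at least 8 of 15 voters. Indirect system: at least 2 of 3
# groups, where each group of 5 decides by at least 3 internal votes.

def maj(n, p):
    """Exact probability that a strict majority of n voters is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for p in [0.51, 0.60, 0.70]:
    direct = maj(15, p)           # at least 8 of the 15 voters
    indirect = maj(3, maj(5, p))  # at least 2 of 3 groups, each 3 of 5
    print(f"p = {p:.2f}:  direct = {direct:.4f}   indirect = {indirect:.4f}")
# For every p > 1/2 the direct system comes out ahead, e.g. at p = 0.60 it is
# correct with probability 0.7869 versus 0.7617 for the indirect system.
```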
Strategic voting It is well known that, when there are three or more alternatives, and voters have different preferences, they may engage in strategic voting, for example, voting for the second-best option in order to prevent the worst option from being elected. Surprisingly, strategic voting might occur even with two alternatives and when all voters have the same preference, which is to reveal the truth. For example, suppose the question is whether a defendant is guilty or innocent, and suppose a certain juror thinks the true answer is "guilty". However, he also knows that his vote is effective only if the other votes are tied. But, if the other votes are tied, it means that the probability that the defendant is guilty is close to 1/2. Taking this into account, our juror might decide that this probability is not sufficient for deciding "guilty", and thus will vote "innocent". But if all other voters do the same, the wrong decision is reached. In game-theoretic terms, truthful voting might not be a Nash equilibrium. This problem has been termed the swing voter's curse, as it is analogous to the winner's curse in auction theory. A jury theorem by Peleg and Zamir shows necessary and sufficient conditions for the existence of a Bayesian-Nash equilibrium that satisfies Condorcet's jury theorem. Bozbay, Dietrich and Peters show voting rules that lead to efficient aggregation of the voters' private information even with strategic voting. In practice, this problem may not be very severe, since most voters care not only about the final outcome, but also about voting correctly according to their conscience. Moreover, most voters are not sophisticated enough to vote strategically. Subjective opinions The notion of "correctness" may not be meaningful when making policy decisions, which are based on values or preferences rather than just on facts. Some defenders of the theorem hold that it is applicable when voting is aimed at determining which policy best promotes the public good, rather than at merely expressing individual preferences. On this reading, what the theorem says is that although each member of the electorate may only have a vague perception of which of two policies is better, majority voting has an amplifying effect. The "group competence level", as represented by the probability that the majority chooses the better alternative, increases towards 1 as the size of the electorate grows, assuming that each voter is more often right than wrong. Several papers show that, under reasonable conditions, large groups are better trackers of the majority preference. Applicability The applicability of jury theorems, in particular Condorcet's Jury Theorem (CJT), to democratic processes is debated, as the theorem can prove majority rule to be either a perfect mechanism or a disaster, depending on individual competence. Recent studies show that, in a non-homogeneous case, the theorem's thesis does not hold almost surely (unless weighted majority rule is used with stochastic weights that are correlated with epistemic rationality but such that every voter has a minimal weight of one). Further reading Law of large numbers: a mathematical generalization of jury theorems. Evolution in collective decision making. Realizing Epistemic Democracy: a criticism of the assumptions of jury theorems. The Epistemology of Democracy: a comparison of jury theorems to two other epistemic models of democracy: experimentalism and diversity trumps ability. References Probability theorems Voting theory
Jury theorem
[ "Mathematics" ]
4,169
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
55,016,824
https://en.wikipedia.org/wiki/Electromagnetism%20uniqueness%20theorem
The electromagnetism uniqueness theorem states the uniqueness (but not necessarily the existence) of a solution to Maxwell's equations, if the boundary conditions provided satisfy the following requirements: At t = 0, the initial values of all fields (E, H, B and D) everywhere (in the entire volume considered) are specified; For all times (of consideration), the component of either the electric field or the magnetic field tangential to the boundary surface (n × E or n × H, where n is the normal vector at a point on the boundary surface) is specified. Note that this theorem must not be misunderstood as saying that providing boundary conditions (or the field solution itself) uniquely fixes a source distribution, when the source distribution is outside of the volume specified in the initial condition. One example is that the field outside a uniformly charged sphere may also be produced by a point charge placed at the center of the sphere instead, i.e. the source needed to produce such a field at a boundary outside the sphere is not unique. See also Maxwell's equations Green's function Surface equivalence principle Uniqueness theorem References Specific Vector calculus Physics theorems Uniqueness theorems
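The charged-sphere example above can be checked numerically. The following is a sketch (not part of the original article; units are chosen so that the Coulomb constant is 1) that superposes the fields of many point charges sampled uniformly inside a ball and compares the result with the field of a single point charge at the centre:

```python
import numpy as np

# Outside a uniformly charged ball of radius R, the electrostatic field equals
# that of a point charge Q at the centre, so exterior boundary data cannot
# single out one source distribution. Units with 1/(4*pi*eps0) = 1.

rng = np.random.default_rng(0)
R, Q, N = 1.0, 1.0, 400_000

# Rejection-sample N points uniformly inside the ball.
pts = rng.uniform(-R, R, size=(3 * N, 3))
pts = pts[(pts ** 2).sum(axis=1) <= R ** 2][:N]
dq = Q / len(pts)                      # equal charge on each sample point

obs = np.array([2.0, 0.0, 0.0])        # observation point at r = 2R
d = obs - pts                          # displacement vectors to the observer
E_ball = (dq * d / np.linalg.norm(d, axis=1, keepdims=True) ** 3).sum(axis=0)
E_point = Q * obs / np.linalg.norm(obs) ** 3

print("E from charged ball :", E_ball)    # ≈ [0.25, 0, 0]
print("E from point charge :", E_point)   # = [0.25, 0, 0]
```

Both outputs are approximately (0.25, 0, 0) at the observation point r = 2R, so measurements on a boundary outside the sphere cannot distinguish the two source distributions.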
Electromagnetism uniqueness theorem
[ "Physics", "Materials_science", "Mathematics" ]
226
[ "Materials science stubs", "Mathematical theorems", "Equations of physics", "Mathematical problems", "Uniqueness theorems", "Electromagnetism stubs", "Physics theorems" ]
55,017,999
https://en.wikipedia.org/wiki/Drainage%20tunnel
A drainage tunnel, called an emissary in ancient contexts, is a tunnel or channel created to drain water, often from a stagnant or variable-depth body of water. It typically leads to a lower stream or river, or to a location where a pumping station can be economically run. Drainage tunnels have frequently been constructed to drain mining districts or to serve drainage districts. Etymology Emissary comes from Latin emissarium, from ex and mittere 'to send out'. Ancient world The most remarkable emissaries carry off the waters of lakes surrounded by hills. In ancient Greece, the waters of Lake Copais were drained into the Cephisus through channels that were partly natural and partly artificial. In 480 BC, Phaeax built drains at Agrigentum in Sicily: they were admired for their sheer size, although the workmanship was crude. The ancient Romans excelled in the construction of emissaries, as in all their hydraulic works, and remains are extant showing that lakes Trasimeno, Albano and Nemi were all drained by means of emissaries. The case of Lake Fucino is remarkable in two ways: the attempt to drain it was one of the rare failures of Roman engineering, and the emissary is now completely above ground and open to inspection. Julius Caesar is said to have first conceived the idea of this stupendous undertaking (Suet. Jul. 44). Claudius inaugurated what was to have been a complete drainage scheme, the Tunnels of Claudius (Tac. Ann. xii.57), but the water level dropped by just 4 meters and stabilized, leaving the lake largely in place. Hadrian tried again, but failed; and it was not until 1878 that Lake Fucino was finally drained. The initial text of this section was an abridgement from Smith's Dictionary of Greek and Roman Antiquities (1875 edition, public domain). Modern examples Modern examples of drainage tunnels include the Emisor Oriente Tunnel near Mexico City, as well as the Tunnel and Reservoir Plan in Chicago. See also Storm drain External links Emissarium, the full article in Smith's Dictionary Walter Dragoni, Costanza Cambi, Field Trip Guidebook for "Hydraulic Structures in Ancient Rome", field trip of the 42nd Congress of the International Association of Hydrogeologists, Rome, September 2015. full text Flood control Roman aqueducts Ancient Roman architectural elements Hydraulic engineering
Drainage tunnel
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
491
[ "Hydrology", "Physical systems", "Flood control", "Hydraulics", "Civil engineering", "Civil engineering stubs", "Environmental engineering", "Hydraulic engineering" ]
55,018,276
https://en.wikipedia.org/wiki/C26H35FO6
{{DISPLAYTITLE:C26H35FO6}} The molecular formula C26H35FO6 (molar mass: 462.558 g/mol) may refer to: Amcinafal (also known as triamcinolone pentanonide) Amelometasone Molecular formulas
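The quoted molar mass can be reproduced from standard atomic weights. The following is a small sketch (the atomic weights are rounded IUPAC values; published figures may differ in the last digits depending on which atomic-weight standard is used):

```python
import re

# Sum rounded standard atomic weights over the elements in the formula.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "F": 18.998, "O": 15.999}

def molar_mass(formula):
    """Molar mass (g/mol) of a simple formula such as 'C26H35FO6'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHT[element] * (int(count) if count else 1)
    return total

print(f"{molar_mass('C26H35FO6'):.3f} g/mol")   # 462.558 g/mol, as quoted above
```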
C26H35FO6
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
55,018,327
https://en.wikipedia.org/wiki/C28H34O6
{{DISPLAYTITLE:C28H34O6}} The molecular formula C28H34O6 (molar mass: 466.566 g/mol) may refer to: Benzodrocortisone, or hydrocortisone 17-benzoate Deoxygedunin Molecular formulas
C28H34O6
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
55,018,451
https://en.wikipedia.org/wiki/C29H33FO6
{{DISPLAYTITLE:C29H33FO6}} The molecular formula C29H33FO6 (molar mass: 496.58 g/mol, exact mass: 496.2261 u) may refer to: Amcinafide, or triamcinolone acetophenide Betamethasone benzoate
C29H33FO6
[ "Chemistry" ]
73
[ "Isomerism", "Set index articles on molecular formulas" ]
55,019,804
https://en.wikipedia.org/wiki/Arnold%20M.%20Collins
Arnold Miller Collins (1899-1982) was a chemist at DuPont who, working under Elmer Bolton and Wallace Carothers with Ira Williams, first isolated polychloroprene and 2-chloro-1,3-butadiene in 1930. Personal Born 1899. Married Helen Clark Collins. Died October 8, 1982. Education Collins attended Columbia College, graduating in 1921 with an AB degree. He received his doctoral degree from Columbia University, New York, in 1924; his dissertation was entitled "Electrolytic introduction of alkyl groups". Career At DuPont, Collins worked under Wallace Carothers. Carothers assigned Collins to produce a sample of divinylacetylene. In March 1930, while distilling the products of the acetylene reaction, Collins obtained a small quantity of an unknown liquid, which he put aside in stoppered test tubes. He later found that the liquid had congealed into a clear homogeneous mass. When Collins removed the mass from the test tube, it bounced. Further analysis showed that the mass was a polymer of chloroprene, formed with chlorine from the cuprous chloride catalyst. Collins had stumbled upon a new synthetic rubber. Following this breakthrough, DuPont began to manufacture its first artificial rubber, DuPrene, in September 1931. In 1936, it was renamed neoprene, a term intended to be used generically. Awards and Recognitions 1973 - Charles Goodyear Medal from the ACS Rubber Division External links 1981 Interview with Arnold Collins References Polymer scientists and engineers 1899 births 1982 deaths 20th-century American chemists
Arnold M. Collins
[ "Chemistry", "Materials_science" ]
325
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
55,022,692
https://en.wikipedia.org/wiki/Frantz%20Yvelin
Frantz Yvelin is a French businessman, pilot, and serial entrepreneur. He was the president of Aigle Azur, France's second-largest airline, until August 26, 2019. Yvelin previously created and ran two independent French scheduled airlines, La Compagnie and L'Avion. Early life and education A commercial pilot since the age of 21, Yvelin is type-rated on the Airbus A320, Boeing 737, Boeing 757, Boeing 767, Cessna Citation and McDonnell Douglas MD80. Career Yvelin started his career as an IT consultant (for GFI Informatique, CS Communication & Systèmes). In 2006 he founded and ran Europe's first all-business-class airline, L'Avion, before selling it to British Airways. (In 2009, L'Avion became OpenSkies and has since operated under that brand.) Yvelin was head of strategy and development for OpenSkies for a time after it was merged with L'Avion. In 2013, alongside La Compagnie, Yvelin created a French holding company called Dreamjet Participations, which he ran as president and CEO until the end of 2016. Dreamjet Participations acquired 100% of the French leisure airline XL Airways in 2016. Along with Peter Luethi, La Compagnie's co-founder, he has been an air transport advisor for three years and was a lecturer in air transportation economics at the École nationale de l'aviation civile (teaching the Mastère spécialisé course). In parallel, he has helped to develop an airliner ferry and flight-testing company based in the United States. Notes Living people French chief executives French airline chief executives École nationale de l'aviation civile 1976 births
Frantz Yvelin
[ "Engineering" ]
366
[ "École nationale de l'aviation civile", "Aerospace engineering organizations" ]
55,026,194
https://en.wikipedia.org/wiki/Marina%20Halac
Marina Halac (born November 17, 1979) is a professor of economics at Yale University. She is also an associate editor of Econometrica and a member of the editorial board of the American Economic Review. She was the 2016 recipient of the Elaine Bennett Research Prize, which is awarded biennially by the American Economic Association to recognize outstanding research by a woman within the first seven years after completing a PhD; Halac received her PhD in economics from the University of California, Berkeley. In 2017, she was named one of the "Best 40 under 40 Business School Professors" by Poets and Quants. She was a recipient of the George S. Eccles Research Award in 2017, which is awarded to the author of the best book or writings on economics that bridge theory and practice, as determined by top members of the Columbia Business School faculty and alumni. Halac was born and raised in Buenos Aires and studied economics at the University of CEMA, Argentina from 1998 to 2001. There, her professors encouraged her to pursue an advanced economics degree in the United States. Following her graduation in 2001, she and her husband, Guillermo Noguera, became research assistants at the World Bank in Washington, D.C., and then both earned doctoral degrees in economics at the University of California at Berkeley. Her research focuses on theoretical models of how to optimally delegate decision making, such as optimal rules for firms that need to delegate investment decisions to managers with competing incentives, problems of how to motivate experimentation and innovation, the design of fiscal rules to constrain government spending, the role of reputation in maintaining productivity, the use of incentives and information to address strategic uncertainty, and inflation targeting under political pressure. Her work on relational contracting, which studies how best to design contracts in a principal-agent setting where the value of the relationship is not mutually known, suggests new ways to approach dynamic contracting problems with bargaining. Additionally, her work regarding fiscal rules and discretion under political shocks examines a specific fiscal policy model where the government has preferences that are time-inconsistent, with a present bias towards public spending. In addition to her current position as an economics professor at Yale University, she has taught in the Economics Division of Columbia University's Graduate School of Business and at the University of Warwick. She is a fellow of the Econometric Society. Selected works References American women economists 21st-century American economists Argentine economists Argentine women economists Game theorists Microeconomists Columbia Business School faculty Yale University faculty 1979 births Living people University of California, Berkeley alumni Fellows of the Econometric Society 21st-century American women
Marina Halac
[ "Mathematics" ]
517
[ "Game theorists", "Game theory" ]
55,031,001
https://en.wikipedia.org/wiki/Bioconservatism
Bioconservatism is a philosophical and ethical stance that emphasizes caution and restraint in the use of biotechnologies, particularly those involving genetic manipulation and human enhancement. The term "bioconservatism" is a portmanteau of the words biology and conservatism. Critics of bioconservatism, such as Steve Clarke and Rebecca Roache, argue that bioconservatives ground their views primarily in intuition, which can be subject to various cognitive biases. Bioconservatives' reluctance to acknowledge the fragility of their position is seen as a reason for stalled debate. Bioconservatism is characterized by a belief that technological trends risk compromising human dignity, and by opposition to movements and technologies including transhumanism, human genetic modification, "strong" artificial intelligence, and the technological singularity. Many bioconservatives also oppose the use of technologies such as life extension and preimplantation genetic screening. Bioconservatives range in political perspective from right-leaning religious and cultural conservatives to left-leaning environmentalists and technology critics. What unifies bioconservatives is skepticism about medical and other biotechnological transformations of the living world. In contrast to bioluddism, the bioconservative perspective typically presents a more focused critique of technological society. It is distinguished by its defense of the natural, framed as a moral category. Bioconservatism advocates Bioconservatives seek to counter the arguments made by transhumanists who support the use of human enhancement technologies despite acknowledging the risks they involve. Transhumanists believe that these technologies have the power to radically change what we currently perceive of as a human being, and that they are necessary for future human development. Transhumanist philosophers such as Nick Bostrom believe that genetic modification will be essential to improving human health in the future. The three major elements of the bioconservative argument, as described by Bostrom, are firstly, that human augmentation is innately degrading and therefore harmful; secondly, that the existence of augmented humans poses a threat to "ordinary humans"; and thirdly, that human augmentation shows a lack of acknowledgement that "not everything in the world is open to any use we may desire or devise." The first two of these elements are secular, whilst the last derives "from religious or crypto-religious sentiments." Michael Sandel Michael J. Sandel is an American political philosopher and a prominent bioconservative. His article and subsequent book, both titled The Case Against Perfection, concern the moral permissibility of genetic engineering or genome editing. Sandel compares genetic and non-genetic forms of enhancement, pointing out that much non-genetic alteration has largely the same effect as genetic engineering: SAT tutors or study drugs such as Ritalin can have effects similar to minor tampering with natural-born intelligence. Sandel uses such examples to argue that the most important moral issue with genetic engineering is not that the consequences of manipulating human nature will undermine human agency, but the perfectionist aspiration behind such a drive to mastery.
For Sandel, "the deepest moral objection to enhancement lies less in the perfection it seeks than in the human disposition it expresses and promotes." For example, the parental desire for a child to be of a certain genetic quality is incompatible with the special kind of unconditional love parents should have for their children. He writes: "[t]o appreciate children as gifts is to accept them as they come, not as objects of our design or products of our will or instruments of our ambition." Sandel insists that consequentialist arguments overlook the principal issue of whether bioenhancement should be aspired to at all. He is associated with the view that human augmentation should be avoided, as it expresses an excessive desire to change oneself and 'become masters of our nature.' For example, in the field of cognitive enhancement, he argues that the moral question we should be concerned with is not the consequences of inequality of access to such technology in possibly creating two classes of humans, but whether we should aspire to such enhancement at all. Similarly, he has argued that the ethical problem with genetic engineering is not that it undermines the child's autonomy, as this claim "wrongly implies that absent a designing parent, children are free to choose their characteristics for themselves." Rather, he sees enhancement as hubristic, taking nature into our own hands: pursuing the fixity of enhancement is an instance of vanity. Sandel also criticizes the argument that a genetically engineered athlete would have an unfair advantage over his unenhanced competitors, suggesting that it has always been the case that some athletes are better endowed genetically than others. In short, Sandel argues that the real ethical problems with genetic engineering concern its effects on humility, responsibility and solidarity. Humility Sandel argues that humility is a moral virtue that will be undermined by genetic engineering. He argues that humility encourages one to 'abide the unexpected, to live with dissonance, to rein in the impulse to control,' and is therefore worth fostering in all aspects of one's life. This includes the humility of parents regarding their own genetic endowment and that of their children. Sandel's concern is that, through genetic engineering, the relationship between parent and child is "disfigured": The problem lies in the hubris of the designing parents, in their drive to master the mystery of genetics. Even if this disposition did not make parents tyrants to their children, it would disfigure the relation between parent and child, thus depriving the parent of the humility and enlarged human sympathies that an openness to the unbidden can cultivate. Essentially, Sandel believes that in order to be a good parent with the virtue of humility, one needs to accept that one's child may not progress exactly according to one's expectations. Designing an athletic child, for example, is incompatible with the idea of parents having such open expectations. He argues that genetic enhancement deprives the parent of the humility that an 'openness to the unbidden' fosters. Sandel believes that parents must be prepared to love their child unconditionally and to see their children as gifts from nature, rather than entities to be defined according to parental and genetic expectations. Moreover, in The Case Against Perfection, Sandel argues: I do not think the main problem with enhancement and genetic engineering is that they undermine effort and erode human agency.
The deeper danger is that they represent a kind of hyperagency—a Promethean aspiration to remake nature, including human nature, to serve our purposes and satisfy our desires". In this way, Sandel worries that an essential aspect of human nature, and the meaning of life derived from it, would be eroded in the process of expanding radically beyond our naturally endowed capacities. He calls this yearning the "Promethean project," which is necessarily constrained by appreciating our humility and place in nature. Sandel adds: It is in part a religious sensibility. But its resonance reaches beyond religion. Responsibility Sandel argues that, due to the increasing role of genetic enhancement, there will be an 'explosion' of responsibility on humanity. He argues that genetic engineering will increase parental responsibility as "parents become responsible for choosing, or failing to choose, the right traits for their children." He believes that such responsibility will lead to genes becoming a matter of choice rather than a matter of chance. Sandel illustrates this argument through the lens of sports: in athletics, undesirable outcomes are often attributed to extrinsic factors such as a lack of preparation or a lapse in discipline. With the introduction of genetically engineered athletes, Sandel believes that athletes will bear additional responsibility for their talents and performance; for example, for failing to acquire the intrinsic traits necessary for success. Sandel believes this can be extrapolated to society as a whole: individuals will be forced to shoulder more responsibility for deficiencies in the face of increased genetic choice. Solidarity Sandel points out that without genetic engineering, a child is "at the mercy of the genetic lottery." Insurance markets allow a pooling of risk for the benefit of all: those who turn out to be healthy subsidise those who are not. This could be phrased more generally as: individual success is not fully determined by that individual or their parents, as genetic traits are to some extent randomly assigned from a collective pool. Sandel argues that, because we all face the same risks, social insurance schemes that rely on a sense of solidarity are possible. However, genetic enhancement gives individuals perfect genetic knowledge and increased resistance to some diseases. Enhanced individuals would not opt into such a system or such a human community, because it would involve guaranteed losses for them. They would feel no debt to their community, and social solidarity would disappear. Sandel argues that solidarity 'arises when men and women reflect on the contingency of their talents and fortunes.' He argues that if our genetic endowments begin to be seen as 'achievements for which we can claim credit,' society would have no obligation to share with those less fortunate. Consequently, Sandel mounts a case against the perfection of genetic knowledge because it would end the solidarity arising when people reflect on the non-necessary nature of their fortunes. Leon Kass In his paper "Ageless Bodies, Happy Souls," Leon Kass argues for bioconservatism. His argument was first delivered as a lecture at the Washington D.C. Ethics and Public Policy Center and later published as an article in The Atlantic. Although it was written during the time when Kass chaired the President's Council on Bioethics, the views expressed are his own, and not those of the council. In brief, he argues that there is something wrong with biotechnological enhancement for three main reasons.
Kass calls them the arguments of "the attitude of mastery," "'unnatural' means" and "dubious ends." Before he turns to these arguments, he focuses on the distinction between "therapy" and "enhancement." While therapy has the aim of (re-)establishing the state of what could be considered "normal" (e.g. replacement of organs), enhancement gives people an advantage over the "normal workings" of the human body (e.g. immortality). On the basis of this distinction, Kass argues, most people would support therapy but remain sceptical towards enhancement. However, he believes this distinction is not clear, since it is hard to tell where therapy stops and enhancement begins. One reason he gives is that the "normal workings" of the human body cannot be unambiguously defined due to the variance within humans: someone may be born with perfect pitch, another deaf. Bostrom and Roache reply to this by giving an instance where one may clearly speak of permissible enhancement. They claim that extending a life (i.e. making it longer than it would normally have been) means that one saves this particular life. Since one would believe it is morally permissible to save lives (as long as no harm is caused), they claim that there is no good reason to believe extending a life is impermissible. The relevance of this counterargument becomes clearer when we consider the essence of Kass's skepticism about 'enhancement.' Firstly, he labels natural human experiences like aging, death and unhappiness as preconditions of human flourishing. By extension, given that technological enhancement diminishes these preconditions and therefore hinders human flourishing, he is able to assert that enhancement is not morally permissible. That being said, Bostrom and Roache challenge Kass's inherent assumption that extending life is different from saving it. In other words, they argue that by alleviating ageing and death, someone's life is being extended, which is no different from saving their life. By this argument, the concept of human flourishing becomes irrelevant, since it is morally permissible to save someone's life regardless of whether they are leading a flourishing life or not. The problematic attitude of biotechnological enhancement One of Leon Kass's main arguments on this matter concerns the attitude of 'mastery'. Kass implies that although the means are present to modify human nature (both body and mind), the ends remain unknown, filled with unintended consequences. Due to our unawareness of the goodness of potential ends, Kass claims, this is not mastery at all. Instead, we are acting on the momentary whims that nature exposes us to, effectively making it impossible for humanity to escape from the "grip of our own nature." Kass builds on Sandel's argument that transhumanists fail to properly recognise the 'giftedness' of the world. He agrees that this idea is useful in that it should teach us an attitude of modesty, restraint and humility. However, he believes it will not by itself sufficiently indicate which things can be manipulated and which should be left untouched. Therefore, Kass additionally proposes that we must also respect the 'givenness' of species-specified natures – 'given' in the sense of something fixed and specified. 'Unnatural' means of biotechnological enhancement Kass refers to biotechnological enhancement as cheating or 'cheap,' because it undermines the feeling of having worked hard to achieve a certain aim.
He writes, "The naturalness of means matters. It lies not in the fact that the assisting drugs and devices are artifacts, but in the danger of violating or deforming the deep structure of the natural human activity." By nature, there is "an experiential and intelligible connection between means and ends." Kass suggests that the struggle one has to go through to achieve excellence "is not only the source of our deeds, but also their product," and therefore builds character. He maintains that biotechnology as a shortcut does not build character but instead erodes self-control. This can be seen in how confronting fearful things might eventually enable us to cope with our fears, unlike a pill which merely prevents people from experiencing fear and thereby does not help us overcome it. As Kass notes, "people who take pills to block out from memory the painful or hateful aspects of new experience will not learn how to deal with suffering or sorrow. A drug to induce fearlessness does not produce courage." He contends that limits on biotechnological enhancement are necessary for humans, as such limits recognise giftedness and forge humility. Kass notes that there are biological interventions that may assist in the pursuit of excellence without cheapening its attainment, "partly because many of life's excellences have nothing to do with competition or adversity" (e.g. "drugs to decrease drowsiness or increase alertness... may actually help people in their natural pursuits of learning or painting or performing their civic duty"); in such cases "the point is less the exertions of good character against hardship, but the manifestation of an alert and self-experiencing agent making his deeds flow intentionally from his willing, knowing, and embodied soul." Kass argues that we need to have an "intelligible connection" between means and ends in order to call our bodies, minds, and transformations genuinely our own. 'Dubious' ends of biotechnological enhancement The case for ageless bodies is that the prevention of decay, decline, and disability, the avoidance of blindness, deafness, and debility, and the elimination of feebleness, frailty, and fatigue are conducive to living fully as a human being at the top of one's powers, with a "good quality of life" from beginning to end. However, Kass argues that human limitation is what gives the opportunity for happiness. Firstly, he argues that "a concern with one's own improving agelessness is finally incompatible with accepting the need for procreation and human renewal." This creates a world "hostile to children," and arguably "increasingly dominated by anxiety over health and the fear of death." This is because the existence of decline and decay is precisely what allows us to accept mortality. The hostility towards children results from the redundancy of new generations to the progression of the human species, given infinite lifespans; progression and evolution of the human race would no longer arise from procreation and succession, but from the engineered enhancement of existing generations. Secondly, he explains that one needs to grieve in order to love, and that one must feel a lack to be capable of aspiration: [...] human fulfillment depends on our being creatures of need and finitude and hence of longings and attachment. Finally, Kass warns, "the engaged and energetic being-at-work of what nature uniquely gave to us is what we need to treasure and defend.
All other perfection is at best a passing illusion, at worst a Faustian bargain that will cost us our full and flourishing humanity." Jürgen Habermas Jürgen Habermas has also written against genetic human enhancement. In his book "The Future of Human Nature," Habermas rejects the use of prenatal genetic technologies to enhance offspring. Habermas rejects genetic human enhancement on two main grounds: the violation of ethical freedom, and the production of asymmetrical relationships. He broadens this discussion by examining the tensions between the evolution of science and religious and moral principles. Violation of ethical freedom Habermas points out that a genetic modification produces an external imposition on a person's life that is qualitatively different from any social influence. This prenatal genetic modification will most likely be chosen by one's parents, thereby threatening the ethical freedom and equality that one is entitled to as a birthright. For Habermas, the difference lies in the fact that while socialisation processes can always be contested, genetic designs cannot be, and therefore lack the unpredictability of social influences. This argument builds on Habermas's discourse ethics. For Habermas: Eugenic interventions aiming at enhancement reduce ethical freedom insofar as they tie down the person concerned to rejected, but irreversible intentions of third parties, barring him from the spontaneous self-perception of being the undivided author of his own life. Asymmetrical relationships Habermas suggested that genetic human enhancements would create asymmetric relationships that endanger democracy, which is premised on the idea of moral equality. He claims that regardless of the scope of the modifications, the very knowledge of enhancement obstructs symmetrical relationships between parents and their children. The child's genome was interfered with nonconsensually, making predecessors responsible for the traits in question. Unlike for thinkers like Fukuyama, Habermas's point is not that these traits might produce different 'types of humans'. Rather, he places the emphasis on the fact that others are responsible for choosing these traits. This is the fundamental difference between natural traits and human enhancement, and it is what bears decisive weight for Habermas: the child's autonomy as self-determination is violated. Habermas does acknowledge that, for example, making one's son very tall in the hope that he will become a basketball player does not automatically determine that he will choose this path. Yet although the opportunity can be turned down, the child has still been forced into an irreversible situation, which makes it no less of a violation. Genetic modification has two large-scale consequences. Firstly, no action the child undertakes can be ascribed to her own negotiation with the natural lottery, since a 'third party' has negotiated on the child's behalf. This imperils the sense of responsibility for one's own life that comes along with freedom. As such, individuals' self-understanding as ethical beings is endangered, opening the door to ethical nihilism. This is so because the genetic modification creates a type of dependence in which one of the parties does not even have the hypothetical possibility of changing social places with the other. Secondly, it becomes impossible to collectively and democratically establish moral rules through communication, since a condition for their establishment is the possibility to question assertions.
Genetically modified individuals, however, can never know whether their very questioning might have been shaped by enhancement, nor can they question the enhancement itself. That being said, Habermas acknowledges that our societies are full of asymmetric relationships, such as the oppression of minorities or exploitation. However, these conditions could in principle be changed; genetic modification, by contrast, cannot be reverted once it is performed. Criticism The transhumanist Institute for Ethics and Emerging Technologies criticizes bioconservatism as a form of "human racism" (more commonly known as speciesism), and as being motivated by a "yuck factor" that ignores individual freedoms. Nick Bostrom on posthuman dignity Nick Bostrom argues that bioconservative concerns regarding the threat of transhumanism to posthuman dignity are unsubstantiated. Bostrom himself identifies with forms of posthuman dignity, and in his article In Defence of Posthuman Dignity, argues that such dignity does not run in contradiction with the ideals of transhumanism. Bostrom argues in the article that Fukuyama's concern about the threat transhumanism poses to dignity as moral status, namely that transhumanism might strip away humanity's inalienable right to respect, lacks empirical evidence. He states that the proportion of people given full moral respect in Western societies has actually increased through history. This increase includes such populations as non-whites, women and non-property owners. Following this logic, it will similarly be feasible to incorporate future posthumans without compromising the dignity of the rest of the population. Bostrom then goes on to discuss dignity in the sense of moral worthiness, which varies among individuals. He suggests that posthumans can similarly possess dignity in this sense. Further, he suggests, it is possible that posthumans, being genetically enhanced, may come to possess even higher levels of moral excellence than contemporary human beings. While he considers that certain posthumans may live more degraded lives as a result of self-enhancement, he also notes that even at this time many people are not living worthy lives either. He finds this regrettable and suggests that countermeasures such as education and cultural reforms can be helpful in curtailing such practices. Bostrom supports the morphological and reproductive freedoms of human beings, suggesting that ultimately, leading whatever life one aspires to should be an inalienable right. Reproductive freedom means that parents should be free to choose the technological enhancements they want when having a child. According to Bostrom, there is no reason to prefer the random processes of nature over human design. He dismisses claims that describe this kind of operation as "tyranny" of the parents over their future children. In his opinion, the tyranny of nature is no different. In fact, he claims that "Had Mother Nature been a real parent, she would have been in jail for child abuse and murder." Earlier in the paper, Bostrom also replies to Leon Kass with the claim that, in his words, "nature's gifts are sometimes poisoned and should not always be accepted." He makes the point that nature cannot be relied upon for normative standards. Instead, he suggests that transhumanism can, over time, allow for the technical improvement of "human nature," consistent with our widely held societal morals.
According to Bostrom, the way that bioconservatives justify banning certain human enhancements while not others reveals the double standard present in this line of thought. For him, a misleading conception of human dignity is to blame for this. We mistakenly take for granted that human nature is an intrinsic, unmodifiable set of properties. This problem, he argues, is overcome when human nature is conceived as 'dynamic, partially human-made, and improvable.' If we acknowledge that social and technological factors influence our nature, then dignity 'consists in what we are and what we have the potential to become, not in our pedigree or social origin'. It can be seen, then, that improved capabilities do not affect moral status, and that we should sustain an inclusive view that recognizes our enhanced descendants as possessors of dignity. Transhumanists reject the notion that there is a significant moral difference between enhancing human lives through technological means compared to other methods. Distinguishing between types of enhancement Bostrom discusses a criticism levelled against transhumanists by bioconservatives: that children who are biologically enhanced by certain kinds of technologies will face psychological anguish because of the enhancement. The argument runs as follows: prenatal enhancements may create expectations for the individual's future traits or behaviour; if the individual learns of these enhancements, this is likely to cause them psychological anguish stemming from pressure to fulfil such expectations; actions which are likely to cause individuals psychological anguish are undesirable to the point of being morally reprehensible; therefore, prenatal enhancements are morally reprehensible. Bostrom finds that bioconservatives rely on a false dichotomy between technological enhancements that are harmful and those that are not, thus challenging the second premise. Bostrom argues that children whose mothers played Mozart to them in the womb would not face psychological anguish upon discovering that their musical talents had been "prenatally programmed by her parents." Yet he finds that bioconservative writers often employ analogous arguments to the contrary, suggesting that technological enhancements, unlike playing Mozart in the womb, could potentially disturb children. Hans Jonas on reproductive freedom Hans Jonas contests the criticisms of bio-enhanced children by questioning how free such children would be even without enhancement. He argues that enhancement would increase their freedom. This is because enhanced physical and mental capabilities would allow for greater opportunities; the children would no longer be constrained by physical or mental deficiencies. Jonas further weakens the arguments about reproductive freedom by referencing Habermas. Habermas argues that freedom for offspring is restricted by the knowledge of their enhancement. To challenge this, Jonas elaborates on his notion of reproductive freedom. Notable bioconservatives George Annas Dale Carrico Francis Fukuyama (as attributed by observers) Leon Kass Bill McKibben Oliver O'Donovan Jeremy Rifkin Wesley Smith Michael Sandel Edmund Pellegrino See also Bioluddism Posthumanization Techno-progressivism Appeal to nature References Further reading Gregg, Benjamin (2021). "Regulating genetic engineering guided by human dignity, not genetic essentialism", Politics and the Life Sciences, 41(1), 60–75. doi:10.1017/pls.2021.29. Savulescu, Julian (2019).
"Rational Freedom and Six Mistakes of a Bioconservative", The American Journal of Bioethics, 19(7), 1–5. https://doi.org/10.1080/15265161.2019.1626642 External links Nick Bostrom, "In defense of posthuman dignity", full text Climate change How Climate Change Makes Bioconservatism the Most Relevant Ideology, Chet Bowers, Truthout, 2016. Bioengineer humans to tackle climate change, say philosophers, The Guardian, 2012, featuring Rebecca Roache Political ideologies Transhumanism
Bioconservatism
[ "Technology", "Engineering", "Biology" ]
5,662
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
73,562,868
https://en.wikipedia.org/wiki/Guillermo%20Rojas%20Bazan
Guillermo Rojas Bazan is an aviation model maker and researcher from Argentina. He is internationally renowned and considered unique and innovative in the field of museum-quality airplane modeling in metal. His work has had a significant impact on the development of highly detailed model aircraft. Rojas Bazan has developed his own modeling techniques and is one of the only aircraft model builders to use aluminum. He is a true scratch builder, working completely by hand and forgoing electrical machines, except for a small compressor used for his airbrush. During the first forty-five years of his career, while living and working in four different countries, he made more than 200 custom models for museums, art galleries, scale model companies, and collectors. He has been called the greatest aircraft model maker in the world by various sources. Life and work Guillermo Rojas Bazan was born in 1949 in Buenos Aires, Argentina. Rojas Bazan received his education in both a technical engineering school and an art school. From 1981 to 1988, Rojas Bazan worked for the Instituto Aeronaval (Naval Air Institute) and the Argentine Air Force. During that time, he worked as a technical draftsman, aircraft illustrator, and designer. Additionally, he was commissioned to build all of the aircraft used in Argentina's naval history, resulting in ninety-nine aircraft models that are still on display. In 1988, he left Argentina for Spain, where he built models for an aviation art gallery in London and produced replicas for collectors in the United States and Europe. While working for the London gallery, Rojas Bazan was able to choose which models he built, and made several of what he describes as non-commercial models, all of which sold despite not being ordered. In 1994, he moved to the U.S. and was hired by Fine Art Models, a company located in Royal Oak, Michigan. During his years working for Fine Art Models, he made models in 1/15 scale that were copied in Eastern Europe and sold in limited editions. In recent years, he has worked as a freelance artist for collectors and museums. Rojas Bazan's models are known for their high detail and weathering, giving the aircraft models their realism. Ann Cooper, a writer for Private Pilot magazine, explained that Rojas Bazan "doesn't just assemble parts and finish the exterior surfaces of his models, he loads them, inside and out, with rich, realistic detail work." Furthermore, there is a precision to details that are microscopic. Kelly Shaw of the magazine Fine Scale Modeler stated that "His all-metal scratch-builds are memorable for their undulating surfaces, variation in riveting, overlapping panels, and stressed skin. In photos, it's easy to mistake his models for real aircraft." Shaw also noted that despite his success as a model maker, he has continued to strive for perfection. Moreover, Rojas Bazan's models were lauded in letters by the University of Notre Dame, Christie's, and the U.S. Air Force, which referred to his work as the "Tiffany of Models" and described it as taking scale modeling around the world into a new dimension. To make sure his models are accurate, Rojas Bazan relies on extensive research before beginning work on a model. This process takes longer when an aircraft is more complex. Rojas Bazan's research has involved construction plans, material samples, test samples, and talking to former pilots.
His Junkers G 24 model was built using photos and original publications, despite the fact that none of those aircraft still exist. Rojas Bazan has explained how "a lot of kits get it wrong." He primarily specializes in models of aircraft built between 1925 and 1945. In 1995, Mike Knepper, a writer for Cigar Aficionado magazine, named Rojas Bazan the "Mozart of Modeling." Models Rojas Bazan built eighty-seven models for Argentina's National Museum of the Nation as a result of being commissioned to build every aircraft used in Argentina's naval history. He built an additional twelve models that were displayed in different locations in Argentina. At Fine Art Models, he built numerous models including the F-4UD Corsair, which was recognized as a masterpiece by the German magazine IQ. When discussing Rojas Bazan's P-51 Mustang model, a former World War II pilot of that aircraft said that the details in the model are the best that he has ever seen. When asked which are his favorite models, Rojas Bazan said, 'I do not have only one favorite model, I have several. Many of them are planes from the period between 1920 and 1939, before WWII (golden age of aviation). These include the Northrop Gamma, Boeing B-15, Boeing YB-17 (prototypes of the great B-17), Martin B-10, Vought Vindicator, Curtiss Hawk III, Junkers G-38, Junkers G-24, Heinkel He70, Fairey Battle, etc. Many of these aircraft were not good machines, or have not been very popular, but I like them aesthetically.' One model built in Rojas Bazan's most recent freelance era is a Mitsubishi A6M Zero that now resides in Japan and was promoted there as the best Zero replica ever built. Another model completed in recent years is a Ju 87 B-1 Stuka, which is featured in one of his YouTube videos. In that video, he explains the archaeological labor that he undertakes to complete his models in the most realistic way possible. His most recently completed model is a Consolidated B-24H Liberator. It was commissioned by the 467th Bombardment Group (H) Association and will be displayed at the historic Wendover airfield. Recognition At the North American Model Engineering Expo, Rojas Bazan accepted the Joe Martin Foundation Award for Craftsman of the Year (2013). The University of Notre Dame, the U.S. Air Force, and Christie's have given Rojas Bazan letters of recognition. References External links YouTube Video of the B-24H Liberator - Model Engineering -Aluminum Construction- Total Scratch Built. 1:20 Scale YouTube Video of the Ju 87 B-1 Stuka- Model Engineering- Metal Construction- Truly Scratch Built- Part 1 YouTube Video of the A6M2 Zero -Model Engineering- Metal Construction- Truly Scratch Built- Part One Model makers People in aviation 21st-century Argentine people 20th-century Argentine people 1949 births Living people
Guillermo Rojas Bazan
[ "Physics" ]
1,332
[ "Model makers" ]
73,566,449
https://en.wikipedia.org/wiki/Pinealon
Pinealon is a synthetic tripeptide of sequence (Glu-Asp-Arg) and purported geroprotector documented in the Russian scientific literature. Research Pinealon has been shown to protect rat offspring from prenatal hyperhomocysteinemia and correspondingly improve postnatal cognitive function. Pinealon likewise maintains learning retention in rats with experimentally induced diabetes, and similar effects have been reported in elderly humans and young wrestlers. Pinealon has been tested in large-scale human trials such as the Gasprom study, where it showed profound geroprotective effects, including longer telomeres and improvements in various health markers. Chemistry Pinealon is a tripeptide composed of L-glutamic acid, L-aspartic acid, and L-arginine and is notated as Glu-Asp-Arg or EDR. References Tripeptides Neuroprotective agents Peptide therapeutics Russian drugs
Pinealon
[ "Chemistry" ]
197
[ "Biochemistry stubs", "Medicinal chemistry stubs", "Medicinal chemistry" ]
73,568,819
https://en.wikipedia.org/wiki/Intravesical%20drug%20delivery
Intravesical drug delivery is the delivery of medications directly into the bladder by urinary catheter. This method of drug delivery is used to directly target diseases of the bladder such as interstitial cystitis and bladder cancer, but currently faces obstacles such as low drug retention time due to washout with urine and issues with the low permeability of the bladder wall itself. Due to the advantages of directly targeting the bladder, as well as the effectiveness of permeability enhancers, advances in intravesical drug carriers, and mucoadhesives, intravesical drug delivery is becoming more effective and of increased interest in the medical community. Advantages Delivering drugs directly to the target bladder site allows for maximizing drug delivery while minimizing systemic effects. Delivering the treatment directly to the site allows for more effective dosages to be given, since high concentrations of drug in the bladder can be reached. This becomes especially important when patients have a urinary bladder disease that is drug resistant. The delivery of drugs directly to the bladder is a large improvement over systemic delivery, which only allows a small fraction of the drug to reach the bladder; the resulting lower drug concentrations render systemic treatments ineffective. The smaller fraction of drug reaching its target with systemic delivery means more drug must be administered, which can lead to problems with systemic toxicity. This is not the case when drug is administered directly to the bladder. The layer of the bladder which comes into contact with urine, the urothelium (the transitional epithelium of the bladder), is a mostly impermeable barrier which stops molecules in the urine from being reabsorbed and prevents molecules from being secreted directly into the bladder as well. The bladder's impermeability means that any drug delivered intravesically will not absorb well into the bloodstream through the bladder wall, causing fewer systemic effects. This impermeability also makes bladder diseases more difficult to treat, as drugs do not absorb well into the bladder wall. Intravesical drug delivery has been identified as an ideal way to treat most urinary disorders, including bladder tumors and bladder cancers, interstitial cystitis, and urinary incontinence. There is currently a lack of interest in treating urinary tract infections using intravesical delivery. Disadvantages While intravesical delivery shows distinct advantages over systemic drug delivery, it has several problems to overcome. When a drug is given intravesically it is diluted by urine and washed out when urine is voided. Additionally, the low permeability of the urothelium which lines the bladder creates a hurdle that must be overcome if the bladder wall needs to be treated. These issues create the need for more frequent dosing, which causes urinary catheter site irritation and compliance issues with treatments. Intravesical drug dilution occurs as urine accumulates in the bladder, lowering the concentration of drug in the bladder as overall volume increases. The voiding of drug with urine when using traditional drug formulations in the bladder has become a hurdle to overcome as well, since residence time of the drug inside the bladder is directly tied to the treatment's efficacy. Creating formulations which adhere to the bladder wall has been targeted as one way to improve intravesical drug efficacy.
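To make the dilution problem concrete, here is a minimal, hypothetical sketch (in Python) of how the concentration of an instilled dose falls as urine inflow adds volume; it assumes a well-mixed bladder, a constant inflow rate, and no absorption, degradation, or voiding, so it illustrates the washout pressure on dosing rather than serving as a pharmacokinetic model.

```python
# Well-mixed dilution sketch for an intravesically instilled dose.
# All parameters are hypothetical; real kinetics also involve drug
# absorption, degradation, and intermittent voiding.
dose_mg = 40.0                 # instilled drug dose (mg)
v0_ml = 50.0                   # instillation volume (mL)
urine_inflow_ml_per_h = 60.0   # steady urine production (mL/h)

for t_h in (0.0, 0.5, 1.0, 2.0):
    volume_ml = v0_ml + urine_inflow_ml_per_h * t_h
    conc = dose_mg / volume_ml
    print(f"t = {t_h:3.1f} h: bladder volume {volume_ml:5.1f} mL, "
          f"concentration {conc:5.3f} mg/mL")
```

Under these assumptions the concentration falls from 0.8 mg/mL to roughly 0.24 mg/mL within two hours, which is why residence time and adhesion dominate the formulation strategies described below.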
The low adherence of drugs to the bladder wall and low permeability into the bladder wall contribute to low drug retention in the bladder. When modifying drug formulations for intravesical delivery, gels or viscosity-increasing formulations are sometimes used to increase retention, though this can cause issues with urethral obstruction, an additional hurdle in intravesical drug delivery. Permeability issues with the bladder wall can be attributed to the urothelium, the lining of the bladder wall made up of umbrella cells, intermediate cells, and basal cells. The impermeability can be attributed to the umbrella cells, which form tight junctions with each other to make up the innermost layer of the urothelium and have the ability to change shape to adapt to the bladder's varying size. The umbrella cells are covered in a dense layer of plaques, which further prevents the absorption of particles through the urothelium, and a layer of mucin composed of glycosaminoglycans (GAGs), which prevents both hydrophobic and negatively charged molecules from adhering to the bladder wall. Overcoming the impermeability of the mucin layer and the urothelium is a large focus of many intravesical drug formulations, and is key to an efficacious intravesical treatment. Improvements The main ways researchers are currently overcoming the problems in intravesical drug delivery are through developing formulations using mucoadhesives, nanoparticles, liposomes, polymeric hydrogels, expandable delivery devices, and electromotive drug administration. These methods each serve to improve retention time, drug permeability through the urothelium, or some combination of the two. Enhancing drug retention Enhancing drug retention can be achieved through changing the formulation and delivery device. Often drug retention and permeability enhancement are tied, as drugs which permeate the urothelium will suffer fewer effects from urine dilution and voiding. Two of the most common methods to improve drug retention are using a mucoadhesive formulation or using polymeric hydrogels that form in the bladder, known as in situ gelling hydrogels. Mucoadhesive formulations Mucoadhesive formulations can be made with both biopolymers and synthetic polymers, and usually contain polymers that are hydrophilic and can form many hydrogen bonds with the GAG-mucin. Positively charged molecules typically make far better mucoadhesives, as the mucin layer is negatively charged. By forming these bonds, the mucoadhesive, and the drug it carries, can maintain sustained contact with the bladder wall, enhancing retention of the drug in the bladder. Among mucoadhesive materials, chitosan often stands out due to its biocompatibility, biodegradability, and permeability-enhancing factors. In experiments with chitosan, it has been shown that the mucoadhesive properties of a molecule likely increase as the molecular weight is increased. Studies have also found that the modification of chitosan formulations with thiomers, which can form covalent bonds with mucus, can significantly improve the mucoadhesion of the chitosan formulations. In situ gelling polymeric hydrogels Polymeric hydrogels for intravesical drug delivery take advantage of characteristics of the bladder or urine to gel, or may use external manipulation to cause the hydrogel to form. These gels can take advantage of pH or temperature differences, or external input like UV lasers, to form gels inside the bladder after instillation of the formulation in liquid form.
If these gels are made to be mucoadhesive, they stick to the bladder wall and do not wash out or cause urethral obstruction. Polymeric hydrogels have also been formulated to float on top of the urine to avoid washout and obstruction without having to adhere to the bladder wall. Drawbacks of using polymeric hydrogel formulations include the concern of urethral obstruction, the varying conditions of the urine, which make pH- or ionically-controlled gelling formulations less controlled, and the bladder wall inflammation which can occur with mucoadhesive polymeric hydrogels. Enhancing drug permeability Enhancing drug permeability can be done through physical or chemical methods, and is also achieved through nanoparticle and liposome drug carriers. Physical methods include electromotive drug administration, radiofrequency-induced chemotherapeutic effect, and conductive hyperthermic chemotherapy, but electromotive drug administration seems to be the most prevalent in recent research and clinical trial focus. Chemical methods revolve around adding a chemical agent to enhance drug uptake and increase permeability. To enhance drug permeability through physical or chemical methods, both the mucin layer and the umbrella cells of the urothelium must undergo a structural or chemical change. Electromotive drug administration (EMDA) Electromotive drug administration utilizes a small electric current flowing across the bladder wall between two electrodes, one on the skin and one placed inside the bladder via catheterization, to enhance the permeability of aqueous solutions. Electromotive drug administration best enhances ionized formulations, which diffuse poorly under standard passive diffusion. This allows it to potentially assist in the delivery of many drugs that usually perform poorly in the bladder without having to change their formulations heavily. Across multiple studies and clinical trials, electromotive drug administration has been shown to increase the uptake of many drugs, showing potential use in bladder cancer, urinary incontinence, urinary cystitis, and pain management. Local anesthesia for bladder distention using electromotive drug administration in combination with lidocaine has been shown to be cheaper and more practical than general anesthesia or spinal anesthesia. Chemically enhancing permeability To enhance the permeability of the bladder wall, specifically the urothelium, to drugs administered locally to the bladder, four chemical agents are most commonly used: DMSO, protamine sulphate, hyaluronidase, and chitosan. DMSO is already widely used to directly treat urinary cystitis due to its anti-inflammatory and antibacterial properties. DMSO can penetrate tissues without causing any damage to them. This property of DMSO made it of particular interest as a chemical enhancer, and it has been shown to increase the uptake of several chemotherapeutics used intravesically. Protamine sulphate causes disruption to the mucus layer of the urothelium and can cause large disruption of bladder permeability, which can be modified by adding defibrotide. Hyaluronidase breaks down hyaluronic acid, a GAG molecule important to the mucin layer, causing enhanced permeability of the mucin layer to drugs administered concurrently with hyaluronidase. Conversely, hyaluronic acid can be used to treat interstitial cystitis, as it helps to repair damaged mucin layers.
Chitosan is thought to function as a permeability enhancer by binding to the mucin layer and negatively affecting tight junctions between umbrella cells in the urothelium. It has been shown that chitosan increases bladder wall permeability, but its effectiveness as a permeability enhancer decreases as calcium ion concentration increases. Chemically enhancing bladder permeability can lead to negative side effects such as incontinence, pain, and uncontrolled leakage of molecules other than the intended drug from the urine into the bladder wall. Nanoparticle and liposome drug carriers Nanoparticle and liposome drug carrier formulations allow for increased drug uptake, especially in the case of liposomes, which allow for greater uptake via endocytosis. Liposomes generally must be shielded via modification with a polyethylene glycol molecule to overcome issues with instability and aggregation in urine. Nanoparticle and liposome drug carriers can be loaded into an in situ forming hydrogel to gain the advantages of mucoadhesive properties. Empty liposomes by themselves have been noted to improve interstitial cystitis, most likely due to formation of a lipid film on damaged urothelium. The variety of types of nanoparticles which can be made to carry drugs in intravesical formulations, combined with the tunability of many of these particles with regard to drug loading and release rates, makes nanoparticles and liposomes a highly versatile and useful tool in intravesical drug delivery. References Drug delivery devices Urology
Intravesical drug delivery
[ "Chemistry" ]
2,428
[ "Pharmacology", "Drug delivery devices" ]
73,569,052
https://en.wikipedia.org/wiki/Rainwater%20harvesting%20in%20the%20Sahel
Rainwater harvesting in the Sahel is a combination of "indigenous and innovative" agricultural strategies that "plant the rain" and reduce evaporation, so that crops have access to soil moisture for the longest possible period of time. In the resource-poor drylands of the Sahel region of Africa, irrigation systems and chemical fertilizers are often prohibitively expensive and thus uncommon, so increasing or maintaining crop yields in the face of climate change depends on augmenting the region's extant rainfed agriculture systems to "increase water storage within the soil and replenish soil nutrients." Rainwater harvesting is a form of agricultural water management. Rainwater harvesting is most effective when combined with systems for soil regeneration and organic-matter management. Background The Sahel is an ecologically (rather than geopolitically) defined region of Africa. The noun Sahel comes from the Arabic sāḥil, describing a border, shore or edge, which aptly describes the transitional areas of Africa where savanna becomes the hyper-arid Sahara Desert. (According to the Concise Oxford Dictionary of World Place Names, "The Arabs considered the Sahara to be a huge ocean with the Sahel as its shore.") The Sahel crosses Senegal, The Gambia, Mauritania, Mali, Burkina Faso, Ghana, Niger, Nigeria, Cameroon, Chad, Central African Republic, South Sudan, Sudan, and Eritrea in a belt up to wide that spans from the Atlantic Ocean in the west to the Red Sea in the east. The Sahel is marked by decreasing levels of precipitation from south to north, but what defines a dryland ecosystem is not necessarily low rainfall. In some cases the dryness is due to persistent high levels of evaporation (due to heat or desiccating winds). Unpredictable rainfall is often also a factor. Population estimates of the Sahel vary depending on which political subdivisions are included, but the count is in the vicinity of 100 million people, including nearly a million refugees and internally displaced people. The countries of the Sahel region are mainly poor. For example, the Volta River basin is occupied by about 20 million people who live in the countries of Burkina Faso and Ghana; 61 percent of Burkinabe and 45 percent of Ghanaians live on less than per day. About 12 million farmers in the region (including people in Niger, Chad, Mauritania, Mali, and probably Burkina Faso and Senegal) are occasionally or "chronically vulnerable to food and nutrition insecurity." The Brookings Institution has described Sahelians as among the "most underprivileged, marginalized, and poorest people" on Earth. Subsistence food production Agriculture contributes between 10 and 70 percent of GDP to the economies of most sub-Saharan countries. The major agricultural systems of the Sahel are oasis, pastoral, and mixed production of cereals and root crops. The root crops are predominantly sweet potato and cassava; cereals are predominantly millet and sorghum, with some maize; the "north-south rainfall gradient defines...a successive north-south dominance of millet, sorghum and maize." Climate changes over the next 25 years are predicted to decrease Sahelian cereal production by double-digit percentages, largely due to increased heat. The Intergovernmental Panel on Climate Change also predicts double-digit decreases due to increased rainfall variability. Homegrown staple crops account for an estimated 90 percent of food consumption in the Sahel, and 90 percent of these crops are grown using exclusively rain-fed agriculture.
A general African transition to first-world-style irrigation systems is considered unlikely, and the Sahel region has an "especially limited irrigation potential." According to the United Nations Food and Agriculture Organization, no more than 10 percent of African food production is likely to be grown under irrigation over the next 20 years. Mechanized irrigation, where it exists, is typically limited to more lucrative cash crops, rather than subsistence. Therefore, in order to increase or even maintain the Sahel's dryland agriculture production capacity the "most logical strategy...will be improving rainfed productivity for most staples." Rainfall and fertility Precipitation patterns and soil quality are "key constraint[s]" in Sahelian food production. Rainfall levels are both generally low to start with and "highly variable" to complicate matters. This variability is a common cause of crop failure due to unpredictable "onset and distribution" of rainfall; cereal yields are impacted by the start date and duration of the rain as much as by the absolute quantity. The majority of the year is the dry season, which ends with harmattan winds blowing dust south from the Sahara; rain usually falls between one and four months of the year, from June through September. Soils in the Sahel are typically degraded, often "crusty, abandoned agricultural lands" and "particularly poor in organic carbon." In Burkina Faso, one-third of all land is degraded. The human-induced structural damage to soils wrought by intensive 20th-century agriculture methods "is especially evident during droughts when the land is stripped bare of vegetation and erosive winds and water take their toll." In addition to the toll of soil and wind erosion on old fields, the practices of burning or removing crop residues, and a shift to fewer or no fallowing periods due to increased population density (and commensurate increased need to cultivate all accessible land) have contributed to further decreases in natural fertility. The Sahel is dappled with "unproductive crusty patches" found on "old dunes, sandy plains, colluvial slopes, and alluvial terraces." These "glazed" patches are regionally known as glacis and are found, for example, on approximately 60 percent of all degraded land in Niger. "Glacis" describes a slope made particularly slippery, for whatever reason, and is related to the Old French glacier. Glacis patches in the Sahel are more or less impermeable; rainwater runs off or evaporates, further immiserating the soil biome, and thus the plants and the people. Climate change Even before the full impact of climate change is felt in the Sahel, the region struggles with challenges including "unsustainable management strategies, weak economies, weak infrastructure, 'inappropriate resource tenure', inappropriate interventions (such as eucalyptus plantations), [and] ineffective institutions." The future of the Sahel is insecure. Climate change impacts will be variable but there is a "likelihood of negative impacts in most locations from increased temperatures, greater rainfall variability, and more extreme weather events." Rainwater harvesting techniques of the Sahel The purpose of rainwater harvesting in the Sahel and other dryland eco-agricultural regions is to extend the usability of irregular water inputs. 
Banking rainwater (through techniques often summarized by the epigram "slow it, spread it, sink it") is possible with site-appropriate techniques and as more water becomes "available for ecosystems...their capacity to perform their functions is improved." Furthermore, soil restoration is possible and would potentially open up more than of land in Africa for additional cultivation, which could in turn reduce deforestation for agricultural uses. Niger has implemented several of these techniques on a wide scale beginning in the 1980s and has recovered approximately of degraded land. Benefits of rainwater harvesting (especially on a community scale) include additional drinking water for animals, land reclamation opportunities, higher soil fertility, accelerated growth of timber for firewood, and reinforcement of a virtuous cycle pattern leading toward additional rainfall (trees make rain). Any or all of the following techniques reduce water runoff and thus increase soil water storage, generally yielding about two to three times more growth than crops grown in the same regions/conditions under a more conventional system. One study found that appropriately managed Sahelian rainwater-harvesting techniques increased runoff retention up to 87 percent, doubled water infiltration rates, and extended the crop-growing season up to 20 days. Bouli A bouli is a small-scale artificial pond dug "where there is convergence of runoff" at the midpoint or bottom of a slope. This water tends to last for two or even three months into the dry period after the monsoon. In addition to supplying additional water for livestock and vegetable gardens the bouli "can recreate an ecosystem favourable to the life of the fauna and the local flora, boosting recharge of water tables during droughts and allowing vegetation to grow even during the dry period." Bouli may be the most poorly studied of the rainwater harvesting techniques appropriate for the Sahel, as there are relatively few studies about the mechanics and benefits of this system. Bunds Mauritanian farmers build weirs to trap windblown sand during the dry season, and during the "infrequent rains" these weirs serve to minimize water runoff and maximize groundwater recharge; the stone rows of Burkina Faso, Mali, and Niger function by similar principles. Stone rows, typically called bunds, are a traditional and widely used means of land improvement in the Sahel. Laid out on contour, stone rows minimize soil erosion as well as rainwater runoff, and offer favorable microclimates. Bunds not laid out in parallel with the natural contours of the land may result in "some gully formation during rainy periods." Bunds can also be made of earth, which was the original practice that preceded the use of stone. Bunds may be laid out up to 30 meters apart and may themselves be planted with indigenous vegetation such as Andropogon gayanus or Piliostigma reticulatum. Both earth and stone bunds are prone to material deterioration over time and demand periodic maintenance; as a general rule, the more stones used the more stable the row. Projet d'aménagement des terroirs et conservation des eaux (PATECORE) popularized the three-stone system for building more durable, animal-disturbance-resistant stone rows, in which one large stone is placed atop three smaller stones. Demi-lunes, or half-moons Half-moons, which are known as demi-lunes through much of the Sahel because of the French colonial influence on regional languages, are a widely used traditional form of semi-circular planting pit.
Half-moons are formed by digging a hole up to four meters across but somewhat shallower in depth, and "placing the removed earth on the downhill side." Half-moons are particularly useful for remediating the more or less impermeable glacis soils. These edged planting pits capture and hold organic matter and moisture. The accumulated detritus in turn attracts termites and other invertebrates whose actions create passages and pores in the organic matter, building humus, and permitting better water infiltration. Half-moons have been shown to reduce the risk of crop failure and increase agricultural productivity, especially with the use of "complementary inputs" such as animal manures. Half-moons, however, are extremely labor-intensive: "constructing just one takes several hours" and the preparation of the planting areas must be done during the dry season when the ground is very hard and the heat may be severe. According to one account based on interviews with Sahelian farm families, "preparation of [one hectare of demi-lunes] amounts to two to four person-months of work, and yearly maintenance of approximately one-person month is required." Zaï, or tassa A zaï is a "water pocket" and is another indigenous planting method, developed in the Yatenga. The word comes from the Moré language, and means something like "getting up early and hurrying out to prepare the soil" or even "breaking and fragmenting the soil crust before sowing." Tassa is the Hausa language word for this concept. A similar practice in the Yako region is called guendo. Similar to half-moons, but smaller, zaï are usually 24 to 40 cm wide, 10 to 25 cm deep, spaced about 40 cm apart in a grid across the field. Zaï are usually established with "two handfuls" of organic matter in the form of animal manure, crop residues, or a composted combination of the two. These pits were traditionally used on a small scale to remediate degraded zipélé lands but are now being used on much larger plots. Zaï are best-suited for use in areas that see "isohyets of 300 and 800 mm rainfall." Zaï have been shown to increase yields between 2.5 and 20 times normal, "depending on the crop." As with half-moons, the major drawback of zaï lies in the hundreds of man-hours that are necessary to build them. Families must either have a large number of fit and able-bodied workers, or "pay for the services of the young people's association." Other techniques Other beneficial and successful practices in the Sahel include: Living hedgerows Straw mulching Coppicing/pollarding rather than cutting down trees wholesale, ideally leaving two or three shoots for regrowth Paddock systems for grazing animals Tied ridges, a planting system that looks like a Belgian waffle Obstacles to implementation Widespread adoption of rainwater harvesting techniques in the Sahel is so far limited by a number of factors including a high upfront cost for labor. The massive quantity and weight of stones needed to establish bunds is often prohibitive. It is estimated that of rock are needed to establish stone rows for just one hectare of arable land. Other limits include lack of knowledge about these techniques and the absence of training programs. In the words of one development analyst, "agricultural water management strategies have been over-studied, over-promoted, and over-funded.
However, despite the efforts of numerous projects, water scarcity still limits agricultural production of most of the smallholder crop-livestock farmers of the basin and cereal yields are still lying far below their potential." One study found that village training programs, "a low-cost policy intervention," were highly effective in increasing uptake of rainwater harvesting techniques. See also Effects of climate change on agriculture Farmer-managed natural regeneration Contour trenching Spreading ground Anthrosol Terra preta Oasification Afforestation Water scarcity in Africa Water conflict in the Middle East and North Africa Environmental issues in Africa Sources Bibliography Further reading Agriculture in Africa Climate change adaptation Climate change in Africa Environment of Africa Environmental education Food security Irrigation Permaculture concepts Sahel Sahel Sustainable agriculture Water conservation Water in Africa Water supply
Rainwater harvesting in the Sahel
[ "Chemistry", "Engineering", "Environmental_science" ]
2,969
[ "Hydrology", "Water supply", "Environmental engineering", "Environmental social science", "Environmental education" ]
58,203,281
https://en.wikipedia.org/wiki/Introduction%20to%20Electrodynamics
Introduction to Electrodynamics is a textbook by physicist David J. Griffiths. Generally regarded as a standard undergraduate text on the subject, it began as lecture notes that have been perfected over time. Its most recent edition, the fifth, was published in 2023 by Cambridge University Press. This book uses SI units (the mks convention) exclusively. A table for converting between SI and Gaussian units is given in Appendix C. Griffiths said he was able to reduce the price of his textbook on quantum mechanics simply by changing the publisher, from Pearson to Cambridge University Press. He has done the same with this one. Table of contents (5th edition) Preface Advertisement Chapter 1: Vector Analysis Chapter 2: Electrostatics Chapter 3: Potentials Chapter 4: Electric Fields in Matter Chapter 5: Magnetostatics Chapter 6: Magnetic Fields in Matter Chapter 7: Electrodynamics Intermission Chapter 8: Conservation Laws Chapter 9: Electromagnetic Waves Chapter 10: Potentials and Fields Chapter 11: Radiation Chapter 12: Electrodynamics and Relativity Appendix A: Vector Calculus in Curvilinear Coordinates Appendix B: The Helmholtz Theorem Appendix C: Units Index Reception Paul D. Scholten, a professor at Miami University (Ohio), opined that the first edition of this book offers streamlined, though not always in-depth, coverage of the fundamental physics of electrodynamics. Special topics such as superconductivity or plasma physics are not mentioned. Breaking with tradition, Griffiths did not give solutions to all the odd-numbered questions in the book. Another unique feature of the first edition is the informal, even emotional, tone. The author sometimes refers to the reader directly. Physics receives the primary focus. Equations are derived and explained, and common misconceptions are addressed. According to Robert W. Scharstein from the Department of Electrical Engineering at the University of Alabama, the mathematics used in the third edition is just enough to convey the subject, and the problems are valuable teaching tools that do not involve the "plug and chug disease." Although students of electrical engineering are not expected to encounter complicated boundary-value problems in their career, this book is useful to them as well because of its emphasis on conceptual rather than mathematical issues. He argued that with this book, it is possible to skip the more mathematically involved sections and move on to the more conceptually interesting topics, such as antennas. Moreover, the tone is clear and entertaining. Using this book "rejuvenated" his enthusiasm for teaching the subject. Colin Inglefield, an associate professor of physics at Weber State University (Utah), commented that the third edition is notable for its informal and conversational style that may appeal to a large class of students. The ordering of its chapters and its contents are fairly standard and are similar to texts at the same level. The first chapter offers a valuable review of vector calculus, which is essential for understanding this subject. While most other authors, including those aimed at a more advanced audience, denote the distance from the source point to the field point by other symbols, Griffiths uses a script r. Unlike some comparable books, the level of mathematical sophistication is not particularly high. For example, Green's functions are not mentioned anywhere. Instead, physical intuition and conceptual understanding are emphasized.
In fact, care is taken to address common misconceptions and pitfalls. It contains no computer exercises. Nevertheless, it is perfectly adequate for undergraduate instruction in physics. As of June 2005, Inglefield had taught three semesters using this book. Physicists Yoni Kahn of Princeton University and Adam Anderson of the Fermi National Accelerator Laboratory indicated that Griffiths' Electrodynamics offers a dependable treatment of all material in the electromagnetism section of the Physics Graduate Record Examinations (Physics GRE) except circuit analysis. Editions See also Introduction to Quantum Mechanics (textbook) by the same author Classical Electrodynamics (textbook) by John David Jackson, a commonly used graduate-level textbook. List of textbooks in electromagnetism List of textbooks on classical and quantum mechanics List of textbooks in thermodynamics and statistical mechanics List of books on general relativity Notes References Further reading A graduate textbook. Electromagnetism Physics textbooks Electrodynamics 1981 non-fiction books Undergraduate education
Introduction to Electrodynamics
[ "Physics", "Mathematics" ]
884
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions", "Electrodynamics", "Dynamical systems" ]
58,207,196
https://en.wikipedia.org/wiki/Ballistic%20table
A ballistic table or ballistic chart, also known as the data of previous engagements (DOPE) chart, is a reference data chart used in long-range shooting to predict the trajectory of a projectile and compensate for the physical effects of gravity and wind drift, in order to increase the probability of the projectile successfully reaching the intended target. Ballistic tables are commonly used in target shooting, hunting, military sharpshooting and ballistic science applications. Ballistic chart data are typically given in angular measurements with units in either milliradians (mil/mrad) or minutes of arc (MOA), arranged in a table format with the rows representing different reference distances and the columns corresponding to categories of information (e.g. angular deviations, actual drop/drift distance, "click" count, etc.) in which the shooter is interested. After ranging the intended target, the shooter can then read off the chart data to estimate the ballistic correction required (relative to a zeroed range) and calibrate the aim accordingly by turning the adjustment knobs on the scope and/or using the reference markings on the scope's reticle. Ballistic tables are usually generated using specifically designed computer programs built on mathematical models, known as ballistic software, and an electronic device that runs ballistic software is called a ballistic calculator or ballistic computer. The number of inputs to the ballistic calculator can vary depending on the specific generator, or the user may choose to only input certain variables. For example, a very simple drop table can be made using inputs for the sight adjustment value (in mil or MOA), the zero range, intended target ranges, muzzle velocity, caliber, ballistic coefficient and bullet weight. Some of the environmental effects that play a role in calculating the trajectory are gravity, projectile spin, wind, temperature, air pressure and humidity. More advanced tables can take more factors into account to ensure a more accurate prediction of the trajectory, which becomes increasingly affected by gravity and wind drift over longer distances due to the more prolonged bullet flight. Some of these variables may have a negligible effect at shorter ranges. See also External ballistics References External links Buckmasters.com - How to Read a Ballistics Chart Long Range Shooting - Intro to Ballistic Tables - The Loadout Room JBM Ballistics, a free online ballistic calculator. Projectiles Aerodynamics Ballistics
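As a minimal illustration of how such a table can be generated, the following sketch computes drop and the corresponding angular correction relative to a zero range under a deliberately simplified, drag-free flat-fire model; real ballistic software numerically integrates aerodynamic drag (e.g., against G1/G7 reference models), so the numbers below understate the drop at long range, and all parameters are hypothetical.

```python
import math

G = 9.80665                            # gravitational acceleration (m/s^2)
MIL_PER_RAD = 1000.0                   # milliradians per radian
MOA_PER_RAD = 60.0 * 180.0 / math.pi   # minutes of arc per radian

def drop_m(range_m: float, v0: float) -> float:
    """Drop below the bore line after range_m metres (vacuum, flat fire)."""
    t = range_m / v0                   # time of flight, constant-velocity approximation
    return 0.5 * G * t * t

def correction(range_m: float, zero_m: float, v0: float):
    """Angular correction (mrad, MOA) relative to the zeroed range."""
    angle = (math.atan(drop_m(range_m, v0) / range_m)
             - math.atan(drop_m(zero_m, v0) / zero_m))
    return angle * MIL_PER_RAD, angle * MOA_PER_RAD

v0, zero = 800.0, 100.0                # hypothetical muzzle velocity and zero range
print(f"{'range (m)':>9} {'drop (cm)':>10} {'mrad':>6} {'MOA':>6}")
for r in range(100, 601, 100):
    mil, moa = correction(r, zero, v0)
    print(f"{r:>9} {100 * drop_m(r, v0):>10.1f} {mil:>6.2f} {moa:>6.2f}")
```

The output mirrors the layout described above, with rows of reference distances and columns of drop and angular corrections; wind drift would contribute a further column computed analogously.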
Ballistic table
[ "Physics", "Chemistry", "Engineering" ]
479
[ "Applied and interdisciplinary physics", "Aerodynamics", "Aerospace engineering", "Ballistics", "Fluid dynamics" ]
64,810,837
https://en.wikipedia.org/wiki/High-entropy-alloy%20nanoparticles
High-entropy-alloy nanoparticles (HEA-NPs) are nanoparticles having five or more elements alloyed in a single-phase solid solution structure. HEA-NPs possess a wide compositional library, a distinct alloy mixing structure, and nanoscale size effects, giving them huge potential in catalysis, energy, environmental, and biomedical applications. Enabling synthesis HEA-NPs are a structural analog to bulk high-entropy alloys (HEAs), but synthesized at the nanoscale. The formation of HEAs typically requires high temperature for multi-element mixing; however, high temperature acts against nano-material synthesis due to high-temperature-induced structure aggregation and surface reconstruction. In 2018, HEA-NPs were first synthesized by carbothermal shock synthesis. (The material and technology are patented.) The carbothermal shock employs rapid high-temperature heating (e.g. 2000 K, in 55 ms) to enable the non-equilibrium synthesis of HEA-NPs with uniform size and homogeneous mixing despite containing immiscible combinations. Although rapid quenching is desired to maintain the solid-solution state, too fast a cooling rate can hinder structural ordering. Therefore, the cooling rate should be chosen carefully based on the temperature-time-transformation diagram. Another guide that can be used for the synthesis is the Ellingham diagram. Elements at the top of the diagram are easily reduced and tend to form HEA-NPs, while elements at the bottom of the diagram tend to form high-entropy oxide NPs. Later, other similar non-equilibrium "shock" methods were also introduced to synthesize HEA-NPs and other types of high-entropy nanostructures. Recently, a low-temperature synthesis through simultaneous multi-cation exchange (below 900 K) has been demonstrated for high-entropy metal sulfide NPs, which may be applied to metal selenides, tellurides, phosphides, and halides as well. In 2024, a study showed that induction plasma can be used as a one-step method that enables the continuous synthesis of HEA-NPs directly from elemental metal powders via in-flight alloying. Structural analysis Due to the random distribution of elements in HEA-NPs, in addition to conventional characterization methods, other methods with higher resolution are needed for their structural analysis. To analyze the random mixing of multiple elements, atomic electron tomography can be used, which provides positional precision of 21 pm and identification of atoms by periods. Furthermore, X-ray absorption spectroscopy can give information on local coordination environments, while extended X-ray absorption fine structure can be used to obtain coordination numbers and bond distances. Combined with hard X-ray photoelectron spectroscopy or X-ray absorption near-edge structure, these analyses can be used to explore structure–property relationships in HEA-NPs. In addition, due to the immense number of possibilities of compositions and surfaces (i.e., terrace, edge, and corner sites) available for HEA-NPs, simulations such as density functional theory calculations are also widely used for their analysis. Properties and applications HEA-NPs have a large compositional library, which enables tunability in chemical composition, structure, and associated properties. In HEA-NPs, the same type of atoms can have different local density of states because their neighboring atom compositions can be different.
Such variations in local environment lead to diverse and tunable adsorption energy levels, which can be beneficial for satisfying the Sabatier principle, especially for complex reactions. In addition, owing to the high-entropy structure, HEA-NPs typically show improved structural stability. One suggested mechanism for the enhanced structural stability is the prevention of phase separation, with lattice distortions from different-sized elements acting as diffusion barriers. With the above merits, HEA-NPs have been used as high-performance catalysts for both thermochemical and electrochemical reactions, such as ammonia oxidation and decomposition, and water splitting. High-throughput and data-mining approaches are being implemented toward accelerated materials discovery in the multi-dimensional space of HEA-NPs. References See also High-entropy alloys Thermal shock synthesis Self-assembly of nanoparticles Alloys Thermodynamic entropy Nanoparticles by physical property
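For context on the "high-entropy" label itself, the ideal configurational entropy of mixing is \(\Delta S_{mix} = -R \sum_i x_i \ln x_i\), which grows with the number of alloyed elements. The short sketch below simply evaluates this textbook formula for equimolar compositions; it is a generic illustration, not a reproduction of any study cited above, and the 1.5R threshold in the comment is a commonly quoted rule of thumb rather than a strict definition.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(fractions):
    """Ideal configurational entropy of mixing: -R * sum(x_i * ln x_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

# Equimolar alloys: entropy grows as R * ln(n), which is why five or more
# principal elements (delta-S >= 1.5 R is a commonly quoted threshold)
# mark the usual "high-entropy" regime.
for n in (2, 3, 5, 8):
    s = mixing_entropy([1.0 / n] * n)
    print(f"{n} elements: {s:5.2f} J/(mol*K) = {s / R:.2f} R")
```

For an equimolar five-element alloy this gives about 1.61R, just above the usual threshold, which is consistent with the five-element floor in the definition above.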
High-entropy-alloy nanoparticles
[ "Physics", "Chemistry" ]
892
[ "Physical quantities", "Thermodynamic entropy", "Entropy", "Chemical mixtures", "Alloys", "Statistical mechanics" ]
64,815,290
https://en.wikipedia.org/wiki/British%20Hydromechanics%20Research%20Association
The British Hydromechanics Research Association is a former government research association that supplied consulting engineering services in fluid dynamics. History It was formed on 20 September 1947 in Essex, under the Companies Act 1929. It had moved to Bedfordshire by the 1960s. In the 1970s it was known as BHRA Fluid Engineering. Next door was the National Centre for Materials Handling, set up by the Ministry of Technology (MinTech), later known as the National Materials Handling Centre. On 16 October 1989 it became a private consultancy. Fluid engineering The BHRA conducted most of the research for the aerodynamics of British power station infrastructure in the 1960s, such as cooling towers. In 1966 it designed an early Thames flood barrier. Computational fluid dynamics It developed early CFD software. Visits On Tuesday 21 June 1966, the new Bedfordshire laboratories were opened by the Duke of Edinburgh. Structure The organisation, Framatome BHR, is now in Cranfield in west Bedfordshire, near the M1. See also Bierrum, which has designed and built Britain's power station cooling towers since 1965, also in Bedfordshire. References External links BHR Group 1947 establishments in the United Kingdom British research associations Central Bedfordshire District Computational fluid dynamics Engineering research institutes Hydraulic engineering organizations Hydraulic laboratories Research institutes established in 1947 Science and technology in Bedfordshire Science and technology in Essex Wind tunnels
British Hydromechanics Research Association
[ "Physics", "Chemistry", "Engineering" ]
262
[ "Computational fluid dynamics", "Engineering research institutes", "Civil engineering organizations", "Computational physics", "Hydraulic engineering organizations", "Fluid dynamics" ]
76,504,290
https://en.wikipedia.org/wiki/List%20of%20Belgian%20provinces%20by%20life%20expectancy
Statistics Belgium Statistics by province Average values for 3-year periods. The values are rounded, all calculations were done on raw data. The sorting of provinces by total life expectancy for both periods is the same. Data source: Statistics Belgium Statistics by region Data source: Statistics Belgium Eurostat (2019–2022) Data source: Eurostat Global Data Lab (2019–2022) Data source: Global Data Lab Charts See also List of countries by life expectancy List of European countries by life expectancy Administrative divisions of Belgium Demographics of Belgium References Health in Belgium Demographics of Belgium Belgium, life expectancy Belgium Provinces of Belgium Provinces by life expectancy Belgium
List of Belgian provinces by life expectancy
[ "Biology" ]
132
[ "Senescence", "Life expectancy" ]
76,512,012
https://en.wikipedia.org/wiki/Dexamethasone/levofloxacin
Dexamethasone/levofloxacin, sold under the brand name Levodexa, is a fixed-dose combination medication used for the prevention and treatment of inflammation, and the prevention of infection, associated with cataract surgery. It contains dexamethasone, a corticosteroid; and levofloxacin, an anti-infective. It was approved for medical use in Canada in December 2023. Medical uses Dexamethasone/levofloxacin is indicated for the prevention and treatment of inflammation, and the prevention of infection, associated with cataract surgery in adults. References Combination drugs Ophthalmology drugs
Dexamethasone/levofloxacin
[ "Chemistry" ]
140
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
59,856,308
https://en.wikipedia.org/wiki/Felix%20Behrend
Felix Adalbert Behrend (23 April 1911 – 27 May 1962) was a German mathematician of Jewish descent who escaped Nazi Germany and settled in Australia. His research interests included combinatorics, number theory, and topology. Behrend's theorem and Behrend sequences are named after him. Life Behrend was born on 23 April 1911 in Charlottenburg, a suburb of Berlin. He was one of four children of Dr. Felix W. Behrend, a politically liberal mathematics and physics teacher. Although of Jewish descent, the family was Lutheran. Behrend followed his father in studying both mathematics and physics, at both Humboldt University of Berlin and the University of Hamburg, and completed a doctorate in 1933 at Humboldt University. His dissertation, Über numeri abundantes [On abundant numbers], was supervised by Erhard Schmidt. With Adolf Hitler's rise to power in 1933, Behrend's father lost his job, and Behrend himself moved to Cambridge University in England to work with Harold Davenport and G. H. Hardy. After taking work with a life insurance company in Zürich in 1935, he was transferred to Prague, where he earned a habilitation at Charles University in 1938 while continuing to work as an actuary. He left Czechoslovakia in 1939, just before the war reached that country, and returned through Switzerland to England, but was deported on the HMT Dunera to Australia as an enemy alien in 1940. Although both Hardy and J. H. C. Whitehead intervened for an early release, he remained in the prison camps in Australia, teaching mathematics there to the other internees. After Thomas MacFarland Cherry added to the calls for his release, he gained his freedom in 1942 and began working at the University of Melbourne. He remained there for the rest of his career, and married a Hungarian dance teacher in 1945 in the Queen's College chapel; they had two children. Although his highest rank was associate professor, Bernhard Neumann writes that "he would have been made a (personal) professor" if not for his untimely death. He died of brain cancer on 27 May 1962 in Richmond, Victoria, a suburb of Melbourne. Contributions Behrend's work covered a wide range of topics, and often consisted of "a new approach to questions already deeply studied". He began his research career in number theory, publishing three papers by the age of 23. His doctoral work provided upper and lower bounds on the density of the abundant numbers. He also provided elementary bounds for the prime number theorem, before that problem was solved more completely by Paul Erdős and Atle Selberg in the late 1940s. He is known for his results in combinatorial number theory, and in particular for Behrend's theorem on the logarithmic density of sets of integers in which no member of the set is a multiple of any other, and for his construction of large Salem–Spencer sets of integers with no three-element arithmetic progression. Behrend sequences are sequences of integers whose multiples have density one; they are named for Behrend, who proved in 1948 that the sum of reciprocals of such a sequence must diverge. He wrote one paper in algebraic geometry, on the number of symmetric polynomials needed to construct a system of polynomials without nontrivial real solutions, several short papers on mathematical analysis, and an investigation of the properties of geometric shapes that are invariant under affine transformations. After moving to Melbourne his interests shifted to topology, first in the construction of polyhedral models of manifolds, and later in point-set topology.
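To make the notion of a progression-free set concrete, the sketch below generates the classic digit-based Salem–Spencer construction (integers whose base-3 expansion uses only the digits 0 and 1) and verifies by brute force that it contains no three-term arithmetic progression. This is a simpler and sparser construction than Behrend's own 1946 one, which draws integers from points on a high-dimensional sphere to achieve much greater density, but it illustrates the kind of object his construction improves upon.

```python
from itertools import combinations

def digits_0_1_base3(limit):
    """Integers below `limit` whose base-3 expansion uses only digits 0 and 1."""
    out = []
    for n in range(limit):
        m = n
        while m > 0 and m % 3 != 2:   # strip trailing 0/1 digits; stop on a 2
            m //= 3
        if m == 0:                    # no digit 2 anywhere in the expansion
            out.append(n)
    return out

def has_3term_ap(s):
    """Brute force: a 3-term progression a, b, c satisfies a + c = 2b."""
    values = set(s)
    return any((a + c) % 2 == 0 and (a + c) // 2 in values
               for a, c in combinations(sorted(values), 2))

# The set is progression-free because a + c produces no carries in base 3
# (digits are at most 1 + 1 = 2), while 2b has only digits 0 and 2, so
# a + c = 2b forces digit-wise equality a = b = c.
S = digits_0_1_base3(3**7)
print(len(S), has_3term_ap(S))  # 128 False
```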
He was also the author of a posthumously published children's book, Ulysses' Father (1962), consisting of a collection of bedtime stories linked through the Greek legend of Sisyphus. Selected publications References 1911 births 1962 deaths 20th-century German mathematicians 20th-century Australian mathematicians Australian people of German-Jewish descent Combinatorialists German number theorists Humboldt University of Berlin alumni Charles University alumni Academic staff of the University of Melbourne German emigrants to Australia
Felix Behrend
[ "Mathematics" ]
818
[ "Combinatorialists", "Combinatorics" ]
59,859,530
https://en.wikipedia.org/wiki/EP%20matrix
In mathematics, an EP matrix (or range-Hermitian matrix or RPN matrix) is a square matrix A whose range is equal to the range of its conjugate transpose A*. Another equivalent characterization of EP matrices is that the range of A is orthogonal to the nullspace of A. Thus, EP matrices are also known as RPN (Range Perpendicular to Nullspace) matrices. EP matrices were introduced in 1950 by Hans Schwerdtfeger, and since then, many equivalent characterizations of EP matrices have been investigated in the literature. The abbreviation EP originally stood for Equal Principal, but it is widely believed to stand for Equal Projectors instead, since an equivalent characterization of EP matrices is based on the equality of the projectors AA+ and A+A. The range of any matrix A is perpendicular to the null-space of A*, but is not necessarily perpendicular to the null-space of A. When A is an EP matrix, the range of A is precisely perpendicular to the null-space of A. Properties An equivalent characterization of an EP matrix A is that A commutes with its Moore–Penrose inverse, that is, the projectors AA+ and A+A are equal (a property checked numerically in the sketch below). This is similar to the characterization of normal matrices, where A commutes with its conjugate transpose. As a corollary, nonsingular matrices are always EP matrices. The sum of EP matrices Ai is an EP matrix if the null-space of the sum is contained in the null-space of each matrix Ai. Being an EP matrix is a necessary condition for normality: A is normal if and only if A is an EP matrix and AA*A² = A²A*A. When A is an EP matrix, the Moore–Penrose inverse of A is equal to the group inverse of A. A is an EP matrix if and only if the Moore–Penrose inverse of A is an EP matrix. Decomposition The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix. Weakening the normality condition to EP-ness, a similar statement is still valid. Precisely, a matrix A of rank r is an EP matrix if and only if it is unitarily similar to a block matrix with a nonsingular core, that is, A = U [C 0; 0 0] U*, where U is a unitary matrix and C is an r × r nonsingular matrix. Note that if A is full rank, then A = UCU*. References Matrices
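As a numerical illustration of the commuting characterization, the following sketch (the helper name is_EP and the tolerance are our own choices, not a standard library routine) checks whether a matrix commutes with its Moore–Penrose inverse using NumPy:

```python
import numpy as np

def is_EP(A, tol=1e-10):
    """Check whether A commutes with its Moore-Penrose inverse A+,
    i.e. whether the projectors A A+ and A+ A coincide."""
    A = np.asarray(A, dtype=complex)
    A_pinv = np.linalg.pinv(A)
    return np.allclose(A @ A_pinv, A_pinv @ A, atol=tol)

# A Hermitian (hence normal) matrix is EP...
H = np.array([[2, 1j], [-1j, 3]])
print(is_EP(H))   # True

# ...while a singular non-normal matrix generally is not.
N = np.array([[0, 1], [0, 0]])
print(is_EP(N))   # False
```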
EP matrix
[ "Mathematics" ]
515
[ "Matrices (mathematics)", "Mathematical objects" ]
59,862,590
https://en.wikipedia.org/wiki/Aerodynamics%20Research%20Institute
The Aerodynamische Versuchsanstalt (AVA) in Göttingen was one of the four predecessor organizations of the "German Research and Experimental Institute for Aerospace", founded in 1969 and renamed the German Aerospace Center (DLR) in 1997. History The AVA was created in 1919 from the "Modellversuchsanstalt für Aerodynamik der Motorluftschiff-Studiengesellschaft", founded in Göttingen in 1907 by Ludwig Prandtl. In its founding years, it was still concerned with developing the "best" form of airship. In 1908, the first wind tunnel in Göttingen was built for tests on aviation models. In 1915, the Modellversuchsanstalt was taken over by the Kaiser Wilhelm Society (KWG, founded in 1911); in 1919, under the direction of Ludwig Prandtl, it became the "Aerodynamic Research Institute of the Kaiser Wilhelm Society" (AVA), and in 1925 it was converted into the "Kaiser Wilhelm Institute for Flow Research with affiliated Aerodynamic Research Institute". Ludwig Prandtl headed the institute until 1937; his successor was Albert Betz. In the same year, a spin-off from the institute took place under the name "Aerodynamische Versuchsanstalt Göttingen e. V. in the Kaiser Wilhelm Society", in which the Reich Ministry of Aviation was involved. The part remaining after the spin-off was continued under the name "Kaiser Wilhelm Institute for Flow Research", from 1948 the Max Planck Institute for Fluid Research (today the Max Planck Institute for Dynamics and Self-Organization). The AVA was confiscated by the British in 1945 (until 1948), re-opened in 1953 as the "Aerodynamische Versuchsanstalt Göttingen e. V. in the Max Planck Society", and fully integrated in 1956 as the "Aerodynamic Research Institute in the Max Planck Society". In 1969 it was spun off from the Max Planck Society and the "German Research and Experimental Institute for Aerospace e. V." was founded. Bibliography Aerodynamische Versuchsanstalt Göttingen e.V. in der Kaiser-Wilhelm-/Max-Planck-Gesellschaft (CPTS), in: Eckart Henning, Marion Kazemi: Handbuch zur Institutsgeschichte der Kaiser-Wilhelm-/ Max-Planck-Gesellschaft zur Förderung der Wissenschaften 1911–2011 – Daten und Quellen, Berlin 2016, 2 subvolumes, volume 1: Institute und Forschungsstellen A–L (online, PDF, 75 MB), pages 27–45 (Chronologie des Instituts) Sources Historie des DLR – Gesellschaft von Freunden des DLR e. V. 100 Jahre DLR – Homepage des DLR Archiv zur Geschichte der Max-Planck-Gesellschaft Former research institutes Aerodynamics Research institutes in Göttingen Max Planck Institutes Aviation history of Germany 1907 establishments in Germany 1969 disestablishments in West Germany History of Lower Saxony
Aerodynamics Research Institute
[ "Chemistry", "Engineering" ]
613
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
59,866,779
https://en.wikipedia.org/wiki/Signs%20Of%20LIfe%20Detector
Signs Of LIfe Detector (SOLID) is an analytical instrument under development to detect extraterrestrial life in the form of organic biosignatures in samples obtained by a core drill during planetary exploration. The instrument is based on fluorescent immunoassays and it is being developed by the Spanish Astrobiology Center (CAB) in collaboration with the NASA Astrobiology Institute. SOLID is currently undergoing testing for use in astrobiology space missions that search for common biomolecules that may indicate the presence of extraterrestrial life, past or present. The system was validated in field tests, and engineers are looking into ways to refine the method and miniaturize the instrument further. Science background Modern astrobiology inquiry has emphasized the search for water on Mars, chemical biosignatures in the permafrost, soil and rocks at the planet's surface, and even biomarker gases in the atmosphere that may give away the presence of past or present life. The detection of preserved organic molecules of unambiguous biological origin is fundamental for the confirmation of present or past life, but the 1976 Viking lander biological experiments failed to detect organics on Mars, and it is suspected that this was because of the combined effects of heat applied during analysis and the unexpected presence of oxidants such as perchlorates in the Martian soil. The recent discovery of near-surface ground ice on Mars supports arguments for the long-term preservation of biomolecules on Mars. SOLID demonstrated that antibodies are unaffected by acidity, heat and oxidants such as perchlorates, and it has emerged as a viable choice for an astrobiology mission directly searching for biosignatures. For a time, the ExoMars Rosalind Franklin rover was planned to carry a similar instrument called the Life Marker Chip. Instrument SOLID was designed for automatic in situ detection and identification of substances from liquid and crushed samples under the conditions of outer space. The system uses hundreds of carefully selected antibodies to detect lipids, proteins, polysaccharides, and nucleic acids. These are complex biological polymers that could only be synthesized by life forms, and are therefore strong indicators, or biosignatures, of past or present life. SOLID consists of two separate functional units: a Sample Preparation Unit (SPU) for extractions by ultrasonication, and a Sample Analysis Unit (SAU) for fluorescent immunoassays. The antibody microarrays are separated into hundreds of small compartments inside a biochip only a few square centimeters in size. The SOLID instrument is able to perform both "sandwich" and competitive immunoassays using hundreds of well characterized and highly specific antibodies. The technique called "sandwich immunoassay" is a non-competitive immunoassay in which the analyte (the compound of interest in the unknown sample) is captured by an immobilized antibody, and a labeled antibody is then bound to the analyte to reveal its presence. In other words, the "sandwich" quantifies antigens (i.e., biomolecules) between two layers of antibodies (i.e., capture and detection antibodies). In the competitive assay technique, unlabeled analyte displaces bound labeled analyte, which is then detected or measured. An optical system is set up so that a laser beam excites the fluorochrome label and a CCD detector captures an image of the microarray that can be measured.
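Quantification in fluorescent immunoassays of this kind is typically done against a calibration curve fitted to standards of known concentration. The sketch below is a generic illustration rather than SOLID flight software: it assumes a four-parameter logistic (4PL) response model, a common choice for sandwich immunoassays, and uses made-up calibration data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic model: fluorescence as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** (-hill))

# Hypothetical calibration standards: concentration (ng/mL) vs. fluorescence counts.
std_conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0, 100.0])
std_signal = np.array([110, 240, 400, 1300, 2100, 3600, 3900])

params, _ = curve_fit(four_pl, std_conc, std_signal, p0=[100, 4000, 5, 1])

def concentration_from_signal(signal):
    """Invert the fitted 4PL curve to estimate an unknown analyte concentration."""
    bottom, top, ec50, hill = params
    return ec50 * ((top - bottom) / (signal - bottom) - 1.0) ** (-1.0 / hill)

print(concentration_from_signal(1500))  # estimated ng/mL for an unknown spot
```

An analogous inversion, with a decreasing calibration curve, would apply to the competitive assay format.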
The instrument is able to detect compounds across a broad range of molecular sizes, from amino acids through peptides and proteins to whole cells and spores, with sensitivities of 1–2 ppb (ng/mL) for biomolecules and 10⁴ to 10³ spores per milliliter. Some compartments in the microarray are reserved for samples of known nature and concentration, which are used as controls for reference and comparison. The SOLID instrument concept avoids the high-temperature treatments of other techniques, which may destroy organic matter in the presence of Martian oxidants such as perchlorates. Testing A field prototype of SOLID was first tested in 2005 in a simulated Mars drilling expedition called MARTE (Mars Analog Rio Tinto Experiment), in which the researchers tested deep drilling, sample-handling systems, and immunoassays relevant to the search for life in the Martian subsurface. MARTE was funded by the NASA Astrobiology Science and Technology for Exploring Planets (ASTEP) program. Using the sample cores, SOLID successfully detected several biological polymers in extreme environments in different parts of the world, including a deep South African mine, Antarctica's McMurdo Dry Valleys, Yellowstone, Iceland, the Atacama Desert in Chile, and the acidic waters of Rio Tinto. Extracts obtained from Mars analogue sites on Earth were exposed to various perchlorate concentrations at −20 °C for 45 days, and the samples were then analyzed with SOLID. The results showed no interference from acidity or from the presence of 50 mM perchlorate, a concentration 20 times higher than that found at the Phoenix landing site. These results confirmed that the chosen antibodies are unaffected by acidity, heat and oxidants such as perchlorates. In 2018, another field test took place in the Atacama Desert with a rover called ARADS (Atacama Rover Astrobiology Drilling Studies) that carried a core drill, the SOLID instrument, and another life detection system called the Microfluidic Life Analyzer (MILA). MILA processes minuscule volumes of fluid samples to isolate amino acids, which are building blocks of proteins. The rover tested different strategies for searching for potential evidence of life in the soil, and established that roving, drilling and life detection can take place in concert. Status These tests validated the system for planetary exploration. Some improvements to be addressed in the future are instrument miniaturization, extraction protocols, and antibody stability under outer space conditions. SOLID would be one of the payloads of the proposed Icebreaker Life mission to Mars, or of a lander to Europa. References Astrobiology Fluorescence Molecular biology Spacecraft instruments Space science experiments Spectrometers INTA spacecraft instruments
Signs Of LIfe Detector
[ "Physics", "Chemistry", "Astronomy", "Biology" ]
1,268
[ "Luminescence", "Fluorescence", "Origin of life", "Spectrum (physical sciences)", "Speculative evolution", "Astrobiology", "Molecular biology", "Biochemistry", "Spectrometers", "Spectroscopy", "Astronomical sub-disciplines", "Biological hypotheses" ]
59,867,581
https://en.wikipedia.org/wiki/SoyBase%20Database
SoyBase is a database created by the United States Department of Agriculture. It contains genetic information about soybeans. It includes genetic maps, information about Mendelian genetics, and molecular data regarding genes and sequences. It was started in 1990 and is freely available to individuals and organizations worldwide. History SoyBase was instituted by the Corn Insects and Crop Genetics Research Unit (CICGRU) in Ames, Iowa as a central repository for the soybean genetics community's published information. Originally, the database concentrated on genetic information such as genetic linkage maps and other Mendelian information. SoyBase genetic maps are a manually curated composite of all published mapping and QTL studies, and thus provide a species-level view of markers and QTL. In 2010 the soybean genome sequence was released along with gene models and many other types of genome annotations, which were integrated into SoyBase. SoyBase genetic linkage maps were integral to the assembly of the soybean genome. In 2018 the database received approximately 63,000 page requests from 2,600 users per month from 130 countries. About 40 organizations in the United States and 82 foreign educational institutions access SoyBase yearly. SoyBase supplies data to U.S. and foreign government organizations and corporate entities. Data submission and release policy Data is accepted from the original source generators only. Users who independently identify data for inclusion in the database can contact SoyBase directly. A number of Excel-based spreadsheet templates are available to facilitate the inclusion of data into SoyBase. All data in SoyBase are available without restrictions. A number of data sub-setting and download tools are provided, and when needed, ad hoc subsets of the data can be requested from the SoyBase Curator. Search tool The SoyBase Database Search Tool uses a text entry box for queries. Results are returned as text and as displays. Results display soybean genetic (and genomic) data using Generic Model Organism Database (GMOD) open-source software. In addition to SoyBase objects identified by exact lexical matches to the query term, the tool also uses a soybean-specific ontology to identify biologically related SoyBase objects. Some SoyBase sequence data and annotations are available through an InterMine instance (SoyMine), which is a collaboration with the Legume Information System Project, as sketched in the example below. Graphical displays Genetic maps contain information on markers (SSR, RFLP, SNP, etc.), genes, and biparental and Genome-wide Association Study (GWAS) Quantitative Trait Loci (QTL). Soybean genetic maps are displayed using the CMap comparative genetic map viewer. Soybean genomic sequence and gene model data are displayed using the GBrowse sequence viewer. Other genome annotations in this viewer include epigenetic data such as DNA methylation and gene expression data of various soybean strains subjected to different treatments and from different soybean tissues/cultivars. Metabolic data and biochemical pathway information are displayed using Pathway Tools. Soybean metabolic pathway information (SoyCyc) was inferred by the Plant Metabolic Network project and was used to populate the Pathway Tools displays. References External links Plant Metabolic Network project Generic Model Organism Database project Legume Federation project Legume Information System United States Department of Agriculture InterMine Biological databases Model organism databases
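As a sketch of programmatic access, SoyMine, like other InterMine instances, can be queried with the intermine Python client. The service URL and the query fields below are illustrative assumptions and should be checked against the current SoyMine documentation:

```python
# pip install intermine
from intermine.webservice import Service

# Hypothetical SoyMine endpoint; confirm the URL in the SoyMine documentation.
service = Service("https://mines.legumeinfo.org/soymine/service")

# Request basic identifiers for soybean genes matching a symbol pattern.
query = service.new_query("Gene")
query.add_view("primaryIdentifier", "symbol", "description")
query.add_constraint("symbol", "CONTAINS", "FT", code="A")  # illustrative filter

for row in query.rows(size=10):
    print(row["primaryIdentifier"], row["symbol"], row["description"])
```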
SoyBase Database
[ "Biology" ]
680
[ "Model organism databases", "Model organisms" ]
63,449,981
https://en.wikipedia.org/wiki/Davenport%E2%80%93Schinzel%20Sequences%20and%20Their%20Geometric%20Applications
Davenport–Schinzel Sequences and Their Geometric Applications is a book in discrete geometry. It was written by Micha Sharir and Pankaj K. Agarwal, and published by Cambridge University Press in 1995, with a paperback reprint in 2010. Topics Davenport–Schinzel sequences are named after Harold Davenport and Andrzej Schinzel, who applied them to certain problems in the theory of differential equations. They are finite sequences of symbols from a given alphabet, constrained by forbidding pairs of symbols from appearing in alternation more than a given number of times (regardless of what other symbols might separate them). In a Davenport–Schinzel sequence of order k, the longest allowed alternations have length k. For instance, a Davenport–Schinzel sequence of order three could have two symbols x and y that appear either in the order x...y...x or y...x...y, but longer alternations like x...y...x...y would be forbidden (see the sketch below). The length of such a sequence, for a given choice of k, can be only slightly longer than its number of distinct symbols. This phenomenon has been used to prove corresponding near-linear bounds on various problems in discrete geometry, for instance showing that the unbounded face of an arrangement of line segments can have complexity that is only slightly superlinear. The book is about this family of results, both on bounding the lengths of Davenport–Schinzel sequences and on their applications to discrete geometry. The first three chapters of the book provide bounds on the lengths of Davenport–Schinzel sequences whose superlinearity is described in terms of the inverse Ackermann function α(n). For instance, the length of a Davenport–Schinzel sequence of order three, with n symbols, can be at most proportional to nα(n), as the second chapter shows; the third concerns higher orders. The fourth chapter applies this theory to line segments, and includes a proof that the bounds proven using these tools are tight: there exist systems of line segments whose arrangement complexity matches the bounds on Davenport–Schinzel sequence length. The remaining chapters concern more advanced applications of these methods. Three chapters concern arrangements of curves in the plane, algorithms for arrangements, and higher-dimensional arrangements, following which the final chapter (comprising a large fraction of the book) concerns applications of these combinatorial bounds to problems including Voronoi diagrams and nearest neighbor search, the construction of transversal lines through systems of objects, visibility problems, and robot motion planning. The topic remains an active area of research and the book poses many open questions. Audience and reception Although primarily aimed at researchers, this book (and especially its earlier chapters) could also be used as the textbook for a graduate course in its material. Reviewer Peter Hajnal calls it "very important to any specialist in computational geometry" and "highly recommended to anybody who is interested in this new topic at the border of combinatorics, geometry, and algorithm theory". References Combinatorics on words Discrete geometry Mathematics books 1995 non-fiction books
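To make the definition concrete, the following sketch (our own illustration, using the convention above that alternations longer than the order are forbidden and that equal symbols are never adjacent) tests whether a sequence is a Davenport–Schinzel sequence of a given order:

```python
from itertools import combinations

def longest_alternation(seq, a, b):
    """Length of the longest subsequence of seq that alternates between a and b."""
    length, last = 0, None
    for s in seq:
        if s in (a, b) and s != last:
            length += 1
            last = s
    return length

def is_davenport_schinzel(seq, order):
    """No two adjacent symbols are equal, and no pair of symbols
    alternates more than `order` times."""
    if any(x == y for x, y in zip(seq, seq[1:])):
        return False
    return all(longest_alternation(seq, a, b) <= order
               for a, b in combinations(set(seq), 2))

print(is_davenport_schinzel("xyxzx", 3))  # True: no pair alternates more than 3 times
print(is_davenport_schinzel("xyxyz", 3))  # False: x and y alternate 4 times
```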
Davenport–Schinzel Sequences and Their Geometric Applications
[ "Mathematics" ]
596
[ "Discrete geometry", "Discrete mathematics", "Combinatorics on words", "Combinatorics" ]
63,451,209
https://en.wikipedia.org/wiki/Vattapara%20accident%20zone
Vattapara Hairpin Turn, also known as the Vattapara accident zone, is a road and place along Indian National Highway 66 near Valanchery, Malappuram District, Kerala, India, that is known for a high number of accidents. Over a five-year period there were 300 accidents, 200 injuries, and 30 deaths. History Vattapara Hairpin Turn is located at a distance of about 4 km from Valanchery in Malappuram district, on the National Highway formerly known as NH 17 and now NH 66. The 'Vattapara bend' is an infamous bend in Vattapara between Puthanathani and Valanchery. The number of vehicles that have flipped on the turn is not tracked, but there have been more than 300 road accidents in the last five years, with more than 30 deaths and more than 200 people injured. Overturned vehicles, and fire crews working to right them, are a common sight for locals. The bend has been called the 'Bermuda Triangle' of Malappuram district and a death trap for tanker lorries. Reasons for accidents Frequent vehicular accidents are attributed to the slope of the road approaching the curve and the unscientific construction of the curve in terms of the slope of its surface, a matter related to the theory of banked turns ('banking of the curve'). A vehicle speeding through a curve has a tendency to slide outward; in the vehicle's frame of reference this is experienced as centrifugal force. If this force overcomes the friction between the vehicle's tires and the road, the vehicle slides out or overturns. The available friction depends on the speed of the vehicle, the condition of the road surface, and the load on the vehicle. Here the slope of the road is to the left, away from the turn, so vehicles rounding the curve are at risk by default, as illustrated in the worked example below. Tanker lorries and trucks that descend the first stretch at fairly high speed, at night or otherwise without adequate warnings, or without heeding warnings, are suddenly confronted with a single hairpin bend to the right. Most of the time, drivers cut to the right at once and the vehicle overturns to the left. Ways to cope with accidents The first proposed solution is a bypass road that connects directly from Kanjipura to Valanchery Moodal, diverting traffic before the Vattapara bend on the National Highway. The second is to widen the roads from Puthanathani through Thirunavaya to Kuttipuram to divert traffic. See also Valanchery Roads in Kerala National Highway 66 (India) Traffic collisions in India Puthanathani Kuttipuram Banked turn References External links Tanker lorry overturns at Malappuram Valanchery; no casualties Two killed in Valanchery road mishap Accident-prone highway stretch to be redesigned | Kozhikode News - Times of India Road incidents in India Road safety Collision Disasters in Kerala
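The banked-turn relationship described above can be made quantitative. The following sketch is an illustrative calculation with assumed values (radius, friction coefficient), not measured data for the Vattapara bend; a negative bank angle models a surface that slopes away from the turn, as described for this curve:

```python
import math

def max_safe_speed(radius_m, mu, bank_deg):
    """Maximum speed (m/s) at which a vehicle can round a circular curve
    without sliding outward. Negative bank_deg means adverse camber."""
    g = 9.81  # gravitational acceleration, m/s^2
    t = math.tan(math.radians(bank_deg))
    return math.sqrt(g * radius_m * (t + mu) / (1 - mu * t))

# Assumed values: 50 m curve radius, wet-road friction coefficient 0.4.
for bank in (5, 0, -5):  # properly banked, flat, adversely banked
    v = max_safe_speed(50, 0.4, bank)
    print(f"bank {bank:+d} deg: about {v * 3.6:.0f} km/h")
```

With these assumed numbers, the safe speed drops from roughly 57 km/h on a properly banked curve to about 44 km/h on an adversely banked one, consistent with the hazard described above.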
Vattapara accident zone
[ "Physics" ]
612
[ "Collision", "Mechanics" ]
63,451,675
https://en.wikipedia.org/wiki/Regulation%20of%20artificial%20intelligence
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD. Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is deemed necessary to both foster AI innovation and manage associated risks. Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks. Regulating AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem. Background According to Stanford University's 2023 AI Index, the annual number of bills mentioning "artificial intelligence" passed in 127 surveyed countries jumped from one in 2016 to 37 in 2022. In 2017, Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization." In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI is in its infancy and that it is too early to regulate the technology. Many tech companies oppose harsh regulation of AI: "While some of the companies have said they welcome rules around A.I., they have also argued against tough regulations akin to those being created in Europe." Instead of trying to regulate the technology itself, some scholars suggested developing common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important". Perspectives Regulation is now generally considered necessary to both encourage AI and manage associated risks. Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems, although regulation of artificial superintelligences is also considered.
The basic approach to regulation focuses on the risks and biases of machine-learning algorithms, at the level of the input data, algorithm testing, and decision model. It also focuses on the explainability of the outputs. There have been both hard law and soft law proposals to regulate AI. Some legal scholars have noted that hard law approaches to AI regulation have substantial challenges. Among the challenges, AI technology is rapidly evolving, leading to a "pacing problem" where traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits. Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope. As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because soft laws can be adapted more flexibly to meet the needs of emerging and evolving AI technology and nascent applications. However, soft law approaches often lack substantial enforcement potential. Cason Schmit, Megan Doerr, and Jennifer Wagner proposed the creation of a quasi-governmental regulator by leveraging intellectual property rights (i.e., copyleft licensing) in certain AI objects (i.e., AI models and training datasets) and delegating enforcement rights to a designated enforcement entity. They argue that AI can be licensed under terms that require adherence to specified ethical practices and codes of conduct (e.g., soft law principles). Prominent youth organizations focused on AI, namely Encode Justice, have also issued comprehensive agendas calling for more stringent AI regulations and public-private partnerships. AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values. AI law and regulations have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction. The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, healthcare (especially the concept of a Human Guarantee), the financial sector, robotics, autonomous vehicles, the military and national security, and international law. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 entitled "Being Human in an Age of AI", calling for a government commission to regulate AI.
As a response to the AI control problem Regulation of AI can be seen as a positive social means to manage the AI control problem (the need to ensure long-term beneficial AI), with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism techniques like brain-computer interfaces being seen as potentially complementary. Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into AI safety, together with the possibility of differential intellectual progress (prioritizing protective strategies over risky strategies in AI development) or conducting international mass surveillance to perform AGI arms control. For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a true superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has also been seen as restrictive, carrying a risk of preventing the development of AGI. Global guidance The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development. In 2019, the Panel was renamed the Global Partnership on AI. The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence (2019). The 15 founding members of the Global Partnership on Artificial Intelligence are Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the United States and the UK. As of 2023, the GPAI has 29 members. The GPAI Secretariat is hosted by the OECD in Paris, France. GPAI's mandate covers four themes, two of which are supported by the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, namely, responsible AI and data governance. A corresponding centre of excellence in Paris will support the other two themes on the future of work, and on innovation and commercialization. GPAI also investigated how AI can be leveraged to respond to the COVID-19 pandemic. The OECD AI Principles were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.
At the United Nations (UN), several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics. In partnership with INTERPOL, UNICRI's Centre issued the report AI and Robotics for Law Enforcement in April 2019 and the follow-up report Towards Responsible AI Innovation in May 2020. At the 40th session of UNESCO's General Conference in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence". In pursuit of this goal, UNESCO forums and conferences on AI were held to gather stakeholder views. A draft text of a Recommendation on the Ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and included a call for legislative gaps to be filled. UNESCO tabled the international instrument on the ethics of AI for adoption at its General Conference in November 2021; this was subsequently adopted. While the UN is making progress with the global management of AI, its institutional and legal capability to manage the AGI existential risk is more limited. An initiative of the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, AI for Good is a global platform which aims to identify practical applications of AI to advance the United Nations Sustainable Development Goals and scale those solutions for global impact. It is an action-oriented, global and inclusive United Nations platform fostering the development of AI to positively impact health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities. Recent research has indicated that countries will also begin to use artificial intelligence as a tool for national cyberdefense. AI is a new factor in the cyber arms industry, as it can be used for defense purposes. Therefore, academics urge that nations should establish regulations for the use of AI, similar to how there are regulations for other military industries.
Brazil On September 30, 2021, the Brazilian Chamber of Deputies approved the Brazilian Legal Framework for Artificial Intelligence, Marco Legal da Inteligência Artificial, in regulatory efforts for the development and usage of AI technologies and to further stimulate research and innovation in AI solutions aimed at ethics, culture, justice, fairness, and accountability. This 10-article bill outlines objectives including missions to contribute to the elaboration of ethical principles, promote sustained investments in research, and remove barriers to innovation. Specifically, in article 4, the bill emphasizes the avoidance of discriminatory AI solutions, plurality, and respect for human rights. Furthermore, this act emphasizes the importance of the equality principle in deliberate decision-making algorithms, especially for highly diverse and multiethnic societies like that of Brazil. When the bill was first released to the public, it faced substantial criticism, raising alarm over its inadequate provisions. The underlying issue is that the bill fails to thoroughly and carefully address principles of accountability, transparency, and inclusivity. Article VI establishes subjective liability, meaning that any individual who is harmed by an AI system and wishes to receive compensation must identify the responsible stakeholder and prove that there was a mistake in the machine's life cycle. Scholars emphasize that it is legally unsound to make individuals responsible for proving algorithmic errors, given the high degree of autonomy, unpredictability, and complexity of AI systems. This also drew attention to ongoing issues with face recognition systems in Brazil that have led to unjust arrests by the police, implying that, were the bill adopted, individuals would have to prove and justify such machine errors. The main controversy of this draft bill was directed at three proposed principles. First, the non-discrimination principle suggests that AI must be developed and used in a way that merely mitigates the possibility of abusive and discriminatory practices. Secondly, the pursuit-of-neutrality principle lists recommendations for stakeholders to mitigate biases, but with no obligation to achieve this goal. Lastly, the transparency principle states that a system's transparency is only necessary when there is a high risk of violating fundamental rights. Overall, the Brazilian Legal Framework for Artificial Intelligence lacks binding, obligatory clauses and is instead filled with relaxed guidelines. In fact, experts emphasize that the bill may make accountability for discriminatory AI biases even harder to achieve. Compared to the EU's proposal of extensive risk-based regulations, the Brazilian bill has 10 articles proposing vague and generic recommendations. Compared to the multistakeholder participation approach taken previously in the 2000s when drafting the Brazilian Internet Bill of Rights, Marco Civil da Internet, the Brazilian bill is assessed as significantly lacking in such perspective. Multistakeholderism, more commonly referred to as multistakeholder governance, is defined as the practice of bringing multiple stakeholders to participate in dialogue, decision-making, and implementation of responses to jointly perceived problems. In the context of regulatory AI, this multistakeholder perspective captures the trade-offs and varying perspectives of different stakeholders with specific interests, which helps maintain transparency and broader efficacy.
By contrast, the legislative proposal for AI regulation did not follow a similar multistakeholder approach. Future steps may include expanding upon the multistakeholder perspective. There has been a growing concern about the inapplicability of the framework of the bill, which highlights that a one-size-fits-all solution may not be suitable for the regulation of AI and calls for subjective and adaptive provisions. Canada The Pan-Canadian Artificial Intelligence Strategy (2017) is supported by federal funding of Can$125 million with the objectives of increasing the number of outstanding AI researchers and skilled graduates in Canada, establishing nodes of scientific excellence at the three major AI centres, developing 'global thought leadership' on the economic, ethical, policy and legal implications of AI advances and supporting a national research community working on AI. The Canada CIFAR AI Chairs Program is the cornerstone of the strategy. It benefits from funding of Can$86.5 million over five years to attract and retain world-renowned AI researchers. The federal government appointed an Advisory Council on AI in May 2019 with a focus on examining how to build on Canada's strengths to ensure that AI advancements reflect Canadian values, such as human rights, transparency and openness. The Advisory Council on AI has established a working group on extracting commercial value from Canadian-owned AI and data analytics. In 2020, the federal government and Government of Quebec announced the opening of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, which will advance the cause of responsible development of AI. In June 2022, the government of Canada started a second phase of the Pan-Canadian Artificial Intelligence Strategy. In November 2022, Canada introduced the Digital Charter Implementation Act (Bill C-27), which proposes three acts that have been described as a holistic package of legislation for trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence & Data Act (AIDA). Morocco In Morocco, a new legislative proposal has been put forward by a coalition of political parties in Parliament to establish a National Agency for Artificial Intelligence. This agency is intended to regulate AI technologies, enhance collaboration with international entities in the field, and increase public awareness of both the possibilities and risks associated with AI. China The regulation of AI in China is mainly governed by the State Council of the People's Republic of China's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Chinese Communist Party and the State Council of the PRC urged the governing bodies of China to promote the development of AI up to 2030. Regulation of the issues of ethical and legal support for the development of AI is accelerating, and policy ensures state control over Chinese companies and valuable data, including the storage of data on Chinese users within the country and the mandatory use of the People's Republic of China's national standards for AI, including for big data, cloud computing, and industrial software. In 2021, China published ethical guidelines for the use of AI, which state that researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.
In 2023, China introduced Interim Measures for the Management of Generative AI Services. Council of Europe The Council of Europe (CoE) is an international organization that promotes human rights, democracy and the rule of law. It comprises 46 member states, including all 29 signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence. The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe's aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions". The large number of relevant documents identified by the CoE includes guidelines, charters, papers, reports and strategies. The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states. In 2019, the Council of Europe initiated a process to assess the need for legally binding regulation of AI, focusing specifically on its implications for human rights and democratic values. Negotiations on a treaty began in September 2022, involving the 46 member states of the Council of Europe, as well as Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay, together with the European Union. On 17 May 2024, the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" was adopted. It was opened for signature on 5 September 2024. Although developed by a European organisation, the treaty is open for accession by states from other parts of the world. The first ten signatories were: Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union. European Union The EU is one of the largest jurisdictions in the world and plays an active role in the global regulation of digital technology through the GDPR, the Digital Services Act, and the Digital Markets Act. For AI in particular, the Artificial Intelligence Act was regarded in 2023 as the most far-reaching regulation of AI worldwide. Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent. The European Union is guided by a European Strategy on Artificial Intelligence, supported by a High-Level Expert Group on Artificial Intelligence. In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI), following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019. The EU Commission's High-Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020, the EU Commission sought views on a proposal for AI-specific legislation, and that process is ongoing. On February 19, 2020, the European Commission published its White Paper on Artificial Intelligence – A European approach to excellence and trust. The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The 'ecosystem of trust' outlines the EU's approach to a regulatory framework for AI.
In its proposed approach, the Commission distinguishes AI applications based on whether they are 'high-risk' or not. Only high-risk AI applications should be in the scope of a future EU regulatory framework. An AI application is considered high-risk if it operates in a risky sector (such as healthcare, transport or energy) and is "used in such a manner that significant risks are likely to arise". For high-risk AI applications, the requirements mainly concern "training data", "data and record-keeping", "information to be provided", "robustness and accuracy", and "human oversight". There are also requirements specific to certain usages such as remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments, which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework. A January 2021 draft was leaked online on April 14, 2021, before the Commission presented its official "Proposal for a Regulation laying down harmonised rules on artificial intelligence" a week later. Shortly after, the Artificial Intelligence Act (also known as the AI Act) was formally proposed on this basis. This proposal includes a refinement of the 2020 risk-based approach with, this time, four risk categories: "minimal", "limited", "high" and "unacceptable". The proposal has been severely critiqued in the public debate. Academics have expressed concerns about various unclear elements in the proposal – such as the broad definition of what constitutes AI – and feared unintended legal implications, especially for vulnerable groups such as patients and migrants. The risk category "general-purpose AI" was added to the AI Act to account for versatile models like ChatGPT, which did not fit the application-based regulation framework. Unlike the other risk categories, general-purpose AI models can be regulated based on their capabilities, not just their uses. Weaker general-purpose AI models are subject to transparency requirements, while those considered to pose "systemic risks" (notably those trained using computational capabilities exceeding 10²⁵ floating-point operations, or FLOPs, a threshold illustrated in the sketch below) must also undergo a thorough evaluation process. A subsequent version of the AI Act was finally adopted in May 2024. The AI Act will be progressively enforced. Emotion recognition and real-time remote biometric identification will be prohibited, with some exemptions, such as for law enforcement. The European Union's AI Act has created a regulatory framework with significant implications globally. This legislation introduces a risk-based approach to categorizing AI systems, focusing on high-risk applications like healthcare, education, and public safety. It requires organizations to ensure transparency, data governance, and human oversight in their AI solutions. While this aims to foster ethical AI use, the stringent requirements could increase compliance costs and delay technology deployment, impacting innovation-driven industries.
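The systemic-risk compute threshold can be illustrated numerically. The sketch below is a toy estimate, not an official compliance method: it assumes the common approximation that training a transformer costs roughly 6 floating-point operations per parameter per training token, and flags models whose estimated training compute exceeds the 10²⁵ FLOP threshold named in the AI Act.

```python
AI_ACT_SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold in the AI Act

def estimated_training_flops(n_parameters, n_training_tokens):
    """Rough transformer training cost: about 6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters, n_training_tokens):
    return estimated_training_flops(n_parameters, n_training_tokens) > AI_ACT_SYSTEMIC_RISK_FLOPS

# Hypothetical models: (name, parameters, training tokens)
for name, params, tokens in [
    ("mid-size open model", 7e9, 2e12),     # ~8.4e22 FLOPs: below threshold
    ("frontier-scale model", 2e12, 10e12),  # ~1.2e26 FLOPs: above threshold
]:
    print(name, "->", presumed_systemic_risk(params, tokens))
```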
Observers have expressed concerns about the multiplication of legislative proposals under the von der Leyen Commission. The speed of the legislative initiatives is partially driven by the political ambitions of the EU and could put at risk the digital rights of European citizens, including rights to privacy, especially in the face of uncertain guarantees of data protection through cyber security. Among the stated guiding principles in the variety of legislative proposals in the area of AI under the von der Leyen Commission are the objectives of strategic autonomy and the concept of digital sovereignty. On May 29, 2024, the European Court of Auditors published a report stating that EU measures were not well coordinated with those of EU countries; that the monitoring of investments was not systematic; and that stronger governance was needed. Germany In November 2020, DIN, DKE and the German Federal Ministry for Economic Affairs and Energy published the first edition of the "German Standardization Roadmap for Artificial Intelligence" (NRM KI) and presented it to the public at the Digital Summit of the Federal Government of Germany. NRM KI describes requirements for future regulations and standards in the context of AI. The implementation of its recommendations for action is intended to help strengthen the German economy and science in the international competition in the field of artificial intelligence and create innovation-friendly conditions for this emerging technology. The first edition is a 200-page document written by 300 experts. The second edition of the NRM KI was published to coincide with the German government's Digital Summit on December 9, 2022; DIN coordinated more than 570 participating experts from a wide range of fields from science, industry, civil society and the public sector. The second edition is a 450-page document. On the one hand, NRM KI covers the focus topics in terms of applications (e.g. medicine, mobility, energy & environment, financial services, industrial automation) and fundamental issues (e.g. AI classification, security, certifiability, socio-technical systems, ethics). On the other hand, it provides an overview of the central terms in the field of AI and its environment across a wide range of interest groups and information sources. In total, the document covers 116 standardisation needs and provides six central recommendations for action. G7 On 30 October 2023, members of the G7 subscribed to eleven guiding principles for the design, production and implementation of advanced artificial intelligence systems, as well as a voluntary Code of Conduct for artificial intelligence developers, in the context of the Hiroshima Process. The agreement was applauded by Ursula von der Leyen, who found in it the principles of the EU's AI Act, then being finalized. Israel On October 30, 2022, pursuant to government resolution 212 of August 2021, the Israeli Ministry of Innovation, Science and Technology released its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation. By December 2023, the Ministry of Innovation and the Ministry of Justice had published a joint AI regulation and ethics policy paper, outlining several AI ethical principles and a set of recommendations, including opting for sector-based regulation, a risk-based approach, a preference for "soft" regulatory tools, and maintaining consistency with existing global regulatory approaches to AI.
Italy In October 2023, the Italian privacy authority approved a regulation that provides three principles for therapeutic decisions taken by automated systems: transparency of decision-making processes, human supervision of automated decisions and algorithmic non-discrimination. New Zealand No AI-specific legislation exists, but AI usage is regulated by existing laws, including the Privacy Act, the Human Rights Act, the Fair Trading Act and the Harmful Digital Communications Act. In 2020, the New Zealand Government sponsored a World Economic Forum pilot project titled "Reimagining Regulation for the Age of AI", aimed at creating regulatory frameworks around AI. The same year, the Privacy Act was updated to regulate the use of New Zealanders' personal information in AI. In 2023, the Privacy Commissioner released guidance on using AI in accordance with information privacy principles. In February 2024, the Attorney-General and Technology Minister announced the formation of a Parliamentary cross-party AI caucus, and that a framework for the Government's use of AI was being developed. She also announced that no extra regulation was planned at that stage. Philippines In 2023, a bill was filed in the Philippine House of Representatives proposing the establishment of the Artificial Intelligence Development Authority (AIDA), which would oversee the development of and research into artificial intelligence and also act as a watchdog against crimes committed using AI. In 2024, the Commission on Elections also considered banning the use of AI and deepfakes in campaigning, looking to implement regulations that would apply as early as the 2025 general elections. Spain In 2018, the Spanish Ministry of Science, Innovation and Universities approved an R&D Strategy on Artificial Intelligence. United Kingdom The UK supported the application and development of AI in business via the Digital Economy Strategy 2015–2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, the Department for Digital, Culture, Media and Sport advised on data ethics and the Alan Turing Institute provided guidance on the responsible design and implementation of AI systems. In terms of cyber security, in 2020 the National Cyber Security Centre issued guidance on 'Intelligent Security Tools'. The following year, the UK published its 10-year National AI Strategy, which describes actions to assess long-term AI risks, including AGI-related catastrophic risks. In March 2023, the UK released the white paper A pro-innovation approach to AI regulation. This white paper presents general AI principles, but leaves significant flexibility to existing regulators in how they adapt these principles to specific areas such as transport or financial markets. In November 2023, the UK hosted the first AI Safety Summit, with Prime Minister Rishi Sunak aiming to position the UK as a leader in AI safety regulation. During the summit, the UK created an AI Safety Institute, as an evolution of the Frontier AI Taskforce led by Ian Hogarth. The institute was notably assigned the responsibility of advancing the safety evaluations of the world's most advanced AI models, also called frontier AI models. The UK government indicated its reluctance to legislate early, arguing that it may reduce the sector's growth and that laws might be rendered obsolete by further technological progress.
United States Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts. 2016–2017 As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titled Preparing for the Future of Artificial Intelligence, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI technologies with few restrictions. It is stated within the report that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....". These risks would be the principal reason to create any form of regulation, granted that any existing regulation would not apply to AI technology. 2018–2019 The first main report was the National Strategic Research and Development Plan for Artificial Intelligence. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States. On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. In response, the National Institute of Standards and Technology has released a position paper, and the Defense Innovation Board has issued recommendations on the ethical use of AI. A year later, the administration called for comments on regulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications. Other specific agencies working on the regulation of AI include the Food and Drug Administration, which has created pathways to regulate the incorporation of AI in medical imaging. The National Science and Technology Council also published the National Artificial Intelligence Research and Development Strategic Plan, which received public scrutiny and recommendations to further improve it towards enabling Trustworthy AI. 2021–2022 In March 2021, the National Security Commission on Artificial Intelligence released its final report. In the report, the commission stated that "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities.
Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values." In June 2022, Senators Rob Portman and Gary Peters introduced the Global Catastrophic Risk Mitigation Act. The bipartisan bill "would also help counter the risk of artificial intelligence... from being abused in ways that may pose a catastrophic risk". On October 4, 2022, President Joe Biden unveiled a new AI Bill of Rights, which outlines five protections Americans should have in the AI age: 1. Safe and Effective Systems, 2. Algorithmic Discrimination Protection, 3. Data Privacy, 4. Notice and Explanation, and 5. Human Alternatives, Consideration, and Fallback. The blueprint had been introduced in October 2021 by the Office of Science and Technology Policy (OSTP), a US government body that advises the president on science and technology. 2023 The New York City Bias Audit Law (Local Law 144), enacted by the NYC Council in November 2021, was originally due to come into effect on 1 January 2023, but enforcement was pushed back due to the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules to clarify the requirements of the legislation. It eventually became effective on July 5, 2023. From that date, companies operating and hiring in New York City have been prohibited from using automated tools to hire candidates or promote employees, unless the tools have been independently audited for bias. In July 2023, the Biden–Harris Administration secured voluntary commitments from seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to manage the risks associated with AI. The companies committed to ensure AI products undergo both internal and external security testing before public release; to share information on the management of AI risks with the industry, governments, civil society, and academia; to prioritize cybersecurity and protect proprietary AI system components; to develop mechanisms to inform users when content is AI-generated, such as watermarking; to publicly report on their AI systems' capabilities, limitations, and areas of use; to prioritize research on societal risks posed by AI, including bias, discrimination, and privacy concerns; and to develop AI systems to address societal challenges, ranging from cancer prevention to climate change mitigation. In September 2023, eight additional companies – Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI – subscribed to these voluntary commitments. In October 2023, the Biden administration signaled that it would release an executive order leveraging the federal government's purchasing power to shape AI regulations, hinting at a proactive governmental stance in regulating AI technologies. On October 30, 2023, President Biden released this Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order addresses a variety of issues, such as focusing on standards for critical infrastructure, AI-enhanced cybersecurity, and federally funded biological synthesis projects. 
The Executive Order provides the authority to various agencies and departments of the US government, including the Energy and Defense departments, to apply existing consumer protection laws to AI development. The Executive Order builds on the Administration's earlier agreements with AI companies to establish new initiatives to "red-team", or stress-test, AI dual-use foundation models, especially those that have the potential to pose security risks, with data and results shared with the federal government. The Executive Order also recognizes AI's social challenges, and calls for companies building AI dual-use foundation models to be wary of these societal problems. For example, the Executive Order states that AI should not "worsen job quality", and should not "cause labor-force disruptions". Additionally, Biden's Executive Order mandates that AI must "advance equity and civil rights", and cannot disadvantage marginalized groups. It also calls for foundation models to include "watermarks" to help the public discern between human and AI-generated content, which has raised controversy and criticism from deepfake detection researchers. 2024 In February 2024, Senator Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to the California legislature. The bill drew heavily on the Biden executive order. It had the goal of reducing catastrophic risks by mandating safety tests for the most powerful AI models. If passed, the bill would have also established a publicly funded cloud computing cluster in California. On September 29, Governor Gavin Newsom vetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses. On March 21, 2024, the State of Tennessee enacted legislation called the ELVIS Act, aimed specifically at audio deepfakes and voice cloning. It was the first legislation enacted in the nation aimed at regulating AI simulation of image, voice, and likeness. The bill passed unanimously in the Tennessee House of Representatives and Senate. Its supporters hoped the legislation's success would inspire similar action in other states, contribute to a unified approach to copyright and privacy in the digital age, and reinforce the importance of safeguarding artists' rights against unauthorized use of their voices and likenesses. On March 13, 2024, Utah Governor Spencer Cox signed S.B. 149, the "Artificial Intelligence Policy Act". The legislation took effect on May 1, 2024. It establishes liability, notably for companies that do not disclose their use of generative AI when required by state consumer protection laws, or when users commit criminal offenses using generative AI. It also creates the Office of Artificial Intelligence Policy and the Artificial Intelligence Learning Laboratory Program. Regulation of fully autonomous weapons Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons. Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE was adopted in 2018. 
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation. The possibility of a moratorium or preemptive ban on the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots – a coalition of non-governmental organizations. The US government maintains that current international humanitarian law is capable of regulating the development or use of LAWS. The Congressional Research Service indicated in 2023 that the US does not have LAWS in its inventory, but that its policy does not prohibit their development and employment. See also AI alignment Algorithmic accountability Artificial intelligence Artificial intelligence and elections Artificial intelligence arms race Artificial intelligence in government Ethics of artificial intelligence Government by algorithm Legal informatics Regulation of algorithms References Existential risk from artificial general intelligence Computer law Regulation of technologies Regulation of artificial intelligence Politics and technology AI safety
Regulation of artificial intelligence
[ "Technology", "Engineering" ]
8,518
[ "Existential risk from artificial general intelligence", "Regulation of artificial intelligence", "Safety engineering", "Computer law", "AI safety", "Computing and society" ]
63,455,108
https://en.wikipedia.org/wiki/List%20of%20gene%20therapies
This article contains a list of commercially available gene therapies. Gene therapies Alipogene tiparvovec (Glybera): AAV-based treatment for lipoprotein lipase deficiency (no longer commercially available) Axicabtagene ciloleucel (Yescarta): treatment for large B-cell lymphoma Beremagene geperpavec (Vyjuvek): treatment of wounds in dystrophic epidermolysis bullosa Betibeglogene autotemcel (Zynteglo): treatment for beta thalassemia Brexucabtagene autoleucel (Tecartus): treatment for mantle cell lymphoma and acute lymphoblastic leukemia Cambiogenplasmid (Neovasculgen): plasmid encoding vascular endothelial growth factor (VEGF) for the treatment of peripheral artery disease Ciltacabtagene autoleucel (Carvykti): treatment for multiple myeloma Delandistrogene moxeparvovec (Elevidys): treatment for Duchenne muscular dystrophy Elivaldogene autotemcel (Skysona): treatment for cerebral adrenoleukodystrophy Etranacogene dezaparvovec (Hemgenix): AAV-based treatment for hemophilia B Exagamglogene autotemcel (Casgevy): treatment for sickle cell disease Gendicine: treatment for head and neck squamous cell carcinoma Idecabtagene vicleucel (Abecma): treatment for multiple myeloma Lovotibeglogene autotemcel (Lyfgenia): treatment for sickle cell disease Nadofaragene firadenovec (Adstiladrin): treatment for bladder cancer Obecabtagene autoleucel (Aucatzyl): treatment of acute lymphoblastic leukemia Onasemnogene abeparvovec (Zolgensma): AAV-based treatment for spinal muscular atrophy Strimvelis: treatment for adenosine deaminase deficiency (ADA-SCID) Talimogene laherparepvec (Imlygic): treatment for melanoma in patients who have recurring skin lesions Tisagenlecleucel (Kymriah): treatment for B cell lymphoblastic leukemia Valoctocogene roxaparvovec (Roctavian): treatment for hemophilia A Voretigene neparvovec (Luxturna): AAV-based treatment for Leber congenital amaurosis See also FDA-approved CAR T cell therapies References External links Applied genetics Bioethics Biotechnology Medical genetics Gene therapies Gene delivery Genetic engineering
List of gene therapies
[ "Chemistry", "Technology", "Engineering", "Biology" ]
594
[ "Bioethics", "Genetics techniques", "Biological engineering", "Molecular-biology-related lists", "Genetic engineering", "Biotechnology", "Molecular biology techniques", "Gene therapy", "nan", "Molecular biology", "Ethics of science and technology", "Gene delivery" ]
63,455,582
https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Gans%20approximation
Rayleigh–Gans approximation, also known as Rayleigh–Gans–Debye approximation and Rayleigh–Gans–Born approximation, is an approximate solution to light scattering by optically soft particles. Optical softness implies that the relative refractive index of the particle is close to that of the surrounding medium. The approximation holds for particles of arbitrary shape that are relatively small but can be larger than Rayleigh scattering limits. The theory was derived by Lord Rayleigh in 1881 and was applied to homogeneous spheres, spherical shells, radially inhomogeneous spheres and infinite cylinders. Peter Debye contributed to the theory in 1915. The theory for a homogeneous sphere was rederived by Richard Gans in 1925. The approximation is analogous to the Born approximation in quantum mechanics. Theory The validity conditions for the approximation can be denoted as $|n-1| \ll 1$ and $kd\,|n-1| \ll 1$, where $k$ is the wavevector of the light ($k = 2\pi/\lambda$), $d$ refers to the linear dimension of the particle, and $n$ is the complex refractive index of the particle. The first condition allows for a simplification in expressing the material polarizability in the derivation below. The second condition is a statement of the Born approximation, that is, that the incident field is not greatly altered within one particle so that each volume element is considered to be illuminated by an intensity and phase determined only by its position relative to the incident wave, unaffected by scattering from other volume elements. The particle is divided into small volume elements, which are treated as independent Rayleigh scatterers. For inbound light with s polarization, the scattering amplitude contribution from each volume element $dV$ is given as $dS_1 = \frac{3ik^3}{4\pi}\left(\frac{n^2-1}{n^2+2}\right) e^{i\delta}\, dV$, where $\delta$ denotes the phase difference due to each individual element, and the fraction in parentheses is the electric polarizability as found from the refractive index using the Clausius–Mossotti relation. Under the condition $(n-1) \ll 1$, this factor can be approximated as $2(n-1)/3$. The phases affecting the scattering from each volume element depend only on their positions with respect to the incoming wave and the scattering direction. Integrating, the scattering amplitude function becomes $S_1(\theta,\phi) = \frac{ik^3(n-1)}{2\pi} \int e^{i\delta}\, dV$, in which only the final integral, which describes the interfering phases contributing to the scattering direction $(\theta, \phi)$, remains to be solved according to the particular geometry of the scatterer. Calling $V$ the entire volume of the scattering object, over which this integration is performed, one can write the scattering parameter for scattering with the electric field polarization normal to the plane of incidence (s polarization) as $S_1(\theta,\phi) = \frac{ik^3(n-1)V}{2\pi}\, R(\theta,\phi)$ and for polarization in the plane of incidence (p polarization) as $S_2(\theta,\phi) = \frac{ik^3(n-1)V}{2\pi}\, R(\theta,\phi)\cos\theta$, where $R(\theta,\phi)$ denotes the "form factor" of the scatterer: $R(\theta,\phi) = \frac{1}{V}\int e^{i\delta}\, dV$. In order to find intensities only, we can define $P$ as the squared magnitude of the form factor: $P(\theta,\phi) = \left|R(\theta,\phi)\right|^2$. Then the scattered radiation intensity, relative to the intensity of the incident wave, for each polarization can be written as $I_1/I_0 = \frac{k^4(n-1)^2V^2}{4\pi^2 r^2}\, P(\theta,\phi)$ and $I_2/I_0 = \frac{k^4(n-1)^2V^2}{4\pi^2 r^2}\, P(\theta,\phi)\cos^2\theta$, where $r$ is the distance from the scatterer to the observation point. Per the optical theorem, the absorption cross section is given as $C_{abs} = 2kV\,|\mathrm{Im}(n)|$, which is independent of the polarization. Applications The Rayleigh–Gans approximation has been applied to the calculation of the optical cross sections of fractal aggregates. The theory was also applied to anisotropic spheres for nanostructured polycrystalline alumina and turbidity calculations on biological structures such as lipid vesicles and bacteria. A numerical sketch of the sphere form factor is given below. 
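For a homogeneous sphere of radius a, the form factor integral above evaluates to the classical result R(u) = (3/u^3)(sin u − u cos u) with u = 2ka sin(θ/2). The following Python sketch evaluates this form factor and the relative scattered intensity for s polarization; the wavelength, radius, and refractive index are illustrative values only, not taken from any particular study.

    import numpy as np

    def sphere_form_factor(theta, k, a):
        # R(u) = (3/u^3)(sin u - u cos u), with u = 2 k a sin(theta/2);
        # a small-u series expansion avoids the 0/0 limit at theta = 0.
        u = 2.0 * k * a * np.sin(theta / 2.0)
        small = u < 1e-4
        u_safe = np.where(small, 1.0, u)  # placeholder keeps the division finite
        R = (3.0 / u_safe**3) * (np.sin(u_safe) - u_safe * np.cos(u_safe))
        return np.where(small, 1.0 - u**2 / 10.0, R)

    wavelength = 500e-9                  # illustrative: green light, in metres
    k = 2 * np.pi / wavelength           # wavevector magnitude
    a, n, r = 200e-9, 1.05, 1.0          # radius, relative refractive index, observer distance
    V = (4.0 / 3.0) * np.pi * a**3

    theta = np.linspace(0.0, np.pi, 181)
    P = sphere_form_factor(theta, k, a) ** 2                # P = |R|^2
    I_rel = k**4 * (n - 1)**2 * V**2 / (4 * np.pi**2 * r**2) * P
    print("forward-to-backscatter intensity ratio:", I_rel[0] / I_rel[-1])

The strong forward peak this produces is characteristic of the approximation: near θ = 0 the phases δ interfere constructively regardless of particle shape.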
A nonlinear Rayleigh–Gans–Debye model was used to investigate second-harmonic generation in malachite green molecules adsorbed on polystyrene particles. See also Mie scattering Anomalous diffraction theory Discrete dipole approximation Gans theory References Scattering, absorption and radiative transfer (optics) Radio frequency propagation X-ray scattering
Rayleigh–Gans approximation
[ "Physics", "Chemistry" ]
754
[ "Physical phenomena", " absorption and radiative transfer (optics)", "Spectrum (physical sciences)", "Radio frequency propagation", "X-ray scattering", "Electromagnetic spectrum", "Waves", "Scattering" ]
56,494,038
https://en.wikipedia.org/wiki/Snake%20Projection
The Snake Projection is a continuous map projection typically used as the planar coordinate system for realizing low distortion throughout long linear engineering projects. Details The Snake Projection was originally developed by University College London and Network Rail to provide a continuous low-distortion projection for the West Coast Mainline infrastructure works. The parameters defining each Snake Projection are tailored to the specific project; the most typical use is with large-scale linear engineering projects such as rail infrastructure, but the projection is equally applicable to any application requiring a low-distortion grid along a linear route (for example pipelines and roads). The name of the projection is derived from the sinuous, snake-like nature of the projects it may be designed for. Distance distortion in a typical Snake Projection is minimal over the whole route within approximately 20 kilometres of the centre line. The principal advantage of the projection is that, for the corridor defining the design space, distances measured on the ground have a nearly one-to-one relationship with distances in coordinate space (i.e. no scale factor need be applied to convert between distances in grid and distances on the ground). The length of the applicable corridor is variable on a project basis; however, when required the projection can extend over several hundred kilometres and still achieve grid distortion of less than 20 parts per million along the route. The main disadvantage is that away from the design corridor the distortion of the projection is not controlled. The Snake Projection is suited to engineering purposes due to its low distortion characteristics. An example of its differentiation from mapping grids is the London to Birmingham section of the HS2 rail line, which is 60 m longer when measured in a grid that accurately represents ground distance than in the national mapping coordinate system, British National Grid; a small worked example of these magnitudes follows below. Usage The Snake Projection is the engineering coordinate system used for a significant proportion of primary rail routes in the UK, including that of the HS2 London to Birmingham high speed line. For the London to Glasgow West Coast Main Line the distortion in the Snake Projection used is no greater than 20 parts per million within 5 kilometres of either side of the track. Implementation The Snake Projection algorithm converts between geographical and grid coordinates; however, the method of technical implementation can vary. One method of implementing a Snake Projection is to define it using an NTv2 geodetic transformation coupled with a standard parameterised map projection (such as Transverse Mercator); this approach is increasing in popularity due to better compatibility with CAD and GIS software. The global EPSG geodetic coordinate system database features several Snake Projection definitions using the NTv2 approach. Other implementations include those published through the SnakeGrid organisation. See also List of map projections Surveying References Map projections Geodesy Rail infrastructure Surveying Civil engineering
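To make the distortion figures above concrete, the Python sketch below converts parts-per-million grid distortion into a grid-versus-ground distance difference. The 20 ppm figure comes from the text above; the several-hundred-ppm value is only a back-computed illustration consistent with the quoted 60 m difference over roughly 190 km, not a published specification of British National Grid.

    def grid_ground_difference_m(route_km: float, distortion_ppm: float) -> float:
        """Difference in metres between grid distance and ground distance."""
        return route_km * 1000.0 * distortion_ppm * 1e-6

    # Snake Projection: 20 ppm held over a 300 km route -> 6 m accumulated difference
    print(grid_ground_difference_m(300.0, 20.0))
    # Illustration only: ~316 ppm over ~190 km reproduces a difference of about 60 m
    print(grid_ground_difference_m(190.0, 316.0))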
Snake Projection
[ "Mathematics", "Engineering" ]
541
[ "Applied mathematics", "Map projections", "Construction", "Surveying", "Civil engineering", "Coordinate systems", "Geodesy" ]
56,495,502
https://en.wikipedia.org/wiki/Charles%20Meneveau
Charles Meneveau (born 1960) is a French-Chilean-born American fluid dynamicist, known for his work on turbulence, including turbulence modeling and computational fluid dynamics. Charles Meneveau, the Louis M. Sardella Professor in Mechanical Engineering and an associate director of the Institute for Data Intensive Engineering and Science (IDIES) at the Johns Hopkins University, focuses his research on understanding and modeling hydrodynamic turbulence, and on complexity in fluid mechanics in general. He combines computational, theoretical and experimental tools for his research, with an emphasis on the multiscale aspects of turbulence, using tools such as subgrid-scale modeling, downscaling techniques, and fractal geometry, with applications to Large Eddy Simulation (LES). He pioneered the use of the Lagrangian dynamic procedure for sub-grid scale modeling in large-eddy simulation (LES) of turbulence. His recent work includes the use of LES for wind-energy-related applications and the development of the Johns Hopkins Turbulence Database for sharing large-scale datasets from high-fidelity computational fluid dynamics calculations. Education 1989: Ph.D. in mechanical engineering, Yale University 1988: Master of Philosophy, Yale University 1987: Master of Science, Yale University 1985: B.S. in mechanical engineering, Universidad Técnica Federico Santa María, Valparaíso (Chile) His Ph.D. advisor was K. R. Sreenivasan and his thesis was on the multi-fractal nature of small-scale turbulence. At Yale he was also informally co-advised by B. B. Mandelbrot. Career and research Meneveau's postdoctoral position was at the Stanford University/NASA-Ames Center for Turbulence Research. He has been on the faculty of the Johns Hopkins University since 1990. His main appointment is in the Department of Mechanical Engineering with secondary appointments in the Departments of Environmental Health and Engineering and Physics and Astronomy. Professor Meneveau’s research is focused on understanding and modeling hydrodynamic turbulence, and complexity in fluid mechanics in general. Special emphasis is placed on the multiscale aspects of turbulence, using tools such as subgrid-scale modeling, downscaling techniques, and fractal geometry. Applications of the results to Large Eddy Simulation (LES) have facilitated applications of LES to engineering, environmental and geophysical flow phenomena. Currently, Meneveau is focused on applications of LES to wind energy and wind farm fluid dynamics, on developing advanced wall models for LES, on modeling oil dispersion in the ocean, and on building “big-data” tools to share the very large data sets that arise in computational fluid dynamics with broad constituencies of scientists and engineers around the world. Among Meneveau’s main contributions are advances to turbulence modeling and large eddy simulations. The advances were made possible by elucidating the properties of the small-scale motions in turbulent flows and applying the new insights to the development of advanced subgrid-scale models, such as the Lagrangian dynamic model. This model has been implemented in various research and open source CFD codes (e.g. OpenFoam) and expanded the applicability of Large Eddy Simulations to complex-geometry flows of engineering and environmental interest, where prior models could not be used. Among the application areas of Large Eddy Simulation being pursued in Meneveau’s group is the study of complex flows in large wind farms. 
Using the improved simulation tools as well as wind tunnel tests, Meneveau and his colleagues identified the important process of vertical entrainment of mean-flow kinetic energy into an array of wind turbines. This research has clarified the mechanisms limiting wind plant performance at a time when there is enormous growth in wind farms. The research has led to new engineering models that will allow for better-designed wind farms, thus increasing their economic benefit and helping to reduce greenhouse gas emissions from fossil fuels. Meneveau has participated in efforts to democratize access to valuable “big data” in turbulence. As a deputy director of JHU’s Institute for Data Intensive Engineering and Science, he worked with a team of computer scientists, applied mathematicians, astrophysicists, and fluid dynamicists that built the JHTDB (Johns Hopkins Turbulence Databases). This open numerical laboratory provides researchers from around the world with user-friendly access to large data sets arising from Direct Numerical Simulations of various types of turbulent flows. To date, hundreds of researchers worldwide have used the data, and flow data at over one hundred trillion points have been sampled from the database. The system has demonstrated how “big data” resulting from large world-class numerical simulations can be shared with many researchers who lack the massive supercomputing resources needed to generate such data. Meneveau also has performed groundbreaking research on understanding several multiscale aspects of turbulence. As part of his doctoral work at Yale in the late 1980s, Meneveau and his advisor Prof. K. R. Sreenivasan established the fractal and multifractal theory for turbulent flows and confirmed the theory using experiments. Interfaces in turbulence were shown to have a fractal dimension of nearly 7/3; the excess of 1/3 above the value of two that holds for smooth surfaces could be related to the classic Kolmogorov theory. A universal multifractal spectrum was also established, leading to a simple cascade model, which has since been applied to many other physical, biological and socio-economic systems. Later, as a postdoc at Stanford University’s Center for Turbulence Research under the guidance of Prof. P. Moin, Meneveau pioneered the application of orthogonal wavelet analysis to turbulence, introducing the concept of the wavelet spectrum and other scale-dependent statistical measures of variability. Awards, honors, societies and journal editorships Awards 2021: Recipient, 2021 Fluid Dynamics Award from the American Institute of Aeronautics and Astronautics (AIAA), "for advancing both the theoretical and practical understanding of turbulence through groundbreaking modeling techniques and applications of large-eddy simulation." 2018: Elected Member, National Academy of Engineering (NAE), “for contributions to turbulence small-scale dynamics, large-eddy simulations, wind farm fluid dynamics, and leadership in the fluid dynamics community”. 2016: Awarded honorary doctorate from the Danish Technical University, Doctor Technices, Honoris Causa, for “Outstanding and highly innovative scientific achievements in fluid dynamics, particularly for his work on turbulence and atmospheric physics and its applications to wind energy”. 
2014–2015: Midwest Mechanics Lecturer 2012–2013: Fulbright Scholar, US-Australia Fulbright Scholarship 2012: Stanley Corrsin Lecturer, Johns Hopkins University 2011: First recipient of the Stanley Corrsin Award from the American Physical Society, citation: “For his innovative use of experimental data and turbulence theory in the development of advanced models for large-eddy simulations, and for the application of these models to environmental, geophysical and engineering applications.” 2005: Foreign corresponding member of the Chilean Academy of Sciences 2005: Appointed to the Louis M. Sardella Professorship in Mechanical Engineering 2004: UCAR Outstanding Publication Award for co-authorship of the paper by Horst et al. that appeared in J. Atmospheric Science 2003: Johns Hopkins University Alumni Association Excellence in Teaching Award 2001: François N. Frenkiel Award for Fluid Mechanics, American Physical Society 1989: Henry P. Becton Prize for Excellence in Research, Yale University 1985: Premio Federico Santa María, UTFSM Valparaíso, Chile Societies American Academy of Mechanics, Fellow. American Society of Mechanical Engineers, Fellow. American Physical Society, Fellow. Pi Tau Sigma, Honorary Member. American Geophysical Union, Member. American Institute for Aeronautics and Astronautics, Senior Member. Editorships 2010–Present: Deputy editor, Journal of Fluid Mechanics 2019: Chair, American Physical Society, Division of Fluid Dynamics 2008–Present: Key participant in the development and maintenance of the JHTDB (Johns Hopkins Turbulence Databases) open numerical laboratory 2003–2015: Editor-in-chief, Journal of Turbulence 2005–2010: Associate editor, Journal of Fluid Mechanics 2005–2010: Member, editorial committee, Annual Review of Fluid Mechanics 2001–Present: Member, advisory board, Theoretical & Computational Fluid Dynamics 2001–2003: Associate editor, Physics of Fluids 2003: Guest associate editor, Annual Review of Fluid Mechanics Journal publications Google Scholar page References External links Biography at Johns Hopkins University 1960 births Living people Chilean scientists Yale University alumni Johns Hopkins University faculty 21st-century American physicists Fluid dynamicists Members of the United States National Academy of Engineering Fellows of the American Physical Society
Charles Meneveau
[ "Chemistry" ]
1,746
[ "Fluid dynamicists", "Fluid dynamics" ]
67,763,627
https://en.wikipedia.org/wiki/Enterocin
Enterocin and its derivatives are bacteriocins synthesized by lactic acid bacteria of the genus Enterococcus. This class of polyketide antibiotics is effective against foodborne pathogens including Listeria monocytogenes and Bacillus species. Because enterocin is degraded by proteolytic enzymes in the gastrointestinal tract, it can be used to control foodborne pathogens in foods intended for human consumption. History Enterocin was discovered in soil and marine Streptomyces strains as well as in marine ascidians of the genus Didemnum, and it has also been found in the mangrove strain Streptomyces qinglanensis and in Salinispora pacifica. Total synthesis The total synthesis of enterocin has been reported. Biosynthesis Enterocin has a caged, tricyclic, nonaromatic core whose formation involves a flavoenzyme (EncM) catalyzed Favorskii-like rearrangement of a poly(beta-carbonyl) intermediate. Studies on enterocin have shown that it is biosynthesized via a type II polyketide synthase (PKS) pathway, starting from a unit derived from phenylalanine or from activation of benzoic acid, followed by the EncM-catalyzed rearrangement. The enzyme EncN catalyzes the ATP-dependent transfer of the benzoate to EncC, the acyl carrier protein. EncC transfers the aromatic unit to EncA-EncB, the ketosynthase, in preparation for malonylation via FabD, the malonyl-CoA:ACP transacylase. A Claisen condensation occurs between the benzoyl and malonyl groups and is repeated six more times; after reaction with EncD, a ketoreductase, the intermediate undergoes the EncM-catalyzed oxidative rearrangement to form the enterocin tricyclic core. Further reaction with the O-methyltransferase EncK and the cytochrome P450 hydroxylase EncR yields enterocin. References Antibiotics Oxygen heterocycles Lactones Heterocyclic compounds with 3 rings Methoxy compounds
Enterocin
[ "Biology" ]
464
[ "Antibiotics", "Biocides", "Biotechnology products" ]
67,764,026
https://en.wikipedia.org/wiki/Biogeoclimatic%20ecosystem%20classification
Biogeoclimatic ecosystem classification (BEC) is an ecological classification framework used in British Columbia to define, describe, and map ecosystem-based units at various scales, from broad, ecologically-based climatic regions down to local ecosystems or sites. BEC is termed an ecosystem classification as the approach integrates site, soil, and vegetation characteristics to develop and characterize all units. BEC has a strong application focus, and guides to the classification and management of forests, grasslands and wetlands are available for much of the province to aid in identification of the ecosystem units. History The biogeoclimatic ecosystem classification (BEC) system evolved from the work of Vladimir J. Krajina, a Czech-trained professor of ecology and botany at the University of British Columbia, and his students, from 1949 to 1970. Krajina conceptualized the biogeoclimatic approach as an attempt to describe the ecologically diverse and largely undescribed landscape of British Columbia, the mountainous western-most province of Canada, using a unique blend of various contemporary traditions. These included the American tradition of community change and climax, the state factor concept of Jenny, the Braun-Blanquet approach, the Russian biogeocoenose, environmental grids, and the European microscopic pedology approach. The biogeoclimatic approach was subsequently adopted by the Forest Service of British Columbia in 1976—initially as a five-year program to develop the classification to assist with tree species selection in reforestation. The classification concepts adopted from Krajina were modified by the staff of the B.C. Forest Service in the implementation of a provincial classification. Over the past 40 years, the BEC approach has been expanded and applied to all regions of British Columbia. It has developed into a comprehensive framework for understanding ecosystems in a climatically and topographically complex region. Classification Framework Biogeoclimatic ecosystem classification (BEC) is best described as a classification framework that leverages a modified Braun-Blanquet vegetation classification approach to identify and delineate ecologically equivalent climatic regions and site conditions (Figure 1). The framework integrates vegetation classification with two other component hierarchical classifications: climate (or zonal) and site (Figure 2), where the vegetation classification hierarchy is used to develop the other two component hierarchies. The emphasis of the approach is to create ecological units with similar site potential as reflected by mature or climax plant communities. Vegetation Component The BEC approach classifies vegetation in a hierarchy (see Figure 2) that presents vegetation communities at various levels of generalization. At upper levels of the hierarchy, the communities may have the same dominant tree species and occur in the same broad climate, for example, western redcedar - western hemlock forests of maritime climates of British Columbia. At lower levels of the hierarchy, the communities will have very similar understorey species and will occur on similar site conditions, for example, western redcedar forests dominated by skunk cabbage (Lysichiton americanus) occurring on wet, swampy sites. Categories of the vegetation hierarchy are modelled after the Braun-Blanquet approach, including the class, order, alliance, and association levels. Subcategories are generally also applied (i.e., subassociation, suballiance, suborder). 
Fundamentally, the climax plant association is the basic unit of BEC. In the BEC approach, mature or climax plant associations of the zonal site define biogeoclimatic subzones, and ecologically equivalent sites within a given biogeoclimatic unit are recognized and differentiated by mature or climax plant associations and used to define site units. The climax forest state is recognized where the main canopy tree species are the same as those regenerating in the understory. Climax forests of British Columbia (BC) are most commonly dominated by shade-tolerant tree species; however, under some climates or site conditions, shade-intolerant species will regenerate under the canopy. For example, in the driest forested biogeoclimatic units of BC, several pine species that are considered seral species through most of their distribution regenerate under the forest canopy and are recognized as zonal climax species: lodgepole pine (Pinus contorta) in the Sub-Boreal Pine Spruce [SBPS] zone and ponderosa pine (P. ponderosa) in the Ponderosa Pine [PP] zone. Climate or Zonal Component The BEC system classifies climates using a zonal site approach. The zonal site concept arose from the early works of the Russian soil scientist Vasily Dokuchaev (late 1800s) and the soil scientist and forester Georgy N. Vysotsky. They considered that sites and soils with average conditions best reflected the regional climate. BEC adopted the zonal concept and developed specific site and soil criteria to define a zonal site. The mature/climax plant association that occurs on zonal sites is termed the zonal plant association and is used to characterize, differentiate and map biogeoclimatic subzones (see Figure 2). In the climate component hierarchy, subzones are the fundamental units, which are grouped into zones based largely on similarity of shade-tolerant (climax) tree species composition of the zonal plant association. Zonal plant orders characterize Biogeoclimatic Zones; zonal plant associations characterize Biogeoclimatic Subzones; and zonal plant subassociations can be used to characterize Biogeoclimatic Variants. Site Component Plant associations and subassociations are similarly used to define zonal and azonal ecosystems on different site conditions within a consistent climate regime (biogeoclimatic subzone/variant). This approach emphasizes site conditions, as the effects of climate regime are controlled. These biogeoclimatic and site-specific ecosystem units are termed site series (see Figure 2). In the forested environments of BC, soil moisture and soil nutrient gradients are the primary site-level gradients. Soil moisture regime is strongly influenced by position on a slope (Figure 3). Soil moisture and soil nutrient regimes are the two categorical axes used in an edatopic grid to characterize the generalized environmental conditions of vegetation units within a biogeoclimatic unit for most types of terrestrial ecosystems (see Figure 4 for an example; a schematic sketch of an edatopic-grid lookup follows below). Site series from different climates (biogeoclimatic units) which share the same mature or climax plant association are said to occupy ecologically equivalent conditions and are combined into site associations. Site associations are similar in concept to the forest site types of Cajander and the habitat types of the U.S. Pacific Northwest. Where seral plant associations are defined, they are linked directly to the site series (Figure 5). 
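As a schematic illustration of how an edatopic grid is used in practice, the Python sketch below maps a (soil moisture regime, soil nutrient regime) cell to a site series label within a biogeoclimatic subzone/variant. The subzone code, the grid cells, and the site series numbers are invented for illustration and are not taken from actual BEC field guides.

    # Hypothetical edatopic-grid lookup. Relative soil moisture regimes run
    # roughly 0-7 (very xeric to subhydric) and soil nutrient regimes A-E
    # (very poor to very rich); each occupied cell maps to a site series.
    EDATOPIC_GRID = {
        (4, "C"): "01",   # mesic, medium-nutrient cell: the zonal site series
        (2, "B"): "02",   # drier, poorer site
        (6, "D"): "07",   # wetter, richer site
    }

    def site_series(subzone: str, moisture: int, nutrients: str) -> str:
        """Return a site series label for a plot, e.g. 'SBSdw3/01' (codes hypothetical)."""
        key = (moisture, nutrients)
        if key not in EDATOPIC_GRID:
            raise KeyError(f"no site series mapped for edatopic cell {key}")
        return f"{subzone}/{EDATOPIC_GRID[key]}"

    print(site_series("SBSdw3", 4, "C"))   # the zonal site under this invented mapping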
Uses of BEC The BEC system is used by resource managers in British Columbia to assist them with the management of natural ecosystems for forestry, conservation, and wildlife. The system was initially developed to determine, on a site-specific basis, the ecologically suitable tree species for regeneration after forest harvesting. It has evolved into a tool that is also used for: Setting standards for species selection and stocking after harvest Setting conservation targets Determining at-risk ecosystems Understanding range management issues Determining wildlife suitability and capability Figure 6 demonstrates how the BEC system is used to present ecologically suitable tree species for regeneration. References Ecology of the Rocky Mountains Environment of British Columbia Ecology
Biogeoclimatic ecosystem classification
[ "Biology" ]
1,519
[ "Ecology" ]
67,766,374
https://en.wikipedia.org/wiki/Particulate%20inorganic%20carbon
Particulate inorganic carbon (PIC) can be contrasted with dissolved inorganic carbon (DIC), the other form of inorganic carbon found in the ocean. These distinctions are important in chemical oceanography. Particulate inorganic carbon is sometimes called suspended inorganic carbon. In operational terms, it is defined as the inorganic carbon in particulate form that is too large to pass through the filter used to separate dissolved inorganic carbon. Most PIC is calcium carbonate, CaCO3, particularly in the form of calcite, but also in the form of aragonite. Calcium carbonate makes up the shells of many marine organisms. It also forms during whiting events and is excreted by marine fish during osmoregulation. Overview Carbon compounds can be distinguished as either organic or inorganic, and dissolved or particulate, depending on their composition. Organic carbon forms the backbone of key classes of organic compounds such as proteins, lipids, carbohydrates, and nucleic acids. Inorganic carbon is found primarily in simple compounds such as carbon dioxide, carbonic acid, bicarbonate, and carbonate (CO2, H2CO3, HCO3−, CO32− respectively). Marine carbon is further separated into particulate and dissolved phases. These pools are operationally defined by physical separation – dissolved carbon passes through a 0.2 μm filter, and particulate carbon does not. There are two main types of inorganic carbon that are found in the oceans. Dissolved inorganic carbon (DIC) is made up of bicarbonate (HCO3−), carbonate (CO32−) and carbon dioxide (including both dissolved CO2 and carbonic acid H2CO3). DIC can be converted to particulate inorganic carbon (PIC) through precipitation of CaCO3 (biologically or abiotically). DIC can also be converted to particulate organic carbon (POC) through photosynthesis and chemoautotrophy (i.e. primary production). DIC increases with depth as organic carbon particles sink and are respired. Free oxygen decreases as DIC increases because oxygen is consumed during aerobic respiration. Particulate inorganic carbon (PIC) is the other form of inorganic carbon found in the ocean. Most PIC is the CaCO3 that makes up shells of various marine organisms, but it can also form in whiting events. Marine fish also excrete calcium carbonate during osmoregulation. Some of the inorganic carbon species in the ocean, such as bicarbonate and carbonate, are major contributors to alkalinity, a natural ocean buffer that prevents drastic changes in acidity (or pH). The marine carbon cycle also affects the reaction and dissolution rates of some chemical compounds, regulates the amount of carbon dioxide in the atmosphere, and influences Earth's temperature. Calcium carbonate Particulate inorganic carbon (PIC) usually takes the form of calcium carbonate (CaCO3), and plays a key part in the ocean carbon cycle. This biologically fixed carbon is used as a protective coating for many planktonic species (coccolithophores, foraminifera) as well as larger marine organisms (mollusk shells). Calcium carbonate is also excreted at high rates during osmoregulation by fish, and can form in whiting events. While this form of carbon is not directly taken from the atmospheric budget, it is formed from dissolved forms of carbonate which are in equilibrium with CO2 and are then responsible for removing this carbon via sequestration. CO2 + H2O → H2CO3 → H+ + HCO3− Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O While this process does manage to fix a large amount of carbon, two units of alkalinity are sequestered for every unit of sequestered carbon; a numerical sketch of the resulting speciation shift follows below. 
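A minimal numerical sketch of this effect follows, in Python. It solves the carbonate-only alkalinity balance for [H+] by bisection and ignores borate and water contributions; the equilibrium constants and the starting DIC and alkalinity are rough, assumed surface-seawater values, so the outputs indicate direction and magnitude only.

    import numpy as np

    # Assumed surface-seawater constants near 25 degC (illustrative, uncalibrated)
    K0 = 2.95e-2   # CO2 solubility, mol kg-1 atm-1
    K1 = 1.4e-6    # first dissociation constant of carbonic acid
    K2 = 1.1e-9    # second dissociation constant

    def speciate(dic, alk):
        """Solve ALK = [HCO3-] + 2[CO3--] for [H+]; return (pH, pCO2 in uatm)."""
        lo, hi = 1e-12, 1e-4                  # bracket [H+] between pH 12 and pH 4
        for _ in range(100):
            h = np.sqrt(lo * hi)              # bisect in log space
            denom = h * h + K1 * h + K1 * K2
            ca = dic * (K1 * h + 2.0 * K1 * K2) / denom
            lo, hi = (h, hi) if ca > alk else (lo, h)   # too alkaline -> need more H+
        co2 = dic * h * h / denom             # dissolved CO2 concentration
        return -np.log10(h), 1e6 * co2 / K0

    dic, alk = 2000e-6, 2300e-6               # mol kg-1, typical open-ocean values
    print("before calcification: pH %.2f, pCO2 %.0f uatm" % speciate(dic, alk))
    # Precipitating 100 umol kg-1 CaCO3 removes 1 unit of DIC and 2 of alkalinity:
    print("after calcification:  pH %.2f, pCO2 %.0f uatm" % speciate(dic - 100e-6, alk - 200e-6))

Run with these assumed inputs, the post-calcification state has a lower pH and a higher pCO2, which is the counterintuitive CO2-releasing behaviour of calcification discussed next.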
The formation and sinking of CaCO3 therefore drives a surface-to-deep alkalinity gradient which serves to lower the pH of surface waters, shifting the speciation of dissolved carbon so as to raise the partial pressure of dissolved CO2 in surface waters, which actually raises atmospheric levels. In addition, the burial of CaCO3 in sediments serves to lower overall oceanic alkalinity, tending to raise atmospheric CO2 levels if not counterbalanced by the new input of alkalinity from weathering. The portion of carbon that is permanently buried at the sea floor becomes part of the geologic record. Calcium carbonate often forms remarkable deposits that can then be raised onto land through tectonic motion, as in the case of the White Cliffs of Dover in Southern England. These cliffs are made almost entirely of the plates of buried coccolithophores. Carbonate pump The carbonate pump, sometimes called the carbonate counter pump, starts with marine organisms at the ocean's surface producing particulate inorganic carbon (PIC) in the form of calcium carbonate (calcite or aragonite, CaCO3). This CaCO3 is what forms hard body parts like shells. The formation of these shells increases atmospheric CO2 due to the production of CaCO3 in the following reaction with simplified stoichiometry: Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O Coccolithophores, a nearly ubiquitous group of phytoplankton that produce shells of calcium carbonate, are the dominant contributors to the carbonate pump. Due to their abundance, coccolithophores have significant implications on carbonate chemistry, in the surface waters they inhabit and in the ocean below: they provide a large mechanism for the downward transport of CaCO3. The air-sea CO2 flux induced by a marine biological community can be determined by the rain ratio – the proportion of carbon from calcium carbonate compared to that from organic carbon in particulate matter sinking to the ocean floor (PIC/POC). The carbonate pump acts as a negative feedback on CO2 taken into the ocean by the solubility pump. It occurs with lesser magnitude than the solubility pump. The carbonate pump is sometimes referred to as the "hard tissue" component of the biological pump. Some surface marine organisms, like coccolithophores, produce hard structures out of calcium carbonate, a form of particulate inorganic carbon, by fixing bicarbonate. This fixation of DIC is an important part of the oceanic carbon cycle. Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O While the biological carbon pump fixes inorganic carbon (CO2) into particulate organic carbon in the form of sugar (C6H12O6), the carbonate pump fixes inorganic bicarbonate and causes a net release of CO2. In this way, the carbonate pump could be termed the carbonate counter pump. It works counter to the biological pump by counteracting the CO2 flux from the biological pump. Calcite and aragonite seas An aragonite sea contains aragonite and high-magnesium calcite as the primary inorganic calcium carbonate precipitates. The chemical conditions of the seawater must be notably high in magnesium content relative to calcium (high Mg/Ca ratio) for an aragonite sea to form. This is in contrast to a calcite sea, in which seawater low in magnesium content relative to calcium (low Mg/Ca ratio) favors the formation of low-magnesium calcite as the primary inorganic marine calcium carbonate precipitate. 
The Early Paleozoic and the Middle to Late Mesozoic oceans were predominantly calcite seas, whereas the Middle Paleozoic through the Early Mesozoic and the Cenozoic (including today) are characterized by aragonite seas. Aragonite seas occur due to several factors, the most obvious of these being a high seawater Mg/Ca ratio (Mg/Ca > 2), which occurs during intervals of slow seafloor spreading. However, the sea level, temperature, and calcium carbonate saturation state of the surrounding system also determine which polymorph of calcium carbonate (aragonite, low-magnesium calcite, high-magnesium calcite) will form. Likewise, the occurrence of calcite seas is controlled by the same suite of factors controlling aragonite seas, with the most obvious being a low seawater Mg/Ca ratio (Mg/Ca < 2), which occurs during intervals of rapid seafloor spreading. Whiting events A whiting event is a phenomenon that occurs when a suspended cloud of fine-grained calcium carbonate precipitates in water bodies, typically during summer months, as a result of photosynthetic microbiological activity or sediment disturbance. The phenomenon gets its name from the white, chalky color it imparts to the water. These events have been shown to occur in temperate waters as well as tropical ones, and they can span hundreds of meters. They can also occur in both marine and freshwater environments. The origin of whiting events is debated among the scientific community, and it is unclear whether there is a single, specific cause. Generally, they are thought to result from either bottom sediment re-suspension or increased activity of certain microscopic life such as phytoplankton. Because whiting events affect aquatic chemistry, physical properties, and carbon cycling, studying the mechanisms behind them holds scientific relevance in various ways. Great Calcite Belt The Great Calcite Belt (GCB) of the Southern Ocean is a region of elevated summertime upper-ocean calcite concentration derived from coccolithophores, despite the region being known for its diatom predominance. The overlap of two major phytoplankton groups, coccolithophores and diatoms, in the dynamic frontal systems characteristic of this region provides an ideal setting to study environmental influences on the distribution of different species within these taxonomic groups. The Great Calcite Belt, defined as an elevated particulate inorganic carbon (PIC) feature occurring alongside seasonally elevated chlorophyll a in austral spring and summer in the Southern Ocean, plays an important role in climate fluctuations, accounting for over 60% of the Southern Ocean area (30–60° S). The region between 30° and 50° S has the highest uptake of anthropogenic carbon dioxide (CO2) alongside the North Atlantic and North Pacific oceans. Knowledge of the impact of interacting environmental influences on phytoplankton distribution in the Southern Ocean is limited. For example, more understanding is needed of how light and iron availability or temperature and pH interact to control phytoplankton biogeography. Hence, if model parameterizations are to improve to provide accurate predictions of biogeochemical change, a multivariate understanding of the full suite of environmental drivers is required. The Southern Ocean has often been considered a microplankton-dominated (20–200 μm) system, with phytoplankton blooms dominated by large diatoms and Phaeocystis sp. 
However, since the identification of the GCB as a consistent feature and the recognition of the importance of picoplankton (< 2 μm) and nanoplankton (2–20 μm) in high-nutrient, low-chlorophyll (HNLC) waters, the dynamics of small (bio)mineralizing plankton and their export need to be acknowledged. The two dominant biomineralizing phytoplankton groups in the GCB are coccolithophores and diatoms. Coccolithophores are generally found north of the polar front, though Emiliania huxleyi has been observed as far south as 58° S in the Scotia Sea, at 61° S across Drake Passage, and at 65° S south of Australia. Diatoms are present throughout the GCB, with the polar front marking a strong divide between different size fractions. North of the polar front, small diatom species, such as Pseudo-nitzschia spp. and Thalassiosira spp., tend to dominate numerically, whereas large diatoms with higher silicic acid requirements (e.g., Fragilariopsis kerguelensis) are generally more abundant south of the polar front. High abundances of nanoplankton (coccolithophores, small diatoms, chrysophytes) have also been observed on the Patagonian Shelf and in the Scotia Sea. Currently, few studies incorporate small biomineralizing phytoplankton to species level. Rather, the focus has often been on the larger and noncalcifying species in the Southern Ocean, due to sample preservation issues (i.e., acidified Lugol’s solution dissolves calcite, and light microscopy restricts accurate identification to cells > 10 μm). In the context of climate change and future ecosystem function, the distribution of biomineralizing phytoplankton is important to define when considering phytoplankton interactions with carbonate chemistry and ocean biogeochemistry. The Great Calcite Belt spans the major Southern Ocean circumpolar fronts: the Subantarctic front, the polar front, the Southern Antarctic Circumpolar Current front, and occasionally the southern boundary of the Antarctic Circumpolar Current. The subtropical front (at approximately 10 °C) acts as the northern boundary of the GCB and is associated with a sharp increase in PIC southwards. These fronts divide distinct environmental and biogeochemical zones, making the GCB an ideal study area to examine controls on phytoplankton communities in the open ocean. The GCB is characterized by high PIC concentrations (1 μmol PIC L−1) compared to the global average (0.2 μmol PIC L−1) and by significant quantities of detached E. huxleyi coccoliths (in concentrations > 20,000 coccoliths mL−1). The GCB is clearly observed in satellite imagery spanning from the Patagonian Shelf across the Atlantic, Indian, and Pacific oceans and completing an Antarctic circumnavigation via the Drake Passage. Coccolithophores Since the Industrial Revolution, 30% of the anthropogenic CO2 has been absorbed by the oceans, resulting in ocean acidification, which is a threat to calcifying algae. As a result, there has been profound interest in these calcifying algae, boosted by their major role in the global carbon cycle. Globally, coccolithophores, particularly Emiliania huxleyi, are considered to be the most dominant calcifying algae, whose blooms can even be seen from outer space. Calcifying algae create an exoskeleton from calcium carbonate platelets (coccoliths), providing ballast which enhances the organic and inorganic carbon flux to the deep sea. Organic carbon is formed by means of photosynthesis, in which CO2 is fixed and converted into organic molecules, causing removal of CO2 from the seawater. 
Counterintuitively, the production of coccoliths leads to the release of CO2 into the seawater, due to the removal of carbonate from the seawater, which reduces the alkalinity and causes acidification. Therefore, the ratio between particulate inorganic carbon (PIC) and particulate organic carbon (POC) is an important measure of the net release or uptake of CO2. In short, the PIC:POC ratio is a key characteristic required to understand and predict the impact of climate change on the global ocean carbon cycle. Calcium particle morphologies See also carbonate compensation depth aragonite compensation depth lysocline calcareous ooze Carbonate pump Marine biogenic calcification snowline: the depth at which carbonate disappears from sediments under steady-state conditions References Sources Chemical oceanography Environmental chemistry Soil
Particulate inorganic carbon
[ "Chemistry", "Environmental_science" ]
3,219
[ "Chemical oceanography", "Environmental chemistry", "nan" ]
70,683,630
https://en.wikipedia.org/wiki/Convection%20enhanced%20delivery
Convection-enhanced delivery (CED) is a method of drug delivery in which the drug is delivered into the brain using bulk flow rather than conventional diffusion. This is done by inserting catheters into the target region of the brain and using pressure to deliver the therapeutic to that region. CED has been used to deliver drugs to the central nervous system (CNS) for diseases such as cancer, epilepsy, and Parkinson's disease. CED is attractive for CNS delivery because it bypasses the blood–brain barrier (BBB) and targets specific regions for treatment, but current techniques using CED have failed to progress past clinical trials due to a variety of physical limitations associated with CED itself. Background The blood brain barrier (BBB) has historically proved to be a very difficult obstacle to overcome when aiming to deliver a drug to the brain. In order to overcome the difficulties in delivering therapeutic levels of drug past the BBB, drugs had to either be lipophilic molecules with a molecular weight below 600 Da or be transported across the BBB using some sort of cellular transport system. In the 1990s, a research group led by Edward Oldfield at the National Institutes of Health proposed utilizing CED to deliver drugs and molecules too big to bypass the BBB to the brain. CED is also useful for delivering drugs that have poor diffusive properties, and it allows for targeted placement of the catheter used to deliver the drugs. A vast majority of current clinical studies using CED use it as a method to treat brain tumors that are inoperable or have shown little response to conventional therapies. Mechanism of action CED is a method of drug delivery in which a pressure gradient is created at the tip of a catheter to use bulk flow rather than diffusion to deliver drugs into the brain. Diffusion is limited by the diffusivity of the tissue and can be expressed using Fick's law, $J = -D\,\nabla C$, where $J$ is the diffusive flux, $D$ is the diffusivity of the targeted tissue, and $\nabla C$ is the concentration gradient of the drug. Diffusion can only be increased by steepening the concentration gradient of a drug, meaning that in order to deliver drug to large parts of a tissue, high concentrations of a drug are needed to promote diffusion, which can result in toxicity. In comparison, bulk flow is governed by Darcy's law, $v = -K\,\nabla P$, where $v$ is the velocity, $K$ is the hydraulic conductivity, and $\nabla P$ is the pressure gradient. Using bulk flow means a drug can be delivered further into a target tissue with higher pressure, resulting in lower concentrations and less risk of drug toxicity. To perform a CED treatment, catheters are inserted through burr holes drilled into the skull. Treatments can use multiple catheters for a single delivery if that is required. The catheters are inserted into the interstitial space of the brain using image guidance. Once the catheters are placed at the desired site, they are connected to an infusion pump which is used to create the pressure gradient for bulk flow. Infusion rates are typically set to 0.1-10 μL/min and the drug is delivered into the interstitial space, displacing extracellular fluid. CED can result in delivery of drug centimeters deep into the tissue from the delivery site, rather than the millimeters deep that would result from delivery of drugs via diffusion; a rough numerical comparison follows below. 
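As a rough, order-of-magnitude illustration of that contrast, the Python sketch below compares a characteristic diffusion length with the radius reached by bulk flow from a point infusion; the diffusivity and extracellular volume fraction are assumed, textbook-scale values, and the spherical geometry is idealized.

    import math

    D = 1e-7                   # cm^2/s, assumed effective diffusivity of a macromolecule
    t = 24 * 3600.0            # a 24-hour infusion, in seconds

    # Characteristic diffusion penetration: L ~ sqrt(4 D t)
    L_diff_cm = math.sqrt(4.0 * D * t)

    # Bulk flow from a point source: the infused volume Vi spreads through the
    # extracellular space (volume fraction phi ~ 0.2), filling a sphere Vi / phi.
    Q = 1.0e-3                 # infusion rate in mL/min (i.e. 1 uL/min)
    phi = 0.2
    Vi = Q / 60.0 * t          # infused volume in mL (= cm^3)
    R_conv_cm = (3.0 * Vi / (4.0 * math.pi * phi)) ** (1.0 / 3.0)

    print(f"diffusion length  ~ {10.0 * L_diff_cm:.1f} mm")   # about 2 mm
    print(f"convective radius ~ {10.0 * R_conv_cm:.1f} mm")   # about 12 mm

Under these assumptions, diffusion alone penetrates a couple of millimetres in a day while bulk flow reaches on the order of a centimetre, which is the contrast described above.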
Clinical trials evaluating CED Clinical trials exploring the use of CED have to date not resulted in any FDA-approved treatments. These clinical trials have mostly been focused on using CED to treat glioblastoma, and only two studies have been able to progress to phase 3 clinical trials. The first study began in 2004 and compared the efficacy of cintredekin besudotox delivered using CED with Gliadel for the treatment of glioblastoma multiforme. Results from this study showed similar survival rates between the two groups, but patients who were given the CED treatment had higher rates of pulmonary emboli. The second phase 3 clinical trial began in 2008 and delivered trabedersen via CED to treat anaplastic astrocytoma and glioblastoma. This trial was terminated early due to the inability to recruit enough trial participants, and the efficacy of CED in this treatment was not established. These two studies have been the only major clinical trials which have compared the efficacy of CED treatment to current clinical standards for treatment. While CED clinical trials have primarily explored treating brain tumors, other conditions involving the brain have also been investigated in clinical trials. To date there have been 2 registered clinical trials, both in phase 1, which aim to use CED to treat Parkinson's disease. The first trial, which was registered in 2009, was withdrawn in 2017 for unknown reasons. The other clinical trial, which reached completion in 2022, delivered an adeno-associated virus (AAV2) encoding glial cell line-derived neurotrophic factor (GDNF) directly into the brain using CED. GDNF is known to protect neurons which produce dopamine. Parkinson's disease has been shown to decrease the amount of dopamine which can be produced in the brain, so researchers hope to be able to decrease the side effects of Parkinson's disease by protecting neurons which produce dopamine. While results from this study have not been published as of April 2022, the pre-clinical research done in a rhesus monkey model of Parkinson's disease showed that CED treatment with AAV2-GDNF resulted in neurological improvement without significant side effects. Non-clinical research Even though current clinical trials have not yet resulted in an FDA-approved treatment, there is still plenty of research being done on delivering different types of therapeutics and on treating different diseases. One of these areas of research is the visualization of the region of treatment. One research group was able to visualize the regions of the brain that received drug via bulk flow by mixing the desired drug with Gd-DTPA, a common MRI contrast agent. This allowed researchers to immediately take an MRI post CED treatment to assess if the drug was reaching the targeted area. Researchers have also tagged therapeutic nanocarriers with the MRI contrast agent gadoteridol for real-time treatment imaging. Other than MRI contrast agents, it has been shown to be possible to tag a therapeutic microcarrier with a radiolabeled or fluorescent molecule that can then be excited during imaging. The biggest limitation of this drug distribution visualization is that the technique only works ex vivo. One research group was able to optimize their liposomal design using this technique, showing its usefulness. While a common use of CED is to directly deliver drugs to the brain, it is also possible to deliver non-chemical therapeutics, such as proteins or growth factors, using CED. 
There are several types of microcarriers which have been used for CED, including nanospheres, nanoparticles, liposomes, micelles, and dendrimers. Nanocarriers have several unique benefits for delivering therapeutics compared to conventional drug solutions. Firstly, nanocarriers can be modified to create an optimal carrier for the system being developed; these modifications include changes to size, charge, osmolarity, viscosity, and surface coating, as well as tagging for imaging. The other large area of current CED research is the translation of CED from brain tumors to other brain diseases. The primary non-cancerous conditions being researched are Parkinson's disease and epilepsy. Animal model research using CED to deliver therapeutics to the brain to treat Parkinson's disease has identified three promising therapeutics. Researchers have typically used viral carriers for these therapeutics, since many candidate treatments for Parkinson's disease are not small-molecule drugs but rather gene therapies or proteins. Current research focuses on GDNF, a growth factor which protects dopamine-producing brain cells; glutamic acid decarboxylase (GAD), another therapeutic intended to protect dopamine-producing brain cells; and neurturin, a GDNF homolog. Another reported use of CED is in the treatment of epilepsy. The epilepsy treatments under investigation are too large to pass through the BBB, so utilizing CED to deliver these drugs is currently one of the only ways to target the brain. The two primary antiepileptic drugs (AEDs) being delivered using CED in research are conotoxin N-type calcium channel antagonists and botulinum neurotoxins. Results from these studies showed promise in reducing the risk of seizures for up to 5 days when treated with the calcium channel antagonists and up to 50 days when using the botulinum neurotoxins. Limitations and future directions While there has been promise in the use of CED to deliver drugs directly into the brain, there are some drawbacks. The vast majority of studies to date have failed to achieve consistent delivery from patient to patient for technical reasons surrounding the use of CED. Incorrect placement of catheters can result in a less effective treatment, with increased risk of leaks from the brain into other parts of the central nervous system (CNS). Another, more common occurrence is reflux of the drug back along the catheter. Reflux can cause leakage into unintended areas as well as decrease the true volume of drug delivered. Improvements to CED catheters are currently being researched, with some research groups modifying the tips of the catheters to prevent reflux. A balloon-tipped catheter for use in CED has been proposed, and results showed that drug was successfully delivered into the brain using the balloon-tipped catheter without complication. Other proposed designs include catheters with multiple exit sites, catheters with porous tips, and catheters with tips that are narrower than the rest of the catheter. New catheter designs also aim to allow a greater flow rate while still minimizing the risk of reflux. These improvements to the technical limitations of CED aim to help researchers determine the efficacy of a treatment without failed treatments caused by limitations of the CED equipment.
With this in mind, CraniUS LLC, a technology company based in Baltimore, Maryland, is developing a fully implantable, MRI-compatible, wirelessly charged, Bluetooth-enabled craniofacial implant intended to give neurosurgical patients an option for chronic, direct drug delivery to the brain via convection-enhanced delivery, using an embedded microfluidic pump system and a port-access system for repeated transcutaneous refilling. References Drug delivery devices
Convection enhanced delivery
[ "Chemistry" ]
2,287
[ "Pharmacology", "Drug delivery devices" ]
70,685,311
https://en.wikipedia.org/wiki/Mean%20line%20segment%20length
In geometry, the mean line segment length is the average length of a line segment connecting two points chosen uniformly at random in a given shape. In other words, it is the expected Euclidean distance between two random points, where each point in the shape is equally likely to be chosen. Even for simple shapes such as a square or a triangle, solving for the exact value of their mean line segment lengths can be difficult because their closed-form expressions can get quite complicated. As an example, consider the following question: What is the average distance between two randomly chosen points inside a square with side length 1? While the question may seem simple, it has a fairly complicated answer; the exact value is (2 + √2 + 5 arcsinh 1)/15 ≈ 0.521405, where arcsinh 1 = ln(1 + √2). Formal definition The mean line segment length for an n-dimensional shape S may formally be defined as the expected Euclidean distance ||x − y|| between two random points x and y, E[||x − y||] = (1/λ(S)²) ∫_S ∫_S ||x − y|| dλ(x) dλ(y), where λ is the n-dimensional Lebesgue measure. For the two-dimensional case, the distance between two points (x1, y1) and (x2, y2) is given by the distance formula √((x1 − x2)² + (y1 − y2)²). Approximation methods Since computing the mean line segment length involves calculating multidimensional integrals, various methods for numerical integration can be used to approximate this value for any shape. One such method is the Monte Carlo method. To approximate the mean line segment length of a given shape, two points are randomly chosen in its interior and the distance between them is measured. After many repetitions of these steps, the average of these distances converges to the true value. These methods give only an approximation; they cannot be used to determine the exact value. Formulas Line segment For a line segment of length ℓ, the average distance between two points is ℓ/3. Triangle For a triangle with side lengths a, b, and c, the average distance between two points in its interior is given by a lengthy closed-form expression involving the semiperimeter s = (a + b + c)/2. For an equilateral triangle with side length a, this is equal to (a/20)(4 + 3 ln 3) ≈ 0.3648a. Square and rectangles The average distance between two points inside a square with side length s is s(2 + √2 + 5 arcsinh 1)/15 ≈ 0.5214s. More generally, the mean line segment length of a rectangle with side lengths l and w is (1/15)[l³/w² + w³/l² + d(3 − l²/w² − w²/l²)] + (1/6)[(w²/l) arcsinh(l/w) + (l²/w) arcsinh(w/l)], where d = √(l² + w²) is the length of the rectangle's diagonal. If the two points are instead chosen to be on different sides of the square, the average distance is s(2 + √2 + 5 arcsinh 1)/9. Cube and hypercubes The average distance between points inside an n-dimensional unit hypercube is denoted Δ(n); it is given by a multiple integral that has been evaluated in closed form only for small n. The first two values, Δ(1) = 1/3 and Δ(2) = (2 + √2 + 5 arcsinh 1)/15, refer to the unit line segment and unit square respectively. For the three-dimensional case, the mean line segment length of a unit cube is also known as Robbins constant, named after David P. Robbins. This constant has a closed form, Δ(3) = (4 + 17√2 − 6√3 − 7π)/105 + ln(1 + √2)/5 + (2/5) ln(2 + √3). Its numerical value is approximately 0.661707. Anderssen et al. (1976) established rigorous upper and lower bounds on Δ(3). Choosing points from two different faces of the unit cube also gives a result with a closed form. Circle and sphere The average chord length between two points on the circumference of a circle of radius r is 4r/π. For two points picked on the surface of a sphere with radius r it is 4r/3. Disks The average distance between points inside a disk of radius r is 128r/(45π) ≈ 0.9054r. Closed-form values are also known for a half disk and a quarter disk of radius 1.
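The Monte Carlo procedure described under approximation methods above is straightforward to implement. The following Python sketch (the sample count and seed are arbitrary illustrative choices) estimates the mean for the unit square and can be checked against the exact value:

import math
import random

def mean_segment_length_unit_square(samples=1_000_000, seed=0):
    # Monte Carlo estimate of the mean distance between two
    # points drawn uniformly at random from the unit square.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x1, y1 = rng.random(), rng.random()    # first random point
        x2, y2 = rng.random(), rng.random()    # second random point
        total += math.hypot(x1 - x2, y1 - y2)  # Euclidean distance
    return total / samples

estimate = mean_segment_length_unit_square()
exact = (2 + math.sqrt(2) + 5 * math.asinh(1)) / 15
print(f"estimate = {estimate:.6f}, exact = {exact:.6f}")

With a million samples the estimate typically agrees with 0.521405 to about three decimal places, consistent with the slow O(1/√N) convergence of Monte Carlo methods. Closed forms for further shapes follow.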
Balls For a three-dimensional ball of radius r, the mean distance between two interior points is 36r/35. More generally, the mean line segment length of an n-ball has a closed form whose coefficient depends on the parity of n. General bounds Burgstaller and Pillichshammer (2008) showed that for a compact subset of n-dimensional Euclidean space with diameter 1, the mean line segment length L satisfies an explicit upper bound expressed in terms of the gamma function. For n = 2, a stronger bound exists. References External links Length Probability problems
Mean line segment length
[ "Physics", "Mathematics" ]
767
[ "Scalar physical quantities", "Physical quantities", "Distance", "Quantity", "Size", "Probability problems", "Length", "Wikipedia categories named after physical quantities", "Mathematical problems" ]
70,687,909
https://en.wikipedia.org/wiki/Neodymium%20arsenate
Neodymium arsenate, also known as neodymium(III) arsenate, is the arsenate of neodymium with the chemical formula NdAsO4. In this compound, neodymium exhibits the +3 oxidation state. It has good thermal stability, and its pKsp,c is 21.86±0.11. Preparation Neodymium arsenate can be obtained from the reaction between sodium arsenate (Na3AsO4) and neodymium chloride (NdCl3) in solution: Na3AsO4 + NdCl3 → 3 NaCl + NdAsO4↓ Neodymium arsenate crystals grown from a lead pyroarsenate flux explode when cooled. Neodymium arsenate also occurs in nature as a mineral. See also Arsenic References Neodymium(III) compounds Arsenates
Neodymium arsenate
[ "Chemistry" ]
190
[ "Inorganic compounds", "Inorganic compound stubs" ]
70,689,080
https://en.wikipedia.org/wiki/Pullulan%20bioconjugate
Pullulan bioconjugates are systems that use pullulan as a scaffold to which biological materials, such as drugs, are attached. These systems can be used to enhance the delivery of drugs to specific environments or the mechanism of delivery. They can be used to deliver drugs in response to stimuli, to create more controlled and sustained release, and to provide more targeted delivery of certain drugs. Pullulan formulation Pullulan is generated by the microbe A. pullulans, mainly from glucose, but it can also be produced from maltose, fructose, galactose, sucrose, and mannose. In a commercial setting, pullulan is obtained from a strain of A. pullulans that is non-toxic, non-pathogenic, and not genetically modified, which is fed a liquid form of starch in a controlled environment. The pullulan produced can be modified by different conditions such as the nutrients provided, temperature, pH, oxygen content, and other supplements. The microbe needs to be provided with a source of carbon and nitrogen in order to produce pullulan, and the ratio of carbon to nitrogen needs to be precise in order to maximize pullulan production. Higher levels of nitrogen than carbon are required, as excess carbon can decrease the efficiency of the enzymes, while excess nitrogen increases the production of biomass but does not affect pullulan production. Oxygen is also important for the proliferation of the A. pullulans cells and the production of pullulan. Further supplements, such as olive oil and Tween 80, can be used to increase the level of pullulan production. While the manufacturing conditions of pullulan can be altered to increase yield, chemical modifications of pullulan can also be used to alter its properties. The unmodified structure of pullulan contains nine hydroxyl groups attached to the backbone of the molecule, and these hydroxyl groups can be replaced with other functional groups. Processes that can modify the functional groups of pullulan include sulfation, esterification, oxidation, etherification, copolymerization, and amidation, among others. Pullulan can be given a negative charge by attaching a carboxylate group to a hydroxyl through an ester linkage, which yields carboxymethyl pullulan. Pullulan is hydrophilic and can be given hydrophobic functionality by adding a cholesterol group; the main benefit of the added hydrophobic functionality is that it allows the pullulan to form self-assembling micelles. Another notable modification is the acetylation of pullulan to create pullulan acetate (PA), which also has hydrophobic functionality. PA has the benefit of forming self-assembled nanoparticles, which can simplify the manufacturing of certain pullulan bioconjugates. Pullulan and pullulan derivatives can also be folated to improve cancer cell targeting, as the nanoparticle can then be endocytosed into cancer cells through folate-mediated endocytosis. Stimuli responsive systems Pullulan bioconjugate systems can be formed to respond to many different stimuli to enhance the release of the drug at the target tissue. These stimuli include pH, temperature, photothermal, electrical, ultrasonic, magnetic, and enzymatic triggers. pH is often used to target tumor tissues, as the extracellular pH of tumors is more acidic than that of normal cells.
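As a toy illustration of such pH-triggered behavior, the following Python sketch models cumulative drug release as first-order kinetics whose rate constant rises as pH falls toward an acid-labile linker's pKa; every numerical parameter here is a hypothetical illustration value, not data from the studies discussed in this article.

import math

def fraction_released(t_hours, pH, k_base=0.01, k_acid=0.12, pKa=6.0):
    # First-order release with a pH-dependent rate constant.
    # The rate interpolates between k_base (neutral pH) and k_acid
    # (acidic pH) via a sigmoid centred on the linker pKa.
    protonated = 1.0 / (1.0 + 10 ** (pH - pKa))   # fraction of acid-labile bonds protonated
    k = k_base + (k_acid - k_base) * protonated   # effective release rate constant, 1/hour
    return 1.0 - math.exp(-k * t_hours)           # cumulative fraction released

for pH in (7.4, 5.0):
    print(f"pH {pH}: {fraction_released(24, pH):.0%} released after 24 h")

With these hypothetical parameters the model releases roughly 29% of its payload in 24 hours at pH 7.4 but over 90% at pH 5.0, the same qualitative contrast reported for the systems described next.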
A pullulan and polydopamine hydrogel loaded with crystal violet demonstrated pH-responsive behavior due to the protonation of the polydopamine, which increased the release of the crystal violet in an acidic environment. The study showed that at pH 7.4, typical of a normal cell's extracellular environment, about 60% of the crystal violet was released, compared to 87% at pH 5.0. The use of pH-responsive systems for the treatment of cancer may aid in overcoming drug resistance as well as preventing excess damage to healthy tissue. Another pH-responsive pullulan system was formed from pullulan and doxorubicin, with the doxorubicin attached to the pullulan by hydrazone bonds. The drug release of the doxorubicin was tested at two pHs, 7.4 and 5, the hydrazone bond being stable at 7.4 and cleaving in acidic environments. The results from this study supported those from the pullulan and polydopamine study, as doxorubicin was released faster in the acidic environment than at the pH reflecting a normal cell's extracellular environment. Temperature can also be used as a trigger to control drug release from pullulan systems. Thermally responsive pullulan systems can be used in conjunction with heat-generating treatments for cancer in order to improve the treatment. Nanoparticles composed of periodate-oxidized carboxymethyl pullulan crosslinked with two Jeffamines were synthesized and demonstrated that the nanoparticle size decreases with increasing temperature, because the increased temperature promotes the hydrophobic interactions of the structure. Altering the temperature can induce heating or cooling dynamics that are reversible, which allows for unique drug release properties. Pullulan can be modified with photosensitizers to provide a controlled thermal reaction in a target area; spiropyran can be added to pullulan to act as a photosensitizer. Electrical stimuli can be used to alter the delivery of drugs through pullulan constructs. A copolymer, polyacrylamide-graft-pullulan, was synthesized and used for transdermal delivery of rivastigmine tartrate. In this study, the electric stimulus increased the diffusion rate and in effect acted as a controllable switch for the diffusion rate. Pullulan systems can also enhance ultrasound imaging: pullulan-graft-poly(carboxybetaine methacrylate) demonstrated the ability to generate carbon dioxide in response to ultrasound, which enhanced the contrast. Superparamagnetic iron oxide nanoparticles (SPIONs) with magnetic properties have been generated and were shown to improve uptake while decreasing cytotoxicity. Enzymes can also be used to trigger drug release mechanisms; for example, esterase has been used to cleave photosensitizers from pullulan in order to increase the photodynamic reaction. As the last example demonstrates, these stimulus-response mechanisms do not have to be independent; they can be used in combination to improve the efficacy of drug delivery. Self-assembled pullulan mechanism When pullulan is modified with a hydrophobic functionality, such as cholesterol, the pullulan derivative forms self-assembled vesicles that can encapsulate a hydrophobic drug. With the hydrophobic functional group, the pullulan derivative is an amphiphilic molecule, which in an aqueous environment forms a micelle.
This micelle has a hydrophilic exterior from the pullulan backbone and a hydrophobic core due to the functional group added to the pullulan. The nanoparticles formed are spherical, have an average size of 20-30 nanometers according to dynamic light scattering measurements, and can be maintained in physiological conditions. Cholesteryl-pullulan (CHP) is an example of a pullulan derivative capable of forming self-assembled structures and has been used to deliver anticancer drugs. The size of the self-assembled nanoparticle can be adjusted by changing the amount of cholesterol attached to the pullulan: the higher the number of cholesterol substitutions, the smaller the nanoparticle created. PA and folated PA (FPA) have been created and form self-assembled nanoparticles, which have been used to deliver epirubicin. Pullulan derivatives have been combined with gold to form self-assembled nanoparticles capable of loading doxorubicin. Pullulan-dexamethasone bioconjugates have been created which also self-assemble into nanoparticles with an approximate size of 400 nanometers and have been shown to extend the release of the dexamethasone. Anticancer Pullulan is used as a bioconjugate platform in order to enhance the delivery of chemotherapeutics. Pullulan derivatives can be created to specifically target cancer cells. In terms of cancer therapeutics, pullulan can encapsulate hydrophobic cancer therapeutics through self-assembled micelles, can be linked to drugs in the form of a bioconjugate, and can be utilized for its pH-responsive nature. Cancer drugs that have been used with pullulan include doxorubicin, paclitaxel, epirubicin, mitoxantrone, and 10-hydroxycamptothecin. Pullulan derivatives can be folated to take advantage of the higher density of folate receptors on cancer cells. Doxorubicin has been loaded into pullulan micelles and folated micelles for targeted delivery to cancer cells through folate-mediated endocytosis. The use of folated pullulan nanoparticles shows lower toxicity and higher levels of drug accumulation within the cancer cells. The pH sensitivity of pullulan systems also makes them good candidates for chemotherapeutic delivery, as they can be altered by the acidic environment of the tumor to provide targeted release. Pullulan nanoparticles have also been used to deliver paclitaxel and proved to be stable under different environmental conditions. Curcumin-pullulan derivatives are effective in targeting hepatocarcinoma cells, as the pullulan increases the solubility of curcumin and therefore allows the cells to take it up properly. Pullulan micelles can also be used to deliver genes, such as p53, in order to suppress tumor development. The pullulan protects the RNA or DNA from degradation by enzymes within the body, which enables gene therapy for the treatment of cancer. The addition of ascorbic acid to pullulan bioconjugates has demonstrated antimetastatic properties, which can improve cation-modified pullulan derivatives. There are many factors that make pullulan a suitable drug delivery platform for cancer therapeutics, including the available chemical modifications, the pH responsiveness, and the ability of pullulan to form self-assembled micelles that can shield therapeutics from the immune system. In vitro research has been conducted that synthesized pullulan acetate nanoparticles modified with folate and then loaded with epirubicin.
This study showed that the folate modification of pullulan increased the cytotoxicity of the drug and released the drug at a faster rate than unfolated pullulan acetate. In another folated pullulan system, pullulan gold nanoparticles were folated and loaded with doxorubicin; the pullulan gold nanoparticle provided pH-controlled release of the doxorubicin and demonstrated lower toxicity to non-cancer cells than doxorubicin without a carrier platform. CHP systems have been developed to deliver protein vaccines and have shown success in generating different degrees of immune response, mostly with CD4 T cells. Biotinylated pullulan acetate (BPA) has been created for its vitamin H (biotin) functionality, which helps increase the level of interaction with cancer cells. The drawback is that increasing the vitamin H increases the interaction of the nanoparticles with cancer cells but also lowers the concentration of the drug in the nanoparticle due to the altered hydrophobicity. Modifications to pullulan can also be made to enhance the controlled release of drugs, such as pullulan-g-poly(L-lactide), whose polymeric component is water-insoluble. Doxorubicin has been conjugated to pullulan through hydrazone bonds, but was shown to have lower cytotoxic activity than doxorubicin without a delivery platform. Intravitreal applications The ocular space is a difficult area to deliver drugs into, and therefore special drug delivery considerations need to be taken into account. Intravitreal injections are a common method of delivering drugs to the eye. Pullulan systems can be utilized in intravitreal injections to develop drugs that are long-lasting and therefore require less frequent injections. One study looked at different chemical linkers to pullulan to test the efficacy of these linkers in extending the release of rhodamine B (RhB). This study used ether (Pull-Et-RhB), hydrazone (Pull-Hy-RhB), and ester (Pull-Es-RhB) linkers to generate pullulan bioconjugates. Ex vivo modeling of the drug release indicated that the drug diffuses more slowly in the vitreous humor than in water. The ether bond was stable at differing pH, while the hydrazone and ester bonds released the drug faster at more acidic pH, reflecting the pH of endosomes. The Pull-Hy-RhB conjugate demonstrated that this drug delivery system was capable of delivering the drug to the retina, shown through testing of the blood in the vessels of the retina. Further studies have investigated the creation and efficacy of pullulan-dexamethasone bioconjugates for intravitreal injections. One study synthesized self-assembling pullulan nanoparticles with dexamethasone attached through hydrazone bonds. This study reiterated that drug release was fast at acidic pH mimicking that of lysosomes: at the pH of the vitreous humor the system took two weeks to release half of the drug, while at lysosomal pH this took only two days. Pharmacokinetic analysis of this bioconjugate system revealed that dexamethasone was released in the vitreous humor, where it remained for sixteen days, and that a substantial amount of the bioconjugate left the vitreous humor intact. Overall, the studies regarding pullulan bioconjugates for intravitreal injection demonstrate that pullulan can provide sustained release as well as allow the drug to reach the retina. Other applications Pullulan has many other applications.
Pullulan can be used as a scaffold material for stem cells, such as mesenchymal stem cells. Pullulan can be conjugated with photosensitive molecules for use in photodynamic therapy. Pullulan can be modified to be a contrast agent for MRI in multiple ways, such as oxidation, iron-oxide conjugates, and cation conjugates. Pullulan has been thiolated to generate mucoadhesive properties, and this mucoadhesive system has been further modified by polyaminating the pullulan to provide sustained drug release. A study developed a transdermal pullulan system capable of delivering rivastigmine tartrate in response to external electrical stimuli. Pullulan systems can be loaded with a plethora of different drugs, including anti-inflammatory, antilipidemic, and antiglycemic drugs. Pullulan systems can be used to treat heart conditions through the delivery of beta blockers and inhibitors of angiotensin-converting enzyme. Pullulan can also be utilized for bone disease, as pullulan systems can be used to deliver bisphosphonates and can help to image bone regeneration through MRI. References Pharmacology Biochemistry methods
Pullulan bioconjugate
[ "Chemistry", "Biology" ]
3,406
[ "Biochemistry methods", "Pharmacology", "Biochemistry", "Medicinal chemistry" ]
70,689,953
https://en.wikipedia.org/wiki/Chitosan%20nanoparticles
Chitosan-poly(acrylic acid) is a composite that has been increasingly used to create chitosan-poly(acrylic acid) nanoparticles. Various composite forms have emerged in which poly(acrylic acid) is synthesized together with chitosan, and these are often used in a variety of drug delivery processes. Chitosan, which already features strong biodegradability and biocompatibility, can be merged with poly(acrylic acid) to create hybrid nanoparticles that have greater adhesion qualities and improved biocompatibility. The synthesis of this material is important in various applications, since the resulting nanoparticles offer a variety of dispersal and release behaviors and the ability to encapsulate a multitude of drugs and particles. Background Research on nanoparticles, and chitosan nanoparticles in particular, grew in popularity in the early 1990s, mainly due to chitosan's biodegradability and biocompatibility. Chitosan, owing to its molecular structure, dissolves well in a variety of solvents and biological media, such as acids like formic and lactic acid. An additional benefit of chitosan is the extent to which it can be modified, for instance with other natural materials, synthetic materials, or ligands, or functionalized with various techniques; one example is its synthesis with poly(acrylic acid). The addition of poly(acrylic acid) can induce amphiphilicity and spontaneous assembly, which is important for stimuli responsiveness and for large-scale use. Structure, properties, and synthesis Chitosan Chitosan is a polysaccharide derived from chitin, composed of deacetylated glucosamine monomers and acetylated glucosamine monomers bound through β-1,4 glycosidic bonds and hydrogen bonds. The benefit of chitosan comes from its reactive groups, such as -OH and -NH2. Various fabrication mechanisms and isolation techniques can be used for the preparation of chitosan nanoparticles. Chitosan nanoparticle synthesis There are various mechanisms for chitosan nanoparticle synthesis. These mechanisms include ionic gelation/polyelectrolyte complexation, emulsion droplet coalescence, emulsion solvent diffusion, reverse micellisation, desolvation, emulsification cross-linking, nanoprecipitation, and spray-drying. Ionic gelation/polyelectrolyte complexation Ionic gelation/polyelectrolyte complexation involves combining a cationic chitosan solution with anionic tripolyphosphate and collecting the precipitate in the form of nanoparticles. Emulsion droplet coalescence Emulsion droplet coalescence involves forming chitosan nanoparticles by creating two stable emulsions with liquid paraffin, one containing a stabilizer and another containing sodium hydroxide, again with a stabilizer; mixing the two emulsions forms nanoparticles. Emulsion solvent diffusion Emulsion solvent diffusion mixes chitosan and a stabilizer into an organic solvent such as methylene chloride/acetone containing a hydrophilic drug; the acetone diffuses out and the chitosan nanoparticles are recovered via centrifugation. Reverse micellisation Reverse micellisation involves taking a lipophilic surfactant in an organic solvent and adding chitosan with a drug and a cross-linker such as glutaraldehyde. The nanoparticles are then extracted.
Desolvation Desolvation involves preparing a chitosan solution and adding a desolvating agent such as acetone together with a stabilizing solution. Because of the insolubility of chitosan in the added agent, a precipitate begins to form as the liquid surrounding the chitosan is eliminated. A crosslinker such as glutaraldehyde can be added to form the nanoparticles. Emulsification cross-linking A chitosan-based solution is dispersed in the oil phase and converted into a stabilized emulsion. A crosslinker such as glutaraldehyde can then be used to derive chitosan nanoparticles. Nanoprecipitation Nanoprecipitation involves dissolving chitosan in a solvent and pumping it into a dispersing phase; with Tween 80, nanoparticles are derived from the dispersing phase. Spray drying Spray drying involves adding chitosan to an acetic acid solution. The solution is then atomized; the droplets are mixed with a drying gas and, after further evaporation, nanoparticles are obtained. Poly(acrylic acid) Poly(acrylic acid) is polymerized acrylic acid. It is reported to have a neutral pH and beneficial crosslinking properties, owing to the charge properties of its side chains and to poly(acrylic acid) being anionic. Poly(acrylic acid) is known to have good biocompatibility with chitosan, interacting particularly with the amine groups (-NH2). Chitosan-poly(acrylic acid) nanoparticles An alternative method for the fabrication of chitosan nanoparticles involves the incorporation of polymerized acrylic groups with the chitosan. This methodology can improve the chitosan cross-linking mechanism and improve overall drug release profiles for drugs such as amoxicillin and meloxicam. Additionally, when poly(acrylic acid) is localized within the inner shell, overall drug encapsulation can be improved. Ionic gelation with radical polymerization Ionic gelation with radical polymerization starts from a chitosan solution; with the addition of the acrylic acid monomer and radical polymerization, the cationic chitosan complexes with the anion of the acrylic monomer. The nanoparticles are derived after being left to self-settle overnight, and the unreacted monomer is removed. This is the main method for the formulation of poly(acrylic acid)-based chitosan nanoparticles. Applications Biomedical applications Biomedical applications of chitosan-based nanoparticles range from cancer treatment to regenerative medicine and tissue engineering, inflammatory diseases, diabetic treatment, and the treatment of cerebral diseases, cardiovascular diseases and infectious diseases, and even vaccine delivery. Lung cancer, breast cancer, and colorectal cancer are the top three cancers in terms of frequency and are responsible for about one in three cancer cases and deaths worldwide. Chitosan-based nanoparticles provide benefits for making targeted drug delivery systems for biomedical use and improve the potential of oral administration of drugs. Drug delivery system One of the main uses of chitosan-based nanoparticles involves drug delivery devices.
The following are drugs that have been delivered with chitosan-based nanoparticles: methotrexate, fucose-conjugated chitosan, 5-fluorouracil, doxorubicin, docetaxel, paclitaxel, propranolol-HCl, CyA, insulin, indomethacin, cefazolin, isoniazid, tetracycline, didanosine, rifampicin, folate, zaltoprofen, curcumin, cisplatin, camptothecin, bupivacaine, prothionamide, hydrocortisone, albumin, Ocimum gratissimum essential oil, triphosphate, RGD peptides and morphine. The targeted systems again span various drug systems, with a primary focus on targeting cancer within specific organs such as the lung or colon. The addition of poly(acrylic acid) has shown success in improving overall gene expression and protein delivery through the ability to modify pH sensitivity, chemosensitivity, and targeting. Drug encapsulating system Another main use of chitosan-based nanoparticles is their ability to carry various drugs, organic compounds, and even inorganic analytes. These analytes include Fe3O4. A Fe3O4-based chitosan-poly(acrylic acid) nanoparticle or nanosphere can have applications such as toxic metal uptake, direct use in drug delivery systems, treatment of tumors, magnetic separation of biomolecules, and even MRI contrast enhancement. Edible coating Chitosan alone or together with putrescine has been used successfully to slow the decay of fruits for up to 12 days when held at low temperatures. Limitations and future work Continued improvement of stability, biocompatibility, degradability, and nontoxicity is needed to improve overall viability. Current limitations exist in routes of delivery, such as limited work on orally administered nanoparticles and drug delivery devices. Absorption should be further improved in chitosan-poly(acrylic acid) nanoparticles to achieve better solubility for targeted drug delivery. Further work on cell viability and cell proliferation is also needed for these nanoparticles to be used in tissue regeneration. Additionally, current limitations exist in fabrication techniques and large-scale implementation due to possible difficulties in the synthesis of chitosan-based nanoparticles. References Nanoparticles by composition Biomaterials Polysaccharides
Chitosan nanoparticles
[ "Physics", "Chemistry", "Biology" ]
2,132
[ "Biomaterials", "Carbohydrates", "Materials", "Medical technology", "Matter", "Polysaccharides" ]
70,691,232
https://en.wikipedia.org/wiki/Two-dimensional%20space
A two-dimensional space is a mathematical space with two dimensions, meaning points have two degrees of freedom: their locations can be locally described with two coordinates or they can move in two independent directions. Common two-dimensional spaces are often called planes, or, more generally, surfaces. These include analogs to physical spaces, like flat planes, and curved surfaces like spheres, cylinders, and cones, which can be infinite or finite. Some two-dimensional mathematical spaces are not used to represent physical positions, like an affine plane or complex plane. Flat The most basic example is the flat Euclidean plane, an idealization of a flat surface in physical space such as a sheet of paper or a chalkboard. On the Euclidean plane, any two points can be joined by a unique straight line along which the distance can be measured. The space is flat because any two lines crossed by a third line perpendicular to both of them are parallel, meaning they never intersect and stay at uniform distance from each other. Curved Two-dimensional spaces can also be curved, for example the sphere and hyperbolic plane, sufficiently small portions of which appear like the flat plane, but on which straight lines that are locally parallel do not stay equidistant from each other but eventually converge or diverge, respectively. Two-dimensional spaces with a locally Euclidean concept of distance but which can have non-uniform curvature are called Riemannian surfaces (not to be confused with Riemann surfaces). Some surfaces are embedded in three-dimensional Euclidean space or some other ambient space and inherit their structure from it; for example, ruled surfaces such as the cylinder and cone contain a straight line through each point, and minimal surfaces locally minimize their area, as is done physically by soap films. Relativistic Lorentzian surfaces look locally like a two-dimensional slice of relativistic spacetime with one spatial and one time dimension; constant-curvature examples are the flat Lorentzian plane (a two-dimensional subspace of Minkowski space) and the curved de Sitter and anti-de Sitter planes. Non-Euclidean Other types of mathematical planes and surfaces modify or do away with the structures defining the Euclidean plane. For example, the affine plane has a notion of parallel lines but no notion of distance; however, signed areas can be meaningfully compared, as they can in a more general symplectic surface. The projective plane does away with both distance and parallelism. A two-dimensional metric space has some concept of distance, but it need not match the Euclidean version. A topological surface can be stretched, twisted, or bent without changing its essential properties. An algebraic surface is a two-dimensional set of solutions of a system of polynomial equations. Information-holding Some mathematical spaces have additional arithmetical structure associated with their points. A vector plane is an affine plane whose points, called vectors, include a special designated origin or zero vector. Vectors can be added together or scaled by a number, and optionally have a Euclidean, Lorentzian, or Galilean concept of distance. The complex plane, hyperbolic number plane, and dual number plane each have points which are considered numbers themselves, and can be added and multiplied. A Riemann surface or Lorentz surface appears locally like the complex plane or hyperbolic number plane, respectively.
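The three notions of distance mentioned above for vector planes can be made concrete by the quadratic form each geometry assigns to a displacement. The following LaTeX sketch uses standard sign conventions; the choice of symbols and of which coordinate is timelike is an assumption of this sketch, not taken from the article.

% Squared "interval" assigned to a displacement (x, y) in three plane geometries
\begin{align*}
\text{Euclidean:}  \quad & s^2 = x^2 + y^2 && \text{preserved by rotations} \\
\text{Lorentzian:} \quad & s^2 = x^2 - y^2 && \text{preserved by boosts ($y$ timelike)} \\
\text{Galilean:}   \quad & s^2 = y^2      && \text{only the time coordinate is invariant}
\end{align*}

Rotations, Lorentz boosts, and Galilean shears respectively preserve these forms, which is what distinguishes the three flat plane geometries from one another.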
Definition and meaning Mathematical spaces are often defined or represented using numbers rather than geometric axioms. One of the most fundamental two-dimensional spaces is the real coordinate space, denoted ℝ², consisting of pairs of real-number coordinates. Sometimes the space represents arbitrary quantities rather than geometric positions, as in the parameter space of a mathematical model or the configuration space of a physical system. Non-real numbers More generally, other types of numbers can be used as coordinates. The complex plane is two-dimensional when considered to be formed from real-number coordinates, but one-dimensional in terms of complex-number coordinates. A two-dimensional complex space – such as the two-dimensional complex coordinate space, the complex projective plane, or a complex surface – has two complex dimensions, which can alternately be represented using four real dimensions. A two-dimensional lattice is an infinite grid of points which can be represented using integer coordinates. Some two-dimensional spaces, such as finite planes, have only a finite set of elements. Further reading Dimension Multi-dimensional geometry 2 (number)
Two-dimensional space
[ "Physics" ]
886
[ "Geometric measurement", "Dimension", "Physical quantities", "Theory of relativity" ]
70,691,371
https://en.wikipedia.org/wiki/Meta-waveguide
In photonics, a meta-waveguide is a physical structure that guides electromagnetic waves using engineered functional subwavelength structures. Meta-waveguides result from combining the fields of metamaterials and metasurfaces with integrated optics. The design of the subwavelength architecture allows exotic waveguiding phenomena to be explored. Meta-waveguides can be classified by waveguide platform or by design method. If classified by underlying waveguide platform, the engineered subwavelength structures can be combined with dielectric waveguides, optical fibers, or plasmonic waveguides. If classified by design method, meta-waveguides can be designed either primarily by physical intuition or by computer-algorithm-based inverse design methods. Meta-waveguides can provide new degrees of design freedom to the available structural library for optical waveguides in integrated photonics. Advantages can include enhancing the performance of conventional waveguide-based integrated optical devices and creating novel device functionalities. Applications of meta-waveguides include beam/polarization splitting, integrated waveguide mode converters, versatile waveguide couplers, lab-on-fiber sensing, nano-optic endoscope imaging, on-chip wavefront shaping, structured-light generation, and optical neural networks. The meta-structures can also be further integrated with van der Waals materials to add more functionalities and reconfigurability. References Photonics Nanotechnology Applied and interdisciplinary physics Electromagnetic radiation
Meta-waveguide
[ "Physics", "Materials_science", "Engineering" ]
310
[ "Physical phenomena", "Applied and interdisciplinary physics", "Electromagnetic radiation", "Materials science", "Radiation", "Nanotechnology" ]
69,178,228
https://en.wikipedia.org/wiki/Garrod%20Lecture%20and%20Medal
The Garrod Lecture and Medal is an award presented by the British Society for Antimicrobial Chemotherapy. It was established in 1982 and named for L. P. Garrod. The medal is made of silver by the Birmingham Mint. The recipient of the award is considered by the society as having international authority in the field of antimicrobial chemotherapy. They are invited to deliver an accompanying lecture and receive honorary membership of the Society. Recipients References British lecture series Lists of physicians Medicine awards Medical education in the United Kingdom Medical lecture series 1982 establishments in the United Kingdom Recurring events established in 1982 Microbiology
Garrod Lecture and Medal
[ "Chemistry", "Technology", "Biology" ]
123
[ "Science and technology awards", "Microbiology", "Medicine awards", "Microscopy" ]
61,289,418
https://en.wikipedia.org/wiki/Rhodium%20trifluoride
Rhodium(III) fluoride or rhodium trifluoride is the inorganic compound with the formula RhF3. It is a red-brown, diamagnetic solid. Synthesis and structure The compound is prepared by fluorination of rhodium trichloride: 2 RhCl3 + 3 F2 → 2 RhF3 + 3 Cl2 It can also be obtained by direct combination of the elements: 2 Rh + 3 F2 → 2 RhF3 Anhydrous RhF3 is insoluble in water and does not react with it, but hydrates can be prepared by adding hydrofluoric acid to aqueous rhodium(III) solutions. According to X-ray crystallography, the compound adopts the same structure as vanadium trifluoride, wherein the metal achieves octahedral coordination geometry. References Fluorides Platinum group halides Rhodium(III) compounds
Rhodium trifluoride
[ "Chemistry" ]
173
[ "Fluorides", "Salts" ]
61,290,269
https://en.wikipedia.org/wiki/Melanie%20Perkins
Melanie Perkins (born 1987) is an Australian technology entrepreneur who is the co-founder and chief executive officer of Canva. She owns 18% of the company. Perkins is one of the youngest female CEOs of a tech start-up valued at over US$1 billion. Perkins has been ranked among Australia's richest women, with an estimated net worth of US$4.4 billion. In 2023, she ranked 89th in Forbes' list of the "World's 100 most powerful women" and 92nd in Fortune's list of Most Powerful Women. Early life Melanie Perkins was born in 1987 in Perth, Western Australia. She is the daughter of an Australian-born teacher and a Malaysian engineer of Filipino and Sri Lankan descent. She attended Sacred Heart College, a secondary school located in the northern Perth suburb of Sorrento. In high school, Perkins had aspirations of becoming a professional figure skater and would routinely wake up at 4:30 am to train. By the age of fourteen, she had started her first business, selling handmade scarves at shops and markets throughout Perth. She credits this experience with developing her entrepreneurial drive, as ‘she never forgot the freedom and excitement from building a business.’ After high school, Perkins enrolled at The University of Western Australia, majoring in communications, psychology and commerce. At this time, Perkins was also a private tutor for students learning graphic design. She noticed the difficulties students had in learning design programs such as Adobe Photoshop: it would often take students a semester at university to be introduced to the basic features of these complex design programs. Perkins thought there was a business opportunity in making the design process easier. Her idea was to make a design platform where no technical experience was required. She dropped out of university at age 19 to pursue her first business, Fusion Books, with Cliff Obrecht. Career Fusion Books Fusion Books was founded by Perkins and Obrecht in 2007. Fusion Books allowed students to design their own school yearbooks using a simple drag-and-drop tool equipped with a library of design templates that could be populated with photos, illustrations, and fonts. Originally, Perkins wanted to develop software that made the entire design process easier, but due to competition with large companies and her lack of resources, she concluded ‘it did not seem the logical thing to do’. Perkins's mother was a teacher who would also co-ordinate the school yearbook. Perkins saw how much time was required to design a yearbook and thought the high level of consumer friction would make yearbooks a good niche to test the idea for Canva. Started in the Duncraig living room of Perkins's mother, the business grew as Obrecht cold-called schools in an attempt to win new clients for Fusion Books. Their parents would often help with printing the yearbooks. Over five years, Fusion Books grew into the largest yearbook company in Australia and expanded into France and New Zealand. Formation of Canva Perkins and Obrecht were originally based in Perth. Perkins claims that she was rejected by over 100 local investors in Perth. In 2011, prominent investor Bill Tai visited Perth to judge a start-up competition. Perkins and Obrecht pitched Tai the initial idea for Canva over dinner. Other venture capitalists were also present, including Rick Baker from Blackbird Ventures. They received no funding but became regular fixtures at gatherings hosted by Tai for investors and start-up founders.
Some of these gatherings took place in Silicon Valley, where Perkins and Obrecht met Lars Rasmussen, co-founder of Google Maps. He expressed interest in the idea but told the founders to ‘put everything on hold’ until they found a tech team of the calibre required. Rasmussen then became the tech adviser to the business, and he introduced Perkins and Obrecht to Cameron Adams, an ex-Google employee with relevant technical expertise. Adams was initially not interested in joining the business, as he was starting his own venture called fluent.io, software attempting to disrupt email. Adams was in Silicon Valley trying to raise funds for his start-up when Perkins sent him another email asking if he wanted to join the business. After that email, he agreed to join Canva, becoming its third founder and chief product officer. Perkins is the CEO of one of the few ‘unicorn’ start-ups that are profitable. In November 2023, Forbes estimated her net worth at US$3.6 billion. She remained CEO of Canva and owned around 18% of the company. She was named to the Financial Review Rich List of 2023. Women in start-ups Controversy surrounds the gender disparity in the technology industry as well as among start-ups, with only one in four start-ups founded by a woman. Perkins is among the 2 percent of female CEOs of venture-backed companies. She wrote an article for people who feel like 'they are on the outside' and discussed her journey as a young entrepreneur in order to encourage people from diverse backgrounds to pursue big dreams and concentrate on their goals. Perkins has implemented policies at Canva that eliminate bias in the hiring process, which have resulted in Canva obtaining 41 percent female representation, significantly higher than the industry average of 28 percent. Personal life Perkins took an interest in kite surfing when she discovered many prominent venture capitalists use it as a way to network with founders. She would regularly kite-surf with venture capitalist Bill Tai. Perkins has also travelled the world extensively and credits a trip to India as a life-changing experience. In 2019, Obrecht proposed to Perkins on a holiday in Turkey's backpacker-friendly Cappadocia region. The engagement ring cost $30. The couple have been critical of materialism, with Obrecht stating, ‘what is the point of hoarding stuff’. They have expressed a desire to donate most of their fortune to charity. Perkins and Obrecht married in January 2021 on Rottnest Island. Later that year, they joined the Giving Pledge, committing at least half of their fortune to philanthropic purposes. Net worth In 2020, Forbes named Perkins one of the world's "Top Under 30 of the Decade". Perkins first appeared on The Australian Financial Review Rich List in 2020 with a net worth of A$3.43 billion. On the 2023 Rich List, The Australian Financial Review assessed her and Obrecht's joint net worth at A$13.18 billion, making them the ninth wealthiest Australians. As of 31 January 2022, Forbes estimated Perkins' personal net worth at A$9.21 billion (US$6.5bn). Notes : Perkins' net worth is assessed in the Financial Review Rich List as being held jointly with her spouse and business partner, Cliff Obrecht.
References External links Living people 1987 births Australian billionaires Australian company founders Australian people of Filipino descent Australian people of Malaysian descent Australian people of Sri Lankan descent 21st-century Australian businesswomen 21st-century Australian businesspeople Australian women chief executives Australian women company founders Female billionaires Businesspeople from Perth, Western Australia Technology company founders
Melanie Perkins
[ "Technology" ]
1,416
[ "Lists of people in STEM fields", "Proprietary technology salespersons" ]
62,555,539
https://en.wikipedia.org/wiki/KLJN%20Secure%20Key%20Exchange
Random-resistor-random-temperature Kirchhoff-law-Johnson-noise key exchange, also known as RRRT-KLJN or simply KLJN, is an approach for distributing cryptographic keys between two parties that claims to offer unconditional security. This claim, which has been contested, is significant, as the only other key exchange approach claiming to offer unconditional security is quantum key distribution. The KLJN secure key exchange scheme was proposed in 2005 by Laszlo Kish and Granqvist. It has the advantage over quantum key distribution that it can be performed over a metallic wire with just four resistors, two noise generators, and four voltage measuring devices; this equipment is inexpensive and can be readily manufactured. It has the disadvantage that several attacks against KLJN have been identified which must be defended against. "Given that the amount of effort and funding that goes into Quantum Cryptography is substantial (some even mock it as a distraction from the ultimate prize which is quantum computing), it seems to me that the fact that classic thermodynamic resources allow for similar inherent security should give one pause," wrote Henning Dekant, the founder of the Quantum Computing Meetup, in April 2013. The Cybersecurity Curricula 2017, a joint project of the Association for Computing Machinery, the IEEE Computer Society, the Association for Information Systems, and the International Federation for Information Processing Technical Committee on Information Security Education (IFIP WG 11.8), recommends teaching the KLJN scheme as part of teaching "Advanced concepts" in its knowledge unit on cryptography. See also/Further reading http://www.scholarpedia.org/article/Secure_communications_using_the_KLJN_scheme http://noise.ece.tamu.edu/research_files/research_secure.htm Science: Simple Noise May Stymie Spies without Quantum Weirdness, Adrian Cho, September 30, 2005. http://noise.ece.tamu.edu/news_files/science_secure.pdf References Cryptography
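The core statistical idea (a shared wire whose Johnson-noise level reveals only the parallel resistance, so the mixed resistor states LH and HL are indistinguishable to a line eavesdropper) can be sketched in a toy Python simulation. The resistor values, bandwidth, sample count, classification threshold, and bit convention below are illustrative assumptions, not parameters from the KLJN literature.

import random
import statistics

k_B, T, B = 1.38e-23, 300.0, 1e3   # Boltzmann constant, temperature (K), bandwidth (Hz)
R_L, R_H = 1e3, 1e5                # assumed low/high resistor values (ohms)
N = 20_000                         # noise samples measured per clock period

def johnson_std(R):
    # RMS Johnson-noise voltage of resistor R over bandwidth B: sqrt(4 k T R B)
    return (4 * k_B * T * R * B) ** 0.5

def wire_variance(Ra, Rb, rng):
    # Measured mean-square wire voltage for one clock period.
    # The two noise sources see each other through a voltage divider,
    # so the expected variance is 4 k T B * (Ra*Rb/(Ra+Rb)), i.e. the
    # parallel resistance is all that the wire statistics reveal.
    samples = []
    for _ in range(N):
        Ua = rng.gauss(0, johnson_std(Ra))               # Alice's noise generator
        Ub = rng.gauss(0, johnson_std(Rb))               # Bob's noise generator
        samples.append((Ua * Rb + Ub * Ra) / (Ra + Rb))  # wire node voltage
    return statistics.pvariance(samples)

rng = random.Random(42)
R_mixed = R_L * R_H / (R_L + R_H)   # parallel resistance of the LH/HL states
key_bits = []
for _ in range(32):                 # attempt 32 clock periods
    Ra = rng.choice([R_L, R_H])     # Alice's secret choice
    Rb = rng.choice([R_L, R_H])     # Bob's secret choice
    R_eff = wire_variance(Ra, Rb, rng) / (4 * k_B * T * B)
    if abs(R_eff - R_mixed) < 0.2 * R_mixed:   # keep only the mixed periods
        # Eve sees identical statistics for LH and HL; Alice infers Bob's
        # resistor from her own choice (here we read it directly for brevity).
        key_bits.append(1 if Rb == R_H else 0)
print("shared key bits:", key_bits)

In the retained mixed periods both arrangements produce identical wire statistics, which is all an eavesdropper on the line can measure, while Alice and Bob each resolve the ambiguity from knowledge of their own resistor choice.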
KLJN Secure Key Exchange
[ "Mathematics", "Engineering" ]
438
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
62,557,527
https://en.wikipedia.org/wiki/Fresh%20water
Fresh water or freshwater is any naturally occurring liquid or frozen water containing low concentrations of dissolved salts and other total dissolved solids. The term excludes seawater and brackish water, but it does include non-salty mineral-rich waters, such as chalybeate springs. Fresh water may encompass frozen water and meltwater in ice sheets, ice caps, glaciers, snowfields and icebergs; natural precipitation such as rainfall, snowfall, hail, sleet and graupel; and surface runoff that forms inland bodies of water such as wetlands, ponds, lakes, rivers and streams, as well as groundwater contained in aquifers, subterranean rivers and lakes. Water is critical to the survival of all living organisms. Many organisms can thrive on salt water, but the great majority of vascular plants and most insects, amphibians, reptiles, mammals and birds need fresh water to survive. Fresh water is the water resource of the most immediate use to humans. Fresh water is not always potable water, that is, water safe for humans to drink. Much of the Earth's fresh water (on the surface and in groundwater) is to a substantial degree unsuitable for human consumption without treatment. Fresh water can easily become polluted by human activities or by naturally occurring processes such as erosion. Fresh water makes up less than 3% of the world's water resources, and just 1% of that is readily available. About 70% of the world's freshwater reserves are frozen in Antarctica. Just 3% of fresh water is extracted for human consumption. Agriculture uses roughly two thirds of all fresh water extracted from the environment. Fresh water is a renewable and variable, but finite, natural resource. It is replenished through the natural water cycle, in which water from seas, lakes, forests, land, rivers and reservoirs evaporates, forms clouds, and returns inland as precipitation. Locally, however, if more fresh water is consumed through human activities than is naturally restored, the result may be reduced fresh water availability (water scarcity) from surface and underground sources, which can cause serious damage to surrounding and associated environments. Water pollution also reduces the availability of fresh water. Where available water resources are scarce, humans have developed technologies such as desalination and wastewater recycling to stretch the available supply further. However, given the high capital and running costs and, especially for desalination, the energy requirements, these remain mostly niche applications. A non-sustainable alternative is using so-called "fossil water" from underground aquifers. Some of these aquifers formed hundreds of thousands or even millions of years ago, when local climates were wetter (for example during one of the Green Sahara periods), and are not appreciably replenished under current climatic conditions, at least compared to drawdown; such aquifers are essentially non-renewable resources, comparable to peat or lignite, which are also still being formed in the current era but orders of magnitude more slowly than they are mined. Definitions Numerical definition Fresh water can be defined as water with less than 500 parts per million (ppm) of dissolved salts. Other sources give higher upper salinity limits for fresh water, e.g. 1,000 ppm or 3,000 ppm.
Systems Fresh water habitats are classified as lentic systems, the still waters including ponds, lakes, swamps and mires; lotic systems, which are running waters; or groundwaters, which flow in rocks and aquifers. In addition, there is a zone which bridges groundwater and lotic systems: the hyporheic zone, which underlies many larger rivers and can contain substantially more water than is seen in the open channel. It may also be in direct contact with the underlying underground water. Sources The original source of almost all fresh water is precipitation from the atmosphere, in the form of mist, rain and snow. Fresh water falling as mist, rain or snow contains materials dissolved from the atmosphere and material from the sea and land over which the rain-bearing clouds have traveled. The precipitation eventually leads to the formation of the water bodies that humans use as sources of fresh water: ponds, lakes, rivers, streams, and groundwater contained in underground aquifers. In coastal areas fresh water may contain significant concentrations of salts derived from the sea if windy conditions have lifted drops of seawater into the rain-bearing clouds. This can give rise to elevated concentrations of sodium, chloride, magnesium and sulfate, as well as many other compounds in smaller concentrations. In desert areas, or areas with impoverished or dusty soils, rain-bearing winds can pick up sand and dust, which can be deposited elsewhere in precipitation and measurably contaminate the freshwater flow both with insoluble solids and with the soluble components of those soils. Significant quantities of iron may be transported in this way, including the well-documented transfer of iron-rich rainfall falling in Brazil derived from sand-storms in the Sahara in north Africa. Research in Africa has revealed that the controls on groundwater are complex and do not correspond directly to a single factor. Groundwater showed greater resilience to climate change than expected, and areas with an aridity index around a threshold of 0.34 to 0.39 exhibited significant sensitivity to climate change. Land use can affect infiltration and runoff processes. The years of greatest recharge coincided with the largest precipitation anomalies, such as during El Niño and La Niña events. Three precipitation-recharge sensitivities were distinguished: in super-arid areas (aridity index above 0.67), there was constant recharge with little variation with precipitation; in most sites (arid, semi-arid, humid), annual recharge increased once annual precipitation exceeded a certain threshold; and in complex areas down to an aridity index of 0.1 (focused recharge), recharge was very inconsistent (low precipitation but high recharge). Understanding these relationships can support the development of sustainable strategies for water collection. This understanding is particularly crucial in Africa, where water resources are often scarce and climate change poses significant challenges. Water distribution Saline water in oceans, seas and saline groundwater makes up about 97% of all the water on Earth. Only 2.5–2.75% is fresh water, including 1.75–2% frozen in glaciers, ice and snow and 0.5–0.75% as fresh groundwater and soil moisture. The water table is the level below which all spaces are filled with water, while the area above this level, where spaces in the rock and soil contain both air and water, is known as the unsaturated zone. The water in this unsaturated zone is referred to as soil moisture.
Below the water table, the entire region is known as the saturated zone, and the water in this zone is called groundwater. Groundwater plays a crucial role as the primary source of water for various purposes including drinking, washing, farming, and manufacturing, and even when not directly used as a drinking water supply it remains vital to protect, because it can carry contaminants and pollutants from the land into lakes and rivers, which constitute a significant share of other people's freshwater supply. Groundwater is almost ubiquitous underground, residing in the spaces between particles of rock and soil or within crevices and cracks in rock, typically close to the surface. Freshwater lakes contain about 87% of the fresh surface water, including 29% in the African Great Lakes, 22% in Lake Baikal in Russia, 21% in the North American Great Lakes, and 14% in other lakes. Swamps hold most of the balance, with only a small amount in rivers, most notably the Amazon River. The atmosphere contains 0.04% water. In areas with no fresh water on the ground surface, fresh water derived from precipitation may, because of its lower density, overlie saline groundwater in lenses or layers. Most of the world's fresh water is frozen in ice sheets, and many areas, such as deserts, have very little fresh water. Freshwater ecosystems Water is critical to the survival of all living organisms. Some organisms can use salt water, but many, including the great majority of higher plants and most mammals, must have access to fresh water to live. Some terrestrial mammals, especially desert rodents, appear to survive without drinking, but they generate water through the metabolism of cereal seeds, and they also have mechanisms to conserve water to the maximum degree. Challenges The increase in the world population and the increase in per capita water use put increasing strain on the finite availability of clean fresh water. The response by freshwater ecosystems to a changing climate can be described in terms of three interrelated components: water quality, water quantity or volume, and water timing. A change in one often leads to shifts in the others as well. Limited resource Minimum streamflow An important concern for hydrological ecosystems is securing minimum streamflow, especially preserving and restoring instream water allocations. Fresh water is an important natural resource necessary for the survival of all ecosystems. Water pollution Society and culture Human uses Uses of water include agricultural, industrial, household, recreational and environmental activities. Global goals for conservation The Sustainable Development Goals are a collection of 17 interlinked global goals designed to be a "blueprint to achieve a better and more sustainable future for all". Targets on fresh water conservation are included in SDG 6 (Clean water and sanitation) and SDG 15 (Life on land). For example, Target 6.4 is formulated as "By 2030, substantially increase water-use efficiency across all sectors and ensure sustainable withdrawals and supply of freshwater to address water scarcity and substantially reduce the number of people suffering from water scarcity."
Another target, Target 15.1, is: "By 2020, ensure the conservation, restoration and sustainable use of terrestrial and inland freshwater ecosystems and their services, in particular forests, wetlands, mountains and drylands, in line with obligations under international agreements." See also Notes Subnotes References External links The World Bank's work and publications on water resources U.S. Geological Survey Fresh Water National Geographic Aquatic ecology Hydrology Water supply
Fresh water
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
2,094
[ "Hydrology", "Fresh water", "Ecosystems", "Environmental engineering", "Aquatic ecology", "Water supply" ]
77,944,540
https://en.wikipedia.org/wiki/Enzo%20Marinari
Enzo Marinari (born on July 7, 1957, in Avellino) is an Italian theoretical and computational physicist. He has helped introduce several new algorithms in computational physics, such as Parallel Tempering, the SU(N) updating method and Constraint Allocation Flux Balance Analysis (CAFBA). He is a professor at the Physics Department of the Sapienza University of Rome. Education and career Enzo Marinari received his physics degree at the Sapienza University of Rome in 1980. Until 1984 he worked as a staff scientist at the Theoretical Physics Institute of the CEA Saclay, in France. In 1988 he was appointed Associate Professor at the University of Rome Tor Vergata, and in 1994 he became a full professor at the University of Cagliari. Since 1999 he has been a full professor at the Physics Department of the Sapienza University of Rome in Italy. From 1992 until 1994 he was concurrently the Physics Director for the Northeast Parallel Architecture Center (NPAC) in Syracuse, NY, USA. During the period 2004–2011, he was the Scientific Director for physics of the Institute for Biocomputation and Physics of Complex Systems (BIFI) at the University of Zaragoza, Spain. During his career Enzo Marinari has done research in different fields of physics, such as particle physics (QCD, string theory), statistical physics (spin glasses, disordered and complex systems, phase transitions, temperature chaos) and biophysics (metabolic and neural networks). He has been one of the founding members of the Spanish-Italian Janus collaboration and of the Italian APE collaboration, both promoting the use of computational methods in physics research. He has written and edited several books and plays an active role in explaining science and its applications on mainstream media channels. Recognitions In 1978 and 1979 Enzo Marinari received the Borsa Persico of the Accademia dei Lincei. In 1988 he was named best physicist under the age of 35 by the Accademia dei Lincei. In 1992 he received an essay prize from the Gravity Research Foundation. References External links Personal Homepage Living people 1957 births 20th-century Italian physicists 21st-century Italian physicists Academic staff of the Sapienza University of Rome Statistical physicists
Enzo Marinari
[ "Physics" ]
458
[ "Statistical physicists", "Statistical mechanics" ]
77,952,999
https://en.wikipedia.org/wiki/Saint%20Petersburg%20State%20University%20Mathematics%20and%20Mechanics%20Faculty
The Saint Petersburg State University Mathematics and Mechanics Faculty is a research and education center in the fields of mathematics, mechanics, astronomy, and computer science. Early history In 1701 Tsar Peter I issued a decree founding a school of mathematical and navigation sciences. In 1724 the Russian Academy of Sciences, together with its university and the Academic Gymnasium, was founded. This marked the beginning of the famous Saint Petersburg Mathematical School. When Saint Petersburg University was founded in 1819, four departments were created in the mathematical sciences: pure mathematics, applied mathematics, astronomy, and physics. Gradually expanding, this department survived for over a century. By 1930, the Faculty of Physics and Mathematics had become a formal union of independent departments: mathematics, mechanics, astronomy-geodesy, physics, and geography (in 1929–1930, the State University Institute of Chemistry had already separated from it). In 1931, the university's faculties were abolished and replaced by sectors: mathematics and mechanics, physics, geophysics, soil science and botany, physiology, and zoology. In 1932–33, the sectors were transformed back into faculties: mathematics and mechanics, physics, biology, geology-soil-geography, and chemistry. History The first dean of the Faculty of Mathematics and Mechanics (technically, head of the corresponding sector) in 1931 was a postgraduate student of Grigorii Fikhtengol'ts, O. A. Beloglavek. In 1933, Mikhail Subbotin became the dean, leading the faculty until World War II. In February 1942 Professor Nikolai Roze was appointed dean but was arrested and repressed shortly thereafter. In 1945 the faculty was granted the premises of the former Bestuzhev Courses (10th Line, 33). A branch of the faculty was located on the 14th Line of Vasilyevsky Island, building 29. In 1957 a Computing Center was established at the faculty. In 1979 the faculty moved to a campus in the suburbs of Saint Petersburg, in the Petergof district. Until the 2011 reorganization the faculty included several scientific institutions, among them the Research Institute of Information Technology. In 2019 the Faculty of Mathematics and Computer Science separated from the Faculty of Mathematics and Mechanics, taking with it the building on the 14th Line of Vasilyevsky Island. Departments The faculty consists of 22 departments, one interfaculty department, postgraduate and doctoral programs, the Intel research and educational laboratory, 9 service departments, a scientific library, an archive, and the scientific center "Dynamics." As of October 15, 2009, the faculty had 378 teachers, including 126 professors and 178 associate professors. Deans Olga Beloglavek (1931–1932) Mikhail Subbotin (1933–1941) Nikolai Roze (1942) Kirill Ogorodnikov (1942–1948) Nikolai Yerugin (1949) Pyotr Gorshkov (1949–1951) Dmitry Faddeev (1952–1954) Nikolai Polyakhov (1954–1965) Sergey Vallander (1965–1973) Zenon Borevich (1973–1984) Sergey Ermakov (1984–1988) Gennady Leonov (1988–2018) Alexander Razov (2018–2022) Elena Kustova (acting dean, 2022–present) Alumni Mikhail Gromov Grigori Perelman (1982–1987) References External links Official website of the faculty Student website of the Faculty of Mathematics and Mechanics at SPbSU Mathematics and Mechanics Weeks I. Grekova, Leningrad University in the 1920s S. 
Ivanov "From the History of the Faculty of Mathematics and Mechanics" Saint Petersburg: Everest – Third Pole, 1997 Nazarov A.I., "On the Bachelor's Curriculum in Mathematics at Saint Petersburg State University," Mathematics in Higher Education, No. 11, 2013, pp. 63–66 Leonov, G.A., Terekhov, A.N., Novikov, B.A., Kruk, E.A., Nesterov, V.M., "Creation of a Scientific and Educational IT Cluster at the Faculty of Mathematics and Mechanics at SPbSU Based on Modern Fundamental Mathematics," Computer Tools in Education, No. 2: 42–57, 2017 Minutes of the SPbSU Academic Council Meeting (27.05.2019): On the Initiative to Create the Faculty of Mathematics and Computer Science Mathematics departments Mechanical engineering schools
Saint Petersburg State University Mathematics and Mechanics Faculty
[ "Engineering" ]
892
[ "Mechanical engineering schools", "Mechanical engineering organizations", "Engineering universities and colleges" ]
74,960,351
https://en.wikipedia.org/wiki/Arturo%20A.%20Keller
Arturo A. Keller is a civil and environmental engineer and an academic. He is a professor at the Bren School of Environmental Science & Management at the University of California, Santa Barbara. Keller is best known for his work on water quality and resource management, primarily focusing on emerging contaminants as well as creating technologies and management strategies to address water pollution. His work is highly cited, with over 23,300 citations. He is the recipient of the 2015 Agilent Thought Leadership Award for his contributions towards the contemporary understanding of the potential environmental implications of nanotechnology, with a specific focus on its impact within agricultural systems. Education Keller obtained a B.S. in Chemical Engineering and a B.A. in Chemistry from Cornell University in 1980. In 1992, he completed his M.S. in Civil (Environmental) Engineering, followed by a PhD in Civil (Environmental) Engineering in 1996, both from Stanford University. Career Keller started his academic career in 1996 by joining the University of California, Santa Barbara. There he held multiple appointments, serving as an assistant professor at the Bren School of Environmental Science and Management from 1996 to 2002 and as an associate professor from 2002 to 2006. Since 2006, he has been a professor, and in 2023 he was promoted to the rank of Distinguished Professor. From 1992 to 1996, Keller worked as a Research Associate in the Environmental Division at the Electric Power Research Institute (EPRI). He co-directed the UC Center on the Environmental Implications of Nanotechnology from 2008 to 2020. He also co-directed the USEPA-funded Chemical Life Cycle Collaborative between 2014 and 2019, where the team developed a framework to predict early life-cycle impacts of new chemicals based on molecular structure, applications, and use characteristics. Research Keller has contributed to the management of the Santa Ana River basin and the establishment of a nutrient trading program for the Ohio River Basin, which earned him recognition through a 2015 US Water Prize. His group received a grant from USEPA and developed a framework employing artificial intelligence, specifically machine learning, alongside other predictive techniques for rapidly conducting risk assessments for both novel and pre-existing chemicals. He also developed the first numerical model, ChemFate, capable of accommodating diverse chemical classes within one unified framework. He has authored numerous publications spanning the fields of water quality and resource management, environmental engineering, and the fate and toxicity of nanomaterials as well as their effects on crops. Environmental science and engineering Keller's environmental sciences research has focused on developing methods for quantifying nanomaterial use and release, both at the global and regional levels. His collaborative work with Suzanne McFerran and others provided a global assessment of likely engineered nanomaterial (ENM) emissions into the environment and landfills, revealing their dominant types, applications, and estimated distribution in various environmental compartments. To estimate ENM concentrations at global, regional, national, and local levels, he used a life-cycle approach and material flow analysis, with examples such as the San Francisco Bay area, addressing their relevance for industry, regulators, and toxicologists. 
In a 2014 study with Anastasiya Lazareva, he estimated ENM release from different uses, in particular personal care products, developed an environmental release model for ENMs in major cities, highlighting the influence of local factors on release, and found that ENM concentrations would vary significantly across cities due to the local conditions that control the fate of ENMs. In 2023, his team evaluated the potential implications of nanotechnology from 2020 to 2030 and found a projected rapid pace of introduction of novel nanomaterials in applications such as renewable energy generation and storage, but that personal care products continue to represent the most significant release to the environment. Some of his current work investigates the life cycle of these materials as they are processed in water treatment facilities and accumulate in biosolids. In collaboration with Peng Wang, Keller and his team have developed a novel class of magnetic nanomaterials, Mag-PCMAs, that can be used to treat water with a wide range of contaminants, including many organic pollutants, oxyanions such as perchlorate, and metals. More recently, he and Qian Gao demonstrated the use of these novel nanoparticles for water disinfection, removing pathogens while allowing the disinfectant to be reused, thereby reducing cost and environmental impacts. Key to the eventual use of nanotechnology for water treatment will be its effectiveness and cost-competitiveness, which were assessed by Keller, Adeyemi Adeleye and other colleagues. With these concepts in mind, he and Victoria Broje developed an advanced oil skimmer for collecting oil from seawater after an oil spill. Fate and toxicity of different classes of nanomaterials Keller has focused on the fate and toxicity of different classes of nanomaterials. His collaborative work with Hongtao Wang and others explored the conditions that increase or decrease the likelihood of exposure to ENMs, particularly in the aquatic environment. Studies of the behavior of well-known ENMs, such as titanium dioxide (TiO2), zinc oxide (ZnO), and cerium dioxide (CeO2), within aqueous matrices commonly encountered in realistic environmental settings, such as freshwater, groundwater, and estuarine and marine waters, demonstrated the major influence of water characteristics such as pH, natural organic matter, and ionic strength (water hardness and salinity). Furthermore, working with Adeyemi Adeleye and others, they demonstrated that microscopic organisms such as phytoplankton and microbes can release extracellular polymeric substances that play a key role in determining how ENMs behave in natural waters. Other studies showed that ENMs are very likely to form aggregates with natural sediments in water; in fact, this can be used as a "cleansing" mechanism to remove ENMs from contaminated water by adding clay particles. In 2014, he and his colleague Kendra Garner performed an analysis of publications to identify emerging patterns for ENMs in the environment, assessing the potential exposure and toxicity of the most widely used ENMs and ranking them from high to low risk. These studies led to the development of the nanoFate model, which can be used to assess the predicted environmental concentrations of ENMs in different regions under a variety of conditions, and which considers the dynamics of ENM release as well as local climate and hydrology. 
Keller has also worked closely with ecotoxicologists to investigate the health effects of ENMs on different aquatic organisms, such as marine phytoplankton, sea urchins, daphnids, and mussels. These studies have demonstrated that some ENMs pose a health risk to diverse organisms at higher concentrations, typically above predicted environmental concentrations. For example, TiO2 nanoparticles are phototoxic to marine phytoplankton, while ZnO nanoparticles notably inhibited their growth. Mussels are filter feeders and can thus remove large numbers of particles from water, including ENMs, which can result in transfer of ENMs up the food chain. Eventually, the results of several toxicity studies on a wide range of aquatic species were assessed using species sensitivity distributions for nanomaterials, a tool developed by USEPA to better assess the potential impact of toxicants on an ecosystem. Effects of nanomaterials on crops Keller, in his research, has recently turned his attention to the benefits and potential negative implications of ENMs on agricultural crops. Copper-based nanopesticides promise high effectiveness against fungi and other crop pests, while potentially reducing the amount applied. This may result in less cost for the farmer and lower environmental implications. Working with Yiming Su and colleagues, they demonstrated that for nanotechnology to live up to its promise, costs have to continue to decrease, while effectiveness requires a careful assessment of the form in which the nanopesticides are formulated. In collaboration with Lijuan Zhao and others, the benefits of nanotechnology to reduce plant stress were assessed. To evaluate the effect of ENMs on crop plants, his research group has been using metabolomics to assess how plants respond to different ENMs. His metabolomics analysis with Lijuan Zhao and others highlighted the potential implications and detoxification strategies associated with the agricultural use of nano-Cu, and demonstrated that exposure to copper nanoparticles (nano-Cu) in hydroponic culture significantly alters nutrient uptake, triggers metabolic changes, and activates defense mechanisms in cucumber plants. His investigation of the interaction between Cu(OH)2 nanopesticides and lettuce plants provided insights into the molecular-scale plant response to copper nanopesticides in agriculture, revealing that exposed lettuce plants predominantly accumulated copper in their leaves and showed disrupted metabolism, oxidative stress, and triggered detoxification responses. Furthermore, a study suggested that Cu-containing nanopesticides, while not harming photosynthesis in cucumber plants, induce molecular responses related to antioxidant and detoxification genes, which could potentially serve as biomarkers for nanopesticide exposure. In related research, his exploration of the metabolic effects of Cu(OH)2 nanopesticide and copper ions on spinach leaves revealed reductions in antioxidants, disruption of metabolic pathways, and a potential decrease in nutritional value. Water quality and resource management At the larger scale, Keller has developed the science for large-scale water quality trading programs. Keller and his team characterized the factors that go into evaluating a trade, knowledge that is needed for trading to be effective. This work led to the 2015 United States Water Prize from the U.S. Water Alliance to the team led by Jessica Fox at the Electric Power Research Institute. 
Keller and Hongtao Wang, along with other collaborators, have also made contributions to the assessment of the energy-water nexus, that is, the linkage between these two key resources. His research highlighted many important aspects, including the fact that significant energy is needed for potable water treatment as well as for wastewater processing. His research further emphasized that the water footprint of the iron and steel industry is significant, with important implications for China and other major economies. Additionally, his research stressed that water is an important aspect of power generation, whose role is changing as the use of renewable energy continues to rise. Awards and honors 2015 – Agilent Thought Leadership Award, Agilent Technologies 2015 – United States Water Prize, U.S. Water Alliance Selected articles Keller, A. A., Wang, H., Zhou, D., Lenihan, H. S., Cherr, G., Cardinale, B. J., ... & Ji, Z. (2010). Stability and aggregation of metal oxide nanoparticles in natural aqueous matrices. Environmental Science & Technology, 44(6), 1962–1967. Keller, A. A., McFerran, S., Lazareva, A., & Suh, S. (2013). Global life cycle releases of engineered nanomaterials. Journal of Nanoparticle Research, 15, 1–17. Keller, A. A., & Lazareva, A. (2014). Predicted releases of engineered nanomaterials: from global to regional to local. Environmental Science & Technology Letters, 1(1), 65–70. Adeleye, A. S., Conway, J. R., Garner, K., Huang, Y., Su, Y., & Keller, A. A. (2016). Engineered nanomaterials for water treatment and remediation: Costs, benefits, and applicability. Chemical Engineering Journal, 286, 640–662. Miller, R. J., Lenihan, H. S., Muller, E. B., Tseng, N., Hanna, S. K., & Keller, A. A. (2010). Impacts of metal oxide nanoparticles on marine phytoplankton. Environmental Science & Technology, 44(19), 7329–7334. References American civil engineers Environmental engineers Cornell University alumni Stanford University alumni University of California, Santa Barbara faculty 21st-century American engineers 20th-century American engineers Living people Year of birth missing (living people)
Arturo A. Keller
[ "Chemistry", "Engineering" ]
2,567
[ "Environmental engineers", "Environmental engineering" ]
74,961,473
https://en.wikipedia.org/wiki/Transition%20metal%20sulfate%20complex
Transition metal sulfate complexes or sulfato complexes are coordination complexes with one or more sulfate ligands. Being the conjugate base of a strong acid (sulfuric acid), sulfate is only weakly basic. It is more commonly a counterion in coordination chemistry, not a ligand. Bonding modes Sulfate binds to metals through one, two, three, or all four oxygen atoms. Among the handful of complexes containing sulfate (or sulfato) ligands, most examples feature unidentate or chelating bidentate sulfate. Well-characterized examples are found with cobalt(III) ammines, since these complexes are exchange inert. Monodentate sulfate is found in [Co(tren)(NH3)(SO4)]+ (tren = tris(2-aminoethyl)amine). Although is unknown, forms instead (en = ethylenediamine). Bidentate sulfate is observed crystallographically in . Sulfate also functions as a tridentate bridging ligand. All four oxygen atoms of sulfate bond to metals in some Dawson-type polyoxometalates, e.g. [S2Mo18O62]4−. Sulfate as a counterion Tutton's salts, with the formula M'2M(SO4)2(H2O)6 (M' = K+, etc.; M = Fe2+, etc.), illustrate the ability of water to outcompete sulfate as a ligand for M2+. Similarly, alums such as chrome alum ([K(H2O)6][Cr(H2O)6][SO4]2) feature hexaaqua cations with noncoordinated sulfate. In a related vein, some sulfato complexes confirmed by X-ray crystallography convert to simple aquo complexes when dissolved in water. Copper(II) sulfate exemplifies this behavior: sulfate is bonded to copper in the crystal but dissociates upon dissolution. Synthesis Sulfato complexes are commonly produced by reaction of metal sulfates with other ligands. In some cases, sulfato complexes are produced from sulfur dioxide (PPh3 = triphenylphosphine). Sulfato complexes also arise by air-oxidation of metal sulfides. Reactions A dominant reaction of sulfato complexes is solvolysis, i.e. displacement of sulfate from the first coordination sphere by polar solvents such as water. Sulfato complexes are susceptible to protonation of uncoordinated oxygen atoms. Further reading Co(phen)2SO4 References Ligands Coordination chemistry Sulfates
Transition metal sulfate complex
[ "Chemistry" ]
542
[ "Sulfates", "Ligands", "Coordination chemistry", "Salts" ]
74,967,374
https://en.wikipedia.org/wiki/Seismic%20velocity%20structure
Seismic velocity structure is the distribution and variation of seismic wave speeds within Earth's and other planetary bodies' subsurface. It reflects subsurface properties such as material composition, density, porosity, and temperature. Geophysicists rely on the analysis and interpretation of the velocity structure to develop refined models of the subsurface geology, which are essential in resource exploration, earthquake seismology, and advancing our understanding of Earth's geological development. History The understanding of the Earth's seismic velocity structure has developed significantly since the advent of modern seismology. The invention of the seismograph in the 19th century catalyzed the systematic study of seismic velocity structure by enabling the recording and analysis of seismic waves. 20th century The field of seismology achieved significant breakthroughs in the 20th century. In 1909, Andrija Mohorovičić identified a significant boundary within the Earth known as the Mohorovičić discontinuity, which demarcates the transition between the Earth's crust and mantle with a notable increase in seismic wave speeds. This work was furthered by Beno Gutenberg, who identified the core-mantle boundary in the early to mid-20th century. The 1960s introduction of the World Wide Standardized Seismograph Network (WWSSN) dramatically improved the collection and understanding of seismic data, contributing to the broader acceptance of plate tectonics theory by illustrating variations in seismic velocities. Later, seismic tomography, a technique used to create detailed images of the Earth's interior by analyzing seismic waves, was propelled by the contributions of Keiiti Aki and Adam Dziewonski in the 1970s and 1980s, enabling a deeper understanding of the Earth's velocity structure. Their work laid the foundation for the Preliminary Reference Earth Model in 1981, a significant step toward modeling the Earth's internal velocities. The establishment of the Global Seismic Network in 1984 by Incorporated Research Institutions for Seismology further enhanced seismic monitoring capabilities, continuing the legacy of the WWSSN. 21st century The advancement of seismic tomography and the expansion of the Global Seismic Network, alongside greater computational power, have enabled more accurate modeling of the Earth's internal velocity structure. Recent progress focuses on the inner core's velocity features and on applying methods like ambient noise tomography for improved imaging. Principle of seismic velocity structure The study of seismic velocity structure, using the principles of seismic wave propagation, offers critical insights into the Earth's internal structure, material composition, and physical states. Variations in wave speed, influenced by differences in material density and state (solid, liquid, or gas), alter wave paths through refraction and reflection, as described by Snell's Law. P-waves, which can move through all states of matter and provide data on a range of depths, change speed based on the material's properties, such as type, density, and temperature. S-waves, in contrast, are constrained to solids and reveal information about the Earth's rigidity and internal composition; their inability to pass through the outer core led to the discovery of its liquid state. The study of these waves' travel times and reflections offers a reconstructive view of the Earth's layered velocity structure. 
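As a concrete illustration of the refraction step described above, the short Python sketch below applies Snell's Law to a seismic ray crossing a velocity boundary (a minimal example; the velocities are illustrative values taken from the layer averages quoted below, not measurements):

import math

def refraction_angle(theta1_deg, v1_km_s, v2_km_s):
    """Snell's law for seismic rays: sin(theta1)/v1 = sin(theta2)/v2.

    Returns the refracted angle in degrees, or None when the incidence
    angle exceeds the critical angle and the ray is totally reflected.
    """
    s = math.sin(math.radians(theta1_deg)) * v2_km_s / v1_km_s
    if abs(s) > 1.0:
        return None  # beyond the critical angle
    return math.degrees(math.asin(s))

# A P-wave passing from the lower crust (~6.5 km/s) into the upper
# mantle (~8.0 km/s) speeds up and bends away from the normal:
print(refraction_angle(30.0, 6.5, 8.0))   # about 38 degrees
print(refraction_angle(60.0, 6.5, 8.0))   # None: past the ~54 degree critical angle

Because wave speeds generally increase with depth in the crust and mantle, rays bend progressively away from the vertical and eventually turn back toward the surface, which is what makes travel-time observations at the surface informative about velocities at depth.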
Average velocity structure of planetary bodies Velocity structure of Earth Seismic waves traverse the Earth's layers at speeds that differ according to each layer's unique properties, with their velocities shaped by the respective temperature, composition, and pressure. The Earth's structure features distinct seismic discontinuities where these velocities shift abruptly, signifying changes in mineral composition or physical state. Crust Average P-wave velocity: 6.0–7.0 km/s (continental); 5.0–7.0 km/s (oceanic) Average S-wave velocity: 3.5–4.0 km/s Within the Earth's crust, seismic velocities increase with depth, mainly due to rising pressure, which makes materials denser. The relationship between crustal depth and pressure is direct; as the overlying rock exerts weight, it compacts underlying layers, reduces rock porosity, increases density, and can alter crystalline structures, thus accelerating seismic waves. Crustal composition varies, affecting seismic velocities. The upper crust typically contains sedimentary rocks with lower velocities (2.0–5.5 km/s), while the lower crust consists of denser basaltic and gabbroic rocks, leading to higher velocities. Although the geothermal gradient, the increase in temperature with depth in the Earth's interior, can decrease seismic velocities, this effect is usually outweighed by the velocity-boosting impact of increased pressure. Upper mantle Average P-wave velocity: 7.5–8.5 km/s Average S-wave velocity: 4.5–5.0 km/s Seismic velocity in the upper mantle rises primarily due to increased pressure, similar to the crust but with a more pronounced effect on velocity. Additionally, pressure-induced mineral phase changes, in which minerals rearrange their structures, contribute to this acceleration. For example, olivine transforms into its denser polymorphs wadsleyite and ringwoodite at depths of approximately 410 km and 520 km respectively, resulting in a more compact structure that facilitates faster seismic wave propagation in the transition zone. Lower mantle Average P-wave velocity: 10–13 km/s Average S-wave velocity: 5.5–7.0 km/s In the lower mantle, the rise in seismic velocity is driven by increasing pressure, which is greater here than in the upper layers, resulting in denser rock and faster seismic wave travel. Although thermal effects may lessen seismic velocity by softening the rock, the predominant factor in the lower mantle remains the increase in pressure. Outer core Average P-wave velocity: 8.0–10 km/s S-waves: Do not propagate, as the outer core is liquid In the outer core, seismic velocity significantly decreases due to its liquid state, which impedes the speed of seismic waves despite the high pressure. This sharp decline is observed at the core-mantle boundary, also referred to as the D'' region or Gutenberg discontinuity. Furthermore, the reduction in seismic velocity in the outer core suggests the presence of lighter elements like oxygen, silicon, sulfur, and hydrogen, which lower the density of the outer core. Inner core Average P-wave velocity: ~11 km/s Average S-wave velocity: ~3.5 km/s The solid, high-density composition of the inner core, predominantly iron and nickel, results in increased seismic velocity compared to the liquid outer core. While light elements also present in the inner core modulate this velocity, their impact is relatively contained. Anisotropy of inner core The inner core is anisotropic, causing seismic waves to vary in speed depending on their direction of travel. 
P-waves, in particular, move more quickly along the inner core's rotational axis than across the equatorial plane. This suggests that Earth's rotation affects the alignment of iron crystals during the core's solidification. There is also evidence for a distinct "inner" inner core, with a hypothesized transition zone some 250 to 400 km beneath the inner core boundary (ICB). This is inferred from anomalies in the travel times of P-waves that pass through the inner core. This transition zone, perhaps 100 to 200 km thick, may provide insights into the alignment of iron crystals, the distribution of light elements, or Earth's accretion history. Studying the inner core poses significant challenges for seismologists and geophysicists, given that it accounts for less than 1% of Earth's volume and is difficult for seismic waves to penetrate. Moreover, S-wave detection is challenging due to minimal compressional-shear wave conversion at the boundary and substantial attenuation within the inner core, leaving the S-wave velocity uncertain and an area for future research. Lateral variation of velocity structure Lateral variation in seismic velocity is a horizontal change in seismic wave speeds across the Earth's crust, caused by differences in geological structure, rock type, temperature, and the presence of fluids. This variation helps delineate tectonic plates and geological features and is key to resource exploration and understanding the Earth's internal heat flow. Discontinuity Discontinuities are zones or surfaces within the Earth that lead to abrupt changes in seismic velocity, revealing the composition and demarcating the boundaries between the Earth's layers. The following are key discontinuities within the Earth: Mohorovičić discontinuity: the boundary between the crust and the mantle, located approximately 30–50 km deep beneath the continents and 5–10 km beneath the ocean floor. 410 km discontinuity: a phase transition where olivine becomes wadsleyite. 520 km discontinuity: a phase transition where wadsleyite becomes ringwoodite. 660 km discontinuity: a phase transition where ringwoodite becomes bridgmanite and ferropericlase. Gutenberg discontinuity: the core-mantle boundary, at approximately 2890 km depth. Lehmann discontinuity: marking the inner core boundary (ICB), at approximately 5150 km depth. Velocity structure of the Moon Knowledge of the Moon's seismic velocity primarily stems from seismic records obtained by the Apollo missions' Passive Seismic Experiment (PSE) stations. Between 1969 and 1972, five PSE stations were deployed on the lunar surface, with four operational until 1977. These four stations formed a network on the near side of the Moon, configured as an equilateral triangle with two stations at one vertex. This network recorded over 13,000 seismic events, and the gathered data remains a subject of ongoing study. The analysis has revealed four moonquake mechanisms: shallow, deep, thermal, and those caused by meteoroid impacts. Crust Average P-wave velocity: 5.1–6.8 km/s Average S-wave velocity: 2.96–3.9 km/s Seismic velocity varies within the Moon's roughly 60 km thick crust, with low velocities at the surface. Velocity readings increase from 100 m/s near the surface to 4 km/s at a depth of 5 km and rise to 6 km/s at 25 km depth. At 25 km depth there is a discontinuity, at which the seismic velocity increases abruptly to 7 km/s. 
This velocity then stabilizes, reflecting the consistent composition and hydrostatic pressure conditions at greater depths. Surface velocities are low due to the loose, porous nature of the regolith. Deeper, compaction increases velocities, with the region beyond 25 km depth characterized by dense, sealed anorthosite and gabbro layers, suggesting a crust under hydrostatic pressure. The Moon's geothermal gradient reduces velocities only slightly, by 0.1–0.2 km/s. Mantle Average P-wave velocity: 7.7 km/s Average S-wave velocity: 4.5 km/s Research into the seismic structure of the Moon's mantle is hampered by the scarcity of data. Analysis of moonquake waveforms suggests that seismic wave velocities in the upper mantle (ranging from 60 to 400 km in depth) exhibit a minor negative gradient, with S-wave speeds decreasing at rates between −6×10−4 and −13×10−4 km/s per kilometer. A decrease in P-wave velocities has also been postulated. The data delineate a transition zone between 400 km and 480 km depth, where a noticeable decrease in the velocities of both P- and S-waves occurs. Uncertainty grows when probing the lower mantle, extending from 480 km to 1100 km beneath the lunar surface. Some studies detect a consistent decline in S-wave transmission, suggesting absorption or scattering phenomena, while other findings indicate that velocities for P- and S-waves may in fact rise. Temperature increases with depth are believed to be the primary influence behind the observed drop in velocities within the upper mantle, suggesting a mantle heavily regulated by thermal gradients rather than compositional changes. The delineated transition zone implies a division between chemically distinct upper and lower mantles, possibly explained by an uptick in iron concentration due to high pressure and thermal conditions at depth. Deeper into the lower mantle, the debate over seismic characteristics continues, with theories of partial melting around the 1000 km depth mark invoked to explain the attenuation of S-waves. This molten state may cause a segregation of materials, resulting in a concentration of magnesium-rich olivine in the lower regions and potentially affecting seismic speeds. Core Understanding the seismic velocities within the Moon's core presents challenges due to the limited data available. Outer core: Average P-wave velocity: 4 km/s S-waves: Do not propagate, as the outer core is liquid Inner core: Average P-wave velocity: 4.4 km/s Average S-wave velocity: 2.4 km/s The sharp decline in P-wave velocity at the mantle-core boundary, from 7.7 km/s in the mantle to 4 km/s in the outer core, suggests a liquid outer core. The inability of S-waves to traverse this zone further confirms its fluid nature, consistent with molten iron sulfide. An increase in seismic velocities upon reaching the inner core suggests a transition to a solid phase. The presence of solid iron-nickel alloys, potentially alloyed with lighter elements, is deduced from this increase. Current geophysical models posit a relatively small lunar core, with the liquid outer core accounting for 1–3% of the Moon's total mass and the entire core accounting for about 15–25% of the Moon's radius. 
While some lunar models suggest the possibility of a core, its existence and characteristics are not unequivocally required by the observed data. Lateral variation of seismic velocity structure Lateral variations in the Moon's seismic velocity structure are marked by differences in the crust's physical properties, especially within impact basins, where meteoroid collisions have compacted the lunar substrate, increasing its density, reducing porosity, and thereby raising velocities. This phenomenon has been studied using seismic data from lunar missions, which show that the Moon's crustal structure varies significantly with location, reflecting its complex impact history and internal processes. Velocity structure of Mars The investigation into Mars's seismic velocity has primarily relied on models and the data gathered by the InSight mission, which landed on the planet in 2018. By September 30, 2019, InSight had detected 174 seismic events. Before InSight, the Viking 2 lander attempted to collect seismic data in the 1970s, but it captured only a limited number of local events, which did not yield conclusive insights. Crust Average P-wave velocity: 3.5–5 km/s Average S-wave velocity: 2–3 km/s The crust of Mars, ranging from 10 to 50 km in thickness, exhibits increasing seismic velocity as depth increases, attributable to rising pressure. The upper crust is characterized by low density and high porosity, leading to reduced seismic velocity. Two key discontinuities have been observed: one within the crust at a depth of 5 to 10 km, and another, likely the crust-mantle boundary, at a depth of 30 to 50 km. Mantle Upper mantle: Average P-wave velocity: 8 km/s Average S-wave velocity: 4.5 km/s Lower mantle: P-wave velocity: 5.5 km/s S-waves: Not applicable (liquid) The Martian mantle, composed of iron-rich rocks, transmits seismic waves at high speeds. Research indicates a variation in seismic velocities between depths of 400 and 600 km, where S-wave speeds decrease while P-wave speeds remain constant or increase slightly. This region is known as the low velocity zone (LVZ) of the Martian upper mantle and may be caused by a static layer overlying a convective mantle. The reduction in velocity at the LVZ is likely due to high temperatures and moderate pressures. Martian mantle research has also identified two discontinuities, at depths of approximately 1100 km and 1400 km. These discontinuities suggest phase transitions from olivine to wadsleyite and from wadsleyite to ringwoodite, analogous to the Earth's mantle phase changes at depths of 410 km and 660 km. However, Mars's mantle composition differs from Earth's, as it does not have a lower mantle dominated by bridgmanite. A recent study suggested the presence of a molten lower mantle layer in Mars, which could significantly affect the interpretation of seismic data and our understanding of the planet's thermal history. Core Average P-wave velocity: 5 km/s S-waves: Do not propagate, as the core is liquid Scientific evidence suggests that Mars has a substantial liquid core, inferred from S-wave transmission patterns indicating that these waves do not pass through liquid. 
The core is likely composed of iron and nickel with a significant proportion of lighter elements, as inferred from its lower-than-expected density. The presence of a solid inner core on Mars, comparable to Earth's, is currently the subject of scientific debate. No definitive evidence has yet confirmed the nature of the inner core, leaving its existence and characteristics as topics for further research. Lateral variation of velocity structure Lateral variations in the seismic velocity structure of Mars have been revealed by data from the InSight mission, indicating an intricately layered subsurface. InSight's seismic experiments suggest that these variations reflect differences in crustal thickness and composition, potentially caused by volcanic and tectonic processes unique to Mars. Such variations also provide evidence for the presence of a liquid layer above the core, suggesting a complex interplay of thermal and compositional factors affecting the planet's evolution. Further analysis of marsquake data may illuminate the relationship between these lateral variations and the Martian mantle's convective dynamics. Velocity structure of Enceladus Research on Enceladus's subsurface composition has provided theoretical velocity profiles in anticipation of future exploratory missions. While Enceladus's interior is poorly understood, scientists agree on a general structure consisting of an outer icy shell, a subsurface ocean, and a rocky core. In a recent study, three models (single core, thick ice, and layered core) were proposed to delineate Enceladus's internal characteristics. According to these models, seismic velocities are expected to decrease from the ice shell to the ocean, reflecting transitions from porous, fractured ice to a more fluid state. Conversely, velocities are predicted to rise within the solid silicate core, illustrating the stark contrast between the moon's various layers. Future plans Seismic exploration of celestial bodies has so far been limited to the Moon and Mars. However, future space missions are set to extend seismic studies to other bodies in our solar system. The proposed Europa Lander mission, slated for a launch window between 2025 and 2030, would investigate the seismic activity of Jupiter's moon Europa. This mission plans to deploy the Seismometer to Investigate Ice and Ocean Structure (SIIOS), an instrument designed by the University of Arizona to withstand Europa's harsh, cold, and radiative environment. SIIOS's goal is to provide insight into Europa's icy crust and subterranean ocean. In conjunction with its Artemis program to the Moon, NASA has also funded initiatives under the Development and Advancement of Lunar Instrumentation (DALI) program. Among these, the Seismometer for a Lunar Network (SLN) project stands out. The SLN aims to facilitate the creation of a lunar seismometer network by integrating seismometers into future lunar landers or rovers. This initiative is part of NASA's broader effort to prepare for continued exploration of the Moon's geology. Methods The study of seismic velocity structure is typically conducted through the observation of seismic data coupled with inverse modeling, which involves adjusting a model based on observed data to infer the properties of the Earth's interior. 
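To make the inverse-modeling idea concrete, the toy Python sketch below (purely illustrative, not any production method; the "observed" travel times are synthetic, generated from an assumed 6.2 km/s medium) recovers a single seismic velocity from noisy travel-time data by grid search:

import numpy as np

# Forward model: travel time t = distance / velocity.
rng = np.random.default_rng(0)
distances_km = np.array([10.0, 25.0, 40.0, 60.0, 80.0])
true_velocity_km_s = 6.2  # assumed, used only to generate synthetic data
observed_t_s = distances_km / true_velocity_km_s \
    + rng.normal(0.0, 0.05, distances_km.size)  # add measurement noise

# Inverse step: adjust the model (here a single velocity) to minimize
# the misfit between predicted and observed travel times.
candidate_v = np.linspace(4.0, 9.0, 501)
misfit = [np.sum((distances_km / v - observed_t_s) ** 2) for v in candidate_v]
best_v = candidate_v[int(np.argmin(misfit))]
print(f"recovered velocity: {best_v:.2f} km/s")  # close to the assumed 6.2 km/s

Real studies invert for many layers or full three-dimensional models at once, which is why, as noted under the limitations below, different velocity models can often fit the same data (non-uniqueness).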
Methods used to study seismic velocity structure include seismic tomography, controlled-source refraction and reflection surveys, and ambient noise analysis. Applications of velocity structure Applications of seismic velocity structure encompass a range of fields where understanding the Earth's subsurface is crucial, including resource exploration, earthquake seismology, and the study of planetary interiors. Limitation/Uncertainty S-wave velocity of the Earth's inner core Investigating Earth's inner core through seismic waves presents significant challenges. Directly observing seismic waves that traverse the inner core is difficult due to weak signal conversion at the core boundaries and high attenuation within the core. Recent techniques like earthquake late-coda correlation, which utilises the later part of a seismogram, provide estimates for the inner core's shear wave velocity but are not without challenges. Isotropic assumptions Seismic velocity studies often assume isotropy, treating Earth's subsurface as having uniform properties in all directions. This simplification is practical for analysis but may not be accurate. The inner core and mantle, for example, likely demonstrate anisotropic, or directionally dependent, properties, which can affect the accuracy of seismic interpretations. Dimensional considerations Seismic models are frequently one-dimensional, considering changes in Earth's properties with depth but neglecting lateral variations. Although this method eases computation, it fails to account for the planet's complex three-dimensional structure, potentially misleading our understanding of subsurface characteristics. Non-uniqueness of inverse modelling Seismic velocity structures are inferred through inverse modeling, fitting theoretical models to observed data. However, different models can often explain the same data, leading to non-unique solutions. This issue is compounded when inverse problems are poorly conditioned, where small variations in the data can suggest drastically different subsurface structures. Data limitations for Moon and Mars seismic studies In contrast to Earth, the seismic datasets for the Moon and Mars are sparse. The Apollo missions deployed a handful of seismometers across the Moon, and Mars's seismic data is limited to the InSight mission's findings. This scarcity restricts the resolution of velocity models for these celestial bodies and introduces greater uncertainty in interpreting their internal structures. See also Seismic wave Seismic tomography Low-velocity zone References Seismology Geophysics Geology Earthquakes
Seismic velocity structure
[ "Physics" ]
4,762
[ "Applied and interdisciplinary physics", "Geophysics" ]
74,968,763
https://en.wikipedia.org/wiki/Reconstruction%20of%20attosecond%20beating%20by%20interference%20of%20two-photon%20transitions
Reconstruction of attosecond beating by interference of two-photon transitions, more commonly known as RABBITT or RABBIT for short, is a widely used technique for obtaining the relative phase and amplitude of attosecond pulses. The technique is based on the interference of two-photon transitions, for example photoionization pathways in which an electron absorbs one extreme-ultraviolet (XUV) harmonic photon together with absorbing or emitting one infrared photon; it has also been applied to interband transitions in solids. It is especially suited for diagnostics of the temporal structure of XUV pulses. RABBITT is a valuable tool for studying ultrafast processes in materials and can provide insight into the dynamics of electrons in solids. History RABBITT was invented by Pierre Agostini, Harm Geert Muller and colleagues in 2001. References Experimental physics Ultrafast spectroscopy
Reconstruction of attosecond beating by interference of two-photon transitions
[ "Physics" ]
142
[ "Experimental physics" ]
74,970,969
https://en.wikipedia.org/wiki/Dawson%20structure
The Dawson structure is a well-known structural motif for heteropoly acids. The Dawson structure can be viewed as the fusion of two defect Keggin-structure fragments, each missing three octahedra. As in Keggin structures, the Dawson structure has an oxyanion at its core. Unlike Keggin structures, there are two such anions, one at each end of the ellipsoidal anion. A classic example is the phosphotungstate anion [P2W18O62]6−. Commonly, Dawson structures feature phosphate as the central oxyanions. When the corresponding Keggin anion is allowed to stand in aqueous solution, it converts to the Dawson ion. References Cluster chemistry Heteropoly acids Anions
Dawson structure
[ "Physics", "Chemistry" ]
145
[ "Matter", "Acids", "Anions", "Heteropoly acids", "Cluster chemistry", "Organometallic chemistry", "Ions" ]
74,972,365
https://en.wikipedia.org/wiki/Transition%20metal%20phosphate%20complex
Transition metal phosphate complexes are coordination complexes with one or more phosphate ligands. Phosphate binds to metals through one, two, three, or all four oxygen atoms. The bidentate coordination mode is common. The second and third pKa's of phosphoric acid, pKa2 and pKa3, are 7.2 and 12.37, respectively. It follows that the anions HPO42− and PO43− are sufficiently basic to serve as ligands. The examples below confirm this expectation. Molecular metal phosphate complexes have no or few applications. Examples Bidentate, chelating: One example is [Co(ethylenediamine)2(PO4)]. Bis-bidentate: a single phosphate can chelate two metal centers simultaneously. Bidentate, bridging: Phosphate, like carboxylate and sulfate, is well suited to span metal-metal bonds. This bonding mode is illustrated by a dimolybdenum complex that features a Mo-Mo triple bond. Related [Pt(III)]2 complexes have been reported. Tridentate, bridging: Several triangulo clusters feature a capping phosphate ligand. Encapsulated: In phosphotungstic acid, all four oxygen atoms of phosphate are bonded to metals. Other transition metal phosphates Aside from molecular metal phosphate complexes, the topic of this article, many or most transition metal phosphates are nonmolecular, being coordination polymers or dense ternary or quaternary phases. Iron(III) phosphate, contemplated as a cathode material for batteries, is one example. Vanadyl phosphate (VOPO4) is a commercial catalyst for oxidation reactions. Many metal phosphates occur as minerals. Di- and polyphosphates Phosphates exist in many condensed oligomeric forms. Many of these derivatives function as ligands for metal ions. Pyrophosphate (P2O74−) and trimetaphosphate (P3O93−) have been particularly studied. They typically function as bi- and tridentate ligands. References Ligands Coordination chemistry Phosphates
Transition metal phosphate complex
[ "Chemistry" ]
407
[ "Ligands", "Coordination chemistry", "Phosphates", "Salts" ]
55,036,321
https://en.wikipedia.org/wiki/Edward%20A.%20Murphy%20%28chemist%29
Edward Arthur Murphy was a Dunlop researcher credited with the invention of latex foam, first marketed as Dunlopillo. Career Murphy worked for Dunlop in Birmingham, UK. He is listed as an inventor on more than 40 patents. Awards and recognitions 1929 - Invented Dunlopillo latex foam, used as seat padding in public trams, trains, trolley buses and cockpits 1931 - Invented the first latex mattress 1949 - Colwyn Medal 1966 - Charles Goodyear Medal from the ACS Rubber Division References Polymer scientists and engineers
Edward A. Murphy (chemist)
[ "Chemistry", "Materials_science" ]
102
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
55,038,237
https://en.wikipedia.org/wiki/Cloniprazepam
Cloniprazepam is a benzodiazepine derivative and a prodrug of clonazepam, 7-aminoclonazepam, and other metabolites. Minor metabolites include 3-hydroxyclonazepam and 6-hydroxyclonazepam, as well as 3-hydroxycloniprazepam and ketocloniprazepam, in which a ketone group has formed where the 3-hydroxy group was. It is a designer drug and a new psychoactive substance (NPS). At the end of 2017, cloniprazepam was an uncontrolled substance in most countries. See also Flutoprazepam Ro05-4082 List of designer drugs List of benzodiazepines List of benzodiazepine designer drugs References External links https://www.unodc.org/LSS/Substance/Details/d115de07-ab8f-4454-a8f7-3ea15f4a0748 https://web.archive.org/web/20160623204002/http://nsddb.eu/substance/590/ Anticonvulsants Anxiolytics Nitrobenzodiazepines Designer drugs GABAA receptor positive allosteric modulators Prodrugs 2-Chlorophenyl compounds Cyclopropyl compounds
Cloniprazepam
[ "Chemistry" ]
315
[ "Chemicals in medicine", "Prodrugs" ]
55,040,125
https://en.wikipedia.org/wiki/Zilch%20%28electromagnetism%29
In physics, zilch (or zilches) is a set of ten conserved quantities of the source-free electromagnetic field, which were discovered by Daniel M. Lipkin in 1964. The name refers to the fact that the zilches are only conserved in regions free of electric charge, and therefore have limited physical significance. One of the conserved quantities (Lipkin's Z000) has an intuitive physical interpretation and is also known as optical chirality. In particular, Lipkin first observed that if he defined (in SI units) the quantities C = (ε0/2) E · (∇ × E) + (1/2μ0) B · (∇ × B) and F = (1/2μ0) [E × (∇ × B) − B × (∇ × E)], then these satisfy a continuity equation, as described below. Optical chirality The free Maxwell equations imply that ∂C/∂t + ∇ · F = 0. The preceding equation implies that the quantity ∫ C d3x is constant in time. This time-independent quantity is one of the ten zilches discovered by Lipkin. Nowadays, the quantity C is widely known as optical chirality (up to a factor of 1/2). The quantity C is the spatial density of optical chirality, while F is the optical chirality flux. Generalizing the aforementioned differential conservation law for C, Lipkin found nine other conservation laws, all unrelated to the stress–energy tensor. He collectively named these ten conserved quantities the zilch (nowadays, they are also called the zilches) because of the apparent lack of physical significance. Properties of zilch tensor The zilch is often described in terms of the zilch tensor, Zμνρ. In one common convention, the latter can be expressed using the electromagnetic tensor Fμν and the dual electromagnetic tensor F̃μν as Zμνρ = F̃μλ ∂ρ Fνλ − Fμλ ∂ρ F̃νλ (with indices raised and lowered by the Minkowski metric). The zilch tensor is symmetric under the exchange of its first two indices, μ and ν, while it is also traceless with respect to any two indices, as well as divergence-free with respect to any index. The conservation law ∂ρ Zμνρ = 0 means that the following ten quantities are time-independent: Zμν = ∫ Zμν0 d3x. These are the ten zilches (or just the zilch) discovered by Lipkin. In fact, only nine zilches are independent. The time-independent quantity Z00 is known as the 00-zilch and is equal to the aforementioned optical chirality (∫ C d3x). In general, the time-independent quantity Zμν is known as the μν-zilch (the indices μ and ν run from 0 to 3), and it is clear that there are ten such quantities (nine independent). It was later demonstrated that Lipkin's zilch is part of an infinite number of zilch-like conserved quantities, a general property of free fields. History One of the zilches has been rediscovered. This is the zilch called "optical chirality", named by Tang and Cohen, since this zilch determines the degree of chiral asymmetry in the rate of excitation of a small chiral molecule by an incident electromagnetic field. A further physical insight into optical chirality was offered in 2012: optical chirality is to the curl or time derivative of the electromagnetic field what helicity, spin and related quantities are to the electromagnetic field itself. The physical interpretation of all zilches for topologically non-trivial electromagnetic fields was investigated in 2018. Since the discovery of the ten zilches in 1964, there has been an important open mathematical question concerning their relation with symmetries. (Recently, the full answer to this question seems to have been found.) This is the question: what are the symmetries of the standard Maxwell action functional S = −(1/4) ∫ d4x Fμν Fμν (with Fμν = ∂μAν − ∂νAμ, where the potential Aμ is the dynamical field variable) that give rise to the conservation of all zilches using Noether's theorem? Until recently, the answer to this question had been given only for the case of optical chirality, by Philbin in 2013. This open question was also emphasized by Aghapour, Andersson and Rosquist in 2020, while these authors found the symmetries of the duality-symmetric Maxwell action underlying the conservation of all zilches. 
(Aghapour, Andersson and Rosquist did not find the symmetries of the standard Maxwell action, but they speculated that such symmetries should exist.) There are also earlier works studying the conservation of zilch in the context of duality-symmetric electromagnetism, but the variational character of the corresponding symmetries was not established. The full answer to the aforementioned question seems to have been given for the first time in 2022, when the symmetries of the standard Maxwell action underlying the conservation of all zilches were found. According to this work, there is a hidden invariance algebra of the free Maxwell equations in potential form that is related to the conservation of all zilches. See also Conservation law Noether's theorem References Electromagnetism Conservation laws Chirality
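A short verification of the conservation law for $Z^0$ follows directly from the source-free Maxwell equations. The sketch below uses SI units with the definitions given above; the factor convention is the Tang–Cohen normalization assumed in this reconstruction, not necessarily Lipkin's original notation.

```latex
% Differentiate Z^0 in time and use the vacuum Maxwell equations
% curl E = -dB/dt and curl B = mu0*eps0*dE/dt:
\[
\partial_t Z^0
= \epsilon_0\,\partial_t\mathbf{E}\cdot(\nabla\times\mathbf{E})
+ \epsilon_0\,\mathbf{E}\cdot(\nabla\times\partial_t\mathbf{E})
+ \tfrac{1}{\mu_0}\,\partial_t\mathbf{B}\cdot(\nabla\times\mathbf{B})
+ \tfrac{1}{\mu_0}\,\mathbf{B}\cdot(\nabla\times\partial_t\mathbf{B}).
\]
% The first and third terms cancel pairwise; the identity
% div(a x b) = b . curl a - a . curl b turns the remainder into a divergence:
\[
\partial_t Z^0
+ \nabla\cdot\Bigl(\epsilon_0\,\mathbf{E}\times\partial_t\mathbf{E}
+ \tfrac{1}{\mu_0}\,\mathbf{B}\times\partial_t\mathbf{B}\Bigr) = 0,
\]
% i.e. the continuity equation for Z^0, so the volume integral of Z^0
% over all space is conserved in time.
```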
Zilch (electromagnetism)
[ "Physics", "Chemistry", "Biology" ]
934
[ "Pharmacology", "Physical phenomena", "Electromagnetism", "Origin of life", "Equations of physics", "Biochemistry", "Conservation laws", "Stereochemistry", "Chirality", "Fundamental interactions", "Asymmetry", "Biological hypotheses", "Symmetry", "Physics theorems" ]
58,216,096
https://en.wikipedia.org/wiki/Defluoridation
Defluoridation is the downward adjustment of the level of fluoride in drinking water. Worldwide, fluoride is one of the most abundant anions present in groundwater. Fluoride is more prevalent in groundwater than in surface water, mainly due to the leaching of minerals. Groundwater accounts for 98 percent of the earth's potable water. An excess of fluoride in drinking water causes dental fluorosis and skeletal fluorosis. The World Health Organization has recommended a guideline value of 1.5 mg/L as the concentration above which dental fluorosis is likely. Fluorosis is endemic in more than 20 developed and developing nations. History Fluorosis was not identified as a problem until relatively recently, and few attempts to defluoridate water came before the 20th century. In the 1930s, several nations began to investigate fluoride's negative effects and how best to remove it. An aluminum and sand filter that removes fluorine from water was devised by Dr. S. P. Kramer in 1933; in 1945, M. Kenneth received a French patent for a water defluoridation technique; and in 1952, a functioning activated alumina community defluoridation plant was commissioned in Bartlett, Texas, USA. Techniques While various defluoridation techniques have been explored, each has its limitations. Existing techniques are often too costly (because the geographic areas prone to fluorosis are among the poorest regions on the planet), ineffective, or even dangerous (some remediation processes add other contaminants to the water). The main techniques that have been, and continue to be, investigated with varying degrees of success are adsorption, precipitation, ion exchange, and membrane processes. Adsorption can be achieved with locally available adsorbent materials with high efficiency and cost-effectiveness; cost-effective, locally available herbal and indigenous products offer promising options. The process depends on pH and on the presence of sulfate, phosphate, and bicarbonate, which result in ionic competition. Disposal of fluoride-laden sludge is problematic. Precipitation is the most well-established and most widely used method, particularly at the community level. However, it has only moderate efficiency, and a high chemical dose is required. Excessive use of aluminum salts produces sludge and adverse health effects through aluminum solubility. The so-called Nalgonda technique for reduction of fluoride involves stirring in alum and lime, whereupon some of the fluoride precipitates together with aluminum hydroxide, and the water can be decanted and filtered. Ion exchange removes up to 90–95% of fluoride and retains the taste and colour of the water. Sulfates, phosphates, and bicarbonates also cause ionic competition in this method. Relatively high cost is a disadvantage, and treated water sometimes has a low pH value and high levels of chloride. Membrane processes are effective and do not require chemicals. They work over a wide pH range, and interference by other ions is negligible. Drawbacks include higher costs and the need for skilled labour, and the process is not suitable for water with high salinity. Calcium amended-hydroxyapatite is the most recent defluoridation technique, in which aqueous calcium is added to the fluoride-contaminated water prior to contact with an uncalcined synthetic hydroxyapatite adsorbent. 
In this technique, the added aqueous calcium prevents the dissolution of hydroxyapatite during defluoridation and also enhances the defluoridation capacity of the hydroxyapatite. In addition to these features, the ″calcium amended-hydroxyapatite″ technique provides calcium-enriched alkaline drinking water, and drinking this defluoridated water may also help in fluorosis reversal. It is therefore expected that using this defluoridation technique to provide safe drinking water will help in the mitigation of fluorosis. References Water treatment
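Adsorption techniques such as those above are commonly characterized by equilibrium isotherms. The sketch below evaluates a generic Langmuir isotherm; the capacity q_max, affinity constant K, and water-quality figures are invented for demonstration and do not describe any particular adsorbent, though the 1.5 mg/L target matches the WHO guideline quoted above.

```python
# Hypothetical illustration of a Langmuir adsorption isotherm for fluoride
# removal; q_max, K, and the water-quality numbers are made-up values.

def langmuir_uptake(c_eq_mg_per_l, q_max_mg_per_g=1.5, k_l_per_mg=0.8):
    """Equilibrium uptake q (mg F- per g adsorbent) at concentration c_eq."""
    return q_max_mg_per_g * k_l_per_mg * c_eq_mg_per_l / (1 + k_l_per_mg * c_eq_mg_per_l)

# Estimate the adsorbent dose needed to bring 4.0 mg/L fluoride down to the
# WHO guideline of 1.5 mg/L in one litre of water; by mass balance, the
# fluoride removed from solution ends up on the adsorbent.
c_initial, c_target, volume_l = 4.0, 1.5, 1.0
removed_mg = (c_initial - c_target) * volume_l
dose_g = removed_mg / langmuir_uptake(c_target)
print(f"Removed: {removed_mg:.1f} mg F-, dose required: {dose_g:.2f} g/L")
```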
Defluoridation
[ "Chemistry", "Engineering", "Environmental_science" ]
821
[ "Water treatment", "Environmental engineering", "Water technology", "Water pollution" ]
58,217,500
https://en.wikipedia.org/wiki/Polyurethane%20dispersion
Polyurethane dispersion, or PUD, is understood to be a polyurethane polymer resin dispersed in water, rather than a solvent, although some cosolvent may be used. Its manufacture involves the synthesis of polyurethanes having carboxylic acid functionality or nonionic hydrophiles such as PEG (polyethylene glycol) incorporated into, or pendant from, the polymer backbone. Two-component polyurethane dispersions are also available. Background There has been a general trend towards converting existing resin systems to waterborne resins, for ease of use and environmental considerations. In particular, the development of PUDs was driven by increased demand for solvent-free systems, since the manufacture of coatings and adhesives had entailed the release of ever-increasing amounts of solvents into the atmosphere from numerous sources. Using VOC-exempt solvents is not a panacea, as they have their own weaknesses. The problem has always been that isocyanate-terminated polyurethanes are not stable in water, reacting to produce a urea and carbon dioxide (a reaction sketched below). Many papers and patents have been published on the subject. For environmental reasons there is even a push to make PUDs that are both water-based and bio-based, that is, made from renewable raw materials. PUDs are used because of the general desire to formulate coatings, adhesives, sealants and elastomers based on water rather than solvent, and because of the perceived or assumed benefits to the environment. Synthesis The techniques and manufacturing processes have changed over the years from those described in the first papers, journal articles and patents that were published. A number of techniques are available, depending on what type of species is required: an anion may be formed, giving an anionic PUD; a cation may be formed, giving a cationic PUD; or a non-ionic PUD may be synthesized. The non-ionic route involves using materials that produce an ethylene oxide backbone, or similar, or a water-soluble chain pendant from the main polymer backbone. Anionic PUDs are by far the most common available commercially. To produce these, a polyurethane prepolymer is first manufactured in the usual way, but instead of just using isocyanate and polyol, a modifier is included in the polymer backbone chain or pendant from the main backbone. This modifier is mainly dimethylolpropionic acid (DMPA). This molecule contains two hydroxy groups and a carboxylic acid group. The OH groups react with the isocyanate groups to produce an NCO-terminated prepolymer with a pendant COOH group. This is then dispersed under shear in water with a suitable neutralizing agent such as triethylamine, which reacts with the carboxylic acid to form a water-soluble salt. Usually, a diamine chain extender is then added to produce a polyurethane dispersed in water with no free NCO groups but with polyurethane and polyurea segments. Dytek A is commonly used as the chain extender. Various papers and patents show that an amine chain extender with more than two functional groups, such as a triamine, may be used too, and chain-extender studies have been carried out. There is also a push towards synthesis strategies that are not isocyanate-based. When blocked isocyanates are used, there is no free isocyanate (NCO) functionality and hence no water reaction producing carbon dioxide, so dispersion is easier. Modifiers other than DMPA have been researched. 
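The water sensitivity mentioned above is the standard isocyanate–water side reaction of polyurethane chemistry; a sketch of the two steps, proceeding through an unstable carbamic acid intermediate, is:

```latex
% Hydrolysis of a free isocyanate group: the carbamic acid intermediate
% loses CO2 to give an amine.
\[
\mathrm{R\!-\!NCO + H_2O \;\longrightarrow\; [\,R\!-\!NH\!-\!COOH\,]
\;\longrightarrow\; R\!-\!NH_2 + CO_2\uparrow}
\]
% The amine then reacts with another isocyanate to form a urea linkage,
% which is why unprotected NCO groups are consumed in water.
\[
\mathrm{R\!-\!NH_2 + R'\!-\!NCO \;\longrightarrow\; R\!-\!NH\!-\!CO\!-\!NH\!-\!R'}
\]
```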
It is also possible to introduce hydrophilicity into the polymer molecule by using a modified chain extender rather than doing so in the polymer backbone or a pendant chain. Lower-viscosity materials, as well as higher solids contents, are often the result. A variation on this technique is to incorporate sulfonate groups. PUD/polyacrylate blends can also be prepared this way, utilizing internal emulsifiers. Cationic PUDs likewise have hydrophilic components, including phosphonium entities, introduced during synthesis. Various techniques have been, and are being, researched to improve performance and water-resistance properties; these include introducing star-branched polydimethylsiloxane. Published research shows that it is not the dispersion speed, mechanical agitation or high-shear mixing that has the biggest effect on properties, but rather the chemical makeup; however, particle size distribution can be controlled by these factors to some extent. Uses PUDs find use in coatings, adhesives, sealants and elastomers. Specific uses include industrial coatings, UV coating resins, floor coatings, hygiene coatings, wood coatings, adhesives, concrete coatings, automotive coatings, clear coatings and anticorrosive applications. They are also used in the design and manufacture of medical devices such as the polyurethane dressing, a liquid bandage based on polyurethane dispersion. To improve their functionality in flame-retardant applications, products are being developed which have this feature built into the polymer molecule. They have also found use in general textile applications such as coating nonwovens. Leather coatings with antibacterial properties have been synthesized using PUDs and silver nanoparticles. On a similar theme, recent (post-2020) innovations have included a waterborne polyurethane with embedded silver particles to combat COVID, and PUDs with antimicrobial properties more generally have been developed. Weaknesses and disadvantages Although they are perceived to have good environmental credentials, waterborne polyurethane dispersions tend to suffer from lower mechanical strength than other resins. The use of polycarbonate-based polyols in the synthesis can help overcome this weakness. Their wear and corrosion resistance is also not as good, and hence they are often hybridized. Other strategies used to overcome some of the weaknesses include molecular design and mixing or compounding with inorganic rather than polymeric materials. The use of an anionic or cationic center, or indeed a hydrophilic non-ionic manufacturing technique, tends to result in a permanent built-in weakness in water resistance; research is being conducted and techniques developed to combat this. Simple blending has also been employed. This has the advantage that, if no new molecule has been formed but existing registered raw materials are merely blended, the work required to register the material under various national regimes, such as REACH in Europe and TSCA in the United States, can be avoided. Because the surface tension of water is so high, pinholes and other air-entrainment problems tend to be more common and need special additives to combat them. PUDs also tend not to be manufactured with biobased polyols, because vegetable-based polyols lack performance-enhancing functional groups; modification is possible to achieve this and enable even greener versions. 
Drying, curing and cross-linking are also not usually as good, and hence research is proceeding in the area of post-crosslinking to improve these features. Hybrids The disadvantages of PUDs are being addressed by research, and hybridization using other materials and techniques is one such area. PUDs that are waterborne and UV-curable are being intensely researched, with well over 100 research papers produced in the 2000–2020 period. Waterborne PUD–acrylates based on epoxidized soybean oil that are also UV-curable have been produced and are feasible; the nature of the acrylate affects the properties. One use of hybrids is in textile finishes. As ionic centers are introduced with waterborne PUDs, the water resistance and water uptake of the final film have been studied extensively. The nature of the polyol, the level of COOH groups, and hydrophobic modification with other moieties can improve this property. Polyester polyols give the biggest improvements. Polycarbonate polyols also enhance properties, especially if the polycarbonate is also fluorinated. Reinforcing PUDs with nanomaterials also improves properties, as does silicone modification. To make PUDs more hydrophobic and water-repellent, and thus remove a weakness, a number of techniques have been researched. One way is to add hydroxyethyl acrylate to the polyol reacting with isocyanate. Once the PUD is made, it will have terminal double-bond functionality from the acrylate. This may then be copolymerized with a very hydrophobic acrylate, such as stearyl acrylate, using free-radical techniques; the long alkyl chain introduced confers hydrophobicity. Another method of hybridization is to make a PUD that is anionic but with a very substantial nonionic modification utilizing a polyether polyol based on ethylene oxide. In addition, a silicone diol may be incorporated. As epoxy resins have some outstanding properties, research using epoxy to modify PUDs is taking place. PUDs that are based on thiol rather than hydroxyl, and also modified with both acrylate and epoxy functionality, have been produced and researched. As PUDs are resins dispersed in water, when cast as a film and dried they are inherently high-gloss. They can be designed to be matte/flat by incorporating siloxane functionality. Since PUDs are usually considered green and environmentally friendly, techniques being researched also include capturing carbon dioxide from the atmosphere to make the raw materials for further synthesis. See also Coatings Polymer science Synthetic resin Water Waterborne resins References External links Alberdink and Boley Website Lubrizol website Mitsui American General Info BASF Perstorp Range Incorez range Halox includes formulations DOW literature with overview Plastics Wood finishing materials Adhesives Coatings Elastomers Polymer chemistry Synthetic resins
Polyurethane dispersion
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,028
[ "Synthetic resins", "Synthetic materials", "Coatings", "Unsolved problems in physics", "Materials science", "Elastomers", "Polymer chemistry", "Amorphous solids", "Plastics" ]
58,222,565
https://en.wikipedia.org/wiki/Thomas%20H.%20Haines
Thomas Henry Haines (August 9, 1933 – December 17, 2023) was an American author, social activist, biochemist and academic. He was a professor of chemistry at City College of New York and of biochemistry at the Sophie Davis School of Biomedical Education, and a visiting professor in the laboratory of Thomas Sakmar at Rockefeller University. He also served on the board of the Graham School, a social services and foster care agency in New York City. His scientific research focused on the structure and function of the living cell membrane. He was the father of Avril Haines, the seventh Director of National Intelligence. Early life and education Thomas Haines was born on August 9, 1933, to Elsie Cubbon Haines (1894–1955) and Charles Haines, who deserted the family when Haines was two. In 1937, "by reason of the insanity of the mother", a judge placed him at the Graham School, an orphanage in Hastings-on-Hudson, New York. The orphanage, now a social services and foster care agency, was founded in 1806 by Isabella Graham and Elizabeth Hamilton, the recently widowed wife of Alexander Hamilton. Haines remained at the orphanage until high school, when he became a resident houseboy and gardener for a wealthy Hastings family. The story of Haines's early life appears as "From the Orphanage to the Lab" in the Story Collider podcast and in his autobiography with Mindy Lewis, A Curious Life: From Rebel Orphan to Innovative Scientist. Haines attended the City College of New York, earning a B.S. in chemistry in 1957 and an M.S. in education in 1959. During that time he worked as a live-in babysitter for the then-blacklisted American songwriter Jay Gorney (co-writer with Yip Harburg of the Depression-era anthem "Brother, Can You Spare a Dime?") and his wife Sondra. There Haines came to know many other blacklisted professionals, including actors Zero Mostel, Paul Robeson, and Lionel Stander, philosopher Barrows Dunham, and Bella Abzug, then a young lawyer defending blacklisted artists and intellectuals at HUAC hearings. Career After CCNY, Haines taught elementary school science at the Ethical Culture Fieldston School. He then became a laboratory assistant to Richard Block at the Boyce Thompson Institute, where he studied the microorganism Ochromonas danica. When Block died in a plane crash, Haines took over his research projects. In 1964 he obtained his Doctor of Philosophy degree in chemistry from Rutgers University. Haines became assistant professor of chemistry at City College in 1964 and full professor of chemistry in 1972, a position he held until retiring in 2007. In 1972 he co-founded the Sophie Davis School of Biomedical Education with university president Robert Marshak; the program admitted new undergraduates directly into a medical-school track, and it continues today as the CUNY School of Medicine. Haines taught biochemistry to undergraduates and served as director of biochemistry at the school from 1974 to 2006. Deeply committed to his students, he also taught remedial summer school and regularly counseled struggling students and their parents. On many occasions he was voted most popular professor. Haines simultaneously conducted laboratory research and taught as professor of biochemistry in the doctoral program in biochemistry at the Graduate Center of the City University of New York. 
He published extensively on the structure and function of living membranes, including on the function of cholesterol in blocking sodium leakage through membranes and, later, on the function of cardiolipin in the mitochondrial membrane. From 1994 to 2001, Haines chaired the Partnership for Responsible Drug Information, which organized lectures and conferences to educate the public about alternatives to the "War on Drugs." Haines served as a visiting professor at the Mitsubishi Institute in Japan, at the University of California at Berkeley, and at many other universities. On his retirement from CCNY, he became a visiting professor of biochemistry at the Sakmar Laboratory at Rockefeller University. In 2020, Haines was elected a Fellow of the American Association for the Advancement of Science "For initiating and setting up the CUNY Medical School at City College of New York to educate minority and disadvantaged students." Personal life In 1960, Haines married painter Adrienne Rappaport, who used the name Adrian Rappin professionally. They had one daughter, Avril Haines, an attorney who served as Director of National Intelligence in the Biden administration. Rappaport died in 1985 after developing chronic obstructive pulmonary disease and later contracting avian tuberculosis. In 1986, Haines married economist Mary "Polly" Cleveland. In 1964, Haines and Rappaport had purchased two small run-down rent-controlled apartment buildings on New York's Upper West Side for $140,000, with $10,000 down, and for a time employed Al Pacino as the building superintendent. When Haines and Cleveland sold the buildings for many millions of dollars in 2009, they put half the net proceeds into a foundation for the benefit of scientific and economic education. Haines died in New York on December 17, 2023, at the age of 90. External links References 1933 births 2023 deaths American biochemists City College of New York alumni City College of New York faculty Cornell University staff Rutgers University alumni CUNY Graduate Center faculty People from New York (state) Biochemistry educators 20th-century American biochemists Membrane proteins
Thomas H. Haines
[ "Chemistry", "Biology" ]
1,105
[ "Biochemistry", "Membrane proteins", "Protein classification", "Biochemistry educators" ]
63,465,236
https://en.wikipedia.org/wiki/Shvab%E2%80%93Zeldovich%20formulation
The Shvab–Zeldovich formulation is an approach to remove the chemical-source terms from the conservation equations for energy and chemical species by taking linear combinations of independent variables, when the conservation equations are expressed in a common form. Expressing the conservation equations in a common form often limits the range of applicability of the formulation. The method was first introduced by V. A. Shvab in 1948 and by Yakov Zeldovich in 1949. Method For simplicity, assume combustion takes place in a single global irreversible reaction $\sum_{i=1}^N \nu_i' \mathfrak{M}_i \rightarrow \sum_{i=1}^N \nu_i'' \mathfrak{M}_i$, where $\mathfrak{M}_i$ is the $i$th chemical species of the $N$ total species and $\nu_i'$ and $\nu_i''$ are the stoichiometric coefficients of the reactants and products, respectively. Then it can be shown from the law of mass action that the rate of moles produced per unit volume is the same for every species and is given by $\omega = w_i/[W_i(\nu_i''-\nu_i')]$, where $w_i$ is the mass of species $i$ produced or consumed per unit volume and $W_i$ is the molecular weight of species $i$. The main approximation involved in the Shvab–Zeldovich formulation is that the binary diffusion coefficients of all pairs of species are the same and equal to the thermal diffusivity $D_T$; in other words, the Lewis numbers of all species are constant and equal to one. This puts a limitation on the range of applicability of the formulation since, in reality, except for methane, ethylene, oxygen and some other reactants, Lewis numbers vary significantly from unity. The steady, low-Mach-number conservation equations for the species and energy, written in terms of the rescaled independent variables $\alpha_i = Y_i/[W_i(\nu_i''-\nu_i')]$ and $\alpha_T = \int_{T^0}^{T} c_p\,dT \big/ \sum_{i=1}^N h_i^0 W_i(\nu_i'-\nu_i'')$, where $Y_i$ is the mass fraction of species $i$, $c_p$ is the specific heat at constant pressure of the mixture, $T$ is the temperature and $h_i^0$ is the formation enthalpy of species $i$, reduce to the common form $\nabla\cdot[\rho\mathbf{v}\,\alpha_j - \rho D_T\nabla\alpha_j] = \omega$ for $\alpha_j = \alpha_T, \alpha_1, \dots, \alpha_N$, where $\rho$ is the gas density and $\mathbf{v}$ is the flow velocity. The above set of nonlinear equations, expressed in a common form, can be replaced with $N$ linear equations and one nonlinear equation. Suppose the nonlinear equation corresponds to $\alpha_1$, so that $\nabla\cdot[\rho\mathbf{v}\,\alpha_1 - \rho D_T\nabla\alpha_1] = \omega$; then, by defining the linear combinations $\beta_T = \alpha_T - \alpha_1$ and $\beta_i = \alpha_i - \alpha_1$ with $i\neq 1$, the remaining governing equations become $\nabla\cdot[\rho\mathbf{v}\,\beta_j - \rho D_T\nabla\beta_j] = 0$ for $\beta_j = \beta_T, \beta_2, \dots, \beta_N$. The linear combinations automatically remove the nonlinear reaction term from the above equations. Shvab–Zeldovich–Liñán formulation The Shvab–Zeldovich–Liñán formulation was introduced by Amable Liñán in 1991 for diffusion-flame problems in which the chemical time scale is infinitely small (the Burke–Schumann limit), so that the flame appears as a thin reaction sheet. The reactants can have Lewis numbers that are not necessarily equal to one. Suppose the non-dimensional scalar equations for the fuel mass fraction $Y_F$ (defined such that it takes a unit value in the fuel stream), the oxidizer mass fraction $Y_O$ (defined such that it takes a unit value in the oxidizer stream) and the non-dimensional temperature $T$ (measured in units of the oxidizer-stream temperature) are given by $\nabla\cdot[\rho\mathbf{v}\,Y_F - (\rho D_T/Le_F)\nabla Y_F] = -\omega$, $\nabla\cdot[\rho\mathbf{v}\,Y_O - (\rho D_T/Le_O)\nabla Y_O] = -S\,\omega$ and $\nabla\cdot[\rho\mathbf{v}\,T - \rho D_T\nabla T] = q\,\omega$, with a reaction rate of Arrhenius form $\omega \propto Da\,Y_F Y_O\,e^{-T_a/T}$, where $Da$ is the appropriate Damköhler number, $S$ is the mass of oxidizer stream required to burn a unit mass of fuel stream, $q$ is the non-dimensional amount of heat released per unit mass of fuel stream burnt and $T_a$ is the Arrhenius exponent. Here, $Le_F$ and $Le_O$ are the Lewis numbers of the fuel and oxygen, respectively, and $D_T$ is the thermal diffusivity. In the Burke–Schumann limit, $Da\to\infty$, leading to the equilibrium condition $Y_F Y_O = 0$. In this case, the reaction terms on the right-hand side become Dirac delta functions. To solve this problem, Liñán introduced two coupling functions: a mixture-fraction-like variable built from the linear combination $S Y_F - Y_O$, and an excess-enthalpy variable built from $T$ and the reactant mass fractions. In these definitions, $T_0$ denotes the fuel-stream temperature and $T_s$ the adiabatic flame temperature, both measured in units of the oxidizer-stream temperature. 
Introducing these functions reduces the governing equations to source-free convection–diffusion equations for the coupling functions, in which the diffusivity enters through a mean (or effective) Lewis number $Le_m$. The relations between the temperature and the reactant mass fractions on either side of the flame can be derived from the equilibrium condition. At the stoichiometric surface (the flame surface), both $Y_F$ and $Y_O$ are equal to zero, and the coupling functions there determine the flame temperature (measured in units of the oxidizer-stream temperature), which is, in general, not equal to the adiabatic flame temperature $T_s$ unless the reactant Lewis numbers are unity. On the fuel stream, since $Y_F = 1$, $Y_O = 0$ and $T = T_0$ there, the coupling functions take one set of known boundary values; similarly, on the oxidizer stream, since $Y_F = 0$, $Y_O = 1$ and $T = 1$ there, they take another. The equilibrium condition $Y_F Y_O = 0$ distinguishes the fuel side of the flame sheet from the oxidizer side, and the above relations define the reactive scalars as piecewise functions of the coupling functions, with coefficients involving the mean Lewis number $Le_m$. This leads to a nonlinear equation for one of the coupling functions. Since the reactive scalars are functions of the coupling functions alone, the above expressions can be used to close the formulation, and with appropriate boundary conditions for the coupling functions, the problem can be solved. It can be shown that the coupling functions are conserved scalars, that is, their derivatives are continuous when crossing the reaction sheet, whereas $Y_F$, $Y_O$ and $T$ have gradient jumps across the flame sheet. References Combustion Fluid dynamics
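The essence of the classical Shvab–Zeldovich coupling-function trick can be seen in two lines. The sketch below restates the common-form equations given above (under the unity-Lewis-number approximation) and subtracts them; the notation follows the reconstruction used in this article.

```latex
% Both rescaled variables satisfy the same convection-diffusion equation
% with the same chemical source term:
\[
\nabla\cdot\left(\rho\mathbf{v}\,\alpha_T-\rho D_T\nabla\alpha_T\right)=\omega,
\qquad
\nabla\cdot\left(\rho\mathbf{v}\,\alpha_1-\rho D_T\nabla\alpha_1\right)=\omega .
\]
% Subtracting eliminates the source: the coupling function
% beta_T = alpha_T - alpha_1 obeys a linear, source-free equation.
\[
\nabla\cdot\left(\rho\mathbf{v}\,\beta_T-\rho D_T\nabla\beta_T\right)=0
\]
```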
Shvab–Zeldovich formulation
[ "Chemistry", "Engineering" ]
916
[ "Piping", "Chemical engineering", "Combustion", "Fluid dynamics" ]
73,587,993
https://en.wikipedia.org/wiki/Blockscale
Intel Blockscale was a brand of crypto-mining accelerator ASIC sold by the U.S. chip manufacturer Intel. The Blockscale product debuted in June 2022 and was cancelled by Intel in April 2023; Intel stated that it would continue to supply chips to existing customers until April 2024. The Blockscale chips were SHA-256 hardware accelerators designed for proof-of-work calculations. According to Intel, they were capable of up to 580 GH/s at a power consumption of up to 22.7 W, with a claimed efficiency of 26 J/TH. The product came in three variants: the Blockscale 1120, 1140, and 1160. References Intel products Hardware acceleration Cryptographic hardware Bitcoin 2022 establishments in the United States 2023 disestablishments in the United States
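Mining-ASIC efficiency in joules per terahash is simply power divided by hashrate; the sketch below shows the unit conversion. Note that the "up to" figures quoted above need not describe a single operating point, so multiplying them together need not reproduce the quoted efficiency; the numbers are taken from the paragraph purely to illustrate the arithmetic.

```python
# Convert an ASIC's power draw and hashrate into an efficiency figure.
# J/TH = W / (TH/s), since 1 W = 1 J/s.

def efficiency_j_per_th(power_w: float, hashrate_ghs: float) -> float:
    """Efficiency in joules per terahash from watts and GH/s."""
    hashrate_ths = hashrate_ghs / 1000.0  # 1 TH/s = 1000 GH/s
    return power_w / hashrate_ths

# Peak-power and peak-hashrate figures from the article (not necessarily
# achieved simultaneously):
print(efficiency_j_per_th(22.7, 580.0))  # ~39.1 J/TH at these two peaks
```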
Blockscale
[ "Technology" ]
178
[ "Computing stubs", "Hardware acceleration", "Computer hardware stubs", "Computer systems" ]
73,592,636
https://en.wikipedia.org/wiki/Anammox%20for%20wastewater%20treatment
Anammox is a wastewater treatment technique that removes nitrogen using anaerobic ammonium oxidation (anammox). This process is performed by anammox bacteria, which are autotrophic, meaning they do not need organic carbon for their metabolism to function. Instead, the metabolism of anammox bacteria converts ammonium and nitrite into dinitrogen gas. Wastewater treatment facilities are in the process of implementing anammox-based technologies to further enhance ammonia and nitrogen removal. Morphology and physiology Anammox bacteria can be found in wastewater treatment plants, lakes, suboxic zones, and coastal sediments. Anammox bacteria are temperature-dependent, requiring temperatures between 30 °C and 40 °C to grow. Their growth is also affected by pH; they grow best at pH values of 6.5 to 8.3. Anammox bacteria are made up of an anammoxosome membrane compartment, which takes up 50% to 70% of the cell volume, and a cell membrane surrounded by ladderane lipids. Chemical process The two main chemicals needed for the metabolism of anammox bacteria to function are ammonium and nitrite. Nitrate and nitrite are produced by microorganisms within wastewater treatment facilities as a result of sewage treatment; the enzyme ammonia monooxygenase initiates the conversion of ammonia in wastewater into nitrite during the nitrification process. Anaerobic ammonium oxidation (anammox) reactions are mediated by chemoautotrophic bacteria from the phylum Planctomycetota. The anammoxosome is the compartment within anammox bacteria where anammox reactions occur. During this process, a proton gradient is produced across the anammoxosome membrane, driving the catabolic reaction. Nitrite is first reduced to nitric oxide by a nitrite reductase, the first step in this reaction; nitrite can also be reduced to hydroxylamine. Hydroxylamine and ammonium then react to form hydrazine, which is then oxidized into nitrogen gas. (Figure: the chemical reaction for anammox, the conversion of ammonium and nitrite to nitrogen gas; the overall stoichiometry is sketched below.) Impacts on wastewater treatment Wastewater usually exists as a mix of solid and liquid forms. The composition of wastewater varies depending on how it has been generated. "Wastewater" may refer to domestic wastewater, wastewater from industry, or surface-water runoff. Treatment of wastewater to improve sanitation is a major challenge in developing countries, as untreated wastewater can contaminate drinking water. Anammox bacteria treatments have been implemented in treatment facilities to help convert sewage wastewater into sludge ash, which is then used as a fertilizer source for agriculture. Sludge ash can be used as fertilizer due to its rich concentration of phosphorus and other nutrients necessary for plant growth. The crystallization of struvite (made up of magnesium, ammonium, and phosphate) during the wastewater treatment process also yields a usable fertilizer. The addition of magnesium to wastewater that already contains ammonium and phosphate allows the three components to combine in a 1:1:1 mole ratio, binding to one another so that struvite forms as a product (see the reaction sketched below). The struvite crystals contain nutrients essential to plant growth and are easy to use and transport. This process also helps to recover nitrogen and phosphorus from wastewater, helping to improve surface water quality, as these are two of the primary nutrients that can cause eutrophication.  
Where eutrophication occurs, an anammox cycle can take place in the absence of oxygen when nitrite and ammonium concentrations are high. These two compounds are needed for the anammox cycle to begin, and both are present in wastewater at high concentrations, so the anammox bacteria present can help strip wastewater of excess nitrite and ammonium. References Anaerobic digestion Environmental microbiology
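For reference, the overall anammox catabolic reaction and the struvite precipitation described above can be written as simple stoichiometric equations; the simplified anammox form below omits the small nitrate yield that accompanies bacterial growth.

```latex
% Simplified overall anammox reaction (growth-associated nitrate
% production omitted):
\[
\mathrm{NH_4^+ + NO_2^- \;\longrightarrow\; N_2 + 2\,H_2O}
\]
% Struvite precipitation at the 1:1:1 mole ratio described above:
\[
\mathrm{Mg^{2+} + NH_4^+ + PO_4^{3-} + 6\,H_2O \;\longrightarrow\;
MgNH_4PO_4\cdot 6H_2O\!\downarrow}
\]
```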
Anammox for wastewater treatment
[ "Chemistry", "Engineering", "Environmental_science" ]
804
[ "Water technology", "Environmental microbiology", "Anaerobic digestion", "Environmental engineering" ]
73,595,058
https://en.wikipedia.org/wiki/Cervical%20drug%20delivery
Cervical drug delivery is a route of carrying drugs into the body through the vagina and cervix. This is a form of localized drug delivery that prevents the drugs from affecting unintended areas of the body, which can lower the side effects of toxic drugs such as chemotherapeutics. Cervical drug delivery has specific applications for a variety of female health issues: treatment of cervical cancer, pregnancy prevention, STD prevention, and STD treatment. Biological considerations Cervical mucus Viscous mucus secreted by glands in the cervix presents a unique environment for drug delivery. Due to its ability to retain substances and slowly release them, it holds potential to be used as a natural, noninvasive drug delivery system. The mucus can act as a reservoir for compounds that destroy pathogens. However, the cervical mucus also presents a barrier to drug delivery, as it can be highly viscous, making the barrier difficult to permeate. The mechanisms of penetration and bioactivity in the cervical mucus must be understood to utilize the mucus's potential as a drug delivery system. This is a particularly complex consideration because the viscosity and water content of the mucus change through the stages of the menstrual cycle. For example, the cervical mucus is thicker when a woman is not ovulating, in order to prevent sperm from penetrating the mucus barrier, which in turn also makes it more difficult for drug delivery systems to penetrate. Vaginal pH levels The vaginal environment is slightly acidic, with pH ranging from 3.8 to 4.5 based on multiple factors such as age, natural bacteria, and stage of the menstrual cycle. This variety of possible pH values poses an interesting consideration for drug delivery: absorption and release of drugs are often influenced by pH, so if the pH changes through the menstrual cycle, different formulations could be needed at different times to achieve the most effective drug delivery. Applications Birth control Some hormonal birth control methods utilize cervical drug delivery. The earliest known example dates to around 1850 BC in Ancient Egypt, when acacia gum was inserted into the vagina, releasing spermicidal components. Examples in the modern era include vaginal rings and intrauterine devices (IUDs) that release hormones into the reproductive system to prevent fertilization. Vaginal rings are plastic ring-shaped devices that are inserted into the vaginal canal and slowly release hormones such as estradiol or progestin. Hormonal IUDs are T-shaped devices inserted into the uterus, releasing progestin over extended periods of time. These hormones work to thicken the cervical mucus to prevent sperm from penetrating and reaching the fallopian tubes. Copper IUDs are another form of intrauterine device and release copper ions instead of hormones; the copper ions are toxic to sperm and therefore prevent successful fertilization. Cervical cancer treatment The most common application of cervical drug delivery is the treatment of cervical cancer. Because cervical drug delivery provides a direct route to the cervix, it can be the most effective route with the fewest side effects. Localized treatment is considered ideal because cancer is treated with highly toxic compounds, such as chemotherapeutics: the more contained the exposure to these compounds, the fewer negative impacts the patient will endure. 
Treatment can be delivered in the form of nanoparticles, vaginal gels, or films, which reach the cervix quickly for an ideal response. Vaginal gels are easily administered into the vaginal canal to reach the cervix due to their low viscosity at room temperature. When inserted into the body, which is at a higher temperature, the gels become more viscous, allowing them to reside longer at the cervix and provide more sustained release. Vaginal films are very thin films inserted into the vagina to release a compound. They can be maintained for six hours in cervical mucus, meaning they hold potential to treat cervical cancer caused by human papillomavirus (HPV). Nanoparticle systems take advantage of the size of nanoparticles to encapsulate the drugs and pass through the mucus barrier. Sexually transmitted diseases Cervical drug delivery serves as a route for compounds to prevent and treat sexually transmitted diseases (STDs). Prevention of STDs in women can be achieved by administering preventative compounds into the vaginal canal prior to intercourse. One method researched is the use of a CAP technology that remains stable in the vaginal environment but breaks down in the presence of human semen, releasing a drug that inactivates STD pathogens. This application of cervical drug delivery would be useful for preventing STDs in women without interfering with the bodily environment until there is potential for infection. STDs can also be treated through cervical drug delivery methods: antibiotics for STD treatment are often administered into the vagina in the form of creams or gels. Fertility treatment Hormones to increase fertility are also often delivered through the cervical route. While this is less common than applications for birth control, it essentially utilizes the same concepts. Hormones influence the natural menstrual cycle, so instead of using hormones that prevent ovulation, hormones that encourage ovulation are used, such as follicle-stimulating hormone (FSH) and luteinizing hormone (LH). Fertility lubricants are the most common example of fertility treatments delivered through the cervical route. These lubricants are designed to mimic a pH and viscosity that are conducive for sperm to reach the fallopian tube. Studies are being done to combine the properties of fertility lubricants with fertility-enhancing hormones, leading to more direct and efficient treatment. Current products Vaginal rings Vaginal rings are most commonly used for birth control but can also be used to release compounds that treat and prevent STDs. The rings come in one standard size that fits most women and are made of flexible materials that contain the desired compound, whether hormones for birth control or other compounds for STD treatment. These substances are then slowly released over an extended period of time, typically a month. This is a convenient drug delivery method because the rings can easily be inserted and removed and do not prohibit intercourse. For birth control purposes, the vaginal ring is removed after three weeks and a new one is inserted a week later. Vaginal rings are also often used to treat symptoms of menopause; these rings are replaced after three months of use. STD prevention and treatment through the use of vaginal rings is a newer application of the device, but holds an advantage as a low-maintenance option for women in areas with less access to regular healthcare. 
Suppositories Vaginal suppositories are medications inserted into the vagina in solid form that melt from body heat to release their substances. Common uses include treating yeast or bacterial infections. Hormones are often delivered in this form for treatment of menopausal or menstrual issues. Spermicide can also be delivered as a suppository, inserted prior to intercourse for birth control purposes. Insertion of vaginal suppositories uses a mechanism and applicator similar to that of a tampon; if no applicator is available, they can be inserted with two fingers, pushing them into the vaginal canal as far as is comfortable. Vaginal gels Vaginal gels are water-based forms of medication. They are applied using a plastic applicator to distribute the gel throughout the length of the vaginal canal. These gels tend to have fast-acting release kinetics, which makes them useful for treating irritations. Antibiotics are often distributed in the form of a gel for treatment of common infections, including sexually transmitted infections (STIs). The gels also have the benefit of being lubricating, which grants additional relief from the dryness and itching that are common with vaginal infections. Gels with a liposomal structure have been shown to retain substances for extended periods of time, making them useful for the slow release of drugs administered through the cervical drug delivery route. New technologies Vaginal films Vaginal films are soluble, thin sheets of medication that are inserted into the vagina, where they dissolve and release their substances. Typical uses include delivery of contraceptive substances or antibiotics for infections. These films are inserted by simply pushing them into the vagina with one's fingers; there they dissolve quickly upon interaction with natural vaginal fluids. Bioadhesives Bioadhesives are substances that naturally stick to living tissue. Studies are currently being done on bioadhesives that adhere to the cervical mucous membranes to allow an extended release period: the drug would be released along with mucus to create a highly effective localized treatment, which is anticipated to be especially useful for treatment of cervical cancer. Bioadhesives can come in multiple forms, such as films, tablets, or gels. The appeal of bioadhesives is that they take advantage of the mucous environment surrounding the cervix and utilize it to benefit the drug delivery mechanism. References Drug delivery devices Cervical cancer
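The contrast drawn above between fast-acting gels and month-long rings is, at heart, a difference in release kinetics. The sketch below compares an idealized zero-order profile (constant rate, roughly how a ring behaves) with a first-order profile (rate proportional to remaining drug, closer to a fast-releasing gel); every number in it is hypothetical and purely illustrative.

```python
import math

# Idealized release profiles; the dose and rate constants below are
# invented for illustration and do not describe any real product.
DOSE_MG = 10.0

def zero_order_released(t_days: float, rate_mg_per_day: float = DOSE_MG / 28) -> float:
    """Constant-rate release, e.g. a vaginal ring sized to last ~28 days."""
    return min(DOSE_MG, rate_mg_per_day * t_days)

def first_order_released(t_days: float, k_per_day: float = 4.0) -> float:
    """Release proportional to remaining drug, e.g. a fast-acting gel."""
    return DOSE_MG * (1.0 - math.exp(-k_per_day * t_days))

for t in (0.25, 1, 7, 28):
    print(f"day {t:>5}: ring {zero_order_released(t):5.2f} mg, "
          f"gel {first_order_released(t):5.2f} mg released")
```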
Cervical drug delivery
[ "Chemistry" ]
1,894
[ "Pharmacology", "Drug delivery devices" ]
73,596,283
https://en.wikipedia.org/wiki/Zeolite%20membrane
A zeolite membrane is a synthetic membrane made of crystalline aluminosilicate materials, typically aluminum, silicon, and oxygen with positive counterions such as Na+ and Ca2+ within the structure. Zeolite membranes serve as a low-energy separation method. They have recently drawn interest due to their high chemical and thermal stability and their high selectivity. Zeolite membranes have seen applications in gas separation, membrane reactors, water desalination, and solid-state batteries, but they have yet to be widely implemented commercially due to key issues including low flux, high cost of production, and defects in the crystal structure. Production methods Several methods are used for the formation of zeolite membranes. In the in situ method, zeolite membranes are formed on microporous supports of various materials, typically aluminum oxide or stainless steel. These supports are immersed in a solution of aluminum and silicon at a specific stoichiometric ratio. Other factors of this solution can affect the formation of the zeolite membrane, including pH, ionic strength, temperature, and the addition of structure-directing agents. Upon heating the solution, the crystals of the membrane begin to grow on the supports. In 2012, a "seeding method" was developed to produce zeolite membranes. In this case, the support is seeded with preformed zeolite crystals before being immersed in the solution. By growing the membranes off existing structures, these seed crystals allow the formation of thinner membranes that typically contain fewer defects. Properties Zeolite membranes drew initial interest as a separation method due to their high thermal and chemical stabilities. The crystal structure of zeolite membranes also creates a uniform pore size of approximately 0.3–1.3 nm in diameter. Real zeolite membranes, however, typically contain defects, which can often create gaps in the structure larger than these pores. The presence of defects can make these membranes far less effective, and it is difficult to produce defect-free zeolite membranes. Several transport mechanisms govern the separation of molecules by zeolite membranes; the main ones are molecular sieving, diffusion, and adsorption. Molecular sieving involves the rejection of any molecules of a size greater than the pore size of the membrane; this is a relatively simple sieving process, which can separate out very large molecules. Adsorption involves molecules passing through the pores of the membrane being adsorbed onto the membrane surface. Adsorption properties of the membranes can be changed by adjusting various structural properties of the membrane. Surface diffusion is a process in which molecules adsorb to the pore wall of the membrane and are slowly transported through the pores. During surface diffusion, molecules that are adsorbed at a higher rate can begin to block the membrane pores from other, less strongly adsorbed molecules. Surface diffusion can account for the high selectivity of zeolite membranes towards certain molecules such as hydrogen, and it typically plays a larger role in transport at lower temperatures. Knudsen diffusion also contributes to the varying selectivity of zeolite membranes towards different molecules. Knudsen diffusion takes place when molecules are momentarily adsorbed to the pore wall and are then reflected off the surface in a random direction. 
This random motion allows for the separation of molecules based on their velocities. Graham's law of diffusion dictates that lighter molecules will have a higher average velocity than heavier molecules, resulting in an increased flux of the lighter molecules; these differences in flux can be used to separate different molecules using zeolite membranes. Applications Gas separation Zeolite membranes have shown the most promise in gas separation applications. The ability of zeolite membranes to adsorb certain molecules to their surface under varying conditions allows researchers to perform highly selective separations. Adsorbed molecules block diffusion pores and prevent the diffusion of other molecules through those pores. Zeolites typically adsorb carbon dioxide at the highest rate, lending themselves to use in carbon dioxide capture and separation. Diffusion selectivity governs the separation of molecules in zeolite membranes at higher temperatures, allowing quicker diffusion of smaller molecules through the membrane's pores and slower diffusion of larger molecules. The natural gas industry has seen the introduction of zeolite membranes for the separation of methane, carbon dioxide, and hydrogen gases. Zeolites provide the advantages of thermal stability and higher selectivity when compared to the polymer membranes that have typically been used for these purposes. The production of zeolite membranes, particularly its cost, still needs improvement before they see widespread use. Membrane reactors Zeolite membranes have also been used in membrane reactors, since their chemical and thermal stabilities allow them to withstand reaction conditions. Membrane reactors function by removing the product of a reaction as the reaction occurs. This removal shifts the equilibrium of the reaction to allow the formation of more product, as outlined by Le Chatelier's principle, creating a more efficient reaction process. The high selectivity of zeolite membranes allows them to be used to remove products from a reactor at high rates. Water desalination Zeolite membranes have recently been studied as an alternative for energy-efficient water desalination. Currently, water desalination is primarily done by reverse osmosis, which uses a dense polymeric membrane to purify the water. Zeolite membranes have been tested as an alternative water purification method and are able to separate water from impurities. Zeolites have not been implemented for industrial water desalination primarily due to their high cost compared to traditional reverse osmosis membranes. References Membrane technology Filtration Zeolites
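The Knudsen mechanism described above implies a simple upper-bound estimate of selectivity: by Graham's law, the ideal Knudsen selectivity of a light gas over a heavy one is the square root of their inverse molar-mass ratio. A minimal sketch:

```python
import math

# Ideal Knudsen selectivity: the flux ratio of two gases moving through
# pores by Knudsen diffusion scales as sqrt(M_heavy / M_light).

MOLAR_MASS_G_MOL = {"H2": 2.016, "CH4": 16.04, "CO2": 44.01, "N2": 28.01}

def knudsen_selectivity(light: str, heavy: str) -> float:
    """Ideal selectivity of `light` over `heavy` under pure Knudsen flow."""
    return math.sqrt(MOLAR_MASS_G_MOL[heavy] / MOLAR_MASS_G_MOL[light])

print(f"H2/CH4: {knudsen_selectivity('H2', 'CH4'):.2f}")  # ~2.82
print(f"H2/CO2: {knudsen_selectivity('H2', 'CO2'):.2f}")  # ~4.67
```

Real zeolite membranes often exceed these ideal Knudsen values because molecular sieving and selective adsorption act alongside Knudsen flow.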
Zeolite membrane
[ "Chemistry" ]
1,192
[ "Membrane technology", "Filtration", "Separation processes" ]
64,852,005
https://en.wikipedia.org/wiki/Tyndall%27s%20bar%20breaker
Tyndall's bar breaker is a physical demonstration experiment that shows the forces created by thermal expansion and shrinkage. It was demonstrated in 1867 by the Irish scientist John Tyndall in his Christmas lectures for a "juvenile auditory". Setup The bar breaker comprises a very rigid frame (d) and a massive connecting rod (b). The rod is held on one side by a cast iron bar (c), which is going to be broken in the experiment, and at the other end by a nut (a) that is used to compensate for the thermal expansion. Procedure During the experiment, the steel rod (b) is heated with a flame (e) up to red-heat temperature. During the heating phase, the thermal expansion of the rod (b) is compensated for by tightly fastening the nut (a). Taking away the flame starts the cooling phase; as the rod contracts, the bar (c) typically breaks within a few minutes with a loud bang, or is at least deformed significantly. References Media Physics experiments Physics education
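The magnitude of the force involved can be estimated from the thermal stress in a fully constrained rod, σ = EαΔT. The numbers below are typical textbook values for steel and cast iron, assumed here because the article does not specify the apparatus dimensions or alloys:

```latex
% Thermal stress in a rigidly constrained rod that cools by \Delta T:
\[
\sigma = E\,\alpha\,\Delta T
\]
% With typical values for a steel rod cooling from red heat,
% E ~ 200 GPa, alpha ~ 12e-6 per K, Delta T ~ 500 K:
\[
\sigma \approx 200\times10^{9}\;\mathrm{Pa}\cdot 12\times10^{-6}\,\mathrm{K^{-1}}
\cdot 500\,\mathrm{K} \approx 1.2\,\mathrm{GPa},
\]
% far above the tensile strength of cast iron (roughly 0.1--0.4 GPa),
% which is why the cast iron bar fractures during cooling.
```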
Tyndall's bar breaker
[ "Physics" ]
206
[ "Applied and interdisciplinary physics", "Experimental physics", "Physics experiments", "Physics education" ]
64,858,972
https://en.wikipedia.org/wiki/OroraTech
OroraTech is a German aerospace start-up company providing wildfire monitoring using nanosatellites. It was founded in 2018 as a university spin-off of the Technical University of Munich (TUM). The headquarters are in Munich, Germany. In June 2023, OroraTech joined the Copernicus Programme of the European Space Agency. History OroraTech's key idea had been developed during the MOVE-II CubeSat project and within the WARR student group at TUM. Starting as a spin-off in January 2017, the company was incorporated as Orbital Oracle Technologies GmbH (short: OroraTech) in September 2018. Since OroraTech's technology is based on academic research at TUM, TUM professors Ulrich Walter, a former astronaut, and Alexander W. Koch act as advisors to the company. Technology Wildfire detection using infrared sensors in space has been proposed as a technology since the 1990s. Technological advances, notably reduced space launch costs, enabled non-state actors to enter the market. OroraTech operates a software platform for the detection and monitoring of wildfires based on measuring thermal-infrared radiation from space. The company uses data from existing satellites and is developing its own constellation of 3U CubeSats with thermal-infrared cameras to further improve the temporal and spatial resolution of fire detection. The software platform generates various overlays on base maps to visualize fire risk and fire detections. At the current stage, the platform uses data from twelve satellites in polar and geostationary orbits, including those operated by NASA, ESA, and EUMETSAT. In early 2020, the platform had around 100 active users. The satellite technology is based on research from the MOVE-II project at the Chair of Astronautics (LRT) at TUM. During that project, a 1-unit CubeSat was launched with SpaceX in December 2018. OroraTech's first nanosatellite, based on the original CubeSat, was developed to reach 10 cm × 10 cm × 34 cm in size, weighing around 1.2 kg, and it was launched on 13 January 2022 as part of SpaceX's Transporter-3 rideshare mission. The satellite features an uncooled thermal-infrared imager for space applications and GPU-accelerated on-orbit processing that reduces downlink latency and bandwidth for quicker wildfire alert dissemination, making it particularly effective at detecting wildfires in late-afternoon images. As of June 2022, the company planned to put its next eight satellites into orbit by the end of 2023, aiming for a detection time of 30 minutes. A second satellite, again hosted on a Lemur-2 CubeSat platform, was launched on 12 June 2023 on a Falcon 9 Block 5 rocket as part of SpaceX's Transporter-8 rideshare mission. Field application The technology is used by wildfire services in British Columbia (Canada) and New South Wales (Australia) for wildfire detection and wildfire suppression. International media used images from OroraTech's wildfire service in coverage of the 2020 wildfire season in California, Oregon, British Columbia, and Siberia. References Aerospace engineering organizations Remote sensing companies Scientific organisations based in Germany
OroraTech
[ "Engineering" ]
662
[ "Aeronautics organizations", "Aerospace engineering organizations", "Aerospace engineering" ]
64,862,660
https://en.wikipedia.org/wiki/Graham%E2%80%93Rothschild%20theorem
In mathematics, the Graham–Rothschild theorem is a theorem that applies Ramsey theory to combinatorics on words and combinatorial cubes. It is named after Ronald Graham and Bruce Lee Rothschild, who published its proof in 1971. Through the work of Graham, Rothschild, and Klaus Leeb in 1972, it became part of the foundations of structural Ramsey theory. A special case of the Graham–Rothschild theorem motivates the definition of Graham's number, a number that was popularized by Martin Gardner in Scientific American and listed in the Guinness Book of World Records as the largest number ever appearing in a mathematical proof. Background The theorem involves sets of strings, all having the same length $n$, over a finite alphabet $A$, together with a group $G$ acting on the alphabet. A combinatorial cube is a subset of strings determined by constraining some positions of the string to contain a fixed letter of the alphabet, and by constraining other pairs of positions to be equal to each other or to be related to each other by the group action. This determination can be specified more formally by means of a labeled parameter word, a string with wildcard characters in the positions that are not constrained to contain a fixed letter and with additional labels describing which wildcard characters must be equal or related by the group action. The dimension of the combinatorial cube is the number of free choices that can be made for these wildcard characters. A combinatorial cube of dimension one is called a combinatorial line. For instance, in the game of tic-tac-toe, the nine cells of a tic-tac-toe board can be specified by strings of length two over the three-symbol alphabet {1,2,3} (the Cartesian coordinates of the cells), and the winning lines of three cells form combinatorial lines. Horizontal lines are obtained by fixing the $y$-coordinate (the second position of the length-two string) and letting the $x$-coordinate be chosen freely, and vertical lines are obtained by fixing the $x$-coordinate and letting the $y$-coordinate be chosen freely. The two diagonal lines of the tic-tac-toe board can be specified by a parameter word with two wildcard characters that are either constrained to be equal (for the main diagonal) or constrained to be related by a group action that swaps the 1 and 3 characters (for the antidiagonal). The set of all combinatorial cubes of dimension $d$, for strings of length $n$ over an alphabet $A$ with group action $G$, is denoted $[A;G]\binom{n}{d}$. A subcube of a combinatorial cube is another combinatorial cube of smaller dimension that forms a subset of the set of strings in the larger combinatorial cube. The subcubes of a combinatorial cube can also be described by a natural composition action on parameter words, obtained by substituting the symbols of one parameter word for the wildcards of another. Statement With the notation above, the Graham–Rothschild theorem takes as parameters an alphabet $A$, group action $G$, finite number of colors $r$, and two dimensions of combinatorial cubes $m$ and $d$ with $m > d$. It states that, for every combination of $A$, $G$, $r$, $m$, and $d$, there exists a string length $N$ such that, if each combinatorial cube in $[A;G]\binom{N}{d}$ is assigned one of $r$ colors, then there exists a combinatorial cube in $[A;G]\binom{N}{m}$ all of whose $d$-dimensional subcubes are assigned the same color. An infinitary version of the Graham–Rothschild theorem is also known. 
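The tic-tac-toe example can be made concrete with a few lines of code. The sketch below (illustrative only; the encoding of parameter words as substitution functions is ad hoc) enumerates the eight winning lines as combinatorial lines over the alphabet {1,2,3}, with the main diagonal given by two equal wildcards and the antidiagonal by wildcards related through the swap 1↔3.

```python
# Cells of the tic-tac-toe board are length-2 strings "xy" over {1,2,3}.
# Each winning line is a combinatorial line: a parameter word with one
# wildcard class, optionally twisted by the group action swapping 1 and 3.

ALPHABET = "123"
SWAP = {"1": "3", "2": "2", "3": "1"}  # group action exchanging 1 and 3

def instantiate(pattern):
    """Substitute each letter t of the alphabet for the wildcard."""
    return tuple(pattern(t) for t in ALPHABET)

lines = []
for c in ALPHABET:
    lines.append(instantiate(lambda t, c=c: t + c))  # rows: y fixed to c
    lines.append(instantiate(lambda t, c=c: c + t))  # columns: x fixed to c
lines.append(instantiate(lambda t: t + t))           # main diagonal: equal wildcards
lines.append(instantiate(lambda t: t + SWAP[t]))     # antidiagonal: swapped wildcards

for line in lines:
    print(line)
assert len(lines) == 8  # the eight winning lines of tic-tac-toe
```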
Applications The special case of the Graham–Rothschild theorem with $d = 0$, $m = 1$, and the trivial group action is the Hales–Jewett theorem, stating that if all long-enough strings over a given alphabet are colored, then there exists a monochromatic combinatorial line. Graham's number is a bound for the Graham–Rothschild theorem with $|A| = 2$, $r = 2$, $d = 1$, $m = 2$, and a nontrivial group action. For these parameters, the set of strings of length $N$ over a binary alphabet describes the vertices of an $N$-dimensional hypercube, every two of which form a combinatorial line. The set of all combinatorial lines can therefore be described as the edges of a complete graph on the vertices. The theorem states that, for a high-enough dimension $N$, whenever this set of edges of the complete graph is assigned two colors, there exists a monochromatic combinatorial plane: a set of four hypercube vertices that belong to a common geometric plane and have all six edges assigned the same color. Graham's number is an upper bound for this dimension $N$, calculated using repeated exponentiation; it is believed to be significantly larger than the smallest $N$ for which the statement of the Graham–Rothschild theorem is true. References Ramsey theory Combinatorics on words
Graham–Rothschild theorem
[ "Mathematics" ]
945
[ "Theorems in combinatorics", "Theorems in discrete mathematics", "Combinatorics", "Combinatorics on words", "Ramsey theory" ]
72,170,992
https://en.wikipedia.org/wiki/National%20Earthquake%20Hazards%20Reduction%20Program
The National Earthquake Hazards Reduction Program (NEHRP) was established in 1977 by the United States Congress as part of the Earthquake Hazards Reduction Act of 1977. The original stated purpose of NEHRP was "to reduce the risks to life and property from future earthquakes in the United States through the establishment and maintenance of an effective earthquake hazards reduction program." Congress periodically reviews and reauthorizes NEHRP, with the most recent review happening in 2018. NEHRP supports basic research that expands our knowledge of earthquakes and their impacts. The four basic earthquake hazard reduction goals of NEHRP have remained the same since its creation: Develop effective practices/policies and accelerate their implementation. Improve techniques for reducing vulnerabilities of facilities and systems. Improve earthquake hazards identification and risk assessment methods and their use. Improve the understanding of earthquakes and their effects. To accomplish these goals, NEHRP established the Advisory Committee on Earthquake Hazards Reduction to advise Congress on the program's progress in relation to: Improved design and construction methods and best practices Land use controls and redevelopment Prediction and early-warning systems Coordinated emergency preparedness plans Public education/involvement programs Primary NEHRP agencies NEHRP has four designated federal agencies that contribute to the program: Federal Emergency Management Agency (FEMA) of the United States Department of Homeland Security National Institute of Standards and Technology (NIST) of the United States Department of Commerce (NIST is the lead NEHRP agency) National Science Foundation (NSF) United States Geological Survey (USGS) of the United States Department of the Interior The majority of NEHRP's research activities are accomplished through National Science Foundation funding of earthquake-related research in earth sciences, social sciences and engineering. NSF also provides post-earthquake empirical research using reconnaissance teams; these teams visit affected regions, documenting the impacts, the performance of construction, and the response and recovery. FEMA's primary role is to implement and distribute the final research products of NEHRP. This is done through the creation of a variety of published materials made available to third parties. Recent research In 2020, NEHRP and its partner agencies published "The 2020 NEHRP Recommended Seismic Provisions for New Buildings and Other Structures." The report comprises a summary of a five-year research project and includes 37 recommended changes to ASCE/SEI 7-16 (Minimum Design Loads and Associated Criteria for Buildings and Other Structures), nine white papers, and associated commentary. References Attribution: Research projects Government agencies established in 2006 Earthquake and seismic risk mitigation Disaster preparedness in the United States
National Earthquake Hazards Reduction Program
[ "Engineering" ]
520
[ "Structural engineering", "Earthquake and seismic risk mitigation" ]
72,181,213
https://en.wikipedia.org/wiki/Monterrey%20Foundry
The Monterrey Foundry (Spanish: Fundidora de Fierro y Acero de Monterrey, S.A.) was a Mexican iron and steel foundry founded in 1900 in the city of Monterrey, becoming the first such foundry in Latin America and, for many years, the most important one in the region. At the end of the 19th century, Vicente Ferrara, aware of the existence of numerous iron and coal deposits in the surroundings of Monterrey, and having acquired experience working in steel foundries in the United States, saw the opportunity to found a similar company in Monterrey. To carry out his vision, he gained the support of an international consortium of entrepreneurs, including Antonio Basagoiti (Spain), Eugene Kelly (US), and Leon Signoret (France). As a capital-intensive industry, the enterprise also required significant investments from some of the wealthiest families of the industrialized north of Mexico at the turn of the twentieth century, including the Milmo, Madero, and Garza-Sada clans. Foreign capitalists, including the Guggenheims, also participated to a more limited extent. The company was successful during the first half of the twentieth century. For 60 years it was dedicated exclusively to the production of non-flat iron and steel products, such as rails, wire rods, corrugated rods, structural steel, and train wheels, among others. Many significant engineering projects in Latin America were built with structural steel produced by the Monterrey Foundry, including Torre Latinoamericana, the world's first major skyscraper successfully built in a highly active seismic zone. After many years in private hands, the firm was nationalized by the Mexican government in 1977 and remained under public-sector operation until its bankruptcy in May 1986. Today, the old site of the foundry has become Fundidora Park. References Foundries Manufacturing companies based in Monterrey 1986 disestablishments in Mexico
Monterrey Foundry
[ "Chemistry" ]
381
[ "Foundries", "Metallurgical facilities" ]
72,182,187
https://en.wikipedia.org/wiki/Symbiosis%20in%20Amoebozoa
Amoebozoa of the free-living genus Acanthamoeba and the social amoeba genus Dictyostelium are single-celled eukaryotic organisms that feed on bacteria, fungi, and algae through phagocytosis, with digestion occurring in phagolysosomes. Amoebozoa are present in most terrestrial ecosystems, including soil and freshwater. Amoebozoa contain a vast array of symbionts that range from transient to permanent infections, confer effects ranging from mutualistic to pathogenic, and can act as environmental reservoirs for animal-pathogenic bacteria. As single-celled phagocytic organisms, amoebas mimic the function and environment of immune cells like macrophages, and as such their interactions with bacteria and other microbes are of great importance for understanding the human immune system, as well as how microbiomes can originate in eukaryotic organisms. Amoeba-resistant microorganisms Some microorganisms have evolved to become resistant to Amoebozoa and are able to survive, grow, and exit free-living amoebae after phagocytosis. To survive within an amoeba, such organisms have developed ways to avoid or withstand digestion in their host's acidic and oxidative phagolysosomes. Many of these amoeba-resistant microorganisms (ARMs) survive either in the amoeba cytoplasm or in host-derived vacuoles surrounded by plasma membrane, allowing them not only to avoid digestion but to actively reproduce inside their host, with some capable of lysing the amoeba host cell. Known symbionts of Amoebozoa include bacteria from Alphaproteobacteria, Betaproteobacteria, Bacteroidetes, Firmicutes, Proteobacteria, Chlamydiae, and Paraburkholderia, all with different effects on their host, even within the same phylum. For example, some Chlamydiae bacteria are able to increase the growth rates or motility of their hosts, other Chlamydiae strains are able to fight off other pathogenic symbionts such as Legionella, and some Chlamydiae are parasitic and decrease host fitness. Many free-living amoeba species inhabit aquatic environments, including manufactured water systems. While in their encysted state, amoebas have a high resistance to extreme temperatures, UV radiation, osmolarity, and pH. Some species of pathogenic bacteria are able to take advantage of this resistance to survive in environments that would usually destroy them, using the amoebas as a "Trojan horse" to travel to new environments and animal hosts. Legionella pneumophila, a known human pathogen, has been observed in at least 13 different species of amoeba. Legionella has been shown to survive inside an encysted amoeba host in chlorine-treated water, and can be released from the host in respirable vesicles when treated with biocides, with each vesicle possibly containing hundreds of Legionella bacteria spread by aerosolized water. Recent human outbreaks of Legionella are likely due to aerosolized water containing amoeba-derived Legionella vesicles produced by modern devices such as air-conditioning systems, water cooling towers, showers, clinical respiration devices, and whirlpool baths that have been contaminated with host amoebae. Farming symbionts Another unique example of symbiosis occurs in the social amoeba Dictyostelium discoideum. D. discoideum and other social amoebas differ from free-living Acanthamoeba in that, instead of encysting, they undergo a social cycle in which individual D. discoideum cells aggregate together in a food-scarce environment.
This social cycle results in a differentiation between cells: roughly 20% are sacrificed to form a structural stalk, some transform into sentinel cells with immune-like and detoxifying functions, and the rest of the aggregated amoebas form a ball of spores located in a protective fruiting body. The fruiting body gives some amoebas from that population a chance to be transported to a food-rich environment and survive; if they are not transported to a food-rich environment, the amoebas of that fruiting body will starve. Some D. discoideum amoebas contain Burkholderia bacteria that have been found to form a type of farming symbiosis with their D. discoideum hosts, whose sentinel cell numbers are reduced. The Burkholderia are able to persist in the fruiting bodies of their hosts, which are carried by an animal or environmental path to a new environment. If there are few food bacteria in that new environment, the social amoebas are able to seed the area with the carried Burkholderia and thus develop a food source. Farmer amoebas do produce fewer spores in a food-rich environment than non-farmer amoebas, but this cost is countered by the farmers' ability to replenish their food supply when dispersing to food-poor environments. Additionally, some farmed Burkholderia produce compounds that are not detrimental to the amoeba host but are detrimental to non-farmer amoebas, giving the farmer amoebas a competitive advantage in mixed populations. Viral interactions Giant viruses, or nucleocytoplasmic large DNA viruses, frequently infect Amoebozoa and other protists, causing amoeba lysis and cell rounding within 12 hours and amoeba population collapse within 55 hours. As such, there is a strong selective pressure on both Amoebozoa and their symbionts to resist these viruses. Acanthamoeba hatchetti is one species affected by giant viruses, and some use a bacterial symbiont (Parachlamydia acanthamoebae) to counter giant viruses from Marseilleviridae and Mimiviridae. Acanthamoeba infected with both the chlamydial symbiont and a giant virus are able to avoid the lysis and cell rounding that normally occur with infection by repressing viral replication after the virus enters the cell, likely due to secondary metabolites produced by the symbiont. Acanthamoeba infected by neither the symbiont nor the virus have the highest fitness, with a doubling time twice as fast as symbiont-infected cells; Acanthamoeba infected with the chlamydial symbiont have the same fitness whether or not they are infected by the virus; and Acanthamoeba infected with just the virus have the lowest fitness, with total population collapse. The chlamydial symbiont therefore acts as a mutualist with a significantly positive fitness effect during viral predation, but this is also consistent with the parasitic lifestyle of many chlamydiae when Acanthamoeba is not a victim of viral predation. As an obligate intracellular bacterium, the chlamydial symbiont effectively competes in the same niche as the giant viruses, and so has evolved to protect its host from its natural competitor. References Wikipedia Student Program Amoebozoa Symbiosis
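The fitness comparison above can be made concrete with a minimal sketch (Python; illustrative only). The text gives only the twofold ratio of doubling times and the roughly 55-hour collapse of virus-only populations, so the absolute doubling times below are assumed, hypothetical values:

# Illustrative sketch only: the text states that uninfected Acanthamoeba
# double twice as fast as symbiont-infected cells, and that virus-only
# populations collapse within about 55 hours. Absolute times are assumptions.

def population(n0: float, doubling_time_h: float, t_h: float) -> float:
    """Simple exponential growth: N(t) = N0 * 2**(t / doubling_time)."""
    return n0 * 2 ** (t_h / doubling_time_h)

T_UNINFECTED_H = 12.0  # hypothetical doubling time, in hours
T_SYMBIONT_H = 24.0    # half as fast, matching the twofold ratio in the text

t = 55.0  # hours: the reported time scale of virus-driven collapse
print(f"uninfected:        {population(100, T_UNINFECTED_H, t):.0f} cells")
print(f"symbiont-infected: {population(100, T_SYMBIONT_H, t):.0f} cells")
print("virus-only, no symbiont: population collapse (lowest fitness)")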
Symbiosis in Amoebozoa
[ "Biology" ]
1,515
[ "Biological interactions", "Behavior", "Symbiosis" ]
72,182,831
https://en.wikipedia.org/wiki/Technetium%28IV%29%20oxide
Technetium(IV) oxide, also known as technetium dioxide, is a chemical compound with the formula TcO2. It forms a dihydrate, TcO2·2H2O, which is also known as technetium(IV) hydroxide. It is a radioactive black solid which slowly oxidizes in air. Preparation Technetium dioxide was first produced in 1949 by electrolyzing a solution of ammonium pertechnetate in ammonium hydroxide, and this method is used for separating technetium from molybdenum and rhenium. There are now more efficient ways of producing the compound, such as the reduction of ammonium pertechnetate by zinc metal and hydrochloric acid, stannous chloride, hydrazine, hydroxylamine, or ascorbic acid; by the hydrolysis of potassium hexachlorotechnetate(IV); or by the decomposition of ammonium pertechnetate at 700 °C under an inert atmosphere: 2 NH4TcO4 → 2 TcO2 + 4 H2O + N2 All of these methods except the last lead to the formation of the dihydrate. The most modern method of producing this compound is the reaction of ammonium pertechnetate with sodium dithionite. Properties The dihydrate dehydrates to anhydrous technetium dioxide at 300 °C and, if further heated, sublimes at 1,100 °C under an inert atmosphere. If oxygen is present, however, the dioxide reacts with it at 450 °C to produce technetium(VII) oxide. If water is also present, pertechnetic acid is produced by the reaction of technetium(VII) oxide with water. If technetium dioxide is treated with a base, such as sodium hydroxide, it forms the hydroxotechnetate(IV) ion, which is easily oxidized to pertechnetic acid in numerous ways, such as by reaction with alkaline hydrogen peroxide, concentrated nitric acid, bromine, or tetravalent cerium. The solubility of technetium(IV) oxide is very low, reported as 3.9 μg/L. The main species when technetium dioxide is dissolved in water is TcO²⁺ at pH below 1.5, TcO(OH)⁺ at pH between 1.5 and 2.5, TcO(OH)2 at pH between 2.5 and 10.9, and TcO(OH)3⁻ above pH 10.9. The solubility can be increased by adding various organic ligands such as humic acid and EDTA, or by the addition of hydrochloric acid. This can be a problem if technetium(IV) oxide is released into the soil, as such ligands will increase its solubility and mobility. If technetium dioxide is electrolyzed in acidic conditions, the following oxidation occurs: TcO2·2H2O → TcO4⁻ + 4 H⁺ + 3 e⁻ The electrode potential measured for this reaction is kJ/mol. The molar magnetic susceptibility of TcO2·2H2O was found to be χm = . References Technetium compounds Oxides
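The two oxidation steps described in the prose can be written out explicitly. These balanced equations are a standard reconstruction consistent with the text above, not taken from the original article:
4 TcO2 + 3 O2 → 2 Tc2O7 (in air, at about 450 °C)
Tc2O7 + H2O → 2 HTcO4 (formation of pertechnetic acid)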
Technetium(IV) oxide
[ "Chemistry" ]
672
[ "Oxides", "Salts" ]
76,518,026
https://en.wikipedia.org/wiki/Amerosporiopsis%20phaeographidis
Amerosporiopsis phaeographidis is a species of lichenicolous fungus in the subphylum Pezizomycotina. It grows as black spots on the lichen Phaeographis brasiliensis, from which it gets its name. It has only been found in one place, in the Fakahatchee Strand Preserve State Park in Florida in the United States. Molecular phylogenetic testing might reveal that this is actually a new genus, but it is morphologically similar to the one other species in Amerosporiopsis, except that it has wider conidia, has no conidiophores, and lives in a different habitat. Description Amerosporiopsis phaeographidis exists primarily as a mycelium, the vegetative part of the fungus, consisting of a network of fine, filamentous cells that are colorless (hyaline) and embedded within the host tissue. This fungus does not form ascomata, the sexual reproductive structures typically found in other fungi. The conidiomata, which are asexual reproductive structures, are typically more or less spherical (subglobose), black, and initially buried within the host tissue (immersed). As they mature, they break through the host's surface. These structures are usually single-chambered (unilocular), with thick walls, measuring between 60 and 100 micrometers (μm) in diameter. Surrounding the conidiomata is a clypeus-like structure, which gives them an irregular appearance when viewed from above and can extend up to 200 μm across. The walls of these conidiomata are formed only on the upper and side portions and consist of several layers of cells. The external layers are dark brown and turn dark olive when stained with a potassium hydroxide solution (K+), while the inner layers remain hyaline. The outer surface of these structures is covered with subspherical to elongated darker cells, especially visible under a microscope, giving the conidiomata a somewhat rough texture. The base of the conidiomata wall may appear colorless or be indistinct. As they age, they sometimes open irregularly to form a cup-shaped structure, though the opening (ostiole) through which spores are typically released is often not noticeable or is completely absent. Inside the conidiomata, spore-producing structures (conidiophores) emerge from the walls either at the base or at the sides. These conidiophores are branched chains of short, irregular cells. The cells that actually produce the spores (conidiogenous cells) are tube-shaped, narrowing at one end to form a tiny opening, and are clear and smooth. These cells range in size from 7.5 to 12.7 μm in length and from 2.3 to 3.5 μm in width. The spores (conidia) themselves are clear (hyaline), without internal partitions (aseptate), and range from rod-shaped to narrowly spindle-shaped. They have rounded tops and bases that are indistinctly flat (truncate), with thin and smooth walls. These spores measure between 7.7 and 12.3 μm in length and between 1.0 and 1.8 μm in width. The length-to-breadth ratio of these spores varies from about 4.6 to 10. References Ascomycota Fungus species Fungi described in 2019 Fungi of Florida Lichenicolous fungi Taxa named by Paul Diederich Species known from a single specimen
Amerosporiopsis phaeographidis
[ "Biology" ]
758
[ "Individual organisms", "Species known from a single specimen", "Fungi", "Fungus species" ]