Dataset schema (field: type, min – max):

  id             int64          39 – 79M
  url            string length  31 – 227
  text           string length  6 – 334k
  source         string length  1 – 150
  categories     list length    1 – 6
  token_count    int64          3 – 71.8k
  subcategories  list length    0 – 30
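The records below follow this schema. As a minimal sketch of how such a dump might be consumed, assuming the records come from a Hugging Face dataset (the dataset path in the example is a hypothetical placeholder, not the real name):

```python
# A minimal sketch of loading records with this schema, assuming the dump
# comes from a Hugging Face dataset. The dataset path below is a
# hypothetical placeholder, not the dataset's real name.
from datasets import load_dataset

ds = load_dataset("user/wikipedia-categorized", split="train")

for record in ds.select(range(3)):  # peek at the first three records
    print(record["id"], record["source"], record["token_count"])
    print(record["url"])
    print(record["categories"], record["subcategories"])
    print(record["text"][:200], "...")
```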
49,051,884
https://en.wikipedia.org/wiki/Journal%20of%20Environmental%20Quality
The Journal of Environmental Quality is a bimonthly peer-reviewed scientific journal publishing original research on anthropogenic impacts on the environment, including terrestrial, atmospheric, and aquatic systems. According to Journal Citation Reports, the journal has a 2020 impact factor of 2.751. It was established in 1972 as the first joint publication of the not-for-profit American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. The journal is currently published by the three societies in partnership with Wiley. It was published quarterly from 1972 to 1993 and has appeared bimonthly since 1994. Since 2013, it has been available online only.
Journal of Environmental Quality
[ "Environmental_science" ]
154
[ "Environmental science journals", "Environmental science journal stubs" ]
49,052,274
https://en.wikipedia.org/wiki/Cure53
Cure53 is a German cybersecurity firm. The company was founded by Mario Heiderich, a security researcher. History After a Cure53 report on the South Korean security app Smart Sheriff described the app's security holes as "catastrophic", the South Korean government ordered Smart Sheriff to be shut down. Software audited by Cure53 includes Mastodon, OnionShare, Bitwarden, Mailvelope, GlobaLeaks, SecureDrop, Obsidian (client software), OpenPGP, Onion Browser, F-Droid, Nitrokey, Peerio, OpenKeychain, cURL, Briar, Mozilla Thunderbird, Threema, MetaMask, Proton Pass, Enpass, and Passbolt, as well as many VPN and password manager providers.
Cure53
[ "Technology" ]
183
[ "Computer security stubs", "Computing stubs" ]
49,053,710
https://en.wikipedia.org/wiki/NGC%204203
NGC 4203 is the New General Catalogue identifier for a lenticular galaxy in the northern constellation of Coma Berenices. It was discovered on March 20, 1787 by English astronomer William Herschel. It is situated 5.5° to the northwest of the 4th-magnitude star Gamma Comae Berenices and can be viewed with a small telescope. The morphological classification of NGC 4203 is SAB0−, indicating that it has a lenticular form with tightly wound spiral arms and a weak bar structure at the nucleus. This galaxy has a fairly large reservoir of neutral hydrogen, on the order of a billion solar masses, but it is only undergoing a low rate of new star formation. Hence, the stellar population in the inner region of the galaxy is fairly old, roughly ten billion years on average. The neutral hydrogen is arranged in two ring-like structures, with the outer ring having nine times the mass of the inner. The central region also contains molecular hydrogen, plus dust structures near the nucleus. The gas in the outer disk may have been accreted from the intergalactic medium, or captured during a close encounter with a dwarf galaxy. The nucleus of the galaxy contains a low-ionization nuclear emission-line region of type 1.9, generated by a supermassive black hole. An influx of gas onto the black hole is sufficient to explain the measured X-ray luminosity. The time-varying emissions from the region are perhaps best explained by an infalling red supergiant star that is losing mass to the black hole along a contrail. NGC 4203 is a member of the Coma I Group, which is part of the Virgo Supercluster.
NGC 4203
[ "Astronomy" ]
376
[ "Coma Berenices", "Constellations" ]
49,053,985
https://en.wikipedia.org/wiki/Roridomyces%20praeclarus
Roridomyces praeclarus is a species of fungus in the genus Roridomyces, family Mycenaceae. Taxonomy The species was originally named Mycena praeclara by Egon Horak in 1978. Description The cap is hemispherical or convex when young, becoming umbilicate or applanate with age. Its color is deep orange to reddish-orange, fading with age. The cap center is granular and dry, with indistinct grooves extending towards the margin. The gills have an arcuate to decurrent attachment to the stipe. They are white with a smooth edge that is the same color as the rest of the gill. The white, cylindrical stipe is about 1 mm thick, with pale orange coloration towards the base, and is slimy when wet. Fruit bodies grow singly, not in dense groups or clusters. The mushroom has no distinctive odor or taste. The flesh of the cap is pale orange. Habitat and distribution The fruit bodies of R. praeclarus grow on rotting leaves and twigs in rain forests of Papua New Guinea.
Roridomyces praeclarus
[ "Biology" ]
248
[ "Fungi", "Fungus species" ]
50,562,990
https://en.wikipedia.org/wiki/Cistus%20%C3%97%20rodiaei
Cistus × rodiaei Verg. 1932 is a hybrid rockrose. It is a small gray-green evergreen shrub. These rockroses bear large, deep pink flowers, which bloom from late April to early June.
Cistus × rodiaei
[ "Biology" ]
80
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
50,564,088
https://en.wikipedia.org/wiki/Procore
Procore Technologies is an American construction management software as a service company founded in 2002, with headquarters in Carpinteria, California. History Founder and CEO Craig "Tooey" Courtemanche created the software that became Procore in response to his struggles to manage the construction of his new home in Santa Barbara from his then-home in Silicon Valley. The app he built tracked the activity of the workers onsite. Founded in 2002, the company was originally headquartered in Montecito, California. Steve Zahm, founder of the e-learning company DigitalThink, joined Procore as president in 2004. Procore's revenue in 2012 was $4.8 million. In 2020, it was $400 million. The company initially filed to go public in 2019, with plans to launch the IPO in 2020, but delayed the offering due to the coronavirus pandemic. Procore stock began trading under the ticker symbol PCOR on May 20, 2021 at $67 per share. The initial public offering raised $634.5 million. Following the IPO, the company was valued at nearly $11 billion. As of May 2021, the company had over 10,000 customers and over 1.6 million users of its products in more than 125 countries. Procore's campus is on a 9-acre oceanfront property in Carpinteria, California. Investors and acquisitions In 2014, Bessemer Venture Partners led a $15 million investment round. In 2015, the company raised an additional $30 million in a round led by Bessemer and Iconiq Capital. The same year, the Wall Street Journal reported the company to be worth "$500 million post-money." In 2016, the company raised $50 million in a round led by Iconiq, reaching a $1 billion valuation. In 2018, the company raised an additional $75 million, and in 2020, it raised over $150 million. In total, the company raised nearly $500 million from 2007 through its IPO in 2021. In July 2019, Procore acquired US project management software group Honest Buildings. In October 2020, it acquired US estimating software provider Esticom. Procore acquired the construction artificial intelligence companies Avata Intelligence in 2020 and INDUS.AI in 2021. Software Procore's cloud-based construction management software allows teams of construction companies, property owners, project managers, contractors, and partners to collaborate on construction projects and share access to documents, planning systems, and data using an Internet-connected device. Data and video can also be streamed into the system via drones. The software includes features such as meeting minutes, drawing markups, and document storage for all project-related materials. Procore's offerings also include an app marketplace with more than 300 partners, including Box, an enterprise file storage and content management company; Botlink, a joint venture by Packet Digital that allows users to stream in both video and data from drones surveying their construction projects; and Dexter + Chaney, an ERP provider.
Procore
[ "Engineering" ]
669
[ "Construction", "Architecture", "Architectural communication", "Construction software" ]
47,305,658
https://en.wikipedia.org/wiki/MD%26DI
MD&DI is a trade magazine for the medical device and diagnostic industry published by Informa Markets (Los Angeles). It includes peer-reviewed articles on specific technology issues and overviews of key business, industry, and regulatory topics. It was established in 1979. In 2009 it had a monthly print circulation of 48,040, but it is now an online publication with a claimed circulation of 89,000. Informa Markets and the magazine have also sponsored the Medical Design and Manufacturing (MD&M) West Conference & Exposition (formerly the MD&DI West Conference & Expo), a medical device trade show, since 1978. The magazine sponsored the Medical Design Excellence Awards and produces a list of 100 Notable People in the Medical Device Industry. The term "use error" was first used in May 1995 in an MD&DI guest editorial, "The Issue Is 'Use,' Not 'User,' Error", by William Hyman.
MD&DI
[ "Biology" ]
222
[ "Medical equipment", "Medical technology" ]
47,306,429
https://en.wikipedia.org/wiki/Cyanidin-3%2C5-O-diglucoside
Cyanidin-3,5-O-diglucoside, also known as cyanin, is an anthocyanin. It is the 3,5-O-diglucoside of cyanidin. Natural occurrences Cyanin can be found in species of the genus Rhaponticum (Asteraceae). In food Cyanin can be found in red wine as well as in pomegranate juice, according to a study by Graça Miguel, Susana Dandlen, Dulce Antunes, Alcinda Neves, and Denise Martins in the winter of 2004. Pomegranate juice extracted through centrifugal seed separation has higher amounts of cyanidin-3,5-O-diglucoside than juice extracted by squeezing fruit halves with an electric lemon squeezer. See also Phenolic content in wine
Cyanidin-3,5-O-diglucoside
[ "Chemistry" ]
198
[ "PH indicators", "Anthocyanins" ]
47,308,192
https://en.wikipedia.org/wiki/CTAIDI
The Customer Total Average Interruption Duration Index (CTAIDI) is a reliability index associated with electric power distribution. CTAIDI is the average total duration of interruption for customers who had at least one interruption during the period of analysis, and is calculated as:

$$\text{CTAIDI} = \frac{\sum_i U_i N_i}{\sum_i CN_i}$$

where $N_i$ is the number of customers at location $i$, $U_i$ is the annual outage time for location $i$, and $CN_i$ is the number of customers at location $i$ that were interrupted. CTAIDI is therefore measured in units of time, such as minutes or hours. It is similar to CAIDI, but CAIDI divides the total duration of interruptions by the number of customer interruptions, whereas CTAIDI divides by the number of distinct interrupted customers. When CTAIDI is much greater than CAIDI, the service outages are more concentrated among certain customers. CTAIDI also has the same numerator as SAIDI, but SAIDI divides the total duration of interruptions by the total number of customers served, $N_T$. The fraction of distinct customers interrupted, $f = \sum_i CN_i / N_T$, illustrates the relationship between these reliability indicators:

$$\text{CTAIDI} = \frac{\text{SAIDI}}{f}$$
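To make the formulas concrete, here is a minimal sketch (not part of the article) that computes SAIDI, CAIDI, and CTAIDI from a toy set of outage records; the record layout and all numbers are invented for the example:

```python
# Minimal sketch computing reliability indices from toy outage records.
# Each record: (customers_interrupted, outage_duration_hours). All values invented.
outages = [(120, 1.5), (40, 4.0), (120, 0.5)]  # the 120-customer group is hit twice

total_customers_served = 1000  # N_T: all customers on the system
# Assume the two 120-customer events hit the same customers, so they count
# once in the distinct-customer total (sum of CN_i).
distinct_customers_interrupted = 120 + 40

customer_hours = sum(n * d for n, d in outages)      # numerator shared by SAIDI/CTAIDI
customer_interruptions = sum(n for n, _ in outages)  # denominator of CAIDI

saidi = customer_hours / total_customers_served
caidi = customer_hours / customer_interruptions
ctaidi = customer_hours / distinct_customers_interrupted

print(f"SAIDI  = {saidi:.3f} h per customer served")
print(f"CAIDI  = {caidi:.3f} h per interruption")
print(f"CTAIDI = {ctaidi:.3f} h per interrupted customer")

# CTAIDI = SAIDI / f, where f is the fraction of distinct customers interrupted.
f = distinct_customers_interrupted / total_customers_served
assert abs(ctaidi - saidi / f) < 1e-9
```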
CTAIDI
[ "Physics", "Engineering" ]
209
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
47,309,191
https://en.wikipedia.org/wiki/Penicillium%20ramusculum
Penicillium ramusculum is an anamorphic species of the genus Penicillium.
Penicillium ramusculum
[ "Biology" ]
40
[ "Fungi", "Fungus species" ]
47,309,932
https://en.wikipedia.org/wiki/Fellow%20of%20the%20Royal%20Society%20of%20Biology
Fellowship of the Royal Society of Biology (FRSB), previously Fellowship of the Society of Biology (FSB), is an award and fellowship granted to individuals that the Royal Society of Biology has adjudged to have made a "prominent contribution to the advancement of the biological sciences, and has gained no less than five years of experience in a position of senior responsibility". Fellowship Fellows are entitled to use the post-nominal letters FRSB. Examples of fellows include Sir David Attenborough, Martin Hume Johnson, Jasmin Fisher, Sir Tom Blundell and Dame Nancy Rothwell.
Fellow of the Royal Society of Biology
[ "Technology" ]
145
[ "Science and technology awards", "Science award stubs" ]
47,312,460
https://en.wikipedia.org/wiki/Lincoln%20Calibration%20Sphere%201
The Lincoln Calibration Sphere 1, or LCS-1, is a large aluminium sphere that has been in Earth orbit since 6 May 1965. It is still in use, having lasted for over 50 years. The sphere was launched along with the Lincoln Experimental Satellite-2 on a Titan IIIA. It is technically the oldest operational spacecraft, but it has no power supply or fuel; it is merely a passive metal sphere. LCS-1 has been used for radar calibration since its launch. It was built by Rohr Corp. for the MIT Lincoln Laboratory. LCS-1 is a hollow sphere constructed from two hemispheres, made by spinning sheet metal over a mold. These hemispheres were fastened to an internal, circumferential hoop by 440 countersunk screws, then milled and polished. The initial finish had a surface roughness of less than 10 micrometres and was expected to last for five years. Since its launch, I-band measurements have shown periodic deviations that likely correspond to one or more new surface irregularities. Before being launched to orbit, the radar cross section of LCS-1 was measured in the L, S, C, X and K microwave bands. Four other spheres were also manufactured and measured for comparison to the one in orbit.
Lincoln Calibration Sphere 1
[ "Astronomy" ]
293
[ "Astronomy stubs", "Spacecraft stubs" ]
47,312,739
https://en.wikipedia.org/wiki/Roseburia
Roseburia is a genus of butyrate-producing, Gram-positive anaerobic bacteria that inhabit the human colon. Named in honor of Theodor Rosebury, they are members of the phylum Bacillota (formerly known as Firmicutes). Increased abundance of Roseburia is associated with weight loss and reduced glucose intolerance in mice.
Roseburia
[ "Biology" ]
94
[ "Gut flora bacteria", "Bacteria" ]
47,313,267
https://en.wikipedia.org/wiki/Rule%20consciousness
Rule consciousness is one of the sixteen primary factors of personality categorized by Raymond Cattell in 1946, and is described at low and high levels. The descriptors of low-level rule consciousness are expedient, nonconforming, disregards rules, self-indulgent, or having low super ego strength, while those of high-level rule consciousness are rule-conscious, dutiful, conscientious, conforming, moralistic, staid, rule-bound, or having high super ego strength. One theory associates rule consciousness with "original apperception", a Kantian concept of a mental state in which we perceive special kinds of non-spatial inner objects. Jean Piaget also studied rule consciousness between boys and girls in the context of games.
Rule consciousness
[ "Biology" ]
159
[ "Behavior", "Personality", "Human behavior" ]
47,313,623
https://en.wikipedia.org/wiki/Mcity
Mcity is a mock city and proving ground built for the testing of wirelessly connected and driverless cars, located on the University of Michigan North Campus in Ann Arbor, Michigan. The project, which officially opened on July 20, 2015, is built on land purchased by the university from a former Pfizer facility. It cost US$10 million and is collaboratively managed by Mcity (formerly the Mobility Transformation Center, or MTC). In November 2015, Ford Motor Company announced that it was the first car company to use the new facility. Features Mcity is the world's first controlled environment specifically designed to test the potential of connected and automated vehicle technologies that are expected to lead the way to mass-market driverless cars. Students and faculty in the University of Michigan College of Engineering utilize Mcity to work on projects and to collaborate with automakers and suppliers who test vehicle technology at the course. The site includes 4.25 lane miles of roadway with several familiar features of urban driving, including signalized intersections, a railroad crossing, a roundabout, a traffic circle, brick and gravel roads, and parking spaces. Building facades can be moved and fake pedestrians can be altered for different kinds of tests. There are two simulated highway entrance ramps with ramp metering. Two features, a metal bridge and a tunnel, are a special challenge for wireless signals and radar sensors to get through. Aims The research aims to test and improve connected and autonomous cars, decrease the chance of collisions, and improve traffic flow in real life. Connected cars can either communicate with one another (vehicle-to-vehicle, or V2V) or with pieces of the infrastructure, such as traffic lights, that are located near roadways (vehicle-to-infrastructure, or V2I). These communications could one day predict accidents and stop cars before a mishap.
Mcity
[ "Engineering" ]
393
[ "Automotive engineering", "Self-driving cars" ]
55,815,126
https://en.wikipedia.org/wiki/Locomotor%20mimicry
Locomotor mimicry is a subtype of Batesian mimicry in which animals avoid predation by mimicking the movements of a phylogenetically distant species. This can take the form of mimicking a less desirable species or of mimicking the predator itself. Animals can show similarity to their model animals in swimming, walking, or flying. The complex interaction between mimics, models, and predators (sometimes called observers) can help explain similarities among species beyond what emerges from evolutionary comparative approaches. In terms of overall movement, continuous locomotor mimicry of a species that differs anatomically from the mimic may increase metabolic cost. However, the benefit of avoiding predation appears to outweigh the increased energy cost, because mimicking animals tend to have higher survival rates than their non-mimicking counterparts. Terrestrial locomotor mimicry The most common form of terrestrial locomotor mimicry is found in ant-mimicking spiders. These mimics are capable of antennal illusions and ant-like gait patterns, as shown in the jumping spider family (Araneae, Salticidae). Ants appear to be beneficial models because they possess effective protective traits such as chemical defences and aggressiveness. Spiders, however, lack some of these specialized traits and therefore, by acting as an ant, may avoid predation because the predator has less desire for ants. Mimetic jumping spiders imitate the zig-zag trajectories of ants, which appears to be beneficial for avoiding predators watching from an elevated vantage point. However, this may be an example of imperfect mimicry, because the spiders display this behaviour in settings where ants do not. It was once thought that these ant-mimicking spiders walk on 6 legs instead of 8 so that they could use a set of legs to mimic ant antennae. However, further analysis revealed that the spiders only do this whilst stationary, which suggests that limits in the neural circuitry underlying limb movement do not allow them to move on 6 legs. This antennal mimicry appears to be most beneficial in close proximity to a predator. Another example of terrestrial locomotor mimicry is seen in salticid-mimicking moths. The moths fan out their hind wings and their forewings are raised above their bodies. In this position, the moth's wings look like salticid legs. Moths that resemble the appearance and locomotion of predatory spiders are preyed upon less by the spiders. The spiders will even display courtship or territorial behaviour towards the mimics, indicating that the spiders misidentify the moths as conspecifics. Even if the spiders eventually eat the moths, the time it takes for the first attack to occur is longer than the time taken to attack non-mimetic moths. Aerial locomotor mimicry In butterflies, it is thought that palatability to predators is related to flight components. Typically, fast-flying prey are more palatable, whereas unpalatable species tend to fly more slowly. These flight characteristics could help predators recognize prey as being palatable or unpalatable. Researchers compared the flight patterns of palatable non-mimetic, palatable mimetic, and unpalatable butterflies by looking at directional flight changes of each species. It was determined that the palatable mimetic butterfly species had a significantly different flight pattern compared to the palatable non-mimetic species. 
The palatable mimetic species had a flight pattern that resembled that of their unpalatable models. Another example of aerial locomotor mimicry is found in the common drone fly (Eristalis tenax) and its presumed model, the western honey bee (Apis mellifera). In analyses of flight sequences, flight velocities, flight trajectories, and time spent hovering, it was found that the flight patterns of common drone flies were more similar to those of honey bees than to those of other flies. The drone flies and their models both exhibit loops in their flight paths, which is surprising for the drone flies because they are very adept fliers. A likely explanation for this flight behaviour is that, while foraging, the drone flies are at increased risk of predation by birds, and therefore they alter their flying to resemble the noxious honeybee and avoid predation. Inanimate object locomotor mimicry The ghost pipefish is able to blend into its surroundings due to its similarity in colour and motion to sea plants. In order to avoid predators, the organism will sway in the water to resemble underwater vegetation as much as possible. See also Anti-predator adaptation Defensive mimicry
Locomotor mimicry
[ "Physics", "Biology" ]
951
[ "Animal locomotion", "Physical phenomena", "Behavior", "Animals", "Biological defense mechanisms", "Mimicry", "Motion (physics)", "Ethology" ]
55,815,630
https://en.wikipedia.org/wiki/Twitching%20motility
Twitching motility is a form of crawling bacterial motility used to move over surfaces. Twitching is mediated by the activity of hair-like filaments called type IV pili, which extend from the cell's exterior, bind to surrounding solid substrates, and retract, pulling the cell forwards in a manner similar to the action of a grappling hook. The name twitching motility is derived from the characteristic jerky and irregular motions of individual cells when viewed under the microscope. It has been observed in many bacterial species, but is best studied in Pseudomonas aeruginosa, Neisseria gonorrhoeae and Myxococcus xanthus. Active movement mediated by the twitching system has been shown to be an important component of the pathogenic mechanisms of several species. Mechanisms Pilus structure The type IV pilus complex consists of both the pilus itself and the machinery required for its construction and motor activity. The pilus filament is largely composed of the PilA protein, with less common minor pilins at the tip; these are thought to play a role in the initiation of pilus construction. Under normal conditions, the pilin subunits are arranged as a helix with five subunits in each turn, but pili under tension are able to stretch and rearrange their subunits into a second, more extended configuration. Three subcomplexes form the apparatus responsible for assembling and retracting the type IV pili. The core of this machinery is the motor subcomplex, consisting of the PilC protein and the cytosolic ATPases PilB and PilT. These ATPases drive pilus extension and retraction, respectively, depending on which of the two is currently bound to the pilus complex. Surrounding the motor complex is the alignment subcomplex, formed from the PilM, PilN, PilO and PilP proteins. These proteins form a bridge between the inner and outer membranes and create a link between the inner membrane motor subcomplex and the outer membrane secretion subcomplex. The latter consists of a pore formed from the PilQ protein, through which the assembled pilus can exit the cell. Regulation Regulatory proteins associated with the twitching motility system have strong sequence and structural similarity to those that regulate bacterial chemotaxis using flagella. In P. aeruginosa, for example, a total of four homologous chemosensory pathways are present, three regulating swimming motility and one regulating twitching motility. These chemotactic systems allow cells to regulate twitching so as to move towards chemoattractants such as phospholipids and fatty acids. In contrast to the run-and-tumble model of chemotaxis associated with flagellated cells, however, movement towards chemoattractants in twitching cells appears to be mediated via regulation of the timing of directional reversals. Motility patterns Twitching motility is capable of driving the movement of individual cells. The pattern of motility that results is highly dependent upon cell shape and the distribution of pili over the cell surface. In N. gonorrhoeae, for example, the roughly spherical cell shape and uniform distribution of pili result in cells adopting a 2D random walk over the surface they are attached to. In contrast, species such as P. aeruginosa and M. xanthus exist as elongated rods with pili localised at their poles, and show much greater directional persistence during crawling due to the resulting bias in force generation direction. P. aeruginosa and M. 
xanthus are also able to reverse direction during crawling by switching the pole of pilus localization. Type IV pili also mediate a form of walking motility in P. aeruginosa, where pili are used to pull the cell rod into a vertical orientation and move it at much higher speeds than during horizontal crawling motility. The existence of many pili pulling simultaneously on the cell body results in a balance of forces determining the movement of the cell body. This is known as the tug-of-war model of twitching motility; a numerical sketch of the idea follows the article text below. Sudden changes in the balance of forces caused by detachment or release of individual pili result in a fast jerk (or 'slingshot') that combines fast rotational and lateral movements, in contrast to the slower lateral movements seen during the longer periods between slingshots. Roles Pathogenesis Both the presence of type IV pili and active pilar movement appear to be important contributors to the pathogenicity of several species. In P. aeruginosa, loss of pilus retraction results in a reduction of bacterial virulence in pneumonia and reduces colonisation of the cornea. Some bacteria are also able to twitch along vessel walls against the direction of fluid flow within them, which is thought to permit colonisation of otherwise inaccessible sites in the vasculatures of plants and animals. Bacterial cells can also be targeted by twitching: during the cell invasion phase of the lifecycle of Bdellovibrio, type IV pili are used by cells to pull themselves through gaps formed in the cell wall of prey bacteria. Once inside, the Bdellovibrio are able to use the host cell's resources to grow and reproduce, eventually lysing the cell wall of the prey bacterium and escaping to invade other cells. Biofilms Twitching motility is also important during the formation of biofilms. During biofilm establishment and growth, motile bacteria are able to interact with secreted extracellular polymeric substances (EPSs) such as Psl, alginate and extracellular DNA. As they encounter sites of high EPS deposition, P. aeruginosa cells slow down, accumulate and deposit further EPS components. This positive feedback is an important initiating factor for the establishment of microcolonies, the precursors to fully fledged biofilms. In addition, once biofilms have become established, their twitching-mediated spread is facilitated and organised by components of the EPS. Twitching can also influence the structure of biofilms. During their establishment, twitching-capable cells are able to crawl on top of cells lacking twitching motility and dominate the fast-growing external surface of the biofilm. Taxonomic distribution and evolution Type IV pili and related structures can be found across almost all phyla of Bacteria and Archaea; however, definitive twitching motility has been shown in a more limited range of prokaryotes. The best studied and most widespread are the twitching Pseudomonadota, such as Neisseria gonorrhoeae, Myxococcus xanthus and Pseudomonas aeruginosa. Nevertheless, twitching has been observed in other phyla as well. For example, twitching motility has been observed in the cyanobacterium Synechocystis, as well as the gram-positive Bacillota species Streptococcus sanguinis. Other structures and systems closely related to type IV pili have also been observed in prokaryotes. In Archaea, for example, bundles of type IV-like filaments have been observed to form helical structures similar in both form and function to the bacterial flagellum. These swimming-associated structures have been termed archaella. 
Also closely related to the type IV pilus is the type II secretion system, itself widely distributed amongst gram-negative bacteria. In this secretion system, cargo destined for export is associated with the tips of type IV-like pseudopili in the periplasm. Extension of the pseudopili through secretin proteins similar to PilQ permits these cargo proteins to cross the outer membrane and enter the extracellular environment. Because of this wide but patchy distribution of type IV pilus-like machinery, it has been suggested that the genetic material encoding it has been transferred between species via horizontal gene transfer following its initial development in a single species of Pseudomonadota. See also Gliding motility Pilus Swarming motility
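The tug-of-war model mentioned in the article lends itself to a simple numerical illustration. The following one-dimensional sketch is not from the article; the update rule and all parameters are invented:

```python
import random

# Minimal 1-D sketch of the tug-of-war model: each attached pilus pulls the
# cell body toward its surface anchor with a fixed force; the body moves
# with velocity proportional to the net force (overdamped motion). Sudden
# detachment of a pilus shifts the balance and produces a jerk ("slingshot").
# All parameters are invented for illustration.
random.seed(1)

FORCE_PER_PILUS = 1.0   # arbitrary force units
DRAG = 5.0              # arbitrary drag coefficient
DT = 0.1                # time step
DETACH_PROB = 0.05      # chance per step that a pilus releases its hold

anchors = [-2.0, -1.5, 2.5, 3.0, 3.5]  # pilus anchor points; cell starts at x = 0
x = 0.0

for step in range(100):
    # Each pilus pulls the body toward its anchor with equal magnitude.
    net_force = sum(FORCE_PER_PILUS * (1.0 if a > x else -1.0) for a in anchors)
    x += (net_force / DRAG) * DT

    # Random release of a pilus suddenly changes the force balance.
    anchors = [a for a in anchors if random.random() > DETACH_PROB]
    if not anchors:
        break

print(f"final position after {step + 1} steps: x = {x:.2f}")
```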
Twitching motility
[ "Biology" ]
1,631
[ "Microorganisms", "Prokaryotes", "Bacteria" ]
55,815,677
https://en.wikipedia.org/wiki/Winner%20and%20loser%20effects
The winner and loser effect is an aggression phenomenon in which the winner effect is the increased probability that an animal will win future aggressive interactions after experiencing previous wins, while the loser effect is the increased probability that an animal will lose future aggressive interactions after experiencing previous losses. Overall, these effects can either increase or decrease an animal's aggressive behaviour, depending on which effect acts on the species in question. Animals such as Agkistrodon contortrix, Rivulus marmoratus, and Sula nebouxii show one or both of these effects. The outcomes of winner and loser effects help develop and structure hierarchies in nature and are used to support the game theory model of aggression. Causation A theory underlying the causation of the winner and loser effect deals with an animal's perception of its own and other members' resource holding potential. Essentially, if an animal perceives that it has a high resource holding potential, then it considers itself to be a dominant member of an intraspecific community. If an animal perceives that it has a low resource holding potential, then it considers itself to be a less dominant member. This perception of resource holding potential is further enhanced or disrupted when aggressive challenges arise. If an animal wins an encounter, its perception of its own resource holding potential increases; likewise, if an animal loses, its perception of its resource holding potential decreases. Animals, regardless of size, with a higher perception of resource holding potential are more likely to initiate aggressive behaviour to maintain their dominance within a community. Overall, the larger the difference between two fighting animals' perceived resource holding potentials, the higher the chance that the animal with the higher resource holding potential will win the encounter. Based on this theory, an animal that perceives itself as a high resource holding individual is likely to be a dominant, aggressive member of a community, while an animal that perceives itself as a low resource holding individual is likely to be a submissive, non-aggressive member. The reason an animal will accept its dominant or submissive position in a hierarchy is explained by the game theory model of aggression. Based on the hawk and dove game, being a hawk (aggressive individual) or dove (submissive individual) can be beneficial depending on the fitness associated with the trait. Game theory describes a frequency-dependent model in which both traits (aggressive vs. submissive) can coexist when the frequency of each meets an evolutionarily stable strategy (ESS). Hormonal stimulation In some animals, winner and loser effects have been shown to cause hormonal differences in blood plasma. Hormones like corticosterone are found to be higher in animals experiencing loser effects than in those experiencing winner effects. Corticosterone is a stress hormone and is likely raised due to the implications of a loss in animals experiencing the loser effect. Some researchers even suggest that this increased level of corticosterone caused by the loser effect inhibits regions of the brain involved in learning and memory, but no formal literature has supported the hypothesis that winner and loser effects directly cause this. An example of this increase in corticosterone following a loss is seen in copperhead snakes. Testosterone is another compound whose concentration within the body is affected by winner and loser effects. 
Research conducted on humans shows that after completing a competitive task against another team, the winning team's testosterone goes up, while the losing team's testosterone goes down. It also showed, in a group setting, that the team member who was the top-scoring player or did the most work received the highest boost in testosterone. Importance of previous experience Winner and loser effects are driven by an organism's previous experiences, typically in an aggressive context. The most recent fighting experience has the greatest effect on the organism: testing done on Rivulus marmoratus showed that individuals that had won their most recent encounter after an earlier loss (LW) had a higher probability of winning their next encounter than fish that had lost their most recent encounter after an earlier win (WL). The literature also showed that encounters that happened two contests before an aggressive event can affect the strength of the winner or loser effect, since the earlier win or loss in the LW and WL sequences modulated the probability of winning the next fight. Hierarchy formation Winner and loser effects can also contribute to the formation of hierarchies. A study done on Xiphophorus helleri, also known as the green swordtail, showed that individuals who won were more likely to assume alpha or higher-ranked positions in a hierarchy, while individuals who lost were more likely to assume omega or lower-ranked positions. Neutral individuals, who have little to no experience with aggressive interactions, fall in an intermediate position between winners and losers, forming the winner-neutral-loser (W-N-L) hierarchy. Hierarchies can also be affected by the strength of the winner or loser effects acting upon them. Winner effects alone typically produce linear hierarchies in which organism A wins all encounters, organism B wins all encounters except against organism A, organism C loses all encounters except against organism D, and organism D loses all encounters. This linear relationship is typically shown as A > B > C > D. Loser effects, unlike winner effects, do not show this linear relationship, because animals experiencing loser effects do not fight, which makes it difficult to assign them a position in a hierarchy. Examples Loser effects in copperhead snakes Copperhead snakes rely on aggressive behaviours to fight for a prospective mate. Since aggressive behaviours in this species are selected for reproduction, winner and loser effects could influence these behaviours and therefore the animal's reproductive success. A male copperhead snake that has not had an aggressive interaction in months, when put in a situation where it must fight for a female, is likely to win an encounter if its body size is larger than that of the other fighter. When copperhead snakes were tested to see whether winner effects affected their ability to win an encounter, no winner effect was found. This was attributed to winners always accepting challenges from other males (even larger ones), and they were found to be more excitable because of this. This indicated that previous experience in winners does not increase their ability to reproduce, as they are just as likely to lose a fight if a larger snake challenges them. Copperhead snakes were also tested to see if loser effects were present. 
This was done by first placing two neutral snakes of about the same size in an arena, and then placing a one-time loser snake against a neutral snake so that the results could be compared. It was found that loser effects were present, as snakes that had lost previous encounters were more likely to lose again. The loser effect in the copperhead snake is so strong that even in encounters where the loser snake was 10% larger, it would always lose if it had more than one previous loss. Winner and loser effects in blue-footed boobies Blue-footed boobies show a frequency-dependent dominant-submissive behavioural strategy. In these birds, nestlings develop one of two strategies, either dominant or submissive. If a first-born chick showed aggression early on towards its siblings, it likely became a dominant member, while if the chick was non-aggressive early on, it likely adopted the submissive strategy. Winner and loser effects are seen in this species as a result of this behavioural strategy. Winner effects were shown when established dominant chicks were placed against non-experienced chicks in a study by Drummond. Dominant chicks were more likely to win an aggressive encounter with a non-experienced chick, even when the non-experienced chick was larger than the dominant chick. This was attributed to established dominant chicks being 6 times more aggressive than non-experienced chicks owing to their previous wins. Loser effects were shown when established submissive chicks were placed against non-experienced chicks in the same study. Submissive chicks were less likely to win an aggressive encounter with a non-experienced chick, even when the non-experienced chick was smaller than the submissive chick. This was attributed to established submissive chicks being 7 times less aggressive than non-experienced chicks owing to their previous losses. The experiment was run for 10 days and showed that winner effects became less powerful over time, while the strength of loser effects remained constant. Winner and loser effects in humans Studies have also found evidence of the winner effect in humans, typically using sport competitions. A study looking at tennis matches found that a very close win or loss in a set has a substantial effect on the chance of winning the next set. The study focused on situations where players end up winning or losing the first set by a very small margin (two points at the end of a tie-break lasting more than 20 points). It found that the winner of the first set has a 60% chance of winning the second set, compared to 40% for the loser of the first set. Such an effect is only observed for male players. Another study found that players winning in tennis experience an increase in testosterone level while losers experience a decrease. The famous hot-hand effect in basketball has also been found to exist: players who are successful at scoring during a match are more likely to shoot successfully later on. Winner and loser effects in plants As humans disturb old-growth forests, they create more forest edges and gaps. The removal of these trees provides less shading and allows for more sunlight. Species that prefer the shaded environment may not be adapted to survive in the increased sunlight. Due to the increased sunlight, it is possible for sunlight-adapted species to thrive in these areas and outcompete the shade-adapted plants. 
In this event, the shade-adapted plants are the losers, while the sunlight-adapted plants are the winners. There are significantly more plants that are losers than winners. While the rate of speciation increases in the winners, it is far outpaced by the extinction of the losers. The projected effects of plant loss on biodiversity loss are more significant than for any other trophic level. See also Winner and loser culture, a similar concept applied to human sociology
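The hierarchy-formation dynamics described in the article (wins raise perceived resource holding potential, losses lower it, and strong winner effects yield a linear A > B > C > D order) can be illustrated with a small simulation. This is a minimal sketch, not a model from the literature; the update rule and all parameters are invented:

```python
import random

# Minimal sketch of winner/loser effects driving hierarchy formation.
# Each animal carries a perceived resource holding potential (RHP): wins
# raise it, losses lower it, and the chance of winning a contest grows with
# the RHP advantage. The update rule and all parameters are invented.
random.seed(42)

WIN_BOOST = 0.2     # winner effect: perceived RHP gain after a win
LOSS_PENALTY = 0.3  # loser effect: perceived RHP drop after a loss

rhp = {name: 1.0 for name in "ABCD"}  # all start with equal perception

for _ in range(50):  # repeated random pairwise contests
    a, b = random.sample(sorted(rhp), 2)
    p_a_wins = rhp[a] / (rhp[a] + rhp[b])  # RHP advantage raises win chance
    winner, loser = (a, b) if random.random() < p_a_wins else (b, a)
    rhp[winner] += WIN_BOOST
    rhp[loser] = max(0.1, rhp[loser] - LOSS_PENALTY)  # floor keeps p valid

# Early chance wins snowball into a stable linear dominance order.
ranking = sorted(rhp, key=rhp.get, reverse=True)
print(" > ".join(ranking), {k: round(v, 2) for k, v in rhp.items()})
```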
Winner and loser effects
[ "Biology" ]
2,068
[ "Behavioural sciences", "Ethology", "Behavior" ]
55,817,338
https://en.wikipedia.org/wiki/Algorithmic%20bias
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm. Bias can emerge from many factors, including but not limited to the design of the algorithm, unintended or unanticipated uses, or decisions relating to the way data is coded, collected, selected, or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (in force since 2018) and the Artificial Intelligence Act (proposed 2021, approved 2024). As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design. Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output, in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service. Definitions Algorithms are difficult to define, but may be generally understood as lists of instructions that determine how programs read, collect, process, and analyze data to generate output. For a rigorous technical introduction, see Algorithms. Advances in computer hardware have led to an increased ability to process, store and transmit data. This has in turn boosted the design and adoption of technologies such as machine learning and artificial intelligence. 
By analyzing and processing data, algorithms are the backbone of search engines, social media websites, recommendation engines, online retail, online advertising, and more. Contemporary social scientists are concerned with algorithmic processes embedded into hardware and software applications because of their political and social impact, and question the underlying assumptions of an algorithm's neutrality. The term algorithmic bias describes systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others. For example, a credit score algorithm may deny a loan without being unfair, if it consistently weighs relevant financial criteria. If the algorithm recommends loans to one group of users, but denies loans to another set of nearly identical users based on unrelated criteria, and if this behavior can be repeated across multiple occurrences, an algorithm can be described as biased. This bias may be intentional or unintentional (for example, it can come from biased data produced by a worker who previously did the job the algorithm is now taking over). Methods Bias can be introduced to an algorithm in several ways. During the assemblage of a dataset, data may be collected, digitized, adapted, and entered into a database according to human-designed cataloging criteria. Next, programmers assign priorities, or hierarchies, for how a program assesses and sorts that data. This requires human decisions about how data is categorized, and which data is included or discarded. Some algorithms collect their own data based on human-selected criteria, which can also reflect the bias of human designers. Other algorithms may reinforce stereotypes and preferences as they process and display "relevant" data for human users, for example, by selecting information based on previous choices of a similar user or group of users. Beyond assembling and processing data, bias can emerge as a result of design. For example, algorithms that determine the allocation of resources or scrutiny (such as determining school placements) may inadvertently discriminate against a category when determining risk based on similar users (as in credit scores). Meanwhile, recommendation engines that work by associating users with similar users, or that make use of inferred marketing traits, might rely on inaccurate associations that reflect broad ethnic, gender, socio-economic, or racial stereotypes. Another example comes from determining criteria for what is included and excluded from results. These criteria could present unanticipated outcomes for search results, such as with flight-recommendation software that omits flights that do not follow the sponsoring airline's flight paths. Algorithms may also display an uncertainty bias, offering more confident assessments when larger data sets are available. This can skew algorithmic processes toward results that more closely correspond with larger samples, which may disregard data from underrepresented populations. History Early critiques The earliest computer programs were designed to mimic human reasoning and deductions, and were deemed to be functioning when they successfully and consistently reproduced that human logic. In his 1976 book Computer Power and Human Reason, artificial intelligence pioneer Joseph Weizenbaum suggested that bias could arise both from the data used in a program and from the way a program is coded. 
Weizenbaum wrote that programs are a sequence of rules created by humans for a computer to follow. By following those rules consistently, such programs "embody law", that is, enforce a specific way to solve problems. The rules a computer follows are based on the assumptions of a computer programmer for how these problems might be solved. That means the code could incorporate the programmer's imagination of how the world works, including their biases and expectations. While a computer program can incorporate bias in this way, Weizenbaum also noted that any data fed to a machine additionally reflects "human decision making processes" as data is being selected. Finally, he noted that machines might also transfer good information with unintended consequences if users are unclear about how to interpret the results. Weizenbaum warned against trusting decisions made by computer programs that a user doesn't understand, comparing such faith to a tourist who can find his way to a hotel room exclusively by turning left or right on a coin toss. Crucially, the tourist has no basis for understanding how or why he arrived at his destination, and a successful arrival does not mean the process is accurate or reliable. An early example of algorithmic bias resulted in as many as 60 women and ethnic minorities being denied entry to St. George's Hospital Medical School per year from 1982 to 1986, based on implementation of a new computer-guidance assessment system that denied entry to women and to men with "foreign-sounding names" based on historical trends in admissions. While many schools at the time employed similar biases in their selection process, St. George's was most notable for automating said bias through the use of an algorithm, thus gaining attention on a much wider scale. In recent years, as algorithms increasingly rely on machine learning methods applied to real-world data, algorithmic bias has become more prevalent due to inherent biases within the data itself. For instance, facial recognition systems have been shown to misidentify individuals from marginalized groups at significantly higher rates than white individuals, highlighting how biases in training datasets manifest in deployed systems. A 2018 study by Joy Buolamwini and Timnit Gebru found that commercial facial recognition technologies exhibited error rates of up to 35% when identifying darker-skinned women, compared to less than 1% for lighter-skinned men. Algorithmic biases are not only technical failures but often reflect systemic inequities embedded in historical and societal data. Researchers and critics, such as Cathy O'Neil in her book Weapons of Math Destruction (2016), emphasize that these biases can amplify existing social inequalities under the guise of objectivity. O'Neil argues that opaque, automated decision-making processes in areas such as credit scoring, predictive policing, and education can reinforce discriminatory practices while appearing neutral or scientific. Contemporary critiques and responses Though well-designed algorithms frequently determine outcomes that are as equitable as, or more equitable than, the decisions of human beings, cases of bias still occur and are difficult to predict and analyze. The complexity of analyzing algorithmic bias has grown alongside the complexity of programs and their design. 
Decisions made by one designer, or team of designers, may be obscured among the many pieces of code created for a single program; over time these decisions and their collective impact on the program's output may be forgotten. In theory, these biases may create new patterns of behavior, or "scripts", in relation to specific technologies as the code interacts with other elements of society. Biases may also impact how society shapes itself around the data points that algorithms require. For example, if data shows a high number of arrests in a particular area, an algorithm may assign more police patrols to that area, which could lead to more arrests. The decisions of algorithmic programs can be seen as more authoritative than the decisions of the human beings they are meant to assist, a process described by author Clay Shirky as "algorithmic authority". Shirky uses the term to describe "the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources", such as search results. This neutrality can also be misrepresented by the language used by experts and the media when results are presented to the public. For example, a list of news items selected and presented as "trending" or "popular" may be created based on significantly wider criteria than just their popularity. Because of their convenience and authority, algorithms are theorized as a means of delegating responsibility away from humans. This can have the effect of reducing alternative options, compromises, or flexibility. Sociologist Scott Lash has critiqued algorithms as a new form of "generative power", in that they are a virtual means of generating actual ends. Where previously human behavior generated data to be collected and studied, powerful algorithms increasingly could shape and define human behaviors. While blind adherence to algorithmic decisions is a concern, an opposite issue arises when human decision-makers exhibit "selective adherence" to algorithmic advice. In such cases, individuals accept recommendations that align with their preexisting beliefs and disregard those that do not, thereby perpetuating existing biases and undermining the fairness objectives of algorithmic interventions. Consequently, incorporating fair algorithmic tools into decision-making processes does not automatically eliminate human biases. Concerns over the impact of algorithms on society have led to the creation of working groups in organizations such as Google and Microsoft, which have co-created a working group named Fairness, Accountability, and Transparency in Machine Learning. Ideas from Google have included community groups that patrol the outcomes of algorithms and vote to control or restrict outputs they deem to have negative consequences. In recent years, the study of the Fairness, Accountability, and Transparency (FAT) of algorithms has emerged as its own interdisciplinary research area, with an annual conference called FAccT. Critics have suggested that FAT initiatives cannot serve effectively as independent watchdogs when many are funded by corporations building the systems being studied. Types Pre-existing Pre-existing bias in an algorithm is a consequence of underlying social and institutional ideologies. Such ideas may influence or create personal biases within individual designers or programmers. Such prejudices can be explicit and conscious, or implicit and unconscious. 
Poorly selected input data, or simply data from a biased source, will influence the outcomes created by machines. Encoding pre-existing bias into software can preserve social and institutional bias, and, without correction, could be replicated in all future uses of that algorithm. An example of this form of bias is the British Nationality Act Program, designed to automate the evaluation of new British citizens after the 1981 British Nationality Act. The program accurately reflected the tenets of the law, which stated that "a man is the father of only his legitimate children, whereas a woman is the mother of all her children, legitimate or not." In its attempt to transfer a particular logic into an algorithmic process, the BNAP inscribed the logic of the British Nationality Act into its algorithm, which would perpetuate it even if the act were eventually repealed. Another source of bias, which has been called "label choice bias", arises when proxy measures used to train algorithms build in bias against certain groups. For example, a widely used algorithm predicted health care costs as a proxy for health care needs, and used predictions to allocate resources to help patients with complex health needs. This introduced bias because Black patients have lower costs, even when they are just as unhealthy as White patients. Solutions to the "label choice bias" aim to match the actual target (what the algorithm is predicting) more closely to the ideal target (what researchers want the algorithm to predict); in the prior example, instead of predicting cost, researchers would focus on the variable of healthcare needs, which is the more meaningful quantity. Adjusting the target led to almost double the number of Black patients being selected for the program. Machine learning bias Machine learning bias refers to systematic and unfair disparities in the output of machine learning algorithms. These biases can manifest in various ways and are often a reflection of the data used to train these algorithms. Here are some key aspects: Language bias Language bias refers to a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository." Luo et al.'s work shows that current large language models, as they are predominantly trained on English-language data, often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent. Gender bias Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. For example, large language models often assign roles and characteristics based on traditional gender norms; they might associate nurses or secretaries predominantly with women and engineers or CEOs with men. Such associations can be measured directly, as in the sketch below. 
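As a concrete illustration of measuring such gender associations, here is a minimal sketch using hand-made toy word vectors; real studies use embeddings trained on large corpora (e.g., WEAT-style tests), and nothing here comes from the article:

```python
import math

# Toy illustration of measuring gender association in word embeddings.
# The 3-d vectors below are hand-made for the example; real studies use
# embeddings learned from large corpora.
vectors = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.3],
    "nurse":    [0.2, 0.8, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

for word in ("engineer", "nurse"):
    # Positive score: closer to "he"; negative score: closer to "she".
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: gender-association score = {bias:+.3f}")
```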
Stereotyping Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways. A recent focus in research has been on the complex interplay between the grammatical properties of a language and real-world biases that can become embedded in AI systems, potentially perpetuating harmful stereotypes and assumptions. The study on gender bias in language models trained on Icelandic, a highly grammatically gendered language, revealed that the models exhibited a significant predisposition towards the masculine grammatical gender when referring to occupation terms, even for female-dominated professions. This suggests the models amplified societal gender biases present in the training data. Political bias Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data. Racial bias Racial bias refers to the tendency of machine learning models to produce outcomes that unfairly discriminate against or stereotype individuals based on race or ethnicity. This bias often stems from training data that reflects historical and systemic inequalities. For example, AI systems used in hiring, law enforcement, or healthcare may disproportionately disadvantage certain racial groups by reinforcing existing stereotypes or underrepresenting them in key areas. Such biases can manifest in ways like facial recognition systems misidentifying individuals of certain racial backgrounds or healthcare algorithms underestimating the medical needs of minority patients. Addressing racial bias requires careful examination of data, improved transparency in algorithmic processes, and efforts to ensure fairness throughout the AI development lifecycle. Technical Technical bias emerges through limitations of a program, computational power, its design, or other constraint on the system. Such bias can also be a restraint of design, for example, a search engine that shows three results per screen can be understood to privilege the top three results slightly more than the next three, as in an airline price display. Another case is software that relies on randomness for fair distributions of results. If the random number generation mechanism is not truly random, it can introduce bias, for example, by skewing selections toward items at the end or beginning of a list. A decontextualized algorithm uses unrelated information to sort results, for example, a flight-pricing algorithm that sorts results by alphabetical order would be biased in favor of American Airlines over United Airlines. The opposite may also apply, in which results are evaluated in contexts different from which they are collected. Data may be collected without crucial external context: for example, when facial recognition software is used by surveillance cameras, but evaluated by remote staff in another country or region, or evaluated by non-human algorithms with no awareness of what takes place beyond the camera's field of vision. 
This could create an incomplete understanding of a crime scene, for example, potentially mistaking bystanders for those who committed the crime. Lastly, technical bias can be created by attempting to formalize decisions into concrete steps on the assumption that human behavior works in the same way. For example, software weighs data points to determine whether a defendant should accept a plea bargain, while ignoring the impact of emotion on a jury. Another unintended result of this form of bias was found in the plagiarism-detection software Turnitin, which compares student-written texts to information found online and returns a probability score that the student's work is copied. Because the software compares long strings of text, it is more likely to identify non-native speakers of English than native speakers, as the latter group might be better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms. Because the software's technical constraints make it easier for native speakers to evade detection, Turnitin is more likely to flag non-native speakers of English for plagiarism while allowing more native speakers to escape notice. Emergent Emergent bias is the result of the use and reliance on algorithms across new or unanticipated contexts. Algorithms may not have been adjusted to consider new forms of knowledge, such as new drugs or medical breakthroughs, new laws, business models, or shifting cultural norms. This may exclude groups through technology, without providing clear outlines to understand who is responsible for their exclusion. Similarly, problems may emerge when training data (the samples "fed" to a machine, by which it models certain conclusions) do not align with contexts that an algorithm encounters in the real world. In 1990, an example of emergent bias was identified in the software used to place US medical students into residencies, the National Residency Match Program (NRMP). The algorithm was designed at a time when few married couples would seek residencies together. As more women entered medical schools, more students were likely to request a residency alongside their partners. The process called for each applicant to provide a list of preferences for placement across the US, which was then sorted and assigned when a hospital and an applicant both agreed to a match. In the case of married couples where both sought residencies, the algorithm weighed the location choices of the higher-rated partner first. The result was a frequent assignment of highly preferred schools to the first partner and lower-preferred schools to the second partner, rather than sorting for compromises in placement preference. Additional emergent biases include: Correlations Unpredictable correlations can emerge when large data sets are compared to each other. For example, data collected about web-browsing patterns may align with signals marking sensitive data (such as race or sexual orientation). By selecting according to certain behavior or browsing patterns, the end effect would be almost identical to discrimination through the use of direct race or sexual orientation data. In other cases, the algorithm draws conclusions from correlations, without being able to understand those correlations. For example, one triage program gave lower priority to asthmatics who had pneumonia than asthmatics who did not have pneumonia. 
The program algorithm did this because it simply compared survival rates: in the historical data, asthmatics with pneumonia showed unusually high survival. Historically, and for this same reason, hospitals typically give such asthmatics the best and most immediate care because they are at the highest risk; the algorithm mistook the effect of that aggressive care for low underlying risk. Unanticipated uses Emergent bias can occur when an algorithm is used by unanticipated audiences. For example, machines may require that users can read, write, or understand numbers, or relate to an interface using metaphors that they do not understand. These exclusions can become compounded, as biased or exclusionary technology is more deeply integrated into society. Apart from exclusion, unanticipated uses may emerge from the end user relying on the software rather than their own knowledge. In one example, an unanticipated user group led to algorithmic bias in the UK, when the British Nationality Act Program was created as a proof-of-concept by computer scientists and immigration lawyers to evaluate suitability for British citizenship. The designers had access to legal expertise beyond the end users in immigration offices, whose understanding of both software and immigration law would likely have been unsophisticated. The agents administering the questions relied entirely on the software, which excluded alternative pathways to citizenship, and used the software even after new case law and legal interpretations led the algorithm to become outdated. As a result of designing an algorithm for users assumed to be legally savvy on immigration law, the software's algorithm indirectly led to bias in favor of applicants who fit a very narrow set of legal criteria set by the algorithm, rather than by the broader criteria of British immigration law. Feedback loops Emergent bias may also create a feedback loop, or recursion, if data collected for an algorithm results in real-world responses which are fed back into the algorithm. For example, simulations of the predictive policing software (PredPol), deployed in Oakland, California, suggested an increased police presence in black neighborhoods based on crime data reported by the public. The simulation showed that the public reported crime based on the sight of police cars, regardless of what police were doing. The simulation interpreted police car sightings in modeling its predictions of crime, and would in turn assign an even larger increase of police presence within those neighborhoods. The Human Rights Data Analysis Group, which conducted the simulation, warned that in places where racial discrimination is a factor in arrests, such feedback loops could reinforce and perpetuate racial discrimination in policing. Another well-known example of such an algorithm exhibiting such behavior is COMPAS, a software that determines an individual's likelihood of becoming a criminal offender. The software has been criticized for labeling Black individuals as likely future offenders far more often than others; when those individuals later acquire criminal records, that data is fed back into the system, further reinforcing the bias in the dataset the algorithm acts on. Recommender systems such as those used to recommend online videos or news articles can create feedback loops. When users click on content that is suggested by algorithms, it influences the next set of suggestions. Over time this may lead to users entering a filter bubble and being unaware of important or useful content. 
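The feedback-loop dynamic described above can be reduced to a few lines of code. The simulation below is a deliberately simplified illustration, not the PredPol model: two districts are assumed to have identical underlying incident rates, patrols are dispatched to wherever the most incidents are already on record, and patrols record new incidents wherever they go, so a one-incident gap in the historical data grows without bound.

```python
TRUE_RATE = 0.5    # identical incident probability per patrol in both districts
N_PATROLS = 10
recorded = [6.0, 5.0]  # slight initial imbalance in the historical records

for step in range(20):
    # Dispatch every patrol to the district with the most recorded incidents.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Patrols record incidents at the same underlying rate everywhere, so
    # the watched district's record grows while the other's stands still.
    recorded[target] += N_PATROLS * TRUE_RATE

print(recorded)  # [106.0, 5.0] -- the initial one-incident gap has exploded
```

The winner-take-all dispatch rule is an extreme assumption, but it makes the self-reinforcing structure of the loop easy to see: recorded incidents both drive and result from patrol allocation.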
Impact Commercial influences Corporate algorithms could be skewed to invisibly favor financial arrangements or agreements between companies, without the knowledge of a user who may mistake the algorithm as being impartial. For example, American Airlines created a flight-finding algorithm in the 1980s. The software presented a range of flights from various airlines to customers, but weighed factors that boosted its own flights, regardless of price or convenience. In testimony to the United States Congress, the president of the airline stated outright that the system was created with the intention of gaining competitive advantage through preferential treatment. In a 1998 paper describing Google, the founders of the company had adopted a policy of transparency in search results regarding paid placement, arguing that "advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers." This bias would be an "invisible" manipulation of the user. Voting behavior A series of studies about undecided voters in the US and in India found that search engine results were able to shift voting outcomes by about 20%. The researchers concluded that candidates have "no means of competing" if an algorithm, with or without intent, boosted page listings for a rival candidate. Facebook users who saw messages related to voting were more likely to vote. A 2010 randomized trial of Facebook users showed a 20% increase (340,000 votes) among users who saw messages encouraging voting, as well as images of their friends who had voted. Legal scholar Jonathan Zittrain has warned that this could create a "digital gerrymandering" effect in elections, "the selective presentation of information by an intermediary to meet its agenda, rather than to serve its users", if intentionally manipulated. Gender discrimination In 2016, the professional networking site LinkedIn was discovered to recommend male variations of women's names in response to search queries. The site did not make similar recommendations in searches for male names. For example, "Andrea" would bring up a prompt asking if users meant "Andrew", but queries for "Andrew" did not ask if users meant to find "Andrea". The company said this was the result of an analysis of users' interactions with the site. In 2012, the department store franchise Target was cited for gathering data points to infer when women customers were pregnant, even if they had not announced it, and then sharing that information with marketing partners. Because the data had been predicted, rather than directly observed or reported, the company had no legal obligation to protect the privacy of those customers. Web search algorithms have also been accused of bias. Google's results may prioritize pornographic content in search terms related to sexuality, for example, "lesbian". This bias extends to the search engine showing popular but sexualized content in neutral searches. For example, "Top 25 Sexiest Women Athletes" articles displayed as first-page results in searches for "women athletes". In 2017, Google adjusted these results along with others that surfaced hate groups, racist views, child abuse and pornography, and other upsetting and offensive content. Other examples include the display of higher-paying jobs to male applicants on job search websites. Researchers have also identified that machine translation exhibits a strong tendency towards male defaults. 
In particular, this is observed in fields linked to unbalanced gender distribution, including STEM occupations. In fact, current machine translation systems fail to reproduce the real-world distribution of female workers. In 2015, Amazon.com turned off an AI system it developed to screen job applications when it realized the system was biased against women. The recruitment tool excluded applicants who attended all-women's colleges and resumes that included the word "women's". A similar problem emerged with music streaming services: in 2019, it was discovered that the recommender system algorithm used by Spotify was biased against women artists. Spotify's song recommendations suggested more male artists than women artists. Racial and ethnic discrimination Algorithms have been criticized as a method for obscuring racial prejudices in decision-making. Because of how certain races and ethnic groups were treated in the past, data can often contain hidden biases. For example, black people are likely to receive longer sentences than white people who committed the same crime. This could potentially mean that a system amplifies the original biases in the data. In 2015, Google apologized when black users complained that an image-identification algorithm in its Photos application identified them as gorillas. In 2010, Nikon cameras were criticized when image-recognition algorithms consistently asked Asian users if they were blinking. Such examples are the product of bias in biometric data sets. Biometric data is drawn from aspects of the body, including racial features either observed or inferred, which can then be transferred into data points. Speech recognition technology can have different accuracies depending on the user's accent. This may be caused by a lack of training data for speakers of that accent. Biometric data about race may also be inferred, rather than observed. For example, a 2012 study showed that names commonly associated with blacks were more likely to yield search results implying arrest records, regardless of whether there is any police record of that individual's name. A 2015 study also found that Black and Asian people are assumed to have lesser-functioning lungs due to racial and occupational exposure data not being incorporated into the prediction algorithm's model of lung function. In 2019, a research study revealed that a healthcare algorithm sold by Optum favored white patients over sicker black patients. The algorithm predicts how much patients would cost the health-care system in the future. However, cost is not race-neutral, as black patients incurred about $1,800 less in medical costs per year than white patients with the same number of chronic conditions, which led to the algorithm scoring white patients as equally at risk of future health problems as black patients who suffered from significantly more diseases. A study conducted by researchers at UC Berkeley in November 2019 revealed that mortgage algorithms used by FinTech companies have been discriminatory towards Latino and African American borrowers, discriminating on the basis of "creditworthiness", a measure rooted in U.S. fair-lending law, which allows lenders to use measures of identification to determine if an individual is worthy of receiving loans. 
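The cost-as-proxy problem in the 2019 healthcare study can be shown numerically. The sketch below uses hypothetical patients (the numbers are illustrative assumptions, not the study's data): the two groups are equally sick, but group B's recorded costs run lower at every illness level, mirroring a gap in access to care, so ranking by predicted cost under-selects group B while ranking directly by need does not.

```python
# Hypothetical patients: (group, chronic conditions, annual cost in dollars).
# Group B's costs run $1,800 lower at the same illness burden, mirroring
# the access-to-care gap reported in the study.
patients = [
    ("A", 4, 9000), ("A", 3, 7200), ("A", 2, 5400), ("A", 1, 3600),
    ("B", 4, 7200), ("B", 3, 5400), ("B", 2, 3600), ("B", 1, 1800),
]

def select_top(patients, key, k=4):
    """Pick the k patients ranked highest by the given key."""
    return sorted(patients, key=key, reverse=True)[:k]

by_cost = select_top(patients, key=lambda p: p[2])  # proxy target
by_need = select_top(patients, key=lambda p: p[1])  # ideal target

print([p[0] for p in by_cost])  # ['A', 'A', 'B', 'A'] -- group A dominates
print([p[0] for p in by_need])  # ['A', 'B', 'A', 'B'] -- groups selected equally
```

Switching the training label from cost to a direct measure of need is the remedy described under "label choice bias" above.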
Another study, published in August 2024, investigates how large language models perpetuate covert racism, particularly through dialect prejudice against speakers of African American English (AAE). It highlights that these models exhibit more negative stereotypes about AAE speakers than any recorded human biases, while their overt stereotypes are more positive. This discrepancy raises concerns about the potential harmful consequences of such biases in decision-making processes. Law enforcement and legal proceedings Algorithms already have numerous applications in legal systems. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than the average COMPAS-assigned risk level of white defendants, and that black defendants are twice as likely to be erroneously assigned the label "high-risk" as white defendants. One example is the use of risk assessments in criminal sentencing and parole hearings in the United States, where judges were presented with an algorithmically generated score intended to reflect the risk that a prisoner will repeat a crime. From 1920 to 1970, the nationality of a criminal's father was a consideration in those risk assessment scores. Today, these scores are shared with judges in Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin. An independent investigation by ProPublica found that the scores were inaccurate 80% of the time, and disproportionately skewed to suggest blacks to be at risk of recidivism, 77% more often than whites. One study that set out to examine "Risk, Race, & Recidivism: Predictive Bias and Disparate Impact" alleges a two-fold (45 percent vs. 23 percent) adverse likelihood for black vs. Caucasian defendants to be misclassified as imposing a higher risk despite having objectively remained without any documented recidivism over a two-year period of observation. In the pretrial detention context, a law review article argues that algorithmic risk assessments violate 14th Amendment Equal Protection rights on the basis of race, since the algorithms are argued to be facially discriminatory, to result in disparate treatment, and to not be narrowly tailored. Online hate speech In 2017 a Facebook algorithm designed to remove online hate speech was found to advantage white men over black children when assessing objectionable content, according to internal Facebook documents. The algorithm, which is a combination of computer programs and human content reviewers, was created to protect broad categories rather than specific subsets of categories. For example, posts denouncing "Muslims" would be blocked, while posts denouncing "Radical Muslims" would be allowed. An unanticipated outcome of the algorithm is to allow hate speech against black children, because they denounce the "children" subset of blacks, rather than "all blacks", whereas "all white men" would trigger a block, because whites and males are not considered subsets. Facebook was also found to allow ad purchasers to target "Jew haters" as a category of users, which the company said was an inadvertent outcome of algorithms used in assessing and categorizing data. The company's design also allowed ad buyers to block African-Americans from seeing housing ads. 
While algorithms are used to track and block hate speech, some were found to be 1.5 times more likely to flag information posted by Black users, and 2.2 times more likely to flag information as hate speech if written in African American English. Without context, slurs and epithets were flagged even when used by communities that have re-appropriated them. One study found that 85 out of 100 examined subreddits tended to remove various norm violations, including misogynistic slurs and racist hate speech, highlighting the prevalence of such content in online communities. As platforms like Reddit update their hate speech policies, they must balance free expression with the protection of marginalized communities, emphasizing the need for context-sensitive moderation and nuanced algorithms. Surveillance Surveillance camera software may be considered inherently political because it requires algorithms to distinguish normal from abnormal behaviors, and to determine who belongs in certain locations at certain times. The ability of such algorithms to recognize faces across a racial spectrum has been shown to be limited by the racial diversity of images in its training database; if the majority of photos belong to one race or gender, the software is better at recognizing other members of that race or gender. However, even audits of these image-recognition systems are ethically fraught, and some scholars have suggested the technology's context will always have a disproportionate impact on communities whose actions are over-surveilled. For example, a 2002 analysis of software used to identify individuals in CCTV images found several examples of bias when run against criminal databases. The software was assessed as identifying men more frequently than women, older people more frequently than the young, and identified Asians, African-Americans and other races more often than whites. A 2018 study found that facial recognition software was most likely to accurately identify light-skinned (typically European) males, with slightly lower accuracy rates for light-skinned females. Dark-skinned males and females were significantly less likely to be accurately identified by facial recognition software. These disparities are attributed to the under-representation of darker-skinned participants in data sets used to develop this software. Discrimination against the LGBTQ community In 2011, users of the gay hookup application Grindr reported that the Android store's recommendation algorithm was linking Grindr to applications designed to find sex offenders, which critics said inaccurately related homosexuality with pedophilia. Writer Mike Ananny criticized this association in The Atlantic, arguing that such associations further stigmatized gay men. In 2009, online retailer Amazon de-listed 57,000 books after an algorithmic change expanded its "adult content" blacklist to include any book addressing sexuality or gay themes, such as the critically acclaimed novel Brokeback Mountain. In 2019, it was found that on Facebook, searches for "photos of my female friends" yielded suggestions such as "in bikinis" or "at the beach". In contrast, searches for "photos of my male friends" yielded no results. Facial recognition technology has been seen to cause problems for transgender individuals. In 2018, there were reports of Uber drivers who were transgender or transitioning experiencing difficulty with the facial recognition software that Uber implements as a built-in security measure. 
As a result, some trans Uber drivers' accounts were suspended, which cost them fares and potentially their jobs, all because the facial recognition software had difficulty recognizing the face of a driver who was transitioning. Although the solution to this issue would appear to be including trans individuals in training sets for machine learning models, in one instance YouTube videos of trans individuals were collected for use as training data without the consent of the individuals included in the videos, which created an issue of violation of privacy. A 2017 Stanford University study tested algorithms in a machine learning system said to be able to detect an individual's sexual orientation based on their facial images. The model in the study predicted a correct distinction between gay and straight men 81% of the time, and a correct distinction between gay and straight women 74% of the time. The study provoked a backlash from the LGBTQIA community, fearful of the possible negative repercussions this AI system could have by putting individuals at risk of being "outed" against their will. Disability discrimination While the modalities of algorithmic fairness have been judged on the basis of different aspects of bias, like gender, race and socioeconomic status, disability is often left out of the list. The marginalization people with disabilities currently face in society is being translated into AI systems and algorithms, creating even more exclusion. The shifting nature of disability and its subjective characterization make it more difficult to address computationally. The lack of historical depth in defining disability, collecting its incidence and prevalence in questionnaires, and establishing recognition adds to the controversy and ambiguity in its quantification and calculation. The definition of disability has long been debated, shifting most recently from a medical model to a social model of disability, which establishes that disability is a result of the mismatch between people's interactions and barriers in their environment, rather than impairments and health conditions. Disabilities can also be situational or temporary, considered in a constant state of flux. Disabilities are incredibly diverse, fall within a large spectrum, and can be unique to each individual. People's identity can vary based on the specific types of disability they experience, how they use assistive technologies, and who they support. The high level of variability across people's experiences greatly personalizes how a disability can manifest. Overlapping identities and intersectional experiences are excluded from statistics and datasets, hence underrepresented and nonexistent in training data. Therefore, machine learning models are trained inequitably and artificially intelligent systems perpetuate more algorithmic bias. For example, if people with speech impairments are not included in training voice control features and smart AI assistants, they are unable to use the feature, or the responses received from a Google Home or Alexa are extremely poor. Given the stereotypes and stigmas that still exist surrounding disabilities, the sensitive nature of revealing these identifying characteristics also carries vast privacy challenges. 
As disclosing disability information can be taboo and drive further discrimination against this population, there is a lack of explicit disability data available for algorithmic systems to interact with. People with disabilities face additional harms and risks with respect to their social support, cost of health insurance, workplace discrimination and other basic necessities upon disclosing their disability status. Algorithms are further exacerbating this gap by recreating the biases that already exist in societal systems and structures. Google Search While users generate results that are "completed" automatically, Google has failed to remove sexist and racist autocompletion text. For example, in Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble notes an example of the search for "black girls", which was reported to result in pornographic images. Google claimed it was unable to erase those pages unless they were considered unlawful. Obstacles to research Several problems impede the study of large-scale algorithmic bias, hindering the application of academically rigorous studies and public understanding. Defining fairness Literature on algorithmic bias has focused on the remedy of fairness, but definitions of fairness are often incompatible with each other and the realities of machine learning optimization. For example, defining fairness as an "equality of outcomes" may simply refer to a system producing the same result for all people, while fairness defined as "equality of treatment" might explicitly consider differences between individuals. As a result, fairness is sometimes described as being in conflict with the accuracy of a model, suggesting innate tensions between the priorities of social welfare and the priorities of the vendors designing these systems. In response to this tension, researchers have suggested more care in the design and use of systems that draw on potentially biased algorithms, with "fairness" defined for specific applications and contexts. 
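The incompatibility between fairness definitions can be made concrete on toy data. In the sketch below (hypothetical predictions and outcomes, not any deployed system), a classifier selects exactly half of each group, so "equality of outcomes" in the sense of equal selection rates holds, yet its true-positive and false-positive rates differ sharply between groups, so "equality of treatment" in the sense of equal error rates fails.

```python
# Hypothetical predictions (1 = selected) and true outcomes for two groups.
groups = {
    "A": {"pred": [1, 1, 0, 0], "true": [1, 1, 0, 0]},
    "B": {"pred": [1, 1, 0, 0], "true": [1, 0, 1, 0]},
}

for name, g in groups.items():
    pairs = list(zip(g["pred"], g["true"]))
    selection_rate = sum(p for p, _ in pairs) / len(pairs)
    # True positive rate: fraction of truly qualified members selected.
    tpr = sum(1 for p, t in pairs if p and t) / sum(t for _, t in pairs)
    # False positive rate: fraction of unqualified members selected.
    fpr = sum(1 for p, t in pairs if p and not t) / sum(1 for _, t in pairs if not t)
    print(f"group {name}: selection={selection_rate:.2f} TPR={tpr:.2f} FPR={fpr:.2f}")
```

Both groups show a selection rate of 0.50, but group B's qualified members are selected only half as often as group A's, and its unqualified members far more often; which disparity counts as "unfair" depends on the definition chosen.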
Complexity Algorithmic processes are complex, often exceeding the understanding of the people who use them. Large-scale operations may not be understood even by those involved in creating them. The methods and processes of contemporary programs are often obscured by the inability to know every permutation of a code's input or output. Social scientist Bruno Latour has identified this process as blackboxing, a process in which "scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become." Others have critiqued the black box metaphor, suggesting that current algorithms are not one black box, but a network of interconnected ones. An example of this complexity can be found in the range of inputs into customizing feedback. The social media site Facebook factored in at least 100,000 data points to determine the layout of a user's social media feed in 2013. Furthermore, large teams of programmers may operate in relative isolation from one another, and be unaware of the cumulative effects of small decisions within connected, elaborate algorithms. Not all code is original, and may be borrowed from other libraries, creating a complicated set of relationships between data processing and data input systems. Additional complexity occurs through machine learning and the personalization of algorithms based on user interactions such as clicks, time spent on site, and other metrics. These personal adjustments can confuse general attempts to understand algorithms. One unidentified streaming radio service reported that it used five unique music-selection algorithms, selected for each user based on their behavior. This creates different experiences of the same streaming services between different users, making it harder to understand what these algorithms do. Companies also run frequent A/B tests to fine-tune algorithms based on user response. For example, the search engine Bing can run up to ten million subtle variations of its service per day, creating different experiences of the service between each use and/or user. Lack of transparency Commercial algorithms are proprietary, and may be treated as trade secrets. Treating algorithms as trade secrets protects companies, such as search engines, where a transparent algorithm might reveal tactics to manipulate search rankings. This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function. Critics suggest that such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output. Other critics, such as lawyer and activist Katarzyna Szymielewicz, have suggested that the lack of transparency is often disguised as a result of algorithmic complexity, shielding companies from disclosing or investigating their own algorithmic processes. Lack of data about sensitive categories A significant barrier to understanding and tackling bias in practice is that categories, such as demographics of individuals protected by anti-discrimination law, are often not explicitly considered when collecting and processing data. In some cases, there is little opportunity to collect this data explicitly, such as in device fingerprinting, ubiquitous computing and the Internet of Things. In other cases, the data controller may not wish to collect such data for reputational reasons, or because it represents a heightened liability and security risk. It may also be the case that, at least in relation to the European Union's General Data Protection Regulation, such data falls under the 'special category' provisions (Article 9), and therefore comes with more restrictions on potential collection and processing. Some practitioners have tried to estimate and impute these missing sensitive categorizations in order to allow bias mitigation, for example building systems to infer ethnicity from names; however, this can introduce other forms of bias if not undertaken with care. Machine learning researchers have drawn upon cryptographic privacy-enhancing technologies such as secure multi-party computation to propose methods whereby algorithmic bias can be assessed or mitigated without these data ever being available to modellers in cleartext. Algorithmic bias does not only include protected categories, but can also concern characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial ground truth, and removing the bias from such a system is more difficult. 
Furthermore, false and accidental correlations can emerge from a lack of understanding of protected categories, for example, insurance rates based on historical data of car accidents which may overlap, strictly by coincidence, with residential clusters of ethnic minorities. Solutions A study of 84 policy guidelines on ethical AI found that fairness and "mitigation of unwanted bias" were a common point of concern, addressed through a blend of technical solutions, transparency and monitoring, right to remedy and increased oversight, and diversity and inclusion efforts. Technical There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. These emergent fields focus on tools which are typically applied to the (training) data used by the program rather than the algorithm's internal processes. These methods may also analyze a program's output and its usefulness and therefore may involve the analysis of its confusion matrix (or table of confusion). Explainable AI is one suggested way to detect the existence of bias in an algorithm or learning model. Using machine learning to detect bias is called "conducting an AI audit", where the "auditor" is an algorithm that goes through the AI model and the training data to identify biases. Ensuring that an AI tool such as a classifier is free from bias is more difficult than just removing the sensitive information from its input signals, because this information is typically implicit in other signals. For example, the hobbies, sports and schools attended by a job candidate might reveal their gender to the software, even when this is removed from the analysis. Solutions to this problem involve ensuring that the intelligent agent does not have any information that could be used to reconstruct the protected and sensitive information about the subject, as first demonstrated in work where a deep learning network was simultaneously trained to learn a task while at the same time being completely agnostic about the protected feature. A simpler method was proposed in the context of word embeddings, and involves removing information that is correlated with the protected characteristic. 
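The word-embedding method just mentioned can be sketched as a linear projection. In the toy example below, the protected (gender) direction is estimated from the single pair he − she, a simplifying assumption (real methods average over many such pairs, and this kind of "hard debiasing" is known to leave residual bias in other directions); subtracting each vector's component along that direction removes the measurable gender signal.

```python
import numpy as np

def project_out(v, direction):
    """Remove the component of v that lies along the given direction."""
    d = direction / np.linalg.norm(direction)
    return v - np.dot(v, d) * d

# Toy vectors; a real system would load trained embeddings.
he    = np.array([ 0.9, 0.1, 0.3, 0.0])
she   = np.array([-0.9, 0.1, 0.3, 0.0])
nurse = np.array([-0.6, 0.5, 0.3, 0.1])

gender_direction = he - she  # single pair here; real methods use many pairs
debiased_nurse = project_out(nurse, gender_direction)

print(np.dot(nurse, gender_direction))           # -1.08: gender signal present
print(np.dot(debiased_nurse, gender_direction))  # ~0.0: signal projected out
```

As the text notes, removing correlated information this way is simpler than adversarial training, but correlations with other, unremoved signals can still leak the protected attribute.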
In February 2017, the IEEE approved a project, sponsored by the Software & Systems Engineering Standards Committee, a committee chartered by the IEEE Computer Society, to develop a standard specifying methodologies that help creators of algorithms eliminate issues of bias and articulate transparency (i.e. to authorities or end users) about the function and possible effects of their algorithms. The standard was released in 2022 and provides guidelines for articulating transparency and mitigating algorithmic biases. Transparency and monitoring Ethics guidelines on AI point to the need for accountability, recommending that steps be taken to improve the interpretability of results. Such solutions include the consideration of the "right to understanding" in machine learning algorithms, and to resist deployment of machine learning in situations where the decisions could not be explained or reviewed. Toward this end, a movement for "Explainable AI" is already underway within organizations such as DARPA, for reasons that go beyond the remedy of bias. PricewaterhouseCoopers, for example, also suggests that monitoring output means designing systems in such a way as to ensure that solitary components of the system can be isolated and shut down if they skew results. An initial approach towards transparency included the open-sourcing of algorithms. Software code can be looked into and improvements can be proposed through source-code-hosting facilities. However, this approach does not necessarily produce the intended effects. Companies and organizations can share all possible documentation and code, but this does not establish transparency if the audience does not understand the information given. Therefore, the role of an interested critical audience is worth exploring in relation to transparency. Algorithms cannot be held accountable without a critical audience. Right to remedy From a regulatory perspective, the Toronto Declaration calls for applying a human rights framework to harms caused by algorithmic bias. This includes legislating expectations of due diligence on behalf of designers of these algorithms, and creating accountability when private actors fail to protect the public interest, noting that such rights may be obscured by the complexity of determining responsibility within a web of complex, intertwining processes. Others propose the need for clear liability insurance mechanisms. Diversity and inclusion Amid concerns that the design of AI systems is primarily the domain of white, male engineers, a number of scholars have suggested that algorithmic bias may be minimized by expanding inclusion in the ranks of those designing AI systems. For example, just 12% of machine learning engineers are women, with black AI leaders pointing to a "diversity crisis" in the field. Groups like Black in AI and Queer in AI are attempting to create more inclusive spaces in the AI community and work against the often harmful desires of corporations that control the trajectory of AI research. Critiques of simple inclusivity efforts suggest that diversity programs cannot address overlapping forms of inequality, and have called for applying a more deliberate lens of intersectionality to the design of algorithms. Researchers at the University of Cambridge have argued that addressing racial diversity is hampered by the "whiteness" of the culture of AI. Interdisciplinarity and Collaboration Integrating interdisciplinarity and collaboration in the development of AI systems can play a critical role in tackling algorithmic bias. Integrating insights, expertise, and perspectives from disciplines outside of computer science can foster a better understanding of the impact data driven solutions have on society. An example of this in AI research is PACT, or Participatory Approach to enable Capabilities in communiTies, a proposed framework for facilitating collaboration when developing AI driven solutions concerned with social impact. This framework identifies guiding principles for stakeholder participation when working on AI for Social Good (AI4SG) projects. PACT attempts to reify the importance of decolonizing and power-shifting efforts in the design of human-centered AI solutions. 
An academic initiative in this regard is Stanford University's Institute for Human-Centered Artificial Intelligence, which aims to foster multidisciplinary collaboration. The mission of the institute is to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition. Collaboration with outside experts and various stakeholders facilitates ethical, inclusive, and accountable development of intelligent systems. It incorporates ethical considerations, understands the social and cultural context, promotes human-centered design, leverages technical expertise, and addresses policy and legal considerations. Collaboration across disciplines is essential to effectively mitigate bias in AI systems and ensure that AI technologies are fair, transparent, and accountable. Regulation Europe The General Data Protection Regulation (GDPR), the European Union's revised data protection regime that was implemented in 2018, addresses "Automated individual decision-making, including profiling" in Article 22. These rules prohibit "solely" automated decisions which have a "significant" or "legal" effect on an individual, unless they are explicitly authorised by consent, contract, or member state law. Where they are permitted, there must be safeguards in place, such as a right to a human-in-the-loop, and a non-binding right to an explanation of decisions reached. While these regulations are commonly considered to be new, nearly identical provisions have existed across Europe since 1995, in Article 15 of the Data Protection Directive. The original automated decision rules and safeguards have been found in French law since the late 1970s. The GDPR addresses algorithmic bias in profiling systems, as well as the statistical approaches possible to clean it, directly in recital 71, noting that the controller should use appropriate mathematical or statistical procedures for the profiling, implement technical and organisational measures appropriate ... that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect. Like the non-binding right to an explanation in recital 71, the problem is the non-binding nature of recitals. While it has been treated as a requirement by the Article 29 Working Party that advised on the implementation of data protection law, its practical dimensions are unclear. It has been argued that the Data Protection Impact Assessments for high risk data profiling (alongside other pre-emptive measures within data protection) may be a better way to tackle issues of algorithmic discrimination, as it restricts the actions of those deploying algorithms, rather than requiring consumers to file complaints or request changes. United States The United States has no general legislation controlling algorithmic bias, approaching the problem through various state and federal laws that might vary by industry, sector, and by how an algorithm is used. Many policies are self-enforced or controlled by the Federal Trade Commission. In 2016, the Obama administration released the National Artificial Intelligence Research and Development Strategic Plan, which was intended to guide policymakers toward a critical assessment of algorithms. 
It recommended that researchers "design these systems so that their actions and decision-making are transparent and easily interpretable by humans, and thus can be examined for any bias they may contain, rather than just learning and repeating these biases". Intended only as guidance, the report did not create any legal precedent. In 2017, New York City passed the first algorithmic accountability bill in the United States. The bill, which went into effect on January 1, 2018, required "the creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public, and how agencies may address instances where people are harmed by agency automated decision systems." The task force is required to present findings and recommendations for further regulatory action in 2019. On February 11, 2019, under Executive Order 13859, the federal government unveiled the "American AI Initiative," a comprehensive strategy to maintain U.S. leadership in artificial intelligence. The initiative highlights the importance of sustained AI research and development, ethical standards, workforce training, and the protection of critical AI technologies. This aligns with broader efforts to ensure transparency, accountability, and innovation in AI systems across public and private sectors. Furthermore, on October 30, 2023, the President signed Executive Order 14110, which emphasizes the safe, secure, and trustworthy development and use of artificial intelligence (AI). The order outlines a coordinated, government-wide approach to harness AI's potential while mitigating its risks, including fraud, discrimination, and national security threats. An important point in the commitment is promoting responsible innovation and collaboration across sectors to ensure that AI benefits society as a whole. With this order, President Joe Biden directed the federal government to create best practices for companies to optimize AI's benefits and minimize its harms. India On July 31, 2018, a draft of the Personal Data Bill was presented. The draft proposes standards for the storage, processing and transmission of data. While it does not use the term algorithm, it makes provisions for "harm resulting from any processing or any kind of processing undertaken by the fiduciary". It defines "any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal" or "any discriminatory treatment" as a source of harm that could arise from improper use of data. It also makes special provisions for people of "Intersex status". See also Algorithmic wage discrimination Ethics of artificial intelligence Fairness (machine learning) Hallucination (artificial intelligence) Misaligned goals in artificial intelligence Predictive policing SenseTime References Further reading Machine learning Information ethics Computing and society Philosophy of artificial intelligence Discrimination Bias
Algorithmic bias
[ "Technology", "Engineering", "Biology" ]
11,958
[ "Behavior", "Machine learning", "Aggression", "Computing and society", "Discrimination", "Ethics of science and technology", "Artificial intelligence engineering", "Information ethics" ]
55,818,277
https://en.wikipedia.org/wiki/Grafomap
Grafomap is a Latvia-based design company that combines OpenStreetMap data with design filters, allowing people to create map posters of places in the world. History The company and its team are located in Latvia, while the posters and maps are printed and shipped from Los Angeles and Riga. Grafomap was founded in 2016 by Rihards Piks and Karlis Bikis. According to co-founder Rihards Piks, the start-up was inspired by Snazzy Maps, a WordPress plugin that colors maps for website contact pages. The idea evolved into creating custom maps in real time for people who would like to use maps as wall posters. The start-up has been featured in numerous fashion, art and business outlets such as The Guardian, Chicago Tribune, Launching Next, The Coolector, Simply Grove, PSFK and Product Hunt. References External links Companies of Latvia Design companies Design companies established in 2016 Latvian companies established in 2016
Grafomap
[ "Engineering" ]
198
[ "Design", "Engineering companies", "Design companies" ]
55,818,497
https://en.wikipedia.org/wiki/Bell%20Brothers%20%28Western%20Australia%29
Bell Brothers was a diversified company with interests in the aggregates, automotive, civil engineering, heavy haulage, mining engineering and transport industries. Primarily based in Western Australia, it also had smaller interests in other states of Australia. History Bell Brothers was formed in 1937 by brothers David, Robert and Alexander Bell. A mechanical shovel was purchased for moving slag and sand for the Perth City Council. During World War II it built airfields at Broome, Derby, Pearce and Port Hedland. It went on to become one of the largest transport companies in the state. In 1946 it commenced mining coal in Collie. As well as operating trucks that moved goods to and from ships docking at Fremantle, by 1950 it had commenced hauling manganese to Meekatharra and iron ore from Koolyanobbing to Southern Cross for onward movement by rail services. A 64-acre headquarters was established in Guildford in 1952. In 1954 it became a distributor for ERF and Mack Trucks. In the late 1950s it was responsible for the construction of RAAF Base Learmonth, briefly operating an Avro Anson aeroplane. On 9 September 1965 Bell Brothers was listed on the Sydney, Melbourne and Perth stock exchanges. In July 1969, it diversified into aggregates, purchasing Swan Quarries, which became Bell Basic Industries. In 1972 Western Transport of Queensland was purchased, followed by the Queensland Tyre Re-treading Co. In 1973 Bell Brothers was acquired by Robert Holmes à Court's Albany Woollen Mills, becoming part of the Bell Group in July 1976. After the Bell Group was taken over by Bond Corporation and the State Government Insurance Office, Bell Brothers was sold to Boral in 1988. In May 1991 the transport business was sold by Boral to Heytesbury Pty Ltd, the family company of Holmes à Court's widow Janet. Notes Auto dealerships of Australia Building materials companies of Australia Companies formerly listed on the Australian Securities Exchange Construction and civil engineering companies established in 1937 Construction and civil engineering companies of Australia Companies based in Perth, Western Australia Guildford, Western Australia Mining engineering companies Transport companies established in 1937 Transport companies of Australia Australian companies established in 1937
Bell Brothers (Western Australia)
[ "Engineering" ]
423
[ "Mining engineering companies", "Mining engineering", "Engineering companies" ]
55,819,184
https://en.wikipedia.org/wiki/Superiorization
Superiorization is an iterative method for constrained optimization. It is used for improving the efficacy of an iterative method whose convergence is resilient to certain kinds of perturbations. Such perturbations are designed to "force" the perturbed algorithm to produce more useful results for the intended application than the ones that are produced by the original iterative algorithm. The perturbed algorithm is called the superiorized version of the original unperturbed algorithm. If the original algorithm is computationally efficient and useful in terms of the target application and if the perturbations are inexpensive to calculate, the method may be used to steer iterates without additional computation cost. Areas of application The superiorization methodology is very general and has been used successfully in many important practical applications, such as iterative reconstruction of images from their projections, single-photon emission computed tomography, radiation therapy and nondestructive testing, just to name a few. A special issue of the journal Inverse Problems is devoted to superiorization, both theory and applications. Objective function reduction and relation with constrained optimization An important case of superiorization is when the original algorithm is "feasibility-seeking" (in the sense that it strives to find some point in a feasible region that is compatible with a family of constraints) and the perturbations that are introduced into the original iterative algorithm aim at reducing (not necessarily minimizing) a given merit function. In this case, superiorization has a unique place in optimization theory and practice. Many constrained optimization methods are based on methods for unconstrained optimization that are adapted to deal with constraints. Such is, for example, the class of projected gradient methods wherein the unconstrained minimization inner step "leads" the process and a projection onto the whole constraints set (the feasible region) is performed after each minimization step in order to regain feasibility. This projection onto the constraints set is in itself a non-trivial optimization problem and the need to solve it in every iteration hinders projected gradient methods and limits their efficacy to only feasible sets that are "simple to project onto". Barrier methods or penalty methods likewise are based on unconstrained optimization combined with various "add-on"s that guarantee that the constraints are preserved. Regularization methods embed the constraints into a "regularized" objective function and proceed with unconstrained solution methods for the new regularized objective function. In contrast to these approaches, the superiorization methodology can be viewed as an antipodal way of thinking. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to reduce merit function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. Furthermore, general-purpose approaches have been developed for automatically superiorizing iterative algorithms for large classes of constraints sets and merit functions; these provide algorithms for many application tasks. 
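A minimal sketch of the scheme described above, under toy assumptions: the feasibility-seeking algorithm is alternating projection onto two half-spaces, the merit function is the squared Euclidean norm, and the perturbations are negative-gradient steps with summable step sizes (the constraint data and constants here are illustrative, not drawn from any published example).

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {y : a.y <= b}."""
    violation = np.dot(a, x) - b
    return x if violation <= 0 else x - violation * a / np.dot(a, a)

# Feasible set: intersection of two half-spaces (toy constraints).
constraints = [(np.array([1.0, 1.0]), 4.0), (np.array([1.0, -1.0]), 2.0)]
phi = lambda x: np.dot(x, x)   # merit function to be reduced (not minimized)
grad = lambda x: 2.0 * x

x = np.array([5.0, 3.0])
step = 1.0
for k in range(100):
    # Perturbation: a small step that reduces the merit function ...
    g = grad(x)
    if np.linalg.norm(g) > 1e-12:
        x = x - step * g / np.linalg.norm(g)
    step *= 0.9  # summable step sizes preserve perturbation resilience
    # ... followed by one sweep of the unperturbed feasibility-seeking operator.
    for a, b in constraints:
        x = project_halfspace(x, a, b)

print(x, phi(x))  # a feasible point with a much smaller merit value
```

The superiorized iteration still converges to a point compatible with the constraints, as the unperturbed projection method does, but it steers toward feasible points with smaller merit values without solving the constrained minimization problem exactly.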
Further sources The superiorization methodology and perturbation resilience of algorithms are reviewed in the literature. Current work on superiorization can be appreciated from a continuously updated Internet page. SNARK14 is a software package for the reconstruction of 2D images from 1D projections that has a built-in capability of superiorizing any iterative algorithm for any merit function. References Iterative methods Mathematical optimization
Superiorization
[ "Mathematics" ]
682
[ "Mathematical optimization", "Mathematical analysis" ]
55,819,429
https://en.wikipedia.org/wiki/List%20of%20metropolitan%20planning%20organizations%20in%20the%20United%20States
The United States government established planning organizations to provide for the coordination of land use, transportation and infrastructure. These Metropolitan Planning Organizations (MPOs) may exist as separate, independent organizations, or they may be administered by a city, county, regional planning organization, highway commission or other government organization. Each MPO has its own structure and governance. The following is a list of the current federally designated MPOs. List of metropolitan planning organizations See also Metropolitan planning organization Conurbation Land-use planning Regional planning Zoning External links "Metropolitan Planning Organizations" (last updated July 29, 2021). USDOT — Bureau of Transportation Statistics — ArcGIS Online — https://data-usdot.opendata.arcgis.com/datasets/metropolitan-planning-organizations/explore ("The Metropolitan Planning Organizations dataset is June 12, 2018, and is part of the U.S. Department of Transportation (USDOT)/Bureau of Transportation Statistics's (BTS's) National Transportation Atlas Database (NTAD). The United States Metropolitan Planning Organizations dataset is a geographic dataset of Metropolitan Planning Organizations political boundaries. These data provide users with information about the locations, names and sizes of Metropolitan Planning Organizations and is intended for use primarily with national planning applications.") References Urban planning Transportation planning Metropolitan planning organizations
List of metropolitan planning organizations in the United States
[ "Engineering" ]
270
[ "Urban planning", "Architecture" ]
55,819,518
https://en.wikipedia.org/wiki/List%20of%20hymenopterans%20of%20Sri%20Lanka
Sri Lanka is a tropical island situated close to the southern tip of India. Its invertebrate fauna is as large as that of other regions of the world. There are about 2 million species of arthropods found in the world, and more are still being discovered to this day. This makes it very complicated and difficult to summarize the exact number of species found within a certain region. This is a list of the hymenopterans found in Sri Lanka.

Hymenoptera

Phylum: Arthropoda
Class: Insecta
Order: Hymenoptera

Hymenoptera is a large order containing an estimated 1,500,000 species of ants, bees, wasps, and sawflies. Female hymenopterans possess a special ovipositor, which is used for inserting eggs safely into hosts or other surfaces. In some groups, this ovipositor is modified into a stinger, which is used primarily for defense. Hymenopterans show complete metamorphosis, with a worm-like larval stage and an inactive pupal stage before they mature. Hymenopterans are typically divided into two suborders: those with a narrow waist are placed in the suborder Apocrita, whereas those lacking a waist are placed in the suborder Symphyta. Wasps, bees, and ants belong to Apocrita; sawflies, horntails, and parasitic wood wasps belong to Symphyta.

Bees are the primary pollinators of terrestrial flowering plants. The hairs on their bodies help them function as efficient pollinators. The highest bee diversity is confined to the warm temperate regions of the world. Bee attacks are sometimes reported from some areas, but they are rarely as dangerous as those of wasps. There are about 70,000 bee species described in the world, in nearly 450 genera and 7 families. Of these, Sri Lanka is home to 151 species in 39 genera and 4 families. Bee research has been carried out extensively by Dr. Inoka Karunaratne et al. of the University of Sri Lanka.

Ants are social insects that can be found in terrestrial ecosystems. They are also very common in human settlements, as well as on the forest floor. Well over 6,000 species of ants have been found and described, and new species continue to be discovered. Sri Lanka is home to 229 species of ants in 66 genera and 12 subfamilies. There are 102 endemic species in Sri Lanka, an endemism of 48.6%. One endemic genus, Aneuretus, is also included in the list. The following list is according to the Ants of Sri Lanka by Prof. R.K. Sriyani Dias, the 2014 comprehensive edition by the Biodiversity Secretariat of the Ministry of Environment and Renewable Energy of Sri Lanka.

Wasps morphologically resemble bees, but are a different group of hymenopterans. Many are eusocial insects with a prominent stinger. A few wasps are solitary in behavior, and these are mostly parasitoids. They are important agriculturally, being used as biological control agents to eradicate pests and other agriculturally harmful insects. Wasp attacks are more frequent in Sri Lanka, where wasps are known to attack humans when provoked. They are numerous around many archeological sites, and attacks can sometimes be fatal. In 1897, Bingham compiled the hymenopteran diversity of the island in the volume The Fauna of British India including Ceylon and Burma, Hymenoptera Vol. 1, Wasps and Bees. In 2001, K.V. Krombein and B.B. Norden published notes on trap-nesting Sri Lankan wasps and bees.
Family: Agaonidae - Fig wasps Eupristina masoni Karadibia gestroi Platyscapa frontalis Sycoscapter stabilis Watshamiella infida Family: Ampulicidae - Cockroach wasps Ampulex ceylonica Ampulex compressa Dolichurus albifacies Dolichurus aridulus Dolichurus lankensis Dolichurus silvicola Trirogma regalis Clade: Anthophila - Bees Family: Aphelinidae - Aphelinid wasps Ablerus connectens Aphelinus lankaensis Coccophagus flavescens Coccophagus longifasciatus Coccophagus srilankensis Coccophagus zebratus Encarsia aonidiae Encarsia bimaculata Encarsia planchoniae Encarsia Sophia Marietta leopardina Proaphelinoides elongatiformis Promuscidea unfasciativentris Family: Aulacidae - Aulacids Pristaulacus krombeini Pristaulacus signatus Family: Braconidae - Braconids Aleiodes euproctis Apanteles paludicolae Apanteles pratapae Apanteles tiracolae Aphrastobracon flavipennis Aspilota ceylonica Bracon albolineatus Bracon greeni Gammabracon erythroura Homolobus truncatoides Schoenlandella nigromaculata Family: Chalcididae - Chalcids Antrocephalus ceylonicus Antrocephalus dividens Antrocephalus mitys Brachymeria nephantidis Dirhinus altispina Dirhinus claviger Dirhinus pilifer Dirhinus sinon Epitranus nigriceps Epitranus observatory Hockeria lankana Hockeria tristis Neochalcis cinca Rhynchochalcis brevicornuta Tropimeris monodon Family: Chrysididae - Cuckoo wasps Chrysis ionophris Loboscelidia ora Stilbum cyanurum Family: Crabronidae - Crabronid wasps Cerceris erythrosoma Cerceris hortivaga Crossocerus nitidicorpus Psenulus carinifrons Tachytes modestus Family: Dryinidae - Dryinid wasps Anteon lankanum Anteon sulawesianum Gonatopus lini Family: Encyrtidae - Encyrtids Adelencyrtus chionaspidis Anagyrus greeni Anicetus ceylonensis Anthemus chionaspidis Aschitus lichtensiae Astymachus japonicas Callipteroma testacea Cheiloneuromyia planchoniae Cheiloneurus hadrodorys Copidosoma agrotis Encyrtus adustipennis Encyrtus corvinus Exoristobia philippinensis Homalopoda cristata Metaphycus lichtensiae Parablastothrix nepticulae Psyllaephagus yaseeni Tachardiaephagus tachardiae Family: Eucharitidae - Eucharitid wasps Chalcura deprivata Cherianella narayani Eucharis cassius Neolosbanus laeviceps Schizaspidia convergens Schizaspidia nasua Family: Eulophidae - Eulophids Ceranisus nigricornis Ceranisus semitestaceus Chrysocharis lankensis Cirrospilus ambiguus Cirrospilus coccivorus Closterocerus insignis Closterocerus pulcherrimus Elasmus anamalaianus Elasmus ashmeadi Elasmus binocellatus Elasmus brevicornis Elasmus indicus Elasmus khandalus Elasmus kollimalaianus Elasmus nagombiensis Elasmus narendrani Elasmus punensis Eulophus tardescens Euplectromorpha formosus Euplectromorpha jamburaliyaensis Euplectromorpha laminum Euplectrus atrafacies Euplectrus ceylonensis Euplectrus colliosilvus Euplectrus geethae Euplectrus leucostomus Euplectrus litoralis Euplectrus mellocoxus Euplectrus nibilis Euplectrus peechansis Euplectrus xanthovultus Metaplectrus solitarius Metaplectrus teresgaster Metaplectrus thoseae Oomyzus ovulorum Parasecodes bali Parasecodes ratnapur Platyplectrus coracinus Platyplectrus flavus Platyplectrus gannoruwaensis Platyplectrus melinus Platyplectrus truncatus Sarasvatia srilankensis Sympiesis striatipes Tamarixia leucaenae Tetrastichus howardi Tetrastichus lankicus Tetrastichus niger Tetrastichus patannas Trichospilus diatraeae Trichospilus pupivorus Family: Eupelmidae - Eupelmids Balcha reticulifrons Calosota aestivalis Eupelmus javae Eupelmus tachardiae Metapelma albisquamulata Metapelma taprobanae Family: 
Eurytomidae - Seed chalcids Eurytoma attiva Eurytoma contraria Prodecatoma josephi Family: Figitidae - Figitids Prosaspicera validispina Family: Formicidae - Ants Family: Ichneumonidae - Ichneumonids Casinaria lenticulata Chriodes orientalis Dusona flinti Enicospilus abessyniensis Enicospilus albiger Enicospilus krombeini Enicospilus laqueatus Temelucha pestifer Venturia lankana Venturia triangulata Family: Leucospidae - Leucospids Leucospis lankana Leucospis viridissima Family: Mymaridae - Fairyflies Acmopolynema narendrani Anagrus elegans Camptoptera enocki Camptoptera protuberculata Camptoptera serenellae Camptoptera tuberculata Ptilomymar besucheti Stephanodes similis Family: Orussidae - Parasitic wood wasps Mocsarya metallica Family: Perilampidae - Perilampids Krombeinius eumenidarum Krombeinius srilanka Monacon angustum Monacon senex Family: Platygastridae - Platygastrids Embidobia orientalis Habroteleia flavipes Heptascelio striatosternus Opisthacantha infortunata Oxyscelio ceylonensis Oxyscelio cuculli Scelio apo Scelio consobrinus Scelio variicornis Tanaodytes soror Telenomus adenyus Telenomus molorchus Telenomus sechellensis Trissolcus aloysiisabaudiae Trissolcus mitsukurii Trissolcus sulmo Family: Pompilidae - Spider wasps Agenioideus smithi Auplopus cyanellus Auplopus funerator Auplopus imitabilis Auplopus krombeini Auplopus laeviculus Auplopus lankaensis Cyphononyx confusus Episyron arrogans Episyron novarae Episyron praestigiosum Episyron tenebricum Evania appendigaster Hemipepsis caeruleopennis Hemipepsis fulvipennis Irenangelus punctipleuris Java atropos Pompilus cinereus Pompilus mirandus Pseudagenia aegina Pygmachus krombeini Family: Pteromalidae - Pteromalids Coelopisthia lankana Dinarmus vagabundus Dipara intermedia Grahamisia gastra Halticoptera propinqua Herbertia indica Mnoonema timida Neolyubana noyesi Papuopsia striata Philotrypesis quadrisetosa Sphegigaster brunneicornis Walkarella temeraria Family: Rhopalosomatidae - Rhopalosomatid wasps Paniscomima abnormis Paniscomima darlingi Paniscomima lottacontinua Family: Scoliidae - Scoliid wasps Liacos erythrosoma Family: Sphecidae - Thread-waisted wasps Chalybion bengalense Chalybion fuscum Chalybion gracile Sceliphron coromandelicum Sceliphron spinolae Sphex fumicatus Sphex sericeus Sphex subtruncatus - ssp krombeini Family: Torymidae - Torymids Odopoia atra Palmon greeni Podagrion charybdis Podagrion epibulum Podagrion judas Podagrion micans Podagrion scylla Torymoides amabilis Family: Trichogrammatidae - Trichogrammatids Mirufens ceylonensis Family: Vespidae - Social wasps Antepipona bipustulata Antepipona frontalis Antepipona ovalis Cyrtolabulus suavis Ischnogaster eximius Mitrodynerus vitripennis Polistes stigma - ssp. infraspecies, tamula Rhynchium brunneum Ropalidia marginata Symmorphus alkimus Vespa affinis - ssp. indosinensis Vespa mandarinia Vespa tropica - ssp. haematodes Notes References Sri Lanka hymenopterans
List of hymenopterans of Sri Lanka
[ "Biology" ]
2,889
[ "Biota by country", "Wildlife by country" ]
55,819,873
https://en.wikipedia.org/wiki/Basics%20of%20blue%20flower%20colouration
Blue flowers are rare in nature, and despite many attempts, blue roses, carnations and chrysanthemums in particular cannot be produced by conventional breeding techniques. Blue colour in flower petals is caused by delphinidin, a type of anthocyanin, a class of flavonoids. The presence of delphinidin is not enough to produce blue colour on its own; it must be in an alkaline environment, form a complex with flavones and metal ions, or act through some other mechanism. Blue colour has also been produced through breeding with the anthocyanin rosacyanin.

Mechanisms

Self-association is correlated with the anthocyanin concentration. At higher concentrations a shift in the absorbance maximum and an increase in colour intensity can be observed. Molecules of anthocyanins associate together, which results in a stronger and darker colour.

Co-pigmentation stabilizes and protects anthocyanins in the complexes. Co-pigments are colourless or slightly yellow. Co-pigments are usually flavonoids (flavones, flavonols, flavanones, flavanols), other polyphenols, alkaloids, amino acids or organic acids. The most efficient co-pigments are flavonols such as rutin or quercetin and phenolic acids such as sinapic acid or ferulic acid. Association of a co-pigment with an anthocyanin causes a bathochromic effect, a shift in the absorption maximum to a higher wavelength; as a result, a change of the colour from red to blue can be observed. This phenomenon is also called the bluing effect. Two types of co-pigmentation can be distinguished, intermolecular and intramolecular. In the first type, the co-pigment is bound to the anthocyanin by non-covalent bonds (hydrogen bonds, hydrophobic interactions, ionic interactions). In the second type, the glycosyl moiety of the anthocyanin is covalently acylated. Intramolecular co-pigmentation has the stronger effect on the colour. This type of protective stacking is also called sandwich-type stacking and is a very common mechanism in the formation of blue flower colour. Ternatin, present in Clitoria ternatea (butterfly pea), and phacelianin, present in Phacelia campanularia, are examples of pigments with intramolecular co-pigmentation.

Metal complexation: blue flower colour is often correlated with the presence of anthocyanins in complexes with metal ions. Metalloanthocyanins are composed of anthocyanins, flavones and metal ions in stoichiometric amounts of 6:6:2, respectively. Typical metals in anthocyanin complexes are iron (Fe), magnesium (Mg), aluminium (Al), copper (Cu), potassium (K) and tin (Sn). Only anthocyanins of the cyanidin or delphinidin type can form complexes with metal ions, because at least two free hydroxyl groups must be present in the B-ring. Examples of metalloanthocyanins: Commelinin, isolated from Commelina communis, contains malonylawobanin (delphinidin type), flavocommelin (as a co-pigment) and magnesium ions in a stoichiometric ratio of 6:6:2. Protocyanin is the blue pigment of cornflower (Centaurea cyanus). It is composed of succinylcyanin (anthocyanin), malonylflavone (co-pigment), iron and magnesium ions, and two calcium ions to stabilize the complex. Interestingly, the same anthocyanin, when not in a complex with the metal ions, is present in red rose petals.

Fuzzy metal complex pigments: in blue-coloured flowers, non-stoichiometric metal-complex pigments stabilized by co-pigmentation are found much more often than metalloanthocyanins. These pigments show blue colour only in aqueous solution and are less stable than metalloanthocyanins.
An example of this kind of pigment is present in hydrangea sepals. The main anthocyanin here is delphinidin-3-glucoside, which should result in blue flower formation, but cultivars with red and pink flowers also exist. It is known that acidification of the soil can change the hydrangea flower colour from red/pink to blue/violet. The explanation of this phenomenon lies in the molar ratio of co-pigment (acylquinic acid) to anthocyanin, which is much higher in the blue cells; the level of Al3+ ions is also higher in the blue flowers. Additionally, the pH of blue cells is around 4.1 while that of red cells is lower, around 3.3. This supramolecule is relatively unstable and can easily fall apart as a result of changes in component concentration or pH, which explains why the blue colour of hydrangea sepals has low stability.

Vacuolar pH influence on flower colour: the pH theory was the first concept that tried to explain the mystery of blue colour formation in flower petals. Early observations showed that cyanin extracted from blue cornflower changes colour in aqueous solutions of different pH. In the acidic range the pigment is red, but in alkaline solution it is blue. This led to the conclusion that an increase of pH in the cell vacuole should increase blue coloration. This phenomenon can be observed in the petals of the morning glory (Ipomoea tricolor) and the Japanese blue morning glory (Ipomoea nil). During flower development the flower colour changes from purple to blue. The morning glory has just one delphinidin-type anthocyanin, and its composition does not change during flower development; instead, the change of colour is caused by an increase of pH in the vacuoles of coloured cells, from 6.6 in buds to 7.7 in fully matured flowers. During the early stage of development the acidic pH is maintained by proton pumps; at the later stage a K+/H+ exchanger is responsible for vacuole alkalization.

Molecular basis

The anthocyanin biosynthesis pathway is now well known and most of the enzymes are characterised. In the formation of blue pigments a few enzymes have particularly important roles, in particular flavonoid 3'5'-hydroxylase (F3'5'H) and dihydroflavonol 4-reductase (DFR). Flavonoid 3'5'-hydroxylase is responsible for the introduction of the second and third hydroxyl groups in the B-ring of dihydrokaempferol (DHK) or naringenin, which are regarded as the main substrates of the reaction. The product of the reaction with DHK is dihydromyricetin (DHM), the precursor for the synthesis of all delphinidin-type anthocyanins. The enzyme is a member of the cytochrome P450 protein family (P450s), a very diverse group of heme-containing oxidases which catalyse NADPH- or NADH-dependent oxidation. F3'5'H has been classified into the CYP75A subfamily. This enzyme is regarded as necessary for blue pigment formation.

Dihydroflavonol 4-reductase is an oxidoreductase that, in the presence of NADPH, catalyses the stereospecific reduction of the keto group in position 4 of dihydroflavonols, producing colourless leucoanthocyanidins as precursors for anthocyanin formation. The enzyme can show substrate specificity with respect to the B-ring hydroxylation pattern of the dihydroflavonol and can therefore influence the type of anthocyanin formed. For blue pigment formation, an enzyme that accepts dihydromyricetin (DHM) as a substrate is necessary. The product of the DFR reaction with DHM is converted in the following steps of the pathway to delphinidin-type blue pigments.
Cultivation

In some economically very important flowers such as roses, carnations and chrysanthemums, despite much effort it has not been possible to breed flowers with blue petal coloration. The lack of the F3'5'H enzyme, and hence of delphinidin-type anthocyanins, is the reason why a blue flower colour could not be obtained.

Blue carnations

Delphinidin-accumulating carnations (Dianthus caryophyllus) were obtained by overexpression of petunia F3'5'H and DFR in cultivars without endogenous DFR activity. As a result, a few cultivars with different purple hues of the flowers were generated.

Blue roses

It is especially difficult to obtain blue/violet flower colour in roses. The lack of F3'5'H and an unfavourable vacuolar pH were the main obstacles. Many cultivars were screened to choose the proper one for genetic modification. Finally, flowers with violet/blue hues were obtained by overexpression of viola F3'5'H, down-regulation of endogenous DFR and, at the same time, overexpression of iris (Iris x hollandica) DFR. As a result of those modifications the flowers accumulate almost exclusively delphinidin-type pigments in the petals.

Additionally, hybridizers throughout the history of rose cultivation have made several strides toward producing rose varieties with colors in the lavender, violet, and mauve color families. These colors were the height of fashion especially in the late 1800s, when the industrial revolution made synthetic color pigments inexpensive and widely available for the first time. Bright shades of royal purple, mauve, and blue naturally became extremely popular and fashionable because these colors previously were only available to extremely wealthy people; the pigments to create them had been very scarce. Several classes of garden rose were created with magenta and violet flowers during this time period to reflect the growing popularity of these previously luxurious and rare colors. Notable examples of popular historical roses which are still grown today by rosarians include the gallica rose 'Cardinal de Richelieu', the hybrid perpetual 'Reine des violettes', and the setigera rambler rose 'Veilchenblau'. Their violet flower colors are due to complexes of cyanidin with sugar molecules, metal complexes, and tannins naturally present in the flower petals.

In the early 1900s, a new species of rose called Rosa foetida 'bicolor' was introduced into world commerce from the Middle East. It has flowers in bright shades of butter yellow, orange, and velvety blood red, and it introduced new genetic traits that serendipitously created a pathway toward new lavender and blue pigments independent of delphinidin, a blue pigment not naturally found in roses. That pathway involves the final stages of flavonoid pigment synthesis, which would normally cause flowers to appear yellow or orange. This new species carried small traces of unused genes that allowed production of another type of blue pigment called rosacyanin, which most roses evolved to stop utilizing in favor of producing flower fragrances to attract pollinators. The structure of rosacyanin was described in 2002. Rosacyanin allows roses to come in delicate mauve, lavender, and true blue shades. These colors can be seen in the hybrid tea roses 'Sterling Silver', 'Blue Girl', and 'Blue Moon', among others, which descend from yellow roses.
Most notably, a popular yellow hybrid tea rose called 'Peace', which was named to commemorate the end of World War II, was used extensively in hybridizing and fathered most of the original lavender hybrid teas. In 2004, the Japanese company Suntory produced a blue rose named Applause. Further hybridizing advancements, scientific study, and possibly further genetic engineering will be necessary to concentrate these natural genetic factors and cofactors before roses can come in deep shades of true blue. See also Blue flower Pterobilin - a blue pigment of animal origin References Agronomy Biological pigments Biological processes Botany Cellular respiration Plant physiology Quantum biology
Basics of blue flower colouration
[ "Physics", "Chemistry", "Biology" ]
2,596
[ "Plant physiology", "Cellular respiration", "Plants", "Quantum mechanics", "nan", "Botany", "Biochemistry", "Pigmentation", "Biological pigments", "Metabolism", "Quantum biology" ]
55,821,003
https://en.wikipedia.org/wiki/Electrochemical%20stripping%20analysis
Electrochemical stripping analysis is a set of analytical chemistry methods based on voltammetry or potentiometry that are used for quantitative determination of ions in solution. Stripping voltammetry (anodic, cathodic and adsorptive) has been employed for the analysis of organic molecules as well as metal ions. Carbon paste, glassy carbon paste, and glassy carbon electrodes, when modified, are termed chemically modified electrodes and have been employed for the analysis of organic and inorganic compounds.

Stripping analysis is an analytical technique that involves (i) preconcentration of a metal phase onto a solid electrode surface or into liquid Hg at negative potentials and (ii) selective oxidation of each metal phase species during an anodic potential sweep. Stripping analysis has the following properties: (1) it is a sensitive and reproducible (RSD < 5%) method for trace metal ion analysis in aqueous media; (2) concentration limits of detection for many metals are in the low ppb to high ppt range (S/N = 3), which compares favorably with AAS or ICP analysis; (3) the instrumentation is inexpensive and field-deployable; and (4) approximately 12-15 metal ions can be analyzed by this method. The stripping peak currents and peak widths are a function of the size, coverage and distribution of the metal phase on the electrode surface (Hg or an alternative).

Anodic stripping voltammetry

Anodic stripping voltammetry is a voltammetric method for quantitative determination of specific ionic species. The analyte of interest is electroplated on the working electrode during a deposition step, and oxidized from the electrode during the stripping step. The current is measured during the stripping step. The oxidation of a species is registered as a peak in the current signal at the potential at which the species begins to be oxidized. The stripping step can be linear, staircase, square-wave, or pulse.

Anodic stripping voltammetry usually incorporates three electrodes: a working electrode, an auxiliary electrode (sometimes called the counter electrode), and a reference electrode. The solution being analyzed usually has an electrolyte added to it. For most standard tests, the working electrode is a bismuth or mercury film electrode (in a disk or planar strip configuration). The mercury film forms an amalgam with the analyte of interest, which upon oxidation results in a sharp peak, improving resolution between analytes. The mercury film is formed over a glassy carbon electrode. A mercury drop electrode has also been used for much the same reasons. In cases where the analyte of interest has an oxidizing potential above that of mercury, or where a mercury electrode would otherwise be unsuitable, a solid, inert metal such as silver, gold, or platinum may be used instead.

Anodic stripping voltammetry usually incorporates four steps if the working electrode is a mercury film or mercury drop electrode and the solution is stirred. The solution is stirred during the first two steps at a repeatable rate. The first step is a cleaning step: the potential is held at a more oxidizing potential than that of the analyte of interest for a period of time in order to fully remove it from the electrode. In the second step, the potential is held at a lower potential, low enough to reduce the analyte and deposit it on the electrode. After the second step, the stirring is stopped, and the electrode is kept at the lower potential. The purpose of this third step is to allow the deposited material to distribute more evenly in the mercury.
If a solid inert electrode is used, this step is unnecessary. The last step involves raising the working electrode to a higher (anodic) potential and stripping (oxidizing) the analyte. As the analyte is oxidized, it gives off electrons, which are measured as a current. Anodic stripping voltammetry can detect μg/L concentrations of analyte; the method has an excellent detection limit (typically 10⁻⁹ to 10⁻¹⁰ M).

Cathodic stripping voltammetry

Cathodic stripping voltammetry is a voltammetric method for quantitative determination of specific ionic species. It is similar to the trace analysis method anodic stripping voltammetry, except that for the plating step the potential is held at an oxidizing potential, and the oxidized species are stripped from the electrode by sweeping the potential negatively. This technique is used for ionic species that form insoluble salts which deposit on or near the anodic working electrode during deposition. The stripping step can be linear, staircase, square-wave, or pulse.

Adsorptive stripping voltammetry

Adsorptive stripping voltammetry is similar to anodic and cathodic stripping voltammetry except that the preconcentration step is not controlled by electrolysis. The preconcentration step in adsorptive stripping voltammetry is accomplished by adsorption on the working electrode surface, or by reactions with chemically modified electrodes. References Electroanalytical methods
Electrochemical stripping analysis
[ "Chemistry" ]
1,036
[ "Electroanalytical methods", "Electroanalytical chemistry" ]
55,821,108
https://en.wikipedia.org/wiki/Anthochlor%20pigments
Anthochlor pigments (ἄνθος anthos = flower; χλωρός chlōrós = yellowish) are a group of secondary plant metabolites which, together with carotenoids and some flavonoids, produce yellow flower colour. Both chalcones and aurones are known as anthochlor pigments. Anthochlor pigments serve as UV nectar guides in some plants. Important anthochlor-accumulating plants are found in the genus Coreopsis, the snapdragon (Antirrhinum majus) and Bidens ferulifolia.

History

Botanists began early to study the distribution of yellow flower pigments, especially carotenoids and yellow flavonoids. The first reference to yellow pigments with properties resembling those of anthochlor pigments was made by Fremy and Cloez in 1854. However, there are only a few, and often contradictory, references pertaining to anthochlor pigments in the literature, which is perhaps down to the fact that "…the anthochlor [pigment] occurs only rarely in the plant kingdom and we [the botanists] are used to attributing yellow colouration of blossoms somewhat indiscriminately to carotenoids".

Classification

Though anthochlors are frequently ranked among the flavonoids, their structure cannot be derived from the flavonoid skeleton. Some plants (especially Asteraceae) accumulate two types of anthochlor pigments: on the one hand, the hydroxy types of chalcones and aurones, and on the other hand the deoxy types of chalcones and their corresponding aurones. The two types differ only in the presence of a hydroxyl group in the 6' position of the B-ring (chalcones) or the 4 position of the A-ring (aurones), respectively. Hydroxychalcones are intermediates of the subsequent biosynthesis of flavonoids and quickly isomerize to flavanones either chemically or enzymatically. Thus, hydroxychalcones cannot be accumulated in plants.

Biosynthesis

The formation of anthochlor pigments is based on the biosynthesis pathway common to all flavonoids. The key to the process is the enzyme chalcone synthase (CHS), which catalyzes the formation of a hydroxychalcone from three molecules of malonyl-CoA and one molecule of cinnamoyl-CoA. Functioning as intermediates of the subsequent biosynthesis of flavonoids, hydroxychalcones are not chemically stable and quickly isomerize to flavanones. However, some plants are capable of accumulating hydroxyaurones, formed by the enzyme aurone synthase (AUS). In the presence of the enzyme chalcone reductase (CHR) and NADPH as a co-factor, the oxygen function of the polyketide intermediate is reduced and eliminated as water prior to cyclization, resulting in the formation of 6'-deoxychalcones. In contrast to hydroxychalcones, deoxychalcones are chemically stable and can therefore be accumulated in plants. Parallel to the monooxygenase flavonoid 3'-hydroxylase, the enzyme chalcone 3-hydroxylase catalyzes the hydroxylation at the C3 position of the A-ring of chalcones. This additional hydroxyl group causes a shift in light absorption and leads to a slightly different yellow tone when the chalcone is accumulated in plants. Like hydroxychalcones, deoxychalcones can be converted to the corresponding aurones, catalyzed by the enzyme aurone synthase (AUS). Subsequent processes can include methylation, glycosylation and acetylation.

Ecological relevance

Yellow flower colouration appeared as an adaptation to the colour sense of insects in order to attract them as pollinators. Many Asteraceae accumulate carotenoids as well as anthochlor pigments. In Bidens ferulifolia (Jacq.)
carotenoids are spread evenly across the petals, whereas anthochlor pigments are accumulated at the petal base. While the flowers appear monochromatic yellow to humans, the petals appear two-coloured to UV-sensitive insects because of the different UV absorption of carotenoids and anthochlor pigments. Plants use this phenomenon to guide pollinators to the petal center. Apart from providing yellow flower colouration, anthochlor pigments play an indispensable role in the floral immune system and plant health.

Verification

Exposing anthochlors to ammonia or the alkaline vapour of cigarettes results in a colour shift from yellow to orange. This is an easy approach to detecting anthochlor pigments. The effect is due to the pH-dependent transition of the undissociated phenol groups to phenolates, which results in a bathochromic shift of approximately 100 nm from the violet to the blue range of the spectrum. The corresponding shift of the reflected wavelengths is perceived by the human eye as a colour switch. References Plant metabolism Biological pigments
Anthochlor pigments
[ "Chemistry", "Biology" ]
1,113
[ "Biological pigments", "Plant metabolism", "Metabolism", "Pigmentation" ]
55,821,253
https://en.wikipedia.org/wiki/Religious%20Orders%20Study
The Religious Orders Study, conducted at the Rush Alzheimer's Disease Center at Rush University in Chicago, is a research project begun in 1994 exploring the effects of aging on the brain. More than 1,500 nuns, priests, and other religious professionals are participating across the United States. The study is finding that cognitive exercise, including social activities and learning new skills, has a protective effect on brain health and the onset of dementia, while negative psychological factors such as anxiety and clinical depression are correlated with cognitive decline. The Religious Orders Study follows the earlier Nun Study. Initial funding was provided by the National Institute on Aging in 1993. References Neuroscience projects Alzheimer's disease Cohort studies Rush Medical College 1994 establishments in the United States Pathology Longitudinal studies
Religious Orders Study
[ "Biology" ]
143
[ "Pathology" ]
55,822,704
https://en.wikipedia.org/wiki/New-collar%20worker
A new-collar worker is an individual who develops the technical and soft skills needed to work in the contemporary technology industry through nontraditional education paths. The term was introduced by IBM CEO Ginni Rometty in late 2016 and refers to "middle-skill" occupations in technology, such as cybersecurity analysts, application developers and cloud computing specialists.

Etymology

The term "new-collar job" is a play on “blue-collar job”. It originated with IBM's CEO Ginni Rometty, relating to the company's efforts to increase the number of people qualified for technology jobs. In November 2016, Rometty wrote an open letter to then-President-elect Donald Trump, which introduced the idea of "new-collar jobs" and urged his support for the creation of these types of roles. Rometty coined the term in response to new employment designations as industries move into a new technology era and jobs are created that require new skills in data science, cloud computing and artificial intelligence.

Occupations and education requirements

According to Rometty, "relevant skills, sometimes obtained through vocational training" are the qualifying characteristics of new-collar work. Typical new-collar jobs include cloud computing technicians, database managers, cybersecurity analysts, user interface designers, and other assorted IT roles. Technical skills and education are required for these roles, but not necessarily a four-year college degree. Skills may be developed through nontraditional education such as community college courses and industry certification programs. Employers of new-collar workers value the ability to adapt and learn equally with more formal education. As well, training for new-collar jobs often involves the development of relevant soft skills. Due to a widespread skills gap, industry demand for new-collar workers has led to the development of education initiatives focused on technical skills. Examples of such initiatives include a partnership between Delta Air Lines and about 37 aviation maintenance schools in the US to develop a curriculum focused on skills needed in the aviation industry, and IBM's P-Tech program, which combines high-school and associate degree studies.

Usage

In the United States, the "New Collar Jobs Act" was introduced by Representatives Ted Lieu (California), Matt Cartwright (Pennsylvania) and Ann McLane Kuster (New Hampshire) in July 2017. The Act sought to provide scholarship funding and debt relief for individuals who study cybersecurity and take up cybersecurity roles, as well as establishing tax breaks for employers that offer cybersecurity training. In August 2017, Virginia Lt. Governor Ralph Northam announced a vocational training program titled "Get Skilled, Get A Job, and Give Back", focused on skills for new-collar jobs. See also Designation of workers by collar color IBM SkillsBuild References 2016 neologisms Employment classifications IBM Office work Social classes
New-collar worker
[ "Technology" ]
565
[ "People in information technology", "Information technology" ]
55,823,152
https://en.wikipedia.org/wiki/Diffeomorphometry
Diffeomorphometry is the metric study of imagery, shape and form in the discipline of computational anatomy (CA) in medical imaging. The study of images in computational anatomy relies on high-dimensional diffeomorphism groups \(\mathrm{Diff}_V\), which generate orbits of the form \(\{ \varphi \cdot I : \varphi \in \mathrm{Diff}_V \}\), in which the images \(I\) can be dense scalar magnetic resonance or computed axial tomography images. For deformable shapes, the orbits are collections of manifolds: points, curves and surfaces. The diffeomorphisms move the images and shapes through the orbit according to \((\varphi, I) \mapsto \varphi \cdot I\), the group actions of computational anatomy. The orbit of shapes and forms is made into a metric space by inducing a metric on the group of diffeomorphisms. The study of metrics on groups of diffeomorphisms and of metrics between manifolds and surfaces has been an area of significant investigation. In computational anatomy, the diffeomorphometry metric measures how close or far two shapes or images are from each other. Informally, the metric is constructed by defining a flow of diffeomorphisms \(\phi_t,\ t \in [0,1]\), which connects the group elements one to another, so that for \(\varphi, \psi \in \mathrm{Diff}_V\), \(\phi_0 = \varphi\) and \(\phi_1 = \psi\). The metric between two coordinate systems or diffeomorphisms is then the length of the shortest, or geodesic, flow connecting them; the explicit form of the metric is given below. The metrics on the orbits are inherited from the metric induced on the diffeomorphism group. The group is thus made into a smooth Riemannian manifold, with a Riemannian metric associated to the tangent spaces at all \(\varphi \in \mathrm{Diff}_V\). The Riemannian metric satisfies the property that at every point of the manifold there is an inner product, inducing the norm on the tangent space, that varies smoothly across the manifold. Oftentimes, the familiar Euclidean metric is not directly applicable because the patterns of shapes and images do not form a vector space; in the Riemannian orbit model of computational anatomy, diffeomorphisms acting on the forms do not act linearly. There are many ways to define metrics, and for the sets associated to shapes the Hausdorff metric is another. The method used here to induce the Riemannian metric is to induce it on the orbit of shapes by defining it in terms of the metric length between diffeomorphic coordinate-system transformations of the flows. Measuring the lengths of the geodesic flow between coordinate systems in the orbit of shapes is called diffeomorphometry.

The diffeomorphism group generated as Lagrangian and Eulerian flows

The diffeomorphisms in computational anatomy are generated to satisfy the Lagrangian and Eulerian specification of the flow fields \(\phi_t,\ t \in [0,1]\), generated via the ordinary differential equation

\(\frac{d}{dt}\phi_t = v_t \circ \phi_t, \quad \phi_0 = \mathrm{id},\)

with the Eulerian vector fields \(v_t\) in \(V\) for \(t \in [0,1]\). The inverse of the flow satisfies \(\frac{d}{dt}\phi_t^{-1} = -(D\phi_t^{-1})\, v_t\), and the Jacobian matrix for flows in \(\mathbb{R}^3\) is given as \(D\phi \doteq \left( \frac{\partial \phi_i}{\partial x_j} \right)\). To ensure smooth flows of diffeomorphisms with inverse, the vector fields must be at least 1-time continuously differentiable in space; they are modelled as elements of the Hilbert space \((V, \|\cdot\|_V)\), using the Sobolev embedding theorems, so that each element has 3 square-integrable derivatives, which implies that \(V\) embeds smoothly in 1-time continuously differentiable functions. The diffeomorphism group consists of flows with vector fields absolutely integrable in the Sobolev norm:

\(\mathrm{Diff}_V \doteq \left\{ \varphi = \phi_1 : \dot{\phi}_t = v_t(\phi_t),\ \phi_0 = \mathrm{id},\ \int_0^1 \| v_t \|_V \, dt < \infty \right\}.\)

The Riemannian orbit model

Shapes in Computational Anatomy (CA) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems.
In this setting, 3-dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template \(I_{\mathrm{temp}}\), so that the observed images are elements of the random orbit model of CA. For images these are defined as \(I \in \mathcal{I} \doteq \{ \varphi \cdot I_{\mathrm{temp}} : \varphi \in \mathrm{Diff}_V \}\), with sub-manifolds (points, curves and surfaces) represented via charts.

The Riemannian metric

The orbit of shapes and forms in Computational Anatomy is generated by the group action \((\varphi, m) \mapsto \varphi \cdot m\), \(\varphi \in \mathrm{Diff}_V\). The orbit is made Riemannian by introducing a metric associated to each point and its tangent space. For this, a metric is defined on the group, which induces the metric on the orbit. Take as the metric for computational anatomy, at each element \(\phi\) of the group of diffeomorphisms, the norm of the tangent vector field, \(\|\dot{\phi}\|_\phi \doteq \|\dot{\phi} \circ \phi^{-1}\|_V = \|v\|_V\), with the vector fields modelled to be in a Hilbert space \((V, \|\cdot\|_V)\). We model \(V\) as a reproducing kernel Hilbert space (RKHS) defined by a 1-1 differential operator \(A : V \to V^*\), where \(V^*\) is the dual space. In general, \(Av\) is a generalized function or distribution; the linear form associated to the inner product and norm for generalized functions is interpreted by integration by parts, according to

\(\langle v, w \rangle_V \doteq \int (A v) \cdot w \, dx, \quad \| v \|_V^2 \doteq \int (A v) \cdot v \, dx, \quad v, w \in V,\)

when \(Av\) is a vector density. The differential operator is selected so that the Green's kernel associated to the inverse \(K = A^{-1}\) is sufficiently smooth that the vector fields support 1 continuous derivative. Sobolev embedding theorem arguments demonstrate that 1 continuous derivative is required for smooth flows. The Green's operator generated from the Green's function (scalar case) associated to the differential operator smooths; for a proper choice of \(A\), \(V\) is an RKHS with operator \(K = A^{-1}\). The Green's kernels associated to the differential operator smooth since, to control enough derivatives in the square-integrable sense, the kernel must be continuously differentiable in both variables.

The diffeomorphometry of the space of shapes and forms

The right-invariant metric on diffeomorphisms: the metric on the group of diffeomorphisms is defined by the distance on pairs of elements of the group according to

\(d_{\mathrm{Diff}_V}(\psi, \varphi) = \inf_{v} \left\{ \int_0^1 \| v_t \|_V \, dt : \phi_0 = \psi,\ \phi_1 = \varphi,\ \dot{\phi}_t = v_t(\phi_t) \right\}.\)

This distance provides a right-invariant metric of diffeomorphometry, invariant to reparameterization of space, since for all \(\eta \in \mathrm{Diff}_V\), \(d_{\mathrm{Diff}_V}(\psi, \varphi) = d_{\mathrm{Diff}_V}(\psi \circ \eta, \varphi \circ \eta)\).

The metric on shapes and forms: the distance on images is \(d_{\mathcal{I}}(I_1, I_2) \doteq \inf \{ d_{\mathrm{Diff}_V}(\mathrm{id}, \varphi) : \varphi \cdot I_1 = I_2 \}\); the distance on shapes and forms is \(d_{\mathcal{M}}(m_1, m_2) \doteq \inf \{ d_{\mathrm{Diff}_V}(\mathrm{id}, \varphi) : \varphi \cdot m_1 = m_2 \}\).

The metric on geodesic flows of landmarks, surfaces, and volumes within the orbit: for calculating the metric, the geodesics are a dynamical system, the flow of coordinates \(t \mapsto \phi_t\) and the control, the vector field \(t \mapsto v_t\), related via \(\dot{\phi}_t = v_t(\phi_t)\). The Hamiltonian view reparameterizes the momentum distribution in terms of the Hamiltonian momentum, a Lagrange multiplier constraining the Lagrangian velocity, accordingly. The Pontryagin maximum principle gives the Hamiltonian, and the optimizing vector field satisfies the associated dynamics; along the geodesic the Hamiltonian is constant. The metric distance between coordinate systems connected via the geodesic is determined by the induced distance between the identity and the group element.

Landmark or pointset geodesics

For landmarks \(q = (q_1, \dots, q_n)\), the Hamiltonian momentum is \(p = (p_1, \dots, p_n)\), with Hamiltonian

\(H(q, p) = \frac{1}{2} \sum_{i,j=1}^n p_i \cdot K(q_i, q_j)\, p_j\)

and Hamiltonian dynamics taking the form

\(\dot{q}_i = \sum_{j=1}^n K(q_i, q_j)\, p_j, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i},\)

with the metric between landmarks given by \(d^2 = \sum_{i,j=1}^n p_i(0) \cdot K(q_i(0), q_j(0))\, p_j(0)\). These Hamiltonian equations generate the landmark trajectories along the geodesic.
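To make the landmark Hamiltonian dynamics concrete, the following is a minimal numerical sketch in Python. The Gaussian kernel, the crude explicit-Euler integrator, the finite-difference gradient, and all numerical values are illustrative assumptions, not taken from any of the software packages listed below.

```python
import numpy as np

def kernel(q, sigma=1.0):
    """Gram matrix K[i, j] = exp(-|q_i - q_j|^2 / (2 sigma^2))."""
    d2 = ((q[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def hamiltonian(q, p, sigma=1.0):
    """H(q, p) = 1/2 sum_ij p_i . K(q_i, q_j) p_j."""
    return 0.5 * np.einsum('ia,ij,ja->', p, kernel(q, sigma), p)

def grad_q(q, p, sigma=1.0, eps=1e-6):
    """Gradient of H in q by central finite differences (for brevity)."""
    g = np.zeros_like(q)
    for idx in np.ndindex(*q.shape):
        dq = np.zeros_like(q)
        dq[idx] = eps
        g[idx] = (hamiltonian(q + dq, p, sigma)
                  - hamiltonian(q - dq, p, sigma)) / (2 * eps)
    return g

def shoot(q0, p0, sigma=1.0, steps=200):
    """Integrate dq/dt = K(q) p, dp/dt = -dH/dq with explicit Euler steps."""
    q, p, dt = q0.copy(), p0.copy(), 1.0 / steps
    for _ in range(steps):
        q, p = q + dt * (kernel(q, sigma) @ p), p - dt * grad_q(q, p, sigma)
    return q, p

# Two landmarks in the plane, with initial momenta pushing them together.
q0 = np.array([[0.0, 0.0], [2.0, 0.0]])
p0 = np.array([[0.5, 0.0], [-0.5, 0.0]])
q1, p1 = shoot(q0, p0)
print(q1)                       # landmark positions at t = 1
print(2 * hamiltonian(q0, p0))  # squared geodesic (diffeomorphometry) distance
```

Because the Hamiltonian is conserved along the geodesic, \(2H(q(0), p(0))\) gives the squared diffeomorphometry distance between the initial and final landmark configurations, up to integration error.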
Surface geodesics

For surfaces, the Hamiltonian momentum is defined as a density across the surface, with Hamiltonian

\(H(q, p) = \frac{1}{2} \int \int p(u) \cdot K(q(u), q(v))\, p(v) \, du \, dv\)

and the analogous Hamiltonian dynamics and induced metric between surface coordinates.

Volume geodesics

For volumes, the Hamiltonian momentum is a density over the volume, with the analogous dynamics and induced metric between volumes.

Software for diffeomorphic mapping

Software suites containing a variety of diffeomorphic mapping algorithms include the following: Deformetrica, ANTS, DARTEL, voxel-based morphometry (VBM), DEMONS, LDDMM, StationaryLDDMM; cloud software: MRICloud. References Computational anatomy Medical imaging Geometry Mathematical analysis Fluid mechanics Bayesian estimation Neuroscience Neural engineering Biomedical engineering
Diffeomorphometry
[ "Mathematics", "Engineering", "Biology" ]
1,485
[ "Mathematical analysis", "Biological engineering", "Neuroscience", "Biomedical engineering", "Civil engineering", "Geometry", "Fluid mechanics", "Medical technology" ]
55,823,225
https://en.wikipedia.org/wiki/List%20of%20human%20endocrine%20organs%20and%20actions
Hypothalamic-pituitary axis Hypothalamus Pineal body (epiphysis) Pituitary gland (hypophysis) The pituitary gland (or hypophysis) is an endocrine gland about the size of a pea, weighing about 0.5 g in humans. It is a protrusion off the bottom of the hypothalamus at the base of the brain, and rests in a small bony cavity (sella turcica) covered by a dural fold (diaphragma sellae). The pituitary is functionally connected to the hypothalamus by the median eminence via a small tube called the infundibular stem or pituitary stalk. The anterior pituitary (adenohypophysis) is connected to the hypothalamus via the hypothalamo–hypophyseal portal vessels, which allows for quicker and more efficient communication between the hypothalamus and the pituitary. Anterior pituitary lobe (adenohypophysis) Posterior pituitary lobe (neurohypophysis) Oxytocin and anti-diuretic hormone are not synthesized in the posterior lobe; they are produced in the hypothalamus and merely stored in, and released from, the posterior lobe. Thyroid Digestive system Stomach Duodenum (small intestine) Liver Pancreas The pancreas is a heterocrine gland, as it functions both as an endocrine and as an exocrine gland. Kidney Adrenal glands Adrenal cortex Adrenal medulla Reproductive Testes Ovarian follicle and corpus luteum Placenta (when pregnant) Uterus (when pregnant) Calcium regulation Parathyroid Skin Other Heart Bone Skeletal muscle In 1998, skeletal muscle was identified as an endocrine organ due to its now well-established role in the secretion of myokines. The use of the term myokine to describe cytokines and other peptides produced by muscle as signalling molecules was proposed in 2003. Adipose tissue Signalling molecules released by adipose tissue are referred to as adipokines. References Endocrine system Human physiology endocrine organs
List of human endocrine organs and actions
[ "Biology" ]
452
[ "Organ systems", "Endocrine system" ]
55,823,265
https://en.wikipedia.org/wiki/Semisimple%20representation
In mathematics, specifically in representation theory, a semisimple representation (also called a completely reducible representation) is a linear representation of a group or an algebra that is a direct sum of simple representations (also called irreducible representations). It is an example of the general mathematical notion of semisimplicity. Many representations that appear in applications of representation theory are semisimple or can be approximated by semisimple representations. A semisimple module over an algebra over a field is an example of a semisimple representation. Conversely, a semisimple representation of a group G over a field k is a semisimple module over the group algebra k[G].

Equivalent characterizations

Let V be a representation of a group G; or more generally, let V be a vector space with a set of linear endomorphisms acting on it. In general, a vector space acted on by a set of linear endomorphisms is said to be simple (or irreducible) if the only invariant subspaces for those operators are zero and the vector space itself; a semisimple representation then is a direct sum of simple representations in that sense. The following are equivalent: 1. V is semisimple as a representation. 2. V is a sum of simple subrepresentations. 3. Each subrepresentation W of V admits a complementary representation: a subrepresentation \(W'\) such that \(V = W \oplus W'\).

The equivalence of the above conditions can be proved based on the following lemma, which is of independent interest: if \(p : V' \to V\) is a surjective equivariant map of representations and \(V'\) is semisimple, then \(p\) splits; i.e., it admits a section.

Proof of the lemma: Write \(V' = \bigoplus_{i \in I} V_i\), where the \(V_i\) are simple representations. Without loss of generality, we can assume the \(V_i\) are subrepresentations; i.e., we can assume the direct sum is internal. Now, consider the family of all possible direct sums with various subsets \(J \subset I\). Put the partial ordering on it by saying the direct sum over K is less than the direct sum over J if \(K \subset J\). By Zorn's lemma, we can find a maximal such that . We claim that . By definition, so we only need to show that . If is a proper subrepresentation of then there exists such that . Since is simple (irreducible), . This contradicts the maximality of , so as claimed. Hence, is a section of p. Note that we cannot take to the set of such that . The reason is that it can happen, and frequently does, that is a subspace of and yet . For example, take , and to be three distinct lines through the origin in . For an explicit counterexample, let be the algebra of 2-by-2 matrices and set , the regular representation of . Set and and set . Then , and are all irreducible -modules and . Let be the natural surjection. Then and . In this case, but because this sum is not direct.

Proof of equivalences. \(1. \Rightarrow 3.\): Take p to be the natural surjection \(V \to V/W\). Since V is semisimple, p splits, and so, through a section, \(V/W\) is isomorphic to a subrepresentation that is complementary to W. \(3. \Rightarrow 2.\): We shall first observe that every nonzero subrepresentation W has a simple subrepresentation. Shrinking W to a (nonzero) cyclic subrepresentation, we can assume it is finitely generated. Then it has a maximal subrepresentation U. By the condition 3., for some . By the modular law, it implies . Then is a simple subrepresentation of W ("simple" because of maximality). This establishes the observation. Now, take to be the sum of all simple subrepresentations, which, by 3., admits a complementary representation . If , then, by the early observation, contains a simple subrepresentation and so , which is absurd. Hence, .
\(2. \Rightarrow 1.\): The implication is a direct generalization of a basic fact in linear algebra that a basis can be extracted from a spanning set of a vector space. That is, we can prove the following slightly more precise statement: when is a sum of simple subrepresentations, a semisimple decomposition , some subset , can be extracted from the sum. As in the proof of the lemma, we can find a maximal direct sum that consists of some 's. Now, for each i in I, by simplicity, either or . In the second case, the direct sum would contradict the maximality of W. Hence, .

Examples and non-examples

Unitary representations

A finite-dimensional unitary representation (i.e., a representation factoring through a unitary group) is a basic example of a semisimple representation. Such a representation is semisimple since if W is a subrepresentation, then the orthogonal complement to W is a complementary representation, because if and , then for any w in W since W is G-invariant, and so . For example, given a continuous finite-dimensional complex representation of a finite group or a compact group G, by the averaging argument one can define an inner product on V that is G-invariant: i.e., , which is to say is a unitary operator, and so is a unitary representation. Hence, every finite-dimensional continuous complex representation of G is semisimple. For a finite group, this is a special case of Maschke's theorem, which says a finite-dimensional representation of a finite group G over a field k with characteristic not dividing the order of G is semisimple.

Representations of semisimple Lie algebras

By Weyl's theorem on complete reducibility, every finite-dimensional representation of a semisimple Lie algebra over a field of characteristic zero is semisimple.

Separable minimal polynomials

Given a linear endomorphism T of a vector space V, V is semisimple as a representation of T (i.e., T is a semisimple operator) if and only if the minimal polynomial of T is separable; i.e., a product of distinct irreducible polynomials. A concrete computational check of this criterion is sketched below.
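The following is a minimal sketch in Python using SymPy. It relies on the fact that, over a perfect field such as the rationals, the minimal polynomial of T is separable exactly when the squarefree part (radical) of the characteristic polynomial already annihilates T, since both polynomials have the same irreducible factors. The function name and the example matrices are illustrative.

```python
import sympy as sp

x = sp.symbols('x')

def is_semisimple(T: sp.Matrix) -> bool:
    """True iff the squarefree part of the characteristic polynomial
    annihilates T, i.e. iff the minimal polynomial of T is separable."""
    char = T.charpoly(x).as_expr()
    radical = sp.Integer(1)
    for factor, _exp in sp.factor_list(char, x)[1]:  # irreducible factors
        radical *= factor
    poly = sp.Poly(radical, x)
    n = T.shape[0]
    value = sp.zeros(n, n)
    for coeff, power in zip(poly.all_coeffs(), range(poly.degree(), -1, -1)):
        value += coeff * T ** power  # evaluate the polynomial at T
    return value == sp.zeros(n, n)

print(is_semisimple(sp.Matrix([[1, 0], [0, 1]])))  # True: diagonalizable
print(is_semisimple(sp.Matrix([[1, 1], [0, 1]])))  # False: Jordan block
```

Here the identity matrix is semisimple, while the nontrivial Jordan block is not, since its minimal polynomial is \((x-1)^2\), which is not squarefree.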
Associated semisimple representation

Given a finite-dimensional representation V, the Jordan–Hölder theorem says there is a filtration by subrepresentations: such that each successive quotient is a simple representation. Then the associated vector space is a semisimple representation called an associated semisimple representation, which, up to an isomorphism, is uniquely determined by V.

Unipotent group non-example

A representation of a unipotent group is generally not semisimple. Take to be the group consisting of real matrices ; it acts on in a natural way and makes V a representation of G. If W is a subrepresentation of V that has dimension 1, then a simple calculation shows that it must be spanned by the vector . That is, there are exactly three G-subrepresentations of V; in particular, V is not semisimple (as a unique one-dimensional subrepresentation does not admit a complementary representation).

Semisimple decomposition and multiplicity

The decomposition of a semisimple representation into simple ones, called a semisimple decomposition, need not be unique; for example, for a trivial representation, simple representations are one-dimensional vector spaces and thus a semisimple decomposition amounts to a choice of a basis of the representation vector space. The isotypic decomposition, on the other hand, is an example of a unique decomposition.

However, for a finite-dimensional semisimple representation V over an algebraically closed field, the numbers of simple representations up to isomorphism appearing in the decomposition of V (1) are unique and (2) completely determine the representation up to isomorphism; this is a consequence of Schur's lemma in the following way. Suppose a finite-dimensional semisimple representation V over an algebraically closed field is given: by definition, it is a direct sum of simple representations. By grouping together simple representations in the decomposition that are isomorphic to each other, up to an isomorphism, one finds a decomposition (not necessarily unique)

\(V \simeq V_1^{\oplus m_1} \oplus \cdots \oplus V_d^{\oplus m_d},\)

where the \(V_i\) are simple representations, mutually non-isomorphic to one another, and the \(m_i\) are positive integers. By Schur's lemma,

\(m_i = \dim \operatorname{Hom}^G(V_i, V),\)

where \(\operatorname{Hom}^G\) refers to the equivariant linear maps. Also, each \(m_i\) is unchanged if \(V_i\) is replaced by another simple representation isomorphic to \(V_i\). Thus, the integers \(m_i\) are independent of the chosen decomposition; they are the multiplicities of the simple representations \(V_i\), up to isomorphism, in V.

In general, given a finite-dimensional representation \(\pi\) of a group G over a field k, the composition \(g \mapsto \operatorname{tr}(\pi(g))\) is called the character of \(\pi\). When \(\pi\) is semisimple with the decomposition as above, the trace is the sum of the traces of the simple summands with multiplicities, and thus, as functions on G,

\(\chi_\pi = \sum_{i} m_i \chi_{\pi_i},\)

where the \(\chi_{\pi_i}\) are the characters of the simple summands \(\pi_i\). When G is a finite group, or more generally a compact group, and \(\pi\) is a unitary representation with the inner product given by the averaging argument, the Schur orthogonality relations say: the irreducible characters (characters of simple representations) of G are an orthonormal subset of the space of complex-valued functions on G, and thus \(m_i = \langle \chi_\pi, \chi_{\pi_i} \rangle\).

Isotypic decomposition

There is a decomposition of a semisimple representation that is unique, called the isotypic decomposition of the representation. By definition, given a simple representation S, the isotypic component of type S of a representation V is the sum of all subrepresentations of V that are isomorphic to S; note the component is also isomorphic to the direct sum of some choice of subrepresentations isomorphic to S (so the component is unique, while the summands are not necessarily so). Then the isotypic decomposition of a semisimple representation V is the (unique) direct sum decomposition

\(V = \bigoplus_{S} V^S,\)

where the sum runs over the set of isomorphism classes of simple representations of G and \(V^S\) is the isotypic component of V of type S.

Example

Let \(V\) be the space of homogeneous degree-three polynomials over the complex numbers in the variables \(x_1, x_2, x_3\). Then \(S_3\) acts on \(V\) by permutation of the three variables. This is a finite-dimensional complex representation of a finite group, and so is semisimple. Therefore, this 10-dimensional representation can be broken up into three isotypic components, each corresponding to one of the three irreducible representations of \(S_3\). In particular, \(V\) contains three copies of the trivial representation, one copy of the sign representation, and three copies of the two-dimensional irreducible representation of \(S_3\); the dimensions check out, since \(3 \cdot 1 + 1 \cdot 1 + 3 \cdot 2 = 10\). For example, the span of and is isomorphic to . This can more easily be seen by writing this two-dimensional subspace as . Another copy of can be written in a similar form: . So can the third: . Then is the isotypic component of type in .

Completion

In Fourier analysis, one decomposes a (nice) function as the limit of the Fourier series of the function. In much the same way, a representation itself may not be semisimple, but it may be the completion (in a suitable sense) of a semisimple representation.
The most basic case of this is the Peter–Weyl theorem, which decomposes the left (or right) regular representation of a compact group into the Hilbert-space completion of the direct sum of all simple unitary representations. As a corollary, there is a natural decomposition for \(L^2(G)\), the Hilbert space of (classes of) square-integrable functions on a compact group G:

\(L^2(G) \simeq \widehat{\bigoplus_{\pi}}\; \pi^{\oplus \dim \pi},\)

where \(\widehat{\bigoplus}\) means the completion of the direct sum and the direct sum runs over all isomorphism classes of simple finite-dimensional unitary representations \(\pi\) of G. Note here that every simple unitary representation (up to an isomorphism) appears in the sum with multiplicity equal to the dimension of the representation. When the group G is a finite group, the vector space \(L^2(G)\) is simply the group algebra of G, and the completion is vacuous. Thus, the theorem simply says that

\(k[G] = \bigoplus_{\pi} \pi^{\oplus \dim \pi}.\)

That is, each simple representation of G appears in the regular representation with multiplicity equal to the dimension of the representation. This is one of the standard facts in the representation theory of a finite group (and is much easier to prove). When the group G is the circle group \(S^1\), the theorem exactly amounts to classical Fourier analysis.

Applications to physics

In quantum mechanics and particle physics, the angular momentum of an object can be described by complex representations of the rotation group SO(3), all of which are semisimple. Due to the connection between SO(3) and SU(2), the non-relativistic spin of an elementary particle is described by complex representations of SU(2), and the relativistic spin is described by complex representations of SL2(C), all of which are semisimple. In angular momentum coupling, Clebsch–Gordan coefficients arise from the multiplicities of irreducible representations occurring in the semisimple decomposition of a tensor product of irreducible representations.

Notes References Citations Sources ; NB: this reference, nominally, considers a semisimple module over a ring not over a group but this is not a material difference (the abstract part of the discussion goes through for groups as well). . Representation theory
Semisimple representation
[ "Mathematics" ]
2,684
[ "Representation theory", "Fields of abstract algebra" ]
55,823,640
https://en.wikipedia.org/wiki/Georges%20Tiercy
Georges César Tiercy (1886–1955) was a Swiss astronomer and the 7th director of the Observatoire de Genève from 1928 to 1956. Tiercy received his bachelor of science degree in 1913 from the University of Paris and his Ph.D. in science and mathematics from the University of Geneva in 1915. He was a master in a private college in Ouchy from 1908 to 1912. He taught mathematics in various schools in Geneva from 1913 to 1927 and was a privat-docent at the University of Geneva from 1915. After an internship at the observatories of Hamburg in 1927 and of Arcetri in Florence in 1927–1928, Tiercy became director of the observatory of Geneva in 1928. At the University of Geneva he was professor ordinarius of astronomy from 1928 to 1950 and rector from 1948 to 1950. At the University of Lausanne he was professor extraordinarius of astronomy from 1936 to 1953 and professor ordinarius from 1953 to 1955. He was the author or co-author of more than 250 papers. Tiercy was an Invited Speaker of the ICM in 1928 at Bologna and in 1932 at Zürich. He was president in 1931 of the Société de Physique et d'Histoire Naturelle (S.P.H.N.) of Geneva and was one of the founders in 1952 of the Swiss National Science Foundation. He did research on theoretical physics, astrophysics, geodesy, meteorology and chronometry. Selected publications References 20th-century Swiss astronomers 1886 births 1955 deaths Rectors of the University of Geneva
Georges Tiercy
[ "Astronomy" ]
319
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
55,823,783
https://en.wikipedia.org/wiki/Mulberry%20%28uranium%20alloy%29
Mulberry is a uranium alloy. It is used as a non-corroding or 'stainless' uranium alloy. It has been put forward as a structural material for the casings of the physics package in nuclear weapons, including those of North Korea. The composition is a ternary alloy, of 7.5% niobium, 2.5% zirconium, 90% uranium. Mulberry was developed in the 1960s at UCRL. Binary alloy compositions were first studied to avoid the mechanical problems of pure uranium: corrosion, dimensional instability, inability to improve its mechanical properties by heat treatment. Uranium-molybdenum alloys were found susceptible to stress-corrosion cracking, uranium-niobium alloys to be weak, and uranium-zirconium alloys to be brittle. Ternary alloys were next studied to try to avoid these drawbacks. Uranium-niobium-zirconium was found to be corrosion resistant and to permit age hardening, which could substantially increase its hardness. Multiple crystal phases were observed, with a critical temperature of 650 °C. Above this the body-centered cubic γ phase was stable. Water quenching to room temperature produces a γs transition phase and with aging this transforms to a tetragonal γo phase. Further aging produces a monoclinic α phase that is observed metallographically as a Widmanstätten pattern. The crystal structure of the alloy has been studied, particularly the γ phase. Uranium inclusions have been observed within the alloy although, unlike the binary alloys, niobium-rich inclusions were not. Early studies were uncertain as to whether these were inherent behaviours, or artifacts of their processing. References Uranium Alloys
Mulberry (uranium alloy)
[ "Chemistry" ]
351
[ "Chemical mixtures", "Alloys" ]
55,824,359
https://en.wikipedia.org/wiki/NGC%204516
NGC 4516 is a barred spiral galaxy located about 55 million light-years away in the constellation Coma Berenices. NGC 4516 was discovered by astronomer William Herschel on April 8, 1784. NGC 4516 is a member of the Virgo Cluster. See also List of NGC objects (4001–5000) NGC 4440 References External links Coma Berenices Barred spiral galaxies 4516 041661 07703 17840408 Virgo Cluster Discoveries by William Herschel +03-32-067
NGC 4516
[ "Astronomy" ]
107
[ "Coma Berenices", "Constellations" ]
55,825,273
https://en.wikipedia.org/wiki/Risk-sensitive%20foraging%20models
Risk-sensitive foraging models help to explain the variance in foraging behaviour in animals. This model allows powerful predictions to be made about expected foraging behaviour for individual groups of animals. Risk-sensitive foraging is based on experimental evidence that the net energy budget level of an animal is predictive of the type of foraging activity the animal will employ. Experimental evidence has indicated that individuals will change the type of foraging strategy that they use depending on environmental conditions and their ability to meet net energy levels. When individuals can meet net energy level requirements by accessing food through risk-averse methods, they do so. However, when net energy level requirements are not met by employing risk-averse methods, individuals are more likely to take risk-prone actions in order to meet their net energy requirements. Caraco’s experiment (juncos) Thomas Caraco and his colleagues in 1980 were amongst the first to study risk-sensitive foraging behaviour, in yellow-eyed juncos. For the original study seven yellow-eyed juncos were used in a two-part experiment. Part one examined foraging behaviours in five juncos when they were given a choice of eating on a perch where enough seeds were placed every time to meet their 24-hour energy requirements, or on a perch where they would sometimes find an abundance of seeds and sometimes no seeds. All individuals showed a preference to feed at the perch where they could get their daily seed requirement, the risk-averse choice. Part two examined foraging preference in four juncos: on one perch seeds were present every time, but not enough to meet their 24-hour energy requirement; on the other perch they could sometimes find an abundance of seeds or no seeds. In this case the juncos showed a preference for feeding at the variable-reward perch, choosing the risk-prone feeding option. In order to test whether individuals would change their strategy as a result of a changed environment, two of the juncos from part one were used in part two of the experiment. As expected, the juncos from part one who preferred the risk-averse foraging strategy switched to risk-prone foraging behaviour in part two of the experiment. Thomas Caraco conducted a follow-up experiment in 1981 with dark-eyed juncos, using a larger sample size. The results were similar; dark-eyed juncos prefer risk-averse foraging behaviours when their 24-hour energy budgets can be met. However, when 24-hour energy budgets are not met, the juncos employ risk-prone foraging behaviour. Other examples Risk-sensitive foraging has also been found in other animal species. Laboratory rats have been found to display risk-sensitive foraging: rats prefer to forage at a constant food supply source if they are able to meet their energy requirements, but will employ risk-prone foraging behaviour when the constant food supply source does not fulfill their daily energy requirement. The common shrew has also been found to use risk-sensitive foraging methods, choosing to be risk-averse when it is able to consistently meet its energy requirements, but switching over to risk-prone foraging and variable reward when its energy requirements are not met regularly. Possible exceptions Follow-up studies conducted in hummingbirds have found conflicting evidence about risk-sensitive foraging. When the hummingbirds were given three different choices of food supply, the risk-sensitive foraging model was not entirely accurate at predicting foraging strategy.
When deciding to obtain food from experimentally manipulated flowers containing low-variance, high-variance, or constant nectar supplies, hummingbirds were found to prefer nectar from the low-variance flower over any other choice. Researchers suggest that these results may be attributed to the possibility that the hummingbirds were not able to examine the amount of nectar present in each flower visually. References Foraging Eating behaviors Risk analysis
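The energy-budget rule underlying these models can be made concrete with a small simulation. The sketch below is illustrative only (the reward values and the requirement are hypothetical, not taken from any of the studies above): an animal choosing between a constant option and a variable option of equal mean maximizes its chance of meeting a daily requirement by being risk-averse on a positive energy budget and risk-prone on a negative one.

```python
import random

def p_meets_requirement(option, requirement, trials=100_000):
    """Estimate the probability that daily intake meets the energy requirement
    under a constant-reward or a variable-reward (equal-mean) foraging option."""
    met = 0
    for _ in range(trials):
        if option == "constant":
            intake = 6.0                      # the same reward every day
        else:                                 # "variable": 0 or 12 with equal odds
            intake = random.choice([0.0, 12.0])
        met += intake >= requirement
    return met / trials

# Positive energy budget: the constant option already meets the requirement.
print(p_meets_requirement("constant", requirement=5.0))  # -> 1.0 (risk-averse wins)
print(p_meets_requirement("variable", requirement=5.0))  # -> ~0.5

# Negative energy budget: the constant option can never meet the requirement.
print(p_meets_requirement("constant", requirement=8.0))  # -> 0.0
print(p_meets_requirement("variable", requirement=8.0))  # -> ~0.5 (risk-prone wins)
```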
Risk-sensitive foraging models
[ "Biology" ]
730
[ "Biological interactions", "Eating behaviors", "Behavior" ]
55,825,837
https://en.wikipedia.org/wiki/Beatriz%20%C3%81lvarez%20Sanna
Beatriz Álvarez Sanna (born 17 September 1968) is a Uruguayan chemist and biochemistry professor at the Faculty of Sciences of the University of the Republic. She researches in the areas of redox biochemistry and enzymology. In 2013 she was the winner of the L'Oréal-UNESCO Award for Women in Science. Career Álvarez Sanna earned a bachelor's degree in chemistry in 1991. In 1993 she obtained a master's degree in chemistry with a thesis focused on bacterial metabolism. In 1999 she received her doctorate in chemistry from the University of the Republic. Her thesis focused on the biological chemistry of peroxynitrite. She works as associate professor at the Enzymology Laboratory of the Faculty of Sciences, University of the Republic. Her main interests are redox biochemistry, kinetics, and enzymology. She develops lines of research in biological thiols and hydrogen sulfide. She is a member of the Editorial Committee of the Journal of Biological Chemistry. In 2013 Álvarez Sanna received the L'Oréal-UNESCO National Award. Her project on the biochemistry of hydrogen sulfide deals with this compound and its possible modulation for pharmacological production and administration, which could constitute new alternatives for the treatment of a wide spectrum of conditions, including hypertension, atherosclerosis, diabetes, and inflammation. She is a researcher at the Programa de Desarrollo de las Ciencias Básicas (PEDECIBA), and a Level II member of the Sistema Nacional de Investigadores (SNI). She is co-author of more than 40 publications in refereed international journals. References 1968 births 20th-century Uruguayan educators 20th-century women scientists 21st-century Uruguayan educators 21st-century women scientists Uruguayan biochemists Uruguayan chemists Living people L'Oréal-UNESCO Awards for Women in Science laureates University of the Republic (Uruguay) alumni Academic staff of the University of the Republic (Uruguay) Uruguayan educators Uruguayan women educators Women biochemists
Beatriz Álvarez Sanna
[ "Chemistry" ]
391
[ "Biochemists", "Women biochemists" ]
55,827,197
https://en.wikipedia.org/wiki/Dearomatization%20reaction
A dearomatization reaction is an organic reaction in which the reactants are arenes and the products permanently lose their aromaticity. It is of some importance in synthetic organic chemistry for the organic synthesis of new building blocks and in total synthesis. Types of carbocyclic arene dearomatization include hydrogenative (Birch reduction), alkylative, photochemical, thermal, oxidative, transition metal-assisted and enzymatic. Photochemical Examples of photochemical reactions are those between certain arenes and alkenes forming [2+2] and [2+4] cycloaddition adducts. Enzymatic Examples of enzymes capable of arene dearomatization are toluene dioxygenase, naphthalene dioxygenase and benzoyl-CoA reductase. Transition metal-assisted A classic example of transition metal-assisted dearomatization is the Buchner ring expansion. Catalytic asymmetric dearomatization reactions (CADA) are used in enantioselective synthesis. References Organic reactions
Dearomatization reaction
[ "Chemistry" ]
224
[ "Organic reactions" ]
59,151,390
https://en.wikipedia.org/wiki/Sulfobacillus
Sulfobacillus is a genus of bacteria containing six named species. Members of the genus are Gram-positive, acidophilic, spore-forming bacteria that are moderately thermophilic or thermotolerant. All species are facultative anaerobes capable of oxidizing sulfur-containing compounds; they differ in optimal growth temperature and metabolic capacity, particularly in their ability to grow on various organic carbon compounds. Ecology Sulfobacillus species are found globally in both natural and artificial acidic environments, such as hot springs, solfatara environments, hydrothermal vents, and in various forms of acid mine drainage. Compared to other bacterial species found in similar acidic environments, Sulfobacillus species are often present at relatively low abundance. Genome The genomes of several Sulfobacillus species have been sequenced. Differences between members include genome size and gene content related to sulfur oxidation pathways. Taxonomy Sulfobacillus was first described in 1978, along with the type species, Sulfobacillus thermosulfidooxidans. Five additional species have since been described, in at least one case discovered after samples believed to be S. thermosulfidooxidans showed unexpected characteristics. The genus is of uncertain taxonomic position. It was originally placed in the Clostridiales. It is likely related to the genus Thermaerobacter and may represent either a deep branch of the Bacillota or a separate phylum. Phylogeny See also List of bacterial orders List of bacteria genera References Acidophiles
Sulfobacillus
[ "Biology" ]
327
[ "Bacteria stubs", "Acids", "Acidophiles", "Bacteria" ]
59,152,777
https://en.wikipedia.org/wiki/Nexgo
NEXGO (also known as Shenzhen Xinguodu Technology Co., Ltd.) is a global manufacturer of high-tech payment terminals, PIN pads and point of sale hardware and software. The company is headquartered in Shenzhen, China, and engages in research and development, production, sale, and leasing of financial point of sale (POS) machines and related electronic payment processing products. Its business scope extends to various areas including electronic payment, biometric technology, intelligent hardware, credit services, audit services, blockchain technology, and big data services. NEXGO operates worldwide through a network of partners and has over 25 million point of sale terminals deployed in over 50 countries. History NEXGO was founded in Shenzhen, China in 2001. In the same year the company succeeded in developing the first-generation wireless POS in China. In 2002, the company was designated by the State Encryption Management Commission as one of the general electronic payment scrambler instrument suppliers. In 2004 the company launched the first color screen POS product in China. In 2006 NEXGO was selected by China UnionPay as one of the four international EFTPOS suppliers. In 2008, the company was recognized as one of Shenzhen's first national-level high-tech enterprises. In 2009 the company released the first big-screen multimedia POS terminal in China, cooperated with an American railway passenger company, and became the first Chinese POS manufacturer to enter the American market. In October 2010 the company was officially listed on the Growth Enterprise Market under stock No. 300130. In 2014, NEXGO released the G3, the first wireless POS in the world to pass PCI 4.0 certification, and Xinguodu's GMB algorithm security POS project was included in the national high-tech industrial development project. In 2016, the company launched the N5 Smart POS. In the same year, a significant development occurred with the establishment of NEXGO's first fully owned overseas subsidiary, Nexgo do Brasil Participações Ltda. Following this expansion, the company further extended its international presence by establishing subsidiaries in Dubai and the United States. In 2018 Global Accelerex, a licensed payment service provider in Nigeria, unveiled the NEXGO N5 Smart POS as the first single-unit Android point of sale terminal to be certified for payment acceptance in Nigeria. See also Point of sale companies References External links Official website Technology companies of China Retail point of sale systems Point of sale companies Chinese companies established in 2001 Companies based in Shenzhen Chinese brands
Nexgo
[ "Technology" ]
516
[ "Retail point of sale systems", "Information systems" ]
59,156,069
https://en.wikipedia.org/wiki/RV%20Rachel%20Carson%20%282017%29
R/V Rachel Carson is a research vessel owned and operated by the University of Washington's School of Oceanography, named in honor of the marine biologist and writer Rachel Carson. The vessel is part of the UNOLS fleet. It is capable of conducting operations within the Salish Sea and coastal waters of the western United States and British Columbia. She can accommodate up to 28 persons, including the crew, for day operations, while up to 13 can be accommodated for multi-day operations. Service history R/V Aora, 2003–2016 The ship was originally launched in May 2003 at the Macduff Shipyard in Macduff, Scotland, as the R/V Aora, a fisheries research vessel. She was based at the University Marine Biological Station Millport in the Firth of Clyde, until the station was closed in 2013. R/V Rachel Carson, 2017–present In 2015 the University of Washington's School of Oceanography wanted to replace its fifty-year-old research vessel, but were unable to raise the funds required to design and build a replacement. In December 2016 they found the Aora for sale on a yacht-trading website. After an inspection in March 2017, the ship was purchased for $1.07m on 8 August 2017, with the aid of a $1m gift. A programme of maintenance and some modifications at the Macduff yard were completed in October, and the Rachel Carson was transported by ship from Rotterdam to West Palm Beach, Florida by early November. She was then transported to the University of Washington, arriving on 28 December. After further preparations and modifications the ship entered service on 7 April 2018, with a five-day cruise in Puget Sound to collect samples for monitoring by the Washington Ocean Acidification Center. She was accepted as a UNOLS vessel in the U.S. Academic Research Fleet on 24 July. References External links 2003 ships Ships built in Scotland Research vessels of the United States University-National Oceanographic Laboratory System research vessels University of Washington Environmental science RV 2017
RV Rachel Carson (2017)
[ "Environmental_science" ]
405
[ "nan" ]
59,156,287
https://en.wikipedia.org/wiki/NGC%203893
NGC 3893 is a spiral galaxy located in the constellation Ursa Major. It is located at a distance of circa 50 million light years from Earth, which, given its apparent dimensions, means that NGC 3893 is about 70,000 light years across. It was discovered by William Herschel on February 9, 1788. NGC 3893 interacts with its satellite, NGC 3896. Characteristics NGC 3893 is a grand design spiral galaxy. It has two main arms, with high surface brightness and numerous HII regions. A faint spiral arm extends from the south to the north side making an arc on the east side of NGC 3893. The galaxy is categorised as SAB in NED, but Hernández-Toledo and Puerari did not detect a bar in their observations. The stellar disk of NGC 3893 is estimated to have a mass of 2.3×10^10 M☉ and dominates gas dynamics in the optical radius. The star formation rate in NGC 3893 is about 5.62 M☉ per year. Although no supernovae have been observed in NGC 3893 yet, Kōichi Itagaki discovered a luminous red nova, designated AT 2023uhx, on 7 October 2023 (type LRN, mag. 17.2). Nearby galaxies NGC 3893 interacts with NGC 3896, a smaller galaxy lying at an angular distance of 3.9 arcminutes, and this results in a number of tidal features, like warps and bridges. A bridge of material is observed in HI imaging connecting the two galaxies. A stellar debris bridge is observed at the south side, better seen in B-band images, suggesting it is composed of young stars. The mass ratio between the two galaxies is about 0.025–0.031. NGC 3893 and its smaller companion NGC 3896 are members of the NGC 3877 group, which belongs to the south Ursa Major groups, part of the Virgo Supercluster. NGC 3906 lies 20 arcminutes to the southeast of NGC 3893. Other galaxies in the same group are NGC 3726, NGC 3928, NGC 3949, NGC 3985, and NGC 4010. See also Messier 51 - a similar galaxy pair References External links Intermediate spiral galaxies Ursa Major Ursa Major Cluster 3893 06778 036875 Astronomical objects discovered in 1788 Discoveries by William Herschel
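The physical size quoted in the lead follows from small-angle arithmetic on the distance and the apparent dimensions. A short sketch (the angular diameter used here, about 4.5 arcminutes, is an assumed illustrative value, not stated in the article):

```python
import math

DISTANCE_LY = 50e6             # distance from the article: ~50 million light-years
ANGULAR_DIAMETER_ARCMIN = 4.5  # assumed apparent size; not given in the text

# Small-angle approximation: physical size = distance * angle (in radians).
theta_rad = ANGULAR_DIAMETER_ARCMIN * math.pi / (180 * 60)
size_ly = DISTANCE_LY * theta_rad
print(f"{size_ly:,.0f} light-years")  # ~65,000 ly, consistent with the quoted ~70,000 ly
```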
NGC 3893
[ "Astronomy" ]
502
[ "Ursa Major", "Constellations" ]
59,156,461
https://en.wikipedia.org/wiki/Swelling%20index
Swelling index may refer to the following material parameters that quantify volume change: Crucible swelling index, also known as free swelling index, in coal assay Swelling capacity, the amount of a liquid that can be absorbed by a polymer Shrink–swell capacity in soil mechanics Unload-reload constant (κ) in critical state soil mechanics Mechanics Materials science
Swelling index
[ "Physics", "Materials_science", "Engineering" ]
72
[ "Applied and interdisciplinary physics", "Materials science", "Mechanics", "Mechanical engineering", "nan" ]
59,158,118
https://en.wikipedia.org/wiki/Idempotent%20analysis
In mathematical analysis, idempotent analysis is the study of idempotent semirings, such as the tropical semiring. The lack of an additive inverse in the semiring is compensated somewhat by the idempotent rule x ⊕ x = x. References
Idempotent analysis
[ "Mathematics" ]
52
[ "Mathematical analysis", "Mathematical analysis stubs" ]
59,158,120
https://en.wikipedia.org/wiki/Tropical%20analysis
In the mathematical discipline of idempotent analysis, tropical analysis is the study of the tropical semiring. Applications Given a Petri net and a vector of initial marking states, the max tropical semiring can be used to determine marking times: −∞ (the unit for max, i.e., tropical addition) means "never before", while 0 (the unit for addition, i.e., tropical multiplication) is "no additional time". Tropical cryptography is cryptography based on the tropical semiring. Tropical geometry is an analog to algebraic geometry, using the tropical semiring. References Further reading See also Lunar arithmetic External links MaxPlus algebra Max Plus working group, INRIA Rocquencourt Tropical geometry
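As a concrete illustration of the max-plus semiring mentioned above, the sketch below implements tropical matrix–vector multiplication and applies it to a tiny timed event graph. The delay matrix and firing-time vector are hypothetical examples, not taken from any particular source; note that the idempotent rule holds automatically, since tropical addition is max.

```python
import numpy as np

NEG_INF = float("-inf")  # additive identity: "never before"; 0 is the multiplicative identity

def maxplus_matvec(A, x):
    """Tropical (max-plus) matrix-vector product:
    out[i] = max over k of (A[i, k] + x[k])."""
    n, m = A.shape
    out = np.full(n, NEG_INF)
    for i in range(n):
        out[i] = max(A[i, k] + x[k] for k in range(m))
    return out

# Hypothetical timed event graph: A[i, j] is the delay from event j to event i,
# with -inf meaning "no direct dependence"; x holds the current firing times.
A = np.array([[NEG_INF, 3.0],
              [2.0, NEG_INF]])
x = np.array([0.0, 1.0])
print(maxplus_matvec(A, x))  # [4. 2.]: next marking times of the two events
```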
Tropical analysis
[ "Mathematics" ]
140
[ "Mathematical analysis", "Mathematical analysis stubs" ]
59,158,299
https://en.wikipedia.org/wiki/Up-and-down%20design
Up-and-down designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have binary responses: each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the "dose") that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. An example where dose-finding is used is in an experiment to estimate the LD50 of some toxic chemical with respect to mice. Dose-finding designs are sequential and response-adaptive: the dose at a given point in the experiment depends upon previous outcomes, rather than being fixed a priori. Dose-finding designs are generally more efficient for this task than fixed designs, but their properties are harder to analyze, and some require specialized design software. UDDs use a discrete set of doses rather than vary the dose continuously. They are relatively simple to implement, and are also among the best understood dose-finding designs. Despite this simplicity, UDDs generate random walks with intricate properties. The original UDD aimed to find the median threshold by increasing the dose one level after a "0" response, and decreasing it one level after a "1" response. Hence the name "up-and-down". Other UDDs break this symmetry in order to estimate percentiles other than the median, or are able to treat groups of subjects rather than one at a time. UDDs were developed in the 1940s by several research groups independently. The 1950s and 1960s saw rapid diversification with UDDs targeting percentiles other than the median, and expanding into numerous applied fields. The 1970s to early 1990s saw little UDD methods research, even as the design continued to be used extensively. A revival of UDD research since the 1990s has provided deeper understanding of UDDs and their properties, and new and better estimation methods. UDDs are still used extensively in the two applications for which they were originally developed: psychophysics, where they are used to estimate sensory thresholds and are often known as fixed forced-choice staircase procedures, and explosive sensitivity testing, where the median-targeting UDD is often known as the Bruceton test. UDDs are also very popular in toxicity and anesthesiology research. They are also considered a viable choice for Phase I clinical trials. Mathematical description Definition Let $n$ be the sample size of a UDD experiment, and assume for now that subjects are treated one at a time. Then the doses these subjects receive, denoted as random variables $X_1, \dots, X_n$, are chosen from a discrete, finite set of increasing dose levels $\mathcal{D} = \{d_1, \dots, d_M\}$. Furthermore, if $X_i = d_m$, then $X_{i+1} \in \{d_{m-1}, d_m, d_{m+1}\}$ according to simple constant rules based on recent responses. That is, the next subject must be treated one level up, one level down, or at the same level as the current subject. The responses themselves are denoted $Y_1, \dots, Y_n$; hereafter the "1" responses are positive and "0" negative. The repeated application of the same rules (known as dose-transition rules) over the finite set of dose levels turns $X_1, \dots, X_n$ into a random walk over $\mathcal{D}$. Different dose-transition rules produce different UDD "flavors", such as the three described below.
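Before developing the formal machinery, the original median-targeting rule just defined can be simulated in a few lines. This is a minimal sketch, not part of the original text; the logistic dose-toxicity curve is a hypothetical choice used only to make the run concrete.

```python
import math
import random

def simulate_classical_udd(F, n_levels, start, n):
    """Simulate the original up-and-down rule: one level down after a '1'
    (positive/toxic) response, one level up after a '0', clipped at the
    boundary doses. F(m) is the probability of a positive response at level m.
    Returns the fraction of allocations at each dose level."""
    m = start
    visits = [0] * n_levels
    for _ in range(n):
        visits[m] += 1
        positive = random.random() < F(m)
        m = max(m - 1, 0) if positive else min(m + 1, n_levels - 1)
    return [v / n for v in visits]

# Hypothetical logistic curve with median threshold at level 4 (0-indexed).
F = lambda m: 1.0 / (1.0 + math.exp(-(m - 4)))
freqs = simulate_classical_udd(F, n_levels=9, start=0, n=100_000)
print([round(f, 3) for f in freqs])
# Allocation frequencies pile up around level 4, where F = 0.5 -- the balance point.
```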
Despite the experiment using only a discrete set of dose levels, the dose-magnitude variable itself, $x$, is assumed to be continuous, and the probability of positive response is assumed to increase continuously with increasing $x$. The goal of dose-finding experiments is to estimate the dose $x$ (on a continuous scale) that would trigger positive responses at a pre-specified target rate $\Gamma$; often known as the "target dose". This problem can be also expressed as estimation of the quantile $F^{-1}(\Gamma)$ of a cumulative distribution function describing the dose-toxicity curve $F(x)$. The density function $f(x)$ associated with $F(x)$ is interpretable as the distribution of response thresholds of the population under study. Transition probability matrix Given that a subject receives dose $d_m$, denote the probability that the next subject receives dose $d_{m-1}$, $d_m$, or $d_{m+1}$, as $p_{m,m-1}$, $p_{m,m}$, or $p_{m,m+1}$, respectively. These transition probabilities obey the constraints $p_{m,m-1} + p_{m,m} + p_{m,m+1} = 1$ and the boundary conditions $p_{1,0} = p_{M,M+1} = 0$. Each specific set of UDD rules enables the symbolic calculation of these probabilities, usually as a function of $F(d_m)$. The transition probabilities are assumed fixed in time, depending only upon the current allocation and its outcome, i.e., upon $X_i$ and $Y_i$, and through them upon $F(x)$ (and possibly on a set of fixed parameters). The probabilities are then best represented via a tri-diagonal transition probability matrix (TPM) $P$:
$$P = \begin{pmatrix} p_{1,1} & p_{1,2} & 0 & \cdots & 0 \\ p_{2,1} & p_{2,2} & p_{2,3} & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & p_{M-1,M-2} & p_{M-1,M-1} & p_{M-1,M} \\ 0 & \cdots & 0 & p_{M,M-1} & p_{M,M} \end{pmatrix}$$
Balance point Usually, UDD dose-transition rules bring the dose down (or at least bar it from escalating) after positive responses, and vice versa. Therefore, UDD random walks have a central tendency: dose assignments tend to meander back and forth around some dose that can be calculated from the transition rules, when those are expressed as a function of $F(x)$. This dose has often been confused with the experiment's formal target $F^{-1}(\Gamma)$, and the two are often identical - but they do not have to be. The target is the dose that the experiment is tasked with estimating, while $x^*$, known as the "balance point", is approximately where the UDD's random walk revolves around. Stationary distribution of dose allocations Since UDD random walks are regular Markov chains, they generate a stationary distribution of dose allocations, $\pi$, once the effect of the manually-chosen starting dose wears off. This means that long-term visit frequencies to the various doses will approximate a steady state described by $\pi$. According to Markov chain theory the starting-dose effect wears off rather quickly, at a geometric rate. Numerical studies suggest that the effect typically wears off nearly completely within a small number of subjects. $\pi$ is also the asymptotic distribution of cumulative dose allocations. UDDs' central tendencies ensure that long-term, the most frequently visited dose (i.e., the mode of $\pi$) will be one of the two doses closest to the balance point $x^*$. If $x^*$ is outside the range of allowed doses, then the mode will be on the boundary dose closest to it. Under the original median-finding UDD, the mode will be at the closest dose to $x^*$ in any case. Away from the mode, asymptotic visit frequencies decrease sharply, at a faster-than-geometric rate. Even though a UDD experiment is still a random walk, long excursions away from the region of interest are very unlikely. Common UDDs Original ("simple" or "classical") UDD The original "simple" or "classical" UDD moves the dose up one level upon a negative response, and vice versa. Therefore, the transition probabilities are
$$p_{m,m+1} = 1 - F(d_m), \qquad p_{m,m-1} = F(d_m).$$
We use the original UDD as an example for calculating the balance point $x^*$.
The design's 'up' and 'down' functions are $p(x) = 1 - F(x)$ and $q(x) = F(x)$. We equate them to find $x^*$:
$$1 - F(x^*) = F(x^*) \;\Longrightarrow\; F(x^*) = \tfrac{1}{2}.$$
The "classical" UDD is designed to find the median threshold. This is a case where $\Gamma = 0.5$, so that the balance point and the target coincide. The "classical" UDD can be seen as a special case of each of the more versatile designs described below. Durham and Flournoy's biased coin design This UDD shifts the balance point, by adding the option of treating the next subject at the same dose rather than move only up or down. Whether to stay is determined by a random toss of a metaphoric "coin" with probability $b$ of landing heads. This biased-coin design (BCD) has two "flavors", one for $\Gamma < 0.5$ and one for $\Gamma > 0.5$; the rules for $\Gamma < 0.5$ are: after a positive response, move one level down; after a negative response, toss the coin, moving one level up upon heads and staying at the same dose upon tails. The heads probability $b$ can take any value in $(0, 1]$. The balance point satisfies
$$F(x^*) = \frac{b}{1 + b}.$$
The BCD balance point can be made identical to a target rate $\Gamma$ by setting the heads probability to $b = \Gamma / (1 - \Gamma)$. For example, for $\Gamma = 0.3$ set $b = 3/7$. Setting $b = 1$ makes this design identical to the classical UDD, and inverting the rules by imposing the coin toss upon positive rather than negative outcomes produces above-median balance points. Versions with two coins, one for each outcome, have also been published, but they do not seem to offer an advantage over the simpler single-coin BCD. Group (cohort) UDDs Some dose-finding experiments, such as phase I trials, require a waiting period of weeks before determining each individual outcome. It may be preferable, then, to be able to treat several subjects at once or in rapid succession. With group UDDs, the transition rules apply to cohorts of fixed size $s$ rather than to individuals. $X_i$ becomes the dose given to cohort $i$, and $Y_i$ is the number of positive responses in the $i$-th cohort, rather than a binary outcome. Given that the $i$-th cohort is treated at $X_i = d_m$ on the interior of $\mathcal{D}$, $Y_i$ is assigned to follow a binomial distribution conditional on $X_i$, with parameters $s$ and $F(d_m)$. The up and down probabilities are the binomial distribution's tails, and the stay probability its center (it is zero if $u = l + 1$). A specific choice of parameters can be abbreviated as GUD$(s, l, u)$: a design treating cohorts of size $s$, escalating when at most $l$ positive responses are observed and de-escalating when at least $u$ are observed. Nominally, group UDDs generate $s$-th-order random walks, since the $s$ most recent observations are needed to determine the next allocation. However, with cohorts viewed as single mathematical entities, these designs generate a first-order random walk having a tri-diagonal TPM as above. Some relevant group UDD subfamilies: Symmetric designs with $u = s - l$ (e.g., GUD$(2, 0, 2)$) target the median. The family GUD$(s, 0, 1)$, encountered in toxicity studies, allows escalation only with zero positive responses, and de-escalates upon any positive response. The escalation probability at $x$ is $\left[1 - F(x)\right]^s$, and since this design does not allow for remaining at the same dose, at the balance point this probability will be exactly $1/2$. Therefore,
$$\left[1 - F(x^*)\right]^s = \frac{1}{2} \;\Longrightarrow\; F(x^*) = 1 - 2^{-1/s}.$$
With $s = 2, 3$, this would be associated with $F(x^*) \approx 0.293$ and $0.206$, respectively. The mirror-image family GUD$(s, s-1, s)$ has its balance points at one minus these probabilities. For general group UDDs, the balance point can be calculated only numerically, by finding the dose $x^*$ with toxicity rate $F(x^*)$ such that
$$\sum_{y=u}^{s} \binom{s}{y} F(x^*)^y \left[1 - F(x^*)\right]^{s-y} \;=\; \sum_{y=0}^{l} \binom{s}{y} F(x^*)^y \left[1 - F(x^*)\right]^{s-y}.$$
Any numerical root-finding algorithm, e.g., Newton–Raphson, can be used to solve for $F(x^*)$; a code sketch follows below. $k$-in-a-row (or "transformed" or "geometric") UDD This is the most commonly used non-median UDD. It was introduced by Wetherill in 1963, and proliferated by him and colleagues shortly thereafter to psychophysics, where it remains one of the standard methods to find sensory thresholds. Wetherill called it "transformed" UDD; Misrak Gezmu, who was the first to analyze its random-walk properties, called it "Geometric" UDD in the 1990s; and in the 2000s the more straightforward name "$k$-in-a-row" UDD was adopted.
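As flagged above, the balance-point equation for a general group UDD is a one-dimensional root-finding problem. A minimal sketch (using bisection via SciPy's brentq rather than Newton–Raphson; the GUD(2, 0, 1) call is just a check case with a known closed form):

```python
from math import comb
from scipy.optimize import brentq

def gud_balance_toxicity(s, l, u):
    """Solve P(Y >= u) = P(Y <= l) for the toxicity rate F at the balance point
    of the group up-and-down design GUD(s, l, u), where Y ~ Binomial(s, F)."""
    pmf = lambda F, y: comb(s, y) * F**y * (1.0 - F) ** (s - y)
    escalate    = lambda F: sum(pmf(F, y) for y in range(0, l + 1))  # lower tail
    de_escalate = lambda F: sum(pmf(F, y) for y in range(u, s + 1))  # upper tail
    return brentq(lambda F: escalate(F) - de_escalate(F), 1e-9, 1.0 - 1e-9)

print(gud_balance_toxicity(2, 0, 1))  # ~0.2929, matching 1 - 2**(-1/2)
```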
The design's rules are deceptively simple: every dose escalation requires $k$ non-toxicities observed on consecutive data points, all at the current dose, while de-escalation only requires a single toxicity. It closely resembles GUD$(k, 0, 1)$ described above, and indeed shares the same balance point, $F(x^*) = 1 - 2^{-1/k}$. The difference is that $k$-in-a-row can bail out of a dose level upon the first toxicity, whereas its group UDD sibling might treat the entire cohort at once, and therefore might see more than one toxicity before descending. The method used in sensory studies is actually the mirror-image of the one defined above, with $k$ successive responses required for a de-escalation and only one non-response for escalation, yielding above-median balance points $F(x^*) = 2^{-1/k}$. $k$-in-a-row generates a $k$-th-order random walk because knowledge of the last $k$ responses might be needed. It can be represented as a first-order chain with $Mk$ states, or as a Markov chain with $M$ levels, each having $k$ internal states labeled $0$ to $k - 1$. The internal state serves as a counter of the number of immediately recent consecutive non-toxicities observed at the current dose. This description is closer to the physical dose-allocation process, because subjects at different internal states of the level $m$ are all assigned the same dose $d_m$. Either way, the TPM is $Mk \times Mk$ (or, more precisely, $\left[(M-1)k + 1\right] \times \left[(M-1)k + 1\right]$, because the internal counter is meaningless at the highest dose) - and it is not tridiagonal. For example, here is the expanded $k$-in-a-row TPM with $k = 2$ and $M = 3$, using the abbreviations $F_m \equiv F(d_m)$ and $q_m \equiv 1 - F_m$. Each level's internal states are adjacent to each other, with the states ordered $(d_1, 0), (d_1, 1), (d_2, 0), (d_2, 1), d_3$:
$$\begin{pmatrix} F_1 & q_1 & 0 & 0 & 0 \\ F_1 & 0 & q_1 & 0 & 0 \\ F_2 & 0 & 0 & q_2 & 0 \\ F_2 & 0 & 0 & 0 & q_2 \\ 0 & 0 & F_3 & 0 & q_3 \end{pmatrix}$$
$k$-in-a-row is often considered for clinical trials targeting a low-toxicity dose. In this case, the balance point and the target are not identical; rather, $k$ is chosen so that the balance point lies close to the target rate, e.g., $k = 2$ for studies targeting the 30th percentile, and $k = 3$ for studies targeting the 20th percentile. Estimating the target dose Unlike other design approaches, UDDs do not have a specific estimation method "bundled in" with the design as a default choice. Historically, the more common choice has been some weighted average of the doses administered, usually excluding the first few doses to mitigate the starting-point bias. This approach antedates deeper understanding of UDDs' Markov properties, but its success in numerical evaluations relies upon the eventual sampling from $\pi$, since the latter is centered roughly around $x^*$. The single most popular among these averaging estimators was introduced by Wetherill et al. in 1966, and only includes reversal points (points where the outcome switches from 0 to 1 or vice versa) in the average. In recent years, the limitations of averaging estimators have come to light, in particular the many sources of bias that are very difficult to mitigate. Reversal estimators suffer from both multiple biases (although there is some inadvertent cancelling out of biases), and increased variance due to using a subsample of doses. However, the knowledge about averaging-estimator limitations has yet to disseminate outside the methodological literature and affect actual practice. By contrast, regression estimators attempt to approximate the curve $F(x)$ describing the dose-response relationship, in particular around the target percentile. The raw data for the regression are the doses on the horizontal axis, and the observed toxicity frequencies, on the vertical axis. The target estimate is the abscissa of the point where the fitted curve crosses the height $\Gamma$. Probit regression has been used for many decades to estimate UDD targets, although far less commonly than the reversal-averaging estimator.
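A reversal-averaging estimator of the Wetherill type can be sketched in a few lines. This is one common variant (averaging the doses at which reversals occurred, discarding the first few reversals); details differ across the literature, and the run below is hypothetical:

```python
def reversal_average(doses, responses, skip=2):
    """Average the doses at reversal points (trials where the binary response
    differs from the previous trial's), skipping the first `skip` reversals
    to mitigate starting-point bias."""
    reversal_doses = [
        doses[i] for i in range(1, len(responses)) if responses[i] != responses[i - 1]
    ]
    used = reversal_doses[skip:]
    return sum(used) / len(used) if used else float("nan")

# Hypothetical short run on an integer dose grid.
doses     = [1, 2, 3, 4, 5, 4, 5, 6, 5, 4, 5, 6, 5]
responses = [0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1]
print(reversal_average(doses, responses))  # a point estimate of the median threshold
```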
In 2002, Stylianou and Flournoy introduced an interpolated version of isotonic regression (IR) to estimate UDD targets and other dose-response data. More recently, a modification called "centered isotonic regression" (CIR) was developed by Oron and Flournoy, promising substantially better estimation performance than ordinary isotonic regression in most cases, and also offering the first viable interval estimator for isotonic regression in general. Isotonic regression estimators appear to be the most compatible with UDDs, because both approaches are nonparametric and relatively robust. The publicly available R package "cir" implements both CIR and IR for dose-finding and other applications. References Design of experiments Statistical process control
Up-and-down design
[ "Engineering" ]
3,050
[ "Statistical process control", "Engineering statistics" ]
59,158,401
https://en.wikipedia.org/wiki/Mikoyan%20MiG-2000
Mikoyan MiG-2000 was a project by Russian Aircraft Corporation MiG for a liquid ramjet single-stage-to-orbit spaceplane, explored in the 1990s. It was envisioned to have a takeoff weight of 300 tons and deliver a 9-ton payload on a 200 km low Earth orbit. The plane lost a competition to the Tupolev Tu-2000. Similar to Rockwell X-30. References External links SPACE TRANSPORT: Tupolev Tu-2000 Hyperplane – Russia Mikoyan aircraft Delta-wing aircraft Scramjet-powered aircraft Spaceplanes Hypersonic aircraft Single-stage-to-orbit Abandoned civil aircraft projects
Mikoyan MiG-2000
[ "Astronomy" ]
129
[ "Astronomy stubs", "Spacecraft stubs" ]
59,158,616
https://en.wikipedia.org/wiki/Li%C3%B1%C3%A1n%27s%20equation
In the study of diffusion flames, Liñán's equation is a second-order nonlinear ordinary differential equation which describes the inner structure of the diffusion flame, first derived by Amable Liñán in 1974. The equation reads as
$$\frac{d^2 y}{dx^2} = \delta\, (y^2 - x^2)\, e^{-(y + \gamma x)},$$
subjected to the boundary conditions
$$\frac{dy}{dx} \to \pm 1 \quad \text{as} \quad x \to \pm\infty,$$
where $\delta$ is the reduced or rescaled Damköhler number and $\gamma$ is the ratio of excess heat conducted to one side of the reaction sheet to the total heat generated in the reaction zone. If $\gamma > 0$, more heat is transported to the oxidizer side, thereby reducing the reaction rate on the oxidizer side (since the reaction rate depends on the temperature) and consequently a greater amount of fuel will be leaked into the oxidizer side. Whereas, if $\gamma < 0$, more heat is transported to the fuel side of the diffusion flame, thereby reducing the reaction rate on the fuel side of the flame and increasing the oxidizer leakage into the fuel side. When $\gamma \to 1$ ($\gamma \to -1$), all the heat is transported to the oxidizer (fuel) side and therefore the flame sustains an extremely large amount of fuel (oxidizer) leakage. The equation is, in some aspects, universal (it is also called the canonical equation of the diffusion flame) since, although Liñán derived the equation for stagnation point flow, assuming unity Lewis numbers for the reactants, the same equation is found to represent the inner structure for general laminar flamelets, having arbitrary Lewis numbers. Existence of solutions Near the extinction of the diffusion flame, $\delta$ is of order unity. The equation has no solution for $\delta < \delta_E$, where $\delta_E$ is the extinction Damköhler number. For $\delta > \delta_E$ near extinction, the equation possesses two solutions, of which one is an unstable solution. The solution is unique for $\delta > \delta_I$, where $\delta_I$ is the ignition Damköhler number. Liñán also gave a correlation formula for the extinction Damköhler number, which is increasingly accurate for $|\gamma| \to 1$:
$$\delta_E = e\left[(1 - |\gamma|) - (1 - |\gamma|)^2 + 0.26 (1 - |\gamma|)^3 + 0.055 (1 - |\gamma|)^4\right].$$
Generalized Liñán's equation The generalized Liñán's equation is given by
$$\frac{d^2 y}{dx^2} = \delta\, (y - x)^m (y + x)^n\, e^{-(y + \gamma x)},$$
where $m$ and $n$ are constant reaction orders of fuel and oxidizer, respectively. Large Damköhler number limit In the Burke–Schumann limit, $\delta \to \infty$, the reaction term must vanish and the solution approaches $y = |x|$. An approximate smoothed solution, expressed in terms of the error function $\operatorname{erf}$, was developed by Liñán himself using an integral method in 1963 for his thesis. Here $x_0$ denotes the location where $y$ reaches its minimum value $y_0$. See also Liñán's diffusion flame theory References Fluid dynamics Combustion Ordinary differential equations
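Assuming the canonical form as reconstructed above (the exact Arrhenius factor should be checked against Liñán's original paper), the boundary-value problem can be integrated numerically on a truncated domain. A sketch with SciPy's collocation solver; the values of δ and γ are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_bvp

delta, gamma = 2.0, 0.2   # illustrative parameter values only
L = 8.0                   # truncation of the infinite domain to [-L, L]

def rhs(x, y):
    # y[0] = y, y[1] = dy/dx; reaction term as in the canonical form above
    return np.vstack([y[1], delta * (y[0]**2 - x**2) * np.exp(-(y[0] + gamma * x))])

def bc(ya, yb):
    # slope boundary conditions dy/dx -> -1 and +1 at the two far ends
    return np.array([ya[1] + 1.0, yb[1] - 1.0])

x = np.linspace(-L, L, 201)
guess = np.vstack([np.sqrt(x**2 + 1.0), x / np.sqrt(x**2 + 1.0)])  # near-Burke-Schumann guess
sol = solve_bvp(rhs, bc, x, guess)
print(sol.status, float(sol.y[0].min()))  # the minimum of y gauges reactant leakage
```

Because two solution branches coexist for δ above the extinction value, which branch the solver lands on depends on the initial guess.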
Liñán's equation
[ "Chemistry", "Engineering" ]
480
[ "Piping", "Chemical engineering", "Combustion", "Fluid dynamics" ]
59,158,681
https://en.wikipedia.org/wiki/Hello%20%28company%29
Hello was an American technology company that sold sleep tracking devices and a sleep tracking application to help monitor sleep. The company was founded in August 2012 and shut down in June 2017. History Hello was founded on August 28, 2012 by Thiel Fellow and CEO James Proud. A successful Kickstarter campaign from Hello in 2014 raised $2.4 million from more than 19,000 backers, making it one of the most successful Kickstarter campaigns of all time. A preliminary financing round from Silicon Valley backers raised another $10.5 million. In June 2015, the company raised $40 million in a financing round from Temasek Holdings and Hello was valued between $250 million and $300 million at the time. Other backers included Facebook Messenger chief David Marcus, Facebook's virtual reality vice president Hugo Barra, Facebook executive Dan Rose, former Twitter CEO Dick Costolo, and Spotify's head of special projects, Shakil Khan. Sense Hello spent over a year developing a wearable sleep tracking device, but the company ultimately decided on creating a bedside device instead because Proud thought customers would be more likely to keep using the product. Hello released their sleep tracker, Sense, on February 24, 2015; 21,000 units were sold during its launch. The Sense product includes a bedside device, a "Sleep Pill" that tracks a user's sleep by clipping onto a pillow, and a sleep tracking app. Sense has sensors that track temperature, light, sound and allergen particle data and can also play sounds to help the user fall asleep. The product gives the user a sleep score every night and wakes the user up at the right point in their sleep cycle. On November 1, 2016, the firm released a voice enabled version of the Sense sleep tracker. Shut down While trying to look for a buyer, Proud announced on June 12, 2017, in a blog post on Medium that Hello would be shutting down and the company laid off most of their employees that same day. Hello was also reportedly unable to pay its bills. The company said they were in talks with Fitbit for them to acquire the company, but failed to reach a deal. According to BBC News and Buzzfeed News, Hello shut down due to a lack of consumer demand for Sense, unenthusiastic reviews for the product, and competition from bigger brands. Although Hello shut down in June 2017, the company's website, Hello.is, remained online until January 2018. References Technology companies established in 2012 Technology companies disestablished in 2017 Technology companies based in the San Francisco Bay Area Defunct manufacturing companies based in the San Francisco Bay Area Sleep
Hello (company)
[ "Biology" ]
535
[ "Behavior", "Sleep" ]
59,159,271
https://en.wikipedia.org/wiki/From%20Fauna
From Fauna, formerly known as the Cellular Agriculture Society, is an international 501(c)(3) organization that has been involved in research, funding, advancement of, and most recently education in, cellular agriculture. It is based in San Francisco, and was founded by Kris Spiros in the early 2010s. In 2023, the Cellular Agriculture Society released Modern Meat, a freely available 600-page textbook which was the first on the subject of cultured meat. They have also created children's books, educational simulations, social science research, and designed the theoretical workings and architecture of a cultured meat facility through Project CMF, which envisions what cultured meat production could look like in 2040. Notes References Biotechnology Cellular agriculture Sustainable food system Synthetic biology
From Fauna
[ "Engineering", "Biology" ]
156
[ "Synthetic biology", "Biological engineering", "Biotechnology", "Bioinformatics", "Molecular genetics", "nan" ]
59,160,736
https://en.wikipedia.org/wiki/Tunapanda%20Institute
Tunapanda Institute (Tunapanda is a Swahili word for "we are growing") is a United States-based non-profit organization operating in East Africa. With the goal of training disadvantaged young people, it offers various free courses in technology, design and entrepreneurship to improve graduates' chances in the labour market. The majority of its work is based in Kibera, a slum in Nairobi, but it has also operated in other parts of Kenya, Tanzania, and Uganda. In 2016, Tunapanda Institute was named as one of the 2016 NT100, Nominet Trust's annual celebration of 100 inspiring tech-for-good ventures from around the world. History and operations Tunapanda was founded in 2013 by the brothers Jay Larson and Mick Larson, with the aim of providing learning opportunities to unconnected communities through the use of an "education on a hard drive". They downloaded openly-licensed software and educational content for computer programming, design, and business/entrepreneurship, as well as general-knowledge content from Wikipedia and Khan Academy, which they then distributed on CDs, external hard drives, and USB drives at various schools and community organizations. Tunapanda then opened its own training facility and developed a three-month intensive training course in technology, design and business which has to date been taught to more than 200 students. In 2022, as smartphone adoption continued to grow rapidly in Kenya, Tunapanda Institute adapted its training curriculum to prioritize a "mobile-first" approach. This shift gave rise to "Mobile LEAP", a 6-week intensive workforce training program designed to prepare the next generation of digital workers through an innovative mobile-based learning approach. The majority of Tunapanda's programs are based in Kibera, a slum in Nairobi, but Tunapanda has set up computer labs and training facilities in urban and rural settings within Kenya, Tanzania, and Uganda. While focused mainly on education and training, Tunapanda also provides graduates with the opportunity to earn an income from working on technology-related projects for clients and partners. TunapandaNET In 2015, TunapandaNET was conceived as a way to provide access to digital learning materials and platforms to young people within Kibera who may not be able to afford regular internet use. The network connects schools and community centers to Tunapanda's learning facility through wireless communication. Since content is hosted locally, the network operates as an intranet and costs considerably less for users to access content than traditional service providers. The network has grown to incorporate ten nodes within Kibera and plans to expand to more by the end of 2019. Kibera Aeronautics and Space Academy At the end of 2018, Tunapanda launched the Kibera Aeronautics and Space Academy (KASA) with the goal of training young people interested in science and technology. The aim of the program is to equip disadvantaged people from Kibera and other surrounding regions with practical skills for life beyond the program. In the long term, a training system will be installed, with which the participants can be prepared for technological professions on the model of the German dual education system and subsequently be transferred to employers. Awards 2016: Named as one of the 2016 NT100, Nominet Trust's annual celebration of 100 inspiring tech-for-good ventures from around the world.
References Community-building organizations Information technology education Information and communication technologies in Africa Non-profit organizations based in the United States
Tunapanda Institute
[ "Technology" ]
704
[ "Information technology", "Information technology education" ]
59,161,036
https://en.wikipedia.org/wiki/Social%20cognitive%20neuroscience
Social cognitive neuroscience is the scientific study of the biological processes underpinning social cognition. Specifically, it uses the tools of neuroscience to study "the mental mechanisms that create, frame, regulate, and respond to our experience of the social world". Social cognitive neuroscience uses the epistemological foundations of cognitive neuroscience, and is closely related to social neuroscience. Social cognitive neuroscience employs human neuroimaging, typically using functional magnetic resonance imaging (fMRI). Human brain stimulation techniques such as transcranial magnetic stimulation and transcranial direct-current stimulation are also used. In nonhuman animals, direct electrophysiological recordings and electrical stimulation of single cells and neuronal populations are utilized for investigating lower-level social cognitive processes. History and methods The first scholarly works about the neural bases of social cognition can be traced back to Phineas Gage, a man who survived a traumatic brain injury in 1848 and was extensively studied for resultant changes in social functioning and personality. In 1924, esteemed psychologist Gordon Allport wrote a chapter on the neural bases of social phenomena in his textbook of social psychology. However, these works did not generate much activity in the decades that followed. The beginning of modern social cognitive neuroscience can be traced to Michael Gazzaniga's book, Social Brain (1985), which attributed cerebral lateralization to the peculiarities of social psychological phenomena. Isolated pockets of social cognitive neuroscience research emerged in the late 1980s to the mid-1990s, mostly using single-unit electrophysiological recordings in nonhuman primates or neuropsychological lesion studies in humans. During this time, the closely related field of social neuroscience emerged in parallel; however, it mostly focused on how social factors influenced autonomic, neuroendocrine, and immune systems. In 1996, Giacomo Rizzolatti's group made one of the most seminal discoveries in social cognitive neuroscience: the existence of mirror neurons in macaque frontoparietal cortex. The mid-1990s saw the emergence of functional positron emission tomography (PET) for humans, which enabled the neuroscientific study of abstract (and perhaps uniquely human) social cognitive functions such as theory of mind and mentalizing. However, PET is prohibitively expensive and requires the ingestion of radioactive tracers, thus limiting its adoption. In the year 2000, the term social cognitive neuroscience was coined by Matthew Lieberman and Kevin Ochsner, who are from social and cognitive psychology backgrounds, respectively. This was done to integrate and brand the isolated labs doing research on the neural bases of social cognition. Also in the year 2000, Elizabeth Phelps and colleagues published the first fMRI study on social cognition, specifically on race evaluations. The adoption of fMRI, a less expensive and noninvasive neuroimaging modality, induced explosive growth in the field. In 2001, the first academic conference on social cognitive neuroscience was held at University of California, Los Angeles. The mid-2000s saw the emergence of academic societies related to the field (Social and Affective Neuroscience Society, Society for Social Neuroscience), as well as peer-reviewed journals specialized for the field (Social Cognitive and Affective Neuroscience, Social Neuroscience).
In the 2000s and beyond, labs conducting social cognitive neuroscience research proliferated throughout Europe, North America, East Asia, Australasia, and South America. Starting in the late 2000s, the field began to expand its methodological repertoire by incorporating other neuroimaging modalities (e.g. electroencephalography, magnetoencephalography, functional near-infrared spectroscopy), advanced computational methods (e.g. multivariate pattern analysis, causal modeling, graph theory), and brain stimulation techniques (e.g. transcranial magnetic stimulation, transcranial direct-current stimulation, deep brain stimulation). Due to the volume and rigor of research in the field, the 2010s saw social cognitive neuroscience achieving mainstream acceptance in the wider fields of neuroscience and psychology. Hyperscanning or inter-brain research is becoming the most frequent approach to studying social cognition. It is thought that exploring the correlation of neuronal activities of two or more brains in shared cognitive tasks can contribute to understanding the relationship between social experiences and neurophysiological processes. Functional anatomy Much of social cognition is primarily subserved by two dissociable macro-scale brain networks: the mirror neuron system (MNS) and default mode network (DMN). MNS is thought to represent and identify observable actions (e.g. reaching for a cup) that are used by DMN to infer unobservable mental states, traits, and intentions (e.g. thirsty). Concordantly, the activation onset of MNS has been shown to precede DMN during social cognition. However, the extent of feedforward, feedback, and recurrent processing within and between MNS and DMN is not yet well-characterized, thus it is difficult to fully dissociate the exact functions of the two networks and their nodes. Mirror neuron system (MNS) Mirror neurons, first discovered in macaque frontoparietal cortex, fire when actions are either performed or observed. In humans, similar sensorimotor "mirroring" responses have been found in the brain regions listed below, which are collectively referred to as MNS. The MNS has been found to identify and represent intentional actions such as facial expressions, body language, and grasping. MNS may encode the concept of an action, not just the sensory and motor information associated with an action. As such, MNS representations have been shown to be invariant of how an action is observed (e.g. sensory modality) and how an action is performed (e.g. left versus right hand, upwards or downwards). MNS has even been found to represent actions that are described in written language. Mechanistic theories of MNS functioning fall broadly into two camps: motor and cognitive theories. Classical motor theories posit that abstract action representations arise from simulating actions within the motor system, while newer cognitive theories propose that abstract action representations arise from the integration of multiple domains of information: perceptual, motor, semantic, and conceptual. Aside from these competing theories, there are more fundamental controversies surrounding the human MNS – even the very existence of mirror neurons in this network is debated. As such, the term "MNS" is sometimes eschewed for more functionally defined names such as "action observation network", "action identification network", and "action representation network". Premotor cortex Mirror neurons were first discovered in macaque premotor cortex. 
The premotor cortex is associated with a diverse array of functions, encompassing low-level motor control, motor planning, sensory guidance of movement, along with higher level cognitive functions such as language processing and action comprehension. The premotor cortex has been found to contain subregions with unique cytoarchitectural properties, the significance of which is not yet fully understood. In humans, sensorimotor mirroring responses are also found throughout premotor cortex and adjacent sections of inferior frontal gyrus and supplementary motor area. Visuospatial information is more prevalent in ventral premotor cortex than dorsal premotor cortex. In humans, sensorimotor mirroring responses extend beyond ventral premotor cortex into adjacent regions of inferior frontal gyrus, including Broca's area, an area that is critical to language processing and speech production. Action representations in inferior frontal gyrus can be evoked by language, such as action verbs, in addition to the observed and performed actions typically used as stimuli in biological motion studies. The overlap between language and action understanding processes in inferior frontal gyrus has spurred some researchers to suggest overlapping neurocomputational mechanisms between the two. Dorsal premotor cortex is strongly associated with motor preparation and guidance, such as representing multiple motor choices and deciding the final selection of action. Intraparietal sulcus Classical studies of action observation have found mirror neurons in macaque intraparietal sulcus. In humans, sensorimotor mirroring responses are centered around the anterior intraparietal sulcus, with responses also seen in adjacent regions such as inferior parietal lobule and superior parietal lobule. Intraparietal sulcus has been shown to be more sensitive to the motor features of biological motion, relative to semantic features. Intraparietal sulcus has been shown to encode magnitude in a domain-general manner, whether it be the magnitude of a motor movement, or the magnitude of a person's social status. Intraparietal sulcus is considered a part of the dorsal visual stream, but is also thought to receive inputs from non-dorsal stream regions such as lateral occipitotemporal cortex and posterior superior temporal sulcus. Lateral occipitotemporal cortex (LOTC) LOTC encompasses lateral regions of the visual cortex such as V5 and extrastriate body area. Though LOTC is typically associated with visual processing, sensorimotor mirroring responses and abstract action representations are reliably found in this region. LOTC includes cortical areas that are sensitive to motion, objects, body parts, kinematics, body postures, observed movements, and semantic content in verbs. LOTC is thought to encode the fine sensorimotor details of an observed action (e.g. local kinematic and perceptual features). LOTC is also thought to bind together the different means by which a specific action can be carried out. Default mode network (DMN) The default mode network (DMN) is thought to process and represent abstract social information, such as mental states, traits, and intentions. Social cognitive functions such as theory of mind, mentalizing, emotion recognition, empathy, moral cognition, and social working memory consistently recruit DMN regions in human neuroimaging studies. Though the functional anatomy of these functions can differ, they often include the core DMN hubs of medial prefrontal cortex, posterior cingulate, and temporoparietal junction.
Aside from social cognition, the DMN is broadly associated with internally directed cognition. The DMN has been found to be involved in memory-related processing (semantic, episodic, prospection), self-related processing (e.g. introspection), and mindwandering. Unlike studies of the mirror neuron system, task-based DMN investigations almost always use human subjects, as DMN-related social cognitive functions are rudimentary or difficult to measure in nonhumans. However, much of DMN activity occurs during rest, as DMN activation and connectivity are quickly engaged and sustained during the absence of goal-directed cognition. As such, the DMN is widely thought to subserve the "default mode" of mammalian brain function. The interrelations between social cognition, rest, and the diverse array of DMN-related functions are not yet well understood and are a topic of active research. Social, non-social, and spontaneous processes in the DMN are thought to share at least some underlying neurocomputational mechanisms with each other.

Medial prefrontal cortex (mPFC)
Medial prefrontal cortex (mPFC) is strongly associated with abstract social cognition such as mentalizing and theory of mind. Mentalizing activates much of the mPFC, but dorsal mPFC appears to be more selective for information about other people, while anterior mPFC may be more selective for information about the self. Ventral regions of mPFC, such as ventromedial prefrontal cortex and medial orbitofrontal cortex, are thought to play a critical role in the affective components of social cognition. For example, ventromedial prefrontal cortex has been found to represent affective information about other people. Ventral mPFC has been shown to be critical in the computation and representation of valence and value for many types of stimuli, not just social stimuli. The mPFC may subserve the most abstract components of social cognition, as it is one of the most domain-general brain regions, sits at the top of the cortical hierarchy, and is last to activate during DMN-related tasks.

Posterior cingulate cortex (PCC)
Abstract social cognition recruits a large area of posteromedial cortex centered around posterior cingulate cortex (PCC), but also extending into precuneus and retrosplenial cortex. The specific function of PCC in social cognition is not yet well characterized, and its role may be generalized and tightly linked with medial prefrontal cortex. One view is that PCC may help represent some visuospatial and semantic components of social cognition. Additionally, PCC may track social dynamics by facilitating bottom-up attention to behaviorally relevant sources of information in the external environment and in memory. Dorsal PCC is also linked to monitoring behaviorally relevant changes in the environment, perhaps aiding in social navigation. Outside of the social domain, PCC is associated with a very diverse array of functions, such as attention, memory, semantics, visual processing, mindwandering, consciousness, cognitive flexibility, and mediating interactions between brain networks.

Temporoparietal junction (TPJ)
The temporoparietal junction (TPJ) is thought to be critical to distinguishing between multiple agents, such as the self and other. The right TPJ is robustly activated by false belief tasks, in which subjects have to distinguish between others' beliefs and their own beliefs in a given situation. The TPJ is also recruited by the wide variety of abstract social cognitive tasks associated with the DMN.
Outside of the social domain, TPJ is associated with a diverse array of functions such as attentional reorienting, target detection, contextual updating, language processing, and episodic memory retrieval. The social and non-social functions of the TPJ may share common neurocomputational mechanisms. For example, the substrates of attentional reorientation in TPJ may be used for reorienting attention between the self and others, and for attributing attention between social agents. Moreover, a common neural encoding mechanism has been found to instantiate social, temporal, and spatial distance in TPJ.

Superior temporal sulcus (STS)
Social tasks recruit areas of lateral temporal cortex centered around superior temporal sulcus (STS), but also extending to superior temporal gyrus, middle temporal gyrus, and the temporal poles. During social cognition, the anterior STS and temporal poles are strongly associated with abstract social cognition and person information, while the posterior STS is most associated with social vision and biological motion processing. The posterior STS is also thought to provide perceptual inputs to the mirror neuron system.

Other regions
There are also several brain regions that fall outside the MNS and DMN which are strongly associated with certain social cognitive functions.

Ventrolateral prefrontal cortex (VLPFC)
The ventrolateral prefrontal cortex (VLPFC) is associated with emotional and inhibitory processing. It has been found to be involved in emotion recognition from facial expressions, body language, prosody, and more. Specifically, it is thought to access semantic representations of emotional constructs during emotion recognition. Moreover, VLPFC is often recruited in empathy, mentalizing, and theory of mind tasks. VLPFC is thought to support the inhibition of self-perspective when thinking about other people.

Insula
The insula is critical to emotional processing and interoception. It has been found to be involved in emotion recognition, empathy, morality, and social pain. The anterior insula is thought to facilitate feeling the emotions of others, especially negative emotions such as vicarious pain. Lesions of the insula are associated with decreased empathy capacity. Anterior insula also activates during social pain, such as the pain caused by social rejection.

Anterior cingulate cortex (ACC)
The anterior cingulate cortex (ACC) is associated with emotional processing and error monitoring. The dorsal ACC appears to share some social cognitive functions with the anterior insula, such as facilitating feeling the emotions of others, especially negative emotions. The dorsal ACC also robustly activates during social pain, like the pain caused by being the victim of an injustice. The dorsal ACC is also associated with social evaluation, such as the detection and appraisal of social exclusion. The subgenual ACC has been found to activate for vicarious reward, and may be involved in prosocial behavior.

Fusiform face area (FFA)
The fusiform face area (FFA) is strongly associated with face processing and perceptual expertise. The FFA has been shown to process the visuospatial features of faces, and may also encode some semantic features of faces.

Notable figures
David Amodio
Mahzarin Banaji
John T. Cacioppo
Jean Decety
Robin Dunbar
Naomi Eisenberger
Uta Frith
Chris Frith
Michael Gazzaniga
Joshua Greene (psychologist)
Todd Heatherton
Matthew Lieberman
Jason Mitchell
Elizabeth A. Phelps
Giacomo Rizzolatti
Rebecca Saxe
Tania Singer

See also
Affective neuroscience
Cognitive psychology
Cognitive neuroscience
Outline of brain mapping
Outline of the human brain
Motor cognition
Neural synchrony
Social Cognitive and Affective Neuroscience (journal)
Social neuroscience
Social Neuroscience (journal)
Social psychology
Systems neuroscience

Further reading
Toga, A. W. (Ed.) (2015). Brain Mapping: An Encyclopedic Reference. Volume 3: Social Cognitive Neuroscience (pp. 1–258). Elsevier.
Lieberman, M. D. (2013). Social: Why Our Brains Are Wired to Connect. New York, NY: Crown Publishers/Random House.
Wittmann, Marco K., Patricia L. Lockwood, and Matthew F. S. Rushworth (2018). "Neural mechanisms of social cognition in primates." Annual Review of Neuroscience. https://doi.org/10.1146/annurev-neuro-080317-061450

References

Behavioral neuroscience
Cognitive neuroscience
Cognitive science
Emergence
Interdisciplinary branches of psychology
Neuropsychology
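As an aside on the hyperscanning approach mentioned in the history section above, the core computation is a correlation of neural time series across two or more brains. The sketch below is a minimal, invented illustration rather than an analysis from any cited study: the signals and noise levels are synthetic, and real pipelines add preprocessing, windowing, and statistical testing against surrogate data.

```python
import numpy as np

# Minimal inter-brain correlation sketch. All names and values here are
# hypothetical; real hyperscanning data come from EEG/fNIRS/fMRI recordings.
rng = np.random.default_rng(0)

shared_drive = rng.standard_normal(1000)                   # hypothetical shared task input
brain_a = shared_drive + 0.8 * rng.standard_normal(1000)   # participant A's signal
brain_b = shared_drive + 0.8 * rng.standard_normal(1000)   # participant B's signal

# Pearson correlation between the two participants' signals; values near 0
# suggest no coupling, values near 1 strong coupling.
r = np.corrcoef(brain_a, brain_b)[0, 1]
print(f"inter-brain correlation: {r:.2f}")
```

In this toy setup the correlation is positive because both signals share a common task-driven component, which is the intuition behind interpreting inter-brain coupling during shared cognitive tasks.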
Social cognitive neuroscience
[ "Biology" ]
3,761
[ "Behavioural sciences", "Behavior", "Behavioral neuroscience" ]
59,161,710
https://en.wikipedia.org/wiki/BIM%20Collaboration%20Format
The BIM Collaboration Format (BCF) is a structured file format suited to issue tracking with a building information model. The BCF is designed primarily for defining views of a building model and associated information on collisions and errors connected with specific objects in the view. The BCF allows users of different BIM software and/or different disciplines to collaborate on issues with the project. The use of the BCF to coordinate changes to a BIM is an important aspect of OpenBIM.

The format was developed by Tekla and Solibri and later adopted as a standard by buildingSMART. Most major BIM modelling software platforms support some integration with BCF, typically through plug-ins provided by the BCF server vendor. Although the BCF was originally conceived as a file-based format, there are now many implementations using the cloud-based collaborative workflow described in the BCF API, including an open-source implementation from the Open Source BIM collective. Research work has been done in Denmark looking into using the BCF for a broader range of information management and exchange in the architecture, engineering and construction (AEC) sector.

Supporting software
There are two main categories of support for the BCF: authoring software and coordination software. Authoring software can generate and share BCF issues. Coordination software specializes in coordinating issues and presenting a user interface for managing and tracking them. Coordination software is typically a web-based service which allows for real-time coordination across multiple authoring software platforms and geographies. Most BIM software has a mix of these functions.

The BCF is supported natively by authoring software such as Vectorworks, ArchiCAD, Tekla Structures, Quadri, DDS CAD, BIMcollab Zoom, BIMsight, Solibri, Navisworks, and Simplebim. Standalone BCF plugins include BCF Manager and BCFier. Coordination software offered as cloud services with BCF-based issue tracking includes BIMcollab, Newforma Konekt, Vrex, Catenda's Bimsync, Bricks app, ACCA software's usBIM.platform, and OpenProject.

See also
Industry Foundation Classes
aecXML
BuildingSMART

References

External links
buildingSMART standards library

Building information modeling
Data modeling
Building engineering
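To make the file-based workflow above concrete, the sketch below lists issue titles from a BCF archive. It assumes the commonly documented BCF 2.x layout — a ZIP archive containing one markup.bcf XML file per topic folder — and the file name in the usage comment is hypothetical; production code should follow the buildingSMART schema for the BCF version actually in use.

```python
import zipfile
import xml.etree.ElementTree as ET

def list_bcf_issues(path):
    """Return (guid, title) pairs for topics found in a BCF archive.

    Assumes the BCF 2.x convention of one markup.bcf per topic folder;
    this is a sketch, not a full schema-validated reader.
    """
    issues = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            if name.endswith("markup.bcf"):
                root = ET.fromstring(archive.read(name))
                for topic in root.iter("Topic"):
                    title = topic.findtext("Title", default="(untitled)")
                    issues.append((topic.get("Guid"), title))
    return issues

# Hypothetical usage:
# for guid, title in list_bcf_issues("coordination_issues.bcfzip"):
#     print(guid, title)
```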
BIM Collaboration Format
[ "Engineering" ]
468
[ "Building engineering", "Data modeling", "Building information modeling", "Data engineering", "Civil engineering", "Architecture" ]
59,162,859
https://en.wikipedia.org/wiki/He%20Jiankui%20genome%20editing%20incident
The He Jiankui genome editing incident is a scientific and bioethical controversy concerning the use of genome editing on humans, following its first such use by Chinese scientist He Jiankui, who edited the genomes of human embryos in 2018. He became widely known on 26 November 2018 after he announced that he had created the first genetically edited human babies. He was listed in Time magazine's 100 most influential people of 2019. The affair led to ethical and legal controversies, resulting in the indictment of He and two of his collaborators, Zhang Renli and Qin Jinzhou. He eventually received widespread international condemnation.

He Jiankui, working at the Southern University of Science and Technology (SUSTech) in Shenzhen, China, started a project to help people with HIV-related fertility problems, specifically involving HIV-positive fathers and HIV-negative mothers. The subjects were offered standard in vitro fertilisation services and, in addition, the use of CRISPR gene editing (CRISPR/Cas9), a technology for modifying DNA. The embryos' genomes were edited to disable the CCR5 gene in an attempt to confer genetic resistance to HIV. The clinical project was conducted secretly until 25 November 2018, when MIT Technology Review broke the story of the human experiment based on information from the Chinese clinical trials registry. Compelled by the situation, He immediately announced the birth of genome-edited babies in a series of five YouTube videos the same day. The first babies, known by their pseudonyms Lulu and Nana, are twin girls born in October 2018; a third baby, named Amy, was born from a second pregnancy in 2019. He reported that the babies were born healthy. His actions received widespread criticism, including concern for the girls' well-being.

After his presentation on the research at the Second International Summit on Human Genome Editing at the University of Hong Kong on 28 November 2018, Chinese authorities suspended his research activities the following day. On 30 December 2019, a Chinese district court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan. Zhang Renli received a two-year prison sentence and a 1-million-yuan fine, and Qin Jinzhou an 18-month prison sentence and a 500,000-yuan fine; both were banned from working in assisted reproductive technology for life. He Jiankui has been variously referred to as a "rogue scientist", "China's Dr Frankenstein", and a "mad genius".

The impact of the gene editing on resistance to HIV infection and on other bodily functions in the experimental infants remains controversial. The World Health Organization has issued three reports on guidelines for human genome editing since 2019, and the Chinese government has prepared regulations since May 2019. In 2020, the National People's Congress of China passed the Civil Code and an amendment to the Criminal Law that prohibit human gene editing and cloning with no exceptions; according to the Criminal Law, violators will be held criminally liable, with a maximum sentence of seven years in prison in serious cases.

Origin
Since 2016, He Jiankui, then an associate professor at the Southern University of Science and Technology (SUSTech) in Shenzhen, together with Zhang Renli and Qin Jinzhou, had been using human embryos to develop gene-editing techniques for assisted reproductive medicine. On 10 June 2017, a Chinese couple, an HIV-positive father and HIV-negative mother, pseudonymously called Mark and Grace, attended a conference held by He at SUSTech.
They were offered in vitro fertilisation (IVF) along with gene-editing of their embryos so as to develop innate resistance to HIV infection in their offspring. They agreed to volunteer through informed consent and the experiment was carried out in secrecy. Six other couples having similar fertility problems were subsequently recruited. The couples were recruited through a Beijing-based AIDS advocacy group called Baihualin China League. When later examined, the consent forms were noted as incomplete and inadequate. The couple were reported to have agreed to this experiment because, under Chinese rules, HIV-positive fathers were normally not allowed to have children using IVF. When the place of the clinical experiment was investigated, SUSTech declared that the university was not involved and that He had been on unpaid leave since February 2018, and his department attested that it was unaware of the research project.

Experiment and birth
He Jiankui, the researcher, took sperm and eggs from the couples, performed in vitro fertilisation, and then edited the genomes of the embryos using CRISPR/Cas9. The editing targeted a gene, CCR5, that codes for a protein that HIV uses to enter cells. He was trying to reproduce the phenotype of a specific mutation of the gene, CCR5-Δ32, that few people naturally have and that possibly confers innate resistance to HIV, as seen in the case of the Berlin Patient. However, rather than introducing the known CCR5-Δ32 mutation, he introduced a frameshift mutation intended to make the CCR5 protein entirely nonfunctional. According to He, Lulu and Nana carried both functional and mutant copies of CCR5, given the mosaicism inherent in the present state of the art in germ-line editing. There are forms of HIV that use a different receptor instead of CCR5; therefore, He's work did not, even in theory, protect Lulu and Nana from those forms of HIV.

He used a preimplantation genetic diagnosis process on the edited embryos, in which three to five single cells were removed and fully sequenced to identify chimerism and off-target errors. He says that during the pregnancy, cell-free fetal DNA was fully sequenced to check for off-target errors, and an amniocentesis was offered to check for problems with the pregnancy, but the mother declined. Lulu and Nana were born in secrecy in October 2018. They were reported by He to be normal and healthy.

Revelation
He Jiankui was planning to reveal his experiments and the birth of Lulu and Nana at the Second International Summit on Human Genome Editing, to be held at the University of Hong Kong during 27–29 November 2018. However, on 25 November 2018, Antonio Regalado, senior editor for biomedicine at MIT Technology Review, posted on the journal's website about the experiment, based on He Jiankui's applications for conducting a clinical trial that had been posted earlier on the Chinese clinical trials registry. At the time, He refused to comment on the conditions of the pregnancy. Prompted by the publicity, He immediately posted about his experiment and the successful birth of the twins on YouTube in five videos the same day. The next day, the Associated Press published the first formal news report, most likely an account prepared before the publicity. His experiment had received no independent confirmation, and had not been peer reviewed or published in a scientific journal.
Soon after He's revelation, the university at which He was previously employed, the Southern University of Science and Technology, stated that He's research was conducted outside of its campus. China's National Health Commission also ordered provincial health officials to investigate his case soon after the experiment was revealed. Amidst the furore, He was allowed to present his research at the Hong Kong meeting on 28 November under the title "CCR5 gene editing in mouse, monkey, and human embryos using CRISPR–Cas9". During the discussion session, He asserted, "Do you see your friends or relatives who may have a disease? They need help," and continued, "For millions of families with inherited disease or infectious disease, if we have this technology we can help them." In his speech, He also mentioned a second pregnancy under the same experiment. No reports disclosed the birth; the baby may have been born around August 2019, and the birth was affirmed on 30 December when the Chinese court returned a verdict mentioning that there were "three genetically-edited babies". The baby was later revealed in 2022 as Amy.

Reactions and aftermath
On the news of Lulu and Nana having been born, the People's Daily announced the experimental result as "a historical breakthrough in the application of gene editing technology for disease prevention." But scientists at the Second International Summit on Human Genome Editing immediately developed serious concerns. Robin Lovell-Badge, head of the Laboratory of Stem Cell Biology and Developmental Genetics at the Francis Crick Institute, who moderated the session on 28 November, recalled that He Jiankui did not mention human embryos in the draft summary of the presentation. Lovell-Badge received an urgent message on 25 November through Jennifer Doudna of the University of California, Berkeley, a pioneer of the CRISPR/Cas9 technology, to whom He had confided the news earlier that morning. As the news had already broken before the day of the presentation, He had to be brought in from his hotel by University of Hong Kong security. Nobel laureate David Baltimore, the chair of the organizing committee of the summit, was the first to react after He's speech, declaring his horror and dismay at the work. He Jiankui did not disclose the parents' names (other than their pseudonyms Mark and Grace), and they did not make themselves available to be interviewed, so their reaction to the experiment and the ensuing controversy is not known.

There was widespread criticism in the media and scientific community over the conduct of the clinical project and its secrecy, and concerns were raised for the long-term well-being of Lulu and Nana. Bioethicist Henry T. Greely of Stanford Law School declared, "I unequivocally condemn the experiment," and later, "He Jiankui's experiment was, amazingly, even worse than I first thought." Kiran Musunuru, one of the experts called on to review He's manuscript, who later wrote a book on the scandal, called it a "historic ethical fiasco, a deeply flawed experiment". On the night of 26 November, 122 Chinese scientists issued a statement criticizing his research. They declared that the experiment was unethical, "crazy", and "a huge blow to the global reputation and development of Chinese science". The Chinese Academy of Medical Sciences issued a condemnation statement on 5 January 2019. A series of investigations was opened by He's university, local authorities, and the Chinese government.
On 26 November 2018, SUSTech released a public notification on its website condemning He's conduct, with the following key points:
The research work was conducted off-campus by Associate Professor He Jiankui without reporting to the university and the Department of Biology, and the university and the Department of Biology were unaware of it.
The Academic Committee of the Department of Biology considers that Associate Professor He Jiankui's use of gene editing technology for human embryo research is a serious violation of academic ethics and academic standards.
SUSTech strictly requires scientific research to comply with national laws and regulations and to respect and abide by international academic ethics and academic norms.
The university will immediately hire authoritative experts to set up an independent committee to conduct an in-depth investigation, and will publish relevant information after the investigation.

On 28 November 2018, the organising committee of the Second International Summit on Human Genome Editing, led by Baltimore, issued a statement, declaring:
At this summit we heard an unexpected and deeply disturbing claim that human embryos had been edited and implanted, resulting in a pregnancy and the birth of twins. We recommend an independent assessment to verify this claim and to ascertain whether the claimed DNA modifications have occurred. Even if the modifications are verified, the procedure was irresponsible and failed to conform with international norms. Its flaws include an inadequate medical indication, a poorly designed study protocol, a failure to meet ethical standards for protecting the welfare of research subjects, and a lack of transparency in the development, review, and conduct of the clinical procedures.

On 29 January 2019, it was learned that the U.S. Nobel laureate Craig Mello had known of He's experiment with gene-edited babies, having been told by He. In February 2019, He's claims were reported to have been confirmed by Chinese investigators, according to NPR News. Around that time, news outlets reported, based on newly uncovered documents, that the Chinese government may have helped fund the CRISPR babies experiment, at least in part.

Consequences
On 29 November 2018, Chinese authorities suspended all of He's research activities, saying his work was "extremely abominable in nature" and a violation of Chinese law. He was sequestered in a university apartment under some form of surveillance. On 21 January 2019, He was fired from his job at SUSTech and his teaching and research work at the university was terminated. The same day, the Guangdong Province administration reported on its investigation of the "gene editing baby incident", finding that He had conducted activities explicitly prohibited by the state. On 30 December 2019, the Shenzhen Nanshan District People's Court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan. Among the collaborators, only two were indicted: Zhang Renli, of the Guangdong Academy of Medical Sciences and Guangdong General Hospital, received a two-year prison sentence and a 1-million RMB fine, and Qin Jinzhou, of the Southern University of Science and Technology, received an 18-month prison sentence and a 500,000 RMB fine. The three were found guilty of having "forged ethical review documents and misled doctors into unknowingly implanting gene-edited embryos into two women." Zhang and Qin were officially banned from working in assisted reproductive technology for life. In April 2022, He was released from prison.
On 26 November 2018, The CRISPR Journal published ahead of print an article by He, Ryan Ferrell, Chen Yuanlin, Qin Jinzhou, and Chen Yangran in which the authors justified the ethical use of CRISPR gene editing in humans. As the news of the CRISPR babies broke, the editors reexamined the paper and retracted it on 28 December, announcing:
[It] has since been widely reported that Dr. He conducted clinical studies involving germline editing of human embryos, resulting in several pregnancies and two alleged live births. This was most likely in violation of accepted bioethical international norms and local regulations. This work was directly relevant to the opinions laid out in the Perspective; the authors' failure to disclose this clinical work manifestly impacted editorial consideration of the manuscript.

Michael W. Deem, an American bioengineering professor at Rice University and He's doctoral advisor, was involved in the research and was present when people involved in the study gave consent. Deem was the only non-Chinese among the 10 authors listed in the manuscript submitted to Nature. Deem came under investigation by Rice University after news of the work was made public. As of 2022, the university had never issued any information on his conduct. Deem resigned from the university in 2020 and founded a bioengineering and energy consulting company called Certus LLC.

Stanford University also investigated faculty members who were He's confidants, including William Hurlbut, Matthew Porteus, and Stephen Quake, He's main mentor in gene editing. The university's review committee concluded that the accused "were not participants in [He Jiankui's] research regarding genome editing of human embryos for intended implantation and birth and that they had no research, financial or organizational ties to this research."

In response to He's work, the World Health Organization formed a committee comprising "a global, multi-disciplinary expert panel", called the Expert Advisory Committee on Developing Global Standards for Governance and Oversight of Human Genome Editing, "to examine the scientific, ethical, social and legal challenges associated with human genome editing (both somatic and germline)" in December 2018. In 2019, it issued a call to halt clinical work on heritable human genome editing, and launched a global registry to track research in the field. It has issued three reports of recommended guidelines on human genome editing since 2019. As of 2021, the committee stood by the position that while somatic gene therapies have become useful for several diseases, germline and heritable human genome editing still carries risks and should be banned.

In May 2019, the Chinese government prepared gene-editing regulations stressing that anyone found manipulating the human genome by genome-editing techniques would be held responsible for any related adverse consequences. The Civil Code of the People's Republic of China was amended in 2020, adding Article 1009, which states: "any medical research activity associated with human gene and human embryo must comply with the relevant laws, administrative regulations and national regulation, must not harm individuals and violate ethical morality and public interest." It was enacted on 1 January 2021.
A draft of the 11th Amendment to the Criminal Law of the People's Republic of China in 2020 incorporated three types of crime: the illegal practice of human gene editing, human embryo cloning, and severe endangerment of the security of human genetic resources, with penalties of imprisonment of up to 7 years and a fine. In December 2021, Vivien Marx reported in a Nature Biotechnology article that both twins were healthy.

Ethical controversies

Ethics of genome manipulation
Genome manipulations can be done at two levels: somatic (the non-reproductive cells of the body) and germline (sex cells and embryos for reproduction). The development of CRISPR gene editing enabled both somatic and germline editing (such as in assisted reproductive technology). There is no prohibition on somatic gene editing, since the practice is generally covered by the available regulations. Prior to He's affair, there was already concern that it was possible to make genetically modified babies, that such experiments would raise ethical issues because their safety and success had not been established by any study, and that genetic enhancement of individuals would become possible. Pioneer gene-editing scientists had warned in 2015 that "genome editing in human embryos using current technologies could have unpredictable effects on future generations. This makes it dangerous and ethically unacceptable. Such research could be exploited for non-therapeutic modifications." As Janet Rossant of the University of Toronto noted in 2018: "It has also raised ethical concerns, particularly with regard to the possibility of generating heritable changes in the human genome – so-called germline gene editing."

In 2017, the National Academies of Sciences, Engineering, and Medicine published a report, "Human Genome Editing: Science, Ethics and Governance", that endorsed germline gene editing in "the absence of reasonable alternatives" for disease management and to "improve IVF procedures and embryo implantation rates and reduce rates of miscarriage." However, the Declaration of Helsinki had stated that early embryo genome-editing for fertility purposes is unethical. The American Society of Human Genetics had declared in 2017 that basic research on in vitro human genome editing of embryos and gametes should be promoted, but that "At this time, given the nature and number of unanswered scientific, ethical, and policy questions, it is inappropriate to perform germline gene editing that culminates in human pregnancy." In July 2018, the Nuffield Council on Bioethics published a policy document titled Genome Editing and Human Reproduction: Social and Ethical Issues in which it advocated human germline editing, saying that it "is not 'morally unacceptable in itself' and could be ethically permissible in certain circumstances" when there are sufficient safety measures. The moral justification created critical debates. The United States National Institutes of Health Somatic Cell Genome Editing Consortium held that it "strictly focused on somatic editing; germline editing is not only excluded as a goal but is also considered to be an unacceptable outcome that should be carefully prevented."

The Chinese law Measures on Administration of Assisted Human Reproduction Technology (2001) prohibits any genetic manipulation of human embryos for reproductive purposes and allows assisted reproductive technology to be performed only by authorized personnel. On 7 March 2017, He Jiankui applied for ethics approval from Shenzhen HarMoniCare Women and Children's Hospital.
In the application, He claimed that the genetically edited babies would be immune to HIV infection, in addition to smallpox and cholera, commenting: "This is going to be a great science and medicine achievement ever since the IVF technology which was awarded the Nobel Prize in 2010, and will also bring hope to numerous genetic disease patients." It was approved and signed by Lin Zhitong, the hospital administrator and one-time Director of Direct Genomics, a company established by He. Upon inquiry, the hospital denied giving such approval. The hospital's spokesperson declared that there were no records of such ethical approval, saying, "[The] gene editing process did not take place at our hospital. The babies were not born here either." It was later confirmed that the approval certificate was forged.

Sheldon Krimsky of Tufts University reported that "[He Jiankui] is not a medical doctor, but rather received his doctorate in biophysics and did postdoctoral studies in gene sequencing; he lacks training in bioethics." However, He was aware of the ethical issues. On 5 November 2018, He and his collaborators submitted a manuscript on ethical guidelines for reproductive genome editing titled "Draft Ethical Principles for Therapeutic Assisted Reproductive Technologies" to The CRISPR Journal. It was published on 26 November, soon after news of the human experiment broke. The journal made an inquiry concerning conflicts of interest, which He had not disclosed. When He offered no justification, the journal retracted the paper with a comment that it "was most likely in violation of accepted bioethical international norms and local regulations."

Although there were no specific laws in China on gene editing in humans, He Jiankui violated the available guideline on handling human embryos. According to the Guidelines for Ethical Principles in Human Embryonic Stem Cell Research (2003) of the Ministry of Science and Technology and the National Health Commission of China:
Research in human embryonic stem cells shall be in compliance with the following behavioral norms: Where blastula is obtained from external fertilization, somatic nucleus transplantation, unisexual duplicating technique or genetic modification, the culture period in vitro shall not exceed 14 days from the day of fecundation or nuclear transplantation.

He Jiankui also attended an important meeting on "The ethics and societal aspects of gene editing" in January 2017, organized by Jennifer Doudna together with William Hurlbut of Stanford University. Upon invitation from Doudna, He presented a topic on "Safety of Human Gene Embryo Editing" and later recalled that "There were very many thorny questions, triggering heated debates, and the smell of gunpowder was in the air."

The consent form of the experiment, titled "Informed Consent", also contains dubious statements. The aim of the study was presented as "an AIDS vaccine development project", even though the study was not about vaccines. The form contains technical jargon that would be incomprehensible to a layperson. One of the more peculiar statements is that if the participants decided to abort the experiment "in the first cycle of IVF until 28 days post-birth of the baby", they would have to "pay back all the costs that the project team has paid for you. If the payment is not received within 10 calendar days from the issuance of the notification of violation by the project team, another 100,000 RMB of fine will be charged." This violates the voluntary nature of the participation.
Medical ethics
CRISPR gene editing technology in humans has the potential to cause profound social impacts, such as in the long-term prevention of diseases. However, He's human experiments raised ethical concerns because their effects on future generations are unknown. Ethical concerns have been raised relative to the four ethical criteria of autonomy, justice, beneficence, and non-maleficence, first postulated by Tom Beauchamp and James Childress in Principles of Biomedical Ethics. The ethical principle of autonomy requires that individuals have the ability and comprehensive information to make their own decisions based on their values and beliefs. He violated this by failing to inform the patients of potential risks, including off-target mutations that might be a threat to the twins' lives. Since He had forged the hospital's ethical approval certificate, the procedure was likely "unlawful", which is against the principle of non-maleficence. Off-target mutations are likely to arise at undesired sites, causing cell death or cell transformation. Sonia Ouagrham-Gormley, an associate professor in the Schar School of Policy and Government at George Mason University, and Kathleen Vogel, a professor in the School for the Future of Innovation in Society at Arizona State University, stated that the procedure was "unnecessary" and "risks the safety of the patients". The researchers criticized He's unethical action by pointing out that the prevention of HIV transmission from parents to newborn babies can be safely achieved with existing standard methods, such as sperm washing and caesarean section delivery.

The principle of justice argues that individuals should have the right to receive the same amount of care from medical providers regardless of their social and economic background. Beneficence requires healthcare providers to maximize benefits and put the benefit of the patients first. He's intervention in the twins' genes cannot be justified, and the risk-benefit ratio is unacceptable. He paid the couple $40,000 to ensure that they would keep his operation confidential. This action can be viewed as an inducement, and it violates China's regulations, which prohibit genetic manipulation of human gametes, zygotes, and embryos for reproductive purposes, do not allow HIV carriers access to assisted reproductive technologies, and permit the manipulation of a human embryo for research only within 14 days. Thus, while genome editing in humans has potential as an effective and cost-efficient method for manipulating genes within living cells, it requires more research and transparent procedures to be ethically justified.

Scientific issues

Effects of mutations
It is an established fact that C-C chemokine receptor type 5 (CCR5) is a protein essential for HIV infection of white blood cells, acting as a co-receptor for HIV. Mutation of the gene CCR5 (called CCR5Δ32 because the mutation is specifically a deletion of 32 base pairs on human chromosome 3) confers resistance to HIV. Resistance is stronger when the mutation is present in two copies (homozygous alleles); with only one copy (heterozygous alleles), the protection is very weak and slow. Not all homozygous individuals are completely resistant. In natural populations, CCR5Δ32 homozygotes are rarer than heterozygotes. In 2007, Timothy Ray Brown (dubbed the Berlin patient) became the first person to be completely cured of HIV infection following a stem cell transplant from a CCR5Δ32 homozygous donor.
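The frameshift versus in-frame distinction that runs through this discussion is simple codon arithmetic: an indel whose net length change is a multiple of three leaves the downstream reading frame intact, while any other net change shifts it. The sketch below applies this rule to the indel sizes mentioned in this section, treating each indel separately (the 4-bp deletion and 1-bp insertion reported below may sit on different alleles); it illustrates the arithmetic only, since real consequences also depend on where an indel falls and what new codons result.

```python
def indel_effect(deleted_bp, inserted_bp=0):
    """Classify an indel: net changes divisible by 3 keep the reading frame.

    Even an in-frame indel alters the protein locally; this function only
    captures whether downstream codons are shifted.
    """
    net = inserted_bp - deleted_bp
    return "in-frame" if net % 3 == 0 else "frameshift"

print(indel_effect(32))                # CCR5-delta-32's 32-bp deletion: frameshift
print(indel_effect(15))                # a 15-bp deletion: in-frame
print(indel_effect(4))                 # a 4-bp deletion: frameshift
print(indel_effect(0, inserted_bp=1))  # a single-base insertion: frameshift
```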
He Jiankui overlooked these facts. Two days after Lulu and Nana were born, their DNA was collected from blood samples of the umbilical cord and placenta. Whole genome sequencing confirmed the mutations. However, available sources indicate that Lulu and Nana carry incomplete CCR5 mutations. Lulu carries a mutant CCR5 with a 15-bp in-frame deletion on only one chromosome 3 (a heterozygous allele), while the other chromosome 3 is normal; Nana carries a homozygous mutant gene with a 4-bp deletion and a single-base insertion. He therefore failed to achieve the complete 32-bp deletion. Moreover, Lulu has only a heterozygous modification, which is not known to prevent HIV infection. Because the babies' mutations are different from the typical CCR5Δ32 mutation, it is not clear whether or not they are prone to infection.

There are also concerns about an adverse effect called off-target mutation in CRISPR/Cas9 editing, and about mosaicism, a condition in which genetically different cells develop in the same embryo. Off-target mutation may cause health hazards, while mosaicism may create HIV-susceptible cells. Fyodor Urnov, Director at the Altius Institute for Biomedical Sciences in Seattle, Washington, asserted that "This [off-target mutation] is a key problem for the entirety of the embryo-editing field, one that the authors sweep under the rug here," and continued, "They [He's team] should have worked and worked and worked until they reduced mosaicism to as close to zero as possible. This failed completely. They forged ahead anyway." He's data on Lulu and Nana's mutation alignments (shown in Sanger chromatograms) showed three modifications, where two would be expected. In Lulu particularly, the mutation is much more complex than He reported. There were three different combinations of alleles: two normal copies of CCR5, one normal copy and one with a 15-bp deletion, and one normal copy and an unknown large insertion. But George Church of Harvard University, in an interview with Science, explained that off-target mutations may not be dangerous, and that there is no need to reduce mosaicism excessively, saying, "There's no evidence of off-target causing problems in animals or cells. We have pigs that have dozens of CRISPR mutations and a mouse strain that has 40 CRISPR sites going off constantly and there are off-target effects in these animals, but we have no evidence of negative consequences." As to mosaicism, he said, "It may never be zero. We don't wait for radiation to be zero before we do positron emission tomography scans or x-rays."

In February 2019, scientists reported that Lulu and Nana may have inadvertently (or perhaps intentionally) had their brains altered, since loss of CCR5 is linked to improved memory function in mice, as well as enhanced recovery from strokes in humans. Although He Jiankui stated during the Second International Summit on Human Genome Editing that he was against using genome editing for enhancement, he also acknowledged that he was aware of the studies linking CCR5 to enhanced memory function. In June 2019, researchers incorrectly suggested that the purportedly genetically edited humans may have been mutated in a way that shortens life expectancy.
Rasmus Nielsen and Wei Xinzhu, both at the University of California, Berkeley, reported in Nature Medicine an analysis of the longevity of 409,693 individuals in the UK Biobank database, concluding that individuals with two copies of the CCR5Δ32 mutation (homozygotes) were about 20% more likely than the rest of the population to die before they were 76 years of age. The research finding was widely publicized in the popular and scientific media. However, the article overlooked sampling bias in the UK Biobank's data, resulting in an erroneous interpretation, and was retracted four months later.

Rejections from peer-reviewed journals
Scientific works are normally published in peer-reviewed journals, but He failed to do so regarding the birth of the gene-edited babies. This was one of the grounds on which He was criticized. It was later reported that He did submit two manuscripts, to Nature and the Journal of the American Medical Association, which were both rejected, mainly on ethical grounds. He's first manuscript, titled "Birth of Twins After Genome Editing for HIV Resistance", was submitted to Nature on 19 November. He shared copies of the manuscript with the Associated Press, which he further allowed to document his work. In an interview, Hurlbut opined that the condemnation of He's work would have been less harsh if the study had been published, and said, "If it had been published, the publishing process itself would have brought a level of credibility because of the normal scrutiny involved; the data analysis would have been vetted." The scientific manuscripts of He were revealed when an anonymous source sent them to the MIT Technology Review, which reported on them on 3 December 2019.

Related research
The first successful gene-editing experiment on CCR5 in humans was reported in 2014. A team of researchers at the University of Pennsylvania, Philadelphia, the Albert Einstein College of Medicine, New York, and Sangamo BioSciences, California, reported that they had modified CCR5 in blood cells (CD4 T cells) using zinc-finger nucleases and infused the edited cells into 12 individuals with HIV. After complete treatment, the patients showed decreased viral load, and in one, HIV disappeared. The result was published in The New England Journal of Medicine.

Chinese scientists had successfully used CRISPR editing to create mutant mice and rats since 2013. The next year, they reported a successful experiment in monkeys involving the removal of two key genes (PPAR-γ and RAG1) that play roles in cell growth and cancer development. One of the leading researchers, Yuyu Niu, later collaborated with He Jiankui in 2017 to test the CRISPR editing of CCR5 in monkeys, but the outcome was not fully assessed or published. Niu later commented that they "had no idea he was going to do this in a human being." In 2018, his team reported inducing a mutation that produces muscular dystrophy in monkeys, while another independent Chinese team simultaneously reported inducing growth retardation using CRISPR editing. In January 2019, scientists at the Chinese Academy of Sciences reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that had been used to create the first cloned primates, Zhong Zhong and Hua Hua, in 2018 and, much earlier, Dolly the sheep. The mutant monkeys and clones were made for understanding several medical diseases, not for disease resistance.

The first clinical trials of CRISPR-Cas9 for the treatment of genetic blood disorders were started in August 2018.
The study was jointly conducted by CRISPR Therapeutics, a Swiss-based company, and Vertex Pharmaceuticals, headquartered in Boston. The result was first announced on 19 November 2019, stating that the first two patients, one with β-thalassemia and the other with sickle cell disease, had been treated successfully. Under the same project, a parallel study on 6 individuals with sickle cell disease was also conducted at Harvard Medical School, Boston. In both studies, BCL11A, a gene involved in blood cell formation, was modified in bone marrow extracted from the individuals. The two studies were published simultaneously in The New England Journal of Medicine on 21 January 2021. The individuals no longer experienced the symptoms or needed the blood transfusions normally required in such diseases, but the method is arduous and poses a high risk of infection around the bone marrow procedure, leading David Rees at King's College Hospital to comment, "Scientifically, these studies are quite exciting. But it's hard to see this being a mainstream treatment in the long term."

In June 2019, Denis Rebrikov at the Kulakov National Medical Research Center for Obstetrics, Gynecology and Perinatology in Moscow announced through Nature that he was planning to repeat He's experiment once he got official approval from the Russian Ministry of Health and other authorities. Rebrikov asserted that he would use a safer and better method than He's, saying, "I think I'm crazy enough to do it." In a subsequent report on 17 October, Rebrikov said that he had been approached by a deaf couple for help. He had already started in vitro experiments using CRISPR to repair GJB2, a gene whose mutations cause deafness.

In 2019, the Abramson Cancer Center of the University of Pennsylvania in the US announced the use of CRISPR technology to edit genes in human immune cells for cancer treatment, and reported the results of the phase I clinical trial in 2020. The study started in 2018 with an official registration in the US clinical trials registry. The report in the journal Science indicates that three individuals in their 60s with advanced refractory cancer, two of them with a blood cancer (multiple myeloma) and one with a tissue cancer (sarcoma), were treated with their own T cells after CRISPR editing. The experiment was based on engineered T-cell therapy: T cells obtained from the individuals had three genes removed (two encoding the endogenous T-cell receptor chains, and PDCD1, which encodes PD-1) and received a transgenic receptor targeting NY-ESO-1, a cancer antigen produced from the gene CTAG1B. When the edited cells were introduced back into the individuals, they attacked the cancer cells carrying the antigen. Although the results were acclaimed as the first "success of gene editing and cell function" in cancer research and "an important milestone in the development and clinical application of gene-edited effector cell therapy," the treatment was far from curing the diseases. One individual died after the clinical trial, and the other two had recurrent cancer.

A similar clinical trial was reported in 2020 in Nature Medicine by a team of Chinese scientists at Sichuan University and their collaborators. Here they removed only one gene (PDCD1, which produces the protein PD-1) in T cells from 12 individuals with late-stage lung cancer. The treatment was found to be safe and feasible. However, the edited T cells were not fully efficient and disappeared in most individuals, indicating that the treatments were not completely successful.
See also
Designer baby
Human Nature (2019 CRISPR documentary film)
Hwang affair
Unnatural Selection (2019 TV documentary)
Bioethics

References

External links
He's presentation and subsequent panel discussion at the Second International Summit on Human Genome Editing, 27 November 2018, via Bloomberg Asia's Facebook page

2018 controversies
2018 in biology
2018 in China
Biology controversies
Genome editing
History of HIV/AIDS
Identical twins
He, Jiankui
Medical scandals in China
Science and technology in China
He Jiankui genome editing incident
[ "Chemistry", "Engineering", "Biology" ]
7,668
[ "Genetics techniques", "Cell biology", "Genome editing", "Genetic engineering", "Molecular biology", "Biochemistry" ]
59,164,122
https://en.wikipedia.org/wiki/NGC%202336
NGC 2336 is a barred spiral galaxy located in the constellation Camelopardalis. It is located at a distance of circa 100 million light years from Earth, which, given its apparent dimensions, means that NGC 2336 is about 200,000 light years across. It was discovered by Wilhelm Tempel in 1876.

Characteristics
NGC 2336 is a barred spiral galaxy featuring a small optical bar. At least 8 spiral arms, with numerous HII regions, emanate from the ring-like structure around the bar. This ring has a radius of approximately 34 arcseconds, which corresponds to 5.3 kpc at the distance of NGC 2336. Twenty-eight HII regions that may host young massive star clusters have been observed in the large arms of the galaxy, and for two of them the nebular emission comprises most of the flux. Three of these HII areas have ages calculated to be 100 to 300 million years and have sizes between 300 and 600 parsecs. It is suggested they are star complexes that may coexist with younger ones. The most massive of the HII regions, number 13, has a mass estimated to be and is across. Observations in the ultraviolet showed 78 star-forming regions, two of them between the spiral arms and six at the galaxy ring. Their sizes are comparable to that of NGC 604, one of the largest nebulae in the Local Group. Star formation is more intense in the inner parts of the arms and at the ring.

Scattered dust lanes which do not fit into a spiral structure have been observed in the nuclear region of the galaxy. No emission from the nucleus of NGC 2336 has been detected in radio, HI, or Hα imaging. The nucleus is small, with an apparent diameter of 5 arcseconds, while the bulge is large, with a radius of 17 arcseconds. In the centre of NGC 2336 lies a supermassive black hole whose mass is estimated to be 30 million solar masses (10^7.5), based on the Ks-band bulge luminosity.

Supernova
One supernova has been observed in NGC 2336. SN 1987L (type Ia, mag. 14.2) was discovered by American amateur astronomer James Dana Patchick on 16 August 1987. He used a home-built 17.5" Dobsonian reflecting telescope for the visual discovery. The supernova was found as part of a team effort known as 'SUNSEARCH', started by Steve H. Lucas. Spectroscopy performed with the William Herschel Telescope on 20–21 October 1987 concluded that it was a type Ia supernova that had passed its maximum approximately 100 days earlier. [Note: some sources incorrectly list this supernova as type II.]

Nearby galaxies
NGC 2336 is the foremost galaxy of a small galaxy group known as the NGC 2336 group. It forms a non-interacting pair with IC 467, which lies 20 arcminutes away.

Gallery

See also
List of NGC objects (2001–3000)

References

External links

Intermediate spiral galaxies
Camelopardalis
2336
03809
021033
Astronomical objects discovered in 1876
Discoveries by Wilhelm Tempel
07184+8016
+13-06-006
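As a check on the angular-to-physical conversion quoted above, the small-angle relation s = d·θ reproduces the ring radius from the article's approximate distance. The sketch below uses rounded values, so the result is likewise approximate.

```python
import math

distance_mpc = 30.7            # ~100 million light years expressed in megaparsecs
ring_radius_arcsec = 34.0      # apparent ring radius from the article

theta_rad = ring_radius_arcsec * math.pi / (180 * 3600)  # arcsec -> radians
radius_kpc = distance_mpc * 1000 * theta_rad             # small-angle: s = d * theta

print(f"ring radius ~ {radius_kpc:.1f} kpc")  # ~5.1 kpc, close to the quoted 5.3 kpc
```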
NGC 2336
[ "Astronomy" ]
644
[ "Camelopardalis", "Constellations" ]
59,164,935
https://en.wikipedia.org/wiki/Tidyverse
The tidyverse is a collection of open source packages for the R programming language introduced by Hadley Wickham and his team that "share an underlying design philosophy, grammar, and data structures" of tidy data. Characteristic features of tidyverse packages include extensive use of non-standard evaluation and encouraging piping.

As of November 2018, the tidyverse package and some of its individual packages comprise 5 out of the top 10 most downloaded R packages. The tidyverse is the subject of multiple books and papers. In 2019, the ecosystem was described in a paper in the Journal of Open Source Software.

Its syntax has been referred to as "supremely readable", and some have argued that the tidyverse is an effective way to introduce complete beginners to programming, as pedagogically it allows students to quickly begin doing data processing tasks. Moreover, some practitioners have pointed out that data processing tasks are intuitively easier to chain together with the tidyverse than with Python's equivalent data processing package, pandas. There is also an active R community around the tidyverse. For example, the TidyTuesday social data project, organised by the Data Science Learning Community (DSLC), releases varied real-world datasets each week for the community to work with, share results, practice, and make learning to work with data easier. Critics of the tidyverse have argued it promotes tools that are harder to teach and learn than their built-in, base R equivalents and are too dissimilar to some programming languages. More generally, the tidyverse principles encourage a streamlined universe of packages that, in principle, helps alleviate dependency issues and ensures compatibility with current and future features. An example of such a tidyverse-principled approach is the pharmaverse, a collection of R packages for clinical reporting in the pharmaceutical industry.

Packages
The core tidyverse packages, which provide functionality to model, transform, and visualize data, include:
ggplot2 – for data visualization
dplyr – for wrangling and transforming data
tidyr – helps transform data into tidy data, where each variable is a column, each observation is a row, and each value is a cell
readr – helps read in common delimited text files
purrr – a functional programming toolkit
tibble – a modern implementation of the built-in data frame data structure
stringr – helps to manipulate string data
forcats – helps to manipulate categorical (factor) data

Additional packages assist the core collection. Other packages based on the tidy data principles are regularly developed, such as tidytext for text analysis, tidymodels for machine learning, and tidyquant for financial operations.

References

Data analysis software
Statistical software
Free R (programming language) software
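Because the article compares the tidyverse with pandas, a rough pandas analogue of a small tidyr/dplyr pipeline may help make the "tidy data" idea concrete. The table and column names below are invented for illustration, and the pandas method chain only approximates the tidyverse idiom (melt standing in for tidyr's pivot_longer, and chained methods for dplyr's pipe).

```python
import pandas as pd

# A wide table: one column per year, which is not tidy.
wide = pd.DataFrame({
    "country": ["A", "B"],
    "cases_2019": [100, 250],
    "cases_2020": [140, 300],
})

# Reshape to tidy/long form (one row per observation) and chain transformations.
tidy = (
    wide.melt(id_vars="country", var_name="year", value_name="cases")
        .assign(year=lambda d: d["year"].str.replace("cases_", "", regex=False).astype(int))
        .sort_values(["country", "year"])
)
print(tidy)
```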
Tidyverse
[ "Mathematics" ]
567
[ "Statistical software", "Mathematical software" ]
59,165,139
https://en.wikipedia.org/wiki/Theasinensin%20A
Theasinensin A is a polyphenol flavonoid from black tea (Camellia sinensis), created during fermentation by oxidation of epigallocatechin gallate. Its atropisomer is theasinensin D.

See also
Theasinensin B
Theasinensin C
Theasinensin D
Theasinensin E
Theasinensin F
Theasinensin G

References

Flavanols
Polyphenols
Biphenyls
Theasinensin A
[ "Chemistry" ]
100
[]
59,165,535
https://en.wikipedia.org/wiki/Theasinensin%20B
Theasinensin B is a polyphenol flavonoid from black tea (Thea sinensis).

See also
Theasinensin A
Theasinensin C

References

Flavanols
Polyphenols
Biphenyls
Theasinensin B
[ "Chemistry" ]
52
[]
59,165,818
https://en.wikipedia.org/wiki/Theasinensin%20C
Theasinensin C is a polyphenol flavonoid from black tea (Thea sinensis).

See also
Theasinensin A
Theasinensin B

References

Flavanols
Polyphenols
Biphenyls
Theasinensin C
[ "Chemistry" ]
52
[]
59,166,058
https://en.wikipedia.org/wiki/Margaret%20Flamsteed
Margaret Flamsteed (née Cooke) (c. 1670–1730) is the first woman on record to be associated with astronomy in Britain. She was married to John Flamsteed, the Astronomical Observer (a post that later became known as Astronomer Royal). After John Flamsteed's death she oversaw publication of both of his most famous works: Historia Coelestis Britannica in 1725 and Atlas Coelestis in 1729. Without her, neither of these two important works would have been published. Margaret appeared as a character in a play by Kevin Hood called The Astronomer's Garden.

Early life
The daughter of a London lawyer, she was a well-educated woman, both literate and numerate.

Life with the Astronomer Royal
Margaret Flamsteed was 22 years old when she married the 46-year-old John Flamsteed; they were married for 27 years. Notebooks in her handwriting, dating from soon after the marriage, show a competency in, and willingness to learn, mathematics and astronomy. One entry in John Flamsteed's notes states that the observation was done "solus cum sponsa" (alone with wife). This, and other clues, suggest that while Margaret was not a regular assistant, she was clearly able and willing to assist her husband in his nighttime observations. She also spent daylight hours copying or writing letters for her husband, especially later when his hand became shaky. Margaret Flamsteed also acted as housekeeper for John, ensuring that his assistants, pupils, and visitors were cared for.

After John Flamsteed's death
After John Flamsteed's death in 1719, Margaret oversaw the publication of both the Historia Coelestis Britannica in 1725 and the Atlas Coelestis in 1729. Margaret was assisted by Joseph Crosthwait and Abraham Sharp, two of Flamsteed's assistants. The publishing of these two great works was an expensive process, and one she had to complete while dealing with the complicated fallout of her husband's estate, and with diminished funds, as many of her savings were lost in the collapse of the South Sea Bubble of 1720. Margaret Flamsteed died aged 60, only one year after publication of the Atlas Coelestis.

References

17th-century English astronomers
18th-century British astronomers
Women astronomers
18th-century British women scientists
Margaret Flamsteed
[ "Astronomy" ]
481
[ "Women astronomers", "Astronomers" ]
59,166,124
https://en.wikipedia.org/wiki/Theasinensin%20D
Theasinensin D is a polyphenol flavonoid found in oolong tea. It is an atropisomer of theasinensin A. References Flavanols Polyphenols Biphenyls
Theasinensin D
[ "Chemistry" ]
52
[]
59,166,350
https://en.wikipedia.org/wiki/Theasinensin%20E
Theasinensin E is a polyphenol flavonoid found in oolong tea. It is an atropisomer of theasinensin C. References Flavanols Polyphenols Biphenyls
Theasinensin E
[ "Chemistry" ]
52
[]
59,166,384
https://en.wikipedia.org/wiki/Wendy%20Flavell
Wendy Ruth Flavell (born 1 September 1961) is Vice Dean for Research and a Professor of Surface Physics in the School of Physics and Astronomy at the University of Manchester. Her research investigates the electronic structure of complex metal oxides and chalcogenides, photoemission, and photovoltaics. Education and early life Flavell was born in Bilston to Maurice and June Flavell. She was educated at Wolverhampton Girls' High School and studied physics (Bachelor of Arts) at the University of Oxford, followed by a Doctor of Philosophy degree in 1986. Her doctoral research investigated electron spectroscopy of metal oxides and was supervised by P.A. Cox. Career and research Flavell joined Imperial College London as a Royal Society University Research Fellow. In 1990 Flavell joined the University of Manchester Institute of Science and Technology (UMIST) in the Department of Chemistry. In 1998 Flavell became the sixth woman in the United Kingdom to be appointed Professor of Physics. She launched a scheme to promote women in science. She was part of the strategy group that designed the 4GLS at Daresbury Laboratory in 2004. She is a member of the University of Manchester Living Lab. Flavell is interested in using nanoparticles and quantum dots for efficient fuel cells and in new materials for photovoltaics. She works on scanning tunnelling microscopy (STM), near-edge X-ray absorption fine structure (NEXAFS) and extended X-ray absorption fine structure. She has studied titanium dioxide and tin(IV) oxide. She is interested in the surface reactivity of nanocrystals and the dynamics of charge carriers in solar cells. She attempts to understand how solar cells age at the surface, in an effort to design passivation strategies. Flavell demonstrated that cadmium telluride quantum dots can have near-unity quantum yields. In 2014 she served as deputy chair of the physics panel of the Research Excellence Framework (REF). She served on the Council of the Institute of Physics in 2017 and on the Newton International Fellowship committee for the Royal Society. Her research has been funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Science and Technology Facilities Council (STFC). Public engagement In 2011 Flavell's research group demonstrated their work on quantum dots at the Royal Society Summer Exhibition. She has delivered a Pint of Science talk and discussed the photon on In Our Time in 2015. References 1961 births Living people Women materials scientists and engineers People educated at Wolverhampton Girls' High School British materials scientists Alumni of the University of Oxford Fellows of the Institute of Physics Academics of the University of Manchester Academics of the University of Manchester Institute of Science and Technology Academics of Imperial College London
Wendy Flavell
[ "Materials_science", "Technology" ]
543
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]
59,166,472
https://en.wikipedia.org/wiki/Theasinensin%20F
Theasinensin F is a polyphenol flavonoid found in oolong tea. It is a deoxy derivative of theasinensin A. References Flavanols Polyphenols Biphenyls
Theasinensin F
[ "Chemistry" ]
52
[]
59,166,594
https://en.wikipedia.org/wiki/Theasinensin%20G
Theasinensin G is a polyphenol flavonoid found in oolong tea. It is a deoxy derivative of theasinensin D and an atropisomer of theasinensin F. References Flavanols Polyphenols Biphenyls
Theasinensin G
[ "Chemistry" ]
62
[]
59,167,093
https://en.wikipedia.org/wiki/Lasso%20%28video%20sharing%20app%29
Lasso was a short-video sharing app by Facebook. Lasso was launched on iOS and Android and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would shut down on July 10, and the app has since been discontinued. Its short-video format was subsequently carried over to Instagram and relaunched as Instagram Reels. Features The application connected with Facebook, among other social media platforms, and allowed users to upload their short videos with music and find trending videos via Lasso's hashtag tracking. Competition Lasso competed directly with the short-video sharing app TikTok, which merged with Musical.ly in August 2018. References Social media Meta Platforms applications
Lasso (video sharing app)
[ "Technology" ]
142
[ "Mobile software stubs", "Mobile technology stubs", "Computing and society", "Social media" ]
64,159,451
https://en.wikipedia.org/wiki/Ocean%20Biodiversity%20Information%20System
The Ocean Biodiversity Information System (OBIS), formerly Ocean Biogeographic Information System, is a web-based access point to information about the distribution and abundance of living species in the ocean. It was developed as the information management component of the ten-year Census of Marine Life (CoML) (2001–2010), but is not limited to CoML-derived data, and aims to provide an integrated view of all marine biodiversity data that may be made available to it on an open access basis by respective data custodians. According to its web site as at July 2018, OBIS "is a global open-access data and information clearing-house on marine biodiversity for science, conservation and sustainable development." Eight specific objectives are listed on the OBIS site, of which the leading item is to "Provide [the] world's largest scientific knowledge base on the diversity, distribution and abundance of all marine organisms in an integrated and standardized format". History and current status Initial ideas for OBIS were developed at a CoML meeting on benthic (bottom-dwelling) ocean life in October 1997. Recommendations from this workshop led to a web site (http://marine.rutgers.edu/OBIS) at Rutgers in 1998 to demonstrate the initial OBIS concept. An inaugural OBIS International Workshop was held on November 3–4, 1999 in Washington, DC, which led to scoping of the project and outreach to potential partners, with selected contributions published in a special issue of Oceanography magazine, within which OBIS founder Dr. J. F. Grassle articulated the vision of OBIS as "an on-line, worldwide atlas for accessing, modeling and mapping marine biological data in a multidimensional geographic context." In May 2000, US Government Agencies in the National Oceanographic Partnership Program together with the Alfred P. Sloan Foundation funded eight research projects to initiate OBIS. In May 2001, the US National Science Foundation funded Rutgers University to develop a global portal for OBIS. Also in 2001, an OBIS International Committee was formed and its first meeting was held in August 2001. The production version of the OBIS Portal was launched at Rutgers University in 2002 as the web site http://www.iobis.org, serving 430,000 species-based georeferenced data records from eight partner databases including fish records from FishBase, cephalopods from CephBase, corals from Biogeoinformatics of Hexacorals, mollusks from the Indo-Pacific Mollusc Database and more. By May 2006, the OBIS Portal was able to access 9.5 million records of 59,000 species from 112 databases, and by December 2010 (at the conclusion of the Census of Marine Life) provided access to 27.7 million records representing 167,000 taxon names. As at July 2018, the OBIS website states that the system provides access to over 45 million observations of nearly 120,000 marine species (the reduced number of names cited being a result of synonym resolution, i.e. reduction of taxa recorded under multiple names to a single accepted name), based on contributions from 500 institutions from 56 countries.
In 2009 OBIS was adopted as a project by the International Oceanographic Data and Information Exchange (IODE) programme of the Intergovernmental Oceanographic Commission (IOC) of UNESCO. In 2011, with the cessation of funding for the Rutgers-based secretariat and portal from the Sloan Foundation, an offer of hosting by the Flanders Marine Institute (VLIZ) in Ostend, Belgium was accepted, making VLIZ the long-term host for the system; the OBIS secretariat also moved from Rutgers University to the IOC Project Office for IODE in Ostend, from where OBIS is presently maintained and additional development is carried out. The web address changed to . OBIS is thus now located in Ostend, in the same building which is also home to VLIZ. VLIZ maintains two taxonomic databases, the World Register of Marine Species and IRMNG, the Interim Register of Marine and Nonmarine Genera, both of which feed into taxonomic decisions used to control the display of species-based information in OBIS and also provide the taxonomic hierarchy via which OBIS content can be navigated. OBIS is currently under the direction of IODE with advice from a steering group, the IODE Steering Group for OBIS (SG-OBIS); operational activities are directed by an OBIS Executive Committee (OBIS-EC) with support from various OBIS Task Teams and ad-hoc OBIS project teams. The OBIS secretariat, hosted at the UNESCO/IOC project office for IODE in Ostend (Belgium), includes the OBIS project manager and data manager and, in addition to maintaining the OBIS system, provides training and technical assistance to its data providers, guides new data standards and technical developments, and encourages international cooperation to foster the group benefits of the network. Data available via OBIS cover all groups of organisms that have any association with marine or estuarine habitats, also including shorelines and the atmosphere above the ocean, such as marine vertebrates (fishes, marine mammals, turtles, seabirds, etc.); marine invertebrates (including zooplankton); marine bacteria; and marine plants (e.g. phytoplankton, seaweeds, mangroves). OBIS portal As available web technologies have developed, the OBIS Portal has been through a number of iterations since its inception in 2002. Initially the system retrieved remote data in real time in response to a user query and used the KGS Mapper to visualize the results. In 2004, centralized metadata indexing and caching was introduced, leading to faster and more reliable results, and the c-squares mapper was added to the options for data visualization. In 2010, a full web GIS based system was introduced for the first time along with a new version of the web site, which resulted in considerably more detailed and flexible presentation of search results along with a number of new search options. In April 2018, funding was announced to develop a new "2.0" version of OBIS with improved capabilities, which was released on 29 January 2019. The website URL changed from iobis.org to obis.org. Regional OBIS nodes Over the period 2004–present, an international network of Regional OBIS Nodes has also been established; these nodes facilitate the connection of data sources in their region to the master OBIS data network and also increasingly provide specialised services or views of OBIS data to users in their particular region.
Antarctic OBIS: hosted by the Belgian Biodiversity Platform, Brussels, and the Flanders Marine Institute, Ostend; managed by Bruno Danis.
Argentina: hosted by Centro Nacional Patagonico (CENPAT) - CONICET; managed by Mirtha Lewis.
Australia: hosted by the Commonwealth Scientific and Industrial Research Organisation - Oceans and Atmosphere; managed by Dave Watts.
Canada: hosted by the Centre of Marine Biodiversity and the Bedford Institute of Oceanography; managed by Bob Branton.
China: hosted by the Institute of Oceanology; managed by Sun Xiaoxia.
Europe: hosted by the Vlaams Instituut voor de Zee; managed by Ward Appeltans.
Indian Ocean: hosted by the National Chemical Laboratory and the National Institute of Oceanography; managed by Baba Ingole.
Japan: hosted by the National Institute for Environmental Studies; managed by Junko Shimura.
New Zealand / South West Pacific: hosted by the National Institute of Water & Atmospheric Research; managed by Don Robertson.
Sub-Saharan Africa: hosted by the Southern African Data Centre for Oceanography; managed by Marten Grundlingh.
Philippines: hosted by the ASEAN Centre for Biodiversity; managed by Christian Elloran.
Tropical and Subtropical Eastern South Pacific: hosted by the University of Concepcion; managed by Ruben Escribando.
Tropical and Subtropical Western South Atlantic: hosted by the University of Sao Paulo (USP) and the Reference Center on Environmental Information (CRIA); managed by Fabio Lang Da Silvera.
United States of America: hosted by the United States Geological Survey (USGS); managed by Abby Benson.
See also
Census of Marine Life
World Register of Marine Species (WoRMS)
Selected publications about OBIS
Grassle, J.F. and Stocks, K.I., 1999. A Global Ocean Biogeographic Information System (OBIS) for the Census of Marine Life. Oceanography 12(3), pp. 12-14.
Grassle, J.F. 2000. The Ocean Biogeographic Information System (OBIS): an on-line, worldwide atlas for accessing, modeling and mapping marine biological data in a multidimensional geographic context. Oceanography 13(3), pp. 5-7.
Zhang, Y. and Grassle, J.F. 2003. A portal for the Ocean Biogeographic Information System. Oceanologica Acta 25(5), pp. 193-197.
Mark J. Costello, J. Frederick Grassle, Yunqing Zhang, Karen Stocks, and Edward Vanden Berghe 2005. Where is what and what is where? Online mapping of marine species. MarBEF Newsletter, Spring 2005, pp. 20–22.
Wood J.B., Zhang, P.Y., Costello, M.J. and Grassle, J.F. 2006. An introduction to OBIS, www.iobis.org. In: Miloslavich P. and Klein E. (eds), Caribbean marine biodiversity: the known and the unknown. DEStech Publications Inc., Lancaster, Pennsylvania, USA, pp. 253–254.
Costello M.J., Stocks K., Zhang Y., Grassle J.F., Fautin D.G. 2007. About the Ocean Biogeographic Information System. Retrieved from http://hdl.handle.net/2292/5236
Edward Vanden Berghe, Karen I. Stocks and J. Frederick Grassle. 2010. Data Integration: The Ocean Biogeographic Information System. Chapter 17 (pp. 333-353) in Alasdair D. McIntyre (ed.): Life in the World's Oceans. Blackwell Publishing Ltd. Chapter also available at http://www.comlmaps.org/mcintyre/ch17/data-integration-the-ocean-biogeographic-information-system.
Vanden Berghe E, Halpin P, Lang da Silveira F, Stocks K and Grassle F. 2010. Integrating biological data into ocean observing systems: The future role of OBIS. In Hall J, Harrison D E, and Stammer D (eds), Proceedings of OceanObs’09: Sustained Ocean Observations and Information for Society (Volume 2). Paris, European Space Agency Publication No WPP-306.
Costello M.J., Vanhoorne B., Appeltans W. 2015. Progressing conservation of biodiversity through taxonomy, data publication and collaborative infrastructures. Conservation Biology 29 (4), 1094–1099.
References
External links
OBIS International Portal
International OBIS Portal
Regional OBIS nodes
AfrOBIS: Sub-Saharan Africa node of OBIS
EurOBIS: European node of OBIS
IndOBIS: Indian Ocean node of OBIS
OBIS Australia: Australian regional node of OBIS
OBIS China: Chinese regional node of OBIS
OBIS Southwestern Pacific: Southwestern Pacific regional node of OBIS
SCAR-MarBIN: Antarctic Marine Biodiversity Information Network
SEAOBIS: Southeast Asia node of OBIS
US OBIS: U.S.A. regional node of OBIS
OBIS Canada: Canadian regional node of OBIS
Thematic nodes
OBIS-SEAMAP (Ocean Biodiversity Information System - Spatial Ecological Analysis of Megavertebrate Populations)
Parent project
Census of Marine Life Home Page
History
Original OBIS Portal Prototype (circa 2000)
From the archives: Evolution of the OBIS Portal, 2002-current
Marine biology Fisheries databases Biodiversity Biogeography Ecological data Zoological literature Information systems
Ocean Biodiversity Information System
[ "Technology", "Biology" ]
2,420
[ "Biogeography", "Marine biology", "Information systems", "Information technology", "Biodiversity" ]
64,162,417
https://en.wikipedia.org/wiki/Order%20convergence
In mathematics, specifically in order theory and functional analysis, a filter $\mathcal{F}$ in an order complete vector lattice $X$ is order convergent if it contains an order bounded subset (that is, a subset contained in an interval of the form $[a, b]$) and if
$\sup_{S \in \mathcal{S} \cap \mathcal{F}} \left( \inf S \right) = \inf_{S \in \mathcal{S} \cap \mathcal{F}} \left( \sup S \right),$
where $\mathcal{S}$ is the set of all order bounded subsets of $X$, in which case this common value is called the order limit of $\mathcal{F}$ in $X$. Order convergence plays an important role in the theory of vector lattices because the definition of order convergence does not depend on any topology. Definition A net $(x_\alpha)_{\alpha \in A}$ in a vector lattice $X$ is said to decrease to $x_0 \in X$ if $\alpha \leq \beta$ implies $x_\beta \leq x_\alpha$ and $x_0 = \inf \{ x_\alpha : \alpha \in A \}$ in $X$. A net $(x_\alpha)_{\alpha \in A}$ in a vector lattice $X$ is said to order-converge to $x_0 \in X$ if there is a net $(y_\alpha)_{\alpha \in A}$ in $X$ that decreases to $0$ and satisfies $|x_\alpha - x_0| \leq y_\alpha$ for all $\alpha \in A$. Order continuity A linear map $T : X \to Y$ between vector lattices is said to be order continuous if whenever $(x_\alpha)_{\alpha \in A}$ is a net in $X$ that order-converges to $x_0$ in $X$, then the net $(T(x_\alpha))_{\alpha \in A}$ order-converges to $T(x_0)$ in $Y$. $T$ is said to be sequentially order continuous if whenever $(x_n)_{n \in \mathbb{N}}$ is a sequence in $X$ that order-converges to $x_0$ in $X$, then the sequence $(T(x_n))_{n \in \mathbb{N}}$ order-converges to $T(x_0)$ in $Y$. Related results In an order complete vector lattice $X$ whose order is regular, $X$ is of minimal type if and only if every order convergent filter in $X$ converges when $X$ is endowed with the order topology.
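As a sanity check of the net definition (a standard fact, sketched here for illustration): in $X = \mathbb{R}$ with its usual order, order convergence of a sequence coincides with ordinary convergence. Indeed, if $x_n \to x_0$, set
$y_n := \sup_{k \geq n} |x_k - x_0|;$
then $(y_n)$ decreases to $0$ and $|x_n - x_0| \leq y_n$ for all $n$, while conversely the existence of such a dominating sequence decreasing to $0$ forces $x_n \to x_0$.
See also References Functional analysis Order theory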
Order convergence
[ "Mathematics" ]
256
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Order theory" ]
64,162,475
https://en.wikipedia.org/wiki/2020%20Dahej%20chemical%20plant%20explosion
On 3 June 2020, an explosion occurred at the Yashashvi Rasayan Pvt. Ltd. chemical factory at Dahej in Gujarat, India, at around 12:00 hours. Five people were killed and 57 were injured in the explosion. See also 2020 Ahmedabad chemical factory blast Visakhapatnam gas leak List of industrial disasters References 2020 disasters in India 2020 industrial disasters Chemical plant explosions Disasters in Gujarat 2020s in Gujarat June 2020 events in India Explosions in 2020 Industrial fires and explosions in India
2020 Dahej chemical plant explosion
[ "Chemistry" ]
102
[ "Chemical plant explosions", "Explosions" ]
64,164,203
https://en.wikipedia.org/wiki/Jan-Erik%20Roos
Jan-Erik Ingvar Roos (16 October 1935 – 15 December 2017) was a Swedish mathematician whose research interests were in abelian category theory, homological algebra, and related areas. He was born in Halmstad, in the province of Halland on the Swedish west coast. Roos enrolled at Lund University in 1954 and started studying mathematics with Lars Gårding in 1957. Under Gårding's direction he wrote a thesis on ordinary differential equations, and graduated in 1958 with a licentiate degree. Later that year he went to Paris on a doctoral scholarship; there, he gravitated towards the mathematical environment at the Institut Henri Poincaré and the various seminars held there. After a while, he started attending Alexander Grothendieck's seminar at the Institut des hautes études scientifiques in Bures-sur-Yvette, where he became interested in abstract algebra and algebraic geometry. In 1967 he was invited by Saunders Mac Lane to visit the University of Chicago for three months; Mac Lane was impressed by Roos and later wrote a very positive letter of recommendation for him. Upon his return to Sweden, Roos was appointed Professor of Mathematics at Stockholm University in 1970, and started building a strong algebra school. He was elected to the Royal Swedish Academy of Sciences in 1980 and was its President from 1980 to 1982. While serving in the Academy, he sat on the committees deciding the Rolf Schock Prizes in Mathematics and the Crafoord Prize in Astronomy and Mathematics. Roos made important contributions to homological algebra and carried out extensive computer-assisted studies of Hilbert–Poincaré series and their rationality. A special issue of the journal Homology, Homotopy and Applications ("The Roos Festschrift volume") was published in 2002, on the occasion of his 65th birthday. He died on 15 December 2017 at his home in Uppsala and is buried at the Uppsala old cemetery. Publications References 1935 births 2017 deaths People from Halmstad 20th-century Swedish mathematicians 21st-century Swedish mathematicians Algebraists Lund University alumni Swedish expatriates in France Members of the Royal Swedish Academy of Sciences Academic staff of Stockholm University Burials at Uppsala old cemetery
Jan-Erik Roos
[ "Mathematics" ]
450
[ "Algebra", "Algebraists" ]
64,164,958
https://en.wikipedia.org/wiki/Cerro%20Impacto
Cerro Impacto is a large mineral deposit in the southern Venezuelan state of Amazonas. References Thorium Nuclear fuels Nuclear materials Economic geology Geologic formations of Venezuela Igneous rocks
Cerro Impacto
[ "Physics" ]
35
[ "Materials", "Nuclear materials", "Matter" ]
64,165,817
https://en.wikipedia.org/wiki/South%20African%20Institute%20of%20Civil%20Engineers
The South African Institute of Civil Engineers (SAICE) is the professional body for civil engineers in South Africa. It publishes the SAICE Journal. It is a member of the Southern African Federation of Engineering Organisations (SAFEO) and the Federation of African Engineering Organisations (FAEO), which is a member of the World Federation of Engineering Organizations (WFEO). References External links Engineering societies based in South Africa Civil engineering professional associations
South African Institute of Civil Engineers
[ "Engineering" ]
89
[ "Civil engineering professional associations", "Civil engineering organizations" ]
64,169,752
https://en.wikipedia.org/wiki/Multi-cycle%20processor
A multi-cycle processor is a processor that carries out one instruction over multiple clock cycles, often without starting up a new instruction in that time (as opposed to a pipelined processor, which overlaps instructions by starting a new one before the previous one has finished).
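The usual trade-off can be sketched with a back-of-the-envelope calculation in Python: a single-cycle design must fit the slowest instruction into one long clock period, while a multi-cycle design clocks at the speed of the slowest stage and spends a variable number of cycles per instruction. All stage latencies, cycle counts and instruction frequencies below are made-up illustrative values, not figures from the article.

# Hypothetical per-stage latencies in nanoseconds
stage_ns = {"fetch": 2.0, "decode": 2.0, "execute": 2.0, "mem": 2.0, "writeback": 2.0}
# Hypothetical cycles each instruction class needs (the stages it actually uses)
cycles_per_instr = {"alu": 4, "load": 5, "store": 4, "branch": 3}
# Hypothetical instruction mix (frequencies summing to 1)
mix = {"alu": 0.50, "load": 0.25, "store": 0.15, "branch": 0.10}

single_cycle_period = sum(stage_ns.values())  # one clock must cover every stage
multi_cycle_period = max(stage_ns.values())   # clock only as slow as the slowest stage
avg_cpi = sum(mix[k] * cycles_per_instr[k] for k in mix)

print(f"single-cycle: {single_cycle_period:.2f} ns per instruction")
print(f"multi-cycle:  {avg_cpi * multi_cycle_period:.2f} ns per instruction (CPI = {avg_cpi:.2f})")
# With these balanced stage latencies the multi-cycle design comes out ahead
# because short instructions skip stages; with unbalanced latencies the
# comparison can go either way.

See also Single-cycle processor, a processor executing (and finishing) one instruction per clock cycle References Microprocessors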
Multi-cycle processor
[ "Technology" ]
65
[ "Computing stubs" ]
64,170,193
https://en.wikipedia.org/wiki/Hack%20Club
Hack Club is a global nonprofit network of high school computer hackers, makers and coders founded in 2014 by Zach Latta. It now includes more than 500 high school clubs and 40,000 students. It has been featured on the TODAY Show and profiled in the Wall Street Journal and many other publications. Programs Hack Club's primary focus is its clubs program, in which it supports high school coding clubs through learning resources and mentorship. It also runs a series of other programs and events. Some of its notable programs and events include:
HCB - A fiscal sponsorship program originally targeted at high school hacker events
AMAs - Video calls with industry experts such as Elon Musk, Vitalik Buterin, and Sal Khan
Summer of Making - A collaboration with GitHub, Adafruit & Arduino to create an online summer program for teenagers during the COVID-19 pandemic that included $50k in hardware donations to teen hackers around the world
The Hacker Zephyr - A cross-country hackathon on a train across America
Assemble - The first high school hackathon in San Francisco since the COVID-19 pandemic, with the stated goal of "kick[ing] off a hackathon renaissance"
Epoch - A global high schooler-led hackathon in Delhi NCR, organized in public to inspire the community of student hackers and bring hundreds of teenagers together
Winter Hardware Wonderland - An online winter program where teenagers submit ideas for hardware projects and, if accepted, get grants of up to $250
Outernet - An experimental four-day hackathon and camping trip in the Northeast Kingdom
2024 Leader's Summit - A 72-hour hackathon in San Francisco where teenage club leaders built projects for their club members to use
Wonderland - A 48-hour hackathon in Boston where teenagers built projects using random items found in their "chest"
Apocalypse - A 42-hour high-school hackathon at Shopify's Toronto office, with the theme of a "zombie apocalypse"
The Boreal Express - A cross-country hackathon on a train in partnership with Via Rail, originally planned from Vancouver to Montreal but turned around due to wildfires in Jasper, Alberta
Arcade - An online summer program in collaboration with GitHub, allowing teenagers to log work on creative projects to earn “tickets”, which could be exchanged for prizes
Onboard - A $100 grant for high schoolers to produce PCBs
Funding Hack Club is funded by grants from philanthropic organizations and donations from individual supporters. In 2019, GitHub Education provided cash grants of up to $500 to every Hack Club "hackathon" event. In May 2020, GitHub committed to a $50K hardware fund, globally alongside Arduino and Adafruit, to deliver hardware tools directly to students' homes with a program named Hack Club Summer of Making. Elon Musk and the Musk Foundation donated $500,000 to help expand Hack Club in 2020, donated another $1,000,000 in 2021, and an additional $4,000,000 in 2023. In 2022, Tom and Theresa Preston-Werner donated $500,000 to Hack Club. See also Ethical hacking References Hacker culture Clubs and societies Computer programming 2014 establishments in Vermont
Hack Club
[ "Technology", "Engineering" ]
669
[ "Software engineering", "Computer programming", "Computers" ]
64,170,595
https://en.wikipedia.org/wiki/Lomonosovite
Lomonosovite is a phosphate–silicate mineral with the idealized formula Na10Ti4(Si2O7)2(PO4)2O4, earlier given as Na5Ti2(Si2O7)(PO4)O2 or Na2Ti2Si2O9·Na3PO4. The main admixtures are niobium (up to 11.8% Nb2O5), manganese (up to 4.5% MnO) and iron (up to 2.8%). Discovery and name The mineral was discovered by V.I. Gerasimovskii in the Lovozero agpaitic massif. It is named for Mikhail Lomonosov, the famous Russian poet, chemist and philosopher, who earlier in his career was a mining engineer. Crystal structure According to X-ray data, the lomonosovite structure was determined to have a triclinic unit cell with parameters a = 5.44 Å, b = 7.163 Å, c = 14.83 Å, α = 99°, β = 106°, and γ = 90°, usually centrosymmetric (sp. gr. P-1), though acentric varieties (polytypes) are also reported. The crystal structure of lomonosovite is based on three-layer HOH packets consisting of a central octahedral O layer and two outer heteropolyhedral H layers. Ti- and Na-centered octahedra are distinguished in the O layer, whereas the H layers are composed of Ti-centered octahedra and Si2O7 diorthogroups (as in other heterophyllosilicates, for example lamprophyllite). The interpacket space includes Na+ cations and PO4^3− anions. Properties Lomonosovite forms lamellar and tabular crystals with perfect cleavage. It is macroscopically brown, from cinnamon-brown to black, and is transparent in thin plates. The luster is vitreous to adamantine. Its pleochroism is strong, from colorless to brown. The refractive indices are nα = 1.654–1.670, nβ = 1.736–1.750, nγ = 1.764–1.778; 2V = 56–69°. Hardness is 3–4; density is 3.12–3.15 g/cm³. Origin Lomonosovite is an accessory mineral of peralkaline agpaitic nepheline syenites (such as the Khibina and Lovozero massifs, Russia, and the Ilimaussaq intrusion, Greenland) and an important mineral of agpaitic pegmatites and peralkaline fenites.
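For orientation, the volume of the triclinic unit cell can be computed from the lattice parameters quoted above with the standard crystallographic formula; the short Python sketch below does this (an illustration only, not a calculation from the cited papers).

from math import cos, radians, sqrt

# Lattice parameters of lomonosovite as given in the text (angstroms, degrees)
a, b, c = 5.44, 7.163, 14.83
al, be, ga = map(radians, (99.0, 106.0, 90.0))

# Standard triclinic cell-volume formula:
# V = abc * sqrt(1 - cos^2(alpha) - cos^2(beta) - cos^2(gamma)
#                + 2*cos(alpha)*cos(beta)*cos(gamma))
V = a * b * c * sqrt(
    1 - cos(al) ** 2 - cos(be) ** 2 - cos(ga) ** 2
    + 2 * cos(al) * cos(be) * cos(ga)
)
print(f"unit-cell volume = {V:.1f} cubic angstroms")  # roughly 548

References Crystals Phosphate minerals Silicate minerals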
Lomonosovite
[ "Chemistry", "Materials_science" ]
552
[ "Crystallography", "Crystals" ]
64,170,708
https://en.wikipedia.org/wiki/Value-freedom
Value-freedom is a methodological position, offered by the sociologist Max Weber, that aims for the researcher to become aware of their own values during their scientific work, in order to reduce as much as possible the biases that their own value judgements could cause. The demand developed by Max Weber is part of the criteria of scientific neutrality. The aim of the researcher in the social sciences is to conduct research on subjects structured by values, while offering an analysis that is not itself based on a value judgement. According to this concept, the researcher should make of these values an “object”, without passing a prescriptive judgement on them. In this way, Weber developed a distinction between "value judgement" and "link to the values". The "link to the values" describes the analytic stance of the researcher who, by respecting the principle of value-freedom, treats cultural values as facts to analyse without venturing a prescriptive judgement on them, i.e. without passing a value judgement. The original term comes from the German werturteilsfreie Wissenschaft and was introduced by Max Weber. Bibliography Max Weber, Max Weber on the Methodology of the Social Sciences, 1949 See also Fact-value distinction Empirical research Epistemology Ethnology Ethnocentrism Scientific method Objectivity (philosophy) Philosophy of science References Max Weber Social science methodology Research ethics 1949 introductions
Value-freedom
[ "Technology" ]
290
[ "Research ethics", "Ethics of science and technology" ]
64,170,778
https://en.wikipedia.org/wiki/Interface%20force%20field
In the context of chemistry and molecular modelling, the Interface force field (IFF) is a force field for classical molecular simulations of atoms, molecules, and assemblies up to the large nanometer scale, covering compounds from across the periodic table. It employs a consistent classical Hamiltonian energy function for metals, oxides, and organic compounds, linking biomolecular and materials simulation platforms into a single platform. The reliability is often higher than that of density functional theory calculations at more than a million times lower computational cost. IFF includes a physical-chemical interpretation for all parameters as well as a surface model database that covers different cleavage planes and surface chemistry of included compounds. The Interface Force Field is compatible with force fields for the simulation of primarily organic compounds and can be used with common molecular dynamics and Monte Carlo codes. Structures and energies of included chemical elements and compounds are rigorously validated and property predictions are up to a factor of 100 more accurate relative to earlier models. Origin IFF was developed by Hendrik Heinz and his research group in 2013, based on preliminary work dating back to 2003 that includes a new rationale for atomic charges, use of energy expressions, interpretation of parameters, and a series of outperforming force field parameters for minerals, metals, and polymers. The force fields covered new chemical space and were one to two orders of magnitude more accurate than prior models where available, with apparently no restrictions to extend them further across the periodic table. As early as the late 1960s, interatomic potentials were developed, for example, for amino acids and later served the CHARMM program. The fraction of covered chemical space was small, however, considering the size of the periodic table, and compatible interatomic potentials for inorganic compounds remained largely unavailable. Differing energy functions and a lack of interpretation and validation of parameters restricted modeling to isolated compounds with unpredictable errors. Assumptions of formal charges, a lack of rationale for Lennard-Jones parameters and even for bonded terms, fixed atoms, as well as other approximations often led to collapsed structures and random energy differences when allowing atom mobility. A concept for consistent simulations of inorganic-organic interfaces, which formed the basis of IFF, was first introduced in 2003. A major obstacle was the poor definition of atomic charges in molecular models, especially for inorganic compounds, due to reliance on quantum chemistry calculations and partitioning methods that may be suitable for field-based but not for point-based charge distributions necessary in force fields. As a result, uncertainties in quantum-mechanically derived point charges were often 100% or higher, clearly unsuited to quantify chemical bonding or chemical processes in force fields and in molecular simulations. IFF utilizes a method to assign atomic charges that translates chemical bonding accurately into molecular models, including metals, oxides, minerals, and organic molecules.
The models reproduce multipole moments internal to a chemical compound on the basis of experimental data for electron deformation densities, dipole moments (often known to <1% error), as well as consideration of atomization energies, ionization energies, coordination numbers, and trends relative to other chemically similar compounds in the periodic table (the Extended Born Model). The method ensures a combination of experimental data and theory to represent chemical bonding and yields up to ten times more reliable and reproducible atomic charges in comparison to the use of quantum methods, with typical uncertainties of 5%. This approach is essential to carry out consistent all-atom simulations of compounds across the periodic table that vary widely in the type of chemical bonding and in internal polarity. IFF also allows the inclusion of specific features of the electronic structure such as π electrons in graphitic materials and aromatic compounds as well as image charges in metals. Another distinctive characteristic of IFF is the systematic reproduction of structures and energies to validate the classical Hamiltonian. First, the quality of structural predictions is assessed by validation of lattice parameters and densities from X-ray data, which has been common in molecular simulations. Second, IFF additionally uses surface and cleavage energies for solids from experimental measurements to ensure a reliable potential energy surface. Third, force field parameters and reference data are considered at standard temperature and pressure. This protocol is far more practical than using lattice parameters at a temperature of 0 K and cohesive (vaporization) energies at up to 3000 K, which is commonly the case when assessing ab-initio calculations, where the conditions are far from practical utility and experimental data for validation may be limited or unavailable. As a result of the advances in IFF, hydration energies, adsorption energies, and thermal and mechanical properties can often be computed in quantitative agreement with measurements without further parameter modifications. The IFF parameters also have a physical-chemical interpretation and allow chemical analogy as an effective method to derive parameters for chemically similar, yet not parameterized, compounds with good accuracy. Alternative approaches based on gray-box or black-box fitting of force field parameters, e.g., using lattice parameters and mechanical properties (the 2nd derivative of the energy) as target quantities, lack interpretability and frequently incur 50% to 500% error in surface and interfacial energies, which is usually not sufficient to accelerate materials design.
IFF employs the same potential energy function as other common force fields (CHARMM, AMBER, OPLS-AA, CVFF, DREIDING, GROMOS, PCFF, COMPASS), including options for 12-6 and 9-6 Lennard-Jones potentials, and can be used standalone or as a plugin to these force fields to utilize existing parameters. Applications Accurate interatomic potentials are essential to analyze assemblies of atoms, molecules, and nanostructures up to the small microscale. IFF is used in molecular dynamics simulations of nanomaterials and biological interfaces. Structures of up to tens of thousands of atoms can be analyzed on a workstation, and up to a billion atoms using supercomputing. Examples include properties of metals and alloys, mineral-organic interfaces, protein- and DNA-nanomaterial interactions, earth and building materials, carbon nanostructures, batteries, and polymer composites. The simulations visualize atomically resolved processes and quantify relationships to macroscale properties that are elusive from experiments due to limitations in imaging and tracking of atoms. Modeling thereby complements experimental studies by X-ray diffraction, electron microscopy and tomography, such as transmission electron microscopy and atomic force microscopy, as well as several types of spectroscopy, calorimetry, and electrochemical measurements. Knowledge of the 3D atomic structures and dynamic changes over time is key to understanding the function of sensors, molecular signatures of diseases, and material properties. Computations with IFF can also be used to screen large numbers of hypothetical materials for guidance in synthesis and processing. Surface model database A database in IFF provides simulation-ready models of crystal structures and crystallographic surfaces of metals and minerals. Often, variable surface chemistry is important, such as in pH-responsive surfaces of silica, hydroxyapatite, and cement minerals. The model options in the database incorporate extensive experimental data, which can be selected and customized by users. For example, models for silica cover the flexible area density of silanol groups and siloxide groups according to data from differential thermal gravimetry, spectroscopy, zeta potentials, surface titration, and pK values. Similarly, hydroxyapatite minerals in bone and teeth display surfaces that differ in dihydrogenphosphate versus monohydrogenphosphate content as a function of pH value. The surface chemistry is often as critical as good interatomic potentials to predict the dynamics of electrolyte interfaces, molecular recognition, and surface reactions. Application to chemical reactions IFF is primarily a classical potential with limited applicability to chemical reactions. Quantitative simulation of reactions is, however, a natural extension due to an interpretable representation of chemical bonding and electronic structure. Simulations of the relative activity of Pd nanoparticle catalysts in C-C Stille coupling, hydration reactions, and cis-trans isomerization reactions of azobenzene have been reported. A general pathway to simulating reactions is QM/MM simulation. Other pathways to implement reactions are user-defined changes in bond connectivity during the simulations, and use of a Morse potential instead of a harmonic bond potential to enable bond breaking in stress-strain simulations.
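A minimal sketch in Python (for illustration only; this is not IFF code, and the parameter values are placeholders, not IFF parameters) of the two Lennard-Jones functional forms mentioned above, in the well depth/minimum distance (epsilon, r0) parameterisation common in materials force fields:

def lj_12_6(r, eps, r0):
    # 12-6 Lennard-Jones: E(r) = eps * ((r0/r)**12 - 2*(r0/r)**6),
    # minimum of depth -eps at r = r0, with a steep repulsive wall
    return eps * ((r0 / r) ** 12 - 2.0 * (r0 / r) ** 6)

def lj_9_6(r, eps, r0):
    # 9-6 Lennard-Jones: E(r) = eps * (2*(r0/r)**9 - 3*(r0/r)**6),
    # same minimum location and depth, but a softer repulsive wall
    return eps * (2.0 * (r0 / r) ** 9 - 3.0 * (r0 / r) ** 6)

eps, r0 = 0.1, 3.0  # placeholder values (e.g. kcal/mol and angstroms)
for r in (2.5, 3.0, 3.5, 4.0):
    print(f"r = {r:4.1f}  12-6: {lj_12_6(r, eps, r0):+8.4f}  9-6: {lj_9_6(r, eps, r0):+8.4f}")

References Intermolecular forces Molecular physics Interface force field Molecular modelling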
Interface force field
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,853
[ "Molecular physics", "Force fields (chemistry)", "Materials science", "Intermolecular forces", "Theoretical chemistry", "Molecular modelling", "Molecular dynamics", "Computational chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
64,171,233
https://en.wikipedia.org/wiki/Chisel%20%28programming%20language%29
Chisel (an acronym for Constructing Hardware in a Scala Embedded Language) is an open-source hardware description language (HDL) used to describe digital electronics and circuits at the register-transfer level. Chisel is based on Scala as a domain-specific language (DSL). Chisel inherits the object-oriented and functional programming aspects of Scala for describing digital hardware. Using Scala as a basis allows describing circuit generators. High-quality, freely accessible documentation exists in several languages. Circuits described in Chisel can be converted to a description in Verilog for synthesis and simulation. Code examples A simple example describing an adder circuit and showing the organization of components in a Module with input and output ports:

class Add extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(8.W))
    val b = Input(UInt(8.W))
    val y = Output(UInt(8.W))
  })
  io.y := io.a + io.b
}

A 32-bit register with a reset value of 0:

val reg = RegInit(0.U(32.W))

A multiplexer is part of the Chisel library:

val result = Mux(sel, a, b)

Use Although Chisel is not yet a mainstream hardware description language, it has been explored by several companies and institutions. The most prominent use of Chisel is an implementation of the RISC-V instruction set, the open-source Rocket chip. Chisel is mentioned by the Defense Advanced Research Projects Agency (DARPA) as a technology to improve the efficiency of electronic design, where smaller design teams do larger designs. Google has used Chisel to develop a Tensor Processing Unit for edge computing. Some developers prefer Chisel as it requires five times less code than Verilog and is much faster to develop with. The conversion of Chisel circuit descriptions to Verilog for synthesis and simulation is performed by a program named FIRRTL. See also VHDL Verilog SystemC SystemVerilog References External links University of California, Berkeley Hardware description languages Science and technology in California
Chisel (programming language)
[ "Engineering" ]
441
[ "Electronic engineering", "Hardware description languages" ]
64,172,078
https://en.wikipedia.org/wiki/Ordered%20algebra
In mathematics, an ordered algebra is an algebra over the real numbers with unit $e$ together with an associated order such that $e$ is positive (i.e. $e \geq 0$), the product of any two positive elements is again positive, and when $A$ is considered as a vector space over $\mathbb{R}$ it is an Archimedean ordered vector space. Properties Let $A$ be an ordered algebra with unit $e$ and let $C^*$ denote the cone in $A^*$ (the algebraic dual of $A$) of all positive linear forms on $A$. If $f$ is a linear form on $A$ such that $f(e) = 1$ and $f$ generates an extreme ray of $C^*$, then $f$ is a multiplicative homomorphism. Results Stone's Algebra Theorem: Let $A$ be an ordered algebra with unit $e$ such that $e$ is an order unit in $A$, let $A^*$ denote the algebraic dual of $A$, and let $K$ be the $\sigma(A^*, A)$-compact set of all multiplicative positive linear forms satisfying $f(e) = 1$. Then, under the evaluation map, $A$ is isomorphic to a dense subalgebra of $C(K)$. If in addition every positive sequence of type $\ell^1$ in $A$ is order summable, then $A$ together with the Minkowski functional $p_e$ is isomorphic to the Banach algebra $C(K)$.
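As a concrete illustration (a standard example, sketched here rather than taken from the sources): the algebra appearing in the conclusion of the theorem is itself an ordered algebra. Take $A = C(K)$, the continuous real-valued functions on a compact space $K$, with pointwise product and pointwise order and with unit $e = \mathbf{1}$; then $e$ is an order unit whose Minkowski functional is the supremum norm,
$p_e(f) = \inf\{\lambda > 0 : -\lambda e \leq f \leq \lambda e\} = \|f\|_\infty.$
See also Ordered vector space Riesz space References Sources Functional analysis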
Ordered algebra
[ "Mathematics" ]
267
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
64,172,642
https://en.wikipedia.org/wiki/43%2C112%2C609
43,112,609 (forty-three million, one hundred twelve thousand, six hundred nine) is the natural number following 43,112,608 and preceding 43,112,610. In mathematics 43,112,609 is a prime number. Moreover, it is the exponent of the 47th Mersenne prime, equal to M43,112,609 = 2^43,112,609 − 1, a prime number with 12,978,189 decimal digits. It was discovered on August 23, 2008 by Edson Smith, a volunteer of the Great Internet Mersenne Prime Search (GIMPS). The 45th Mersenne prime, M37,156,667 = 2^37,156,667 − 1, was discovered two weeks later on September 6, 2008, marking the shortest chronological gap between discoveries of Mersenne primes since the formation of the online collaborative project in 1996. It was the first time since 1963 that two Mersenne primes were discovered less than 30 days apart from each other. Less than a year later, on June 4, 2009, the 46th Mersenne prime, M42,643,801 = 2^42,643,801 − 1, was discovered by Odd Magnar Strindmo, a GIMPS participant from Norway. The result for this prime was first reported to the server in April 2009 but, due to a bug, remained unnoticed for nearly two months. Having 12,837,064 decimal digits, it is only 141,125 digits, or 1.09%, shorter than M43,112,609. These two Mersenne primes hold the record for the smallest ratio between the exponents of two known Mersenne primes. 43,112,609 is the degree of four of the seven largest primitive binary trinomials over GF(2) found in 2016; four trinomials of this degree were also the largest known in 2011. 43,112,609 is a Sophie Germain prime (that is, 2 × 43,112,609 + 1 = 86,225,219 is also prime), the largest of only eight known Mersenne prime indexes to have this property. 43,112,609 is not a Gaussian prime, since it is congruent to 1 modulo 4; it is the largest of only 28 known Mersenne prime indexes to have this property.
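A quick sanity check of two of the properties above, sketched in Python using SymPy. Verifying the Mersenne number 2^43,112,609 − 1 itself requires specialised software (such as the Lucas–Lehmer test used by GIMPS), so only properties of the exponent are checked here.

from sympy import isprime

p = 43_112_609
assert isprime(p)           # the exponent is itself prime
assert isprime(2 * p + 1)   # Sophie Germain: 2p + 1 = 86,225,219 is also prime
assert p % 4 == 1           # p = 1 (mod 4), so p is not a Gaussian prime
print("all checks passed")

References Further reading George Woltman, Scott Kurowski, "On the discovery of the 45th and 46th known Mersenne primes", Fibonacci Quarterly, vol. 46/47, no. 3, pp. 194–197, August 2008. Integers Prime numbers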
43,112,609
[ "Mathematics" ]
508
[ "Prime numbers", "Mathematical objects", "Elementary mathematics", "Integers", "Numbers", "Number theory" ]
61,801,258
https://en.wikipedia.org/wiki/Snaiad
Snaiad is a speculative evolution, science fiction and artistic worldbuilding project by Turkish artist C. M. Kösemen, focused on a fictional exoplanet of the same name. Begun in the early 2000s and inspired by earlier works such as Wayne Barlowe's 1990 book Expedition, the project comprises hundreds of paintings and sketches by Kösemen of the creatures of Snaiad, with detailed ecological roles and taxonomic relationships to each other. The sheer number of invented creatures and lineages makes Snaiad one of the most biologically diverse fictional worlds. Since Kösemen first published Snaiad artwork online, the project has garnered a following and international attention. Fans of Snaiad have produced fan art, not only of Kösemen's own creatures but also of their own imagined Snaiadi creatures, consistent with the biological principles followed by the rest of the project's lifeforms. Kösemen hopes to eventually publish the project in book form. Premise Snaiad is an exoplanet slightly larger than Earth located outside of the Milky Way. The planet is home to a large variety of fauna, which Kösemen has designed and meticulously documented, along with maps and a geopolitical story of Snaiad as it undergoes the process of human colonization about 300 years in the future, Snaiad being one of Earth's first interstellar colonies. According to science writer Darren Naish, Snaiad "might well break the record as goes the number of fictional entities invented so far" due to the sheer number of lineages and lifeforms designed by Kösemen. The equivalents of tetrapods on Snaiad, the so-called 'para-tetrapods', have two "heads"; the first head, typically most similar to the heads of Earth animals, is a modified set of genitalia that is also used to catch or grab food, while the second head, located below, has only a digestive function. The 'para-tetrapods' of Snaiad have hydraulic muscles (i.e. powered by fluids being pumped in and out), and their skeletons are made not of calcium but of a carbon-based mixture more similar to very hard wood. The bones of Snaiad creatures are usually black, brown or green and can catch fire at high temperatures. Just as with Earth animals, the creatures of Snaiad are classified into different taxonomic groupings, such as the "Allotauriformes" (large and armoured herbivores), the "Cardiocetoida" (whale-like aquatic creatures with jet propulsion) and the "Kahydroniformes" (intelligent predators with hooves and claws). One of the most successful and widespread predators of Snaiad is the Kahydron, with forelimbs strengthened with claws and hooves. The hydraulic muscles of the Kahydron stretch to its cheekbones, which gives it an exceptionally strong bite. Other than the human colonists, there are no sapient lifeforms on Snaiad. According to Kösemen: 'intelligence is not a "goal" of the evolutionary process, nothing is.' Project history Snaiad was inspired by Wayne Barlowe's 1990 book Expedition, which describes the exploration and the wildlife of an alien world. Other inspirational works included Gert van Dijk's "Furaha" (a similar project focusing on an alien world), the art of Terryl Whitlatch, James Gurney's Dinotopia book series and the works of naturalist painters, such as John James Audubon. The first steps to beginning the project were taken in 2004 or 2005, when Kösemen, still a student, began drawing alien creatures. Kösemen initially called the fictional planet 'Snai 4'.
As Kösemen produced more sketches and eventually paintings, ecological niches and relationships between the animals materialized almost on their own; according to Kösemen, "It evolved organically. I corrected things and added new ones. It was more evolution than design!" Kösemen worked "non-stop" on Snaiad until 2007, though work on the project has continued sporadically since then. Snaiad's official website was launched in June 2008. The Snaiad website would later go offline for many years, but was relaunched in 2014. Later that year, in August, Kösemen held a talk about Snaiad at the 72nd World Science Fiction Convention in London. At the talk, Kösemen discussed the development of the project and shared never-before-seen early renditions of creatures. By the time of the 2014 talk, Kösemen had around 200 colorful paintings of Snaiad creatures and a backlog of nearly 500 concepts. Despite not yet having been published in any form other than online, Snaiad has garnered a following. 'Snaiad' has more Google search hits than Kösemen's own name, and the project grew especially popular on the art-sharing website DeviantArt, where amateur artists drew fan art of Kösemen's creatures and also invented their own Snaiadi creatures. Fans of Snaiad have also created YouTube animations and digital as well as physical models of Kösemen's creatures. Per Darren Naish, writing in response to Kösemen's 2014 talk, "The mass appeal of Snaiad was demonstrated by the invention of Spore versions of Snaiad creatures, by the number of website mentions, by fan-art of assorted genres, and by the enthusiasm of the attending audience." Kösemen attributes the appeal of Snaiad to the project being somewhat "open source", with Kösemen allowing fan art and even 'canonizing' the fan creations he liked best, and to the established common body plans and anatomy, which create the possibility of consistent creations. Kösemen has expressed interest in eventually publishing Snaiad in book form. References External links Life on Snaiad – official website of the project The Story of Snaiad – Kösemen's 2014 talk at the 72nd World Science Fiction Convention Speculative evolution Fictional planets
Snaiad
[ "Biology" ]
1,257
[ "Biological hypotheses", "Speculative evolution", "Hypothetical life forms" ]
61,803,136
https://en.wikipedia.org/wiki/Blowback%20%28steam%20engine%29
A blowback (also blow back or blow-back) is a failure of a steam locomotive, which can be catastrophic. One type of blowback is caused when atmospheric air blows down the locomotive's chimney, causing the flow of hot gases through the boiler tubes to be reversed, with the fire itself being blown through the firehole onto the footplate, with potentially serious consequences for the crew. The risk of backdraught is higher when the locomotive enters a tunnel because of the pressure shock. Such blowbacks can be prevented by opening the blower before closing the regulator. Similar blowback can be caused by debris or other obstructions in the smokebox. In the days when steam-hauled trains were common in the United Kingdom, blowbacks occurred fairly frequently. In a 1955 report on an accident near Dunstable, the Inspector discussed the problem and recommended that the British Transport Commission carry out an investigation into the causes of blowbacks. Blowbacks can also occur when a steam tube (or pipe) bursts in the boiler, allowing high-pressure steam to enter the firebox and thus escape onto the footplate. Other potential causes are unused mining explosives in the coal used to fuel the engine, and unburnt gases collecting in the firebox and then igniting. Examples The 1965 Winsford railway accident was caused by a blowback. Driver Wallace Oakes died as a result, and his fireman Gwilym Roberts was severely injured. References Locomotive boilers Steam locomotive fireboxes Steam locomotive exhaust systems Explosions
Blowback (steam engine)
[ "Chemistry", "Engineering" ]
308
[ "Combustion engineering", "Explosions", "Steam locomotive fireboxes" ]
61,803,742
https://en.wikipedia.org/wiki/Pseudothielavia%20terricola
Pseudothielavia terricola is a fungal species of the phylum Ascomycota, family Chaetomiaceae, and genus Pseudothielavia. Pseudothielavia terricola is widely distributed, especially in the tropical regions of the world, with documented appearances in Africa, Southern Europe, and Asia. The species is mainly found in soil, but can also be found on other materials such as animal dung. The species was first assigned to the genus Coniothyrium in 1927, but was soon re-assigned to the genus Thielavia, a placement which endured for almost 90 years. Through intensive phylogenetic research and reassessment, the species was designated to a new genus, Pseudothielavia; the etymology of Pseudothielavia means "similar to the genus Thielavia", and this high resemblance was what contributed to the species' assignment to the genus Thielavia nine decades earlier. The fungus is mesophilic, grows abundantly at pH levels between 3.9 and 6, and is able to utilize multiple carbohydrates to support its growth. Mature Pseudothielavia terricola colonies in culture are dark brown in colour and well spread out. Pseudothielavia terricola synthesizes a variety of compounds, two of which are thielavins A and B. These compounds were determined to be strong inhibitors of prostaglandin synthesis. History and taxonomy The fungus was first isolated in culture and described by J.C. Gilman and E.V. Abbott of the Department of Dermatology of the College of Physicians and Surgeons of Columbia University from soil in 1927. The isolated culture was kept at the Dermatology department of the university under the label Coniothyrium terricola Gilman and Abbott, No. 172.2. The soil sample from which the fungus was isolated came from the American Type Culture Collection, with soil obtained from Iowa, USA. The species, first designated in the genus Coniothyrium, was soon determined to belong to the genus Thielavia by Chester Emmons in 1930. When crushed culture mounts of the species were shown to B.O. Dodge of the New York Botanical Garden, he recognized the fungus as a species he had isolated some time earlier under the label 'Thielavia n.sp.' Emmons took Dodge's culture, germinated the ascospores in new media, and compared it to culture No. 172.2. The two cultures displayed identical growth and morphological features, which contributed to the assignment of the genus Thielavia in 1930. In 2019, a study was published by Xincun Wang and his team at the Westerdijk Fungal Biodiversity Institute, which phylogenetically reassessed all members of the genus Thielavia through genetic sequencing. In the new study, the fungus Thielavia terricola was reassigned to a new genus, Pseudothielavia. Currently, the name Pseudothielavia terricola holds the binomial authority for the species, while Coniothyrium terricola and Thielavia terricola are now considered synonyms. The genera Coniothyrium and Thielavia may have been assigned to Pseudothielavia terricola due to similar defining characteristics that highly resemble those of the fungus. Coniothyrium in the broad sense is defined to be unicellular with a smooth thin cell wall, pale-brown conidia, and a pycnidial structure with a spherical cavity. Thielavia, on the other hand, is defined to have non-ostiolate, spherical, setose ascomata, a brown thin cell wall, ellipsoidal to club-shaped asci, and unicellular, brown, single-germ-pored ascospores.
The defining characteristics of Coniothyrium coincide partially with Pseudothielavia terricola, while the defining characteristics of Thielavia fit Pseudothielavia terricola almost perfectly. However, following the recent phylogenetic re-assessment of the genus Thielavia, Wang and his team discovered that Pseudothielavia terricola, although morphologically similar to the genus Thielavia, should not be assigned to Thielavia sensu stricto due to genetic and subtle morphological differences. Thus, Wang and his team assigned a new genus, Pseudothielavia, to species that closely resemble the genus Thielavia but are genetically and morphologically distinct from that genus, which includes Pseudothielavia terricola. Growth and morphology Pseudothielavia terricola is a mesophilic fungus, with an optimal growth temperature of 37 °C, a minimum growth temperature of 15 °C and a maximum growth temperature of 46 °C. Acidity and basicity also contribute greatly to the proper growth of the species; the fungus grows best in a pH range between 3.9 and 6, and terminates growth in environments with a pH higher than 7.9 or lower than 2.9. As for nutrition, the species is capable of breaking down various types of carbohydrates – such as chitin, cellulose, and poly-/tri-/di-/monosaccharides – as well as alcohol, nitrate, ammonium, and nitrogen-containing compounds to support its growth. Morphologically, colonies grown on corn meal agar, malt extract agar, and oatmeal agar at 28 °C display a well spread out, uniform white mat in 14 days. The colour of the colonies then darkens as time goes on, changing from white to brown to black, the colour at maturity. The black colour is largely due to the dark-coloured spores that form as the fungus matures. The ascomata of the fungus are submerged spherical cleistothecia, dark brown in colour and usually 60–200 μm in diameter. The ascomata are encased in a brown, semi-transparent, pseudoparenchymatous, double-membraned textura epidermoidea – a thin and tightly packed peridium. The peridium is also smooth-walled and not bristly. Inside the ascomata, the shape of the spore-bearing asci can range from pyriform to ovate to clavate to ellipsoidal. The asci are always eight-spored and evanescent (disintegrating), varying in size from 24 × 14 μm to 40 × 20 μm. The ascospores are unicellular, brown to dark green in colour, and ellipsoidal. The ascospores are also observed to have germ pores at only one end, with the other end being truncated. The dimensions of the ascospores range from 9 × 5 to 16 × 9 μm. The species presents no anamorphic (asexual) form. Physiology In addition to the species' ability to metabolize various carbohydrates and other chemical compounds for its growth, Pseudothielavia terricola also produces bioactive compounds that possess clinical research value. Thielavin A (C31H34O10) and B (C29H30O10) are two compounds that were isolated from Pseudothielavia terricola culture. These two compounds were shown to be structurally similar to depsides, which consist of three hydroxybenzoic acid groups. The melting points of the two compounds were determined to be 235–236 °C and 250 °C, respectively. They are insoluble in water but soluble in multiple organic solvents including methanol, ethanol, acetone, chloroform, and pyridine.
In a study published in 1981, thielavins A and B were demonstrated to be novel inhibitors of prostaglandin synthesis, targeting the arachidonic acid–prostaglandin H2 conversion and the prostaglandin H2–prostaglandin E2 conversion respectively. Overproduction of prostaglandin in the human body has been linked to body pain, fever, inflammation, diarrhea, painful menstruation, arthritis, and even certain forms of cancer. Habitat and ecology The fungus is a cosmopolitan species and is mainly found in soil, fecal matter, plant seeds, and roots of decaying plants. The distribution of the species is noted to concentrate along tropical regions, with documented appearances in multiple parts of Africa (such as Sudan, Sierra Leone, and Nigeria), Kuwait, Pakistan, India, Nepal, New Guinea, Japan and others. Distribution in regions distant from the tropics is also documented; the species has also been reported from countries such as the Netherlands, Britain, and the United States. In terms of habitat, the fungus has been reported to grow on a variety of soils. To list a few, the species has been found in forest soils, grassland, grass plots with Dichanthium annulatum, rice fields, and saline soil. In addition, the species has been isolated from other miscellaneous materials of plant and animal origin, such as decaying strawberry plants, decaying Cordia dichotoma fruit, barley, seeds of wheat and groundnuts, cotton, kernels of Arachis hypogaea, and the dung of various animals (monkey, sheep, cow, elephant). References Sordariales Fungi described in 1927 Cosmopolitan species Fungus species
Pseudothielavia terricola
[ "Biology" ]
1,936
[ "Cosmopolitan species", "Fungi", "Organisms by location", "Fungus species" ]
61,804,010
https://en.wikipedia.org/wiki/Hercules%20at%20the%20crossroads
Hercules at the crossroads, also known as the Choice of Hercules and the Judgement of Hercules, is an ancient Greek parable attributed to Prodicus and known from Xenophon. It concerns the young Heracles (also known to the Romans as Hercules) who is offered a choice between Vice (Kakia) and Virtue (Arete)—a life of pleasure or one of hardship and honour. In the early modern period it became a popular motif in Western art. History Classical period The parable stems from the Classical era of ancient Greece and is reported by Xenophon in Memorabilia 2.1.21–34. In Xenophon's text, Socrates tells how the young Heracles, as the hero contemplates his future, is visited by two allegorical figures, female personifications of Vice and Virtue (Ancient Greek: κακία and ἀρετή; Kakía and Areté). They offer him a choice between a pleasant and easy life or a severe but glorious life, and present their respective arguments. Xenophon credits the invention of the parable to Prodicus. He cites a precursor in Hesiod's Works and Days, which also contrasts the paths of vice and virtue. The motif then appears in a number of works by ancient Greek and Roman writers. Aristophanes used it in a humorous way in the comedy The Birds, where Heracles has to choose between kingship and a tasty meal, and almost chooses the meal. In book 15 of the epic poem Punica by Silius Italicus, the military commander Scipio Africanus appears in a situation modeled on the choice of Heracles. The literary device of a contest in dialogue appears within many different genres throughout the literature of ancient Greece. It is related to the controversy stories in the Gospel of Matthew. Early modern period In the Renaissance the story of Hercules at the crossroads became popular again, and it remained so in Baroque and Neoclassical culture. It became a part of the broader motif of psychomachia: the battle of spirits or soul war. Petrarch used it in De vita solitaria (1346) and established it in the mainstream of Renaissance humanism as a figure of the choice between a contemplative life and an active life. Petrarch had read Cicero's summary of the story in De Officiis. Like Xenophon, Cicero stresses the hero's solitude as he deliberates with himself. Four decades after Petrarch's adaptation, Coluccio Salutati reintroduced the original moral choice between Virtus and Voluptas, using Cicero's Latin words. Famous examples from the visual arts include Albrecht Dürer's print Hercules at the Crossroads (1498), Paolo Veronese's Allegory of Virtue and Vice (1565), Annibale Carracci's The Choice of Hercules (1596), Gerard de Lairesse's Hercules between Virtue and Vice (1685) and Mariano Salvador Maella's mural in the Royal Palace of Madrid, Hercules between Virtue and Vice (1765–66). The story appears in musical compositions such as Laßt uns sorgen, laßt uns wachen by Johann Sebastian Bach and The Choice of Hercules by George Frideric Handel. See also Matthew 7:13 Virtù The Wayfarer References Further reading Erwin Panofsky. Hercules am Scheidewege und andere antike Bildstoffe in der neueren Kunst. (Studien der Bibliothek Warburg 18). Teubner, Leipzig/Berlin 1930. External links Mythology of Heracles Visual motifs Parables Virtue Iconography
Hercules at the crossroads
[ "Mathematics" ]
754
[ "Symbols", "Visual motifs" ]
61,804,415
https://en.wikipedia.org/wiki/Plant%20Breeding%20Institute
The Plant Breeding Institute was an agricultural research organisation in Cambridge in the United Kingdom between 1912 and 1987. Founding The institute was established in 1912 as part of the School of Agriculture at the University of Cambridge. Rowland Biffen was the first director, and worked closely with William Bateson, who was leading studies of heredity in Cambridge following the rediscovery of the pioneering genetic research of Gregor Mendel in 1900. Biffen began studying cereal breeding in the early 1900s with the aim of producing improved varieties for farmers and millers, and also to test whether Mendel's laws applied to wheat. He demonstrated that resistance to yellow rust was a dominant trait and this culminated in 1910 in the release of the rust-resistant variety Little Joss, which was widely grown for decades and used as a parent for many other varieties. The institute's site was to the west of Cambridge, and it shared land with the School of Agriculture that is today the site of the North West Cambridge development. The first research students were J. W. Lesley – who later made important contributions to the genetics of the tomato – and Frank Engledow, who later became Drapers Professor of Agriculture. Engledow described the facilities as "two acres of land on Gravel Hill Farm, a cage, a not very large shed and a small greenhouse." Work was initially mundane, consisting of recording the yields of different wheat varieties, but led to the release of Yeoman in 1916, which combined high yields with strength (the quality required for bread-making). During the First World War, research at the institute ground to a halt, but it expanded rapidly afterwards and into the 1920s, when two new research stations were attached to it: the Horticultural Research Station in 1922 and the Potato Virus Station in 1926. Redcliffe N. Salaman was the director of the latter until 1939. The National Institute of Agricultural Botany was established in 1919 in order to separate the commercial aspects of varietal improvement from the more academic pursuits at the PBI. NIAB would distribute the seed of new varieties produced at the PBI, but only after testing showed them to be distinct and superior to existing varieties. This arrangement effectively discouraged workers at the PBI from developing new varieties and freed them to study plant physiology and genetics. In the 1920s, Engledow collaborated with the statistician Udny Yule to develop techniques to analyse crop yields and published a series of papers on yield formation and associated traits in cereals. Salaman developed methods to test for the presence of viruses in seed potatoes and techniques to build up stocks of virus-free seed potatoes, an approach adopted by many other countries. Yeoman II was released in 1925 but was a commercial failure, and marked the high point of Mendelian thinking in UK plant breeding. Biffen retired in 1936 and recommended Herbert Hunter, a barley breeder with close links to Guinness who had worked at the PBI since 1922, become the new director. Unlike his predecessor, Hunter disputed the necessity of Mendelian thinking to varietal improvement, instead believing that success relied on finding parents with desirable traits and crossing them with existing popular varieties. The appointment was doubted by Alfred Daniel Hall, the founder of Wye College, because Hunter was "a plant breeder and not a geneticist".
During the 1930s the PBI released high-yielding barley varieties, but their poor malting quality meant that they were not adopted by farmers. George Douglas Hutton Bell was the director from 1940 to 1971. From the late 1940s to the 1960s, the low price of barley in comparison to meat made it an attractive animal feed, creating a niche for the PBI's barley varieties that led to them dominating the UK barley market. Move to Trumpington In 1948 it was announced that the institute would move from its site on Cambridge University Farm to a new site with improved facilities and more staff. At the same time, management would be transferred away from the university to a new independent body. The new site was opened in Trumpington, 2 miles south of Cambridge, in 1955. Ralph Riley was the director from 1971 to 1978. Richard B. Flavell joined in 1969 and built up a large department investigating plant molecular genetics. Privatisation The institute was privatised in 1987 as part of Margaret Thatcher's government's policy of divesting from profitable industries and its position that "near-market" agricultural research should be funded by industry rather than the state. At the time the institute's wheat varieties had 90% of the UK market and 86% of the cereal acreage. The plant breeding parts of the institute were sold to Unilever for £68m, and sold on a year later to Monsanto for £350m. The more research-orientated parts were moved to form 'The Cambridge Laboratory' in Norwich, which later merged with the John Innes Centre. In 2004, Monsanto sold the wheat breeding part of the business to RAGT Seeds and in 2008 the institute moved from Trumpington to a new headquarters in Essex, between Sawston and Saffron Walden. The impact of the privatisation on wheat breeding has been studied by several authors. Notable cultivars Barley Pioneer – the first winter-hardy malting barley Proctor – a spring barley Maris Otter – a cross of Pioneer and Proctor that remains popular with craft brewers Potatoes Maris Peer – an early potato variety Maris Piper – a maincrop potato variety with resistance to the potato cyst nematode Globodera rostochiensis; the most popular UK potato variety since 1980 Wheat Maris Wigeon Yeoman Maris Huntsman References Further reading The Journal of the Ministry of Agriculture, July 1922 – "The School of Agriculture of the University of Cambridge Part II", pp. 296–302 Agricultural research institutes in the United Kingdom 1912 establishments in England 1987 disestablishments in England Institutions of the University of Cambridge Plant breeding John Innes Centre
Plant Breeding Institute
[ "Chemistry" ]
1,198
[ "Plant breeding", "Molecular biology" ]
61,804,620
https://en.wikipedia.org/wiki/NGC%202404
NGC 2404 is a massive H II region inside NGC 2403, a spiral galaxy in Camelopardalis. It was discovered on February 2, 1886 by Guillaume Bigourdan. NGC 2404 is approximately 940 ly in diameter, making it one of the largest H II regions so far known. It is the largest H II region in NGC 2403, and lies at the outskirts of the galaxy, making for a striking similarity with NGC 604 in M33, both in size and location in the host galaxy. This H II region contains 30–40 Wolf-Rayet stars, and unlike the Tarantula Nebula, but similar to NGC 604, NGC 2404's open cluster is probably much less compact, so it likely resembles a large stellar association. This H II region is probably only a few million years old. References H II regions Camelopardalis 2404
NGC 2404
[ "Astronomy" ]
186
[ "Camelopardalis", "Constellations" ]
61,804,794
https://en.wikipedia.org/wiki/Frances%20M.%20Ross
Frances Mary Ross is the Ellen Swallow Richards Professor in Materials Science and Engineering at Massachusetts Institute of Technology. Her work involves the use of in situ transmission electron microscopy to study nanostructure formation. In 2018 she was awarded the International Federation of Societies for Microscopy Hatsujiro Hashimoto Medal. Ross is a Fellow of the American Association for the Advancement of Science, the American Physical Society, the Microscopy Society of America and the Royal Microscopical Society. Early life and education Ross studied Natural Sciences at the University of Cambridge. She moved to the Department of Materials for her doctoral studies, and completed a PhD in materials and metallurgy in 1989. Her doctoral work considered transmission electron microscopy of silicon oxides. She was appointed as a postdoctoral research associate at AT&T Bell Laboratories in 1990. Here she began working with electron microscopy to study silicon oxidation and dislocation dynamics. Research and career In 1992 Ross started her academic career as a staff scientist in the University of California, Berkeley National Center for Electron Microscopy. She moved to the Thomas J. Watson Research Center in 1997 where she worked as a research staff member. Here she developed various microscopy techniques, including in situ environmental transmission electron microscopy (TEM). By monitoring the growth of materials in situ it is possible to understand the nucleation and growth of materials, including observing individual nucleation events and transient intermediate states. She can change the growth conditions (for example temperature, pressure or choice of solvent) and establish how these variables impact the growth of materials. At IBM Ross monitored self-assembly mechanisms, including the processes by which nanowires form using chemical vapor deposition and the growth of quantum dots. By controlling the growth of nanowires it is possible to form complicated structures, which can be used in transistors, batteries and sensors. To grow the nanowires in an electron microscope Ross uses small catalytic particles, a flat substrate and a gas that contains silicon. She heats the substrate to 500 °C, at which temperature the gas begins to react with the metal catalysts, depositing silicon beneath the particles. She has demonstrated this nanowire growth for silicon, germanium, gallium arsenide and gallium phosphide. These nanowires can be used to bridge electrical contacts, allowing Ross to understand the relationship between physical structure and electronic performance. She joined the faculty at Massachusetts Institute of Technology in 2018. Here she is developing a TEM for two-dimensional materials in one of the MIT quiet rooms. These rooms minimise interference from electromagnetic fields and temperature fluctuations. Ross plans to investigate where three-dimensional nanocrystals grow on two-dimensional materials. She has previously demonstrated that it is possible to use electrochemical electron-beam lithography to write, read and erase these nanocrystals. Two-dimensional materials are difficult to study using conventional equipment because electron microscopes can damage their structures. In an effort to avoid this, Ross has proposed using lower voltage electrons as well as a high vacuum. In 2018 Ross was awarded the Hatsujiro Hashimoto Medal in recognition of her work on electron microscopy.
Awards and honours Her awards and honours include: 1999 Institute of Physics Moseley Medal 2000 Materials Research Society Outstanding Young Investigator Award 2003 Microscopy Society of America Burton Medal 2011 Fellow of the American Physical Society 2012 Fellow of the Microscopy Society of America 2017 Honorary Fellow of the Royal Microscopical Society 2018 Hatsujiro Hashimoto Medal 2019 Gerhard Ertl Lecture Award Selected publications Her publications include: References American materials scientists Alumni of the University of Cambridge MIT School of Engineering faculty Year of birth missing (living people) Living people Women materials scientists and engineers
Frances M. Ross
[ "Materials_science", "Technology" ]
737
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]
61,805,032
https://en.wikipedia.org/wiki/Mr.%20Meeseeks
Mr. Meeseeks is a recurring fictional species in the American animated television series Rick and Morty. Created by Justin Roiland and Dan Harmon and based on the title character from Scud: The Disposable Assassin by Rob Schrab, Meeseeks are a powder-blue-skinned species of humanoids (each of whom is named "Mr. Meeseeks") who are created to serve a single purpose which they will go to any length to fulfill. Each brought to life by a "Meeseeks Box", they typically live for no more than a few hours in a constant state of pain, vanishing upon completing their assigned task so as to end their own existence and thereby end their suffering; as such, the longer an individual Meeseeks remains alive, the more insane and unhinged they become. A variant dubbed Mr. Youseeks can be summoned to aid in unreachable chores in the virtual reality game Rick and Morty: Virtual Rick-ality, while a red-skinned smoking Kirkland-brand Meeseeks is introduced in the fourth season episode "Edge of Tomorty: Rick Die Rickpeat". In June 2019, an Oni Press Rick and Morty comic series spin-off one-shot, Rick and Morty Presents: Mr. Meeseeks was released, following a Mr. Meeseeks who, after being called upon by Summer Smith to assist with her chili recipe, accidentally tasks another Meeseeks to bring them on a journey to find meaning to life despite their finite existence. A drug mule Meeseeks known as Mr. Sick is also featured as a henchman in the comic series arcs The Ricky Horror Peacock Show and Rick Revenge Squad, while a Meeseeks named Master Meeseeks serves as the main antagonist of the limited series Rick and Morty: Crisis on C-137. In October 2023, Mr. Meeseeks, along with Summer, became playable characters in Fortnite. A second solo series, Rick and Morty: Meeseeks, P.I., began publication in November 2023, following Meeseeks, P.I. teaming up with Jerry Smith. Biography Each Mr. Meeseeks is a tall, blue-skinned humanoid figure with elongated, narrow limbs, a large, bulbous head, beady black eyes, and a high-pitched voice. Some versions of him have one or more small patches of orange hair on their head, while others are bald. Mr. Meeseeks usually exhibits a friendly, cheerful, and helpful demeanor, willing to assist the one who brought him into existence, however possible, in order to ensure his own death. Ideally, Mr. Meeseeks will dispatch the simple task given to him, and disappear shortly after its completion. However, if given a task that is outside of Mr. Meeseeks' capabilities, he will not be able to cease existing. This unusual lifespan will cause Mr. Meeseeks' attitude and mental state to worsen dramatically, the relatively long and torturous existence quickly driving him to violent behavior and outright insanity, including creating more Meeseeks to assist themselves if they have access to a Meeseeks Box. All Mr. Meeseeks will typically say "I'm Mr. Meeseeks, look at me!" upon being created. Television series In "Meeseeks and Destroy", Jerry, Beth and Summer Smith are given a Meeseeks Box by Rick Sanchez to provide solutions to their problems: for Jerry, taking two strokes off his golf game, for Beth, making her feel better about herself, and for Summer, to make her popular at her high school. While Beth and Summer are successful, Jerry ultimately summons dozens of Meeseeks while unable to achieve his goal.
As Jerry goes to dinner with Beth, the increasingly desperate Meeseeks initially blame each other for their predicament and fight over the correct solution, concluding that their only hope is a loophole: to take all the strokes off Jerry's golf game by killing him. Confronting Jerry in a restaurant, the Meeseeks take the other patrons hostage to convince Jerry to come out. Before he can give in, however, Beth inspires Jerry and encourages him to try his golf swing one last time. Jerry uses a broken shelf and tomato in place of a golf club and ball to show that his game has improved, and the delighted Meeseeks cease to exist. In the background of "Mortynight Run", a Mr. Meeseeks can be seen at the Blips & Chitz arcade, helping an alien win a video game before disappearing when they eventually win. A pair of Mr. Meeseeks are visible in the Collector's menagerie in a segment of "Morty's Mind Blowers". In "Edge of Tomorty: Rick Die Rickpeat", Rick summons a Meeseeks to kill an alternate dimension version of Morty, known as Fascist Morty, while the power-mad normal Morty summons several Meeseeks to protect him from the local police. Upon returning to his laboratory, Rick notes that Morty has stolen his Meeseeks Boxes, leaving only his Kirkland-brand Meeseeks Boxes, which summon a surly and uncooperative red-skinned Mr. Meeseeks variant. In a flashforward in "Never Ricking Morty", President Morty is shown to command an army of Meeseeks amongst his various other armies. Comic series Presents In June 2019, an Oni Press Rick and Morty comic series spin-off one-shot, Rick and Morty Presents: Mr. Meeseeks was released, following a Mr. Meeseeks who, after being called upon by Summer Smith to assist with her chili recipe, accidentally tasks another Meeseeks to bring them on a journey to find meaning to life despite their finite existence. A previous one-shot, The Vindicators, also featured Mr. Meeseeks, with Rick using one to unleash an army to defeat Boon (an alternate evil and all-powerful Noob Noob). Main series A drug mule Meeseeks known as Mr. Sick is featured as a recurring antagonist in the Rick and Morty comic series, working for Peacock Jones to seek revenge on Rick after failing to deliver the singular drug shipment he was created for, culminating in his death in Rick Revenge Squad on being tricked into finally making the delivery. Consequently, an alternate Jones uses a Meeseeks Box to create himself an army to kill Rick in The Rickoning, which Rick tricks into dissolving by having most of them "kill" holograms of himself, and the last one "kill" his character in a video game. Crisis on C-137 A Meeseeks called Master Meeseeks is featured as the main antagonist of the limited series Rick and Morty: Crisis on C-137, which began publication in August 2022. In the series, on surviving the events of "Meeseeks and Destroy", Master Meeseeks gathers together the enemies of the Smith family in an attempt to take his revenge upon them. Meeseeks, P.I. A limited series, Rick and Morty: Meeseeks, P.I., began publication in November 2023, following a Meeseeks hired by Jerry as a private detective (named Meeseeks, P.I.) to help him find the missing remote to the interdimensional cable television box before Rick finds out, the two uncovering a massive interdimensional cable conspiracy in the process.
Video games Meeseeks Boxes are featured as power-ups and loot boxes in the free-to-play role-playing video game Pocket Mortys, producing Meeseeks for the player to use in battles, as well as in the virtual reality game Rick and Morty: Virtual Rick-ality, where a variant dubbed Mr. Youseeks can be summoned to aid in unreachable chores. In 2022, Mr. Meeseeks Ace was added as a character skin to Tom Clancy's Rainbow Six Siege. Development The character was created by Justin Roiland and Dan Harmon, who first met at Channel 101 in the early 2000s. In 2006, Roiland created The Real Animated Adventures of Doc and Mharti, an animated short parodying the Back to the Future characters Doc Brown and Marty McFly and the precursor to Rick and Morty. Its second episode features the pair encountering the title character Scud of Scud: The Disposable Assassin, whom Harmon had written for from 1994 to 1998, and served to promote that series' 2008 revival. At Stan Lee's L.A. Comic Con: Comikaze in 2016, Harmon confirmed that the Mr. Meeseeks character had been "ripped off" from Scud. After the idea for Rick and Morty, in the form of Doc and Mharti, was brought up to Adult Swim and a full television series entered development, Roiland had the idea for the episode "Meeseeks and Destroy" when, frustrated with the progress of a writing session, he suggested the introduction of a character who would blurt "I'm Mister Meeseeks, look at me!" in a "most annoying" voice, which was then combined with Harmon's variation on Rob Schrab's Scud character, each member of the species uttering the catchphrase upon being created or making a point. Series writer Ryan Ridley also cited The Smurfs as an inspiration for the character. Merchandise On August 3, 2016, Cryptozoic Entertainment released Mr. Meeseeks' Box o' Fun, a game combining elements of dice games and truth or dare focused around the Mr. Meeseeks character. On September 23, 2019, Entertainment Earth launched a range of Mr. Meeseeks jack-in-the-boxes and dolls to positive reception. On September 10, 2020, Kidrobot launched a Mr. Meeseeks-themed line of eight-inch dunny figurines, while Pringles launched a Mr. Meeseeks-themed flavor called "Look at Me! I'm Cheddar & Sour Cream", with the product description reading "Mr. Meeseeks' tall, powder-blue figure is naturally emulated in this narrow can of Pringles crisps. Since existence is pain for a Meeseeks, you better eat these crisps fast—which should be no problem given how delicious these flavors are." Reception The character has received positive reception. Looper described Mr. Meeseeks as "a symbol of the series", citing "his appeal [a]s the genie-in-a-bottle fantasy element" that the character provides as "a meditation on the nature of existence, setting the template for the show's ever-more frequent forays into philosophical contemplation." Screen Rant praised the character as a "cult hero", while Inverse compared the character to Michael Keaton's 1996 film Multiplicity, praising their "story circle". The Daily Dot described "[p]art of the charm of Meeseeks [a]s that their looping dialogue is so quotable, but fans also seem to relate to Mr.
Meeseeks' deterioration from happy and helpful to frustrated and murderous. Meeseeks' dark reality—that they're chipper despite living in pain and wanting to die—also makes them extra sympathetic." GameSpot has stated that "there has been no character who has caused as lasting an impact [on the Rick and Morty franchise] as Mr. Meeseeks", while Jake Lahut praised the character as "[r]esembling something between Gumby and the Smurfs, [as] a species who embod[ies] Aristotle's key theory of teleology, the idea of design and purpose in each material thing." With regard to their initial portrayal, Comic Book Resources described the characters as "very unique and fascinating to watch play out onscreen", while CinemaBlend collectively ranked the species as the second-best side character in the series behind President Morty. Den of Geek praised the character as a vindication of "[the] sort of outwardly happy folks [who] live a rich inner life of endless torment" and the concept of "@#%&*! Smilers", as titled by the Aimee Mann album of the same name. References Animated characters introduced in 2014 Extraterrestrial characters in television Fictional extraterrestrial species and races Fictional henchmen Fictional kidnappers Fictional suicides Fictional superorganisms Fictional warrior races Male characters in animated television series Rick and Morty characters Television characters introduced in 2014 Video game bosses
Mr. Meeseeks
[ "Biology" ]
2,803
[ "Superorganisms", "Fictional superorganisms" ]
61,805,833
https://en.wikipedia.org/wiki/Period-luminosity%20relation
In astronomy, a period-luminosity relation is a relationship linking the luminosity of pulsating variable stars with their pulsation period. The best-known relation is the direct proportionality law holding for Classical Cepheid variables, sometimes called the Leavitt Law. Discovered in 1908 by Henrietta Swan Leavitt, the relation established Cepheids as foundational benchmarks for scaling galactic and extragalactic distances. The physical model explaining Leavitt's law for classical Cepheids is called the kappa mechanism. History Leavitt, a graduate of Radcliffe College, worked at the Harvard College Observatory as a "computer", tasked with examining photographic plates in order to measure and catalog the brightness of stars. Observatory Director Edward Charles Pickering assigned Leavitt to the study of variable stars of the Small and Large Magellanic Clouds, as recorded on photographic plates taken with the Bruce Astrograph of the Boyden Station of the Harvard Observatory in Arequipa, Peru. She identified 1777 variable stars, of which she classified 47 as Cepheids. In 1908 she published her results in the Annals of the Astronomical Observatory of Harvard College, noting that the brighter variables had the longer period. Building on this work, Leavitt looked carefully at the relation between the periods and the brightness of a sample of 25 of the Cepheid variables in the Small Magellanic Cloud, published in 1912. This paper was communicated and signed by Edward Pickering, but the first sentence indicates that it was "prepared by Miss Leavitt". In the 1912 paper, Leavitt graphed the stellar magnitude versus the logarithm of the period and determined that, in her own words, "a straight line can readily be drawn among each of the two series of points corresponding to maxima and minima, thus showing that there is a simple relation between the brightness of the variables and their periods." Using the simplifying assumption that all of the Cepheids within the Small Magellanic Cloud were at approximately the same distance, the apparent magnitude of each star is equivalent to its absolute magnitude offset by a fixed quantity depending on that distance. This reasoning allowed Leavitt to establish that the logarithm of the period is linearly related to the logarithm of the star's average intrinsic optical luminosity (which is the amount of power radiated by the star in the visible spectrum). At the time, there was an unknown scale factor in this brightness since the distances to the Magellanic Clouds were unknown. Leavitt expressed the hope that parallaxes to some Cepheids would be measured; one year after she reported her results, Ejnar Hertzsprung determined the distances of several Cepheids in the Milky Way, and with this calibration the distance to any Cepheid could then be determined. The relation was used by Harlow Shapley in 1918 to investigate the distances of globular clusters and the absolute magnitudes of the cluster variables found in them. It was hardly noted at the time that there was a discrepancy in the relations found for several types of pulsating variable all known generally as Cepheids. This discrepancy was confirmed by Edwin Hubble's 1931 study of the globular clusters around the Andromeda Galaxy. The solution was not found until the 1950s, when it was shown that population II Cepheids were systematically fainter than population I Cepheids. The cluster variables (RR Lyrae variables) were fainter still. The relations Period-luminosity relations are known for several types of pulsating variable stars: type I Cepheids; type II Cepheids; RR Lyrae variables; Mira variables; and other long-period variable stars.
Classical Cepheids The Classical Cepheid period-luminosity relation has been calibrated by many astronomers throughout the twentieth century, beginning with Hertzsprung. Calibrating the period-luminosity relation has been problematic; however, a firm Galactic calibration was established by Benedict et al. 2007 using precise HST parallaxes for 10 nearby classical Cepheids. Also, in 2008, ESO astronomers estimated with a precision within 1% the distance to the Cepheid RS Puppis, using light echoes from a nebula in which it is embedded. However, that latter finding has been actively debated in the literature. The following relationship between a Population I Cepheid's period P and its mean absolute magnitude Mv was established from Hubble Space Telescope trigonometric parallaxes for 10 nearby Cepheids: Mv = −2.43(log10(P) − 1) − 4.05, with P measured in days. Similar calibrated relations can also be used to calculate the distance to classical Cepheids. Impact Classical Cepheids (also known as Population I Cepheids, type I Cepheids, or Delta Cepheid variables) undergo pulsations with very regular periods on the order of days to months. Cepheid variables were discovered in 1784 by Edward Pigott, first with the variability of Eta Aquilae, and a few months later by John Goodricke with the variability of Delta Cephei, the eponymous star for classical Cepheids. Most of the Cepheids were identified by the distinctive light curve shape with a rapid increase in brightness and a sharp turnover. Classical Cepheids are 4–20 times more massive than the Sun and up to 100,000 times more luminous. These Cepheids are yellow bright giants and supergiants of spectral class F6 – K2 and their radii change by around 10% during a pulsation cycle. Leavitt's work on Cepheids in the Magellanic Clouds led her to discover the relation between the luminosity and the period of Cepheid variables. Her discovery provided astronomers with the first "standard candle" with which to measure the distance to faraway galaxies. Cepheids were soon detected in other galaxies, such as Andromeda (notably by Edwin Hubble in 1923–24), and they became an important part of the evidence that "spiral nebulae" are independent galaxies located far outside of the Milky Way. Leavitt's discovery provided the basis for a fundamental shift in cosmology, as it prompted Harlow Shapley to move the Sun from the center of the galaxy in the "Great Debate" and Hubble to move the Milky Way galaxy from the center of the universe. With the period-luminosity relation providing a way to accurately measure distances on an inter-galactic scale, a new era in modern astronomy unfolded with an understanding of the structure and scale of the universe. The discovery of the expanding universe by Georges Lemaître and Hubble was made possible by Leavitt's groundbreaking research. Hubble often said that Leavitt deserved the Nobel Prize for her work, and indeed she was nominated by a member of the Swedish Academy of Sciences in 1924, although as she had died of cancer three years earlier she was not eligible. (The Nobel Prize is not awarded posthumously.) References Large-scale structure of the cosmos Astrometry Classical Cepheid variables
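The calibration quoted above turns a Cepheid's easily measured period into an absolute magnitude, and the distance modulus m − M = 5 log10(d / 10 pc) then turns the gap between apparent and absolute magnitude into a distance. A minimal Python sketch of that two-step calculation follows; the constants are the Benedict et al. values quoted above, the example period and apparent magnitude are hypothetical illustration values, and interstellar extinction is ignored.

```python
import math

def cepheid_absolute_magnitude(period_days: float) -> float:
    """Mean absolute V magnitude from the calibration quoted above:
    Mv = -2.43 * (log10(P) - 1) - 4.05, with P in days."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical example: a classical Cepheid with a 10-day period and a
# mean apparent magnitude of 14.0 (values chosen only for illustration).
M_v = cepheid_absolute_magnitude(10.0)  # log10(10) = 1, so Mv = -4.05
d = distance_parsecs(14.0, M_v)         # m - M = 18.05 -> d ~ 4.1e4 pc
print(f"Mv = {M_v:.2f}, distance = {d:.3e} pc")
```

Running the sketch gives a distance of roughly 41,000 parsecs, showing how a single period measurement plus photometry fixes the distance scale.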
Period-luminosity relation
[ "Astronomy" ]
1,453
[ "Astrometry", "Astronomical sub-disciplines" ]
61,807,728
https://en.wikipedia.org/wiki/Crist%C3%B3bal%20de%20Losada%20y%20Puga
Cristóbal de Losada y Puga (14 April 1894 – 30 August 1961) was a Peruvian mathematician and mining engineer. He was Minister of Education of Peru in the government of José Luis Bustamante y Rivero and Director of the National Library of Peru between 1948 and 1961. Biography He was born in New York, son of Enrique Cristóbal de Losada Plissé and Amalia Natividad Puga y Puga. He was two years old when, in 1896, his father died, so he was taken back to Peru and settled in Cajamarca, the land of his mother's family. There he completed his primary and secondary studies. In 1913 he went to Lima to study at the National School of Engineers (now the National University of Engineering), obtaining his title of Mining Engineer in 1919. His first professional work was with the Corps of Mining Engineers, until 1923. He was also admitted to the Faculty of Sciences of the National University of San Marcos. In 1922, he graduated with a bachelor's degree and later in 1923 he obtained his diploma as a doctor of mathematical sciences from this institution, with a thesis "On rolling curves". He dedicated himself to teaching. In the Chorrillos Military School he was professor of Arithmetic, Descriptive Geometry and Elemental Mechanics (1920–1926 and 1931–1940). In the Faculty of Sciences of San Marcos he was Professor of Differential and Integral Calculus (1924–1926), and of Calculus of Probabilities and Mathematical Physics (1935–1939). In the National School of Engineers he was professor of Rational Mechanics, Resistance of Materials and Infinitesimal Calculus (1930–1931), a post he held until the School was closed for political reasons. In 1924 he was a speaker at the International Congress of Mathematicians in Toronto. In 1931 he assumed the presidency of the National Society of Industries. In 1933 he became a professor at the Faculty of Science and Engineering at the Pontifical Catholic University of Peru, where he taught Analytical Geometry, Infinitesimal Calculus, Mechanics and Resistance of Materials, until 1953. He became dean of the Faculty (1939–1946 and 1948–1950). He was also director of the Magazine of the Catholic University (1938–1945) and even came to serve as prorector (1941–1946). President José Luis Bustamante y Rivero summoned him to serve as Minister of Public Education; he served in this role from January 12 to October 30, 1947. On July 12, 1948 he was appointed Director of the National Library of Peru, a position in which he remained until his death. During his long period at the head of that institution he directed the Fénix magazine. He was a member of the National Academy of Exact, Physical and Natural Sciences of Peru, of the Peruvian Association for the Progress of Science and of the Peruvian Academy of Language. He was also a member of the Royal Academy of Physical and Natural Sciences of Madrid, the Royal Spanish Mathematical Society, the French Physical Society and the Mathematical Association of America. He married María Luisa Marrou y Correa, with whom he had five children.
Works Curso de Análisis matemático (3 volumes, 1945–1954) Las anomalías de la gravedad: su interpretación geológica, sus aplicaciones mineras (1917; expanded in 1920) Contribución a la teoría matemática de las clépsidras y de los filtros (1922) Sobre las curvas de rodadura (1923) Mecánica racional (1930) Curso de Cálculo Infinitesimal (1938) Teoría y técnica de la fotoelastisimetría (1941) Galileo (1942) Copérnico (1943) References Bibliography 1894 births 1961 deaths Peruvian mathematicians 20th-century Peruvian engineers Ministers of education of Peru National University of Engineering alumni National University of San Marcos alumni Academic staff of the National University of San Marcos Academic staff of the Pontifical Catholic University of Peru 20th-century mathematicians Mining engineers Deans (academic) Recipients of the Civil Order of Alfonso X, the Wise Peruvian expatriates in the United States
Cristóbal de Losada y Puga
[ "Engineering" ]
851
[ "Mining engineering", "Mining engineers" ]
61,808,035
https://en.wikipedia.org/wiki/Closed%20graph%20property
In mathematics, particularly in functional analysis and topology, closed graph is a property of functions. A function f : X → Y between topological spaces has a closed graph if its graph is a closed subset of the product space X × Y. A related property is open graph. This property is studied because there are many theorems, known as closed graph theorems, giving conditions under which a function with a closed graph is necessarily continuous. One particularly well-known class of closed graph theorems are the closed graph theorems in functional analysis. Definitions Graphs and set-valued functions Definition and notation: The graph of a function f : X → Y is the set Gr f := {(x, f(x)) : x ∈ X}. Notation: If Y is a set then the power set of Y, which is the set of all subsets of Y, is denoted by 2^Y or P(Y). Definition: If X and Y are sets, a set-valued function in Y on X (also called a Y-valued multifunction on X) is a function F : X → 2^Y with domain X that is valued in 2^Y. That is, F is a function on X such that for every x ∈ X, F(x) is a subset of Y. Some authors call a function F : X → 2^Y a set-valued function only if it satisfies the additional requirement that F(x) is not empty for every x ∈ X; this article does not require this. Definition and notation: If F : X → 2^Y is a set-valued function in a set Y then the graph of F is the set Gr F := {(x, y) ∈ X × Y : y ∈ F(x)}. Definition: A function f : X → Y can be canonically identified with the set-valued function F : X → 2^Y defined by F(x) := {f(x)} for every x ∈ X, where F is called the canonical set-valued function induced by (or associated with) f. Note that in this case, Gr f = Gr F. Open and closed graph We give the more general definition of when a Y-valued function or set-valued function defined on a subset S of X has a closed graph since this generality is needed in the study of closed linear operators that are defined on a dense subspace S of a topological vector space X (and not necessarily defined on all of X). This particular case is one of the main reasons why functions with closed graphs are studied in functional analysis. Assumptions: Throughout, X and Y are topological spaces, S ⊆ X, and f is a Y-valued function or set-valued function on S (i.e. f : S → Y or f : S → 2^Y). X × Y will always be endowed with the product topology. Definition: We say that f has a closed graph in X × Y if the graph of f, Gr f, is a closed subset of X × Y when X × Y is endowed with the product topology. If S = X or if X × Y is clear from context then we may omit writing "in X × Y". Note that we may define an open graph, a sequentially closed graph, and a sequentially open graph in similar ways. Observation: If f : S → Y is a function and F : S → 2^Y is the canonical set-valued function induced by f (i.e. F is defined by F(x) := {f(x)} for every x ∈ S) then since Gr f = Gr F, f has a closed (resp. sequentially closed, open, sequentially open) graph in X × Y if and only if the same is true of F. Closable maps and closures Definition: We say that the function (resp. set-valued function) f is closable in X × Y if there exists a subset D ⊆ X containing S and a function (resp. set-valued function) F : D → Y whose graph is equal to the closure of the set Gr f in X × Y. Such an F is called a closure of f in X × Y, is denoted by f̄, and necessarily extends f. Additional assumptions for linear maps: If in addition, X and Y are topological vector spaces, S is a vector subspace of X, and f : S → Y is a linear map, then to call f closable we also require that the set D be a vector subspace of X and the closure f̄ of f be a linear map. Definition: If f is closable on S then a core or essential domain of f is a subset C ⊆ S such that the closure in X × Y of the graph of the restriction f|C of f to C is equal to the closure of the graph of f in X × Y (i.e. the closure of Gr f in X × Y is equal to the closure of Gr f|C in X × Y). Closed maps and closed linear operators Definition and notation: When we write f : D(f) ⊆ X → Y then we mean that f is a Y-valued function with domain D(f) where D(f) ⊆ X. If we say that f : D(f) ⊆ X → Y is closed (resp. sequentially closed) or has a closed graph (resp. has a sequentially closed graph) then we mean that the graph of f is closed (resp. sequentially closed) in X × Y (rather than in D(f) × Y). When reading literature in functional analysis, if f : X → Y is a linear map between topological vector spaces (TVSs) (e.g. Banach spaces) then "f is closed" will almost always mean the following: Definition: A map f : X → Y is called closed if its graph is closed in X × Y. In particular, the term "closed linear operator" will almost certainly refer to a linear map whose graph is closed. Otherwise, especially in literature about point-set topology, "f is closed" may instead mean the following: Definition: A map f : X → Y between topological spaces is called a closed map if the image of a closed subset of X is a closed subset of Y. These two definitions of "closed map" are not equivalent. If it is unclear, then it is recommended that a reader check how "closed map" is defined by the literature they are reading. Characterizations Throughout, let X and Y be topological spaces, with X × Y endowed with the product topology. Function with a closed graph If f : X → Y is a function then the following are equivalent: (1) f has a closed graph (in X × Y); (2) (definition) the graph of f, Gr f, is a closed subset of X × Y; (3) for every x ∈ X and net (x_i) in X such that x_i → x in X, if y ∈ Y is such that the net (f(x_i)) → y in Y then y = f(x). Compare this to the definition of continuity in terms of nets, which recall is the following: for every x ∈ X and net (x_i) in X such that x_i → x in X, f(x_i) → f(x) in Y. Thus to show that the function f has a closed graph we may assume that (f(x_i)) converges in Y to some y ∈ Y (and then show that y = f(x)) while to show that f is continuous we may not assume that (f(x_i)) converges in Y to some y ∈ Y and we must instead prove that this is true (and moreover, we must more specifically prove that (f(x_i)) converges to f(x) in Y). If Y is a Hausdorff space that is compact, then we may add to this list: (4) f is continuous. If both X and Y are first-countable spaces then we may add to this list: (5) f has a sequentially closed graph (in X × Y). Function with a sequentially closed graph If f : X → Y is a function then the following are equivalent: (1) f has a sequentially closed graph (in X × Y); (2) (definition) the graph of f is a sequentially closed subset of X × Y; (3) for every x ∈ X and sequence (x_i) in X such that x_i → x in X, if y ∈ Y is such that the sequence (f(x_i)) → y in Y then y = f(x). Set-valued function with a closed graph If F : X → 2^Y is a set-valued function between topological spaces X and Y then the following are equivalent: (1) F has a closed graph (in X × Y); (2) (definition) the graph of F is a closed subset of X × Y. If Y is compact and Hausdorff then we may add to this list: (3) F is upper hemicontinuous and F(x) is a closed subset of Y for all x ∈ X. If both X and Y are metrizable spaces then we may add to this list: (4) for all x ∈ X, y ∈ Y, and sequences (x_i) in X and (y_i) in Y such that x_i → x in X and y_i → y in Y, with y_i ∈ F(x_i) for all i, then y ∈ F(x). Sufficient conditions for a closed graph If f : X → Y is a continuous function between topological spaces and if Y is Hausdorff then f has a closed graph in X × Y. However, if f : X → Y is a function between Hausdorff topological spaces, then it is possible for f to have a closed graph in X × Y but not be continuous. Closed graph theorems: When a closed graph implies continuity Conditions that guarantee that a function with a closed graph is necessarily continuous are called closed graph theorems. Closed graph theorems are of particular interest in functional analysis where there are many theorems giving conditions under which a linear map with a closed graph is necessarily continuous. If f : X → Y is a function between topological spaces whose graph is closed in X × Y and if Y is a compact space then f is continuous. Examples For examples in functional analysis, see continuous linear operator. Continuous but not closed maps Let X denote the real numbers with the usual Euclidean topology and let Y denote the real numbers with the indiscrete topology (where note that Y is not Hausdorff and that every function valued in Y is continuous). Let f : X → Y be defined by f(0) = 1 and f(x) = 0 for all x ≠ 0. Then f : X → Y is continuous but its graph is not closed in X × Y. If X is any space then the identity map Id : X → X is continuous but its graph, which is the diagonal Gr Id = {(x, x) : x ∈ X}, is closed in X × X if and only if X is Hausdorff. In particular, if X is not Hausdorff then Id : X → X is continuous but not closed. If f : X → Y is a continuous map whose graph is not closed then Y is not a Hausdorff space. Closed but not continuous maps Let X and Y both denote the real numbers with the usual Euclidean topology. Let f : X → Y be defined by f(0) = 0 and f(x) = 1/x for all x ≠ 0. Then f : X → Y has a closed graph (and a sequentially closed graph) in X × Y but it is not continuous (since it has a discontinuity at x = 0). Let X denote the real numbers with the usual Euclidean topology, let Y denote the real numbers with the discrete topology, and let Id : X → Y be the identity map (i.e. Id(x) = x for every x ∈ X). Then Id : X → Y is a linear map whose graph is closed in X × Y but it is clearly not continuous (since singleton sets are open in Y but not in X). Let (X, τ) be a Hausdorff TVS and let ν be a vector topology on X that is strictly finer than τ. Then the identity map Id : (X, τ) → (X, ν) is a closed discontinuous linear operator. See also References Functional analysis
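The two one-line statements above (the sufficient condition and the compact-codomain closed graph theorem) both follow from the net characterization; a short proof sketch in LaTeX is given below. This is a reconstruction of the standard arguments under the stated hypotheses, not text from the original article.

```latex
\documentclass{article}
\usepackage{amsmath, amsthm}
\begin{document}
\begin{proof}[Sketch 1: $f\colon X\to Y$ continuous, $Y$ Hausdorff $\Rightarrow$ closed graph]
Let $(x_i, f(x_i))_i$ be a net in $\operatorname{Gr} f$ converging to $(x, y)$ in
$X \times Y$. Convergence in the product topology is componentwise, so $x_i \to x$
and $f(x_i) \to y$. Continuity gives $f(x_i) \to f(x)$, and Hausdorffness makes
limits unique, so $y = f(x)$, i.e.\ $(x, y) \in \operatorname{Gr} f$.
Hence $\operatorname{Gr} f$ is closed.
\end{proof}

\begin{proof}[Sketch 2: closed graph, $Y$ compact $\Rightarrow$ $f$ continuous]
Let $x_i \to x$ in $X$. By compactness of $Y$, every subnet of $(f(x_i))_i$ has a
further subnet converging to some $y \in Y$; along that subnet
$(x_{i_j}, f(x_{i_j})) \to (x, y)$, and the closed graph forces $y = f(x)$.
Since every subnet of $(f(x_i))_i$ admits a further subnet converging to $f(x)$,
the whole net converges: $f(x_i) \to f(x)$. Hence $f$ is continuous.
\end{proof}
\end{document}
```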
Closed graph property
[ "Mathematics" ]
2,082
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
61,808,196
https://en.wikipedia.org/wiki/Mai%20Gehrke
Mai Gehrke (born 10 May 1964) is a Danish mathematician who studies the theory of lattices and their applications to mathematical logic and theoretical computer science. She is a director of research for the French Centre national de la recherche scientifique (CNRS), affiliated with the Laboratoire J. A. Dieudonné (LJAD) at the University of Nice Sophia Antipolis. Education As a child, Gehrke was educated at a French school in Algiers, which used a Bourbakist and very abstract mathematics curriculum. As a high school student in Denmark, she spent a year as an exchange student in Houston studying painting, but was brought back to mathematics by a Polish mathematics teacher who taught her point-set topology according to the Moore method. She earned her Ph.D. from the University of Houston in 1987. Her dissertation, Order Structure of Stone Spaces and the TD-axiom, was supervised by Klaus Hermann Kaiser. Career After postdoctoral study at Vanderbilt University, Gehrke joined the faculty of New Mexico State University in 1990. She moved to Radboud University Nijmegen in 2007, and to CNRS in 2011. From 2011 to 2017 her work for CNRS was associated with the Laboratoire d'Informatique Algorithmique: Fondements et Applications (LIAFA) at Paris Diderot University; in 2017 she moved to LJAD in Sophia Antipolis. References External links Home page 1964 births Living people Danish mathematicians Danish women mathematicians 21st-century French women mathematicians 21st-century French mathematicians Mathematical logicians University of Houston alumni New Mexico State University faculty Academic staff of Radboud University Nijmegen Women logicians Research directors of the French National Centre for Scientific Research
Mai Gehrke
[ "Mathematics" ]
354
[ "Mathematical logic", "Mathematical logicians" ]
61,810,178
https://en.wikipedia.org/wiki/Egidio%20Cosentino
Egidio Cosentino (21 April 1927 – 15 November 2020) was an Italian field hockey player. He competed in the men's tournament at the 1952 Summer Olympics. According to his obituary, he was born in Trieste, where he secured a university degree in structural engineering in 1948. With his wife Luisa Querin, he emigrated to Canada in 1954, working for Strong, Lamb and Nelson from 1960 and, from 1977, running his own structural engineering company. References External links 1927 births 2020 deaths Italian male field hockey players Olympic field hockey players for Italy Field hockey players at the 1952 Summer Olympics Sportspeople from Trieste Structural engineers 20th-century Italian sportsmen
Egidio Cosentino
[ "Engineering" ]
133
[ "Structural engineering", "Structural engineers" ]
61,812,712
https://en.wikipedia.org/wiki/Groundwater%20in%20Nigeria
Groundwater in Nigeria is widely used for domestic, agricultural, and industrial supplies. The Joint Monitoring Programme for Water Supply and Sanitation estimates that in 2018 60% of the total population were dependent on groundwater point sources for their main drinking water source: 73% in rural areas and 45% in urban areas. The cities of Calabar and Port Harcourt are totally dependent on groundwater for their water supply. In 2013, there were around 65,000 boreholes in Nigeria extracting an estimated 6,340,000 m3/day. The majority of these (almost 45,000) were equipped with hand pumps and used for water supply in rural areas and small towns. Estimates of total renewable groundwater resources in Nigeria are variable. The United Nations Food and Agriculture Organization (FAO) estimates that Nigeria has 87,000 million m3/year of renewable groundwater resources. The Japan International Cooperation Agency (JICA) estimates that total annual groundwater recharge is 155,800 million m3/year. Recharge is variable across the country and largely controlled by climate: recharge is lower in the north of the country due to higher evapotranspiration and lower rainfall. Groundwater Environments Geology The geology of Nigeria includes large areas where Precambrian basement rocks are present at the ground surface, and other large areas where the basement rocks are overlain by younger sedimentary rocks. In the north west, these sedimentary rocks largely consist of Tertiary and Cretaceous rocks of the Sokoto Basin (the south eastern part of the Iullemmeden Basin), which are made up of varying amounts of sandstone and clay, with lesser amounts of limestone. In the north east, Tertiary and Quaternary rocks of the Chad Basin comprise sandstone, siltstone and shale. In the centre of Nigeria, along the course of the Benue and Niger rivers, are Tertiary and Cretaceous sedimentary rocks of the Benue and Nupe Basins. The Benue Basin consists of continental sandstones overlain by marine and estuarine shales and limestones, while the Nupe Basin largely contains continental sandstones, siltstones, claystones and conglomerates. Along the coast, within the Niger Delta Basin, there are significant unconsolidated sediments of Tertiary to Quaternary age, comprising coarse- to medium-grained unconsolidated sands and gravels with thin peats, silts, clays and shales. Basement Aquifers The basement rocks generally form low to moderate productivity aquifers where groundwater flow is concentrated in fractures, and often in a weathered mantle, which can develop in the upper few to tens of metres. Boreholes within basement aquifers are typically drilled to depths of 10–70 m, depending on local conditions. Water quality in these aquifers is generally good. Sedimentary Aquifers Rocks in the lower, more southerly part of the Benue Basin form moderately productive aquifers where groundwater flow is concentrated in fractures. Borehole yields typically vary from 2 to 8 litres per second (L/s), but can be much higher or lower than this depending on the degree of fracturing and weathering. Aquifers tend to be localised and unconfined, with boreholes ranging from 40 to 150 m deep. Groundwater from these aquifers can be highly mineralised. Rocks in the Upper Benue Basin typically form low to moderate productivity aquifers, with borehole yields varying between 1 and 5 L/s. Groundwater flow is largely intergranular and boreholes can vary in depth from 30 to 300 m. Groundwater quality is generally good.
Rocks in the Nupe Basin form low to moderately productive aquifers. Groundwater flow predominantly occurs through pore spaces in the rocks, which are generally fine grained and slightly cemented, limiting groundwater potential. Where coarser sandstones dominate, yields of 2 to 4 L/s can be achieved; basal conglomerate beds may support higher yields. Borehole yields in rocks of the Sokoto Basin are highly variable, but these rocks can form moderately to highly productive aquifers where coarser sandstones dominate. Lower sandstones can be confined while upper sandstone layers are typically unconfined. The Chad Formation, found in the Chad Basin, forms a moderate to highly productive aquifer, with yields up to 30 L/s recorded. Groundwater flow is largely intergranular and aquifers can be unconfined or confined depending on local conditions. Deeper sandstone layers are often confined. The Gombe Formation, also in the Chad Basin, has much lower permeability, providing yields of around 1 to 5 L/s. Natural water quality from both these formations is generally good. Igneous Aquifers Volcanic rocks also occur in Nigeria, forming low to moderately productive aquifers. Borehole yields are typically less than 3 L/s and boreholes may be drilled to depths of around 15 to 50 m. Unconsolidated Aquifers Unconsolidated rocks, which predominantly occur in the coastal Niger Delta Basin and along major river valleys, form high to very high productivity aquifers. Alluvial aquifers are largely unconfined with shallow water tables and are thickest (15–30 m) along the rivers Niger and Benue. In the Niger Delta Basin, the upper Deltaic Formation is unconsolidated and largely unconfined with a shallow water table (0–10 m below ground level). The older Benin Formation is partly consolidated. It is largely unconfined (locally confined by lower permeability beds) with a water table typically between 3 and 15 m below ground level, but which can be as much as 55 m deep. Aquifers in the Niger Delta Basin can provide yields from 3 to 60 L/s with borehole depths ranging from 10 to 800 m. Recharge is mostly directly from rainfall. Water quality issues include salinity due to sea water intrusion, iron, and pollution from human activities, particularly in urban areas. Groundwater Issues Groundwater Availability A 1996 survey by the Ministry of Water Resources found only 63% of boreholes in Nigeria were in working order, with many out of action due to pump failure. There are local issues of over-abstraction of groundwater, causing lowering of groundwater levels and, in some cases, land subsidence. These are mainly in unconsolidated aquifers in urban areas on the coastal plain in the south, including in Lagos and Port Harcourt. There are issues with drought causing lowering of groundwater levels, both seasonally in the dry season and during longer-term periods of low rainfall. This is particularly an issue for local, low-storage basement aquifers, both in the north where rainfall is low and in the south, where rainfall is high. The potential impact of climate change on groundwater levels, combined with changing water demand, is recognised in the National Water Resources Master Plan. Groundwater Quality Groundwater pollution related to human activities (agriculture, domestic waste, industry) is an issue in shallow aquifers, particularly permeable unconsolidated deposits, which have little protection from surface activities. Natural groundwater quality issues have also been recorded.
Examples of this include high salinity in shallow alluvial aquifers due to the dissolution of evaporite minerals, and high iron and manganese concentrations in confined aquifers with low dissolved oxygen (i.e. anaerobic conditions). Transboundary Groundwater There are several transboundary aquifers in Nigeria: The Iullemmeden and Taoudeni/Tanezrouft Aquifer Systems (ITAS), shared by Algeria, Benin, Burkina Faso, Mali, Mauritania, Niger and Nigeria. The Chad Basin Aquifer, shared by Cameroon, Central African Republic, Chad, Niger and Nigeria. The Keta Basin Aquifers, shared by Ghana, Togo, Benin and Nigeria. The Benue Trough, shared by Cameroon and Nigeria. The Rio Del Rey Basin, shared by Cameroon and Nigeria along the coast. The Lake Chad Basin Commission manages issues related to the Chad Basin, and a mechanism for the management of the Iullemmeden Basin is in development, following a Memorandum of Understanding on establishing an integrated water resources management mechanism for this aquifer. Groundwater Management There are many bodies with responsibility for groundwater management in Nigeria. They include: The Nigeria Hydrological Services Agency (NIHSA), whose mandate is the assessment of the country's water resources (groundwater and surface water): their quantity, quality, availability and distribution in time and space. The Nigeria Integrated Water Resources Management Commission (NIWRMC), which is responsible for the regulation of water use and allocation. The state Ministries of Water Resources and their Rural Water Supply and Sanitation Agencies (RUWATSSAN), responsible for the provision of water to their various states. The National Water Resources Institute, a parastatal of the Federal Ministry of Water Resources, which has responsibility for training, research and data management relating to water in general. The River Basin Development Authorities, also parastatals of the Federal Ministry of Water Resources, which are involved in the provision of water supply to rural areas within their catchments. The Nigerian Association of Hydrogeologists (NAH) is the professional body concerned with due process and best practices in the exploration, development and management of Nigeria's water resources. The NAH disseminates information on the state of the nation's water resources through annual conferences and a journal, Water Resources. Through membership of the Hydrology and Hydrogeology Subcommittee of the National Technical Committee on Water Resources (NTCWR), the technical arm of the National Council of Water Resources (NCWR), the NAH contributes to the development of water resources policies and legislation, including the Water Resources Act 100 and the Nigerian Standard for Drinking Water Quality. The Nigeria Hydrological Services Agency (NIHSA), an agency of the Federal Ministry of Water Resources, has responsibility for groundwater monitoring. There is a national groundwater level monitoring programme with 43 monitoring points, 32 of which are equipped with data loggers. These are sited in both basement and sedimentary aquifers. Monitoring at sites with data loggers is daily, and sometimes twice daily. NIHSA has implemented a programme of drilling new boreholes for monitoring groundwater levels. The new boreholes so far are focussed on sedimentary aquifers used for urban water supply, with borehole depths of 80 to 100 m. The groundwater level monitoring data are stored at NIHSA headquarters in Abuja. 
The NIHSA is also responsible for water quality monitoring, but a full programme is not yet in place due to a lack of equipment. The National Water Resources Master Plan recognises current problems in the effective acquisition and management of groundwater data, and recommends strategies for improving this situation. Sources References External links https://www.bgs.ac.uk/africagroundwateratlas/index.cfm http://earthwise.bgs.ac.uk/index.php/Hydrogeology_of_Nigeria Hydrogeology Water in Nigeria Geology of Nigeria
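As a rough consistency check on the abstraction and recharge figures quoted above, the following Python sketch (an illustration added here, not an official calculation) computes the average abstraction per borehole and compares total annual abstraction with the FAO and JICA estimates:

```python
# Figures from the article; the comparison itself is a back-of-envelope check.
boreholes = 65_000                   # boreholes in Nigeria (2013 estimate)
abstraction_m3_per_day = 6_340_000   # estimated total abstraction, m3/day

avg_per_borehole = abstraction_m3_per_day / boreholes
print(f"Average abstraction per borehole: {avg_per_borehole:.0f} m3/day")  # ~98

annual_abstraction = abstraction_m3_per_day * 365 / 1e6  # million m3/year
fao_recharge = 87_000     # million m3/year (FAO renewable resource estimate)
jica_recharge = 155_800   # million m3/year (JICA recharge estimate)

print(f"Annual abstraction: {annual_abstraction:,.0f} million m3/year")   # ~2,314
print(f"Share of FAO estimate:  {annual_abstraction / fao_recharge:.1%}")  # ~2.7%
print(f"Share of JICA estimate: {annual_abstraction / jica_recharge:.1%}") # ~1.5%
```

On these national figures, abstraction amounts to only a few per cent of estimated recharge; as noted above, however, recharge varies strongly across the country, so a national ratio says little about local stress in the drier north.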
Groundwater in Nigeria
[ "Environmental_science" ]
2,231
[ "Hydrology", "Hydrogeology" ]
68,497,153
https://en.wikipedia.org/wiki/Theories%20of%20iterated%20inductive%20definitions
In set theory and logic, Buchholz's ID hierarchy is a hierarchy of subsystems of first-order arithmetic. The systems/theories IDν are referred to as "the formal theories of ν-times iterated inductive definitions". IDν extends PA by ν iterated least fixed points of monotone operators. Definition Original definition The formal theory IDω (and IDν in general) is an extension of Peano Arithmetic, formulated in the language LID, by axioms stating, for every LID-formula F(x), that each iterated inductively defined set is closed under its defining positive operator and is the least definable set with this property; the theory IDν with ν ≠ ω is defined analogously, with one such pair of axioms for each u < ν. These axioms are spelled out precisely in the alternate definition below. Explanation / alternate definition ID1 A set $I \subseteq \mathbb{N}$ is called inductively defined if for some monotonic operator $\Gamma : P(\mathbb{N}) \to P(\mathbb{N})$, $I = \mathrm{LFP}(\Gamma)$, where $\mathrm{LFP}(\Gamma)$ denotes the least fixed point of $\Gamma$. The language of ID1, $L_{ID_1}$, is obtained from that of first-order number theory, $L_\mathbb{N}$, by the addition of a set (or predicate) constant $I_A$ for every X-positive formula $A(X, x)$ in $L_\mathbb{N}[X]$ that only contains X (a new set variable) and x (a number variable) as free variables. The term X-positive means that X only occurs positively in A (X is never on the left of an implication). We allow ourselves a bit of set-theoretic notation: $t \in \{x : F(x)\}$ means $F(t)$, and for two formulas $F$ and $G$, $F \subseteq G$ means $\forall x\,(F(x) \to G(x))$. Then ID1 contains the axioms of first-order number theory (PA) with the induction scheme extended to the new language as well as these axioms: $(ID_1)^1$: $A(I_A) \subseteq I_A$, and $(ID_1)^2$: $A(F) \subseteq F \to I_A \subseteq F$, where $F$ ranges over all $L_{ID_1}$ formulas and $A(F)$ abbreviates $\{x : A(F, x)\}$. Note that $(ID_1)^1$ expresses that $I_A$ is closed under the arithmetically definable set operator $\Gamma_A(S) = \{x \in \mathbb{N} : \mathbb{N} \models A(S, x)\}$, while $(ID_1)^2$ expresses that $I_A$ is the least such (at least among sets definable in $L_{ID_1}$). Thus, $I_A$ is meant to be the least pre-fixed-point, and hence the least fixed point, of the operator $\Gamma_A$. IDν To define the system of ν-times iterated inductive definitions, where ν is an ordinal, let $\prec$ be a primitive recursive well-ordering of order type ν. We use Greek letters to denote elements of the field of $\prec$. The language of IDν, $L_{ID_\nu}$, is obtained from $L_\mathbb{N}$ by the addition of a binary predicate constant $J_A$ for every X-positive formula $A(X, Y, \mu, x)$ that contains at most the shown free variables, where X is again a unary (set) variable, and Y is a fresh binary predicate variable. We write $x \in J_A^\mu$ instead of $J_A(\mu, x)$, thinking of x as a distinguished variable in the latter formula. The system IDν is now obtained from the system of first-order number theory (PA) by expanding the induction scheme to the new language and adding the scheme expressing transfinite induction along $\prec$ for an arbitrary $L_{ID_\nu}$ formula $F$ as well as the axioms: $(ID_\nu)^1$: $\forall \mu\,(A^\mu(J_A^\mu) \subseteq J_A^\mu)$, and $(ID_\nu)^2$: $\forall \mu\,(A^\mu(F) \subseteq F \to J_A^\mu \subseteq F)$, where $F$ is an arbitrary $L_{ID_\nu}$ formula. In $(ID_\nu)^1$ and $(ID_\nu)^2$ we used the abbreviation $A^\mu(F)$ for the formula $\{x : A(F, \{(\gamma, y) : \gamma \prec \mu \wedge y \in J_A^\gamma\}, \mu, x)\}$, where $x$ is the distinguished variable. We see that these express that each $J_A^\mu$, for $\mu \prec \nu$, is the least fixed point (among definable sets) for the operator $\Gamma_A^\mu(S) = \{x \in \mathbb{N} : (\mathbb{N}, S, (J_A^\gamma)_{\gamma \prec \mu}) \models A(S, J_A^{\prec\mu}, \mu, x)\}$. Note how all the previous sets $J_A^\gamma$, for $\gamma \prec \mu$, are used as parameters. We then define $ID_{<\nu} = \bigcup_{\mu \prec \nu} ID_\mu$. Variants $\widehat{ID}_\nu$ is a weakened version of $ID_\nu$. In the system $\widehat{ID}_\nu$, a set is instead called inductively defined if it is some fixed point of a monotonic operator $\Gamma$, rather than the least fixed point. This subtle difference makes the system significantly weaker. A second variant is weakened even further: it not only uses fixed points rather than least fixed points, but also has induction only for positive formulas. This once again subtle difference makes the system even weaker. W-IDν is the weakest of all variants of IDν, based on W-types. The amount of weakening compared to regular iterated inductive definitions is identical to removing bar induction given a certain subsystem of second-order arithmetic. U(IDν) is an "unfolding" strengthening of IDν. 
It is not exactly a first-order arithmetic system, but captures what one can get by predicative reasoning based on ν-times iterated generalized inductive definitions. Results Let ν > 0. If a ∈ T0 contains no symbol Dμ with ν < μ, then "a ∈ W0" is provable in IDν. IDω is contained in $\Pi^1_1\text{-CA} + \text{BI}$. Proof-theoretic ordinals The proof-theoretic ordinal of IDν is $\psi_0(\varepsilon_{\Omega_\nu+1})$. In particular, the proof-theoretic ordinal of ID1 is the Bachmann–Howard ordinal $\psi_0(\varepsilon_{\Omega+1})$, which is also the proof-theoretic ordinal of KPω and of CZF, and the proof-theoretic ordinal of IDω is the Takeuti–Feferman–Buchholz ordinal $\psi_0(\varepsilon_{\Omega_\omega+1})$, which is also the proof-theoretic ordinal of $\Pi^1_1\text{-CA} + \text{BI}$. The subsystems ID<ν and the weakened variants $\widehat{ID}_\nu$ and W-IDν have strictly smaller proof-theoretic ordinals, while the unfolding U(IDν) has a strictly larger one. References An independence result for Iterated inductive definitions and subsystems of analysis: recent proof-theoretical studies Iterated inductive definitions in nLab Lemma for the intuitionistic theory of iterated inductive definitions Iterated Inductive Definitions and Large countable ordinals and numbers in Agda Ordinal analysis in nLab Ordinal numbers Proof theory Set theory Mathematical logic
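To make the ID1 axioms concrete, the following worked example (added here as an illustration; it is not taken from the sources above) shows the even numbers arising as the least fixed point of an X-positive operator:

```latex
% Illustrative example: the even numbers as a least fixed point.
% The defining formula is X-positive (X never occurs left of an implication):
\[
  A(X, x) \;\equiv\; x = 0 \;\vee\; \exists y\,(x = y + 2 \,\wedge\, y \in X).
\]
% The induced arithmetically definable set operator:
\[
  \Gamma_A(S) \;=\; \{\, x \in \mathbb{N} : \mathbb{N} \models A(S, x) \,\}
             \;=\; \{0\} \cup \{\, y + 2 : y \in S \,\}.
\]
% (ID_1)^1, i.e. A(I_A) \subseteq I_A, says I_A contains 0 and is closed
% under adding 2. (ID_1)^2 says I_A is least among definable such sets:
% for every formula F,
\[
  A(F) \subseteq F \;\rightarrow\; I_A \subseteq F,
\]
% so I_A = \mathrm{LFP}(\Gamma_A) = \{0, 2, 4, 6, \ldots\}, the even numbers.
```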
Theories of iterated inductive definitions
[ "Mathematics" ]
1,358
[ "Ordinal numbers", "Proof theory", "Set theory", "Mathematical logic", "Mathematical objects", "Order theory", "Numbers" ]
68,498,249
https://en.wikipedia.org/wiki/Civilian%20victimization
Civilian victimization is the intentional use of violence against noncombatants in a conflict. It includes lethal forms of violence (such as killings) as well as non-lethal forms of violence such as torture, forced expulsion, and rape. According to this definition, civilian victimization is only a subset of the harm that occurs to civilians during conflict, excluding that considered collateral damage of military activity. However, "the distinction between intentional and unintentional violence is highly ambivalent" and difficult to determine in many cases. Scholars have identified various factors that may either provide incentives for the use of violence against civilians, or create incentives for restraint. Violence against civilians occurs in many types of civil conflict, and can include any acts in which force is used to harm or damage civilians or civilian targets. It can be lethal or nonlethal. During periods of armed conflict, there are structures, actors, and processes at a number of levels that affect the likelihood of violence against civilians. Violence towards civilians is not "irrational, random, or the result of ancient hatreds between ethnic groups." Rather, violence against civilians may be used strategically in a variety of ways, including attempts to increase civilian cooperation and support; increase costs to an opponent by targeting their civilian supporters; and physically separate an opponent from its civilian supporters by removing civilians from an area. Patterns of violence towards civilians can be described at a variety of levels, and a number of determinants of violence against civilians have been identified. Describing patterns of violence Francisco Gutiérrez-Sanín and Elisabeth Jean Wood have proposed a conceptualization of political violence that describes an actor in terms of its pattern of violence, based on the "repertoire, targeting, frequency, and technique in which it regularly engages." Actors can include any organized group fighting for political objectives. Repertoire covers the forms of violence used; targeting identifies those attacked in terms of social group; frequency is the measurable occurrence of violence; and techniques are the types of weapons or technology used. This framework can be applied to observed patterns of violence without considering the intentionality of the actor. Other frameworks focus on the motivation of the actor. Repertoires may include both lethal forms of violence against civilians, such as killings, massacres, bombings, and terrorist attacks, and nonlethal forms of violence, such as forced displacement and sexual violence. In indirect violence, heavy weapons such as tanks or fighter planes are used remotely and unilaterally. In direct violence, perpetrators act face-to-face with the victims using small weapons such as machetes or rifles. Targets may be chosen collectively, as members of a particular ethnic, religious, or political group. This is sometimes referred to as categorical violence. Targets may also be chosen selectively, identifying specific individuals who are seen as opposing a political group or aiding its opponents. Techniques can vary greatly depending on the level of technology and the amount of resources available to combatants. There are considerable impacts of technology over time, including the introduction of new technologies of rebellion. For example, changes in communication infrastructure may affect violence against civilians. 
If such technology facilitates organization by armed groups and increases contests over territory, violence against civilians in those areas is also likely to increase. As government surveillance of digital information increases, the use of targeted, selective violence against civilians by governments has been shown to increase. Analysis of levels of violence Theoretical explanations at various levels of analysis can co-exist and interact with one another. The following levels of analysis can be useful in understanding such dynamics: International At the international level, institutions, ideologies and the distribution of power and resources shape technologies of rebellion and political interactions, including both international and domestic wars. During the Cold War, the United States and the Soviet Union provided military and financial backing to both governments and rebellious groups, who engaged in irregular civil wars. Such conflicts frequently involve the use of violence to control civilians and territory. The decade following the dissolution of the Soviet Union was marked by a decline in worldwide battle deaths and in the number of armed conflicts in the world. International norms and ideas also influence conflict and the use of violence against civilians. The period following World War II, from 1946 to 2013, has also been regarded as showing a decline in conflict. The United Nations General Assembly adopted the Universal Declaration of Human Rights in 1948. International actors signed the Genocide Convention in 1948 and the Geneva Conventions in 1949, formalizing protections for noncombatants and international norms for human rights and humanitarian standards. Transnational non-governmental organizations such as Human Rights Watch and Amnesty International have become active in surfacing information, advocating for human rights, mobilizing international public opinion, and influencing both social norms and international law. Interactions between foreign governments and the rebel groups who receive their support can affect violence against civilians. Groups receiving external support become less dependent on local civilian populations and have less incentive to limit violence against civilians. Foreign aid to rebels is associated with higher levels of both combat-related death and civilian targeting. However, foreign actors that are democracies or have strong human rights lobbies are less likely to support groups that engage in violence against civilians. The international strategic environment also shapes government perceptions of threat. Perceptions of threat due to external military intervention may lead to increases in governmental mass killing of civilians and violence against domestic out-groups. The scrutiny and criticism of international and domestic actors can affect government use of violence by increasing the perceived costs of violence against civilians. Governments and rebel groups that are vulnerable domestically and that seek international legitimacy are more likely to comply with international humanitarian law and exercise restraint toward civilians. Domestic and subnational Political organization occurs not just at a national level, but at many levels, including provinces, states, legislative districts, and cities. In many countries, national and local politics differ in scale and in the extent to which subnational governments afford and support their citizens' political rights and civil liberties. 
Relationships between government (at various levels), armed groups and domestic populations affect violence against civilians. Governments that rely on a broad base of domestic and institutional support are more likely to exercise restraint toward civilians. These may include democratic governments, inclusive governments, and governments in which institutions have not consolidated power. Similarly, rebel groups that need the support of a broad domestic constituency or of local civilians are less likely to target civilians and to engage in terrorism. Rebel groups whose political constituents live in the area that they control are more likely to use governance structures like elections to obtain cooperation and less likely to use political violence. Rebel groups that control areas inhabited by nonconstituents are more likely to use violence to obtain resources and cooperation. Ideology may strongly influence the ways in which governments and rebels define their constituencies, affecting patterns of violence. Where national, subnational or local institutions follow exclusionary ideologies, ethnic or other out-groups may become identified as nonconstituents and targeted, sometimes to the point of displacement, ethnic cleansing or genocide. Violence against civilians may vary over space and time with the extent to which military forces are contesting a territory. Stathis Kalyvas has theorized that selective violence is more likely to occur where control is asymmetric, with one group exercising dominant but not complete control of an area. Indiscriminate violence may be more likely to occur where an armed group exercises little control over an area. It has also been shown that indiscriminate violence is more likely to occur at a distance from a country's center of power. Opinions differ widely on whether there is a relationship between the relative military capacity of a government or rebel group and the likelihood that it will engage in patterns of violence against civilians. This may also vary depending on the type of violence involved. However, there is evidence that cutting off access to external sources of support may cause a group to become more dependent on the support of its local population and less likely to engage in violence against civilians. Organizational At the organizational level, researchers have examined the dynamics and ideology of armed groups: how they recruit and train their members, how organizational norms about the use of violence against civilians are established and maintained, and the role of group leaders and political ideology in shaping organizations and behavior. While some studies argue that violence against civilians reflects a lack of control over an organization's members and the absence of norms that inhibit violence, other researchers emphasize the social dynamics of armed groups and ways in which they may actively break down social norms that inhibit violence. Jeremy Weinstein has argued that armed groups develop certain organizational structures and characteristics as a result of their available resources. According to this view, organizations that depend on external resources are predicted to attract low-commitment members, and have trouble controlling their use of violence against civilians. Organizations that are dependent on local resources will tend to attract higher-commitment, ideologically motivated members from local communities, which will help to control their use of violence against civilians. 
Other researchers focus on organizational structure and its effects on behavior, without assuming that they are driven by resource endowment. They suggest that processes of education, training, and organizational control are important both in producing strategic violence and in establishing restraints against the use of violence against civilians. The ideology of armed groups is a key factor influencing both their organizational structure and member behavior. Some Marxist groups, which emphasize political education, have been less likely to use violence against civilians. The ideology of other armed groups, including governments, can actively promote violence and direct it at particular targets. Such groups often use "exclusionary ethnic or national ideologies or narratives" which have resulted in mass killings and genocide. Accounts from multiple countries have documented the "practice, norms, and other socialization processes" which armed groups have used to gain recruits, socialize group members, establish new norms of behavior and build group cohesion. Methods can include forced recruitment, systematic brutalization, and gang rape. Such groups create a “culture of violence” in which "horrifying acts of cruelty" are directed at both group members and civilians and become routine. The risk to civilians from such organizations is high. Individual On an individual level, people may be influenced to participate in armed conflicts due to economic motivations or incentive structures. Research in this area often views violence against civilians as a by-product of economic processes such as competition for resources. Researchers have also studied emotional and psychological factors relating to the use of violence, which are generally related to other factors such as strategy, opportunity, socialization, and other group-level processes. The emotions of shame, disgust, resentment, and anger have been linked to violence against civilians. While research suggests that emotions such as fear affect the polarization of attitudes, material and structural opportunities are important mediators of the expression of violence. At the individual level, researchers are examining the category of “civilian" in greater detail, to better understand the use of violence against different types of noncombatants. Such research also emphasizes the agency of civilians who are themselves actors during wartime and the ways in which they may respond to armed groups. There is evidence to suggest that local civilian institutions can sometimes mitigate violence by governments and rebel groups. Research also examines concerns such as the use of violence against humanitarian aid workers, and the targeting of women. Consequences of violence against civilians A relatively new area of research asks how individuals, groups, communities and domestic and international audiences respond to violence against civilians. Legacies of violence can last for many years and across generations, long after the violence occurred. Evidence on the effects of wartime violence on ethnic polarization is mixed. Research from various countries suggests that civilian responses to violence are not uniform. However, civilians do blame actors who have acted violently against their communities, and may withdraw their support, provide support to opposing forces, or vote for an opposing political party in elections. 
Such outcomes are more likely to occur in the area where the violence was experienced, and when the perpetrators of violence are considered outsiders. Individuals are likely to respond to violence by rejecting the ideology of the perpetrating group, particularly if the violence was severe. Those exposed to violence are likely to engage in prosocial behavior and to increase their political engagement. Research on the effectiveness of groups using violence against civilians in gaining political ends is mixed. Macro-level evidence suggests that rebel groups are likely to gain support from Western international actors in situations where governments are employing violence against civilians and rebel groups are showing restraint towards civilians. The United Nations is more likely to deploy peacekeepers when conflicts involve high levels of violence towards civilians. However, peacekeeping missions are more likely to be effective at protecting civilians from rebel groups than from governments. See also Child murder Civilian casualty ratio Collective punishment Dehumanization Gaza Strip famine Genocide Incitement to genocide Indiscriminate attack State crime References Civil affairs Global health Social conflict Human rights abuses Civilians in war
Civilian victimization
[ "Biology" ]
2,560
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
68,498,462
https://en.wikipedia.org/wiki/Goshenite%20%28gem%29
Goshenite is a colorless gem variety of beryl. It is called the mother of all gemstones because it can be transformed into other beryl varieties such as emerald, morganite, or bixbite. Goshenite is also referred to as the purest form of beryl, since it generally contains none of the trace elements that give the other varieties their color. The gem has been used as an imitation of diamond or emerald by placing colored foil behind it. Name Goshenite is named after Goshen, Massachusetts, United States, where it was first found. It is also known as white beryl or lucid beryl. Value and treatments Goshenite is not popular in the jewelry industry because it lacks color as well as brilliance, luster, and fire. It is also inexpensive because it is abundant. Although the gem value of goshenite is relatively low, it can be colored yellow, green, pink, blue, and intermediate hues by irradiating it with gamma rays or bombarding it with neutrons from nuclear reactors and radioactive materials. The resulting color depends on the content of Ca, Sc, Ti, V, Fe, and Co impurities. Occurrence It is most commonly found within granite. It can also be found in metamorphic rocks. Goshenite can be found in countries including China, Canada, Brazil, Russia, Mexico, Pakistan, the United States, and Madagascar. References Beryl group Gemstones
Goshenite (gem)
[ "Physics" ]
297
[ "Materials", "Gemstones", "Matter" ]
68,498,475
https://en.wikipedia.org/wiki/Ariane%20Next
Ariane Next—also known as SALTO (reusable strategic space launcher technologies and operations)—is a future European Space Agency rocket being developed in the 2020s by ArianeGroup. This partially reusable launcher is planned to succeed Ariane 6, with an entry into service in the 2030s. The objective of the new launcher is to halve launch costs compared with Ariane 6. The preferred architecture is that of the Falcon 9 rocket (a reusable first stage landing vertically, with a common engine model for the two stages) while using an engine burning a mixture of liquid methane and liquid oxygen. The first technological demonstrators are under development. History The European Space Agency's Ariane 6 launcher is to gradually succeed the Ariane 5 rocket after 2023. Studies on the next generation of European government-funded launcher to follow Ariane 6 started before 2019. The stated priority objective for the new rocket is to halve the cost of launching compared to Ariane 6, with simplified and more flexible launch methods. ArianeGroup was selected by the ESA in 2021 to head two projects: one to develop a new reusable launch vehicle and the other to develop a new liquid-propellant rocket engine for the vehicle. More specifically, the two programmes were named "SALTO (reuSable strAtegic space Launcher Technologies & Operations) and ENLIGHTEN (European iNitiative for Low cost, Innovative & Green High Thrust Engine) projects," respectively. ArianeGroup secured funding to begin development of the new reusable launch vehicle in May 2022. Funding for the project will be provided "by the European Commission as a part of the Horizon Europe programme designed to encourage and accelerate innovation" in Europe. In May 2022, the "French Economy Minister Bruno Le Maire said SALTO and ENLIGHTEN would be operational by 2026", and ArianeGroup stated that the target date was achievable. At that time, the SALTO project intended to carry out an initial flight test of a single rocket stage by mid-2024, using a Themis prototype first stage to validate the landing phase of the design. First hot-fire engine testing occurred in 2023. Description The architecture proposed for Ariane Next uses a design based on SpaceX's Falcon 9: a reusable first stage which, after having separated from the second stage, returns to land vertically on Earth. The first stage will use several liquid-propellant rocket engines: the predecessor for these is the Prometheus rocket engine under development in Europe, which burns a mixture of methane and liquid oxygen. Methane is somewhat less efficient than the hydrogen used by the Vulcain engine of Ariane 6, but it can be stored at higher temperatures (around −162 °C, compared with around −253 °C for hydrogen), which makes it possible to lighten and simplify the tanks and the supply circuits. The density of liquid methane is also higher than that of hydrogen, which allows a mass reduction in the tank structure. The launcher is planned to use seven or nine such engines for the first stage and a single engine for the second stage. The goal is to halve launch costs compared to Ariane 6. Preliminary steps To be able to produce the new launcher, various technology demonstrators are being developed, each also funded by European Union technology development funds: FROG is a small demonstrator for testing the vertical landing of a rocket stage. It made several flights in 2019. 
Callisto, under development, aims to improve the techniques required to produce a reusable launcher (return to Earth and reconditioning) and to estimate the operational cost of such a launcher. A first flight is scheduled for 2025 or early 2026. Themis will then be developed. It will have a reusable first stage with one to three Prometheus rocket engines and is expected to fly around 2022–2025. Configurations Different configurations of the launcher are being evaluated. Three versions are under consideration for different missions: A two-stage version A version with two small liquid-propellant boosters A version with three first stages linked together, similar to Falcon Heavy Return to Earth Different systems are being studied for controlling the first stage's atmospheric re-entry: Grid fins, as on the first stage of Falcon 9 Stabilization fins Air braking Landing system Different systems are being considered, ranging from everything on the ground (all ground systems) to everything on the launcher (all on-board systems). Currently, development is focused on an on-board landing-leg system similar to that of Falcon 9. See also Reusable Vehicle Testing References External links Ariane Next on CNES website Reusable launch systems Partially reusable space launch vehicles Proposed space launch vehicles Space launch vehicles of Europe Ariane (rocket family) Space programs European space programmes Spaceflight technology
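The trade-off between methane and hydrogen described above can be illustrated numerically. The sketch below uses rounded textbook values for the two propellants (illustrative only; this is not an ArianeGroup sizing calculation) to show why methane tanks can be smaller and warmer than hydrogen tanks for the same propellant mass:

```python
# Rounded textbook properties of the two cryogenic fuels (illustrative values).
FUELS = {
    "liquid hydrogen": {"density_kg_m3": 71,  "boiling_point_c": -253},
    "liquid methane":  {"density_kg_m3": 423, "boiling_point_c": -162},
}

fuel_mass_kg = 25_000  # hypothetical fuel load, chosen only for comparison

for name, props in FUELS.items():
    volume_m3 = fuel_mass_kg / props["density_kg_m3"]
    print(f"{name}: {volume_m3:.0f} m3 of tank volume, "
          f"stored at about {props['boiling_point_c']} °C")

# liquid hydrogen: ~352 m3 at about -253 °C
# liquid methane:  ~59 m3 at about -162 °C
# The roughly 6x smaller tank and the ~90 °C warmer storage temperature are
# what allow lighter, simpler tanks and supply circuits, at some cost in
# specific impulse compared with hydrogen.
```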
Ariane Next
[ "Engineering" ]
959
[ "Space programs", "European space programmes" ]
68,498,668
https://en.wikipedia.org/wiki/Ramanujan%20machine
The Ramanujan machine is a specialised software package, developed by a team of scientists at the Technion – Israel Institute of Technology, to discover new formulas in mathematics. It has been named after the Indian mathematician Srinivasa Ramanujan because it supposedly imitates the thought process of Ramanujan in his discovery of hundreds of formulas. The machine has produced several conjectures in the form of continued fraction expansions of expressions involving some of the most important constants in mathematics, such as e and π (pi). Some of the conjectures produced by the Ramanujan machine have subsequently been proved true; the others remain conjectures. The software was conceptualised and developed by a group of undergraduates of the Technion under the guidance of Ido Kaminer, an electrical engineering faculty member of the Technion. The details of the machine were published online on 3 February 2021 in the journal Nature. According to George Andrews, an expert on the mathematics of Ramanujan, even though some of the results produced by the Ramanujan machine are amazing and difficult to prove, the results produced by the machine are not of the caliber of Ramanujan, and so calling the software the Ramanujan machine is slightly outrageous. Doron Zeilberger, an Israeli mathematician, has opined that the Ramanujan machine is a harbinger of a new methodology of doing mathematics. Formulas discovered by the Ramanujan machine Several of the continued fraction expansions produced by the machine, involving constants such as e and π, have subsequently been proved true; many others remain conjectures whose truth or falsity has not yet been established. In one of these conjectured expressions, the numbers 4, 14, 30, 52, . . . are defined by the sequence $a_n = n(3n+1)$ for $n \geq 1$, and the numbers 8, 72, 288, 800, . . . are generated using the formula $b_n = 2n^2(n+1)^2$ for $n \geq 1$. External links Website of the Ramanujan machine project: The Ramanujan Machine: Using algorithms to discover new mathematics References Mathematical software
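The numerical-matching idea behind the machine can be sketched in a few lines of Python. The snippet below is an illustration written for this edition, not the project's actual code; it evaluates a known continued fraction for e, of the same polynomial shape as the machine's results, and checks it against the constant:

```python
from fractions import Fraction
import math

def cf_e(depth: int) -> Fraction:
    """Evaluate 3 - 1/(4 - 2/(5 - 3/(6 - ...))), a known continued fraction
    for e with partial numerators 1, 2, 3, ... over denominators 4, 5, 6, ...,
    truncated after `depth` levels, in exact rational arithmetic."""
    tail = Fraction(depth + 3)             # innermost denominator
    for k in range(depth - 1, 0, -1):      # fold outward: (k + 3) - (k + 1)/tail
        tail = Fraction(k + 3) - Fraction(k + 1) / tail
    return 3 - Fraction(1) / tail          # outermost level

# Numerical matching, as the Ramanujan machine does at scale: evaluate the
# fraction to high precision and compare it against a fundamental constant.
value = float(cf_e(200))
print(value, math.e, abs(value - math.e))  # the gap is below float precision
```

The machine applies this matching step to vast families of candidate continued fractions at much higher precision, keeping any whose value agrees with an expression in constants such as e or π; the survivors are the conjectures, which then await proof.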
Ramanujan machine
[ "Mathematics" ]
410
[ "Mathematical software" ]
68,499,126
https://en.wikipedia.org/wiki/Giulia%20Zanderighi
Giulia Zanderighi (born 1974 in Milan, Italy) is an Italian theoretical physicist. She is the first woman director at the Max Planck Institute for Physics. Education Giulia Zanderighi received her undergraduate degree from the University of Milan in 1998 and her PhD in physics from the University of Pavia in 2001. Career Zanderighi held postdoctoral positions at the Institute for Particle Physics Phenomenology at Durham University (UK) from 2001 to 2003 and at Fermilab in Batavia (USA) from 2003 to 2005, and was a fellow in the theory department at CERN from 2005 to 2007. In 2007, she became an assistant professor at the University of Oxford and a Tutorial Fellow at Wadham College; in 2014 she became a Professor of Physics there. Also in 2014, she took leave from this position, holding a five-year staff position at CERN. On January 1, 2019, she was appointed director at the Max Planck Institute for Physics. She leads the department of novel computational techniques in particle phenomenology and is the first woman director in the institute's more than 100-year history. She is an internationally recognized expert in collider phenomenology. References External links Giulia Zanderighi's page at the Max Planck Institute for Physics Giulia Zanderighi's author page at INSPIRE-HEP Italian physicists University of Pavia alumni Italian theoretical physicists Particle physicists Italian women physicists Living people 1974 births Max Planck Institute directors University of Milan alumni People associated with CERN
Giulia Zanderighi
[ "Physics" ]
321
[ "Particle physicists", "Particle physics" ]