id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
68,608,152 | https://en.wikipedia.org/wiki/Soviet%20computing%20technology%20smuggling | Soviet computing technology smuggling, both attempted and actual, was a response to CoCom (Coordinating Committee for Multilateral Export Controls) restrictions on technology transfer.
History
Mainframe successes
Initially the Soviet Union focused on mainframe computing technology, particularly the IBM 360 and 370. Between 1967 and 1972 much effort went into reverse engineering what they "acquired." Their first IBM-like machine was based on a 360/40 smuggled in via Poland; the second Soviet-built machine was based on a 370/145. Their focus subsequently shifted to super-minicomputers. Failure in 1983 to import a VAX-11/782 did not stop their efforts. "Reverse-engineered and copied Apple IIe parts" brought microcomputers to the Soviet Union; they also brought computer viruses. IBM PC compatible computers were also smuggled in.
Production of Iron Curtain mainframes, at one point, was estimated to be 180 per year.
VAX failures
The failure of the Soviets to acquire a VAX-11/782, a dual-processor variation of the VAX-11/780, the original VAX, unraveled much of their smuggling system. U.S. Secretary of Defense Caspar Weinberger made a public display of the system, about which The Washington Post headlined "Seized Computer Put on Display" in late 1983. The computer had been exported from the United States to South Africa, from which it was to be clandestinely reshipped; it was seized "moments before its scheduled shipment to the Soviet Union." Weinberger stated at a news conference that the VAX was intended for assisting production of "vastly more accurate . . . and more destructive weapons."
Like the 360/40, the smuggling process involved multiple shipments. The 360 had been disassembled and placed in a large number of suitcases; a smaller number of "huge containers of parts" held the 782. The latter's route involved transhipment, more than half of the containers via Sweden, the others via West Germany. A U.S. official described potential "military uses, including the operation of a missile guidance system."
The exact configuration had not been released even more than a year later. AP News, which noted that the smuggling operation was spread across ten countries, cited $1.1 million as the system's price; The Los Angeles Times described the same system's price as $1.5 million, and The New York Times wrote "between $1.5 and $2 million."
Another VAX-smuggling attempt, five years later, involved a VAX 8800; this too ended in failure. Again the computer involved was a dual-processor system. American government wiretapping revealed that some of the parties involved had even considered settling for a VAX 8700, a uni-processor system.
See also
Toshiba–Kongsberg scandal
References
Further reading
Technobandits, by Linda Melvern, David Hebditch, and Nick Anning
History of computing hardware
History of international relations | Soviet computing technology smuggling | [
"Technology"
] | 617 | [
"History of computing hardware",
"History of computing"
] |
68,610,044 | https://en.wikipedia.org/wiki/12%20Pegasi | 12 Pegasi is a K-type supergiant star in the constellation of Pegasus. It has a spectral type of K0Ib Hdel0.5, which indicates that it is a less luminous K-type supergiant with strong H-δ Balmer lines. The star has expanded to 81 times the radius of the Sun, and has an effective temperature of 4,185 K.
References
K-type supergiants
Pegasus (constellation)
Pegasi, 12
8321
207089
107472
Durchmusterung objects | 12 Pegasi | [
"Astronomy"
] | 112 | [
"Pegasus (constellation)",
"Constellations"
] |
68,611,033 | https://en.wikipedia.org/wiki/James%20Marrow | Thomas James Marrow (born 23 November 1966) is a British scientist who is a professor of nuclear materials at the University of Oxford and holds the James Martin Chair in Energy Materials. He specialises in physical metallurgy, micromechanics, and X-ray crystallography of engineering materials, mainly ceramic matrix composite and nuclear graphite.
Biography
Early life and education
James Marrow was born on 23 November 1966 in Bromborough, Wirral to John Williams Marrow and Mary Elizabeth Marrow. He attended Wirral Grammar School for Boys, then graduated with a 1st Class Honours Master of Arts (M.A.) in Natural Sciences (Materials Science) from the University of Cambridge in 1988, where he was a student at Clare College, Cambridge, before pursuing and completing a Doctor of Philosophy degree in 1991. During his PhD, supervised by Julia King, he studied fatigue mechanisms in embrittled duplex stainless steel.
Career
From 1992 to 1993, Marrow was appointed as postdoctoral research associate in the Department of Materials, University of Oxford, and a junior research fellow at Linacre College, Oxford, but moved with an Engineering and Physical Sciences Research Council (EPSRC) postdoctoral research fellowship to the School of Metallurgy and Materials, University of Birmingham. In 2001, he joined the Manchester Materials Science Centre, University of Manchester, as senior lecturer in physical metallurgy, where he became assistant director of Materials Performance Centre in 2002 and the director in 2009.
Marrow moved to the University of Oxford to become Oxford Martin School co-director of the school programme in Nuclear and Energy Materials from 2010 to 2015, Professor in Energy Materials, Department of Materials, Oxford University, and Fellow of Mansfield College, Oxford. Marrow is the Associate Head of Department of Materials (Teaching).
Marrow is a council member of the UK Forum for Engineering Structural Integrity (FESI), UK representative for the European Energy Research Alliance Joint Programme on Nuclear Materials, member (ex-chair) of the OECD-NEA Expert Group on Innovative Structural Materials, independent advisor to the UK Office of Nuclear Regulation on materials/structural integrity, and UK representative on Graphite for BEIS to the Generation IV International Forum. Marrow is the co-director of the Nuclear Research Centre (NRC), which is a joint venture between the University of Bristol and the University of Oxford to train new nuclear scientists and engineers.
Personal life
Marrow married Daiva Kojelyte in 1998 and he is a father of a son and a daughter.
Research
Marrow's research focuses on the degradation of structural materials, the role of microstructure, and the mechanisms of materials ageing. A key aspect is the investigation of fundamental mechanisms of damage accumulation - including irradiation - using novel materials characterisation techniques. This has concentrated recently on computed X-ray tomography and strain mapping by digital image correlation and digital volume correlation, together with X-ray and neutron diffraction. He applies these techniques to study the degradation of Generation IV nuclear materials such as graphite and silicon carbide composites, as well as new materials for electrical energy storage.
Public engagement
Marrow is part of the energy generation zone of I'm a Scientist, Get me out of here! He has also been a key developer and academic consultant for the Dissemination of IT for the Promotion of Materials Science (DoITPoMS). Global Cycle Network Technology (GCN Tech) interviewed Marrow about carbon fibre fatigue and strain in 2022.
See also
References
British materials scientists
Academics of the University of Oxford
1966 births
Living people
British nuclear engineers
Metallurgists
Alumni of the University of Cambridge | James Marrow | [
"Chemistry",
"Materials_science"
] | 727 | [
"Metallurgists",
"Metallurgy"
] |
68,611,074 | https://en.wikipedia.org/wiki/Joe%20Murphy%20%28contractor%29 | John "Joe" Murphy (1917 – 2 August 2000) was an Irish civil engineering contractor. In his early life he worked as a police officer in the Garda Síochána but moved to England in 1945 to work in construction with his brother John Murphy. After ten years Joe established his own company, Murphy Limited, which became known as "Grey Murphy" to distinguish it from his brother's "Green Murphy" (J Murphy and Sons). Grey Murphy specialised in below-ground works, while Green Murphy specialised in above-ground works. Grey Murphy did well during the 1960s building boom and grew to become one of the largest Irish-owned construction firms.
Early life
Joe Murphy was born as John Murphy in Cahersiveen, County Kerry in Ireland in 1917. He was educated at a national school at Knockeen, County Waterford, before joining the Garda Síochána (police service). Murphy travelled to England in 1945 to join his older brother, who was working in the construction industry there. Murphy's brother had adopted the name John when he arrived in England, so Murphy adopted "Joseph" to avoid confusion. The pair worked as labourers before setting up their own sub-contracting firm. One of the firm's first projects was to remove hazards to shipping on the Dover–Calais route in the English Channel.
Grey Murphy
After 10 years in partnership with his brother, Murphy left the firm to establish his own company specialising in cable-laying. Joe Murphy's company was Murphy Limited (under JMCC Holdings), while his brother's firm was J Murphy and Sons. The firms distinguished from each other as "Grey Murphy" and "Green Murphy" respectively, for the colours of their company vehicles. Grey Murphy tended to focus on below-ground work and Green Murphy on above-ground. At one stage the two companies accounted for 10% of the UK construction market.
Murphy's company, as well as other Irish-owned contractors, did well during the 1960s building boom. Murphy became a member of the fashionable Irish Club in Eaton Square, London, which became a centre for industry gossip. Murphy and his company became renowned for valuing good workmanship, for paying high wages and for employing Irish nationals. His friend, the actor Joe Lynch, claimed that Murphy employed more Irishmen than any firm based in Ireland.
Grey Murphy's Irish subsidiary, JMSE, was accused of bribing the senior politician Ray Burke. The company was investigated by the Flood Tribunal established in 1997. Murphy was not called in front of the tribunal due to ill health but was interviewed by them in Guernsey, though no action was taken against Murphy or the company.
Personal life
Murphy's wife died in 1962, leaving him to raise a daughter and a three-month-old son. In 1968 Murphy remarried, to the sister of his late wife; they had no further children. In the 1960s Murphy and his brother invested heavily in the Isle of Man-based bank the International Finance and Trust Corporation. The bank collapsed and both men lost millions of pounds. Murphy was briefly in crisis, but eventually recovered around 80% of his investment. Murphy later moved to Guernsey and lived there as a tax exile.
Murphy's second wife died in 1991. Murphy died of cancer at home in Guernsey on 2 August 2000. At the time his company was one of the top 10 largest Irish-owned building firms; Murphy was personally worth £36 million. The company entered administration in 2013 and was closed down.
References
1917 births
2000 deaths
People from Cahersiveen
Civil engineering contractors
20th-century Irish businesspeople
Irish emigrants to the United Kingdom | Joe Murphy (contractor) | [
"Engineering"
] | 733 | [
"Civil engineering",
"Civil engineering contractors"
] |
68,611,123 | https://en.wikipedia.org/wiki/Grand%20National%20Rink | The Grand National Rink was an outdoor skating rink located in the Brockton Village neighbourhood of Toronto, Ontario, Canada from 1896 to 1902. At the time, it was the largest open-air rink in the city. Its location is now the site of the McCormick Playground Arena at McCormick Park in the Little Portugal neighbourhood.
History
Business merchant Andrew Wheeler Green owned the Grand National Rink at 153 Brock Avenue, south of Dundas Street. Opened in December 1896, the north side of the grounds featured the ice rink and a heated bandstand. Expansion plans began in March 1897 to add new amenities. By May 1900, a new bandstand was constructed and the grounds featured a large fountain surrounded by evergreen trees. A basketball court was added along with a race track for sprinting and distance running and an athletic field for jumping and vaulting. One corner had a summertime outdoor roller skating rink and an open-air hockey rink for the winter.
Skating carnivals were held at the Grand National Rink along with speed skating races that attracted crowds of up to 1,000 spectators. In January 1902, the rink was awarded the bid by the Amateur Skating Association of Canada to hold the Ontario racing championships during the first week of February. Event organizers expected the tournament to attract a wide array of speed skaters from across the country and Green anticipated large attendance numbers. However, its location was too distant from the city's downtown to draw a big crowd and the gathering became a local sporting event with a provincial name. The financial losses Green incurred forced the closure of the Grand National Rink in 1902.
Reopenings
The grounds of the Grand National Rink remained vacant until the end of 1907, which was then followed by two brief reopenings.
Royal Alexandra Rink
The north end of the Grand National Rink became the Royal Alexandra Rink, reopened as an outdoor hockey rink in January 1908 at 189 Brock Avenue. Its secretary was Thomas Bert Andrew, a hockey player with the Bank of Toronto Hockey Club in 1904 whose brother, William Herbert Andrew, attended the coronation of King Edward VII and Queen Alexandra in 1902. The last scheduled hockey game at the Royal Alexandra Rink was held in March 1908. By April, the rink and its adjoining property became the grounds for a three-acre (1.2 ha) baseball field with a large bleacher-seating area.
Brock Avenue Rink
Toronto Marlboros treasurer Arthur Hillyard Birmingham and his brother, team captain Herbert Frederick Birmingham, organized a consortium of hockey players to bring the Toronto Professional Hockey Club, a predecessor of the Toronto Maple Leafs, over to the Eastern Canada Hockey Association (ECHA) in response to the hockey club's withdrawal from the Ontario Professional Hockey League on November 19, 1909. The Birmingham brothers telegraphed a proposal to the ECHA on November 24 about their plan to erect a large canvas roof over the site of the former Grand National Rink and install wooden sideboards for a hockey rink by the end of 1909. The cost to build the temporary structure with a seating capacity for 4,000 people was . Construction of a permanent hockey arena for the professional team was scheduled for the following year on the grounds of the baseball field. The ECHA, which then became the Canadian Hockey Association (CHA), accepted the proposal on the condition that the new indoor arena had to be ready to house the Toronto club by the next summer.
The former Royal Alexandra Rink became the Brock Avenue Rink, reopened in December 1909 at 189 Brock Avenue. The ice rink featured amateur hockey games, skating carnivals and speed skating races. When the CHA dissolved on January 15, 1910, its hockey teams were transferred over to the National Hockey Association and its agreement with the Birmingham brothers came to an end. The last known skating event at the Brock Avenue Rink was held in March 1910 and the Mutual Street Arena in downtown Toronto became the first home arena of the Toronto Hockey Club and, subsequently, the Toronto Maple Leafs.
McCormick Park
The city of Toronto purchased the property of the former Grand National Rink for in December 1910 for the purpose of establishing a playground. The parcel of land became the McCormick Playground in 1911, named in recognition of Mary Virginia McCormick, the daughter of American inventor Cyrus Hall McCormick who lived in Toronto in 1908 and donated to the Toronto Playgrounds Association in 1910. The McCormick Recreation Centre opened at the north end of the property in 1912 at 163 Brock Avenue.
By 1963, the outdoor playground became known as McCormick Park. A new McCormick Recreation Centre was opened in 1964 at 66 Sheridan Avenue, located immediately east of the original building which itself became the site for the McCormick Playground Arena in 1972, an indoor skating arena at 179 Brock Avenue.
List of notable speed skaters
Notable athletes who skated at the Grand National Rink include the following:
Alice Louisa "Louie" Hern, Toronto women's skating champion in 1900 in the mile-long (1.6 km) mixed pairs who married her skating partner in 1902.
John S. Johnson, American speed skating world record holder in 1895 who competed in the mile-long (1.6 km) race at the rink in 1901.
John "Johnny" Nilsson, American speed skating world record holder in 1897 and 1900 who competed against Johnson at the rink in 1901.
William Charles Lawrence "Larry" Piper, Canadian skating champion in 1901 in the 220-yard (200 m) hurdles who later became a professional baseball player in the minor leagues in 1908.
Frederick "Fred" James Robson, Canadian skating champion in the 220-yard (200 m) straightaways in 1900 and 1901 who later won speed skating world records in 1904 and 1911.
Lot Roe, Ontario skating champion in the two-mile (3.2 km) and five-mile (8 km) races at the rink in 1902 who later won a speed skating world record in 1910.
Lewis "Lou" Leslie Walker, Toronto men's skating champion in 1900 in the mile-long (1.6 km) mixed pairs with Hern who later married her in 1902.
References
Defunct sports venues in Toronto
Ice rinks
1896 establishments in Canada
1902 disestablishments in Canada
1908 establishments in Canada
1908 disestablishments in Canada
1909 establishments in Canada
1910 disestablishments in Canada | Grand National Rink | [
"Engineering"
] | 1,248 | [
"Structural engineering",
"Ice rinks"
] |
68,611,391 | https://en.wikipedia.org/wiki/Aquamarine%20%28gem%29 | Aquamarine is a pale-blue to light-green variety of the beryl family, with its name relating to water and sea. The color of aquamarine can be changed by heat, with a goal to enhance its physical appearance (though this practice is frowned upon by collectors and jewelers). It is the birth stone of March.
Aquamarine is a fairly common gemstone, rendering it more accessible for purchase, compared to other gems in the beryl family. Overall, its value is determined by weight, color, cut, and clarity.
It is transparent to translucent and possesses a hexagonal crystal system. Aquamarine mainly forms in granite pegmatites and hydrothermal veins; its formation is a lengthy process that can take millions of years.
Aquamarine occurs in many countries around the world, and is most commonly used for jewelry, decoration, and its reputed properties.
Aquamarine is mainly extracted through open-pit mining; however, underground mining is also used to access deeper reserves.
Aquamarine is a durable gemstone, but it is recommended to store it separately from other gems to prevent scratches.
Famous aquamarines include the Dom Pedro, the Roosevelt Aquamarine, the Hirsch Aquamarine, Queen Elizabeth's Tiara, Meghan Markle's ring, and the Schlumberger bow.
Name and etymology
The name aquamarine comes from the Latin aqua ("water") and marine, deriving from the Latin marina ("of the sea"). The word aquamarine was first used in the year 1677.
The word aquamarine has been used as a modifier for other minerals like aquamarine tourmaline, aquamarine emerald, aquamarine chrysolite, aquamarine sapphire, or aquamarine topaz.
Physical properties
Aquamarine is blue with hues of green, caused by trace amounts of iron found within the crystal structure. Its color can vary from pale to vibrant, and it is transparent to translucent. Better transparency in aquamarine gemstones means that light may pass through the crystal with less interference. Aquamarine crystallizes in the hexagonal crystal system, forming prismatic crystals with a hexagonal cross-section. These crystals can range from microscopic to enormous in size and frequently feature faces with vertical striations. The lustre of aquamarine ranges from vitreous to resinous; when cut and polished correctly, it can have a glass-like brilliance and sheen.
Chemical composition
Aquamarine has a chemical composition of Be3Al2Si6O18, also containing Fe2+. It belongs to the beryl family, being a beryllium aluminum silicate mineral, and is closely related to emerald, morganite, and heliodor. Aquamarine is chemically stable and resistant to most common chemicals and acids. It has a hardness of 7.5–8 on the Mohs scale of mineral hardness, which makes it a very suitable gem for everyday wear. While aquamarine often contains no inclusions, it may possess them, with content such as mica, hematite, saltwater, biotite, rutile or pyrite.
Geological formation
Aquamarine mainly forms in granite pegmatites (coarse-grained igneous rock) and hydrothermal veins. Pegmatites arise from the residual liquid left behind after granitic magma crystallizes. These residual fluids, which are rich in volatile elements and minerals such as silicon, aluminum, and beryllium, concentrate as the magma cools and solidifies.
Aquamarine may also be formed by hydrothermal fluids, which are hot, mineral-rich solutions. These liquids contain dissolved minerals and metals as they move through fissures and cavities in the crust of the Earth. Fractures, faults, and veins are just a few of the geological environments that hydrothermal systems can be linked to.
Beryllium is a necessary component for the production of aquamarine, a type of beryl. Although beryllium is a relatively uncommon element in the crust of the Earth, it can be found in concentrated forms in some geological settings. These include beryllium-rich hydrothermal systems and granite pegmatites, which contain large amounts of beryllium-bearing minerals.
The dissolved elements start to precipitate out of the solution and form crystals as the hydrothermal fluids cool and come into contact with the right minerals and circumstances. Crystals of beryl, which include aquamarine, begin to form in pegmatite veins and host rock fissures or cavities. Aquamarine crystals grow over long periods, which enables them to take on their distinctive hexagonal prismatic shape.
Formation is a very slow process that can take millions of years. The settings in which aquamarine forms can vary and may lead to variations in gem quality, size, and color.
Value
The value of aquamarine is determined by its weight, color, cut, and clarity. Due to its relative abundance, aquamarine is comparatively less expensive than other gemstones within the beryl group, such as emerald or bixbite (red beryl); however, it is typically more expensive than similarly colored gemstones such as blue topaz. Maxixe is a rarer, deep-blue variant of aquamarine, but its color can fade in sunlight. The color of maxixe is caused by NO3. Dark-blue maxixe color can be produced in green, pink or yellow beryl by irradiating it with high-energy radiation (gamma rays, neutrons or even X-rays). Naturally occurring blue-hued aquamarine specimens are more expensive than those that have undergone heat treatment to reduce yellow tones caused by ferric iron. Cut aquamarines that are over 25 carats will have a lower price per carat than smaller ones of the same quality. Overall, the quality and color will vary depending on the source of the gem.
In culture
Aquamarine is the birth stone for the month of March. Due to its color, it has historically been used as a symbol of youth and happiness, which, along with its name, has led Western culture to connect it with the ocean. Ancient tales claimed that aquamarine came from the treasure chests of mermaids, which led sailors to use the gemstone as a lucky charm to protect against shipwreck. Additionally, ancient Romans believed the stone had healing properties, as it is almost invisible when submerged in water.
The Chinese used it to make seals, and showpiece dolls. The Japanese used it to make netsuke.
The Egyptians, Greeks, Hebrews, and Sumerians all believed that aquamarine stones were worn by the High Priest of the Second Temple. It was said that these stones were engraved to represent the six tribes of Israel. The Greeks also engraved designs into aquamarine two thousand years ago, turning them into intaglios.
In the modern era, aquamarine is mainly used for jewelry, decoration, and its reputed properties. It can be cut and shaped into rings, earrings, necklaces, and bracelets.
Aquamarine became a state gem for Colorado in 1971.
Occurrence
Aquamarine can be found in countries like Afghanistan, China, Kenya, Pakistan, Russia, Mozambique, the United States, Brazil, Nigeria, Madagascar, Zambia, Tanzania, Sri Lanka, Malawi, India, Zimbabwe, Australia, Myanmar, and Namibia. The Brazilian state of Minas Gerais is a major source of aquamarine.
Aquamarine can mostly be found in granite pegmatites. It can also be found in veins of metamorphic rocks that became mineralized by hydrothermal activity.
The largest known example is the Dom Pedro aquamarine, found in Pedra Azul, Minas Gerais, Brazil, in the late 1980s. It weighs roughly 4.6 pounds, was cut from a 100-pound aquamarine crystal, and measures 10,363 carats. It resides in the National Museum of Natural History in Washington, D.C.
Mining and extraction
The initial stages of the aquamarine mining process involve prospecting and exploration: finding prospective locations or regions with aquamarine reserves. Geological mapping, remote sensing, sampling, and other methods are used by geologists and mining firms to locate potentially aquamarine-containing geological formations and structures. Preparation of the site is the next step, which includes removing any vegetation, leveling the land, and constructing facilities such as access roads and workspaces. Aquamarine can be mined using both open-pit and underground techniques, depending on the size of the operation, the features of the deposit, and environmental conditions.
The most popular technique for extracting aquamarine on a large scale is open-pit mining. In order to reveal the aquamarine-bearing ore, the soil, vegetation, and rock cover must be removed. The ore is extracted using trucks, bulldozers, and excavators, to remove the material.
Underground mining may occasionally be used to obtain aquamarine reserves. This process entails digging shafts and tunnels to reach the ore bodies or veins that contain gems. When the aquamarine deposit is deep or the surrounding rock is too hard for open-pit extraction, underground mining is used, even though it can be more difficult and expensive than open-pit mining.
After extraction, the ore containing aquamarine is delivered to a processing plant. To extract the aquamarine crystals from the surrounding rock and other minerals, the ore is crushed, processed, and occasionally cleaned. The aquamarine can be concentrated and purified using a variety of methods, such as magnetic separation, froth flotation, and gravity separation.
The aquamarine crystals are then sorted according to size, shape, color, and clarity following the initial processing. The gemstones are assessed and graded by gemologists and experts according to predetermined standards, such as the four C's (color, clarity, cut, and carat weight). Only the best aquamarine crystals are chosen to be used in jewelry made of gemstones.
Care and maintenance
Aquamarine is classified as a durable gem; however, it may still be damaged. In storage, it is advised to keep it on its own, away from other gemstones, to prevent scratches. Warm soapy water and a soft brush are the best way to clean this gemstone, though ultrasonic cleaners are also relatively safe for aquamarine.
Alternative uses
Although aquamarine is mainly used for jewelry, aquamarine powder has proven to be a beneficial ingredient in cosmetics. It has a binding and skin protecting function that ensures protection of the skin from external influences.
Notable examples
See also
List of gemstones
List of minerals
References
Gemstones
Beryl group | Aquamarine (gem) | [
"Physics"
] | 2,162 | [
"Materials",
"Gemstones",
"Matter"
] |
68,612,162 | https://en.wikipedia.org/wiki/Arazu | Arazu is the Babylonian god of construction and or crafts. Arazu was created by Ea, with one version saying he was created in order to build and restore temples.
It has been interpreted that Arazu is a priest.
References
Mesopotamian gods
Handicraft deities
Construction deities | Arazu | [
"Engineering"
] | 58 | [
"Construction",
"Construction deities"
] |
68,613,165 | https://en.wikipedia.org/wiki/Warazan | was a system of record-keeping using knotted straw at the time of the Ryūkyū Kingdom. In the Southern Ryukyuan languages of the Sakishima Islands it was known as barasan and on Okinawa Island as warazani or warazai. Formerly used in particular in relation to the "head tax", it is still to be found in connection with the annual , to record the amount of miki or sacred sake dedicated.
See also
Kaidā glyphs
Naha Tug-of-war
Quipu
References
Ryukyu Kingdom
Japanese writing system
Knots
Mathematical notation
Recording
Proto-writing
ja:結縄#沖縄 | Warazan | [
"Mathematics"
] | 128 | [
"nan"
] |
68,613,853 | https://en.wikipedia.org/wiki/Leticia%20Myriam%20Torres%20Guerra | Leticia Myriam Torres Guerra (born September 9, 1955) is a Mexican chemist.
Her research work focuses on the development and synthesis of advanced materials such as semiconductors and their application as powders and films in renewable energy and sustainable decontamination projects.
In 2005, she was appointed head of the Faculty of Civil Engineering's Department of Ecomaterials and Energy at the Autonomous University of Nuevo León (UANL). As of 2019, she is the general director of the .
Biography
Leticia Myriam Torres Guerra was born in Monterrey on September 9, 1955. She graduated from UANL with a licentiate in industrial chemistry in 1976, and earned her doctorate in advanced ceramic materials at the University of Aberdeen in 1984. In 1985, she began her work as a research professor at UANL's Faculty of Chemical Sciences, and went on to receive the university's research award 15 times by 2010. She became a Level 3 member of the Sistema Nacional de Investigadores in 1986, remaining the only woman at that level for ten years.
Other positions she has held are deputy director of research of the UANL Faculty of Chemical Sciences from 1995 to 2001, and deputy director of scientific and technological development of the National Council of Science and Technology (CONACYT) from 2011 to 2013. During 2014 and 2015 she was a certified leader in renewable energies and energy efficiency at Harvard University. She has been a member of the Mexican Academy of Sciences since 1999, the Mexican Materials Society since 2009, and the International Union of Materials Research Societies since 2017. She is on four committees of Mexico's .
Torres founded the Center for Research and Development of Ceramic Materials (active from 1990 to 1995) at UANL's Faculty of Chemical Sciences. She has carried out technological developments in collaboration with the industrial sector, including an agreement with the Vitro Group in 1996 to teach a master of science program with a specialty oriented to glass, and one with Cemex to implement a special UNI-EMPRESA scholarship program.
In 2019, she was named general director of the .
Research
Torres' work has focused on materials science; she began her research with the synthesis of advanced ceramic materials and crystal chemistry. Her most notable scientific investigations have focused on the synthesis and modification of semiconductors such as titanates, tantalates, and zirconates of alkali and alkaline earth metals for decontamination of air, soil, and water through photocatalysis, as well as their use in hydrogen production. The materials developed in her work group have shown high photoelectrocatalytic efficiency, allowing the development of prototypes of an "artificial leaf" to transform solar energy into chemical energy.
Awards and recognition
2012: "Flama, Vida y Mujer" award from the Autonomous University of Nuevo León
2015: "Master of Business Leadership" and "Master of Business Management" distinctions from the World Confederation of Businesses in Houston, Texas
2015: Medal of Civic Merit from the State of Nuevo León for her successful performance in the area of scientific research
2018: National Prize for Arts and Sciences in the field of Technology, Innovation, and Development
2019: Valor Regiomontano Award from the Universidad Regiomontana
Selected publications
Diagramas de Equilibrio de Fases. 2012. Patricia Quintana Owen, Leticia M. Torres-Martínez.
Fotosíntesis Artificial: Estudio de fundamentación social, económica, científica y tecnológica de la Fotosíntesis Artificial para la reducción de CO2 ambiental y la producción de energéticos sustentables en México. 2013. Alfredo Aguilar, Diego M.M. de la Escalera, Gisela Aguirre, Jessica Rangel, Jorge A. Ascencio, Leticia Torres, Edilso Reguera, Ricardo Gómez, Lorenzo Martínez.
References
External links
Leticia Myriam Torres Guerra at the Autonomous University of Nuevo León
1955 births
Alumni of the University of Aberdeen
Living people
Materials scientists and engineers
Members of the Mexican Academy of Sciences
Mexican chemists
Mexican women chemists
Women materials scientists and engineers
Autonomous University of Nuevo León alumni
Mexican scientists | Leticia Myriam Torres Guerra | [
"Materials_science",
"Technology",
"Engineering"
] | 855 | [
"Women materials scientists and engineers",
"Materials scientists and engineers",
"Women in science and technology",
"Materials science"
] |
68,614,188 | https://en.wikipedia.org/wiki/CodeMonkey%20%28software%29 | CodeMonkey is an educational computer coding environment that allows beginners to learn computer programming concepts and languages. CodeMonkey is intended for students ages 6–14. Students learn text-based coding in languages such as Python and CoffeeScript, as well as block-based coding with Blockly, alongside the fundamentals of computer science and math.
The software was first released in 2014, and was originally developed by Jonathan Schor, Ido Schor and Yishai Pinchover, supported by the Center for Educational Technology in Israel.
Development history
The CodeMonkey software, a game for children, was developed by three software engineers from Haifa, Israel: the brothers Jonathan and Ido Schor, and Yishai Pinchover. The trio set up a start-up company, CodeMonkey Studios Ltd., supported by the Center for Educational Technology. The game was launched in May 2014 and is currently available in 23 languages. The company has offices in Israel and the USA. Since 2014, CodeMonkey has launched several additional programming tools in the form of games, including Coding Adventure, Game Builder, Dodo Does Math, Banana Tales, CodeMonkey Jr. and Beaver Achiever. In 2018, the company was acquired by TAL Education Group, a Chinese holding company, but remained active as an independent subsidiary and retained its software development team.
In June 2020, CodeMonkey joined the UNESCO distance learning initiative and offered free courses to all schools that were forced to close during the COVID-19 lockdown.
Overview and functionality
The game does not require prior programming experience and is intended for children from the age of 6. It allows users to take their first steps in programming, but also progresses to more advanced topics. The teaching method is experiential, in accordance with the principles of game-based learning: children control animal figures and direct them to collect bananas while overcoming various obstacles. One of the salient features of the game is that it requires writing actual textual code, as opposed to games that represent commands using graphical blocks.
Supported language
The programming languages are Python and CoffeeScript, chosen largely for their friendly syntax. Some games, such as CodeMonkey Jr. and Beaver Achiever, rely on block-based coding using Blockly.
Integration of the game in schools
The games are intended for individual use and for educational classrooms, and have been selectively adopted by schools and school centers in several countries including Israel, the United States, the UK, China, India and Bhutan, among others. CodeMonkey was also integrated into the Israeli Cyber Championship for Elementary Schools (Skillz Olympics) and a high school software program also called Skillz, where CodeMonkey games form part of a coding competition for young students.
See also
Educational programming language
References
Computer science education
Educational programming languages
Pedagogic integrated development environments | CodeMonkey (software) | [
"Technology"
] | 562 | [
"Computer science education",
"Computer science"
] |
68,614,556 | https://en.wikipedia.org/wiki/Agn%C3%A8s%20Sulem | Agnès Sulem (born 1959) is a French applied mathematician whose research topics include stochastic control, jump diffusion, and mathematical finance.
Education
Sulem earned a Ph.D. in 1983 at Paris Dauphine University, with the dissertation Résolution explicite d'Inéquations Quasi-Variationnelles associées à des problèmes de gestion de stock supervised by Alain Bensoussan.
Career
She is a director of research at the French Institute for Research in Computer Science and Automation (INRIA) in Paris, where she heads the MATHRISK project on mathematical risk handling. She is currently a professor at the University of Luxembourg in the Mathematics department. She is a coauthor of the book Applied Stochastic Control of Jump Diffusions (with Bernt Øksendal, Springer, 2005; 2nd ed., 2007; 3rd ed., 2019). Sulem is also an associate editor at the Journal of Mathematical Analysis and Applications and at the SIAM Journal on Financial Mathematics.
References
External links
Agnès Sulem publications indexed by INRIA
1959 births
Living people
French mathematicians
21st-century French women mathematicians
21st-century French mathematicians
Control theorists
Mathematical economists | Agnès Sulem | [
"Engineering"
] | 239 | [
"Control engineering",
"Control theorists"
] |
68,614,902 | https://en.wikipedia.org/wiki/Bridge%20strike | Bridge strike or tunnel strike (also known as bridge bashing) is a type of transport accident in which a vehicle collides with a bridge, overpass, or tunnel structure. Bridge-strike road accidents, in which an over-height vehicle collides with the underside of the structure, occur frequently and are a major issue worldwide. In waterways, the term encompasses water vessel–bridge collisions, including bridge span and support structure collisions.
Impacts
In the United Kingdom, railway bridge strikes (called "bridge bashing") happen on average once every four and a half hours, with a total of 1,789 strikes in 2019. Several bridges have been hit over 20 times in a single year. The total cost borne by the state was around £23 million. In Beijing,
China, 20% of all bridge damage is caused by bridge strikes. The Texas Department of Transportation estimated in 2013 that the average cost to repair a bridge strike is US$180,000.
Even without damage to the bridges, strikes can cause significant damage to the vehicles. There are many examples of buses having their roofs completely sheared off by bridge strikes, such as strikes in Birkenhead in 2014, Long Island in 2018, and Glasgow in 2023. Local communities also incur costs from strikes even when the bridge is undamaged, including the economic impact of road closures as well as police response and cleanup costs. From 2021 to 2022, Network Rail lost £12 million in train delay and cancellation fees.
The severity of damage to bridges caused by strikes can vary depending on the type of impact and the differences in damage resistance among bridges. Some strikes cause no structural damage, and only minor repairs are required. Major structural damage requires extensive repairs; for example, an overpass strike in Nashville in 2018 twisted a structural beam, resulting in a repair cost of nearly one million US dollars. Repairing structural damage can be a long and complicated process. To get traffic moving as soon as possible, officials may need to implement emergency repairs before permanent solutions can be put in place. For example, a bridge strike in York County, Pennsylvania in 2022 required a large temporary stabilizing frame to be added to the top of the bridge. After a second strike on the same spot six months later, the permanent repair was further delayed and its estimated cost rose to US$1.5 million.
A single bridge strike may result in catastrophic bridge damage. An example is the I-5 Skagit River bridge collapse, caused by a truck whose oversize load was taller than the clearance of the bridge. The bridge was a steel through-truss bridge with a "fracture-critical" design, meaning it had non-redundant load-bearing beams. The impact of the oversize load on multiple sway braces was enough to damage load-bearing members and cause a span to collapse, sending vehicles into the river and causing three minor injuries.
Beyond damages and economic impacts, bridge strikes can result in serious injuries and fatalities. In 1994, five passengers of a double-decker bus were killed by a low-bridge strike in Glasgow.
In 2010, a rail bridge strike of a double-decker bus on a New York parkway killed four passengers. In the United States, 13 people died in bridge strike accidents between 2014 and 2018. In 2022, a truck carrying liquefied petroleum gas scraped the underside of a low bridge in Boksburg, South Africa, causing an explosion that killed at least eight people.
Mitigations
Warning signs
Some countries have standards on the minimum vertical clearance of roadways, and any structure that does not meet that clearance must carry warning signs. The United Kingdom has a standard on minimum clearance of a public highway at . Any bridge that does not meet the clearance requirement is considered a "low bridge" and is required to have signage indicating the clearance.
In the United States, the Manual on Uniform Traffic Control Devices requires the placement of warning signs for any structure with vertical clearance less than more than the legal maximum vehicle height. Warning signs installed on the structure can use either diamond or rectangle shapes, while advanced warning signs must be diamonds. States, not the federal government, set maximum vehicle heights. The most common maximum, used by 32 eastern states, is . This effectively sets a requirement for signage at structures in those states lower than . Higher legal maximum vehicle heights are used in other states.
There is a variety of shapes, colors and designs used for low clearance and height limit warning signs, indication signs, and prohibition signs used worldwide.
In the United Kingdom, there is an emphasis on making low-clearance structures more noticeable to drivers. This includes large "LOW BRIDGE" lettering and hazard markings on the bridges themselves. A variety of warning signs are used, including signs at or near the structure, advanced warning signs, and road markings. Advanced warning signs can include a large panel with alternative route information. Similarly, in Australia, advanced traffic instruction signs provide the clearance of a low-clearance bridge ahead, accompanied by a detour direction.
Passive devices
Speed limits, along with road narrowing, speed bumps, rumble strips or other traffic calming techniques, can be used to force vehicles to reduce speed and help prevent bridge strikes.
Tattle tales are a series of chains suspended from an overhead gantry over the roadway. The gantry is also equipped with warning signs and other markings. The chains are set to the clearance of the approaching low-clearance structure. When any part of a vehicle hits the chains, the noise alerts the driver to pay attention to the height restriction signs. Sometimes a bar is hung at the bottom of the chains to make them more noticeable to drivers. This type of device only works at low speeds: high-speed vehicles can run through without making enough noise for the driver to notice.
In special situations where the clearance of a structure decreases along its length, the beginning of the structure may give the appearance of providing enough clearance for vehicles to pass through. As a vehicle drives under it, the clearance is reduced, and the vehicle may become wedged under the structure because it is too late for the driver to realize that there is not enough clearance to go all the way through. In this case, false soffits can be added to the beginning of the structure. False soffits are flaps that extend down from the underside of the structure to match its lowest clearance, providing drivers with a visual indication of the actual clearance they will face. Hazard markings can be applied to the false soffits to make them highly visible.
Other devices include sacrificial structures, which are installed in front of bridges or tunnels at the height of the clearance to absorb the impacts of overheight vehicles. The primary goal of sacrificial structures is to prevent damage to the bridge or tunnel itself. Height restriction bars can be installed ahead of the structure to ensure that overheight vehicles cannot approach the low-clearance structure. Collision protection beams are installed just in front of, or at, the structure; they take the impacts and dissipate the energy of bridge strikes to prevent damage to the structure. As the goal of these devices is to protect the structure, they are used as a last resort and provide no crash prevention benefit: without other devices, the risk to vehicles remains. In 2019, a bus crashed into a height restriction bar in Dubai, resulting in several fatalities.
Overheight vehicle detection systems
Overheight detectors integrated with flashing lights and other warning signs are used in some locations to provide an active warning to drivers of vehicles that exceed the height restrictions of an upcoming bridge or tunnel. Despite the yellow flashing lights, this method may still fail to gain the attention of drivers. The infamous Norfolk Southern–Gregson Street Overpass (also known as the 11-foot-8 Bridge) had 100 crashes on record between 2008 and 2016 even with this type of system. An improvement is to fully integrate the detectors with traffic lights and stop all traffic when an overheight vehicle is detected.
Fully integrated systems of overheight detectors and traffic lights have been used in tunnel operations for decades. By the 1970s, several tunnels in the United States had such systems in place. An example was the system at the Hampton Roads Bridge–Tunnel, which had two detection points. The first point provided audible and visual warnings directing drivers to pull over to an inspection station, where the vehicle would be measured by an authority before being allowed to proceed. If an overheight vehicle failed to stop at the inspection station, a secondary detector would trigger another alarm, displaying a message on variable-message signs to lower the speed limit and eventually turning the light red to stop all traffic. Most detectors in the 1970s used a photocell system, with a light source installed on one pole and a photocell detector on the opposite side of the roadway. A major drawback of photocell detectors was that they produced many false alarms caused by snow and heavy rain. Another type of detector at the time used trip wires that would break if an object at that height hit the wire.
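A two-stage escalation of this kind can be sketched as simple control logic. The sketch below is illustrative only, not the actual tunnel control software; the height limit, action names, and vehicle height are assumptions made for the example.

```python
# Illustrative sketch of a two-stage overheight detection sequence.
# All names and values are assumptions, not the real control system.

def primary_detector(vehicle_height_m: float, limit_m: float) -> list[str]:
    """First detection point: warn the driver and direct to inspection."""
    if vehicle_height_m > limit_m:
        return ["sound_alarm", "flash_warning", "direct_to_inspection_station"]
    return []

def secondary_detector(vehicle_height_m: float, limit_m: float) -> list[str]:
    """Second detection point: escalate if the vehicle did not stop."""
    if vehicle_height_m > limit_m:
        return ["display_vms_message", "lower_speed_limit", "set_signal_red"]
    return []

LIMIT_M = 4.1                          # assumed tunnel height restriction
actions = primary_detector(4.4, LIMIT_M)
if actions:                            # driver ignored the inspection station
    actions += secondary_detector(4.4, LIMIT_M)
print(actions)                         # escalation ends with a red signal
```

Compliant vehicles trigger no actions at either point, so traffic flows normally; only a vehicle over the limit that also skips inspection reaches the red-signal stage.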
Modern detectors use other technologies such as red/infrared photoelectric sensors, infrared laser beams, and ultrasonic transducers. Some are dual-beam systems, which increase reliability by allowing the system to function even with a single beam and reduce false alarms by rejecting birds and other materials. More sophisticated systems include camera systems that capture 3D profiles of vehicles in real time, and microwave radar systems that capture vehicle height profiles. Contact-based detection systems are also in use, such as low-clearance detection bars that move vertically to initiate a trigger.
In the case of overheight vehicle detection systems that are fully integrated with traffic light controls but unmanned, no personnel are available to perform measurements and direct overheight vehicles away from the bridge. The lights turn green after a short stop, with the expectation that drivers will notice the height restrictions and choose an alternative route themselves. Some drivers still do not think the warnings apply to them and proceed to crash into the bridge after the lights turn green.
In the United Kingdom, a more advanced system that targets a specific vehicle was deployed at the Blackwall Tunnel. An overheight detection system installed ahead of the tunnel is equipped with automatic number-plate recognition, which reads the plate number and sends it to variable-message signs instructing the vehicle with that plate number to divert to the provided alternative route. The system reduced tunnel strikes by 38%. In Australia, a water curtain falling across all traffic lanes, with a laser projection of a large stop sign, is used in front of the Sydney Harbour Tunnel as a last resort to stop overheight vehicles that ignore all the warning signs and traffic lights.
Onboard systems
There are several systems that can be fitted to trucks or other commercial vehicles to help prevent bridge strikes. These include specialized truck GPS systems that carry clearance information, systems that measure the actual vehicle height, and onboard sensors that detect an upcoming low-clearance structure. More advanced onboard systems can electronically receive warnings from overheight vehicle detection systems and can apply the brakes if the driver fails to react.
Bridges known for strikes
A high overpass bridge near St Petersburg, Russia, is known as the "Bridge of Stupidity" because it is often struck by vehicles despite many warning signs. In May 2018, after it was struck for the 150th time by a GAZelle truck, a birthday cake was presented to the bridge. This made national news.
Similarly, the Norfolk Southern–Gregson Street Overpass, nicknamed "The Can Opener", in Durham, North Carolina, US, was very frequently struck by vehicles, and received international media attention until it was raised in 2019.
Infrared sensors, which trigger warning signs when a high vehicle approaches, were added to an underpass in Frauenfeld, Switzerland, only after several incidents.
A similar situation exists at an underpass on Guy Street in Montreal, which has a clearance of .
Waterways
In waterways, the term bridge strike may be used when a water vessel collides with a bridge. This may include a collision with the bridge span or with a bridge support structure such as a pier. Bridge protection systems are used to mitigate the effects of a ship strike.
In 2014, the United States Coast Guard published statistics showing that it had investigated 205 bridge strikes in the eleven years prior to publication. All of those collisions involved a fixed, swing, lift or draw bridge. That number was 1.2% of all vessel collision incidents investigated by the Coast Guard. The primary causal factor was the lack of accurate air draft data, the distance from the water surface to the topmost part of the vessel.
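The air draft concept lends itself to a simple clearance check. The sketch below is illustrative only; the safety margin, the example values, and the assumption that the charted clearance is referenced to a fixed water datum are simplifications for the example, not Coast Guard rules.

```python
# Simplified air-draft clearance check. Assumes the bridge's charted
# vertical clearance is referenced to a water datum, so a higher water
# level reduces the usable clearance. Values are illustrative only.

def usable_clearance_m(charted_clearance_m: float,
                       water_above_datum_m: float) -> float:
    """Clearance actually available above the current water surface."""
    return charted_clearance_m - water_above_datum_m

def can_pass_under(air_draft_m: float, charted_clearance_m: float,
                   water_above_datum_m: float, margin_m: float = 0.5) -> bool:
    """True if the vessel's air draft plus a safety margin fits under the span."""
    return air_draft_m + margin_m <= usable_clearance_m(
        charted_clearance_m, water_above_datum_m)

print(can_pass_under(30.0, 35.0, 1.2))   # 30.5 m needed vs 33.8 m available
print(can_pass_under(34.0, 35.0, 1.2))   # 34.5 m needed vs 33.8 m available
```

The check makes the causal factor above concrete: without an accurate `air_draft_m`, the comparison is meaningless no matter how precisely the bridge clearance is known.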
Aviation
Low-flying aircraft can also collide with bridges. For example, Air Florida Flight 90 crashed into a bridge over the Potomac River just after takeoff.
To prevent controlled flight into terrain, tall bridges may bear aviation obstruction lighting that notifies pilots of their presence.
See also
Clearance (civil engineering)
References
Sources
Civil engineering
Accidents | Bridge strike | [
"Technology",
"Engineering"
] | 2,654 | [
"Construction",
"Railway accidents and incidents",
"Civil engineering",
"Bridge disasters caused by collision"
] |
68,615,793 | https://en.wikipedia.org/wiki/Cyclosiloxane | Cyclosiloxanes are a class of silicone material. They are volatile and often used as solvents. The three main commercial varieties are octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5) and dodecamethylcyclohexasiloxane (D6). They evaporate and degrade in air under sunlight.
Octamethylcyclotetrasiloxane (D4)
The octamethylcyclotetrasiloxane silicone liquid has no odor and consists of four repeating units of silicon (Si) and oxygen (O) atoms in a closed loop giving it a circular structure. Each silicon atom has two methyl groups attached (CH3).
Decamethylcyclopentasiloxane (D5)
Decamethylcyclopentasiloxane silicone liquid has no odor and consists of five repeating units of silicon (Si) and oxygen (O) atoms in a closed loop, giving it a circular structure. Each silicon atom has two methyl groups (CH3) attached. Typically it is used as an ingredient in antiperspirants, skin creams, sun protection lotions and make-up. With a low surface tension of 18 mN/m, this material has good spreading properties.
Dodecamethylcyclohexasiloxane (D6)
The dodecamethylcyclohexasiloxane silicone liquid has no odor and consists of six repeating units of silicon (Si) and oxygen (O) atoms in a closed loop giving it a circular structure. Each silicon atom has two methyl groups attached (CH3).
CASRN: 540-97-6. D6 is also covered by CAS No. 69430-24-6, which is associated with the names cyclopolydimethylsiloxane, cyclopolydimethylsiloxane (DX), cyclosiloxanes di-Me, dimethylcyclopolysiloxane, polydimethyl siloxy cyclics, polydimethylcyclosiloxane, cyclomethicone and mixed cyclosiloxane.
See also
Polydimethylsiloxane
Cyclomethicone
Siloxane and other organosilicon compounds
Literature
Cyclosiloxanes (pdf-file), Materials for the December 4-5, 2008 Meeting of the California Environmental Contaminant Biomonitoring Program (CECBP) Scientific Guidance Panel (SGP)
References
Silicones
Chemistry
Solvents
Materials | Cyclosiloxane | [
"Physics"
] | 554 | [
"Materials",
"Matter"
] |
68,617,077 | https://en.wikipedia.org/wiki/Bacteriophage%20%CF%86Cb5 | Bacteriophage φCb5 is a bacteriophage that infects Caulobacter bacteria and other caulobacteria. The bacteriophage was discovered in 1970; it belongs to the genus Cebevirus of the family Steitzviridae and is the namesake of the genus. The bacteriophage is widely distributed in soil, freshwater lakes, streams and seawater, the places where caulobacteria live, and can be sensitive to salinity.
Description
The capsid has icosahedral geometry with T = 3 symmetry and no viral envelope; its diameter is around 26 nm. The genome is a linear, positive-sense single-stranded RNA about 3.4 kb in length; it is monopartite and contains 2 or 3 ORFs. Viral replication occurs in the cytoplasm, and entry into the bacterial cell occurs by penetration via the pilus. Transmission is by contact.
The bacteriophage is similar to the RNA bacteriophages of Escherichia in that it is composed of a single positive-sense single-stranded RNA molecule and a protein coat with two structural proteins, and it apparently contains the genetic capacity to encode a coat protein subunit, a maturation-like protein and a similar RNA replicase. The φCb5 bacteriophage differs from Escherichia RNA bacteriophages in host specificity, salt sensitivity, and the presence of histidine, but not methionine, in the coat protein. As in related bacteriophages, the ORFs encode maturation, coat, RNA replicase, and lysis proteins, but unlike in other members of Leviviricetes, the φCb5 lysis protein gene completely overlaps the RNA replicase gene in a different reading frame. The lysis protein of φCb5 is approximately twice as long as that of the distantly related bacteriophage MS2 and presumably contains two transmembrane helices.
References
Bacteriophages
Riboviria | Bacteriophage φCb5 | [
"Biology"
] | 435 | [
"Viruses",
"Riboviria"
] |
57,665,505 | https://en.wikipedia.org/wiki/Oil%20base | An oil base or oil pedestal is a water-, dirt- and fat-repellent paint that is applied directly as a coating on rough plaster to protect it. It usually consists of a coat of alkyd-resin-based paint and is often applied in staircases, as well as in kitchens and bathrooms of old buildings. It is sometimes found in heavily trafficked hallways.
Use
The oil base protects the wall from soiling, and the surface can easily be wiped with a damp cloth for cleaning. The wall is not damaged by wiping, and dirt cannot stick due to the structure of the alkyd-resin-based paint. Water also cannot penetrate the wall, which is why the oil base is often found in kitchens or bathrooms of old buildings, where tiles are used today. It could also be found in hospitals and heavy-traffic areas such as hallways or staircases in public buildings.
Renewal of the oil base
An oil base cannot simply be repainted for renewal. The old oil base first has to be completely removed, along with the underlying plaster layer. This can be done, for example, by knocking off the top layer of plaster or by grinding; there are also chemical solvents that strip off the paint layer. A new layer of plaster can then be applied, and after it dries, the new coat of paint can be applied. Renewal of the oil base takes time because of the drying phases.
If the oil base is to be covered with wallpaper, it has to be thoroughly roughened first.
References
Paints
Wallcoverings | Oil base | [
"Chemistry"
] | 318 | [
"Paints",
"Coatings"
] |
57,665,852 | https://en.wikipedia.org/wiki/Super%20Mario%20Party | is a 2018 party video game developed by NDcube and published by Nintendo for the Nintendo Switch. It is the eleventh main installment in the Mario Party series and the first for the Nintendo Switch. The game was described as a "complete refresh" of the franchise, bringing back and revitalizing gameplay elements from older titles while also introducing new ones to go along with them. The game was released worldwide on 5 October 2018, and sold 1.5 million copies by the end of the month.
Super Mario Party includes four game boards and 80 minigames. The game received positive reviews from critics. As of 30 June 2024, the game has sold more than 20.84 million copies worldwide, making it the best-selling Mario Party game and one of the top 10 best-selling games on the system. Mario Party Superstars, a game featuring maps remastered from earlier entries and a return to the original formula, was released in 2021. A sequel, Super Mario Party Jamboree, was released on 17 October 2024.
Gameplay
Super Mario Party returns to the traditional turn-based Mario Party style of gameplay for the first time in over a decade; this format had been absent from home console entries since Mario Party 9. The game is played with one Joy-Con controller per player, so additional controllers are needed for multiplayer. In the game's story, Mario and his friends hold a party to determine who should be the Super Star. Bowser appears with Bowser Jr., arguing that he could also be the Super Star. Toad, Toadette and Kamek are appointed as judges and the party begins.
The standard game mode, "Mario Party", features up to four players taking turns independently navigating the game board. Upon the player's turn, a dice block is rolled to determine how many spaces the player moves on the board, and items collected can be used to alter how many spaces the player can move. Each space has a unique function, such as blue and red spaces giving and taking three coins respectively, and good luck and bad luck spaces granting the player helpful or unhelpful consequences.
After each player takes their turn, everyone competes in a minigame that awards coins based on placement. Minigames vary in rules and playstyle, including 4-player free-for-alls, 2-on-2 or 1-on-3 matchups, and games utilizing motion controls or HD Rumble. There are 80 minigames in total across all game modes, in which objects are colored according to Mario Party 7's color code, and they can all be played independently of the game board in the Free Play section.
A single star is located at a random location on the board at any given time; any player who reaches it can spend ten coins to purchase it. The player with the most stars and coins by the end of the game wins. Coins can additionally be spent on single-use items that give the player certain advantages on the board, such as adding to one's own dice roll, subtracting from another player's dice roll, or using a golden pipe to be taken directly to the star.
One major difference compared to previous home console entries is the introduction of character-specific dice blocks: each character has a unique alternative dice block that has a different selection of numbers compared to the standard dice block, including a slightly higher chance of 3's (Mario), rolling only even numbers (Peach), and having a decent chance for a high roll but an equally likely chance to lose coins (Bowser). Another major difference is the incorporation of the ally system from the Nintendo 3DS game Mario Party: Star Rush, wherein each player can recruit up to three allies from the roster. These allies can add additional rolls to the player's dice block, lend the player their character-specific dice block for the duration of the game, and appear as assistance in some of the minigames.
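As a rough illustration of how these trade-offs play out, the dice faces below follow the descriptions above (Mario skewed toward 3s, Peach all even, Bowser high rolls with a chance to lose coins); the exact face values are assumptions made for the example, not data extracted from the game.

```python
import random

# Hypothetical character dice blocks following the descriptions above.
# Negative faces represent a "lose coins, move 0 spaces" result.
STANDARD_DIE = [1, 2, 3, 4, 5, 6]
MARIO_DIE = [1, 3, 3, 3, 5, 6]       # slightly higher chance of 3s
PEACH_DIE = [0, 2, 4, 4, 4, 6]       # even numbers only
BOWSER_DIE = [-3, -3, 1, 8, 9, 10]   # big moves, but a real chance to lose coins

def roll(die: list[int], rng: random.Random) -> tuple[int, int]:
    """Return (spaces_moved, coin_change) for one roll of a dice block."""
    face = rng.choice(die)
    if face < 0:
        return 0, face               # lose coins instead of moving
    return face, 0

rng = random.Random(42)
for die in (STANDARD_DIE, MARIO_DIE, PEACH_DIE, BOWSER_DIE):
    avg_move = sum(roll(die, rng)[0] for _ in range(10_000)) / 10_000
    print(f"{die}: average spaces per turn ~ {avg_move:.2f}")
```

Simulating many rolls this way makes the strategic difference visible: a risky die can have a higher average movement than the standard die while still occasionally costing coins.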
Beyond the standard Mario Party mode, Super Mario Party features several secondary game modes for multiplayer. The second, known as "Partner Party", has two teams of two players also searching for stars, but the players are free to move in any direction and cross their path, similar to the "Toad Scramble" mode from the aforementioned Star Rush. This mode features unique items and redesigned board layouts. In "River Survival", four players must work together to navigate through a series of white-water rapids under a time limit. This mode features exclusive minigames that focus on cooperation and reward the team with time bonuses. In "Sound Stage", players compete in a series of motion-controlled rhythm games in one of three difficulty settings, and the player with the highest score by the end wins.
The final multiplayer-focused game mode is "Toad's Rec Room", where players can take multiple Nintendo Switch consoles and arrange and synchronize them to create larger, multi-monitor environments. The minigames featured in this mode include an enhanced version of the "Shell Shocked" minigame from the Nintendo 64 entries, and a unique take on toy baseball. The last major game mode in Super Mario Party is "Challenge Road", essentially a single-player campaign in which the player participates in every minigame featured in the game, including those from River Survival and Sound Stage, now with unique challenges attached to them. This mode is unlocked once all of the minigames have been played at least once in their respective modes.
Beyond local play, Super Mario Party features online multiplayer for the first time in the Mario Party series. In the game's "Online Mariothon" mode, players are only able to play a selection of ten of the game's 80 minigames with other players online, independent of the board games. Here, players compete in five randomly selected minigames out of the aforementioned ten, aiming to get the highest combined score by the end. It also features leaderboards and a ranking system, as well as rewards that the player can receive for playing the mode. At launch, the two board game modes, Mario Party and Partner Party, were restricted to offline play. However, on 27 April 2021, Nintendo released patch update 1.1.0, which allows for full access to Mario Party, Partner Party, and Free Play for online multiplayer. This update also allows for the use of the Nintendo Switch's built-in invite feature. All of these modes can be played with people on one's friend list or in lobbies protected by a passcode, and 70 of the 80 total minigames can be played online, with the ten omitted minigames being from the Sound Stage mode.
Super Mario Party features a roster of twenty playable characters. The roster includes Mario, Luigi, Yoshi, Peach, Daisy, Rosalina, Wario, Waluigi, Donkey Kong, Diddy Kong, Koopa Troopa, Hammer Bro, Dry Bones, Shy Guy, Boo, Bowser, and Bowser Jr., all of whom are returning characters, with Bowser being fully playable for the first time. New playable characters to the series include Pom Pom, Goomba, and Monty Mole, none of whom have previously been a playable character in Mario Party; although this is the former's debut in the series, the latter two have appeared as NPCs throughout the series.
Development
Super Mario Party was developed by NDcube, who have handled every Mario Party title since Mario Party 9 (2012). Nintendo revealed Super Mario Party on 12 June 2018, during their Nintendo Direct presentation for E3 2018, where they also announced that the game would release on 5 October 2018, exclusively for the Nintendo Switch. In August 2018, Nintendo stated that Super Mario Party would not support the Nintendo Switch Pro Controller. Later in September 2018, it was revealed that Super Mario Party would not support handheld mode, as the game supports one Joy-Con per player.
Reception
Critical response
Super Mario Party received "generally favorable reviews" according to review aggregator Metacritic, becoming the highest-rated game in the series at the time since Mario Party 2. In Japan, four critics from Famitsu gave the game a total score of 34 out of 40.
Samuel Claiborn of IGN claimed that "Super Mario Party is the best Party in two console generations," and that "it delivers the couch multiplayer experience the series is famous for". Jordan Ramée of GameSpot particularly praised the inclusion of character-specific dice blocks, stating they "added small moments of strategy into a series that has for too long solely relied on randomness". Evan Slead of Electronic Gaming Monthly, like Ramée, emphatically welcomed the removal of the car mechanic from the two previous home console entries, Mario Party 9 and Mario Party 10. Alex Olney of Nintendo Life, like Slead and Claiborn, not only welcomed the omission of the car but also commended the game's overall presentation. Olney particularly singled out the new hub world as a point of praise, noting that it added charm to the game even if it was not truly a necessary inclusion. While the game was praised for its wide variety of game modes and characters, some of the highest praise has gone to the minigames, with Game Informer's Brian Shea claiming that "the highlights shine bright enough that when the occasional dud pops up, I don't mind". Two common points of criticism were that there were only four boards for both Mario Party and Partner Party, severely limiting the game's replayability according to many outlets, and the restriction of only being able to play with one "half" of a Joy-Con controller per player.
Sales
Super Mario Party sold 142,868 physical copies within its first two days in Japan, outpacing its two home console predecessors. Super Mario Party debuted at #5 on United Kingdom sales charts for physical copies sold, even during a very crowded release schedule. By 31 October 2018, total sales of Super Mario Party reached over 1.5 million copies, far exceeding Nintendo's expectations and making it the fastest-selling Mario Party game since Mario Party 6. As of March 2019, the game has sold 1.22 million copies in Japan. By 30 June 2024, the game sold 20.84 million units.
Accolades
Notes
References
External links
2018 video games
Asymmetrical multiplayer video games
CAProduction games
Casual games
Cooperative video games
Mario Party
Multiplayer and single-player video games
Nintendo Cube games
Nintendo Switch games
Nintendo Switch-only games
Party video games
Video games developed in Japan
Video games that use Amiibo figurines | Super Mario Party | [
"Physics"
] | 2,141 | [
"Asymmetrical multiplayer video games",
"Symmetry",
"Asymmetry"
] |
57,667,287 | https://en.wikipedia.org/wiki/Dirichlet%20hyperbola%20method | In number theory, the Dirichlet hyperbola method is a technique to evaluate the sum $F(x) = \sum_{n \le x} f(n)$,
where $f$ is a multiplicative function. The first step is to find a pair of multiplicative functions $g$ and $h$ such that, using Dirichlet convolution, we have $f = g * h$; the sum then becomes $F(x) = \sum_{ab \le x} g(a) h(b)$,
where the inner sum runs over all ordered pairs $(a, b)$ of positive integers such that $ab \le x$. In the Cartesian plane, these pairs lie on a hyperbola, and when the double sum is fully expanded, there is a bijection between the terms of the sum and the lattice points in the first quadrant on the hyperbolas of the form $ab = k$, where $k$ runs over the integers $1 \le k \le x$: for each such point $(a, b)$, the sum contains a term $g(a) h(b)$, and vice versa.
Let $y$ be a real number, not necessarily an integer, such that $1 < y < x$, and let $z = x/y$. Then the lattice points can be split into three overlapping regions: one region is bounded by $1 \le a \le y$ and $1 \le b \le x/a$, another region is bounded by $1 \le b \le z$ and $1 \le a \le x/b$, and the third is bounded by $1 \le a \le y$ and $1 \le b \le z$. In the diagram, the first region is the union of the blue and red regions, the second region is the union of the red and green, and the third region is the red. Note that this third region is the intersection of the first two regions. By the principle of inclusion and exclusion, the full sum is therefore the sum over the first region, plus the sum over the second region, minus the sum over the third region. This yields the formula

$$\sum_{n \le x} f(n) = \sum_{a \le y} g(a) \sum_{b \le x/a} h(b) \;+\; \sum_{b \le z} h(b) \sum_{a \le x/b} g(a) \;-\; \sum_{a \le y} g(a) \sum_{b \le z} h(b).$$
Examples
Let $d(n)$ be the divisor-counting function, and let $D(x)$ be its summatory function: $D(x) = \sum_{n \le x} d(n).$
Computing $D(x)$ naïvely requires factoring every integer in the interval $[1, x]$; an improvement can be made by using a modified Sieve of Eratosthenes, but this still requires time roughly proportional to $x$. Since $d$ admits the Dirichlet convolution $d = 1 * 1$, taking $y = z = \sqrt{x}$ in the formula above yields

$$D(x) = \sum_{a \le \sqrt{x}} \left\lfloor \frac{x}{a} \right\rfloor + \sum_{b \le \sqrt{x}} \left\lfloor \frac{x}{b} \right\rfloor - \left\lfloor \sqrt{x} \right\rfloor^2,$$
which simplifies to

$$D(x) = 2 \sum_{a \le \sqrt{x}} \left\lfloor \frac{x}{a} \right\rfloor - \left\lfloor \sqrt{x} \right\rfloor^2,$$
which can be evaluated in $O(\sqrt{x})$ operations.
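The $O(\sqrt{x})$ formula above translates directly into code. Below is a minimal Python sketch (function names are illustrative), with a naïve divisor sieve as a cross-check:

```python
import math

def divisor_summatory(x: int) -> int:
    """D(x) = sum_{n<=x} d(n) via the hyperbola method, in O(sqrt(x)) steps:
    D(x) = 2 * sum_{a<=sqrt(x)} floor(x/a) - floor(sqrt(x))**2."""
    r = math.isqrt(x)
    return 2 * sum(x // a for a in range(1, r + 1)) - r * r

def divisor_summatory_naive(x: int) -> int:
    """Slow reference: sieve the divisor counts d(n) directly, then sum them."""
    d = [0] * (x + 1)
    for a in range(1, x + 1):
        for m in range(a, x + 1, a):
            d[m] += 1
    return sum(d)

assert divisor_summatory(1000) == divisor_summatory_naive(1000)
```

For $x$ around $10^{12}$ the hyperbola method needs only about a million loop iterations, while the sieve-based approach is far out of reach.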
The method also has theoretical applications: for example, Peter Gustav Lejeune Dirichlet introduced the technique in 1849 to obtain the estimate

$$D(x) = x \log x + (2\gamma - 1) x + O(\sqrt{x}),$$
where $\gamma$ is the Euler–Mascheroni constant.
References
Number theory
External links
Discussion of the Dirichlet hyperbola method for computational purposes | Dirichlet hyperbola method | [
"Mathematics"
] | 431 | [
"Discrete mathematics",
"Number theory"
] |
57,670,313 | https://en.wikipedia.org/wiki/IO-Link | IO-Link is a short distance, bi-directional, digital, point-to-point, wired (or wireless), industrial communications networking standard (IEC 61131-9) used for connecting digital sensors and actuators to either a type of industrial fieldbus or a type of industrial Ethernet. Its objective is to provide a technological platform that enables the development and use of sensors and actuators that can produce and consume enriched sets of data that in turn can be used for economically optimizing industrial automated processes and operations. The technology standard is managed by the industry association Profibus and Profinet International. The IO-Link market may surpass $34 billion by 2028.
System overview
An IO-Link system consists of an IO-Link master and one or more IO-Link devices, i.e. sensors or actuators. The IO-Link master provides the interface to the higher-level controller (PLC) and controls the communication with the connected IO-Link devices.
An IO-Link master can have one or more IO-Link ports to which only one device can be connected at a time. This can also be a "hub" which, as a concentrator, enables the connection of classic switching sensors and actuators.
An IO-Link device can be an intelligent sensor, actuator, hub or, due to bidirectional communication, also a mechatronic component, e.g. a gripper or a power supply unit with IO-Link connection. Intelligent with regard to IO-Link means that a device has identification data, e.g. a type designation and a serial number, or parameter data (e.g. sensitivities, switching delays or characteristic curves) that can be read or written via the IO-Link protocol. This allows parameters to be changed by the PLC during operation, for example. Intelligent also means that the device can provide detailed diagnostic information. IO-Link and the data transmitted with it are often used for preventive maintenance and servicing; for example, an optical sensor can be set up so that it reports via IO-Link in good time when it is about to become dirty. Cleaning then no longer comes as a surprise that blocks production; it can be scheduled for a production break.
The parameters of the sensors and actuators are device- and technology-specific, which is why parameter information is provided in the form of an IODD (IO Device Description), written in the description language XML. The IO-Link community provides interfaces to an "IODD Finder", which can be used by engineering or master tools to present the appropriate IODD for a device.
Connector
Cabling takes the form of an unshielded, three- or five-conductor cable, not longer than twenty meters, with a standardized four- or five-pin connector. The master and device pin assignment is based on the specifications in IEC 60947-5-2. For a master, two port classes are defined, port class A and port class B.
Port class A uses M5, M8, or M12 connectors, with a maximum of four pins. Port class B uses only M12 connectors with 5 pins. M12 connectors are mechanically "A"-coded according to IEC 61076-2-101. Female connectors are assigned to the master and male connectors to the device.
At the master, pins 1 and 3 provide 24 V DC power with a maximum of 200 mA for an optional power supply of the IO-Link device. Pin 4 is used as a digital input (DI) or digital output (DO) according to the IEC 61131-2 specification, to allow backward compatibility with proximity sensors according to IEC 60947-5-2 or other sensors or electrical switches.
The IO-Link master sends a wake-up current pulse to get the IO-Link device from the serial input-output (SIO) state into the single-drop digital communication interface (SDCI) state. In the SDCI state the IO-Link master exchanges information frames with the IO-Link device.
In a port class A the pins 2 and 5 are not specified and are left to the manufacturer to define. In a port class B the pins 2 and 5 are configured as an additional power supply.
Protocol
The IO-Link communications protocol consists of communication ports, communication modes, data types, and transmission speeds. The ports are physically located on the master, and provide it a means for connecting with terminal devices and for bridging to a fieldbus or Ethernet. There are four communication modes that can be applied to a port connected to a terminal device: IO-Link, DI, DQ, and Deactivated. IO-Link mode configures the port for bi-directional communications, DI mode configures it as an input, DQ configures it as an output, and Deactivated just simply deactivates the port. There are four data types: process data, value status data, device data, and event data. The protocol can be configured to operate at transmission speeds of either 4.8 kilobaud, 38.4 kilobaud, or 230.4 kilobaud. The minimum transmission time at 230.4 kilobaud is 400 microseconds. An engineering tool is used for configuring the master to operate as the network bridge.
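The port modes and transmission rates described above can be collected into a small illustrative sketch (Python; the names and the timing helper are assumptions of this example, not part of any official IO-Link API — real cycle times also depend on frame structure, device response and idle times):

```python
from enum import Enum

class PortMode(Enum):
    """The four communication modes a master port can be configured to."""
    IO_LINK = "bi-directional SDCI communication"
    DI = "digital input"
    DQ = "digital output"
    DEACTIVATED = "port switched off"

# The three standardized IO-Link transmission rates, in baud.
BAUD_RATES = {"COM1": 4_800, "COM2": 38_400, "COM3": 230_400}

def bit_time_us(rate: str) -> float:
    """Duration of a single bit at the given transmission rate, in microseconds."""
    return 1_000_000 / BAUD_RATES[rate]

def min_frame_time_us(frame_bits: int, rate: str) -> float:
    """Lower bound on the time to transmit frame_bits at the given rate."""
    return frame_bits * bit_time_us(rate)
```

For instance, a bit at COM3 (230.4 kilobaud) lasts a little over 4 microseconds, which is why even short exchanges at the fastest rate take on the order of hundreds of microseconds.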
IO-Link Wireless
IO-Link Wireless is an extension of IO-Link on the physical level. An IO-Link Wireless master ("W-Master") behaves like a master toward the superordinate system; toward the IO-Link Wireless devices ("W-Devices") there are only virtual ports.
A transmission cycle consists of two phases. To transmit output data, the W-Master sends a multicast W-frame (downlink) with data for the W-Devices in assigned time slots. The W-Master then switches to reception and collects, in the uplink, data from the W-Devices, which transmit one after the other according to an agreed fixed scheme.
To secure the transmission, frequency hopping and channel blacklisting are used.
IO-Link Safety
IO-Link Safety is an extension of IO-Link by providing an additional safety communication layer on the existing master and device layers, which thus become the "FS master" and "FS device". One also speaks of the Black Channel principle. The concept has been tested by TÜV SÜD.
IO-Link Safety has also extended the OSSD (Output Signal Switching Device) switching outputs commonly used for functional safety in non-contact protective devices, such as light curtains, to OSSDe. As with standard IO-Link, an FS-Device can be operated both in switching mode as OSSDe and via functionally safe IO-Link communication.
During implementation, the safety rules of IEC 61508 and/or ISO 13849 must be observed.
Literature
Joachim R. Uffelmann, Peter Wienzek, Myriam Jahn: IO-Link. The DNA of Industry 4.0. Edition 1. Vulkan-Verlag GmbH, Essen 2018, .
References
Serial buses
Industrial automation
Communications protocols
Io-Link Communication
Io-Link
Io-Link Sensors | IO-Link | [
"Technology",
"Engineering"
] | 1,481 | [
"Computer standards",
"Industrial engineering",
"Automation",
"Communications protocols",
"Industrial automation"
] |
57,671,069 | https://en.wikipedia.org/wiki/Two%20capacitor%20paradox | The two capacitor paradox or capacitor paradox is a paradox, or counterintuitive thought experiment, in electric circuit theory. The thought experiment is usually described as follows:
Two identical capacitors, each with capacitance $C$, are connected in parallel with an open switch between them. One of the capacitors is charged with a voltage of $V_0$, the other is uncharged. When the switch is closed, some of the charge $Q_0 = C V_0$ on the first capacitor flows into the second, reducing the voltage on the first and increasing the voltage on the second. When a steady state is reached and the current goes to zero, the voltage on the two capacitors must be equal since they are connected together. Since they both have the same capacitance the charge will be divided equally between the capacitors, so each capacitor will have a charge of $\tfrac{1}{2} Q_0$ and a voltage of $\tfrac{1}{2} V_0$.
At the beginning of the experiment the total initial energy in the circuit is the energy stored in the charged capacitor:

$$W_\text{initial} = \tfrac{1}{2} C V_0^2$$
At the end of the experiment the final energy is equal to the sum of the energy in the two capacitors:

$$W_\text{final} = 2 \cdot \tfrac{1}{2} C \left( \tfrac{V_0}{2} \right)^2 = \tfrac{1}{4} C V_0^2$$
Thus the final energy $W_\text{final}$ is equal to half of the initial energy $W_\text{initial}$. Where did the other half of the initial energy go?
Solutions
This problem has been discussed in electronics literature at least as far back as 1955. Unlike some other paradoxes in science, this paradox is not due to the underlying physics, but to the limitations of the 'ideal circuit' conventions used in circuit theory. The description specified above is not physically realizable if the circuit is assumed to be made of ideal circuit elements, as is usual in circuit theory. If the series resistance of the wires and conductors in the circuit is $R$, the initial current when the switch is closed is $I_0 = V_0 / R$.
If the wires connecting the two capacitors, the switch, and the capacitors themselves are idealized as having no electrical resistance or inductance as is usual, then closing the switch would connect points at different voltage with a perfect conductor, causing an infinite current to flow, which is impossible. Therefore a solution requires that one or more of the 'ideal' characteristics of the elements in the circuit be relaxed, which was not specified in the above description. The solution differs depending on which of the assumptions about the actual characteristics of the circuit elements is abandoned:
If the connecting wires are assumed to have any nonzero resistance at all, it is an RC circuit, and the current will decrease exponentially to zero. Since none of the original charge is lost, the final state of the capacitors will be as described above, with half the initial voltage on each capacitor. Since in this state the two capacitors together are left with half the energy, regardless of the amount of resistance half of the initial energy will be dissipated as heat in the wire resistance.
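This can be checked numerically. The sketch below (plain forward-Euler integration, with arbitrary example values for $C$, $V_0$ and $R$) confirms that the heat dissipated in the resistance equals half the initial energy, independent of the value of $R$:

```python
def dissipated_energy(R, C=1e-6, V0=10.0, steps=200_000):
    """Forward-Euler simulation of the two-capacitor RC loop; returns the
    total energy turned into heat in the series resistance R."""
    q1, q2 = C * V0, 0.0          # initial charges on the two capacitors
    tau = R * C / 2               # loop time constant (two C's in series)
    dt = 10 * tau / steps         # integrate over ten time constants
    heat = 0.0
    for _ in range(steps):
        i = (q1 - q2) / (C * R)   # loop current from the voltage difference
        q1 -= i * dt
        q2 += i * dt
        heat += i * i * R * dt    # accumulate I^2 R dt
    return heat

# Half of the initial energy (1/2 C V0^2) is dissipated, whatever R is:
initial = 0.5 * 1e-6 * 10.0**2
assert abs(dissipated_energy(R=100.0) - initial / 2) < 0.01 * initial
```

Changing `R` only rescales the time constant, not the total heat, which is the point of the resistive resolution of the paradox.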
If the wires are assumed to have inductance but no resistance, the current will not be infinite, but the circuit does not have any energy dissipating components, so it will not settle to a steady state, as assumed in the description. It will constitute an LC circuit with no damping, so the charge will oscillate perpetually back and forth between the two capacitors; the voltage on the two capacitors and the current will vary sinusoidally. None of the initial energy will be lost, at any point the sum of the energy in the two capacitors and the energy stored in the magnetic field around the wires will equal the initial energy.
If the connecting wires, in addition to having inductance and no resistance, are assumed to have a nonzero length, the oscillating circuit will act as an antenna and lose energy by radiating electromagnetic waves (radio waves). The effect of this energy loss is exactly the same as if there were a resistance called the radiation resistance in the circuit, so the circuit will be equivalent to an RLC circuit. The oscillating current in the wires will be an exponentially decaying sinusoid. Since none of the original charge is lost, the final state of the capacitors will be as in the case of the resistor, with half the initial voltage on each. Since in this state the capacitors contain half the initial energy, the missing half of the energy will have been radiated away by the electromagnetic waves.
If in addition to nonzero length and inductance the wires are assumed to have resistance, the total energy loss will be the same, half the initial energy, but will be divided between the radiated electromagnetic waves and heat dissipated in the resistance.
Various additional solutions have been devised, based on more detailed assumptions about the characteristics of the components.
Alternate versions
There are several alternate versions of the paradox. One is the original circuit with the two capacitors initially charged with equal and opposite voltages $+V_0$ and $-V_0$. Another equivalent version is a single charged capacitor short-circuited by a perfect conductor. In these cases in the final state the entire charge has been neutralized, the final voltage on the capacitors is zero, so the entire initial energy has vanished. The solutions to where the energy went are similar to those described in the previous section.
See also
List of paradoxes
References
Electrical circuits
Capacitors
Physical paradoxes
Thought experiments in physics | Two capacitor paradox | [
"Physics",
"Engineering"
] | 1,068 | [
"Physical quantities",
"Capacitors",
"Electronic engineering",
"Capacitance",
"Electrical engineering",
"Electrical circuits"
] |
57,672,313 | https://en.wikipedia.org/wiki/Secretan%20%28company%29 | Secretan was a company based in Paris, France that manufactured telescopes and other scientific instruments.
History
In 1845, Marc Secretan (1804–1867), a Swiss mathematician, and Noël Paymal Lerebours (1807–1873), a French optician, established a firm in Paris that manufactured precision instruments. In 1854, Secretan became the sole owner of the company, which continued to operate under the name Lerebours & Secretan. With popular interest in astronomy growing, the French physicist Léon Foucault (1819–1868) entered into an exclusive contract with Secretan for the commercialization of a reflecting telescope. Upon the death of Secretan in 1867, the company’s management first passed to his son Auguste François (1833–1874), and then to Auguste’s cousin Georges Emmanuel Secrétan (1837–1906). Around 1889, Georges Secrétan moved the company’s workshops to 30 rue du Faubourg Saint-Jacques, near the Paris Observatory and appointed Raymond Augustin Mailhat (1862 – 1923) as their head from 1 January 1889. In 1894, Mailhat bought some of the workshops and set up his own business, while Secretan moved his equipment into a new location at 41, quai de l’Horloge, near to the company’s retail shop on the Place du Pont-Neuf. When Georges Secrétan died in 1906, his son Paul Victor (b. 1879) and daughter Alice (b. 1878) inherited the business, which they ran until 1911, when they sold it to Charles Épry. In 1913, Gustave Jacquelin (1879–1939) became Épry’s associate and the firm continued manufacturing and selling astronomical, scientific and optical products. In 1963, the Secretan company merged with the Henri Morin company, a producer of surveying and drawing equipment, and was renamed as the Etablissements H. Morin-Secretan. Around 1967, that firm merged with the Société de Recherches et de Perfectionnements Industriels (SRPI), which operated until at least 1981.
See also
Amateur astronomy
References
Telescope manufacturers
French companies established in 1845 | Secretan (company) | [
"Astronomy"
] | 441 | [
"Telescope manufacturers",
"People associated with astronomy"
] |
57,672,434 | https://en.wikipedia.org/wiki/Ubrogepant | Ubrogepant, sold under the brand name Ubrelvy, is a medication used for the acute (immediate) treatment of migraine with or without aura (a sensory phenomenon or visual disturbance) in adults. It is not indicated for the preventive treatment of migraine. Ubrogepant is a small-molecule calcitonin gene-related peptide receptor antagonist. It is the first drug in this class approved for the acute treatment of migraine.
The most common side effects are nausea, tiredness and dry mouth. Ubrogepant is contraindicated for co-administration with strong CYP3A4 inhibitors.
History
Ubrogepant, also known as MK-1602, was discovered by scientists at Merck.
The effectiveness of ubrogepant for the acute treatment of migraine was demonstrated in two randomized, double-blind, placebo-controlled trials. In these studies, 1,439 adult patients with a history of migraine, with and without aura, received the approved doses of ubrogepant to treat an ongoing migraine. In both studies, the percentages of patients achieving pain relief two hours after treatment (defined as a reduction in headache severity from moderate or severe pain to no pain) and whose most bothersome migraine symptom (nausea, light sensitivity or sound sensitivity) stopped two hours after treatment were significantly greater among patients receiving ubrogepant (19–21% depending on the dose) compared to those receiving placebo (12%). Patients were allowed to take their usual acute treatment of migraine at least two hours after taking ubrogepant. 23% of patients were taking a preventive medication for migraine.
In December 2019, the US Food and Drug Administration approved Ubrelvy produced by Allergan USA, Inc. for treatment of migraine after onset.
References
Drugs developed by AbbVie
Antimigraine drugs
Calcitonin gene-related peptide receptor antagonists
Carboxamides
Trifluoromethyl compounds
Spiro compounds
Piperidinones
Pyridines | Ubrogepant | [
"Chemistry"
] | 432 | [
"Organic compounds",
"Spiro compounds"
] |
57,672,909 | https://en.wikipedia.org/wiki/Thiosilicate | In chemistry and materials science, thiosilicate refers to materials containing anions of the formula [SiS4]4−. Derivatives where some sulfide is replaced by oxide are also called thiosilicates, examples being materials derived from the oxohexathiodisilicate [Si2OS6]6−. Silicon is tetrahedral in all thiosilicates and sulfur is bridging or terminal. Formally such materials are derived from silicon disulfide in analogy to the relationship between silicon dioxide and silicates. Thiosilicates are typically encountered as colorless solids. They are characteristically sensitive to hydrolysis. They belong to the class of chalcogenidotetrelates.
Materials science
The LISICON (LIthium Super Ionic CONductor) include thiosilicates, which are fast ion conductors. Thiosilicates and related thiogermanates are also of interest for infrared optics, since they only absorb low frequency IR modes.
References
Inorganic silicon compounds
Sulfides
Inorganic polymers
Sulfur ions | Thiosilicate | [
"Physics",
"Chemistry"
] | 200 | [
"Matter",
"Inorganic compounds",
"Inorganic polymers",
"Inorganic silicon compounds",
"Sulfur ions",
"Ions"
] |
57,672,953 | https://en.wikipedia.org/wiki/Concrete%20cone%20failure | Concrete cone is one of the failure modes of anchors in concrete, loaded by a tensile force. The failure is governed by crack growth in concrete, which forms a typical cone shape having the anchor's axis as revolution axis.
Mechanical models
ACI 349-85
Under tension loading, the concrete cone failure surface has 45° inclination. A constant distribution of tensile stresses is then assumed. The concrete cone failure load $N_u$ of a single anchor in uncracked concrete unaffected by edge influences or overlapping cones of neighboring anchors is given by:

$$N_u = f_{ct} \cdot A_N$$

Where:
$f_{ct}$ – tensile strength of concrete
$A_N$ – cone's projected area
Concrete capacity design (CCD) approach for fastening to concrete
Under tension loading, the concrete capacity of a single anchor is calculated assuming an inclination between the failure surface and surface of the concrete member of about 35°. The concrete cone failure load $N_u$ of a single anchor in uncracked concrete unaffected by edge influences or overlapping cones of neighboring anchors is given by:

$$N_u = k \cdot \sqrt{f_{cc}} \cdot h_{ef}^{1.5} \quad [\text{N}],$$

Where:
$k$ – 13.5 for post-installed fasteners, 15.5 for cast-in-site fasteners
$f_{cc}$ – concrete compressive strength measured on cubes [MPa]
$h_{ef}$ – embedment depth of the anchor [mm]
The model is based on fracture mechanics theory and takes into account the size effect, particularly through the exponent 1.5 on the embedment depth, which differs from the exponent 2 that would be expected from the first model (where the failure load is proportional to the projected cone area). In the case of concrete tensile failure, with increasing member size the failure load increases less than the available failure surface; that means the nominal stress at failure (peak load divided by failure area) decreases.
Current codes take into account a reduction of the theoretical concrete cone capacity considering: (i) the presence of edges; (ii) the overlapping cones due to group effect; (iii) the presence of an eccentricity of the tension load.
Difference between models
The tension failure loads predicted by the CCD method fit experimental results over a wide range of embedment depths (e.g. 100–600 mm). The anchor load-bearing capacity provided by ACI 349 does not consider the size effect; thus an overestimated, unconservative value for the load-carrying capacity is obtained for large embedment depths.
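The different scaling of the two models can be illustrated numerically. The following sketch uses assumed example values (and ignores the edge, group and eccentricity reduction factors mentioned above) to compute both predictions for a single anchor:

```python
import math

def ccd_capacity_kN(h_ef_mm: float, f_cc_MPa: float, k: float = 13.5) -> float:
    """CCD concrete cone capacity of a single post-installed anchor, in kN:
    N_u = k * sqrt(f_cc) * h_ef^1.5 (result in N, converted to kN)."""
    return k * math.sqrt(f_cc_MPa) * h_ef_mm**1.5 / 1000.0

def aci_capacity_kN(h_ef_mm: float, f_ct_MPa: float) -> float:
    """ACI 349-85 style estimate: constant tensile stress f_ct acting over the
    projected area of a 45-degree cone (pi * h_ef^2), in kN."""
    return f_ct_MPa * math.pi * h_ef_mm**2 / 1000.0

# CCD grows like h_ef**1.5 while the 45-degree cone model grows like h_ef**2,
# so the gap between the two predictions widens with embedment depth:
assert ccd_capacity_kN(600, 25.0) / ccd_capacity_kN(100, 25.0) \
       < aci_capacity_kN(600, 2.0) / aci_capacity_kN(100, 2.0)
```

With these example values the two models nearly agree at 100 mm embedment but diverge strongly by 600 mm, which is the size effect the CCD method was introduced to capture.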
Influence of the head size
For large head sizes, the bearing pressure in the bearing zone diminishes, and an increase of the anchor's load-carrying capacity is observed. Different modification factors have been proposed in the technical literature.
Un-cracked and cracked concrete
Anchors experimentally show a lower load-bearing capacity when installed in a cracked concrete member. The reduction is up to 40% with respect to the un-cracked condition, depending on the crack width. The reduction is due to the impossibility of transferring both normal and tangential stresses at the crack plane.
References
See also
Fracture mechanics
Concrete fracture analysis
Size effect
Anchor Cone
Structural connectors
Wall anchors
Concrete | Concrete cone failure | [
"Engineering"
] | 572 | [
"Structural engineering",
"Structural connectors",
"Concrete"
] |
57,673,181 | https://en.wikipedia.org/wiki/Sa%20%28Islamic%20measure%29 | The Sāʿ (Arabic: صَاعًا and صَۡع in spelling, and sa'e in the Latin alphabet, literally: "one") is an ancient measurement of volume from the Islamic world, with cultural and religious significance. While its exact volume is uncertain, the Arabic word Sāʿ translates to "small container," related to the Quranic word ṣuwāʿ ("cup, goblet"). Together with the Mudd and the Makkūk, the Sāʿ is part of the system of units of volume used in the Arabic peninsula.
Proportion to other Arabic measures
There is general agreement between medieval Arabic authors that 1 Sāʿ = 4 Mudd. The 9th-century scholar al-Khwārizmī indicates that this was the opinion of the people of Medina. Likewise, Shams al-Dīn al-Maqdisī, who lived in the 10th century, stated that in al-Ḥijāz 1 Sāʿ = 4 Mudd = 1/3 Makkūk. Az-Zahrāwī related the Sāʿ to the Xestes, declaring that among the Rûm, 1 Sāʿ = 10 Xestes.
Because the Sāʿ was related to different measures of mass, many standardization problems occurred. Its relation to the Ratl was especially controversial, with two prevailing opinions:
1 Sāʿ = 8 Ratl was how the people of Kufa defined 1 Sāʿ. It was also the measure used by Umar (reg. 634–644) for the expiation of oaths.
1 Sāʿ = 5 1/3 Ratl was how the people of Medina defined 1 Sāʿ. It was reduced to this relation by Saʿīd ibn al-ʿĀs, who was Governor of Medina under Muawiyah I (reg. 661–680).
Al-Juwayni reported that Al-Shafi‘i and the Hanafi scholar Abu Yusuf quarreled about the measurement of the Sāʿ in front of the Caliph Harun al-Rashid (reigned 786–809) at Medina. The Caliph invited the progeny of the Muhajirun with their Sāʿ vessels, which they had inherited from their ancestors. When it turned out that the measurement given by Al-Shafi‘i (1 Sāʿ = 4 Mudd = 5 1/3 Ratl) was right, Abu Yusuf ultimately agreed with this opinion. Taking into account the fact that in Baghdad 1 Ratl = 130 dirhams, Muslim scholars have also established the equation: 1 Sāʿ = 693 1/3 dirhams.
Meaning for Islamic teachings
Like the Mudd, the Sāʿ carries a symbolic and religious meaning in Islam beyond its role as a simple measurement. According to a hadith transmitted by Anas ibn Malik in different versions, and also found in Sahih al-Bukhari, Muhammad asked God on the return from the Battle of Khaybar to bless the Sāʿ and the Mudd of the Muslims.
The Sāʿ is especially important for the measurement of the Zakat al-fitr, an obligatory alms-giving that must be made on Eid al-Fitr. This alms has the value of one Sāʿ of grain per family member. According to Islamic tradition, this value was established by Muhammad in the year 2 of the Hijra (623/624 AD). In the absence of a Mudd or Sāʿ measure, the amount of grain for the Zakāt al-fitr can also be measured with the two hands held together; four of these double handfuls are considered equal to one Sāʿ. In Fès, the rule was that if needy people received a larger amount of grain in the distribution of Zakāt al-fitr by their neighbors, they had to pass on the surplus to other needy people, keeping only one Sāʿ per family member.
Special Sāʿ measuring vessels were produced for the metering of the Zakāt al-fitr. For example, for the Marinid sultan Abu al-Hasan Ali ibn Othman (reigned 1331–1351), a vessel was made from copper that was intended to represent the "Sāʿ of the Prophet." An inscription attached to the vessel contains a long isnad, through which the calibration of the measuring vessel could be traced back to the prophet's companion Zaid ibn Thabit.
Based on hadith, the Sāʿ is also considered to be the minimum amount of water that must be available to perform a valid ghusl.
Use of the Sāʿ for non-ritual purposes is recorded only in the Arabian Peninsula. Al-Maqdisī reports that the Arabs had two different Sāʿ units on the ships, a small one for compensating sailors, and a large one used for commercial transactions.
Conversion to the metric system
According to Walther Hinz, who relies on the evidence of a Mudd calibration vessel from the Ayyubid period, the "Sāʿ of the Prophet" (ṣāʿ an-nabī) is exactly 4.2125 liters. Converted to the weight of wheat, this corresponds to 3.24 kg. The Sāʿ vessel made for the Marinid sultan Abū l-Hasan, which was also intended to represent the "Sāʿ of the Prophet," has a volume of 2.75 liters.
According to students of Abdul Muhsin bin Hamad al-Abbaad, the head of the department of Sharia at the Islamic University of Medina, the majority opinion of the fuqahā (experts in Islamic jurisprudence) is:
A Sa of raw grain is 2.3 kilograms according to the Hanbali, Maliki & Shafi'i schools of thought.
A Mudd of raw grain is 510 grams according to the Hanbali, Maliki & Shafi'i schools of thought.
2.3 kilograms of grain is about 3 liters, this being the minimum amount of water to perform a valid Ghusl (full body ablution)
60 Sa = 1 Wask
The Minority opinion of the Fuqaha is:
A Sa of raw grain is 3.3 kilograms according to the Hanafi school of thought.
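The relations quoted above can be collected into a small conversion sketch (illustrative Python; the liter figure follows Hinz's value given below, and the helper names are assumptions of this example):

```python
MUDD_PER_SA = 4               # 1 Sa' = 4 Mudd (general agreement)
RATL_PER_SA_MEDINA = 16 / 3   # 1 Sa' = 5 1/3 Ratl (Medina reckoning)
RATL_PER_SA_KUFA = 8          # 1 Sa' = 8 Ratl (Kufa reckoning)
DIRHAM_PER_RATL_BAGHDAD = 130 # Baghdad: 1 Ratl = 130 dirhams

SA_LITERS_HINZ = 4.2125       # Hinz's value for the "Sa' of the Prophet"

def sa_to_dirham(sa: float) -> float:
    """Medina reckoning: 1 Sa' = 5 1/3 Ratl = 693 1/3 Baghdad dirhams."""
    return sa * RATL_PER_SA_MEDINA * DIRHAM_PER_RATL_BAGHDAD

def zakat_al_fitr_liters(family_members: int) -> float:
    """One Sa' of grain per family member, expressed in liters (Hinz)."""
    return family_members * SA_LITERS_HINZ
```

The dirham helper reproduces the classical equation 1 Sāʿ = 693 1/3 dirhams cited in the article.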
Literature
Arabic sources
Abū-ʿUbaid al-Qāsim Ibn-Sallām: al-Amwāl. Ed. Muḥammad al-ʿAmmāra. Dār aš-Šurūq, Beirut, 1989, pp. 615–627. Digitized version
Abū-ʿAbdallāh Muḥammad Ibn-Aḥmad al-Ḫwārizmī: Kitāb Mafātīḥ al-ʿulūm. Ed. Gerlof van Vloten. Brill, Leiden, 1895, p. 14. Digitized version
Šams ad-Dīn al-Maqdisī: Kitāb Aḥsan at-taqāsīm fī maʿrifat al-aqālīm. Ed. M. J. de Goeje. 2nd ed. Brill, Leiden, 1906, pp. 98f. Digitized version
Secondary literature
Alfred Bel: "Ṣāʿ" in Enzyklopaedie des Islam. Brill, Leiden, 1913–1936. Vol. IV, p. 1. Digitized version
Alfred Bel: "Note sur trois anciens vases de cuivre gravé trouvés à Fès et servant à mesurer l'aumône légale du fitr" in Bulletin archéologique, 1917, pp. 359–387. Digitized version
Walther Hinz: Islamische Masse und Gewichte. Umgerechnet ins metrische System. E. J. Brill, Leiden/Köln, 1970, p. 51.
Cengiz Kellek: "Sâʿ" in Türkiye Diyanet Vakfı İslâm Ansiklopedisi, Vol. XXXV, pp. 317c–319c. Digitized version
Paul Pascon: "Description des mudd et ṣāʿ Maghribins" in Hespéris Tamuda 16 (1975), pp. 25–88. Digitized version
M. H. Sauvaire: "Matériaux pour servir à l'histoire de la numismatique et de la métrologie musulmanes" in Journal Asiatique VIII/7 (1886), pp. 394–417. Digitized version
References
Units of volume
Customary units of measurement | Sa (Islamic measure) | [
"Mathematics"
] | 1,726 | [
"Units of volume",
"Quantity",
"Customary units of measurement",
"Units of measurement"
] |
57,673,633 | https://en.wikipedia.org/wiki/COSMIC%20functional%20size%20measurement | COSMIC functional size measurement is a method to measure a standard functional size of a piece of software. COSMIC is an acronym of COmmon Software Measurement International Consortium, a voluntary organization that has developed the method and is still expanding its use to more software domains.
The method
The "Measurement Manual" defines the principles, rules and a process for measuring a standard functional size of a piece of software. Functional size is a measure of the amount of functionality provided by the software, completely independent of any technical or quality considerations. The generic principles of functional size are described in the ISO/IEC 14143 standard, and the COSMIC method is itself an international standard. The COSMIC standard is the first second-generation implementation of ISO/IEC 14143; there are also four first-generation implementations:
ISO/IEC 20926 - IFPUG function points
ISO/IEC 20968 - Mk II function points
ISO/IEC 24570 - Nesma function points
ISO/IEC 29881 - FiSMA function points
These first generation functional size measurement methods consisted of rules that are based on empirical results. Part of the terminology that deals with users and requirements has overlap with similar terms in software engineering. They work well for the software domains the rules were designed for, but for other domains, the rules need to be altered or extended. Key elements of a second generation functional size measurement method are:
Adoption of all measurement concepts from the ISO metrology
A defined measurement unit
Fully compliant with ISO/IEC 14143
Preferably domain independent
The method is based on principles, rather than rules, that are domain independent. These principles are drawn from fundamental software engineering concepts, which have subsequently been tested in practice.
The method is designed to size software that is dominated by functionality to move and store data, rather than software that predominantly manipulates data algorithmically. As a consequence of measuring the size, the method can be used to establish benchmarks of (and subsequent estimates for) the effort, cost, quality and duration of software work.
The method can be used in a wide variety of domains, like business applications, real-time software, mobile apps, infrastructure software and operating systems. The method breaks down the Functional User Requirements of the software into combinations of the four data movements types:
Entry (E)
Exit (X)
Read (R)
Write (W)
The function point count provides measurement of software size, which is the sum of the data movements for a given functional requirement. It may be used to estimate (and benchmark) software project effort, cost, duration, quality and maintenance work.
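The counting rule just described can be shown in a minimal sketch: the size of a functional process in COSMIC Function Points (CFP) is simply the number of its Entry, Exit, Read and Write data movements. The process names and movements below are invented for the example.

```python
# Illustrative COSMIC sizing: one CFP per data movement.
# Process names and their movements are hypothetical examples.

VALID_MOVEMENTS = {"E", "X", "R", "W"}  # Entry, Exit, Read, Write

def cosmic_size(functional_processes):
    """Sum data movements across all functional processes, in CFP."""
    total = 0
    for name, movements in functional_processes.items():
        unknown = set(movements) - VALID_MOVEMENTS
        if unknown:
            raise ValueError(f"{name}: unknown movement types {unknown}")
        total += len(movements)
    return total

# A hypothetical "create customer" process: one Entry (data in),
# one Read (duplicate check), one Write (store), one Exit (confirm).
processes = {
    "create customer": ["E", "R", "W", "X"],
    "list customers": ["E", "R", "X"],
}
print(cosmic_size(processes))  # 7 CFP
```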
The foundation of the method is the ISO/IEC 19761 standard, which contains the definitions and basic principles that are described in more detail in the COSMIC measurement manual.
The applicability of the COSMIC functional size measurement method
Since the COSMIC method is based on generic software principles, these principles can be applied in various software domains. For a number of domains guidelines have been written to assist measurers to apply the COSMIC method in their domain:
Real-time software: This software "controls an environment by receiving data, processing them, and returning the results sufficiently quickly to affect the environment at that time". The guideline describes how to use the generic principles in this environment.
Service-oriented architecture (SOA): This is a software architecture in which application components provide services to other components through a communication protocol over a network. A service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online. The guideline describes how to measure the functional size of distinct components.
Data warehouse and Big Data: This field deals with ways to analyze, systematically extract information from, or otherwise handle data sets that are too large or complex for traditional data-processing application software. The guideline describes how to translate the principles of that field into a functional size.
Business application software: This is software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples include a word processor, a spreadsheet, an accounting application, a web browser, an email client, a media player, a file viewer, a flight simulator and a photo editor. Business application software contrasts with system software, which is mainly involved with running the computer itself. The guideline describes how to deal with application-specific features, such as data storage and retrieval.
To explain the use of the method, a number of case studies have been developed. The method is particularly valuable for estimating the cost of software projects.
The organization behind the method
The COSMIC organization commenced its work in 1998. Legally COSMIC is an incorporated not for profit organization under Canadian law. The organization grew informally to a global community of professionals. COSMIC is an open and democratic organization. The organization relies and will continue to rely on unpaid efforts by volunteers, who work on various aspects of the method, based on their professional interest.
References
External links
COSMIC website A public domain version of the COSMIC measurement manual and other technical reports
COSMIC Publications Public domain publications for the COSMIC method
Software metrics
Software engineering costs | COSMIC functional size measurement | [
"Mathematics",
"Engineering"
] | 1,160 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
57,673,691 | https://en.wikipedia.org/wiki/GreenScreen%20List%20Translator | The GreenScreen List Translator is a procedure for assessing chemical hazard used to identify chemicals of concern to prioritize for removal from product formulations. The List Translator assesses substances based on their presence on lists of chemicals associated with human and environmental health hazards issued by a global set of governmental and professional scientific bodies, such as the European Union’s GHS hazard statements and California's Proposition 65.
Analysis procedure
The List Translator procedure is defined in the GreenScreen for Safer Chemicals, a transparent, open standard for chemical hazard assessment that supports alternatives assessment for toxics use reduction through identifying chemicals of concern and safer alternatives. The GreenScreen protocol is published in a Guidance document that is reviewed and updated regularly. This description of the List Translator is based upon the Hazard Assessment Guidance, Version 1.4.
The List Translator identifies the hazard endpoints for which a substance has been listed on each of a defined set of published hazard lists and the level of hazard. It prioritizes for avoidance those substances listed with a high hazard of any of the following endpoints:
Carcinogenicity
Mutagenicity
Reproductive toxicity
Developmental toxicity
Endocrine disruptor or
Persistent, bioaccumulative and toxic substances (PBT).
This parallels the prioritization schemes underlying various international governmental regulatory programs such as the Substance of very high concern definition within the REACH Regulation of the European Union.
The central tools of the List Translator are the GreenScreen Specified Lists and the GreenScreen List Translator Map.
GreenScreen Specified Lists: The List Translator identifies a set of lists as the references for the procedure. These are lists that identify specific chemicals or groups of chemicals that are associated with specific human and environmental health endpoints. The lists are published by a global set of state, national and international governmental bodies, such as the California Office of Environmental Health Hazard Assessment, the United States Environmental Protection Agency and the European Chemicals Agency. The Specified Lists also include lists published by scientific professional associations. In all cases there must be a defined set of threshold criteria and a review process by scientific authorities for listing.
GreenScreen List Translator Map: The Map characterizes each of the categories within each Specified List.
The hazard endpoint(s) addressed by the list are identified and a hazard level or range is assigned. For example, the International Agency for Research on Cancer Monographs On the Evaluation of Carcinogenic Risks to Humans category of “Group 1 - Agent is Carcinogenic to humans” receives a “High” for Carcinogenicity. The List Translator characterizes the hazard level of substances from very low to very high across twenty human and environmental health endpoints addressing:
Human health - such as cancer and reproductive toxicity
Environmental protection - primarily aquatic toxicity
Physical hazard - flammability and reactivity
Environmental fate - persistence and bioaccumulation.
Lists are also characterized as
Authoritative - high confidence
Screening - lower confidence due to less comprehensive review, use of estimated data or other factors.
Authoritative lists and screening lists are both further characterized as:
A lists: contain a single endpoint with one hazard classification, or only one possible List Translator score.
B lists: contain multiple endpoints and/or hazard classifications
A chemical receives an overall hazard level for each endpoint based on the highest hazard assigned by the most authoritative lists.
List Translator scores
Scoring a substance is a three part procedure:
Search the Specified Lists: Each of the Specified Lists is searched to determine if the substance being scored is listed on any of the lists. It may be identified specifically by CASRN or it may be a member of a listed compound group - a group of substances with a similar chemical structure.
Compare the endpoints and hazard levels: The Map is consulted for each list on which the substance is identified to determine a hazard level for each endpoint. If there is more than one listing, the highest hazard level from the most authoritative list is used.
Calculate List Translator score: The assigned hazard levels and endpoints are compared to the criteria for GreenScreen’s highest concern category of Benchmark 1:
LT-1 (List Translator Likely Benchmark 1) - The substance is on at least one Authoritative A list that meets Benchmark 1 criteria. That is, it meets the criteria for a high hazard carcinogen, mutagen, reproductive toxicant or developmental toxicant or endocrine disruptor or as a persistent bioaccumulative and toxic substance (PBT). The PBT criteria can be met by a combination of lists.
LT-P1 (List Translator Possible Benchmark 1) - On a list that overlaps Benchmark 1 criteria and/or is a lower confidence Authoritative B or Screening list.
LT-UNK (List Translator Benchmark Unknown) - Listed only on a list that does not meet or overlap Benchmark 1 criteria.
NoGSLT (No GreenScreen List Translator information) - Not listed on any of the GreenScreen Specified lists
NoGS (No GreenScreen information) - Not on any of the GreenScreen Specified lists and there is no public Greenscreen full assessment.
An LT-UNK, NoGSLT or NoGS score is not an indication of low hazard or safety for a substance, only that the substance has not been listed for the priority health endpoints. A full GreenScreen assessment must be undertaken to determine whether the substance qualifies as an affirmatively safer substance.
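The three-part scoring procedure above can be sketched in code. The data structures and field names (`list_type`, `meets_bm1`, `overlaps_bm1`) are invented for illustration; the real protocol consults the published Specified Lists and the List Translator Map.

```python
# Hedged sketch of the List Translator scoring logic. A "finding"
# represents one Specified List on which the substance appears;
# the field names here are hypothetical, not from the standard.

def list_translator_score(findings):
    """Assign an LT score from a substance's list findings.

    findings: list of dicts with
      'list_type'    - 'authoritative_A', 'authoritative_B' or 'screening'
      'meets_bm1'    - listing meets Benchmark 1 criteria
      'overlaps_bm1' - listing overlaps Benchmark 1 criteria
    """
    if not findings:
        return "NoGSLT"  # not on any Specified List
    # Likely Benchmark 1: an Authoritative A list meets BM1 criteria.
    if any(f["list_type"] == "authoritative_A" and f["meets_bm1"]
           for f in findings):
        return "LT-1"
    # Possible Benchmark 1: BM1 criteria met or overlapped only on
    # lower-confidence (Authoritative B / Screening) listings.
    if any(f["meets_bm1"] or f["overlaps_bm1"] for f in findings):
        return "LT-P1"
    return "LT-UNK"  # listed, but Benchmark 1 status unknown

hits = [{"list_type": "screening", "meets_bm1": False, "overlaps_bm1": True}]
print(list_translator_score(hits))  # LT-P1
print(list_translator_score([]))    # NoGSLT
```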
Automation
Any person can use the GreenScreen List Translator protocol to score a substance. The research required to look up the substance in each of the hazard lists is, however, substantial. Several Licensed GreenScreen List Translator™ Automators aggregate the lists and provide free online lookup services for determining List Translator scores.
Applications
The GreenScreen List Translator is the first step in a GreenScreen Assessment. It is also used as a stand alone screening protocol by health and sustainability screening and certification programs. It is widely referenced in standards and certifications related to green building products, including the Health Product Declaration Standard (HPD), Portico, and the "Building product disclosure and optimization - material ingredients" credits in the US Green Building Council's LEED program.
References
External links
GreenScreen for Safer Chemicals home page for the GreenScreen Standard
Clean Production Action publisher of the GreenScreen
Licensed GreenScreen® List Translator™ Automators provide access to public GreenScreen assessment reports as part of their databases(s)
Building materials
Toxicology | GreenScreen List Translator | [
"Physics",
"Engineering",
"Environmental_science"
] | 1,274 | [
"Toxicology",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
57,673,779 | https://en.wikipedia.org/wiki/TDBzcholine | TDBzcholine is a diazirine analog of acetylcholine that can be used to label the nicotinic acetylcholine receptor.
Mechanism of action
TDBzcholine binds to the nicotinic acetylcholine receptor. Once bound, it can be activated by exposing the sample to UV light. This leads to the formation of a highly reactive carbene that can react with amino acid residues in the receptor, covalently attaching the probe to the receptor.
See also
Photoaffinity labeling
References
Diazirines
Choline esters
Benzoate esters
Trifluoromethyl compounds
Nicotinic acetylcholine receptors | TDBzcholine | [
"Chemistry"
] | 156 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
57,674,142 | https://en.wikipedia.org/wiki/Photothermal%20time | Photothermal time (PTT) is the product of growing degree-days (GDD) and day length (DL, in hours) for each day: PTT = GDD × DL. It can be used to quantify the environment, as well as the timing of developmental stages of plants.
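The accumulation of PTT over a growing period can be sketched as below, assuming daily mean temperatures, a base temperature and day lengths in hours are available; all names and values here are illustrative.

```python
# Minimal photothermal time (PTT) sketch: PTT = GDD * day length,
# summed over days. Base temperature and inputs are hypothetical.

def growing_degree_days(t_mean, t_base):
    """Daily GDD: mean temperature above the base, floored at zero."""
    return max(t_mean - t_base, 0.0)

def photothermal_time(daily_means, day_lengths, t_base=10.0):
    """Accumulate PTT = GDD x day length (hours) over a period."""
    return sum(
        growing_degree_days(t, t_base) * dl
        for t, dl in zip(daily_means, day_lengths)
    )

# Three example days: 18, 22 and 9 degrees C, ~14 h day length.
ptt = photothermal_time([18.0, 22.0, 9.0], [14.0, 14.2, 14.4], t_base=10.0)
print(round(ptt, 1))  # (18-10)*14 + (22-10)*14.2 + 0 = 282.4
```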
References
Product certification
Measurement
Ecology | Photothermal time | [
"Physics",
"Mathematics",
"Biology"
] | 63 | [
"Physical quantities",
"Quantity",
"Ecology",
"Measurement",
"Size"
] |
57,674,198 | https://en.wikipedia.org/wiki/Antrodiella%20lactea | Antrodiella lactea is a species of fungus in the family Steccherinaceae. Found in China, it was described as new to science in 2018 by mycologist Hai-Sheng Yuan. The type collection was made in Maoershan Nature Reserve (Xing'an County, Guangxi), where it was found growing on a fallen angiosperm branch. The specific epithet lactea refers to the cream-coloured fruit body. The fungus has a trimitic hyphal system, and its generative hyphae have clamp connections. Its smooth, thin-walled spores range in shape from oblong to ellipsoidal, and typically measure 3.1–3.6 by 2.1–2.4 μm.
References
Fungi described in 2018
Fungi of China
Steccherinaceae
Fungus species | Antrodiella lactea | [
"Biology"
] | 173 | [
"Fungi",
"Fungus species"
] |
57,674,242 | https://en.wikipedia.org/wiki/Antrodiella%20nanospora | Antrodiella nanospora is a species of crust fungus in the family Steccherinaceae. Found in China, it was described as new to science in 2018 by mycologist Hai-Sheng Yuan. The type collection was made in Maoershan Nature Reserve (Xing'an County, Guangxi), where it was found growing on a fallen angiosperm branch. The specific epithet nanospora refers to its small spores, which measure 2.9–3.2 by 1.8–2.1 μm. The fungus has a dimitic hyphal system, and its generative hyphae have clamp connections. It is similar in appearance to Antrodiella minutispora, but this species has a thicker and fleshier fruit body, larger pores, and does not have cystidioles in the hymenium.
References
Fungi described in 2018
Fungi of China
Steccherinaceae
Fungus species | Antrodiella nanospora | [
"Biology"
] | 198 | [
"Fungi",
"Fungus species"
] |
57,675,728 | https://en.wikipedia.org/wiki/Drug%20titration | Drug titration is the process of adjusting the dose of a medication for the maximum benefit without adverse effects.
When a drug has a narrow therapeutic index, titration is especially important, because the range between the dose at which a drug is effective and the dose at which side effects occur is small. Some examples of the types of drugs commonly requiring titration include insulin, anticonvulsants, blood thinners, anti-depressants, and sedatives.
Titrating off of a medication instead of stopping abruptly is recommended in some situations. Glucocorticoids should be tapered after extended use to avoid adrenal insufficiency.
Drug titration is also used in phase I of clinical trials. The experimental drug is given in increasing dosages until side effects become intolerable. A clinical trial in which a suitable dose is found is called a dose-ranging study.
See also
Therapeutic drug monitoring
Pituri – chewed as a stimulant (or, after extended use, a depressant) by Aboriginal Australians
References
Pharmacology | Drug titration | [
"Chemistry"
] | 224 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry",
"Medicinal chemistry stubs"
] |
57,676,290 | https://en.wikipedia.org/wiki/NGC%203336 | NGC 3336 is a barred spiral galaxy located about 190 million light-years away in the constellation Hydra. It was discovered by astronomer John Herschel on March 24, 1835. NGC 3336 is a member of the Hydra Cluster.
One supernova has been observed in NGC 3336: SN1984S (type unknown, mag. 16.8) was discovered by Paul Wild on 23 December 1984.
See also
NGC 3307
List of NGC objects (3001–4000)
References
External links
Hydra Cluster
Hydra (constellation)
Barred spiral galaxies
3336
031754
Astronomical objects discovered in 1835
Discoveries by John Herschel | NGC 3336 | [
"Astronomy"
] | 127 | [
"Hydra (constellation)",
"Constellations"
] |
57,678,302 | https://en.wikipedia.org/wiki/Serge%20Galam | Serge Galam (born 1952) is a French physicist and Scientist Emeritus at CNRS.
Biography
In 1975, Serge Galam obtained a PhD in physics at the Pierre and Marie Curie University in Paris. In 1981, he received a Ph.D. in physics at Tel Aviv University. From 1981 to 1983, he taught at City University of New York and from 1983 to 1985 at New York University.
From 1984 to 2004, he worked in several physics laboratories of the Pierre and Marie Curie University.
In 1999, he was appointed director of research at the Centre national de la recherche scientifique.
In 2004 he joined the Center for Research in Applied Epistemology of the École Polytechnique (CREA). In 2013, he joined the faculty of Sciences Po.
Serge Galam is one of the pioneers of the modern field of sociophysics. His work focuses on the dynamics of group decision making and how minority opinions can sway public opinion. In the fall of 2016, using the principles of sociophysics, Galam predicted the election of Donald Trump, although he incorrectly predicted a Trump reelection in the fall of 2020.
See also
Nassim Nicholas Taleb
Dominant minority
References
Sources
1952 births
Living people
Applied physicists
Social physics
20th-century French physicists
21st-century French physicists
Research directors of the French National Centre for Scientific Research
Pierre and Marie Curie University alumni
Tel Aviv University alumni
New York University faculty
City University of New York faculty
Academic staff of Sciences Po | Serge Galam | [
"Physics"
] | 303 | [
"Social physics",
"Applied and interdisciplinary physics",
"Applied physicists"
] |
59,326,528 | https://en.wikipedia.org/wiki/Voxelotor | Voxelotor, sold under the brand name Oxbryta, was a medication used for the treatment of sickle cell disease. Voxelotor is the first hemoglobin oxygen-affinity modulator. Voxelotor had been shown to have disease-modifying potential by increasing hemoglobin levels and decreasing hemolysis indicators in sickle cell patients. It initially appeared to have an acceptable safety profile in sickle cell patients and healthy volunteers, without any dose-limiting toxicity noted in clinical trials. It was developed by Global Blood Therapeutics, a subsidiary of Pfizer.
In November 2019, voxelotor received accelerated approval in the United States for the treatment of sickle cell disease for those twelve years of age and older. The U.S. Food and Drug Administration (FDA) considered it to be a first-in-class medication. In December 2021, voxelotor received accelerated approval in the United States for the treatment of sickle cell disease for those aged four to eleven years.
In September 2024, Pfizer announced a voluntary withdrawal of voxelotor from all global markets due to concerns regarding the potential for severe safety events, including fatalities.
Side effects
Common side effects include headache, diarrhea, abdominal pain, nausea, fatigue, rash and pyrexia (fever).
History
Voxelotor was granted accelerated approval by the US Food and Drug Administration (FDA) in November 2019. The FDA granted the application for voxelotor fast track designation and orphan drug designation.
The approval of voxelotor was based on the results of a clinical trial with 274 participants with sickle cell disease. The FDA granted the approval of Oxbryta to Global Blood Therapeutics.
Society and culture
Legal status
In December 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Oxbryta, intended for the treatment of hemolytic anemia due to sickle cell disease. The applicant for this medicinal product is Global Blood Therapeutics Netherlands B.V. Voxelotor (Oxbryta) was approved for medical use in the European Union in February 2022.
In September 2024, the CHMP recommended suspending the marketing authorization for voxelotor (Oxbryta). The CHMP described this recommendation as a precaution while a review of additional clinical trial data was proceeding. The CHMP review of clinical trial data began in July 2024, following concerns in two ongoing placebo-controlled clinical trials that raised the possibility that the drug's benefit risk ratio was no longer favorable, due to possibly related excess deaths. The CHMP also cited observational studies which found a higher rate of painful vaso-occlusive crises during treatment with voxelotor than the subjects had before starting the medicine.
In September 2024, Pfizer announced a voluntary withdrawal of voxelotor from all global markets due to concerns regarding the potential for severe safety events, including fatalities.
References
Orphan drugs
Drugs developed by Pfizer
Sickle-cell disease | Voxelotor | [
"Chemistry"
] | 632 | [
"Drug safety",
"Withdrawn drugs"
] |
59,327,406 | https://en.wikipedia.org/wiki/YouTube%20Rewind%202018%3A%20Everyone%20Controls%20Rewind | YouTube Rewind 2018: Everyone Controls Rewind (also known as YouTube Rewind 2018) is a video that was uploaded to the official channel of the video-sharing website YouTube on December 6, 2018, as the ninth installment of the YouTube Rewind series. The video features references to video games and internet culture, starring YouTubers such as Ninja and Marques Brownlee, as well as celebrities like Will Smith and Trevor Noah.
YouTube Rewind 2018 was panned by critics, YouTubers, and viewers alike, who dubbed it the worst YouTube Rewind video to date. The video was criticized for the inclusion of unpopular or outdated trends and the exclusion of many prominent YouTubers of the year, as well as rivalries such as KSI vs Logan Paul and PewDiePie vs T-Series. By December 13, 2018, a week after its upload, Everyone Controls Rewind had over 10 million dislikes, making it the most-disliked video on YouTube of all time, a record that was previously held by the music video for Justin Bieber's "Baby" for over seven years.
Background
YouTube Rewind was an annual series of videos released from 2010 to 2019 that was produced (alongside Portal A Interactive), released and distributed by the namesake website via its official channel. Each video was a recap of the year's trends and events.
Overview
The video is themed around everyone being able to control YouTube Rewind, with various featured personalities describing which events they want to review. The video begins with actor Will Smith on Jebel Jais's mountain range, suggesting the inclusion of the video game Fortnite and YouTuber Marques Brownlee in the video. The camera then cuts to Brownlee, other YouTubers, and Twitch streamer Ninja as the bus driver, conversing inside a battle bus, a reference to the game.
The following scene depicts a group of YouTube personalities surrounding a campfire. The group suggests that the Rewind should have K-pop, references to the wedding of Prince Harry and Meghan Markle, the internet meme 'Bongo Cat,' a science experiment involving melting lipstick, and the inclusion of electronic musician Marshmello, whose mask is removed and revealed to be Mason Ramsey underneath. The video then cuts to a group doing a mukbang in South Korea. After the scene, animator TheOdd1sOut suggests adding the "In My Feelings" challenge to the video. The video rapidly cuts between scenes of various YouTubers and celebrities dancing to the challenge, including scenes of talk show hosts Trevor Noah and John Oliver performing dances from Fortnite. Animator Jaiden Animations includes several easter eggs, comprising references to other memes and events of the year, such as Ugandan Knuckles, an invitation to Super Smash Bros. Ultimate, KSI vs. Logan Paul boxing match, a group of items on the wall that spell out "Sub 2 PewDiePie", as well as PewDiePie's swivel chair.
After the challenge, Lilly Singh says the video should feature "the people who managed to do something bigger than themselves." Several YouTubers give shoutouts to various groups of people, including people who strived for mental health awareness and "all women in 2018 for finding their voices." Later, Elle Mills decides to read a faux comments section on what to feature in the Rewind. Various comments are featured, featuring pop culture references to the costumes in Kanye West and Lil Pump's "I Love It" music video, the 2018 FIFA World Cup, the children's song Baby Shark, and the Dame Tu Cosita dance craze. The 'Sister Squad' (James Charles, Dolan Twins, and Emma Chamberlain) are then shown in outer space, driving a car resembling Elon Musk's Tesla Roadster. The video ends with Smith laughing as he watches the aforementioned battle bus through a pair of binoculars and states "That's hot, that's hot." While the credits are playing, Primitive Technology is featured, sculpting the YouTube Rewind logo with clay.
Cast
This is the list of starring cast members in YouTube Rewind 2018: Everyone Controls Rewind, derived from its website.
10Ocupados
Adam Rippon
Afros e Afins por Nátaly Neri
Alisha Marie
Ami Rodriguez
Anwar Jibawi
AsapSCIENCE
AuthenticGames
BB Ki Vines
Bearhug
Bie The Ska
Bilingirl Chika
Bongo Cat (@StrayRogue and @DitzyFlama)
Bokyem TV
CajuTV
Casey Neistat
Caspar
Cherrygumms
Collins Key
Dagi Bee
Desimpedidos
Diva Depressão
Dolan Twins
Domics
Dotty TV
Elle Mills
Emma Chamberlain
Enes Batur
EnjoyPhoenix
EroldStory
FAP TV
FavijTV
Fischer's
Furious Jumper
Gabbie Hanna
GamingWithKev
Gen Halilintar
Gongdaesang
gymvirtual
Hannah Stocking
HikakinTV
How Ridiculous
illymation
ItsFunneh
Jaiden Animations
James Charles
John Oliver
Jordindian
Jubilee Media
JukiLop
julioprofe
Katya Zamolodchikova
Kaykai Salaider
Kelly MissesVlog
Krystal Yam & family
LA LATA
Lachlan
LaurDIY
Lele Pons
Life Noggin
Lilly Singh
Liza Koshy
Los Polinesios
Lucas the Spider
Luisito Comunica (Rey Palomo)
Luzu
Lyna
Manual do Mundo
Markiplier
Marques Brownlee
Marshmello
Mason Ramsey
Me Poupe!
Merrell Twins
Michael Dapaah
MissRemiAshten
mmoshya
Molly Burke
Ms Yeah
Muro Pequeno
NickEh30
NikkieTutorials
Ninja
Noor Stars
Pautips
Pinkfong Baby Shark
Pozzi
Primitive Technology
RobleisIUTU
Rosanna Pansino
Rudy Mancuso
Safiya Nygaard
Sam Tsui
SamHarveyUK
SHALOM BLAC
Simone Giertz
skinnyindonesian24
Sofia Castro
sWooZie
Tabbes
Technical Guruji
The Try Guys
TheOdd1sOut
Tiền Zombie v4
Trevor Noah
Trixie Mattel
Wengie
whinderssonnunes
Will Smith
Yammy
Yes Theory
Reception
Upon its release, YouTube Rewind 2018: Everyone Controls Rewind received universally negative reviews, receiving extensive backlash from critics, YouTubers, and viewers alike. Many YouTubers deemed it the "worst Rewind ever". Only a few portions of the video received praise, with many viewers applauding Jaiden Animations for incorporating PewDiePie's chair, as well as other Easter eggs, into her segment of the video. Other criticisms included what viewers had seen as the video's overuse of some trends, many of them being seen as outdated or unpopular, including Fortnite, as well as the lack of variety in references. It was also prominently criticized for its social commentary, which some felt was shoehorned into the video. Many people were also angered with PewDiePie not being included, as his channel was the most-subscribed on the platform at the time.
While YouTube Rewind 2018: Everyone Controls Rewind incorporated user comment suggestions as a part of the video, Nicole Engelman of The Hollywood Reporter called YouTube "out of touch". Julia Alexander of The Verge suggested that YouTube had intentionally left out the biggest moments on the platform in 2018 from the video in an attempt to appease concerned advertisers over controversies that had plagued the platform over the past two years. She states that "it's increasingly apparent, however, that YouTube is trying to sell a culture that's different from the one millions of people come to the platform for, and that's getting harder for both creators and fans to swallow." Meira Gebel of Business Insider shared a similar sentiment, saying "The video appears to be an attempt for the company to keep advertisers on its side following a rather rocky 2018."
PewDiePie, who was not in YouTube Rewind 2018: Everyone Controls Rewind, criticized the video. He stated that he was almost glad he wasn't in it "because it's such a cringey video at this point which I think is quite a shame honestly." He added that "Rewind [used to be] something that seemed like an homage to the creators that year, it was something cool to be a part of." He further criticized the over-saturation of Fortnite, the inclusion of celebrities not associated with YouTube, and the lack of any mention of the outpouring of support on the platform for those who died before December, including Icelandic actor and YouTuber Stefán Karl Stefánsson. On top of his criticism, he, along with FlyingKitty, Party In Backyard, Grandayy and Dolan Dark, created their own take on YouTube Rewind 2018: Everyone Controls Rewind on December 27, 2018, titled "YouTube Rewind 2018 but it's actually good", which focused on the notable memes of 2018.
Marques Brownlee, who was prominently featured in the video, said Rewind had once been a "big celebration of YouTubers and the biggest events that had happened on the site in a particular year. It became an honor to be included in Rewind. But now YouTube saw Rewind as a way to showcase all the best stuff that happens on YouTube for advertisers." He concluded that "Instead of honoring creators, it is now a list of advertiser-friendly content. Rewind has turned into a giant ad for YouTube."
In a video uploaded in February 2019, then-YouTube CEO Susan Wojcicki said "Even at home, my kids told me it (YouTube Rewind 2018: Everyone Controls Rewind) was cringey." She promised a better Rewind for 2019 and revealed several priorities for YouTube for the year.
Dislikes
On December 13, 2018, a week after being uploaded, it became the most-disliked video on the website, beating the previous record-holder: the music video for Justin Bieber's "Baby." In a statement given to media outlets, YouTube spokeswoman Andrea Faville said that "dethroning 'Baby' in dislikes wasn't exactly our goal this year."
After the release of the video and the subsequent backlash, YouTube discussed possible options to prevent abuse of the dislike button by "dislike mobs", such as making the like–dislike ratings invisible by default, prompting disliking users to explain their dislike, or removing the dislike count or the dislike button entirely. Tom Leung, a director of product management at YouTube, described removing the dislike button entirely as the most extreme and undemocratic option, as "not all dislikes are from dislike mobs."
In November 2021, dislike counts became viewable only by a video's uploader in an attempt to "help better protect our creators from harassment, and reduce dislike attacks — where people work to drive up the number of dislikes on a creator's videos."
References
External links
Portal-A project page
2010s YouTube controversies
2018 controversies
2018 YouTube events
2018 YouTube videos
Fortnite
Internet memes introduced from the United States
Internet memes introduced in 2018
Viral videos
Works about video games
2018 in Internet culture

Bray–Moss–Libby model

In premixed turbulent combustion, the Bray–Moss–Libby (BML) model is a closure model for a scalar field, built on the assumption that the reaction sheet is infinitely thin compared with the turbulent scales, so that the scalar can be found either in the state of burnt gas or in that of unburnt gas. The model is named after Kenneth Bray, J. B. Moss and Paul A. Libby.
Mathematical description
Let us define a non-dimensional scalar variable or progress variable $c$ such that $c = 0$ at the unburnt mixture and $c = 1$ at the burnt gas side. For example, if $T_u$ is the unburnt gas temperature and $T_b$ is the burnt gas temperature, then the non-dimensional temperature can be defined as

$$c = \frac{T - T_u}{T_b - T_u}.$$

The progress variable could be any scalar, i.e., we could have chosen the concentration of a reactant as a progress variable. Since the reaction sheet is infinitely thin, at any point in the flow field, we can find the value of $c$ to be either unity or zero. The transition from zero to unity occurs instantaneously at the reaction sheet. Therefore, the probability density function for the progress variable is given by

$$P(c; \mathbf{x}, t) = \alpha(\mathbf{x}, t)\,\delta(c) + \beta(\mathbf{x}, t)\,\delta(1 - c),$$

where $\alpha$ and $\beta$ are the probability of finding unburnt and burnt mixture, respectively, and $\delta$ is the Dirac delta function. By definition, the normalization condition leads to

$$\alpha + \beta = 1.$$

It can be seen that the mean progress variable,

$$\bar{c}(\mathbf{x}, t) = \int_0^1 c\,P(c; \mathbf{x}, t)\,dc = \beta(\mathbf{x}, t),$$

is nothing but the probability of finding burnt gas at location $\mathbf{x}$ and at the time $t$. The density function is completely described by the mean progress variable, as we can write (suppressing the variables $\mathbf{x}$ and $t$)

$$P(c) = (1 - \bar{c})\,\delta(c) + \bar{c}\,\delta(1 - c).$$

Assuming constant pressure and constant molecular weight, the ideal gas law can be shown to reduce to

$$\frac{\rho_u}{\rho} = 1 + \tau c,$$

where $\tau = (T_b - T_u)/T_u$ is the heat release parameter. Using the above relation, the mean density can be calculated as follows:

$$\bar{\rho} = \int_0^1 \rho\,P(c)\,dc = \rho_u (1 - \bar{c}) + \frac{\rho_u}{1 + \tau}\,\bar{c}.$$

The Favre averaging of the progress variable is given by

$$\tilde{c} = \frac{\overline{\rho c}}{\bar{\rho}} = \frac{1}{\bar{\rho}} \int_0^1 \rho c\,P(c)\,dc = \frac{\rho_u \bar{c}}{(1 + \tau)\,\bar{\rho}}.$$

Combining the two expressions, we find

$$\bar{c} = \frac{(1 + \tau)\,\tilde{c}}{1 + \tau \tilde{c}}$$

and hence

$$1 - \bar{c} = \frac{1 - \tilde{c}}{1 + \tau \tilde{c}}.$$

The density average is

$$\bar{\rho} = \frac{\rho_u}{1 + \tau \tilde{c}}.$$
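The bimodal-pdf relations above can be checked numerically. In this sketch, the values of the heat release parameter, the unburnt density and the Favre mean are assumed purely for illustration:

```python
# Numerical check of the BML bimodal-pdf relations.
# tau, rho_u and c_tilde are assumed illustrative values, not from the text.
tau = 5.0        # heat release parameter (T_b - T_u)/T_u
rho_u = 1.2      # unburnt gas density
c_tilde = 0.3    # Favre-averaged progress variable

# Reynolds-averaged progress variable from the combined expression
c_bar = (1 + tau) * c_tilde / (1 + tau * c_tilde)

# Mean density two ways: bimodal average over the pdf vs. the closed form
rho_b = rho_u / (1 + tau)                         # burnt gas density at c = 1
rho_bar_pdf = (1 - c_bar) * rho_u + c_bar * rho_b
rho_bar_closed = rho_u / (1 + tau * c_tilde)

# Favre average recovered from the pdf: only the delta at c = 1 contributes
c_tilde_check = rho_b * c_bar / rho_bar_pdf
```

Both routes to the mean density agree, and the Favre average recovered from the pdf returns the assumed input value, confirming that the relations are mutually consistent.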
General density function
If the reaction sheet is not assumed to be thin, then there is a chance that one can find a value of $c$ in between zero and unity, although in reality the reaction sheet is mostly thin compared to turbulent scales. Nevertheless, the general form of the density function can be written as

$$P(c; \mathbf{x}, t) = \alpha(\mathbf{x}, t)\,\delta(c) + \beta(\mathbf{x}, t)\,\delta(1 - c) + \gamma(\mathbf{x}, t)\,f(c; \mathbf{x}, t),$$

where $\gamma$ is the probability of finding the progress variable undergoing reaction (where the transition from zero to unity is effected). Here, we have

$$\alpha + \beta + \gamma = 1,$$

where $\gamma$ is negligible in most regions.
References
Fluid dynamics
Combustion
Turbulence

Feeding behavior of spotted hyenas

The spotted hyena is the most carnivorous member of the Hyaenidae. Unlike its brown and striped cousins, the spotted hyena is primarily a predator rather than a scavenger. One of the earliest studies to demonstrate its hunting abilities was done by Hans Kruuk, a Dutch wildlife ecologist, who showed through a 7-year study of hyena populations in Ngorongoro and Serengeti National Park during the 1960s that spotted hyenas hunt as much as lions; later studies have shown this to be the case across Africa. Nevertheless, spotted hyenas remain mislabeled as scavengers, often even by ecologists and wildlife documentary channels.
Prey
Blue wildebeest are the most commonly taken medium-sized ungulate prey item in both Ngorongoro and the Serengeti, with zebra and Thomson's gazelles coming close behind. Cape buffalo are rarely attacked due to differences in habitat preference, though adult bulls have been recorded to be taken on occasion. In Kruger National Park, blue wildebeest, cape buffalo, Burchell's zebra, greater kudu and impala are the spotted hyena's most important prey, while giraffe, impala, wildebeest and zebra are its major food sources in the nearby Timbavati area. Springbok and kudu are the main prey in Namibia's Etosha National Park, and springbok in the Namib. In the southern Kalahari, gemsbok, wildebeest and springbok are the principal prey. In Chobe, the spotted hyena's primary prey consists of migratory zebra and resident impala. In Kenya's Masai Mara, 80% of the spotted hyena's prey consists of topi and Thomson's gazelle, save for during the four-month period when zebra and wildebeest herds migrate to the area. Bushbuck, suni and buffalo are the dominant prey items in the Aberdare Mountains, while Grant's gazelle, gerenuk, sheep, goats and cattle are likely preyed upon in northern Kenya.
In west Africa, the spotted hyena is primarily a scavenger who will occasionally attack domestic stock and medium-size antelopes in some areas. In Cameroon, it is common for spotted hyenas to feed on small antelopes like kob, but may also scavenge on reedbuck, kongoni, buffalo, giraffe, African elephant, topi and roan antelope carcasses. Records indicate that spotted hyenas in Malawi feed on medium to large-sized ungulates such as waterbuck and impala. In Tanzania's Selous Game Reserve, spotted hyenas primarily prey on wildebeest, followed by buffalo, zebra, impala, giraffe, reedbuck and kongoni. In Uganda, it is thought that the species primarily preys on birds and reptiles, while in Zambia it is considered a scavenger.
Spotted hyenas have also been found to catch fish, tortoises, humans, black rhino, hippo calves, young African elephants, pangolins and pythons. There is at least one record of four hyenas killing an adult or subadult hippopotamus in Kruger National Park. Spotted hyenas may consume leather articles such as boots and belts around campsites. Jane Goodall recorded spotted hyenas attacking or savagely playing with the exterior and interior fittings of cars, and the species is thought to be responsible for eating car tyres.
The fossil record indicates that the now extinct European spotted hyenas primarily fed on Przewalski's horses, Irish elk, reindeer, red deer, roe deer, fallow deer, wild boar, ibex, steppe wisent, aurochs, and woolly rhinoceros. Spotted hyenas are thought to be responsible for the dis-articulation and destruction of some cave bear skeletons. Such large carcasses were an optimal food resource for hyenas, especially at the end of winter, when food was scarce.
Hunting behaviour
Unlike other large African carnivores, spotted hyenas do not preferentially prey on any species, and only African buffalo and giraffe are significantly avoided. Spotted hyenas prefer prey with a body mass range of , with a mode of . When hunting medium to large sized prey, spotted hyenas tend to select certain categories of animal; young animals are frequently targeted, as are old ones, though the latter category is not so significant when hunting zebras, due to their aggressive anti-predator behaviours. Small prey is killed by being shaken in the mouth, while large prey is eaten alive.
The spotted hyena tracks live prey by sight, hearing and smell. Carrion is detected by smell and the sound of other predators feeding. During daylight hours, they watch vultures descending upon carcasses. Their auditory perception is powerful enough to detect sounds of predators killing prey or feeding on carcasses over distances of up to . Unlike the grey wolf, the spotted hyena relies more on sight than smell when hunting, and does not follow its prey's prints or travel in single file.
Spotted hyenas usually hunt wildebeest either singly, or in groups of two or three. They catch adult wildebeest usually after chases at speeds of up to 60 km/h (37 mi/h). Chases are usually initiated by one hyena and, with the exception of cows with calves, there is little active defence from the wildebeest herd. Wildebeest will sometimes attempt to escape hyenas by taking to water although, in such cases, the hyenas almost invariably catch them.
Zebras require different hunting methods to those used for wildebeest, due to their habit of running in tight groups and aggressive defence from stallions. Typical zebra hunting groups consist of 10–25 hyenas, though there is one record of a hyena killing an adult zebra unaided. During a chase, zebras typically move in tight bunches, with the hyenas pursuing behind in a crescent formation. Chases are usually relatively slow, with an average speed of 15–30 km/h. A stallion will attempt to place himself between the hyenas and the herd, though once a zebra falls behind the protective formation it is immediately set upon, usually after a chase of . Though hyenas may harass the stallion, they usually only concentrate on the herd and attempt to dodge the stallion's assaults. Unlike stallions, mares typically only react aggressively to hyenas when their foals are threatened. Unlike wildebeest, zebras rarely take to water when escaping hyenas.
When hunting Thomson's gazelles, spotted hyenas usually operate alone, and prey primarily on young fawns. Chases against both adult and young gazelles can cover distances of with speeds of 60 km/h (37 mi/h). Female gazelles do not defend their fawns, though they may attempt to distract hyenas by feigning weakness.
Feeding habits
A single spotted hyena can eat at least 14.5 kg of meat per meal, and although they act aggressively toward each other when feeding, they compete with each other mostly through speed of eating, rather than by fighting as lions do. Spotted hyenas can take less than two minutes to eat a gazelle fawn, while a group of 35 hyenas can completely consume an adult zebra in 36 minutes. Spotted hyenas do not require much water, and typically only spend 30 seconds drinking.
When feeding on an intact carcass, spotted hyenas will first consume the meat around the loins and anal region, then open the abdominal cavity and pull out the soft organs. Once the stomach, its wall and contents are consumed, the hyenas will eat the lungs and abdominal and leg muscles. Once the muscles have been eaten, the carcass is disassembled and the hyenas carry off pieces to eat in peace. Spotted hyenas are adept at eating their prey in water: they have been observed to dive under floating carcasses to take bites, then resurface to swallow.
The spotted hyena is very efficient at eating its prey; not only is it able to splinter and eat the largest ungulate bones, it is also able to digest them completely. Spotted hyenas can digest all organic components in bones, not just the marrow. Any inorganic material is excreted with the faeces, which consist almost entirely of a white powder with few hairs. They react to alighting vultures more readily than other African carnivores, and are more likely to stay in the vicinity of lion kills or human settlements.
References
Bibliography
Hyenas
Behavioral ecology
Predation

Martin Grohe

Martin Grohe (born 1967) is a German mathematician and computer scientist known for his research on parameterized complexity, mathematical logic, finite model theory, the logic of graphs, database theory, descriptive complexity theory, and graph neural networks. He is a University Professor of Computer Science at RWTH Aachen University, where he holds the Chair for Logic and Theory of Discrete Systems.
Life
Grohe earned his doctorate (Dr. rer. nat.) at the University of Freiburg in 1994. His dissertation, The Structure of Fixed-Point Logics, was supervised by Heinz-Dieter Ebbinghaus.
After postdoctoral research at the University of California, Santa Cruz and Stanford University, he earned his habilitation at the University of Freiburg in 1998. He became professor at the University of Illinois Chicago in 2000, reader at the University of Edinburgh in 2001, and professor at the Humboldt University of Berlin in 2003, before becoming professor at RWTH Aachen University in 2012.
Books
Grohe is the author of Descriptive Complexity, Canonisation, and Definable Graph Structure Theory (Lecture Notes in Logic 47, Cambridge University Press, 2017). In 2011, Grohe and Johann A. Makowsky edited Model Theoretic Methods in Finite Combinatorics (Contemporary Mathematics 558), the proceedings of the AMS–ASL special session held on January 5–8, 2009, in Washington, DC. With Jörg Flum, he is the co-author of Parameterized Complexity Theory (Springer, 2006).
Recognition
Grohe won the Heinz Maier–Leibnitz Prize awarded by the German Research Foundation in 1999, and he was elected as an ACM Fellow in 2017 for "contributions to logic in computer science, database theory, algorithms, and computational complexity". In 2022, he was awarded an ERC Advanced Grant "Symmetry and Similarity".
References
External links
1967 births
Living people
German computer scientists
20th-century German mathematicians
Mathematical logicians
University of Freiburg alumni
Academic staff of RWTH Aachen University
21st-century German mathematicians
Academic staff of the Humboldt University of Berlin

Cyclic and separating vector

In mathematics, the notion of a cyclic and separating vector is important in the theory of von Neumann algebras, and, in particular, in Tomita–Takesaki theory. A related notion is that of a vector that is cyclic for a given operator. The existence of cyclic vectors is guaranteed by the Gelfand–Naimark–Segal (GNS) construction.
Definitions
Given a Hilbert space H and a linear space A of bounded linear operators in H, an element Ω of H is said to be cyclic for A if the linear space AΩ = {aΩ: a ∈ A} is norm-dense in H. The element Ω is said to be separating if aΩ = 0 for a in A implies that a = 0. Note that:
Any element Ω of H defines a semi-norm p on A, with p(a) = ||aΩ||. The statement that "Ω is separating" is then equivalent to the statement that p is actually a norm.
If Ω is cyclic for A, then it is separating for the commutant A′ of A in B(H), which is the von Neumann algebra consisting of all bounded operators in H that commute with all elements of A, where A is a subset of B(H). In particular, if a belongs to the commutant A′ and satisfies aΩ = 0 for some Ω, then for all b in A, we have that 0 = baΩ = abΩ. Because the subspace bΩ for b in A is dense in the Hilbert space H, this implies that a vanishes on a dense subspace of H. By continuity, this implies that a vanishes everywhere. Hence, Ω is separating for A′.
The following, stronger result holds if A is a *-algebra (an algebra that is closed under adjoints) and unital (i.e., contains the identity operator 1). For a proof, see Proposition 5 of Part I, Chapter 1 of von Neumann algebras.
Proposition If A is a *-algebra of bounded linear operators on H and 1 belongs to A, then Ω is cyclic for A if and only if it is separating for the commutant A′.
A special case occurs when A is a von Neumann algebra, in which case a vector Ω that is cyclic and separating for A is also cyclic and separating for the commutant A′.
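The definitions above can be illustrated in a toy finite-dimensional setting. The example below is an assumption for illustration, not taken from the text: H = C², A = the *-algebra of 2×2 diagonal matrices, and "dense" reduces to "spanning":

```python
import numpy as np

# Toy finite-dimensional illustration (assumed example):
# H = C^2 and A = the *-algebra of 2x2 diagonal matrices acting on H.
e0 = np.diag([1.0, 0.0])   # matrix units spanning A
e1 = np.diag([0.0, 1.0])

def is_cyclic(v):
    # A v is dense in H (here: equals H) iff the images of v under a
    # basis of A span C^2.
    return np.linalg.matrix_rank(np.column_stack([e0 @ v, e1 @ v])) == 2

def is_separating(v):
    # a = diag(x, y) gives a v = (x*v0, y*v1), which vanishes for some
    # nonzero a exactly when v has a zero entry.
    return bool(np.all(v != 0))

omega = np.array([1.0, 1.0])       # cyclic and separating for A
omega_bad = np.array([1.0, 0.0])   # neither: e1 @ omega_bad = 0 though e1 != 0
```

Since the commutant of the diagonal algebra is the diagonal algebra itself, "cyclic for A" and "separating for A′" coincide for this example, in line with the proposition above.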
Positive linear functionals
A positive linear functional ω on a *-algebra A is said to be faithful if, for any positive element a in A, ω(a) = 0 implies that a = 0.
Every element Ω of the Hilbert space H defines a positive linear functional ωΩ on a *-algebra A of bounded linear operators on H via the inner product, ωΩ(a) = (aΩ,Ω) for all a in A. If ωΩ is defined in this way and A is a C*-algebra, then ωΩ is faithful if and only if the vector Ω is separating for A. Note that a von Neumann algebra is a special case of a C*-algebra.
Proposition Let φ and ψ be elements of H that are cyclic for A. Assume that ωφ = ωψ. Then there exists an isometry U in the commutant A′ such that φ = Uψ.
References
Linear operators
Operator theory

Qualcomm Wi-Fi SON

Qualcomm Wi-Fi SON (Self-Organizing Network) is a solution developed by Qualcomm for Wi-Fi networks that simply and automatically selects and links different wireless networking devices together, using the concept of "mesh networking". It is intended to improve network coverage in the different corners of a house or apartment, and also to provide improved security. Note that the technology behind the solution is closer to mesh networking than to a cellular self-organizing network, which would include dynamic adjustment between the different access points and client devices.
References
Qualcomm
Wi-Fi
Mesh networking

Kiyoshi Nagai

Kiyoshi Nagai (June 25, 1949 – September 27, 2019) was a Japanese structural biologist at the MRC Laboratory of Molecular Biology, Cambridge, UK. He was known for his work on the mechanism of RNA splicing and structures of the spliceosome.
Education
Nagai studied at Osaka University and earned a Doctor of Philosophy under the supervision of Hideki Morimoto working on the allosteric effect in hemoglobin.
Career and research
In 1981 Nagai moved to the MRC Laboratory of Molecular Biology where he worked as a post-doc with Max Perutz on overproduction of eukaryotic proteins in E. coli. He produced recombinant hemoglobin and studied its properties and evolution by crystallography and mutagenesis. In 1987 he became a tenured group leader at the LMB and was joint head of the Division of Structural Studies from 2000 to 2010. He was appointed fellow of Darwin College, Cambridge in 1993.
In 1990 his group solved the first structure of an RRM (RNA recognition motif) protein, U1A, and in 1994 showed how it specifically binds RNA. Subsequent work involved crystallographic studies of other components of the spliceosome, a large macromolecular machine that catalyses RNA splicing in eukaryotes, including components of the U2 snRNP and the Sm proteins and culminating in the crystal structures of the full U1 snRNP and the U5 snRNP components Prp8 and Brr2.
From 2014, Nagai's group used cryo-electron microscopy to study the spliceosome. Structures of the U5.U4/U6 tri-snRNP gave the first structural insights into the assembly of the spliceosome. Nagai's subsequent structures of spliceosomes in various stages of assembly and catalysis combined with structures from the groups of Reinhard Lührmann, Yigong Shi and others have provided crucial insight into the catalytic mechanism of pre-mRNA splicing.
Awards
2000 Fellow of the Royal Society
1999 Member, European Molecular Biology Organisation (EMBO)
2000 Novartis Medal of the Biochemical Society
References
Structural biologists
Fellows of the Royal Society
Osaka University alumni
Members of the European Molecular Biology Organization
Fellows of Darwin College, Cambridge
1949 births
2019 deaths
20th-century Japanese biologists
21st-century Japanese biologists
People from Osaka
Japanese expatriates in the United Kingdom

Peng Sixun

Peng Sixun (; 28 July 1919 – 9 December 2018) was a Chinese medicinal chemist.
A native of Baojing County, Peng was of Tujia descent. He graduated from the National College of Pharmacy in 1942, and completed a master's degree at Columbia University in 1950. Peng returned to teach at his alma mater, which had been renamed China Pharmaceutical University, and was elected to the Chinese Academy of Engineering in February 1996. Peng died at the age of 99 on 9 December 2018.
References
1919 births
2018 deaths
20th-century Chinese chemists
Columbia University alumni
Members of the Chinese Academy of Engineering
Tujia people
Chemists from Hunan
Medicinal chemistry
China Pharmaceutical University alumni
People from Xiangxi

Yi Zhang (biochemist)

Yi Zhang () is a Chinese-American biochemist who specializes in the fields of epigenetics, chromatin, and developmental reprogramming. He is the Fred Rosen Professor of Pediatrics and professor of genetics at Harvard Medical School, a senior investigator in the Program in Cellular and Molecular Medicine at Boston Children's Hospital, and an investigator of the Howard Hughes Medical Institute. He is also an associate member of the Harvard Stem Cell Institute, as well as the Broad Institute of MIT and Harvard. He is best known for his discovery of several classes of epigenetic enzymes and the identification of epigenetic barriers to SCNT cloning.
Education
Zhang received his B.Sc. and master's degrees in biophysics from China Agricultural University in 1984 and 1987, respectively. He then received his Ph.D. in molecular biophysics from Florida State University in 1995. From 1995 to 1999, he did his postdoctoral training in the lab of Danny Reinberg at the Howard Hughes Medical Institute, Robert Wood Johnson Medical School of the University of Medicine and Dentistry of New Jersey.
Career and research
Appointments
2012–present Fred Rosen Professor, Department of Genetics and Pediatrics, Harvard Medical School & Boston Children's Hospital
2005–present Investigator, Howard Hughes Medical Institute
2007–2013 Founder and scientific advisor of Epizyme, Cambridge, MA
1999–2012 Assistant Professor to Kenan Distinguished Professor, Dept. of Biochemistry & Biophysics, University of North Carolina at Chapel Hill
Research
Zhang has published more than 180 highly influential papers. These studies have been cited over 92,000 times (H-index 124), making him one of the top 10 authors of high-impact papers in the fields of molecular biology and genetics (ScienceWatch 2008), and one of the "most influential scientific minds" (ScienceWatch 2014). He was also a founder of Epizyme and NewStem (Natick, MA). His current efforts are focused on the molecular mechanisms of embryonic development and reprogramming, brain reward-related learning and memory, and pancreatic cancer.
Zhang has made several landmark discoveries in the fields of epigenetics, chromatin and developmental reprogramming.
Zhang was the first to systematically identify and characterize six histone methyltransferases, including the H4R3 methyltransferase PRMT1, the H3K79 methyltransferase Dot1L, and the H3K27me3 methyltransferase EZH2/PRC2. He went on to demonstrate the function of H3K27me3 methylation in X chromosome inactivation, genomic imprinting, and non-coding RNA regulation. He was also the first to uncover PRC1 as an E3 ligase mediating H2A ubiquitylation. By discovering two enzymatic activities of two PcG protein complexes, Zhang has contributed significantly to our current understanding of the PcG silencing mechanism.
Zhang was the first to show that the JmjC domain is a signature motif for histone demethylases. He not only worked out the demethylation mechanism, but also demonstrated that JmjC demethylases can demethylate the trimethyl state. Zhang went on to show the diverse functions of histone demethylases in spermatogenesis, metabolism, cancer, iPSC generation, and somatic cell nuclear transfer reprogramming. The last finding overcomes a major barrier in SCNT cloning, contributing to the success of the first primate cloning by a team of Chinese scientists.

Zhang not only discovered 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC) in mammalian genomic DNA, but also elucidated the DNA demethylation mechanism by demonstrating that Tet proteins can sequentially oxidize 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC), 5fC, and 5caC in a cyclic manner in mouse embryonic stem cells. He continued to reveal the function of Tet proteins in zygotic DNA demethylation, germ cell development, and genomic imprinting erasure.

Zhang contributed to the understanding of the molecular events during mammalian embryogenesis by uncovering an important function of de novo nucleosome assembly in nuclear pore complex formation, identifying key factors for zygotic genome activation, revealing a new mechanism of genomic imprinting and imprinted X-inactivation, as well as the role of this new imprinting mechanism in SCNT cloning.
Honors and recognition
2023 Elected to the National Academy of Medicine
2012 Fellow, American Association for the Advancement of Science
2012 Fred Rosen chair, Harvard Medical School & Boston Children's Hospital
2009 Senior Investigator Award, Chinese Biological Investigators Society
2009 Kenan Distinguished Professorship, University of North Carolina-Chapel Hill
2008 The Battle Distinguished Cancer Research Award, University of North Carolina-Chapel Hill
2008 Top 10 authors of high-impact papers by ScienceWatch
2005 Investigator, Howard Hughes Medical Institute
2004 Hettleman Prize for Artistic and Scholarly Achievement, University of North Carolina-Chapel Hill
2000 V. Scholar Award, V Foundation for Cancer Research
References
External links
The Zhang Lab
Yi Zhang's Google Scholar Profile
Year of birth missing (living people)
Living people
Epigeneticists
Stem cell researchers
Howard Hughes Medical Investigators
Fellows of the American Association for the Advancement of Science
Harvard Medical School faculty
Chemists from Chongqing
Chinese emigrants to the United States
American biochemists
Chinese biochemists
China Agricultural University alumni
Florida State University alumni
Educators from Chongqing
Biologists from Chongqing
Members of the National Academy of Medicine

List of German states by life expectancy

The official statistics of Germany, available on the Destatis website, do not include total life expectancy for the population as a whole. To allow a fairer comparison of regions whose life expectancies differ between men and women, a column with the arithmetic mean of the two indicators has been added to the tables.
Destatis (2021/2023)
By default, the table is sorted by the arithmetic mean for life expectancy at birth.
Data source: Destatis
Destatis (2016/2018)
This is a list of German states by life expectancy at birth (average of 2016 to 2018) according to the Federal Statistical Office of Germany.
Eurostat (2019—2022)
By default the table is sorted by 2022.
Data source: Eurostat
Global Data Lab (2019–2022)
Data source: Global Data Lab
Charts
See also
List of countries by life expectancy
List of European countries by life expectancy
Demographics of Germany
References
External links
Federal Statistical Office
Life expectancy
Germany, life expectancy
Germany
Germany health-related lists

NEOSTEL

The Near Earth Object Survey TELescope (NEOSTEL - also known as "Flyeye") is an astronomical survey and early-warning system for detecting near-Earth objects sized and above a few weeks before they impact Earth.
NEOSTEL is a project funded by the European Space Agency (ESA), starting with an initial prototype currently under construction at OHB in Italy. The telescope is of a new "fly-eye" design inspired by the wide field of vision of a fly's eye. The design combines a single objective reflector with multiple sets of optics and CCDs, giving a very wide field of view (around , or 220 times the area of the full moon). When complete it will have one of the widest fields of view of any telescope and will be able to survey the majority of the visible sky in a single night. If the initial prototype is successful, three more telescopes are planned, in complementary positions around the globe close to the equator.
In terms of light gathering power, the size of the primary mirror is not directly comparable to more conventional telescopes because of the novel design, but is equivalent to a conventional 1-metre telescope and should have a limiting magnitude of around 21.
The project is part of the NEO Segment of ESA's Space Situational Awareness Programme. The telescope itself should be complete by the end of 2024, and installation on Mount Mufara, Sicily, should be complete in 2025, having been agreed with the Italian Space Agency in October 2018. Development of the telescope was reported as on track in February 2019.
Optics
The fly-eye aspect of the telescope refers to the use of compound optics, as opposed to the single set of optics used in a conventional telescope. Classically, telescopes were designed around a single human observer looking through an eyepiece. Astrographs were developed in the 19th century, in which a photographic plate, or later a CCD, records the image, which a human observer can then view. With the human eye no longer directly observing the image, there is no longer a restriction to a single viewing point, and asteroid detection software has become fully automated, so a human observer need not view the majority of images at all.
Light enters the NEOSTEL telescope through the aperture and is reflected off the primary mirror onto a secondary, consisting of 16 mirrors arranged on a hexadecagonal pyramid. The split beam then passes into 16 separate aspheric lenses and on to 16 corresponding CCD image sensors. NEOSTEL uses the 16 CCD cameras to view 45 square degrees of light entering the telescope aperture. The pixel scale is 1.5 arc seconds per pixel across the whole field of view.
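As a back-of-envelope check of the focal-plane figures above (45 square degrees at 1.5 arc seconds per pixel across 16 CCDs; the variable names are ours, not the project's), the implied pixel counts can be worked out directly:

```python
# Back-of-envelope check of the focal-plane figures quoted above:
# 45 square degrees imaged at 1.5 arcseconds per pixel across 16 CCDs.
FIELD_SQ_DEG = 45.0
PIXEL_SCALE_ARCSEC = 1.5
NUM_CCDS = 16

pixels_per_degree = 3600.0 / PIXEL_SCALE_ARCSEC       # 2400 pixels per degree
total_pixels = FIELD_SQ_DEG * pixels_per_degree ** 2  # whole focal plane
pixels_per_ccd = total_pixels / NUM_CCDS

print(f"total:   {total_pixels / 1e6:.1f} Mpx")    # total:   259.2 Mpx
print(f"per CCD: {pixels_per_ccd / 1e6:.1f} Mpx")  # per CCD: 16.2 Mpx
```

About 260 megapixels in total, roughly 16 megapixels per CCD, which is consistent with an instrument designed to survey most of the visible sky each night.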
Observatory
NEOSTEL's detection capabilities and the quality of service it requires (in particular, the use of a fast slewing equatorial mount) mean that a standard telescope dome and observatory design will not be sufficient. Work has been carried out on optimizing the design of the infrastructure layout to solve these problems, whilst minimising the impact of the infrastructure on the environment in Madonie Regional Natural Park, where Monte Mufara is situated.
See also
List of near-Earth object observation projects
Asteroid Terrestrial-impact Last Alert System
Asteroid impact prediction
References
External links
GAL Hassin: Ente Parco delle Madonie: sì al progetto dell’Osservatorio su Monte Mufara
Optical telescopes
Astronomical surveys
Asteroid surveys
Near-Earth object tracking | NEOSTEL | [
"Astronomy"
] | 689 | [
"Astronomical surveys",
"Astronomical objects",
"Works about astronomy"
] |
59,338,885 | https://en.wikipedia.org/wiki/Crisper%20drawer | A crisper drawer (also known as a crisper) is a compartment within a refrigerator designed to prolong the freshness of stored produce. Crisper drawers have a different level of humidity from the rest of the refrigerator, optimizing freshness in fruits and vegetables. Some can be adjusted to either prevent the loss of moisture from produce, or to allow ethylene gas produced by certain fruits to escape in order to prevent them from rotting quickly.
Etymology
The first known use of the word "crisper" in relation to a crisper drawer was in 1835.
Design and operation
Crisper drawers operate by creating an environment of greater humidity than the rest of the refrigerator. Many crisper drawers have a separate humidity control which closes or opens a vent in the drawer. When the vent is in the closed position, airflow is shut off, creating greater humidity in the drawer. High humidity is optimal for the storage of leafy or thin-skinned vegetables. When the vent is in the open position, airflow keeps humidity in the crisper drawer low, which is beneficial for the storage of fruits. Additionally, because some fruits emit high levels of ethylene gas, the open vent allows the ethylene gas to escape, preventing these foods from rotting. The ability to separate low-humidity fruits from high-humidity vegetables using the different crisper drawers also prevents ethylene gas from damaging the latter. Crisper drawers which do not have a humidity control are, by default, high humidity crispers.
Reported public confusion
Appliance manufacturers have reported that many refrigerator owners are unaware of the purpose or operation of crisper drawers. A 2010 survey commissioned by Robert Bosch GmbH found that 55 percent of surveyed Americans "admit to not knowing how to use their crisper drawer controls".
In the UK, sources often use the term "crisper drawer" in conjunction with a nearby explanation, like the Vegetable Expert advice website calling them "special compartments or 'crisper drawers' to store fruits and vegetables", the consumers' organisation Which? calling it a "salad crisper drawer [...] for storing your fruit and veg", and some appliance companies calling it a "fridge/freezer salad crisper".
See also
List of home appliances
References
Food preservation
Home appliances
Refrigerators | Crisper drawer | [
"Physics",
"Technology"
] | 466 | [
"Physical systems",
"Machines",
"Home appliances"
] |
59,339,836 | https://en.wikipedia.org/wiki/Interim%20Register%20of%20Marine%20and%20Nonmarine%20Genera | The Interim Register of Marine and Nonmarine Genera (IRMNG) is a taxonomic database which attempts to cover published genus names for all domains of life (also including subgenera in zoology), from 1758 in zoology (1753 in botany) up to the present, arranged in a single, internally consistent taxonomic hierarchy, for the benefit of Biodiversity Informatics initiatives plus general users of biodiversity (taxonomic) information. In addition to containing just over 500,000 published genus name instances as at May 2023 (also including subgeneric names in zoology), the database holds over 1.7 million species names (1.3 million listed as "accepted"), although this component of the data is not maintained in as current or complete state as the genus-level holdings. IRMNG can be queried online for access to the latest version of the dataset and is also made available as periodic snapshots or data dumps for import/upload into other systems as desired. The database was commenced in 2006 at the then CSIRO Division of Marine and Atmospheric Research in Australia and, since 2016, has been hosted at the Flanders Marine Institute (VLIZ) in Belgium.
Description
IRMNG contains scientific names (only) of the genera (plus zoological subgenera, see below), a subset of species, and principal higher ranks of most plants, animals and other kingdoms, both living and extinct, within a standardized taxonomic hierarchy, with associated machine-readable information on habitat (e.g. marine/nonmarine) and extant/fossil status for the majority of entries. The database aspires to provide complete coverage of both accepted and unaccepted genus names across all kingdoms, with a subset only of species names included as a secondary activity. The names in IRMNG fall within the governance of the International Code of Zoological Nomenclature for zoology (covering animals, zoological protists, and trace fossils attributable to the activities of animals), the International Code of Nomenclature for algae, fungi, and plants (ICN or ICNafp) for botany including those groups, the International Code of Nomenclature of Prokaryotes for Bacteria and Archaea, and the International Committee on Taxonomy of Viruses for that group.
In its May 2023 release, IRMNG contained 500,077 genus names, of which 240,625 were listed as "accepted", 127,971 "unaccepted", 7,932 of "other" status i.e. interim unpublished, nomen dubium, nomen nudum, taxon inquirendum or temporary name, and 123,552 as "uncertain" (unassessed for taxonomic status at this time). The data originate from a range of (frequently domain-specific) print, online and database sources, including (among others) Nomenclator Zoologicus for animals and Index Nominum Genericorum for plants, and are reorganised into a common data structure to support a variety of online queries, generation of individual taxon pages, and bulk data supply to other biodiversity informatics projects. IRMNG content can be queried and displayed freely via the web, and download files of the data down to the taxonomic rank of genus as at specific dates are available in the Darwin Core Archive (DwC-A) format. The data include homonyms (with their authorities), including both available (validly published) and selected unavailable names.
Since in zoology (only) names of subgenera are included, along with genera, in the "genus-group" and are deemed by the "principle of coordination" to have been simultaneously published at both ranks even if not explicitly so at the time of original publication, they are included as available generic names in the IRMNG compilation but marked as "unaccepted names" (not currently used as the accepted name for a genus) unless where they are currently in use as the accepted name for a genus. By contrast, the botanical Code (ICN), which covers Algae, Fungi and Plants, lacks such a provision, so subgenera published under that Code are not included in IRMNG except where they have been explicitly re-ranked to be botanical genera, in which case a new "name" is considered to have been created at that point with authorship given in the form of the original author(s) of the subgenus in parentheses, followed by the name of the author(s) responsible for the newly elevated status.
Estimates for "accepted names" as held at May 2023 are as follows, broken down by kingdom, following the methodology used in Rees et al., 2020 (updated using 2023 data):
Animalia: 189,287 accepted genus names plus notional 50% of 104,583 "uncertain" names: est. total 241,578 accepted genera
Plantae: 25,394 accepted genus names plus notional 50% of 13,838 "uncertain" names: est. total 32,313 accepted genera
Fungi: 10,702 accepted genus names plus notional 50% of 358 "uncertain" names: est. total 10,881 accepted genera
Chromista: 9,912 accepted genus names plus notional 50% of 2,379 "uncertain" names: est. total 11,102 accepted genera
Protozoa: 999 accepted genus names plus notional 50% of 2,164 "uncertain" names: est. total 2,081 accepted genera
Bacteria: 3,337 accepted genus names plus notional 50% of 223 "uncertain" names: est. total 3,393 accepted genera
Archaea: 140 accepted genus names (no "uncertain" genera)
Viruses: 851 accepted genus names (no "uncertain" genera)
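The estimates above all apply the same one-line rule from Rees et al. (2020): accepted names plus a notional half of the "uncertain" names. A minimal sketch (the helper name is ours, not from the database):

```python
# Estimation rule described in the text (Rees et al., 2020):
#   estimated accepted genera = accepted names + 50% of "uncertain" names.
# The function name is ours, not part of IRMNG itself.
def estimated_accepted(accepted: int, uncertain: int) -> int:
    """Accepted names plus a notional half of the unassessed names."""
    return round(accepted + 0.5 * uncertain)

# May 2023 figures quoted in the text:
print(estimated_accepted(189_287, 104_583))  # Animalia -> 241578
print(estimated_accepted(25_394, 13_838))    # Plantae  -> 32313
```

Plugging in the Animalia row (189,287 accepted, 104,583 uncertain) reproduces the 241,578 total quoted above.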
Database location and hosting
IRMNG was initiated and designed by Australian biologist and data manager Tony Rees in 2006. For his work on this and other projects, GBIF awarded him the 2014 Ebbe Nielsen Prize. From 2006 to 2014 IRMNG was located at CSIRO Marine and Atmospheric Research, and was moved to the Flanders Marine Institute (VLIZ) over the period 2014–2016; from 2016 onwards all releases have been available via its new website www.irmng.org which is hosted by VLIZ. VLIZ also hosts the World Register of Marine Species (WoRMS), using a common infrastructure.
IRMNG usage
Content from IRMNG is used by several global Biodiversity Informatics projects including Open Tree of Life, the Global Biodiversity Information Facility (GBIF), and the Encyclopedia of Life (EOL), in addition to others including the Atlas of Living Australia and the Global Names Architecture (GNA)'s Global Names Resolver. From 2018 onwards, IRMNG data are also being used to populate the taxonomic hierarchy and provide generic names for a range of taxa in the areas of protists (kingdoms Protozoa and Chromista) and plant algae (Charophyta, Chlorophyta, Glaucophyta and Rhodophyta) in the Catalogue of Life.
Notes
References
Further reading
External links
Taxonomy (biology)
Online databases
Internet properties established in 2006
Biodiversity databases
Online taxonomy databases | Interim Register of Marine and Nonmarine Genera | [
"Biology",
"Environmental_science"
] | 1,472 | [
"Environmental science databases",
"Biodiversity databases",
"Taxonomy (biology)",
"Biodiversity"
] |
54,350,964 | https://en.wikipedia.org/wiki/Thomas%20Jennewein | Thomas Jennewein is an Austrian physicist who conducts research in quantum communication and quantum key distribution. He has taught as an associate professor at the University of Waterloo and the Institute for Quantum Computing in Waterloo, Canada since 2009. He earned his PhD under Anton Zeilinger at the University of Vienna in 2002, during which time he performed experiments on Bell's inequality and cryptography with entangled photons. His current work at the Institute for Quantum Computing focuses on satellite-based free space quantum key distribution, with the goal of creating a global quantum network.
He is also an affiliate of the Perimeter Institute for Theoretical Physics, a fellow of the Canadian Institute for Advanced Research, and CEO and co-founder of quantum optics measurement device company UQDevices alongside physicist Raymond Laflamme.
Education and earlier work
Thomas Jennewein obtained an engineering degree in physics from HTL Anichstraße in 1991, his master's degree in experimental physics from the University of Innsbruck in 1997, and earned his doctoral degree at the University of Vienna in 2002. He then worked as a postdoctoral fellow at the Institute for Quantum Optics and Quantum Information within the Austrian Academy of Sciences from 2004 until 2009 and as a visiting research fellow at the University of Queensland from 2007 to 2008.
Current work
Since 2009, Jennewein has held an associate professorship position at the University of Waterloo and Institute for Quantum Computing where he is the leader of the Quantum Photonics Laboratory. He is currently "working with partners in industry and academia to advance a proposed microsatellite mission called QEYSSat through a series of technical studies funded initially by Defence Research and Development Canada (DRDC) and subsequently by the Canadian Space Agency (CSA)." In April 2017, the Canadian government announced funding of $80.9 million to the Canadian Space Agency for funding of two projects, one of which is for the "demonstration of the applications of quantum technology in space" with the goal of positioning "Canada as a leader in quantum encryption".
In December 2015, Jennewein, with researchers from the National Institute of Standards and Technology, the Joint Quantum Institute at the University of Maryland, and the Jet Propulsion Laboratory at the California Institute of Technology among others, closed two loopholes (namely, the locality and detection loopholes) in a Bell test experiment by using entangled photons to obtain a Bell inequality violation by seven standard deviations.
In April 2017, Jennewein and researchers from the Institute for Quantum Computing, the University of Innsbruck, the University of Paderborn, and the University of Moncton experimentally observed "three-photon interference that does not originate from two-photon or single photon interference" by following a "theoretical recipe proposed by Daniel Greenberger, Michael Horne, and Anton Zeilinger in 1993". The experiment later received one of the ten Physics World 2017 Breakthrough of the Year awards.
In June 2017, Jennewein and his colleagues published findings that showed the first demonstration of quantum key distribution from a ground transmitter to a "receiver prototype mounted on an airplane in flight", reporting optical links at distances of 3–10 km and the generation of secure keys up to 868 kilobytes in length.
References
Quantum physicists
Austrian physicists
Academic staff of the University of Waterloo
University of Vienna alumni
Year of birth missing (living people)
Living people | Thomas Jennewein | [
"Physics"
] | 678 | [
"Quantum physicists",
"Quantum mechanics"
] |
54,351,003 | https://en.wikipedia.org/wiki/Radiation%20Budget%20Instrument | The Radiation Budget Instrument (RBI) is a scanning radiometer capable of measuring Earth's reflected sunlight and emitted thermal radiation. The project was cancelled on January 26, 2018; NASA cited technical, cost, and schedule issues and the impact of anticipated RBI cost growth on other programs.
RBI was scheduled to fly on the Joint Polar Satellite System 2 (JPSS-2) mission planned for launch in November 2021; the JPSS-3 mission planned for launch in 2026; and the JPSS-4 mission planned for launch in 2031. The one on JPSS-2 would have been the 14th in the series that started with the Earth radiation budget instruments launched in 1985, and would have extended the unique global climate measurements of the Earth's radiation budget provided by the Clouds and the Earth's Radiant Energy System (CERES) instruments since 1998.
References
External links
Electromagnetic radiation meters
Radiometry | Radiation Budget Instrument | [
"Physics",
"Technology",
"Engineering"
] | 182 | [
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Electromagnetic radiation meters",
"Electromagnetic spectrum",
"Measuring instruments",
"Radiometry"
] |
54,352,124 | https://en.wikipedia.org/wiki/SSU%20rRNA | Small subunit ribosomal ribonucleic acid (SSU rRNA) is the smaller of the two major RNA components of the ribosome.
Associated with a number of ribosomal proteins, the SSU rRNA forms the small subunit of the ribosome. It is encoded by SSU-rDNA.
Characteristics
Use in phylogenetics
SSU rRNA sequences are widely used for determining evolutionary relationships among organisms, since they are of ancient origin and are found in all known forms of life.
See also
LSU rRNA: the large subunit ribosomal ribonucleic acid.
References
Ribosomal RNA
Protein biosynthesis | SSU rRNA | [
"Chemistry"
] | 130 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
54,352,245 | https://en.wikipedia.org/wiki/Fractional%20Laplacian | In mathematics, the fractional Laplacian is an operator which generalizes the notion of Laplacian spatial derivatives to fractional powers. It is often used to generalize certain types of partial differential equations by taking a known PDE containing the Laplacian and replacing it with the fractional version.
Definition
In the literature the definition of the fractional Laplacian often varies, but most of the time those definitions are equivalent. The following is a short overview of definitions whose equivalence was proven by Kwaśnicki.
Let $p \in [1, \infty)$ and $\mathcal{X} := L^p(\mathbb{R}^d)$, or let $\mathcal{X} := C_0(\mathbb{R}^d)$ or $\mathcal{X} := C_{bu}(\mathbb{R}^d)$, where:
$C_0(\mathbb{R}^d)$ denotes the space of continuous functions $u : \mathbb{R}^d \to \mathbb{R}$ that vanish at infinity, i.e., for every $\varepsilon > 0$ there exists a compact set $K \subseteq \mathbb{R}^d$ such that $|u(x)| < \varepsilon$ for all $x \in \mathbb{R}^d \setminus K$.
$C_{bu}(\mathbb{R}^d)$ denotes the space of bounded uniformly continuous functions $u : \mathbb{R}^d \to \mathbb{R}$, i.e., functions that are uniformly continuous, meaning for every $\varepsilon > 0$ there exists $\delta > 0$ such that $|u(x) - u(y)| < \varepsilon$ for all $x, y \in \mathbb{R}^d$ with $|x - y| < \delta$, and bounded, meaning there exists $M > 0$ such that $|u(x)| \le M$ for all $x \in \mathbb{R}^d$.
Additionally, let $s \in (0, 1)$.
Fourier Definition
If we further restrict to $p \in [1, 2]$, we get
$(-\Delta)^s f = \mathcal{F}^{-1}\left(|\xi|^{2s}\, \mathcal{F} f\right).$
This definition uses the Fourier transform for $f \in L^p(\mathbb{R}^d)$, $p \in [1, 2]$. This definition can also be broadened through the Bessel potential to all $p \in [1, \infty)$.
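On a periodic grid the Fourier definition, multiplication of the Fourier transform by $|\xi|^{2s}$, can be approximated directly with the FFT. A minimal NumPy sketch (the discrete, periodic setting and the test function are our simplifications of the continuous definition):

```python
import numpy as np

# Periodic (torus) approximation of the Fourier definition: multiply the
# discrete Fourier coefficients of f by |xi|^(2s).  On all of R^d the
# definition uses the continuous Fourier transform; this grid version is
# only a sketch of the same multiplier.
def frac_laplacian_1d(f: np.ndarray, s: float, L: float = 2 * np.pi) -> np.ndarray:
    n = f.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular frequencies
    return np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(f)).real

# Sanity check: f(x) = sin(3x) is an eigenfunction, (-Delta)^s f = 3^(2s) f.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
f = np.sin(3 * x)
out = frac_laplacian_1d(f, s=0.5)
print(np.allclose(out, 3.0 * f))  # True  (3^(2*0.5) = 3)
```

For f(x) = sin(3x) the multiplier acts on the single frequency |ξ| = 3, so with s = 1/2 the output is exactly 3·f, which makes the check above tight.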
Singular Operator
The fractional Laplacian can also be viewed as a singular integral operator, defined as the following limit taken in $\mathcal{X}$:
$(-\Delta)^s f(x) = c_{d,s} \lim_{r \to 0^+} \int_{\mathbb{R}^d \setminus B_r(x)} \frac{f(x) - f(y)}{|x - y|^{d + 2s}}\, dy, \qquad c_{d,s} = \frac{4^s\, \Gamma\!\left(\frac{d}{2} + s\right)}{\pi^{d/2}\, \left|\Gamma(-s)\right|}.$
Generator of C_0-semigroup
Using the fractional heat semigroup, which is the family of operators $\{P_t\}_{t \ge 0}$, we can define the fractional Laplacian through its generator.
Note that the generator is not the fractional Laplacian $(-\Delta)^s$ but its negative, $-(-\Delta)^s$. The operator is defined by
$-(-\Delta)^s f = \lim_{t \to 0^+} \frac{P_t f - f}{t},$
where $P_t f = p_t * f$ and $p_t * f$ is the convolution of the two functions $p_t$ (the kernel of the fractional heat semigroup) and $f$.
Distributional Definition
For all Schwartz functions $\varphi$, the fractional Laplacian can be defined in a distributional sense by
$\int_{\mathbb{R}^d} (-\Delta)^s f(x)\, \varphi(x)\, dx = \int_{\mathbb{R}^d} f(x)\, (-\Delta)^s \varphi(x)\, dx,$
where $(-\Delta)^s \varphi$ is defined as in the Fourier definition.
Bochner's Definition
The fractional Laplacian can be expressed using Bochner's integral as
$(-\Delta)^s f = \frac{1}{\left|\Gamma(-s)\right|} \int_0^\infty \left(f - e^{t\Delta} f\right) t^{-1-s}\, dt,$
where the integral is understood in the Bochner sense for $\mathcal{X}$-valued functions.
Balakrishnan's Definition
Alternatively, it can be defined via Balakrishnan's formula:
$(-\Delta)^s f = \frac{\sin(s\pi)}{\pi} \int_0^\infty \lambda^{s-1}\, (-\Delta)\left(\lambda\,\mathrm{Id} - \Delta\right)^{-1} f\, d\lambda,$
with the integral interpreted as a Bochner integral for $\mathcal{X}$-valued functions.
Dynkin's Definition
Another approach by Dynkin defines the fractional Laplacian as
$(-\Delta)^s f(x) = c_{d,s} \lim_{r \to 0^+} \int_{\mathbb{R}^d \setminus \bar{B}_r(x)} \frac{f(x) - f(y)}{|x - y|^d \left(|x - y|^2 - r^2\right)^s}\, dy,$
with the limit taken in $\mathcal{X}$.
Quadratic Form Definition
In $L^2(\mathbb{R}^d)$, the fractional Laplacian can be characterized via a quadratic form:
$\langle (-\Delta)^s f, g \rangle = \mathcal{E}(f, g),$
where
$\mathcal{E}(f, g) = \frac{c_{d,s}}{2} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \frac{\left(f(x) - f(y)\right)\left(g(x) - g(y)\right)}{|x - y|^{d + 2s}}\, dy\, dx.$
Inverse of the Riesz Potential Definition
When $0 < s < \tfrac{d}{2}$ and for sufficiently regular $f$, the fractional Laplacian satisfies
$(-\Delta)^s \left(I_{2s} f\right) = f,$
where $I_{2s}$ is the Riesz potential
$I_{2s} f(x) = \frac{\Gamma\!\left(\frac{d - 2s}{2}\right)}{4^s\, \pi^{d/2}\, \Gamma(s)} \int_{\mathbb{R}^d} \frac{f(y)}{|x - y|^{d - 2s}}\, dy,$
i.e., the fractional Laplacian inverts the Riesz potential.
Harmonic Extension Definition
The fractional Laplacian can also be defined through harmonic extensions. Specifically, there exists a function $u(x, y)$ on $\mathbb{R}^d \times [0, \infty)$ such that
$\begin{cases} \Delta_x u(x, y) + \dfrac{1 - 2s}{y}\, \partial_y u(x, y) + \partial_y^2 u(x, y) = 0 \\ u(x, 0) = f(x) \\ \lim_{y \to 0^+} y^{1 - 2s}\, \partial_y u(x, y) = -c_s\, (-\Delta)^s f(x) \end{cases}$
where $c_s$ is a positive constant depending only on $s$, and $u(\cdot, y)$ is a function in $\mathcal{X}$ that depends continuously on $y \in [0, \infty)$ with $\|u(\cdot, y)\|_{\mathcal{X}}$ bounded for all $y \ge 0$.
See also
Fractional calculus
Nonlocal operator
Riemann-Liouville integral
References
External links
"Fractional Laplacian". Nonlocal Equations Wiki, Department of Mathematics, The University of Texas at Austin.
Fractional calculus | Fractional Laplacian | [
"Mathematics"
] | 574 | [
"Fractional calculus",
"Calculus"
] |
54,352,274 | https://en.wikipedia.org/wiki/LSU%20rRNA | Large subunit ribosomal ribonucleic acid (LSU rRNA) is the larger of the two major RNA components of the ribosome.
Associated with a number of ribosomal proteins, the LSU rRNA forms the large subunit of the ribosome.
The LSU rRNA acts as a ribozyme, catalyzing peptide bond formation.
Characteristics
Use in phylogenetics
LSU rRNA sequences are widely used for working out evolutionary relationships among organisms, since they are of ancient origin and are found in all known forms of life.
See also
SSU rRNA: the small subunit ribosomal ribonucleic acid.
References
Ribosomal RNA
Protein biosynthesis | LSU rRNA | [
"Chemistry"
] | 137 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
54,352,461 | https://en.wikipedia.org/wiki/LEDA%20135657 | LEDA 135657 is a distant low surface brightness spiral galaxy located about 570 million light-years away in the constellation Cetus. It has an estimated diameter of 97,000 light-years.
See also
Malin 1, a giant low surface brightness galaxy discovered in 1986
NGC 45
Low-surface-brightness galaxy
References
External links
Spiral galaxies
Cetus
135657
Low surface brightness galaxies | LEDA 135657 | [
"Astronomy"
] | 78 | [
"Cetus",
"Constellations"
] |
54,353,631 | https://en.wikipedia.org/wiki/Thomas%20Fantet%20de%20Lagny | Thomas Fantet de Lagny (7 November 1660 – 11 April 1734) was a French mathematician, well known for his contributions to computational mathematics, and for calculating π to 112 correct decimal places.
Biography
Thomas Fantet de Lagny was son of Pierre Fantet, a royal official in Grenoble, and Jeanne d'Azy, the daughter of a physician from Montpellier.
He entered a Jesuit College in Lyon, where he became passionate about mathematics, as he studied some mathematical texts such as Euclid by Georges Fournier and an algebra text by Jacques Pelletier du Mans. Then he studied three years in the Faculty of Law in Toulouse.
In 1686, he went to Paris and became a mathematics tutor to the Noailles family. He collaborated with de l'Hospital under the name of de Lagny, and at that time he started publishing his first mathematical papers.
He came back to Lyon when, on 11 December 1695, he was named an associate of the Académie Royale des Sciences. Then, in 1697, he became professor of hydrography at Rochefort for 16 years.
De Lagny returned to Paris in 1714, and became a librarian at the Bibliothèque du roi, and a deputy director of the Banque Générale between 1716 and 1718. On 7 July 1719, he was awarded a pension by the Académie Royale des Sciences, finally earning his living from science. In 1723, he became a pensionnaire at the academy, replacing Pierre Varignon who died in 1722, but had to retire in 1733.
De Lagny died on 11 April 1734. While he was dying, someone asked him: "What is the square of 12?" and he answered immediately: "144."
Computing π
In 1719, de Lagny calculated π to 127 decimal places, using Gregory's series for arctangent, but only 112 decimals were correct. This remained the record until 1789, when Jurij Vega calculated 126 correct digits of π.
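Gregory's series converges impractically slowly at x = 1, but at x = 1/√3, where the arctangent equals π/6, it gains roughly one digit every two terms; whether this is exactly the variant de Lagny evaluated is an assumption of this sketch, written with Python's decimal module:

```python
from decimal import Decimal, getcontext

# Gregory's series for arctangent, evaluated at x = 1/sqrt(3) where
# arctan(x) = pi/6, so that
#     pi = 2*sqrt(3) * sum_{k>=0} (-1)^k / (3^k * (2k+1)).
# Each term shrinks by a factor of 3, giving roughly one digit per two
# terms.  Whether this is exactly the variant de Lagny used is an
# assumption of this sketch.
def pi_gregory(digits: int) -> Decimal:
    getcontext().prec = digits + 10             # guard digits
    total, term, k = Decimal(0), Decimal(1), 0  # term = 1/3^k
    while term > Decimal(10) ** -(digits + 5):
        total += (term if k % 2 == 0 else -term) / (2 * k + 1)
        term /= 3
        k += 1
    pi = 2 * Decimal(3).sqrt() * total
    getcontext().prec = digits
    return +pi                                  # unary + rounds to `digits`

print(pi_gregory(30))  # 3.14159265358979323846264338...
```

At this rate, 127 decimal places require only a few hundred terms, though by hand each term is still a laborious long division.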
Bibliography
Méthode nouvelle infiniment générale et infiniment abrégée pour l’extraction des racines quarrées, cubiques... (Paris, 1691)
Méthodes nouvelles et abrégées pour l’extraction et l’approximation des racines (Paris, 1692)
Nouveaux élémens d’arithmétique et d’algébre ou introduction aux mathématiques (Paris, 1697)
Trigonométrie française ou reformée (Rochefort, 1703)
De la cubature de la sphére où l’on démontre une infinité de portions de sphére égales à des pyramides rectilignes (La Rochelle, 1705)
Analyse générale ou Méthodes nouvelles pour résoudre les problémes de tous les genres et de tous degrés à l’infini, M. Richer, ed. (Paris, 1733)
References
Lagny, Thomas Fantet de, Encyclopedia.com
1660 births
1734 deaths
French mathematicians
Pi-related people | Thomas Fantet de Lagny | [
"Mathematics"
] | 616 | [
"Pi-related people",
"Pi"
] |
54,355,694 | https://en.wikipedia.org/wiki/MT3D | MT3D is a family of finite-difference groundwater mass transport modeling software, often used with MODFLOW. The first generation, MT3D, was developed by Chunmiao Zheng in 1990, and most recently released by the U.S. Geological Survey with MT3D-USGS.
Versions
MT3D
Appeared in 1990.
MT3DMS
Second generation, released in 1998. Improved to simulate multiple species.
MT3D-USGS
Third generation, released in 2016. Improved to support new transport modeling capabilities from MODFLOW.
References
Hydrology models
Hydrogeology software
United States Geological Survey
Geology software for Linux
Public-domain software | MT3D | [
"Biology",
"Environmental_science"
] | 125 | [
"Hydrology",
"Environmental modelling",
"Hydrology models",
"Biological models"
] |
54,356,586 | https://en.wikipedia.org/wiki/Akal%20Wood%20Fossil%20Park | Akal Wood Fossil Park is a National Geological Monument of India located in Jaisalmer district, Rajasthan. It is also a Biodiversity Heritage Site.
It is 21 hectares in extent and is located in Akal village, 17–18 km southeast of Jaisalmer city, and 1 km off the NH-68 Jaisalmer-Barmer road, on a stretch of about 10 km2 of bare hillside. The terrain is barren and rocky.
The park lies in Jaisalmer's fossil belt, a region noted to have the potential for geological parks. Fossils and footprints of pterosaurs have been found in the nearby Thaiyat area.
The park contains fossils of Pterophyllum, Ptilophyllum, Equisetites species and dicotyledonous wood and gastropod shells of the Early Jurassic period. There are about a dozen fossilised wood logs lying horizontally oriented in random directions, the largest of which is 13.4 m in length and 0.9 m in width. There are a total of 25 petrified tree trunks. The fossils date back 180 million years.
The Geological Survey of India (GSI) declared the site a National Geological Monument in 1972. The park was maintained by GSI till 1985, when maintenance was handed over to the Forest Department of Government of Rajasthan. Now, the park is maintained by the authorities of the Desert National Park. The exposed tree trunks have been protected by iron grill cages with tin sheet roofing.
Geology
The fossils are considered to be of non-flowering trees such as Chir, Deodar and Redwood, as only non-flowering plants (gymnosperms) existed during the geological time when the fossilization took place.
The petrified wood is indicative of lush forests in a tropical warm and humid climate thriving 180 million years ago. The existence of gastropod shell fossils also suggests that the region was once under the sea. This is further supported by fossils of gymnosperm stems and by fluviatile sediments and deposits. The wood fossils have given evidence that the area has been under the sea on four occasions. However, the host rocks of the wood fossils have been considered continental (non-marine).
After the tree trunks were buried and petrified, geological activities are supposed to have occurred causing the shifting and upheaval of the basin, bringing the fossils to the surface. In addition to the exposed ones, there are more wood fossils lying beneath the surface. Evidence of fruits have also been found.
Fossils in the nearby areas
Isolated teeth of five genera of large predatory lamniform sharks with serrated teeth, buried in multiple layers of sandstones and limestones of the geological Habur Formation, were found at Kanoi village. These sharks, which lived during the early Cretaceous period, were Cretalamna, Dwardius, Leptostyrax, Squalicorax, and Eostriatolamia; among these, “Dwardius and Eostriatolamia may possibly be among the globally oldest, with the fossils being an astonishing 115-million-year old”. The Habur Formation, which spreads around Habur village, is well known for stones containing fossils.
Visiting
The park is open to visitors throughout the week from 9 am to 6 pm. A nominal entrance fee is charged. There is a small museum near the entrance which showcases photographs of fossils along with brief descriptions.
In 2017, the government of Rajasthan earmarked Rs 10.9 crore to turn the park into an international geographical heritage site, with Rs 5 crore proposed for modernising the park. Apart from tourism, there is also a focus on attracting researchers.
See also
Nearby
Desert National Park
Kuldhara
Rajkumari Ratnavati Girls School at Kanoi
Sam sand dune safaris and resorts
Fossil parks in India
National Fossil Wood Park, Tiruvakkarai
National Fossil Wood Park, Sathanur
References
National Geological Monuments in India
Fossil parks in India
Mesozoic paleontological sites of Asia
Tourist attractions in Jaisalmer district
Biodiversity Heritage Sites of India | Akal Wood Fossil Park | [
"Biology"
] | 816 | [
"Biodiversity Heritage Sites of India",
"Biodiversity"
] |
54,358,158 | https://en.wikipedia.org/wiki/Mouse%20ear%20swelling%20test | The mouse ear swelling test is a toxicological test that aims to mimic human skin reactions to chemicals. It avoids post-mortem examination of tested animals.
References
See also
Local lymph node assay
Draize test
Freund's Complete Adjuvant
Toxicology
Allergology | Mouse ear swelling test | [
"Environmental_science"
] | 62 | [
"Toxicology",
"Toxicology stubs"
] |
54,358,982 | https://en.wikipedia.org/wiki/Hydrazobenzene | Hydrazobenzene (1,2-diphenylhydrazine) is an aromatic organic compound consisting of two aniline groups joined via their nitrogen atoms. It is an important industrial chemical used in the manufacture of dyes, pharmaceuticals, and hydrogen peroxide.
References
Hydrazines
Anilines | Hydrazobenzene | [
"Chemistry"
] | 65 | [
"Functional groups",
"Hydrazines"
] |
54,359,975 | https://en.wikipedia.org/wiki/Megasporoporiella | Megasporoporiella is a genus of five species of white rot poroid crust fungi in the family Polyporaceae. It was circumscribed in 2013 as a segregate genus of Megasporoporia. Characteristics of the genus include a dimitic hyphal structure, and skeletal hyphae that are weakly to moderately dextrinoid. Megasporoporiella has a primarily temperate distribution. The generic name refers to its similarity to Megasporoporia.
Species
Megasporoporiella cavernulosa (Berk.) B.K.Cui, Y.C.Dai & Hai J. Li (2013)
Megasporoporiella lacerata B.K.Cui & Hai J.Li (2013)
Megasporoporiella pseudocavernulosa B.K.Cui & Hai J.Li (2013)
Megasporoporiella rhododendri (Y.C.Dai & Y.L.Wei) B.K. Cui & Hai J.Li (2013)
Megasporoporiella subcavernulosa (Y.C.Dai & Sheng H.Wu) B.K. Cui & Hai J.Li (2013)
References
Polyporaceae
Polyporales genera
Fungi described in 2013
Taxa named by Yu-Cheng Dai
Taxa named by Bao-Kai Cui
Fungus species | Megasporoporiella | [
"Biology"
] | 292 | [
"Fungi",
"Fungus species"
] |
54,360,045 | https://en.wikipedia.org/wiki/Megasporoporia%20minor | Megasporoporia minor is a species of crust fungus in the family Polyporaceae. Found in China, it was described as a new species in 2013 by mycologists Bao-Kai Cui and Hai-Jiao Li. The type collection was made in Daweishan Forest Park, Yunnan, where the fungus was found growing on a fallen angiosperm branch. It is distinguished from other species of Megasporoporia by its relatively small pores (numbering 6–7 per millimetre) and small spores (measuring 6–7.8 by 2.6–4 μm); it is these features for which the fungus is named.
References
Fungi of China
Fungi described in 2013
Polyporaceae
Taxa named by Bao-Kai Cui
Fungus species | Megasporoporia minor | [
"Biology"
] | 160 | [
"Fungi",
"Fungus species"
] |
54,360,101 | https://en.wikipedia.org/wiki/Megasporoporia%20bannaensis | Megasporoporia bannaensis is a species of white rot crust fungus in the family Polyporaceae. Found in China, it was described as a new species in 2013 by mycologists Bao-Kai Cui and Hai-Jiao Li. The type was collected in Sanchahe Nature Reserve, Yunnan, where it was found growing on a fallen angiosperm branch. It is characterized by its relatively large pores (numbering 1–2 per millimetre), the unbranched skeletal hyphae, and long, thin hyphal plugs in the hymenium. Its spores measure 10–14 by 3.9–4.6 μm. The specific epithet bannaensis refers to the type locality (Xishuang-Banna).
References
Fungi of China
Fungi described in 2013
Polyporaceae
Taxa named by Bao-Kai Cui
Fungus species | Megasporoporia bannaensis | [
"Biology"
] | 184 | [
"Fungi",
"Fungus species"
] |
54,360,159 | https://en.wikipedia.org/wiki/Megasporoporia%20minuta | Megasporoporia minuta is a species of crust fungus in the family Polyporaceae. Found in the Guangxi Autonomous Region of southern China, it was described as a new species in 2008 by mycologists Xu-Shen Zhou and Yu-Cheng Dai. The fungus produces annual to biennial fruit bodies with small pores, numbering 6–8 per millimetre. The spores are cylindrical to oblong-ellipsoid and measure 7.7–9.7 by 3.6–4.9 μm. The hymenium lacks both hyphal pegs and dendrohyphidia.
References
Fungi of China
Fungi described in 2008
Polyporaceae
Taxa named by Yu-Cheng Dai
Fungus species | Megasporoporia minuta | [
"Biology"
] | 150 | [
"Fungi",
"Fungus species"
] |
54,360,773 | https://en.wikipedia.org/wiki/NGC%207033 | NGC 7033 is a lenticular galaxy located about 390 million light-years away in the constellation of Pegasus. It is part of a pair of galaxies that contains the nearby galaxy NGC 7034. NGC 7033 was discovered by astronomer Albert Marth on September 17, 1863.
On July 2, 2016, a Type Ia supernova designated SN 2016cyt was discovered in NGC 7033. It had a maximum apparent magnitude of 18.0.
See also
NGC 7007
NGC 7302
References
External links
Lenticular galaxies
Pegasus (constellation)
7033
66228
Astronomical objects discovered in 1863 | NGC 7033 | [
"Astronomy"
] | 120 | [
"Pegasus (constellation)",
"Constellations"
] |
54,361,643 | https://en.wikipedia.org/wiki/Hyperparameter%20optimization | In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process, which must be configured before the process starts.
Hyperparameter optimization determines the set of hyperparameters that yields an optimal model, one which minimizes a predefined loss function on a given data set. The objective function takes a set of hyperparameters and returns the associated loss. Cross-validation is often used to estimate this generalization performance, and thereby to select the hyperparameter values that maximize it.
Approaches
Grid search
The traditional method for hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set or by evaluation on a held-out validation set.
Since the parameter space of a machine learner may include real-valued or unbounded value spaces for certain parameters, manually set bounds and discretization may be necessary before applying grid search.
For example, a typical soft-margin SVM classifier equipped with an RBF kernel has at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constant C and a kernel hyperparameter γ. Both parameters are continuous, so to perform grid search, one selects a finite set of "reasonable" values for each.
Grid search then trains an SVM with each pair (C, γ) in the Cartesian product of these two sets and evaluates their performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair). Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure.
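The procedure can be sketched in a few lines of Python. The candidate value sets and the `validation_score` function below are hypothetical stand-ins for a real cross-validated SVM evaluation:

```python
from itertools import product

# Hypothetical candidate sets; real choices come from domain knowledge.
C_values = [0.1, 1.0, 10.0, 100.0]
gamma_values = [0.001, 0.01, 0.1, 1.0]

def validation_score(C, gamma):
    # Stand-in for training an SVM with (C, gamma) and scoring it on a
    # held-out validation set (or by internal cross-validation).
    return -((C - 10.0) ** 2 / 100.0 + (gamma - 0.01) ** 2)

def grid_search(scorer, *value_sets):
    # Exhaustively evaluate every combination in the Cartesian product
    # and return the best-scoring configuration.
    best_config, best_score = None, float("-inf")
    for config in product(*value_sets):
        score = scorer(*config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best_config, best_score = grid_search(validation_score, C_values, gamma_values)
```

Because each (C, γ) evaluation is independent, the loop body parallelizes trivially, which is the "embarrassingly parallel" property noted below.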
Grid search suffers from the curse of dimensionality, but is often embarrassingly parallel because the hyperparameter settings it evaluates are typically independent of each other.
Random search
Random search replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be applied directly to the discrete setting described above, but also generalizes to continuous and mixed spaces. A benefit over grid search is that random search can explore many more values than grid search could for continuous hyperparameters. It can outperform grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm. In this case, the optimization problem is said to have a low intrinsic dimensionality. Random search is also embarrassingly parallel, and additionally allows the inclusion of prior knowledge by specifying the distribution from which to sample. Despite its simplicity, random search remains one of the important baselines against which to compare the performance of new hyperparameter optimization methods.
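A minimal sketch in Python, assuming hypothetical log-uniform sampling distributions and a dummy scoring function in place of a real cross-validated evaluation. Only one of the two sampled hyperparameters affects the score, mimicking a problem with low intrinsic dimensionality:

```python
import random

random.seed(0)  # deterministic for illustration

def sample_config():
    # Hypothetical prior distributions: log-uniform for both hyperparameters.
    C = 10 ** random.uniform(-2, 3)
    gamma = 10 ** random.uniform(-4, 1)
    return C, gamma

def validation_score(C, gamma):
    # Stand-in for a real cross-validated evaluation; only C matters here,
    # so the problem has low intrinsic dimensionality.
    return -abs(C - 10.0)

def random_search(scorer, n_trials=50):
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config()
        score = scorer(*config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best_config, best_score = random_search(validation_score)
```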
Bayesian optimization
Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum. It tries to balance exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected close to the optimum). In practice, Bayesian optimization has been shown to obtain better results in fewer evaluations compared to grid search and random search, due to the ability to reason about the quality of experiments before they are run.
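The explore/exploit loop can be illustrated with a deliberately crude surrogate: the nearest observed value serves as the mean prediction and the distance to the nearest observation proxies for uncertainty. A real implementation would use a probabilistic model such as a Gaussian process; the objective and all constants below are hypothetical:

```python
import math
import random

random.seed(1)

def objective(x):
    # Hypothetical black-box validation score to maximize.
    return math.exp(-(x - 0.7) ** 2 / 0.1)

candidates = [i / 100 for i in range(101)]  # discretized hyperparameter range

def ucb_acquisition(x, observations, kappa=1.0):
    # Crude surrogate: nearest observed value = mean prediction; distance to
    # the nearest observation = uncertainty proxy.
    nearest_x, nearest_y = min(observations, key=lambda o: abs(o[0] - x))
    uncertainty = abs(nearest_x - x)
    return nearest_y + kappa * uncertainty  # exploitation + exploration

observations = [(x, objective(x)) for x in random.sample(candidates, 3)]
for _ in range(15):
    # Pick the most promising candidate under the current model, evaluate it,
    # then update the model with the new observation.
    x = max(candidates, key=lambda c: ucb_acquisition(c, observations))
    observations.append((x, objective(x)))

best_x, best_y = max(observations, key=lambda o: o[1])
```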
Gradient-based optimization
For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using gradient descent. The first usage of these techniques was focused on neural networks. Since then, these methods have been extended to other models such as support vector machines or logistic regression.
A different approach to obtaining a gradient with respect to hyperparameters consists of differentiating the steps of an iterative optimization algorithm using automatic differentiation. More recent work along this direction uses the implicit function theorem to calculate hypergradients and proposes a stable approximation of the inverse Hessian. The method scales to millions of hyperparameters and requires constant memory.
In a different approach, a hypernetwork is trained to approximate the best response function. One of the advantages of this method is that it can handle discrete hyperparameters as well. Self-tuning networks offer a memory efficient version of this approach by choosing a compact representation for the hypernetwork. More recently, Δ-STN has improved this method further by a slight reparameterization of the hypernetwork which speeds up training. Δ-STN also yields a better approximation of the best-response Jacobian by linearizing the network in the weights, hence removing unnecessary nonlinear effects of large changes in the weights.
Apart from hypernetwork approaches, gradient-based methods can be used to optimize discrete hyperparameters also by adopting a continuous relaxation of the parameters. Such methods have been extensively used for the optimization of architecture hyperparameters in neural architecture search.
Evolutionary optimization
Evolutionary optimization is a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization uses evolutionary algorithms to search the space of hyperparameters for a given algorithm. Evolutionary hyperparameter optimization follows a process inspired by the biological concept of evolution:
Create an initial population of random solutions (i.e., randomly generate tuples of hyperparameters, typically 100+)
Evaluate the hyperparameter tuples and acquire their fitness function (e.g., 10-fold cross-validation accuracy of the machine learning algorithm with those hyperparameters)
Rank the hyperparameter tuples by their relative fitness
Replace the worst-performing hyperparameter tuples with new ones generated via crossover and mutation
Repeat steps 2-4 until satisfactory algorithm performance is reached or is no longer improving.
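The steps above can be sketched as a small genetic algorithm in Python. The `fitness` function is a hypothetical stand-in for cross-validated accuracy, and the population size is kept small for illustration:

```python
import random

random.seed(2)

def fitness(config):
    # Stand-in for e.g. 10-fold cross-validation accuracy with the
    # hyperparameters (learning_rate, regularization).
    lr, reg = config
    return 1.0 - (lr - 0.1) ** 2 - (reg - 0.01) ** 2

def random_config():
    return (random.uniform(0, 1), random.uniform(0, 1))

def crossover(a, b):
    # Offspring takes one coordinate from each parent.
    return (a[0], b[1])

def mutate(config, scale=0.05):
    return tuple(min(1.0, max(0.0, v + random.gauss(0, scale))) for v in config)

# Step 1: initial random population (kept small here; typically 100+).
population = [random_config() for _ in range(20)]
for _ in range(30):
    # Steps 2-3: evaluate and rank the tuples by fitness.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Step 4: replace the worst half with crossover + mutation offspring.
    offspring = [mutate(crossover(random.choice(survivors),
                                  random.choice(survivors)))
                 for _ in range(10)]
    population = survivors + offspring

best = max(population, key=fitness)
```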
Evolutionary optimization has been used in hyperparameter optimization for statistical machine learning algorithms, automated machine learning, typical neural network and deep neural network architecture search, as well as training of the weights in deep neural networks.
Population-based
Population Based Training (PBT) learns both hyperparameter values and network weights. Multiple learning processes operate independently, using different hyperparameters. As with evolutionary methods, poorly performing models are iteratively replaced with models that adopt modified hyperparameter values and weights based on the better performers. This warm-starting of the replacement models is the primary differentiator between PBT and other evolutionary methods. PBT thus allows the hyperparameters to evolve and eliminates the need for manual hypertuning. The process makes no assumptions regarding model architecture, loss functions or training procedures.
PBT and its variants are adaptive methods: they update hyperparameters during the training of the models. By contrast, non-adaptive methods follow the sub-optimal strategy of assigning a constant set of hyperparameters for the whole training.
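A toy Python sketch of the exploit/explore cycle: workers train independently, and every few steps the worst performers warm-start from a top performer's weights and hyperparameters, perturbing the copied learning rate. The one-parameter "model" is a hypothetical stand-in for a real network:

```python
import random

random.seed(3)

class Worker:
    def __init__(self, lr):
        self.lr = lr      # hyperparameter being tuned
        self.theta = 0.0  # "network weight"

    def train_step(self):
        # Toy training: a gradient step pulling theta toward 1.
        self.theta += self.lr * (1.0 - self.theta)

    def score(self):
        return -abs(1.0 - self.theta)

workers = [Worker(lr=random.uniform(0.01, 0.5)) for _ in range(8)]
for step in range(40):
    for w in workers:
        w.train_step()
    if step % 10 == 9:
        # Exploit: bottom performers warm-start from a top performer's
        # weights AND hyperparameters; explore: perturb the copied lr.
        workers.sort(key=Worker.score, reverse=True)
        for loser, winner in zip(workers[-2:], workers[:2]):
            loser.theta = winner.theta
            loser.lr = winner.lr * random.choice([0.8, 1.2])

best = max(workers, key=Worker.score)
```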
Early stopping-based
A class of early stopping-based hyperparameter optimization algorithms is purpose-built for large search spaces of continuous and discrete hyperparameters, particularly when the computational cost of evaluating a set of hyperparameters is high. Irace implements the iterated racing algorithm, which focuses the search around the most promising configurations, using statistical tests to discard those that perform poorly.
Another early stopping hyperparameter optimization algorithm is successive halving (SHA), which begins as a random search but periodically prunes low-performing models, thereby focusing computational resources on more promising models. Asynchronous successive halving (ASHA) further improves upon SHA's resource utilization profile by removing the need to synchronously evaluate and prune low-performing models. Hyperband is a higher level early stopping-based algorithm that invokes SHA or ASHA multiple times with varying levels of pruning aggressiveness, in order to be more widely applicable and with fewer required inputs.
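A minimal sketch of successive halving, assuming a hypothetical `eval_with_budget` function in place of actually training a model for a given number of epochs:

```python
import random

random.seed(4)

def eval_with_budget(config, budget):
    # Stand-in for training a model with `config` for `budget` epochs and
    # returning a noisy validation score; better configs improve faster.
    quality = -abs(config - 0.3)
    return quality * (1 - 1 / (budget + 1)) + random.gauss(0, 0.01)

def successive_halving(configs, min_budget=1, rounds=3):
    survivors = list(configs)
    budget = min_budget
    for _ in range(rounds):
        # Evaluate all survivors at the current budget, keep the top half,
        # then double the budget for the next round.
        scored = sorted(survivors,
                        key=lambda c: eval_with_budget(c, budget),
                        reverse=True)
        survivors = scored[:max(1, len(scored) // 2)]
        budget *= 2
    return survivors[0]

configs = [random.uniform(0, 1) for _ in range(16)]
best = successive_halving(configs)
```

Hyperband, mentioned above, would call a routine like this several times with different trade-offs between the starting number of configurations and the per-round pruning aggressiveness.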
Others
RBF and spectral approaches have also been developed.
Issues with hyperparameter optimization
When hyperparameter optimization is done, the hyperparameters are typically fitted on a training set and selected based on the generalization performance, or score, of a validation set. However, this procedure risks overfitting the hyperparameters to the validation set. Therefore, the generalization score on the validation set (which can be several sets in the case of a cross-validation procedure) cannot simultaneously be used to estimate the generalization performance of the final model. To do so, the generalization performance has to be evaluated on a set independent of (i.e., disjoint from) the set or sets used to optimize the hyperparameters; otherwise, the estimate may be overly optimistic (too large). This can be done on a second test set, or through an outer cross-validation procedure called nested cross-validation, which allows an unbiased estimation of the generalization performance of the model that takes into account the bias due to hyperparameter optimization.
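An index-based Python sketch of nested cross-validation; `fit_score` is a hypothetical stand-in for training and scoring a model, and hyperparameter selection in the inner loop never sees the outer test fold:

```python
def k_fold_indices(n, k):
    # Yield (train, test) index lists for k folds over n samples.
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def fit_score(hyper, train, test):
    # Hypothetical stand-in for fitting on `train` and scoring on `test`;
    # here hyper = 0.5 generalizes best regardless of the split.
    return 1.0 - abs(hyper - 0.5) - 0.001 * len(test) / (len(train) + len(test))

def nested_cv(n, hypers, outer_k=5, inner_k=3):
    outer_scores = []
    for outer_train, outer_test in k_fold_indices(n, outer_k):
        # Inner loop: choose hyperparameters using ONLY the outer-train data.
        def inner_score(h):
            return sum(fit_score(h, tr, te)
                       for tr, te in k_fold_indices(len(outer_train), inner_k))
        best_h = max(hypers, key=inner_score)
        # Outer loop: score on data never seen during hyperparameter selection.
        outer_scores.append(fit_score(best_h, outer_train, outer_test))
    return sum(outer_scores) / len(outer_scores)

estimate = nested_cv(n=100, hypers=[0.1, 0.3, 0.5, 0.7])
```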
See also
Automated machine learning
Neural architecture search
Meta-optimization
Model selection
Self-tuning
XGBoost
References
Machine learning
Mathematical optimization
Model selection | Hyperparameter optimization | [
"Mathematics",
"Engineering"
] | 1,950 | [
"Mathematical optimization",
"Mathematical analysis",
"Machine learning",
"Artificial intelligence engineering"
] |
54,362,288 | https://en.wikipedia.org/wiki/White%20Widow%20%28Cannabis%29 | White Widow is a balanced hybrid strain of Cannabis indica and Cannabis sativa that was created and developed by Shantibaba whilst he worked at the Greenhouse Seed Company.
White Widow has been described as "among the most popular [strains] in the world" by Popular Science magazine. The strain won the Cannabis Cup in 1995.
Related strains
Black Widow – the renamed original White Widow, after Shantibaba moved his genetics to Mr. Nice Seedbank.
White Russian – an indica-dominant hybrid that is a cross of White Widow and AK-47.
Blue Widow – a sativa-dominant (60%) hybrid that is a cross of White Widow and Blueberry.
Moby Dick – a sativa-dominant (60%) hybrid that is a cross of White Widow and Haze.
See also
Cannabis strains
Medical cannabis
Glossary of cannabis terms
List of cannabis strains
References
External links
Cannabis strains | White Widow (Cannabis) | [
"Biology"
] | 185 | [
"Cannabis strains",
"Biopiracy"
] |
54,362,371 | https://en.wikipedia.org/wiki/NGC%207034 | NGC 7034 is an elliptical galaxy located about 380 million light-years away in the constellation of Pegasus. It is part of a pair of galaxies that contains the nearby galaxy NGC 7033. NGC 7034 was discovered by astronomer Albert Marth on September 17, 1863.
See also
NGC 4486 (M87)
List of NGC objects (7001–7840)
References
External links
Elliptical galaxies
Pegasus (constellation)
11687
7034
66227
Astronomical objects discovered in 1863 | NGC 7034 | [
"Astronomy"
] | 99 | [
"Pegasus (constellation)",
"Constellations"
] |
54,364,250 | https://en.wikipedia.org/wiki/List%20of%20Kiosk%20software | This is a list of kiosk software. The list includes kiosk-exclusive software as well as mobile device management (MDM) software with kiosk features.
General information
References
Kiosks
Mobile device management
Windows security software
Kiosk | List of Kiosk software | [
"Technology"
] | 53 | [
"Computing-related lists",
"Lists of software"
] |
54,364,299 | https://en.wikipedia.org/wiki/Anti-tunnel%20barrier%20along%20the%20Gaza%E2%80%93Israel%20border | The anti-tunnel barrier along the Gaza–Israel border (sometimes referred to as the smart wall on the Israel–Gaza border) is an underground slurry wall constructed by Israel along the entire length of the Gaza–Israel border to prevent infiltration into Israel by digging tunnels under the Gaza–Israel barrier. The project includes excavation to classified depths, and the construction of thick concrete walls combined with sensors and alarm devices.
The underground anti-tunnel barrier, and 81% of the barrier above the ground, was completed in March 2021. The whole project was completed in December 2021. The project had been estimated to cost 3 billion shekels ($833 million) to 3.5 billion shekels ($1.11 billion). It is located entirely on Israeli land.
Background
Because of the effectiveness of the Israel–Gaza barrier in stopping infiltration of Israel by Palestinian militants, Hamas adopted a strategy of digging tunnels under the barrier. On 25 June 2006, Palestinians used an 800-metre tunnel dug over a period of months to infiltrate Israel. They attacked a patrolling Israeli armored unit, killed two Israeli soldiers, and captured another one, Gilad Shalit.
Between January and October 2013, three other tunnels were identified – two of which were packed with explosives.
During the 2014 Gaza War, Israel encountered Hamas militants who emerged from tunnels into Israel and attacked soldiers along the border. After the war, Israel located and destroyed 32 tunnels. In 2018, Israel destroyed three new tunnels.
Underground barrier
The anti-tunnel barrier was constructed in response to the large number of tunnels being dug by Hamas, which could only be of use for infiltration by militants. In mid-2017, Israel began construction of the underground wall several metres in depth. The barrier is equipped with sensors that can detect tunnel construction.
In October 2020, sensors in the underground structure identified a Hamas tunnel. An Israeli military official called the tunnel "The most significant tunnel we have seen to date, both in terms of depth and infrastructure".
See also
Gaza–Israel barrier
Palestinian tunnel warfare in the Gaza Strip
Gaza Strip smuggling tunnels
References
Israel–State of Palestine borders
Border barriers
Tunnel warfare
Gaza–Israel conflict
Borders of Israel
Israel–Gaza Strip border | Anti-tunnel barrier along the Gaza–Israel border | [
"Engineering"
] | 444 | [
"Separation barriers",
"Military engineering",
"Border barriers",
"Tunnel warfare"
] |
54,364,429 | https://en.wikipedia.org/wiki/Umbralisib | Umbralisib, sold under the brand name Ukoniq, is an anti-cancer medication for the treatment of marginal zone lymphoma (MZL) and follicular lymphoma (FL). It is taken by mouth.
Umbralisib is a kinase inhibitor including PI3K-delta and casein kinase CK1-epsilon.
The most common side effects include increased creatinine, diarrhea-colitis, fatigue, nausea, neutropenia, transaminase elevation, musculoskeletal pain, anemia, thrombocytopenia, upper respiratory tract infection, vomiting, abdominal pain, decreased appetite, and rash.
Umbralisib was granted accelerated approval for medical use in the United States in February 2021. However, due to concerns that increased long-term side effects were leading to inferior overall survival, which prompted increased FDA scrutiny in the form of an ODAC review, it has been withdrawn from the US market.
Medical uses
In April 2022, TG Therapeutics announced the voluntary withdrawal of Ukoniq (umbralisib) from sale for its approved use in the treatment of marginal zone lymphoma and follicular lymphoma. Furthermore, the company withdrew the pending Biologics License Application (BLA) and supplemental New Drug Application (sNDA) for the treatment of chronic lymphocytic leukemia (CLL) and small lymphocytic lymphoma (SLL), which utilized umbralisib in tandem with ublituximab, known as the "U2" regimen. The decision was based on the overall survival (OS) data from the phase III trial, Unity-CLL, which showed an increasing imbalance in OS.
Umbralisib is indicated for adults with relapsed or refractory marginal zone lymphoma (MZL) who have received at least one prior anti-CD20-based regimen; and adults with relapsed or refractory follicular lymphoma (FL) who have received at least three prior lines of systemic therapy.
Adverse effects
The prescribing information provides warnings and precautions for adverse reactions including infections, neutropenia, diarrhea and non-infectious colitis, hepatotoxicity, and severe cutaneous reactions.
History
It has undergone clinical studies for chronic lymphocytic leukemia (CLL). Three-year data (including follicular lymphoma and DLBCL) was announced in June 2016. It is in combination trials for various leukemias and lymphomas, such as mantle cell lymphoma (MCL) and other lymphomas.
Umbralisib was granted breakthrough therapy designation by the U.S. Food and Drug Administration (FDA) for use in people with marginal zone lymphoma (MZL), a type of cancer with no specifically approved therapies.
FDA approval was based on two single-arm cohorts of an open-label, multi-center, multi-cohort trial, UTX-TGR-205 (NCT02793583), in 69 participants with marginal zone lymphoma (MZL) who received at least one prior therapy, including an anti-CD20 containing regimen, and in 117 participants with follicular lymphoma (FL) after at least two prior systemic therapies. The application for umbralisib was granted priority review for the marginal zone lymphoma (MZL) indication and orphan drug designation for the treatment of MZL and follicular lymphoma (FL).
Society and culture
Legal status
In June 2022, due to safety concerns, the US Food and Drug Administration (FDA) withdrew its approval for Ukoniq (umbralisib).
Updated findings from the UNITY-CLL clinical trial show a possible increased risk of death in people receiving Ukoniq. As a result, the FDA determined the risks of treatment with Ukoniq outweigh its benefits. Based upon this determination, the drug's manufacturer, TG Therapeutics, announced it was voluntarily withdrawing Ukoniq from the market for the approved uses in MZL and FL.
References
External links
Phosphoinositide 3-kinase inhibitors
Cancer treatments
Orphan drugs
Withdrawn drugs | Umbralisib | [
"Chemistry"
] | 908 | [
"Drug safety",
"Withdrawn drugs"
] |
54,364,570 | https://en.wikipedia.org/wiki/Fort%20de%20Montessuy | The Fort de Montessuy is a fort in the first belt of fortifications in Lyon, located in the neighborhood of Montessuy in Caluire-et-Cuire, Rhône, France.
History
Built in 1831, it was linked to Fort de Caluire, its less imposing twin, by an enclosure aligned with île Barbe, protecting Lyon and particularly Croix-Rousse from invaders coming up the road from the Dombes.
From the north bank of the Rhône, it defended the river and the Fort des Brotteaux.
North was considered dangerous, so a large ravelin was built before the fort in this direction, as well as a lunette further out.
When the Germans were leaving Caluire-et-Cuire on 24 August 1944, two children, Jean Turba (1930 - 1944) and Bernadette Choux (1931 - 1944) watched their departure through field-glasses from the fort de Montessuy; soldiers still posted across the Rhône fired on them with machine guns, killing them both. A street in Montessuy was named after the children (allée Turba-et-Choux). On the wall of the école d'Application Jean-Jaurès de Caluire (a public grade school on the place Jules-Ferry opened on October 1, 1933), a plaque commemorates Jean Turba and two 1944 other victims of the Nazis, also former students at the school.
Current use
The fort still exists in Caluire-et-Cuire, in the Montessuy neighborhood, and has been owned by the municipality since 1972. Its moats have been covered over by fill dirt from the excavations for the construction of the new buildings now at the heart of the Montessuy neighborhood. Vegetation is slowly invading the fort. The tops of a few scarps remain visible, emerging from the ground, as well as a "dame", a column of stone that prevented attackers from walking along the top of the enclosure.
It isn't possible to visit the fort, but a few nonprofits have taken up residence in the only surviving building, the barracks, such as AS Caluire - Tir à l'arme de poing or AS Pétanque Caluire. The exterior of the fort has been transformed into green space; there is a skatepark nearby.
See also
Ceintures de Lyon
Fort de Caluire
References
Bibliography
François Dallemagne (photogr. Georges Fessy), Les défenses de Lyon : enceintes et fortifications, Lyon, Éditions Lyonnaises d'Art et d'Histoire, 2006, 255 p. (), p. 124–126
Fortification lines | Fort de Montessuy | [
"Engineering"
] | 549 | [
"Fortification lines"
] |
54,364,819 | https://en.wikipedia.org/wiki/Oil%20regeneration | Oil regeneration is the extraction of contaminants from oil in order to restore its original properties, so that it can be used on a par with fresh oils.
Oil aging
Aging is a result of physical and chemical processes that change oil during storage and use in machines and mechanisms.
The main cause of aging is exposure to high temperatures and contact with air that leads to oxidation, decomposition, polymerization and condensation of hydrocarbons. Another cause of aging is contamination with metal particles, water and dust. Their accumulation leads to buildup of slurries, resinous and asphaltic compounds, coke, soot, various salts and acids in the oils.
The oil in which aging process occurs, cannot fully perform its functions. Therefore, it is either replaced with new oil or regenerated.
Regeneration by physical methods
Physical methods of regeneration do not change the chemical properties of oil. They remove only mechanical impurities (metal particles, sand, dust, as well as tar, asphalt and coke-like substances, water).
Regeneration by physical methods include:
sedimentation. This method is often used as the first stage in regeneration: contamination particles in the oil settle down due to gravity;
centrifugation - separates oil into layers (an oil layer, a rag layer, a water layer) by centrifugal force;
filtration - separates suspensions into clean liquid and wet sediment with the help of filters;
washing with water and dry washing - removes acidic products from oil (water-soluble low-molecular acids, salts of organic acids).
Regeneration by physicochemical methods
Physicochemical methods are based on the use of coagulants and adsorbents. Coagulants promote the coarsening and precipitation of fine-dispersed asphalt-resinous substances in oil. Adsorbents selectively absorb organic and inorganic compounds. These methods remove asphalt and resinous compounds, emulsified and dissolved water from oil. Adsorptive treatment with bleaching clays neutralizes free acid in acid-treated oil, unstable oxidized and sulphurized products as well as traces of sulphonic acid. In addition, clay treatment leads to higher resistance to oil oxidation at high temperatures and increased colour stability. This process is used in clay polishing plants for waste oil re-refining and transformer oil regeneration systems for the reclamation of old transformer oil to as-new condition.
Regeneration by chemical methods
Chemical methods of regeneration remove asphalt, silicic, acidic, some hetero-organic compounds and water from oils. These methods are based on the interaction of contaminating substances in oil with special reagents introduced into them. The compounds formed as a result of these chemical reactions are then easily removed from oil. Chemical methods include acid and alkaline refining, drying with calcium sulphate or reduction with metal hydrides.
Choice of methods of regeneration
In practice, it is difficult to achieve complete regeneration of oil using only one method. Therefore, a combination of different approaches is often used.
See also
Oil purification
References
External links
Oils
Recycling | Oil regeneration | [
"Chemistry"
] | 618 | [
"Oils",
"Carbohydrates"
] |
54,364,879 | https://en.wikipedia.org/wiki/Wigner%20fusion | The Wigner fusion research groups are involved in magnetically confined nuclear fusion experiments around the world. Wigner fusion consists of research groups from four different research institutes and universities: three are located in the Department of Plasma Physics at the Wigner Research Centre for Physics, and one in the Institute of Nuclear Techniques (INT) of the Budapest University of Technology and Economics. Other specialists are involved from the Centre for Energy Research and from the Institute for Nuclear Research of the Hungarian Academy of Sciences, under the coordination of the Wigner Research Centre for Physics. Wigner fusion is connected to the European fusion research programme through the EUROfusion consortium, which coordinates fusion research in Europe. At Wigner fusion, more than 40 researchers, engineers and technicians work together in these research groups, which are involved in more than half a dozen magnetic confinement experiments around the world, such as ITER, JET, Asdex-Upgrade, W7-X, KSTAR, EAST, MAST-Upgrade and COMPASS.
The research groups of Wigner fusion:
Pellet and Video Diagnostics Group, Wigner RCP
ITER and Fusion Diagnostics Group, Wigner RCP
Beam Emission Spectroscopy Group, Wigner RCP
Fusion Research Group, BME NTI
References
Europe launches EUROfusion to make fusion energy a reality - Horizon 2020 Projects
Specific
External links
Wigner Research Centre for Physics
Centre for Energy Research
Institute for Nuclear Research
Hungarian fusion community website
EUROfusion website
See also
EUROfusion
Nuclear fusion
Fusion power
Research institutes | Wigner fusion | [
"Physics",
"Chemistry"
] | 292 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
54,365,238 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Tetali%20theorem | In additive number theory, an area of mathematics, the Erdős–Tetali theorem is an existence theorem concerning economical additive bases of every order. More specifically, it states that for every fixed integer , there exists a subset of the natural numbers satisfying
where denotes the number of ways that a natural number n can be expressed as the sum of h elements of B.
The theorem is named after Paul Erdős and Prasad V. Tetali, who published it in 1990.
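In the usual notation, with $r_{B,h}(n)$ denoting the number of representations of $n$ as a sum of $h$ elements of $B$, the theorem is commonly stated as follows:

```latex
% Erdős–Tetali theorem: economical bases exist for every order.
\text{For every fixed } h \ge 2 \text{ there exists } B \subseteq \mathbb{N}
\text{ such that } r_{B,h}(n) \asymp \log n,
% i.e., there are constants c_1, c_2 > 0 with
c_1 \log n \;\le\; r_{B,h}(n) \;\le\; c_2 \log n
\qquad \text{for all sufficiently large } n.
```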
Motivation
The original motivation for this result is attributed to a problem posed by S. Sidon in 1932 on economical bases. An additive basis is called economical (or sometimes thin) when it is an additive basis of order h and
for every . In other words, these are additive bases that use as few numbers as possible to represent a given n, and yet represent every natural number. Related concepts include -sequences and the Erdős–Turán conjecture on additive bases.
Sidon's question was whether an economical basis of order 2 exists. A positive answer was given by P. Erdős in 1956, settling the case of the theorem. Although the general version was believed to be true, no complete proof appeared in the literature before the paper by Erdős and Tetali.
Ideas in the proof
The proof is an instance of the probabilistic method, and can be divided into three main steps. First, one starts by defining a random sequence by
where is some large real constant, is a fixed integer and n is sufficiently large so that the above formula is well-defined. A detailed discussion on the probability space associated with this type of construction may be found on Halberstam & Roth (1983). Secondly, one then shows that the expected value of the random variable has the order of log. That is,
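The first two steps can be summarized as follows (hedged: this is the form of the construction commonly given in the literature; the constant $C$ is left unspecified):

```latex
% Membership probabilities for the random set B:
\Pr[\, n \in B \,] \;=\; C \left( \frac{\log n}{n^{\,h-1}} \right)^{1/h}
% Summing the product of these probabilities over all ways of writing
% n = m_1 + \cdots + m_h gives the claimed expectation:
\mathbb{E}\big[ r_{B,h}(n) \big] \;\asymp\; \log n .
```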
Finally, one shows that almost surely concentrates around its mean. More explicitly:
This is the critical step of the proof. Originally it was dealt with by means of Janson's inequality, a type of concentration inequality for multivariate polynomials. Tao & Vu (2006) present this proof with a more sophisticated two-sided concentration inequality by V. Vu (2000), thus relatively simplifying this step. Alon & Spencer (2016) classify this proof as an instance of the Poisson paradigm.
Relation to the Erdős–Turán conjecture on additive bases
The original Erdős–Turán conjecture on additive bases states, in its most general form, that if is an additive basis of order h then
that is, cannot be bounded. In his 1956 paper, P. Erdős asked whether it could be the case that
whenever is an additive basis of order 2. In other words, this is saying that is not only unbounded, but that no function smaller than log can dominate . The question naturally extends to , making it a stronger form of the Erdős–Turán conjecture on additive bases. In a sense, what is being conjectured is that there are no additive bases substantially more economical than those guaranteed to exist by the Erdős–Tetali theorem.
Further developments
Computable economical bases
All the known proofs of Erdős–Tetali theorem are, by the nature of the infinite probability space used, non-constructive proofs. However, Kolountzakis (1995) showed the existence of a recursive set satisfying such that takes polynomial time in n to be computed. The question for remains open.
Economical subbases
Given an arbitrary additive basis , one can ask whether there exists such that is an economical basis. V. Vu (2000) showed that this is the case for Waring bases , where for every fixed k there are economical subbases of of order for every , for some large computable constant .
Growth rates other than log
Another possible question is whether similar results apply for functions other than log. That is, fixing an integer , for which functions f can we find a subset of the natural numbers satisfying ? It follows from a result of C. Táfula (2019) that if f is a locally integrable, positive real function satisfying
, and
for some ,
then there exists an additive basis of order h which satisfies . The minimal case recovers Erdős–Tetali's theorem.
See also
Erdős–Fuchs theorem: For any non-zero , there is no set which satisfies .
Erdős–Turán conjecture on additive bases: If is an additive basis of order 2, then .
Waring's problem, the problem of representing numbers as sums of k-powers, for fixed .
References
Erdős, P.; Tetali, P. (1990). "Representations of integers as the sum of k terms". Random Structures & Algorithms. 1 (3): 245–261.
Halberstam, H.; Roth, K. F. (1983). Sequences. Springer New York.
Alon, N.; Spencer, J. (2016). The probabilistic method (4th ed.). Wiley.
Tao, T.; Vu, V. (2006). Additive combinatorics. Cambridge University Press.
Theorems in number theory
Theorems in combinatorics
Additive number theory
Tetali theorem | Erdős–Tetali theorem | [
"Mathematics"
] | 1,072 | [
"Mathematical theorems",
"Theorems in combinatorics",
"Combinatorics",
"Theorems in discrete mathematics",
"Theorems in number theory",
"Mathematical problems",
"Number theory"
] |
51,500,017 | https://en.wikipedia.org/wiki/Design%20for%20additive%20manufacturing | Design for additive manufacturing (DfAM or DFAM) is design for manufacturability as applied to additive manufacturing (AM). It is a general type of design methods or tools whereby functional performance and/or other key product life-cycle considerations such as manufacturability, reliability, and cost can be optimized subjected to the capabilities of additive manufacturing technologies.
This concept emerges due to the enormous design freedom provided by AM technologies. To take full advantage of the unique capabilities of AM processes, DfAM methods or tools are needed. Typical DfAM methods or tools include topology optimization, design for multiscale structures (lattice or cellular structures), multi-material design, mass customization, part consolidation, and other design methods which can make use of AM-enabled features.
DfAM is not always separate from broader DFM, as the making of many objects can involve both additive and subtractive steps. Nonetheless, the name "DfAM" has value because it focuses attention on the way that commercializing AM in production roles is not just a matter of figuring out how to switch existing parts from subtractive to additive. Rather, it is about redesigning entire objects (assemblies, subsystems) in view of the newfound availability of advanced AM. That is, it involves redesigning them because their entire earlier design—including even how, why, and at which places they were originally divided into discrete parts—was conceived within the constraints of a world where advanced AM did not yet exist. Thus instead of just modifying an existing part design to allow it to be made additively, full-fledged DfAM involves things like reimagining the overall object such that it has fewer parts or a new set of parts with substantially different boundaries and connections. The object thus may no longer be an assembly at all, or it may be an assembly with many fewer parts. Many examples of such deep-rooted practical impact of DfAM have been emerging in the 2010s, as AM greatly broadens its commercialization. For example, in 2017, GE Aviation revealed that it had used DfAM to create a helicopter engine with 16 parts instead of 900, with great potential impact on reducing the complexity of supply chains. It is this radical rethinking aspect that has led to themes such as that "DfAM requires 'enterprise-level disruption'." In other words, the disruptive innovation that AM can allow can logically extend throughout the enterprise and its supply chain, not just change the layout on a machine shop floor.
DfAM involves both broad themes (which apply to many AM processes) and optimizations specific to a particular AM process. For example, DFM analysis for stereolithography maximizes DfAM for that modality.
Background
Additive manufacturing is defined as a material joining process, whereby a product can be directly fabricated from its 3D model, usually layer upon layer. Compared to traditional manufacturing technologies such as CNC machining or casting, AM processes have several unique capabilities. They enable the fabrication of parts with complex shapes as well as complex material distributions. These unique capabilities significantly enlarge the design freedom for designers. However, they also bring a big challenge. Traditional design for manufacturing (DFM) rules or guidelines are deeply rooted in designers' minds and severely restrict designers from further improving product functional performance by taking advantage of these unique capabilities brought by AM processes. Moreover, traditional feature-based CAD tools also have difficulty dealing with irregular geometry for the improvement of functional performance. To solve these issues, design methods or tools are needed to help designers take full advantage of the design freedom provided by AM processes. These design methods or tools can be categorized as design for additive manufacturing.
Methods
Topology optimization
Topology optimization is a type of structural optimization technique which can optimize material layout within a given design space. Compared to other typical structural optimization techniques, such as size optimization or shape optimization, topology optimization can update both the shape and the topology of a part. However, the complex optimized shapes obtained from topology optimization are often difficult to handle for traditional manufacturing processes such as CNC machining. To solve this issue, additive manufacturing processes can be applied to fabricate topology optimization results. However, it should be noted that some manufacturing constraints, such as minimal feature size, also need to be considered during the topology optimization process. Since topology optimization can help designers obtain an optimal complex geometry for additive manufacturing, this technique can be considered one of the DfAM methods.
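As a toy illustration of the core idea (not any particular published algorithm), the following sketch distributes a fixed material budget to the regions where a hypothetical sensitivity measure says material helps most; real topology optimization methods such as SIMP instead iterate density updates coupled with finite-element analysis:

```python
def toy_material_layout(sensitivities, volume_fraction):
    """Toy illustration of the idea behind topology optimization:
    place material only where it helps the objective most, subject to
    a material-budget (volume) constraint. Real methods such as SIMP
    couple density updates with finite-element analysis; this greedy
    ranking is only a sketch."""
    n_keep = round(volume_fraction * len(sensitivities))
    # Rank candidate regions by how much material there helps the objective.
    keep = sorted(range(len(sensitivities)),
                  key=lambda i: sensitivities[i], reverse=True)[:n_keep]
    keep = set(keep)
    return [1.0 if i in keep else 0.0 for i in range(len(sensitivities))]

# Hypothetical sensitivities for four regions; keep material in half of them.
layout = toy_material_layout([0.1, 0.9, 0.5, 0.2], volume_fraction=0.5)
# layout == [0.0, 1.0, 1.0, 0.0]: material kept in the two most
# load-effective regions
```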
Multiscale structure design
Due to the unique capabilities of AM processes, parts with multiscale complexities can be realized. This provides great design freedom for designers to use cellular or lattice structures on the micro- or meso-scale to obtain preferred properties. For example, in the aerospace field, lattice structures fabricated by AM processes can be used for weight reduction. In the bio-medical field, bio-implants made of lattice or cellular structures can enhance osseointegration.
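The weight-versus-stiffness trade-off behind such lattice designs can be sketched with the classical Gibson–Ashby scaling law for bending-dominated cellular solids (E/E_s ≈ C · (ρ/ρ_s)^2); the constant and the 20% density figure below are illustrative assumptions, not measured properties of any specific AM lattice:

```python
def lattice_relative_stiffness(rel_density, c=1.0, exponent=2.0):
    """Gibson-Ashby scaling for bending-dominated cellular solids:
    E_lattice / E_solid ~= c * (rho_lattice / rho_solid) ** exponent.
    c and exponent here are illustrative defaults."""
    if not 0.0 <= rel_density <= 1.0:
        raise ValueError("relative density must lie in [0, 1]")
    return c * rel_density ** exponent

# Illustrative numbers: a lattice at 20% relative density saves 80% of
# the mass while retaining roughly 4% of the solid material's stiffness,
# which is why lattices suit applications with modest stiffness demands.
mass_saving = 1.0 - 0.20
rel_stiffness = lattice_relative_stiffness(0.20)  # ~0.04
```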
Multi-material design
Parts with multiple materials or a complex material distribution can be achieved by additive manufacturing processes. To help designers take advantage of this capability, several design and simulation methods have been proposed to support the design of a part with multiple materials or functionally graded materials. These design methods also bring a challenge to traditional CAD systems, most of which can currently only deal with homogeneous materials.
Design for mass customization
Since additive manufacturing can directly fabricate parts from a product's digital model, it significantly reduces the cost and lead time of producing customized products. Thus, how to rapidly generate customized parts becomes a central issue for mass customization. Several design methods have been proposed to help designers or users obtain a customized product in an easy way. These methods or tools can also be considered DfAM methods.
Parts consolidation
Due to the constraints of traditional manufacturing methods, some complex components are usually separated into several parts for ease of manufacturing as well as assembly. This situation has changed with the use of additive manufacturing technologies. Case studies have shown that some parts in an original design can be consolidated into one complex part and fabricated by additive manufacturing processes. This redesigning process can be called parts consolidation. Research shows that parts consolidation will not only reduce part count but can also improve product functional performance. The design methods which can guide designers to do part consolidation can also be regarded as a type of DfAM method.
Lattice structures
Lattice structures are a type of open cellular structure. These structures were previously difficult to manufacture and hence were not widely used. Thanks to the free-form manufacturing capability of additive manufacturing technology, it is now possible to design and manufacture complex forms. Lattice structures combine high strength with low mass, and offer multifunctionality. These structures can be found in parts in the aerospace and biomedical industries. It has been observed that these lattice structures mimic atomic crystal lattices, where the nodes and struts represent atoms and atomic bonds, respectively, and they have been termed meta-crystals. They obey metallurgical hardening principles (grain boundary strengthening, precipitate hardening, etc.) when undergoing deformation. It has been further reported that the yield strength and ductility of the struts (meta-atomic bonds) can be increased drastically by taking advantage of the non-equilibrium solidification phenomenon in additive manufacturing, thus increasing the performance of the bulk structures.
Thermal issues in design
For AM processes that use heat to fuse powder or feedstock, process consistency and part quality are strongly influenced by the temperature history inside the part during manufacture, especially for metal AM.
Thermal modelling can be used to inform part design and the choice of process parameters for manufacture, in place of expensive empirical testing.
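As a minimal sketch of what such thermal modelling involves, here is a one-dimensional explicit finite-difference step for the heat equation; the diffusivity and grid values are arbitrary illustrative numbers, and production AM thermal solvers are far more elaborate (3D, moving heat source, phase change):

```python
def heat_step_1d(temps, alpha, dx, dt):
    """One explicit (FTCS) update of the 1D heat equation
    dT/dt = alpha * d2T/dx2 with fixed (Dirichlet) end temperatures.
    Stable only when alpha * dt / dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    if r > 0.5:
        raise ValueError("time step too large for explicit scheme")
    new = temps[:]  # end points stay fixed (Dirichlet boundaries)
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + r * (temps[i + 1] - 2 * temps[i] + temps[i - 1])
    return new

# A hot spot (e.g. freshly fused material) diffusing into cold neighbours:
profile = heat_step_1d([0.0, 100.0, 0.0], alpha=1.0, dx=1.0, dt=0.25)
# profile == [0.0, 50.0, 0.0]
```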
Optimal design for additive manufacturing
Additively manufactured metallic structures with the same (macroscopic) shape and size but fabricated by different process parameters have strikingly different microstructures and hence mechanical properties. The abundant and highly flexible AM process parameters substantially influence the AM microstructures. Therefore, in principle, one could simultaneously 3D-print the (macro-)structure as well as the desirable microstructure depending on the expected performance of the specialized AM component under the known service load. In this context, multi-scale and multi-physics integrated computational materials engineering (ICME) for computational linkage of the process-(micro)structure-properties-performance (PSPP) chain can be used to efficiently search an AM design subspace for the optimum point with respect to the performance of the AM structure under the known service load. The comprehensive design space of metal AM is boundless and high-dimensional, including all the possible combinations of alloy compositions, process parameters and structural geometries; in practice, however, a constrained subset of the design space (a design subspace) is always under consideration. The performance, as the design objective, depends on the thermo-chemo-mechanical service load and may include multiple functional aspects, such as specific energy absorption capacity, fatigue life/strength, high-temperature strength, creep resistance, erosion/wear resistance and/or corrosion resistance. It is hypothesized that the optimal design approach is essential for unraveling the full potential of metal AM technologies and thus their widespread adoption for production of structurally critical load-bearing components.
References
Additive manufacturing
3D printing
Industrial design
Electronic design automation
Digital electronics | Design for additive manufacturing | [
"Engineering"
] | 1,893 | [
"Industrial design",
"Design engineering",
"Digital electronics",
"Design for X",
"Electronic engineering",
"Design"
] |
51,500,848 | https://en.wikipedia.org/wiki/NGC%20170 | NGC 170 is a lenticular galaxy located in the constellation Cetus. It was discovered on 3 November 1863 by Albert Marth.
See also
List of NGC objects (1–1000)
References
External links
SEDS
Astronomical objects discovered in 1863
Cetus
Discoveries by Albert Marth
0170
002195
Lenticular galaxies | NGC 170 | [
"Astronomy"
] | 65 | [
"Cetus",
"Constellations"
] |
51,501,092 | https://en.wikipedia.org/wiki/NGC%20171 | NGC 171 is a barred spiral galaxy with an apparent magnitude of 12, located around 200 million light-years away in the constellation Cetus. The galaxy has two main medium-wound arms, with a few minor arms, and a fairly bright nucleus and bulge. It was discovered on 20 October 1784 by William Herschel. It is also known as NGC 175.
See also
List of NGC objects (1–1000)
References
External links
SEDS
0171
Barred spiral galaxies
Cetus
Astronomical objects discovered in 1784
Discoveries by William Herschel | NGC 171 | [
"Astronomy"
] | 108 | [
"Cetus",
"Constellations"
] |
51,501,126 | https://en.wikipedia.org/wiki/NGC%20172 | NGC 172 is a barred spiral galaxy located around 136 million light-years away in the constellation Cetus. It was discovered in 1886 by astronomer Frank Muller.
See also
List of NGC objects (1–1000)
References
External links
Barred spiral galaxies
0172
Cetus
Astronomical objects discovered in 1886
Discoveries by Frank Muller (astronomer) | NGC 172 | [
"Astronomy"
] | 67 | [
"Cetus",
"Constellations"
] |
51,501,136 | https://en.wikipedia.org/wiki/Eremophila%20glabra%20subsp.%20South%20Coast | Eremophila glabra subsp. South Coast is a plant in the figwort family, Scrophulariaceae and is endemic to Western Australia. It is similar to other shrubs in the species Eremophila glabra but is distinguished from them mainly by the outer surface of its petal tube, which is covered with glandular hairs. It has not been formally described but is a distinct subspecies, restricted to the Ravensthorpe district.
Description
Eremophila glabra subsp. South Coast is an erect, open shrub growing to high. The leaves are grey-green and hairy, sometimes with a few indistinct teeth near their tips. The leaves are long and wide.
The flowers are yellow, orange or reddish-orange and occur singly in the leaf axils. They have 5 slightly overlapping sepals which are narrow lance-shaped, long and wide. The 5 petals form a tube long which is covered on its outer surface by glandular hairs. The lowest petal lobe is narrower than the rest and is turned back below the flower. Flowering occurs from August to December.
Taxonomy and naming
Eremophila glabra subsp. South Coast has not been formally described.
Distribution and habitat
Eremophila glabra subsp. South Coast is only known from the Ravensthorpe area where it grows in red-brown clay soil.
References
Flora of Western Australia
glabra
Undescribed plant species | Eremophila glabra subsp. South Coast | [
"Biology"
] | 294 | [
"Undescribed plant species",
"Plants"
] |
51,501,182 | https://en.wikipedia.org/wiki/NGC%20173 | NGC 173 is an unbarred spiral galaxy located approximately 3.8 million light-years away in the constellation Cetus.
References
External links
0173
00369
+00-02-092
2223
J00371247+0156321
Unbarred spiral galaxies
Cetus | NGC 173 | [
"Astronomy"
] | 62 | [
"Cetus",
"Constellations"
] |
51,501,226 | https://en.wikipedia.org/wiki/NGC%20174 | NGC 174 is a barred spiral or lenticular galaxy around 159 million light-years away in the constellation Sculptor. It was discovered on 27 September 1834 by astronomer John Herschel.
Observation history
When Herschel discovered the galaxy, he logged it as "faint, small, little extended, among several bright stars". After a second and third sweep, he noted an exact position which matches PGC 2206. As such, the two objects are generally referred to as the same. The galaxy was later catalogued by John Louis Emil Dreyer in the New General Catalogue, where Herschel's original note was largely adopted, as the object was described as "extremely faint, small, very little extended, among bright stars".
Description
The galaxy appears very dim in the sky as it only has an apparent visual magnitude of approximately 14 and thus can only be observed with telescopes. It can be classified as type G using the Hubble Sequence. The object's distance of roughly 159 million light-years from the Solar System can be estimated using its redshift and Hubble's law.
See also
List of NGC objects (1–1000)
References
External links
SEDS
0174
02206
-05-02-028
J00365892-2928403
Sculptor (constellation)
Astronomical objects discovered in 1834
Discoveries by John Herschel
Barred spiral galaxies | NGC 174 | [
"Astronomy"
] | 276 | [
"Constellations",
"Sculptor (constellation)"
] |
51,501,623 | https://en.wikipedia.org/wiki/NGC%20176 | NGC 176 is an open cluster around 3.5 million light-years away in the constellation Tucana. It is located within the Small Magellanic Cloud. It was discovered on August 12, 1834, by John Herschel.
See also
Open cluster
List of NGC objects (1–1000)
Tucana
References
External links
SEDS
0176
Open clusters
Astronomical objects discovered in 1834
Discoveries by John Herschel
Tucana
Small Magellanic Cloud | NGC 176 | [
"Astronomy"
] | 90 | [
"Tucana",
"Constellations"
] |
51,501,756 | https://en.wikipedia.org/wiki/Acetomepregenol | Acetomepregenol (ACM), also known as mepregenol diacetate and sold under the brand name Diamol, is a progestin medication which is used in Russia for the treatment of gynecological conditions and as a method of birth control in combination with an estrogen. It has also been studied in the treatment of threatened abortion. It has been used in veterinary medicine as well. It has been marketed since at least 1981.
Pharmacology
Based on its chemical structure, specifically the lack of a C3 ketone, it is probable that acetomepregenol is a prodrug of megestrol acetate (the 3-keto analogue).
Chemistry
Acetomepregenol, also known as megestrol 3β,17α-diacetate, as well as 3β-dihydro-6-dehydro-6-methyl-17α-hydroxyprogesterone diacetate or as 3β,17α-diacetoxy-6-methylpregna-4,6-dien-20-one, is a synthetic pregnane steroid and a derivative of progesterone and 17α-hydroxyprogesterone. It is very close to megestrol acetate (6-dehydro-6-methyl-17α-acetoxyprogesterone) in structure, except that there is a hydroxyl group with an acetate ester attached at the C3 position instead of a ketone. A closely related medication is cymegesolate (also known as megestrol 3β-cypionate 17α-acetate), which, in contrast, has not been marketed.
See also
List of Russian drugs
References
Acetate esters
Conjugated dienes
Ketones
Pregnanes
Prodrugs
Progestogen esters
Progestogens
Russian drugs
Veterinary drugs
Drugs in the Soviet Union | Acetomepregenol | [
"Chemistry"
] | 406 | [
"Ketones",
"Chemicals in medicine",
"Functional groups",
"Prodrugs"
] |
51,501,807 | https://en.wikipedia.org/wiki/NGC%20177 | NGC 177 is an unbarred spiral galaxy with a distinct ring structure, located around 200 million light-years away in the constellation Cetus. It was discovered in 1886 by Frank Muller.
References
External links
0177
002241
-04-02-028
Cetus
Unbarred spiral galaxies | NGC 177 | [
"Astronomy"
] | 63 | [
"Cetus",
"Constellations"
] |
51,502,129 | https://en.wikipedia.org/wiki/DNS%20leak | A DNS leak is a security flaw that allows DNS requests to be revealed to ISP DNS servers, despite the use of a VPN service to attempt to conceal them.
Although primarily of concern to VPN users, DNS leaks can also be prevented for proxy and direct internet users.
Process
The vulnerability allows an ISP, as well as any on-path eavesdroppers, to see what websites a user may be visiting. This is possible because the browser's DNS requests are sent to the ISP DNS server directly, and not sent through the VPN.
This only occurs with certain types of VPNs, e.g. "split-tunnel" VPNs, where traffic can still be sent over the local network interface even when the VPN is active.
Starting with Windows 8, Microsoft introduced "Smart Multi-Homed Name Resolution". This altered the way Windows handled DNS requests, by ensuring that a DNS request could travel across all available network interfaces on the computer. While there is general consensus that this new method of domain name resolution accelerated the time required for a DNS look-up to be completed, it also exposed VPN users to DNS leaks when connected to a VPN endpoint, because the computer would no longer use only the DNS servers assigned by the VPN service. Instead, the DNS request would be sent through all available interfaces, so the DNS traffic would travel outside the VPN tunnel and expose the user's default DNS servers.
Prevention
Websites exist that test whether a DNS leak is occurring. Regular DNS leak testing helps VPN users verify their privacy, as DNS leaks can expose browsing activity to ISPs and other third parties even when a VPN is active. DNS leaks can be addressed in a number of ways:
Encrypting DNS requests with DNS over HTTPS or DNS over TLS, which prevents the requests from being seen by on-path eavesdroppers.
Using a VPN client which sends DNS requests over the VPN. Not all VPN apps successfully plug DNS leaks: a 2016 study by the Commonwealth Scientific and Industrial Research Organisation, "An Analysis of the Privacy and Security Risks of Android VPN Permission-enabled Apps", found that 84% of the 283 VPN applications tested from the Google Play Store leaked DNS requests.
Changing the DNS servers on the local computer for all network adapters, or setting them to different ones. Third-party apps such as NirSoft QuickSetDNS are available for this.
Using a firewall to disable DNS for the whole device (usually outgoing connections on UDP and, less commonly, TCP port 53), or setting the DNS servers to non-existent ones such as the local 127.0.0.1 or 0.0.0.0 (via the command line, or a third-party app if this is not possible via the OS GUI). This requires alternative ways of resolving domains, such as those mentioned above, configuring a proxy in individual apps, or using proxy helper apps such as Proxifier or ProxyCap, which can resolve domains over the proxy. Many apps allow setting a manual proxy or using the proxy already configured by the system.
Using anonymity-focused web browsers such as Tor Browser, which not only anonymizes the user but also does not require any DNS to be configured on the operating system.
Using a proxy or VPN system-wide via third-party helper apps such as Proxifier, or in the form of a web browser extension. However, most extensions in Chrome or Firefox will report a false positive working condition even if they did not connect, so checking with a third-party IP and DNS leak test website is recommended. This false working state usually happens when two proxy or VPN extensions are used at the same time (e.g. the Windscribe VPN and FoxyProxy extensions).
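The comparison step behind a leak test can be sketched as plain set logic; the resolver addresses below are hypothetical examples, and in practice the observed resolvers must be gathered with a test service or packet capture:

```python
def dns_leak_suspected(observed_resolvers, vpn_resolvers):
    """Return resolvers that answered DNS queries but were not assigned
    by the VPN -- evidence that requests escaped the tunnel. Pure
    comparison logic; gathering observed_resolvers is a separate step."""
    return sorted(set(observed_resolvers) - set(vpn_resolvers))

# Hypothetical addresses for illustration:
leaks = dns_leak_suspected(
    observed_resolvers={"10.8.0.1", "203.0.113.53"},  # seen answering queries
    vpn_resolvers={"10.8.0.1"},                       # pushed by the VPN
)
# leaks == ["203.0.113.53"]: a non-VPN resolver saw the queries
```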
References
Virtual private networks
Internet privacy
Computer security exploits | DNS leak | [
"Technology"
] | 829 | [
"Computer security exploits"
] |
51,502,530 | https://en.wikipedia.org/wiki/NGC%20179 | NGC 179 is a lenticular galaxy located 3.3 million light-years away in the constellation Cetus. It was discovered in 1886 by Francis Preserved Leavenworth.
See also
List of NGC objects (1–1000)
References
External links
SEDS
0179
Lenticular galaxies
Astronomical objects discovered in 1886
Cetus | NGC 179 | [
"Astronomy"
] | 65 | [
"Cetus",
"Constellations"
] |
51,502,959 | https://en.wikipedia.org/wiki/Nattr | Nattr is a social application that crowdsources responses to messages. It is commonly used to source icebreakers and replies to sensitive text messages.
Nattr was founded by Melanie Mercier and Laura Russell. Nattr was launched in beta March 2015.
Features
Star Responders: A star responder is a writer who has been verified by Nattr. If a user would like to request a star response, they can purchase charms to pay for one, or accumulate charms by responding to messages on Nattr.
Public or Private: All messages are anonymous however, the option exists to send a message to everyone in the Nattr community or send it privately to those selected.
How it works
Users log in using Facebook or a phone number and submit a screenshot of a conversation that needs a reply, then post it to the Nattr community for responses.
References
External links
Social media | Nattr | [
"Technology"
] | 174 | [
"Computing and society",
"Social media"
] |
51,503,111 | https://en.wikipedia.org/wiki/NGC%20180 | NGC 180 is a barred spiral galaxy located in the constellation Pisces. It was discovered on December 29, 1790 by William Herschel.
A peculiar type II supernova was discovered in the galaxy in 2001 and given the designation SN 2001dj.
See also
Spiral galaxy
List of NGC objects (1–1000)
Pisces (constellation)
References
External links
SEDS
0180
00380
Barred spiral galaxies
Pisces (constellation)
Astronomical objects discovered in 1790
+01-02-039
002268 | NGC 180 | [
"Astronomy"
] | 107 | [
"Pisces (constellation)",
"Constellations"
] |
51,503,173 | https://en.wikipedia.org/wiki/Di-tert-butyl-iminodicarboxylate | Di-tert-butyl-iminodicarboxylate is an organic compound that can be described with the formula [(CH3)3COC(O)]2NH. It is a white solid that is soluble in organic solvents. The compound is used as a reagent for the preparation of primary amines from alkyl halides. It was popularized as an alternative to the Gabriel synthesis for the same conversion. Amines can also be prepared from alcohols by dehydration using the Mitsunobu reaction.
In the usual implementation the reagent is deprotonated to give the potassium salt, which is N-alkylated. The Boc protecting groups are subsequently removed under acidic conditions.
References
Imides
Carbamates
Reagents for organic chemistry
Tert-butyl compounds | Di-tert-butyl-iminodicarboxylate | [
"Chemistry"
] | 173 | [
"Imides",
"Functional groups",
"Reagents for organic chemistry"
] |
51,503,211 | https://en.wikipedia.org/wiki/NGC%20181 | NGC 181 is a galaxy, likely a spiral galaxy located in the constellation Andromeda. It was discovered on October 6, 1883 by Édouard Stephan.
References
External links
0181
Spiral galaxies
Andromeda (constellation)
Discoveries by Édouard Stephan
002287 | NGC 181 | [
"Astronomy"
] | 54 | [
"Andromeda (constellation)",
"Constellations"
] |
51,503,229 | https://en.wikipedia.org/wiki/Montana%20flume | A Montana flume is a popular modification of the standard Parshall flume. The Montana flume removes the throat and discharge sections of the Parshall flume, resulting a flume that is lighter in weight, shorter in length, and less costly to manufacture. Montana flumes are used to measure surface waters, irrigations flows, industrial discharges, and wastewater treatment plant flows.
As a short-throated flume, the Montana flume has a single, specified point of measurement in the contracting section at which the level is measured. The Montana flume is described in US Bureau of Reclamation's Water Measurement Manual and two technical standards MT199127AG and MT199128AG by Montana State University.
As a modification of the Parshall flume, the design of the Montana flume is standardized under ASTM D1941, ISO 9826:1992, and JIS B7553-1993. The flumes are not patented and the discharge tables are not copyright protected.
A total of 22 standard sizes of Montana flumes have been developed, covering flow ranges from 0.005 cfs [0.1416 L/s] to 3,280 cfs [92,890 L/s].
Lacking the extended throat and discharge sections of the Parshall flume, Montana flumes are not intended for use under submerged conditions. Where submergence is possible, a full length Parshall flume should be used. Should submergence occur, investigations have been made into correcting the flow.
Under laboratory conditions the Parshall flume, upon which the Montana flume is based, can be expected to exhibit accuracies to within ±2%, although field conditions make accuracies better than 5% doubtful.
Free-Flow Characteristics
The Montana flume is a restriction with free-spilling discharge that accelerates flow from a subcritical state (Fr ≈ 0.5) to a supercritical one (Fr > 1).
The free-flow discharge can be summarized as

Q = C · Ha^n

where
Q is the flow rate
C is the free-flow coefficient for the flume
Ha is the head at the primary point of measurement
n varies with flume size (see Table 1 below)
Montana flume discharge table for free flow conditions:
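The rating equation above can be evaluated directly; the coefficient and exponent below are illustrative placeholders, not the published values for any particular Montana flume size (those come from the discharge table):

```python
def montana_free_flow(ha, c, n):
    """Free-flow discharge Q = C * Ha**n for a Montana flume.
    ha: head at the primary point of measurement (ft)
    c, n: free-flow coefficient and exponent for the flume size
    Valid only under free-flow (unsubmerged) conditions."""
    if ha < 0:
        raise ValueError("head must be non-negative")
    return c * ha ** n

# Placeholder coefficients for illustration only:
q = montana_free_flow(ha=0.5, c=4.0, n=1.55)  # discharge in cfs
```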
Free-Flow vs. Submerged Flow
Free-Flow – when there is no “back water” to restrict flow through a flume. Only the single depth (primary point of measurement -Ha) needs to be measured to calculate the flow rate. A free flow also induces a hydraulic jump downstream of the flume.
Submerged Flow – when the water surface downstream of the flume is high enough to restrict flow through a flume, the flume is deemed to be submerged. Lacking the extended throat and discharge sections of the Parshall flume, the Montana flume has little resistance to the effects of submergence and as such it should be avoided. Where submerged flow is or may become present, there are several methods of correcting the situation: the flume may be raised above the channel floor, the downstream channel may be modified, or a different flume type may be used (typically a Parshall flume). Although commonly thought of as occurring at higher flow rates, submerged flow can exist at any flow level as it is a function of downstream conditions. In natural stream applications, submerged flow is frequently the result of vegetative growth on the downstream channel banks, sedimentation, or subsidence of the flume.
Construction
Montana flumes can be constructed from a variety of materials:
Fiberglass (wastewater applications due to its corrosion resistance)
Stainless steel (applications involving high temperatures / corrosive flow streams)
Galvanized steel (water rights / irrigation)
Concrete
Aluminum (portable applications)
Wood (temporary flow measurement)
Plastic (PVC or polycarbonate / Lexan)
Smaller Montana flumes tend to be fabricated from fiberglass and galvanized steel (depending upon the application), while larger Montana flumes can be fabricated from fiberglass (sizes up to 160") or concrete (160"-600").
In practice, it is unusual to see Montana flumes larger than 48 inches, as the need for free-spilling discharge cannot usually be met, downstream scour would be excessive, or other flume types better handle the flow.
Drawbacks
Montana flumes require free-spilling discharge off the flume (for free-flow conditions). To accommodate the drop in an existing channel either the flume must be raised above the channel floor (raising the upstream water level) or the downstream channel must be modified.
As with weirs, flumes can also have an effect on local fauna. Some species or certain life stages of the same species may be blocked by flumes due to relatively slow swim speeds or behavioral characteristics. The elevated nature of the Montana flume exacerbates this problem.
In earthen channels, upstream bypass may occur and downstream scour will occur unless the channel is armored.
Montana flumes smaller than 3 inches in size should not be used on unscreened sanitary flows, due to the likelihood of clogging.
The Montana flume is an empirical device. Interpolation between sizes is not an accurate method of developing intermediate size Montana flumes as the flumes are not scale models of each other. The 30-inch [76.2 cm] and 42-inch [106.7 cm] sizes are examples of intermediate sizes of Montana flumes that have crept into the marketplace without the backing of published research into their sizing and flow rates.
References
External links
Pictures of fiberglass, galvanized and stainless steel Montana flumes
Fluid mechanics
Hydraulic structures
Water supply infrastructure
Hydrology | Montana flume | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,172 | [
"Hydrology",
"Civil engineering",
"Fluid mechanics",
"Environmental engineering"
] |
51,503,257 | https://en.wikipedia.org/wiki/NGC%20182 | NGC 182 is a spiral galaxy with a ring structure, located in the constellation Pisces. It was discovered on December 25, 1790 by William Herschel.
In 2004 a type IIb supernova was discovered in this galaxy and designated SN 2004ex.
References
External links
0182
Intermediate spiral galaxies
Discoveries by William Herschel
Pisces (constellation)
002279 | NGC 182 | [
"Astronomy"
] | 78 | [
"Pisces (constellation)",
"Constellations"
] |
51,503,455 | https://en.wikipedia.org/wiki/Cantharocybe%20brunneovelutina | Cantharocybe brunneovelutina is a species of the family Hygrophoraceae.
References
Hygrophoraceae
Fungi described in 2011
Fungus species | Cantharocybe brunneovelutina | [
Cantharocybe virosa is a member of the fungal family Hygrophoraceae that has been identified in India, Bangladesh and Thailand. It is an ectomycorrhizal fungus that is toxic for consumption and has no known uses in agriculture, horticulture or medicine. C. virosa is a gray to gray-brown fungus with white to yellowish-white gills that can be found in soil or on mud walls near Cocos nucifera.
Taxonomy
The species Cantharocybe virosa was initially described by a group of mycologists at the University of Calicut as Megacollybia virosa in 2010 using a cladistic approach. It was transferred from the genus Megacollybia to the genus Cantharocybe in 2013 by Kumar and Manimohan using molecular phylogeny.
Description
Macroscopic
Cantharocybe virosa has a gray to gray-brown pileus, ranging from 4.5 cm to 10 cm in diameter, with a striped surface and straight margins at maturity. The gills are up to 9 mm thick and yellowish white to whitish, either adnate or decurrent, forming between four and eight tiers. The stipe is terete or compressed and typically central, but it can be excentric. It is moist, solid, with a dilated apex and white basal mycelium. The spore print is white, and the mushroom produces a strong and unpleasant, but undescribed odor.
Microscopic
Cantharocybe virosa has smooth ellipsoid basidiospores and elongated, necked lecythiform cheilocystidia. Also notable is the presence of cutis pileipellis forming trichodermal patches, and abundant clamp connections.
Distribution and habitat
Cantharocybe virosa can be found both as solitary individuals or in clusters in a substrate of soil or mud walls. It is saprotrophic and often found near the roots of Cocos nucifera due to its ectomycorrhizal association with it. C. virosa inhabits tropical regions, originally identified in India in 2010, but has since been identified in Bangladesh as well as Thailand. C. virosa was identified in Bangladesh in 2016 and in Thailand in 2018. It is assumed to have been present in Thailand, but not described before this point due to the large number of unidentified fungi in the country.
Root symbiosis
Cantharocybe virosa is believed to have an ectomycorrhizal association with C. nucifera, the coconut tree. This association is unusual as the family Arecaceae, in which C. nucifera is classified, typically doesn't form fungal associations. Recent studies have shown the closely related genus Cuphophyllus as having hyphal endophytes in plant roots, with Hosen hypothesizing the C. virosa and C. nucifera association might be of this form instead.
Toxicity
When consumed, C. virosa causes gastrointestinal (GI) issues, a result of the mycotoxin coprine, but it is not fatal. Because it is not edible, it is not cultivated and has no known current or historical medicinal uses or known ties to any historical events. Wild specimens of C. virosa are occasionally mistaken for other mushrooms and ingested, leading to its description in India and identification in Thailand. The first known outbreak occurred in 2006 in Kerala when a family of four used it in cooking, but at this time C. virosa had not been described. In 2018 a large outbreak of 39 cases occurred during the rainy season in Thailand, found to be caused by C. virosa.
Coprine
The mycotoxin coprine is believed to be responsible for causing a number of symptoms when ingested, including GI system effects, rash, sweating and arrhythmias. These symptoms fall under the group 4b toxins, described as disulfiram-like.
Use in research
In 2022 the genomic data gathered from C. virosa has been used as an out group to identify two new species in the genus Volvariella, V. neovolvacea and V. thailandensis.
References
Fungi of Bangladesh
Fungi of India
Hygrophoraceae
Fungi described in 2011
Fungus species
Event bubbling is a type of DOM event propagation where the event first triggers on the innermost target element, and then successively triggers on the ancestors (parents) of the target element in the same nesting hierarchy until it reaches the outermost DOM element or document object (provided the handler is initialized). It is one way that events are handled in the browser. Events are actions performed by the user, such as a button click or a change to a field. Event handlers are used to execute code when a particular kind of user interface event occurs, such as when a button has been clicked or when a webpage has completed loading.
Overview
Consider the DOM structure where there are 3 elements nested in the following order: Element 1 (Div), Element 2 (Span), Element 3 (Button) whose on-click handlers are handler1(), handler2() and handler3() respectively.
<div id="Element1" onclick="handler1()">
<span id="Element2" onclick="handler2()">
<input type="button" id="Element3" onclick="handler3()" />
</span>
</div>
When the Element3 button is clicked, the event handler for Element 3 is triggered first; the event then bubbles up and the handler for the immediate parent element (Element 2) is called, followed by the handler for Element 1, and so on until it reaches the outermost DOM element.
Event handling order: handler3() → handler2() → handler1()
The innermost element from where the event is triggered is called the target element. Most browsers use event bubbling as the default mode of event propagation. However, there is another approach to event propagation known as event capturing, which is the direct opposite of event bubbling: event handling starts from the outermost element (or document) of the DOM structure and proceeds all the way to the target element, executing the target element's handler last in order.
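The dispatch order described above can be modeled without a browser. The following sketch (with our own helper names, not the real DOM API) simulates the order in which handlers fire along a propagation path:

```javascript
// Toy model of DOM event dispatch order (illustrative only, not the real DOM).
// The propagation path lists element ids from the target outward to the root.
const path = ["Element3", "Element2", "Element1"]; // button -> span -> div

function dispatchOrder(path, useCapture) {
  // Capturing visits the outermost ancestor first; bubbling visits the target first.
  return useCapture ? [...path].reverse() : [...path];
}

console.log(dispatchOrder(path, false)); // bubbling:  [ 'Element3', 'Element2', 'Element1' ]
console.log(dispatchOrder(path, true));  // capturing: [ 'Element1', 'Element2', 'Element3' ]
```

Running both calls makes the mirror-image relationship between bubbling and capturing explicit.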
Implementation
Event handlers use event bubbling by default, but a user can select the propagation mode by passing a third argument to addEventListener() on any element in JavaScript:
element.addEventListener(type, listener, useCapture)
If useCapture is false, the event will be handled using event bubbling.
If useCapture is true, the event will be handled using event capturing.
If the useCapture argument is omitted, it defaults to false, i.e. event bubbling. Most browsers support both event bubbling and event capturing (except Internet Explorer before version 9 and Opera before version 7.0, which do not support event capturing).
JavaScript also provides an event property called bubbles to check whether an event is a bubbling event or not. It returns a Boolean value (true or false) depending on whether the event can bubble up to the parent elements in the DOM structure:
var isBubblePossible = event.bubbles;
isBubblePossible is true if the event can bubble up to the ancestors, and false if it cannot.
Use of event bubbling
Event bubbling handles cases where one event needs to trigger more than one handler; its major use is the registration of default handlers on ancestor elements. In practice, few developers rely explicitly on event capturing or bubbling. Implementing event bubbling is not mandatory, and it can make it harder to keep track of which actions are executed as a result of a single event.
Preventing event bubbling
It is sometimes useful to stop a single trigger on one element from leading to triggers on its ancestors. JavaScript provides the following methods to prevent event bubbling:
1) stopPropagation(): This method stops the further propagation of a particular event to its parents, invoking only the event handler of the target element. It is supported by all W3C-compliant browsers; Internet Explorer below version 9 instead requires the historical alias cancelBubble, as in:
event.cancelBubble = true;
For all W3C-compliant browsers:
event.stopPropagation();
2) stopImmediatePropagation(): This method not only stops further propagation but also stops any other handler of the target event from executing. In the DOM, the same event can have multiple independent handlers, so stopping the execution of one event handler generally doesn't affect the other handlers of the same target. The stopImmediatePropagation() method, however, prevents the execution of any other handler of the same target.
For all W3C-compliant browsers:
event.stopImmediatePropagation();
Another approach to stopping event bubbling is to cancel the event itself; however, this prevents execution of the target's handler as well.
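The difference between the two methods can be sketched with a toy bubbling dispatcher (the names bubble, mode, etc. are ours, not DOM APIs): stopPropagation() lets the remaining handlers on the current element run but skips the ancestors, while stopImmediatePropagation() halts everything at once.

```javascript
// Toy bubbling dispatcher (illustrative only). Each element has a list of
// handlers; a handler may request "stop" (stopPropagation-like) or
// "stopNow" (stopImmediatePropagation-like).
function bubble(handlersByElement) {
  const ran = [];
  for (const handlers of handlersByElement) { // target first, then ancestors
    let stop = false;
    for (const h of handlers) {
      ran.push(h.name);
      if (h.mode === "stopNow") return ran;   // skip everything else
      if (h.mode === "stop") stop = true;     // finish this element, skip ancestors
    }
    if (stop) return ran;
  }
  return ran;
}

const parent = [{ name: "p1" }];
console.log(bubble([[{ name: "t1", mode: "stop" }, { name: "t2" }], parent]));    // [ 't1', 't2' ]
console.log(bubble([[{ name: "t1", mode: "stopNow" }, { name: "t2" }], parent])); // [ 't1' ]
console.log(bubble([[{ name: "t1" }], parent]));                                  // [ 't1', 'p1' ]
```

In the first call the target's second handler still runs but the parent's does not; in the second call even the sibling handler t2 is suppressed.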
See also
DOM event
References
bubbling
JavaScript
When embedded in an atomic nucleus, neutrons are (usually) stable particles. Outside the nucleus, free neutrons are unstable and have a mean lifetime of about 879 seconds (about 14 minutes, 39 seconds). Therefore, the half-life for this process (which differs from the mean lifetime by a factor of ln 2 ≈ 0.693) is about 610 seconds (about 10 minutes, 10 seconds).
The beta decay of the neutron described in this article can be notated at four slightly different levels of detail, as shown in four layers of Feynman diagrams in a section below.
The hard-to-observe W− boson quickly decays into an electron and its matching antineutrino. The overall reaction, n → p + e− + ν̄e, depicts the process as it was first understood, in the first half of the 20th century. The W− boson vanished so quickly that it was not detected until much later.
Later, beta decay was understood to occur by the emission of a weak boson (W−), sometimes called a charged weak current. Beta decay specifically involves the emission of a W− boson from one of the down quarks hidden within the neutron, thereby converting the down quark into an up quark and consequently the neutron into a proton.
For diagrams at several levels of detail, see § Decay process, below.
Energy budget
For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is about 0.782 MeV. That is the difference between the rest mass of the neutron and the sum of the rest masses of the products. That difference has to be carried away as kinetic energy. The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at about 0.782 MeV. The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic energy); furthermore, neutrino mass is constrained by many other methods.
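As a consistency check, the decay energy follows directly from standard rest-mass values (approximate figures assumed here):

```latex
Q = (m_n - m_p - m_e)\,c^2
  \approx 939.565\ \mathrm{MeV} - 938.272\ \mathrm{MeV} - 0.511\ \mathrm{MeV}
  \approx 0.782\ \mathrm{MeV}
```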
A small fraction (about 1 in 1,000) of free neutrons decay with the same products, but add an extra particle in the form of an emitted gamma ray: n → p + e− + ν̄e + γ
This gamma ray may be thought of as a sort of "internal bremsstrahlung" that arises as the emitted beta particle (electron) interacts with the charge of the proton in an electromagnetic way. In this process, some of the decay energy is carried away as photon energy. Gamma rays produced in this way are also a minor feature of beta decays of bound neutrons, that is, those within a nucleus.
A very small minority of neutron decays (about four per million) are so-called "two-body (neutron) decays", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the "two bodies"). In this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other "body").
The transformation of a free proton to a neutron (plus a positron and a neutrino) is energetically impossible, since a free neutron has a greater mass than a free proton. However, see proton decay.
Decay process viewed from multiple levels
Understanding of the beta decay process developed over several years, with the initial understanding of Enrico Fermi and colleagues starting at the "superficial" first level in the diagram below. Current understanding of weak processes rest at the fourth level, at the bottom of the chart, where the nucleons (the neutron and its successor proton) are largely ignored, and attention focuses only on the interaction between two quarks and a charged boson, with the decay of the boson almost treated as an afterthought. Because the charged weak boson () vanishes so quickly, it was not actually observed during the first half of the 20th century, so the diagram at level 1 omits it; even at present it is for the most part inferred by its after-effects.
At the four levels of detail, the decay can be written as:
Level 1 (W boson omitted, as first understood): n → p + e− + ν̄e
Level 2 (W boson made explicit): n → p + W−, then W− → e− + ν̄e
Level 3 (quark content of the nucleons shown): udd → uud + W−, then W− → e− + ν̄e
Level 4 (spectator quarks ignored): d → u + W−, then W− → e− + ν̄e
Neutron lifetime puzzle
While the neutron lifetime has been studied for decades, there currently exists a lack of consilience on its exact value, due to different results from two experimental methods ("bottle" versus "beam").
The "neutron lifetime anomaly" was discovered after the refinement of experiments with ultracold neutrons. While the error margins once overlapped, increasing refinement in technique, which should have resolved the issue, has failed to produce convergence to a single value. The difference in mean lifetime values obtained as of 2014 was approximately 9 seconds. Further, a prediction of the value based on quantum chromodynamics was, as of 2018, still not sufficiently precise to support one value over the other.
As explained by Wolchover (2018), the beam test would be incorrect if there is a decay mode that does not produce a proton.
On 13 October 2021, the lifetime from the bottle method was updated to 877.75 s, increasing the difference to about 10 seconds below the beam-method value of roughly 888 s. On the same date, a novel third method, using data from NASA's past Lunar Prospector mission, reported a value between the two, but with great uncertainty.
Yet another approach similar to the beam method has been explored with the Japan Proton Accelerator Research Complex (J-PARC) but it is too imprecise at the moment to be of significance on the analysis of the discrepancy.
See also
Halbach array, used in the "bottle" method
Footnotes
References
Bibliography
Neutron
Radioactivity
Physical phenomena
The thinking environment is a philosophy of communication, based on the work of Nancy Kline. It is a practical series of values-based applications which are useful in family, campaigning, community and organisational life, as well as forming the basis of a teaching pedagogy and coaching approach.
A thinking environment exists when the "ten components", or "principles", are held in place by a facilitator. The components are attention, appreciation, ease, encouragement, diversity, information, feelings, equality, place and incisive questions.
With the components in place, the facilitator then chooses an "application" of the thinking environment, with the agreement of participants. These include coaching (known as the "thinking partnership"), "thinking rounds", "thinking pairs", "transforming meetings", "mentoring", "time to think council", "dialogue", and "timed talk".
References
Human communication
2-NBDG is a fluorescent tracer used for monitoring glucose uptake into living cells; it consists of a glucosamine molecule substituted with a 7-nitrobenzofurazan fluorophore at its amine group. It is widely referred to as a fluorescent derivative of glucose, and it is used in cell biology to visualize uptake of glucose by cells. Cells that have taken up the compound fluoresce green.
2-NBDG is similar to radiolabeled glucose in that both can be used to detect glucose transport. Unlike radiolabeled glucose, 2-NBDG is compatible with fluorescence techniques such as a fluorescent microscopy, flow cytometry, and fluorimetry.
The compound is taken up by a variety of mammalian, plant, and microbial cells. In bacterial cells, the predominant transporter is the mannose phosphotransferase system. Cells that lack these or other compatible transporters do not take up 2-NBDG. In mammalian cells, one transporter for 2-NBDG is GLUT2. In T cells, 2-NBDG was transported by another, unidentified transporter, and its uptake did not match radiolabeled glucose transport.
Like glucose, 2-NBDG is transported according to Michaelis–Menten kinetics. However, transport of 2-NBDG has a lower Vmax (maximum rate), and thus the rate of transport is generally slower than glucose.
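For reference, the Michaelis–Menten rate law mentioned above has the standard form (symbols standard; the slower transport of 2-NBDG corresponds to a smaller maximum rate):

```latex
v = \frac{V_{\max}\,[S]}{K_M + [S]}
```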
Once taken up, the compound is metabolized to a non-fluorescent derivative, as shown in Escherichia coli. The identity and further metabolism of this non-fluorescent derivative has not been established.
References
Hexosamines
Deoxy sugars
Aldohexoses
DPANN is a superphylum of Archaea first proposed in 2013. Many members show novel signs of horizontal gene transfer from other domains of life. They are known as nanoarchaea or ultra-small archaea due to their smaller size (nanometric) compared to other archaea.
DPANN is an acronym formed by the initials of the first five groups discovered: Diapherotrites, Parvarchaeota, Aenigmarchaeota, Nanoarchaeota and Nanohaloarchaeota. Later, Woesearchaeota and Pacearchaeota were discovered and proposed within the DPANN superphylum. In 2017, another phylum, Altiarchaeota, was placed into this superphylum. The monophyly of DPANN is not yet considered established, due to the high mutation rate of the included phyla, which can lead to the long branch attraction (LBA) artifact, whereby lineages are grouped artificially at the base of the phylogenetic tree without being related. Such analyses instead suggest that DPANN belongs to Euryarchaeota or is polyphyletic, occupying various positions within Euryarchaeota.
The DPANN groups together different phyla with a variety of environmental distribution and metabolism, ranging from symbiotic and thermophilic forms such as Nanoarchaeota, acidophiles like Parvarchaeota and non-extremophiles like Aenigmarchaeota and Diapherotrites. DPANN was also detected in nitrate-rich groundwater, on the water surface but not below, indicating that these taxa are still quite difficult to locate.
Since the recognition of the kingdom rank by the ICNP, the only validly published name for this group is kingdom Nanobdellati.
Characteristics
They are characterized by being small in size compared to other archaea (nanometric size) and in keeping with their small genome, they have limited but sufficient catabolic capacities to lead a free life, although many are thought to be episymbionts that depend on a symbiotic or parasitic association with other organisms. Many of their characteristics are similar or analogous to those of ultra-small bacteria (CPR group).
Limited metabolic capacities are a product of the small genome and are reflected in the fact that many lack central biosynthetic pathways for nucleotides, amino acids, and lipids; hence most DPANN archaea, such as the ARMAN archaea, rely on other microbes to meet their biological requirements. But those that have the potential to live freely are fermentative and aerobic heterotrophs.
They are mostly anaerobic and have not been cultivated. They live in extreme environments such as thermophilic, hyperacidophilic, hyperhalophilic or metal-resistant; or also in the temperate environment of marine and lake sediments. They are rarely found on the ground or in the open ocean.
Classification
Diapherotrites. Found by phylogenetic analysis of the genomes recovered from the groundwater filtration of a gold mine abandoned in the USA.
Parvarchaeota and Micrarchaeota. Discovered in 2006 in acidic mine drainage from a US mine. They are of very small size and provisionally called ARMAN (Archaeal Richmond Mine acidophilic nanoorganisms).
Woesearchaeota and Pacearchaeota. They have been identified both in sediments and in surface waters of aquifers and lakes, abounding especially in saline conditions.
Aenigmarchaeota. Found in wastewater from mines and in sediments from hot springs.
Nanohalarchaeota. Distributed in environments with high salinity.
Nanoarchaeota. They were the first discovered (in 2002) in a hydrothermal source next to the coast of Iceland. They live as symbionts of other archaea.
Phylogeny
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI).
Superphylum "DPANN" Rinke et al. 2013 (kingdom Nanobdellati)
Phylum Microcaldota Sakai et al. 2023
Class Microcaldia Sakai et al. 2023
Order ?Microcaldales Sakai et al. 2023
Phylum "Undinarchaeota" Dombrowski et al. 2020
Class "Undinarchaeia" Dombrowski et al. 2020
Order "Undinarchaeales" Dombrowski et al. 2020
Phylum "Huberarchaeota" Probst et al. 2019
Class "Huberarchaeia" corrig. Probst et al. 2019
Order "Huberarchaeales" Rinke et al. 2020
Phylum "Aenigmatarchaeota" corrig. Rinke et al. 2013 (DSEG, DUSEL2)
Class "Aenigmatarchaeia" corrig. Rinke et al. 2020
Order "Aenigmatarchaeales" corrig. Rinke et al. 2020
Phylum "Nanohalarchaeota" corrig. Rinke et al. 2013
Class "Nanohalobiia" corrig. La Cono et al. 2020
Order "Nanohalobiales" La Cono et al. 2020
Class "Nanohalarchaeia" corrig. Narasingarao et al. 2012
Order ?"Nanohalarchaeales"
Order ?"Nanohydrothermales" Xie et al. 2022
Order ?"Nucleotidisoterales" Xie et al. 2022
Phylum Altarchaeota Probst et al. 2018 (SM1)
Class "Altarchaeia" corrig. Probst et al. 2014
Order "Altarchaeales" corrig. Probst et al. 2014
Phylum "Iainarchaeota" corrig. Rinke et al. 2013 ["Diapherotrites" Rinke et al. 2013] (DUSEL-3)
Class "Iainarchaeia" Rinke et al. 2020
Order "Forterreales" Probst & Banfield 2017
Order "Iainarchaeales" Rinke et al. 2020
Phylum "Micrarchaeota" Baker & Dick 2013
Class "Micrarchaeia" Vazquez-Campos et al. 2021
Order "Anstonellales" Vazquez-Campos et al. 2021 (LFWA-IIIc)
Order "Burarchaeales" Vazquez-Campos et al. 2021 (LFWA-IIIb)
Order "Fermentimicrarchaeales" Kadnikov et al. 2020
Order "Gugararchaeales" Vazquez-Campos et al. 2021 (LFWA-IIIa)
Order "Micrarchaeales" Vazquez-Campos et al. 2021
Order "Norongarragalinales" Vazquez-Campos et al. 2021 (LFWA-II)
Phylum Nanobdellota Huber et al. 2023
Class Nanobdellia Kato et al. 2022
Order JAPDLS01
Order "Jingweiarchaeales" Rao et al. 2023 [DTBS01]
Order Nanobdellales Kato et al. 2022
Order "Pacearchaeales" (DHVE-5, DUSEL-1)
Order "Parvarchaeales" Rinke et al. 2020 (ARMAN 4 & 5)
Order "Tiddalikarchaeales" Vazquez-Campos et al. 2021 (LFW-252_1)
Order "Woesearchaeales" (DHVE-6)
Phylum ?"Mamarchaeota"
Order ?"Wiannamattarchaeales"
See also
List of Archaea genera
References
External links
Archaea taxa
Extremophiles
Superphyla
In economics and mechanism design, a cost-sharing mechanism is a process by which several agents decide on the scope of a public product or service, and how much each agent should pay for it. Cost-sharing is easy when the marginal cost is constant: in this case, each agent who wants the service just pays its marginal cost. Cost-sharing becomes more interesting when the marginal cost is not constant. With increasing marginal costs, the agents impose a negative externality on each other; with decreasing marginal costs, the agents impose a positive externality on each other (see example below). The goal of a cost-sharing mechanism is to divide this externality among the agents.
There are various cost-sharing mechanisms, depending on the type of product/service and the type of cost-function.
Divisible product, increasing marginal costs
In this setting, several agents share a production technology. They have to decide how much to produce and how to share the cost of production.
The technology has increasing marginal cost - the more is produced, the harder it becomes to produce more units (i.e., the cost is a convex function of the demand).
An example cost-function is:
$1 per unit for the first 10 units;
$10 per unit for each additional unit.
So if there are three agents whose demands are 3, 6, and 10 units respectively, then the total cost is $100.
Definitions
A cost-sharing problem is defined by the following functions, where i is an agent and Q is a quantity of the product:
Demand(i) = the amount that agent i wants to receive.
Cost(Q) = the cost of producing Q units of the product.
A solution to a cost-sharing problem is defined by a payment for every agent who is served, such that the total payment equals the total cost:
Pay(1) + Pay(2) + ... + Pay(n) = Cost(D),
where D is the total demand: D = Demand(1) + Demand(2) + ... + Demand(n).
Several cost-sharing solutions have been proposed.
Average cost-sharing
In the literature on cost pricing of a regulated monopoly, it is common to assume that each agent should pay its average cost, i.e.: Pay(i) = Demand(i) · Cost(D) / D.
In the above example, the payments are 15.8 (for demand 3), 31.6 (for demand 6) and 52.6 (for demand 10).
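The average-cost payments above can be reproduced with a short sketch (function names are ours; the cost curve is the one from the running example):

```javascript
// Cost curve from the example: $1/unit for the first 10 units, $10/unit after.
function cost(q) {
  return q <= 10 ? q : 10 + 10 * (q - 10);
}

// Average cost-sharing: each agent pays its demand times the average unit cost.
function averageShares(demands) {
  const D = demands.reduce((a, b) => a + b, 0);  // total demand
  const avg = cost(D) / D;                       // average cost per unit
  return demands.map(d => d * avg);
}

const shares = averageShares([3, 6, 10]);
console.log(shares.map(x => x.toFixed(1)));      // [ '15.8', '31.6', '52.6' ]
```

Because each payment is proportional to demand, merging or splitting demands leaves the totals unchanged, which is the immunity property described below.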
This cost-sharing method has several advantages:
It is not affected by manipulations in which two agents openly merge their demand into a single super-agent, or one agent openly splits its demand into two sub-agents. Indeed, it is the only method immune to such manipulations.
It is not affected by manipulations in which two agents secretly transfer costs and products between each other.
Each agent pays at least its stand-alone cost - the cost he would have paid without the existence of other agents. This is a measure of solidarity: no agent should make a profit from a negative externality.
However, it has a disadvantage:
An agent might pay more than its unanimous cost - the cost he would have paid if all other agents had the same demand.
This is a measure of fairness: no agent should suffer too much from the negative externality. In the above example, the agent with demand 3 can claim that, if all other agents were as modest as he is, there would have been no negative externality and each agent would have paid only $1 per unit, so he should not have to pay more than this.
Marginal cost-sharing
In marginal cost-sharing, the payment of each agent depends on his demand and on the marginal cost in the current production-state: Pay(i) = Demand(i) · C′(D) − [D · C′(D) − Cost(D)] / n, where C′(D) is the marginal cost at the total demand D and n is the number of agents.
In the above example, the payments are 0 (for demand 3), 30 (for demand 6) and 70 (for demand 10).
This method guarantees that an agent pays at most its unanimous cost - the cost he would have paid if all other agents had the same demand.
However, an agent might pay less than its stand-alone cost. In the above example, the agent with demand 3 pays nothing (in some cases it is even possible that an agent pays negative value).
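A sketch of marginal cost-sharing for the same example (helper names are ours; the rebate term divides the surplus of linear marginal-cost pricing equally, which restores budget balance):

```javascript
// Cost curve from the example and its marginal cost.
function cost(q) { return q <= 10 ? q : 10 + 10 * (q - 10); }
function marginalCost(q) { return q <= 10 ? 1 : 10; }  // C'(q) for this curve

function marginalShares(demands) {
  const n = demands.length;
  const D = demands.reduce((a, b) => a + b, 0);
  const mc = marginalCost(D);
  // Pricing every unit at the marginal cost over-collects; rebate the
  // difference between mc*D and the true total cost equally.
  const rebate = (mc * D - cost(D)) / n;
  return demands.map(d => mc * d - rebate);
}

console.log(marginalShares([3, 6, 10]));  // [ 0, 30, 70 ]
```

Note the small agent pays nothing here, illustrating how payments can fall below the stand-alone cost.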
Serial cost-sharing
Serial cost-sharing can be described as the result of the following process.
At time 0, all agents enter a room.
The machine starts producing one unit per minute.
The produced unit and its cost are divided equally among all agents in the room.
Whenever an agent feels that his demand is satisfied, he exits the room.
So, if the agents are ordered in ascending order of demand:
Agent 1 (with the lowest demand) pays: Pay(1) = Cost(n·d1) / n;
Agent 2 pays: Pay(2) = Pay(1) + [Cost(d1 + (n−1)·d2) − Cost(n·d1)] / (n−1);
and so on, where d1 ≤ d2 ≤ ... ≤ dn are the demands in ascending order.
This method guarantees that each agent pays at least its stand-alone cost and at most its unanimous cost.
However, it is not immune to splitting or merging of agents, or to transfer of input and output between agents. Hence, it makes sense only when such transfers are impossible (for example, with cable TV or telephone services).
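The room-exit process above can be computed directly (a sketch with our own helper names, again using the example's cost curve):

```javascript
// Cost curve from the example.
function cost(q) { return q <= 10 ? q : 10 + 10 * (q - 10); }

// Serial cost-sharing: agents sorted by demand; while i agents remain in the
// room, they split production (and its cost) equally.
function serialShares(demands) {
  const d = [...demands].sort((a, b) => a - b);  // ascending demand
  const n = d.length;
  const shares = [];
  let prefix = 0, prevCost = 0, prevShare = 0;
  for (let i = 0; i < n; i++) {
    // Production level when agent i leaves: everyone still present has
    // consumed at least d[i], earlier leavers consumed their own demand.
    const q = prefix + (n - i) * d[i];
    const share = prevShare + (cost(q) - prevCost) / (n - i);
    shares.push(share);
    prefix += d[i];
    prevCost = cost(q);
    prevShare = share;
  }
  return shares;
}

console.log(serialShares([3, 6, 10]));  // [ 3, 28.5, 68.5 ]
```

With demands 3, 6 and 10 the shares are 3, 28.5 and 68.5, summing to the total cost of 100; each share lies between the agent's stand-alone cost and its unanimous cost, as the text claims.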
Binary service, decreasing marginal costs
In this setting,
there is a binary service - each agent is either served or is not served. The cost of the service is higher when more agents are served, but the marginal cost is smaller than when serving each agent individually (i.e., the cost is a submodular set function).
As a typical example, consider two agents, Alice and George, who live near a water-source, with the following distances:
Source-Alice: 8 km
Source-George: 7 km
Alice-George: 2 km
Suppose that each kilometer of water-pipe costs $1000. We have the following options:
Nobody is connected; the cost is 0.
Only George is connected; the cost is $7000.
Only Alice is connected; the cost is $8000.
Both Alice and George are connected; the cost is $9000, since the pipe can go from Source to George and then to Alice. Note that it is much cheaper than the sum of the costs of George and Alice.
The choice between these four options should depend on the valuations of the agents - how much each of them is willing to pay for being connected to the water-source.
The goal is to find a truthful mechanism that will induce the agents to reveal their true willingness-to-pay.
Definitions
A cost-sharing problem is defined by the following functions, where i is an agent and S is a subset of agents:
Value(i) = the amount that agent i is willing to pay in order to enjoy the service.
Cost(S) = the cost of serving all and only the agents in S. E.g., in the above example Cost({Alice,George})=9000.
A solution to a cost-sharing problem is defined by:
A subset S of agents who should be served;
A payment for every agent who is served.
A solution can be characterized by:
The budget surplus of a solution is the total payment minus the total cost: Surplus = [sum of Pay(i) over all served agents i in S] − Cost(S). We would like to have budget balance, which means that the surplus should be exactly 0.
The social welfare of a solution is the total utility minus the total cost: Welfare = [sum of Value(i) over all served agents i in S] − Cost(S). We would like to have efficiency, which means that the social welfare is maximized.
It is impossible to attain truthfulness, budget-balance and efficiency simultaneously; therefore, there are two classes of truthful mechanisms:
Tatonnement mechanisms - budget-balanced but not efficient
A budget-balanced cost-sharing mechanism can be defined by a function Payment(i,S) - the payment that agent i has to pay when the subset of served agents is S. This function should satisfy the following two properties:
budget-balance: the total payment by any subset equals the total cost of serving this subset: for every subset S, the sum of Payment(i,S) over all agents i in S equals Cost(S). So if a single agent is served, he must pay all his cost, but if two or more agents are served, each of them may pay less than his individual cost because of the submodularity.
population monotonicity: the payment of an agent weakly increases when the subset of served agents shrinks: if S ⊆ T and agent i is in S, then Payment(i,S) ≥ Payment(i,T).
For any such function, a cost-sharing problem with submodular costs can be solved by the following tatonnement process:
Initially, let S be the set of all agents.
Tell each agent i that he should pay Payment(i,S).
Each agent who is not willing to pay his price, leaves S.
If any agent has left S, return to step 2.
Otherwise, finish and serve the agents that remain in S.
Note that, by the population-monotonicity property, the price always increases when people leave S. Therefore, an agent will never want to return to S, so the mechanism is truthful (the process is similar to an English auction). In addition to truthfulness, the mechanism has the following merits:
Group strategyproofness - no group of agents can gain by reporting untruthfully.
No positive transfers - no agent is paid money in order to be served.
Individual rationality - no agent loses value from participation (in particular, a non-served agent pays nothing and a served agent pays at most his valuation).
Consumer sovereignty - every agent can choose to get service, if his willingness-to-pay is sufficiently large.
Moreover, any mechanism satisfying budget-balance, no-positive-transfers, individual-rationality, consumer-sovereignty and group-strategyproofness can be derived in this way using an appropriate Payment function.
The mechanism can select the Payment function in order to attain such goals as fairness or efficiency. When agents have equal a priori rights, some reasonable payment functions are:
The Shapley value, e.g., for two agents, the payments when both agents are served are: Payment(Alice,Both) = [Cost(Both)+Cost(Alice)-Cost(George)]/2, Payment(George,Both) = [Cost(Both)+Cost(George)-Cost(Alice)]/2.
The egalitarian solution, e.g. Payment(Alice,Both) = median[Cost(Alice), Cost(Both)/2, Cost(Both)-Cost(George)], Payment(George,Both) = median[Cost(George), Cost(Both)/2, Cost(Both)-Cost(Alice)].
When agents have different rights (e.g. some agents are more senior than others), it is possible to charge the most senior agent only his marginal cost, e.g. if George is more senior, then for every subset S which does not contain George: Payment(George,S+George) = Cost(S+George)−Cost(S). Similarly, the next-most-senior agent can pay his marginal remaining cost, and so on.
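For the water-pipe cost figures used later in the article (Cost(Alice)=8000, Cost(George)=7000, Cost(Both)=9000), the two-agent Shapley and egalitarian formulas above can be evaluated directly. This is an illustrative sketch; the function names are chosen here:

```python
from statistics import median

# Two-agent payment rules from the text, evaluated on the article's
# water-pipe costs (Alice: 8000, George: 7000, both: 9000).

def shapley_two(cost_i, cost_j, cost_both):
    # Payment(i, Both) = [Cost(Both) + Cost(i) - Cost(j)] / 2
    return (cost_both + cost_i - cost_j) / 2

def egalitarian_two(cost_i, cost_j, cost_both):
    # Payment(i, Both) = median[Cost(i), Cost(Both)/2, Cost(Both) - Cost(j)]
    return median([cost_i, cost_both / 2, cost_both - cost_j])

alice_shapley = shapley_two(8000, 7000, 9000)       # 5000
george_shapley = shapley_two(7000, 8000, 9000)      # 4000
alice_egal = egalitarian_two(8000, 7000, 9000)      # 4500
george_egal = egalitarian_two(7000, 8000, 9000)     # 4500
# Both rules are budget-balanced: each pair of payments sums to 9000.
```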
The above cost-sharing mechanisms are not efficient - they do not always select the allocation with the highest social welfare. But, when the payment function is selected to be the Shapley value, the loss of welfare is minimized.
VCG mechanisms - efficient but not budget-balanced
A different class of cost-sharing mechanisms are the VCG mechanisms. A VCG mechanism always selects the socially-optimal allocation - the allocation that maximizes the total utility of the served agents minus the cost of serving them. Then, each agent receives the welfare of the other agents, and pays an amount that depends only on the valuations of the other agents. Moreover, all VCG mechanisms satisfy the consumer-sovereignty property.
There is a single VCG mechanism which also satisfies the requirements of no-positive-transfers and individual-rationality - it is the Marginal Cost Pricing mechanism. This is a special VCG mechanism in which each non-served agent pays nothing, and each served agent pays: Payment(i) = Value(i) − [W_opt − W_opt(−i)], where W_opt is the maximal attainable social welfare and W_opt(−i) is the maximal social welfare attainable when agent i is excluded.
I.e., each agent pays his value, but gets back the welfare that is added by his presence. Thus, the interests of the agent are aligned with the interests of society (maximizing the social welfare), so the mechanism is truthful.
The problem with this mechanism is that it is not budget-balanced - it runs a deficit. Consider the above water-pipe example, and suppose both Alice and George value the service at $10000. When only Alice is served, the welfare is 10000-8000=2000; when only George is served, the welfare is 10000-7000=3000; when both are served, the welfare is 10000+10000-9000=11000. Therefore, the Marginal Cost Pricing mechanism selects to serve both agents. George pays 10000-(11000-2000)=1000 and Alice pays 10000-(11000-3000)=2000. The total payment is only 3000, which is less than the total cost of 9000.
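The deficit in this example can be verified with a short sketch (an illustration with names chosen here; the brute-force search over subsets is only feasible for small instances):

```python
from itertools import combinations

# Marginal Cost Pricing on the article's water-pipe numbers.
COST = {frozenset(): 0,
        frozenset({'Alice'}): 8000,
        frozenset({'George'}): 7000,
        frozenset({'Alice', 'George'}): 9000}
VALUE = {'Alice': 10000, 'George': 10000}
AGENTS = ['Alice', 'George']

def welfare(S):
    """Total value of the served agents minus the cost of serving them."""
    return sum(VALUE[i] for i in S) - COST[frozenset(S)]

def best_subset(agents):
    """The subset of agents maximizing social welfare (brute force)."""
    subsets = [frozenset(c) for r in range(len(agents) + 1)
               for c in combinations(agents, r)]
    return max(subsets, key=welfare)

def mcp_payment(i):
    # Agent i pays his value minus the welfare his presence adds:
    # Value(i) - [W_opt - W_opt(without i)].
    others = [a for a in AGENTS if a != i]
    return VALUE[i] - (welfare(best_subset(AGENTS))
                       - welfare(best_subset(others)))

# George pays 10000 - (11000 - 2000) = 1000;
# Alice pays 10000 - (11000 - 3000) = 2000.
# Total payments: 3000, against a cost of 9000 - a deficit.
```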
Moreover, the VCG mechanism is not group-strategyproof: an agent can help other agents by raising his valuation, without harming himself.
See also
Carpool - an application of cost-sharing.
Shapley value - a possible rule for cost-sharing.
Public good
Facility location (cooperative game)
Surplus sharing
References
Mechanism design | Cost-sharing mechanism | [
"Mathematics"
] | 2,642 | [
"Game theory",
"Mechanism design"
] |
51,507,225 | https://en.wikipedia.org/wiki/All%20Tomorrows | All Tomorrows: A Billion Year Chronicle of the Myriad Species and Mixed Fortunes of Man is a 2006 work of science fiction and speculative evolution written and illustrated by the Turkish artist C. M. Kosemen under the pen name Nemo Ramjet. It explores a hypothetical future path of human evolution set from the near future to a billion years from the present. Several future human species evolve through natural means and through genetic engineering, conducted by both humans themselves and by a mysterious and superior alien species called the Qu.
Inspired by the science fiction works of Olaf Stapledon and Edward Gibbon's The History of the Decline and Fall of the Roman Empire, Kosemen worked on All Tomorrows from 2003 to the publication of the book as a free PDF file online in 2006. Kosemen intends to eventually publish a greatly expanded All Tomorrows in physical form, with new text and updated illustrations.
Summary
Centuries after humanity terraforms and colonizes Mars, a brief but catastrophic interplanetary war takes place between Mars and Earth, costing both parties billions of lives. The two planets eventually make peace with each other, and a large-scale colonization initiative is carried out by genetically engineered humans called Star People throughout the galaxy.
Humans (now Star People) then encounter a malevolent and superior alien species called the Qu. The Qu's religion motivates them to remake the universe through genetic engineering. A short war follows in which humanity is defeated. The Qu bioengineer the surviving humans as punishment into a range of exotic forms, many of them unintelligent. After forty million years of domination, the Qu leave the galaxy, leaving the altered humans to evolve on their own. The bioengineered humans range from worm-like humans to insectivores and modular and cell-based species. The book follows the progress of these new humans as they either go extinct or regain sapience in wildly different forms and gradually discover that the Qu experimented on them.
One species, known as the Ruin Haunters, replaces their bodies with mechanical forms using technology the Qu had left behind, becoming known as the Gravitals. They begin to colonize the rest of the galaxy while annihilating most life within it, including the other post-human species (except for the Bug Facers, whom the Gravitals genetically modify for their own gain, much as the Qu had done). The Gravitals are themselves destroyed by the Asteromorphs, the descendants of a human species who escaped experimentation by the Qu; the surviving Gravitals are then re-engineered into less sophisticated machines to serve the Asteromorphs as laborers. The final chapters of the book detail humanity's rebound as a posthuman species, their first contact with another galaxy's life, and their rediscovery and defeat of the Qu after five hundred million years, concluding with the rediscovery of Earth 560 million years in the future.
All Tomorrows ends with a picture of the book's in-universe author, an alien researcher, holding a billion-year-old human skull and writing that all posthuman species disappeared a billion years in the future, for unknown reasons. The author goes on to state that mankind's story was always about the lives of humans themselves, not major wars and abstract ideals. The author ends by encouraging the reader to "Love Today, and seize All Tomorrows!"
Development
Kosemen worked on All Tomorrows from 2003 to 2006. The work of Olaf Stapledon, particularly Last and First Men (1930) and Star Maker (1937), served as the main inspiration for the work, alongside Edward Gibbon's The History of the Decline and Fall of the Roman Empire.
All Tomorrows is written in the style of a historical work, narrated by an alien creature recounting the history of humanity. According to Kosemen, the "tone of voice is a high school student fanboying on the Decline and Fall of the Roman Empire by Edward Gibbon". The artwork is also reflective of this "archaeological" approach, with faded and textured visual effects applied to the paintings. The original reason for adding the faded tint to the paintings was Kosemen wanting to avoid the paintings looking like "horrible racist caricatures".
The book was released for free online as a PDF on 4 October 2006 and has since then, per Kosemen himself, "had a life of its own as a PDF floating around the backwaters of the internet like a ghost ship". One of the common links which All Tomorrows has been shared through is a wiki site dedicated to speedrunning.
The first licensed physical edition of All Tomorrows was published by Time Publishing in March 2024, in the Thai language. This edition included the content of the original 2006 book, with a new chapter on the making of the book and some additional artwork by other artists.
All Tomorrows has yet to be physically published in English; however, in July 2024, preorders began on the crowdfunding site Unbound for official hardback and e-book editions in English, including additional material and artwork, with publication intended for 2025.
Reception and legacy
Originally an obscure work, All Tomorrows slowly gained popularity online following its 2006 publication. In a 2021 podcast interview, Kosemen noted that the generation born right after him (Kosemen having been born in 1984) "really embraced" All Tomorrows, which he believes might partially be due to the "myriad disasters" that have happened in the world since then. The book has received some scholarly attention; in 2020, All Tomorrows was among the works discussed in Jörg Matthias Determann's book Islam, Science Fiction and Extraterrestrial Life, which explores astrobiology and science fiction in the Muslim world. Following the upload of an abridged version of the book's story by YouTuber Alt Shift X in June 2021, All Tomorrows saw a particular surge in popularity online during the summer of 2021. Among other things, there was a surge of internet memes based on the book, primarily on YouTube and Twitter as well as fan art based on the creatures in the book.
Readers have characterized All Tomorrows as "bizarre", "inexplicable", "interesting" and "fascinating", and as a work incorporating body horror. Ivan Farkas of Cracked.com called All Tomorrows "existentially freak-ay" in 2021 and described the artwork as "otherworldly". A 2022 article by Andrea Viscusi on the Italian media website Stay Nerd compared All Tomorrows to Man After Man (1990) by Dougal Dixon, also a work tackling future human evolution, but found the depictions in All Tomorrows to be "even more disturbing", yet still possible on an "almost subliminal level" to "recognize as our fellow men". In a 2022 article in the lifestyle magazine A Little Bit Human, Allia Luzong considered All Tomorrows to be a "fun exploration of what could be" but also a serious work with serious themes, particularly noting how humanity's social ills are present throughout the narrative.
Kosemen stated in 2021 that though the book had grown popular, he had almost "disowned" All Tomorrows, finding parts of it "a bit cringey". When designing his website and including his different books and projects, Kosemen purposefully left out All Tomorrows. Following the summer of 2021, he has since added the book to his website and intends to eventually publish All Tomorrows in physical form with new text and illustrations. By 16 October 2022, Kosemen had written the expanded version up until the Qu's conquest of the galaxy. Kosemen stated that the material up until that point amounted to 200 pages, almost twice the length of the entire original book. Kosemen continues to work on the expanded version as of 2024.
In April 2024, Kosemen announced the release of a physical copy of the book, available only in Thai. Although it contains the original version of the text, this edition also includes a few illustrations by other artists and a new chapter with additional information about the species; this new chapter is only available in Thai.
At the same time, Kosemen stated that he is continuing work on the new version of the book, which has now reached over 300 pages, with many species still to cover. Every species now has deeper lore, and major new plot twists have been added.
See also
Transhuman
Posthuman
Biopunk
All Yesterdays (2012) by John Conway, Darren Naish and Kosemen – a similarly titled book on paleoart, co-authored by Kosemen.
Man After Man (1990) by Dougal Dixon – a similar book about (human) speculative evolution
References
External links
All Tomorrows – original 2006 PDF version of the book
Все Грядущие дни – 2009/2010 Russian translation of All Tomorrows by Pavel Volkov
– by Alt Shift X, recommended by C. M. Kosemen himself (see pinned comment)
All Tomorrows - 2023 Italian translation of All Tomorrows by D. Lombardo
All Tomorrows - 2023 Czech translation of All Tomorrows by J. Dubánek
All Tomorrows - 2022 French translation of All Tomorrows by Lucas G. Blanchard
2006 novels
2006 science fiction novels
Turkish science fiction novels
Fictional species and races
Books about evolution
Human evolution books
Speculative evolution
Novels set on Mars
Novels set in the future
Novels about genetic engineering
Extinction in fiction
Fiction set in the 7th millennium or beyond
Fiction books about genocide
Evolution in popular culture
Internet memes introduced in 2021
Milky Way in fiction | All Tomorrows | [
"Biology"
] | 1,998 | [
"Biological hypotheses",
"Speculative evolution",
"Hypothetical life forms"
] |