id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
35,177,747 | https://en.wikipedia.org/wiki/Toshiba%E2%80%93Kongsberg%20scandal | The Toshiba–Kongsberg scandal, referred to in Japan as the Toshiba Machine CoCom violation case, was an international trade incident that unfolded during the final period of the Cold War. It centered on companies in Coordinating Committee for Multilateral Export Controls (CoCom) member nations that violated foreign exchange and foreign trade laws by exporting machine tools to the Soviet Union. These tools, combined with Kongsberg numerical control (NC) devices manufactured in Norway, contravened the CoCom agreement. The equipment allowed Soviet submarine technology to progress significantly, as it was used to machine quieter propellers for Soviet submarines.
The incident strained relations between the United States and Japan and resulted in the arrest and prosecution of two senior executives, as well as the imposition of sanctions on Kongsberg by both countries.
The Incident
Toshiba Machine, at that time a 50.1% subsidiary of Toshiba, the major Japanese general electronics manufacturer, was itself a major manufacturer of machine tools. Toshiba Machine accounted for about 10% of the entire Toshiba Group's sales, and its exports to the communist bloc accounted for less than 20% of its own total sales.
Between December 1982 and 1984, Toshiba Machine supplied eight machine tools, NC devices, and the associated control software to the Soviet Union's Technical Machinery Import Corporation through Wako Trading, a dummy company of Itochu Corporation. The machines exported via this route were high-performance models capable of simultaneous 9-axis control, whose export was prohibited under the Coordinating Committee for Multilateral Export Controls (CoCom). Despite these controls, Toshiba Machine and Itochu Corporation exported the machines to the Soviet Union from 1982 to 1983 and modified the associated software in 1984.
Employees of Toshiba Machine, Itochu Corporation, and Wako Trading Co. Ltd. knew that exporting the machine tools ordered by the Soviet Union to the communist bloc was not permitted. Wako therefore filed a false export permit application claiming that the shipment was a large vertical lathe, and supported it with a signed contract for the machine to be reassembled overseas. The Japanese Ministry of International Trade and Industry, in charge of export control, did not see through the falsified application.
Exposed
At the end of 1986, the U.S. federal government learned of the transaction from an informant at Wako Trading, an employee called Kumagai Doku. The Pentagon investigated and concluded that the contract had contributed to the recent rapid improvement in the quietness of the Soviet Navy's nuclear-powered attack submarines. It notified the Japanese government through Atsuyuki Sasa, Director of the Cabinet Security Office, in March 1987, the first official report on the incident.
On the 19th, the Pentagon issued a statement that the U.S. government had learned that Japanese machine tooling used to make submarine screws had been sent to the Soviet Union, that this was suspected of violating CoCom regulations, and that the Japanese government had been asked to investigate. According to sources familiar with the matter, the machine tool in question was believed to be a product of Toshiba Machine, a 50% subsidiary of Toshiba.
The tooling was believed to be a type of milling machine used to make propeller blades for ships, a general-purpose technical product that can be diverted to military technology. The Soviet Union was said to be using it to develop and manufacture new blades to reduce the screw noise of submarines.
The statement added that it was not clear when or how the Soviet Union had acquired the equipment. However, the US government pointed out that the Norwegian weapons maker Kongsberg had also provided similar machine tooling. Using these acquisitions, the Soviet Union reduced its submarines' screw noise, a key clue used to detect, identify, and track submarines. The reduction could make it difficult for the U.S. Navy to track Soviet submarines, according to the Pentagon.
For this reason, the US government requested the Japanese and Norwegian governments to investigate the circumstances under which these machines had been exported. It called for "appropriate action" to be taken based on the international understanding of CoCom and their respective domestic laws if violations of CoCom were revealed.
In June, former Minister of International Trade and Industry Tamura, sent to the United States by Japan's Prime Minister Yasuhiro Nakasone, formally apologized to US Secretary of Defense Caspar Weinberger.
Investigation and trial
On April 30, 1987, Japanese police searched Toshiba Machine's premises. On May 27, two Toshiba Machine executives were arrested over the false application for violating the Foreign Exchange Law, a Japanese domestic law. Toshiba Machine was indicted and the case went to trial.
On March 22, 1988, the Tokyo District Court handed down its judgment. Toshiba Machine was fined 2 million yen, and the two executives received prison sentences of 10 months and 1 year, each suspended for three years. Chairman Shoichi Saba and President Sugiichiro Watari resigned from the parent company Toshiba. The term of Chairman Saba, who had been expected to remain a major force at Toshiba, was thus cut short. Joichi Aoi took over as president.
Ryuzo Sejima, an adviser to Itochu Corporation, was demoted. Sejima had been regarded as the brains of the Nakasone Cabinet; however, statements made by the Soviet second secretary Yuri Rastvorov and by Ivan Kovalenko raised suspicions that Sejima was a Soviet spy, which caused a stir.
Diplomatic and trade consequences
In the United States, in addition to restrictions on Toshiba Machine, the import of all products of the Toshiba Group, including those of Toshiba itself, was strictly prohibited. Emotional reactions followed, including a staged protest in front of the Capitol in which members of Congress smashed Toshiba radio-cassette players and televisions with hammers.
Congressman Duncan Hunter, the central figure in the Congressional investigation of Toshiba, severely criticized Toshiba for endangering the lives of American soldiers: the export, he argued, had halved the range at which American nuclear submarines could detect Soviet ones. He argued that it would be necessary to invest $30 billion to build 30 new nuclear submarines within 10 years.
Impact
In response to the affair, Toshiba lobbied Congress between 1987 and 1989 to ease the sanctions. The money Toshiba spent, the number of lobbyists it hired, and the scale of its activities were said to be the largest ever. Houlihan, a lobbying law firm, argued that Toshiba and Toshiba Machine were separate companies, an argument that had some success.
Details
The machine tools that were combined with the Norwegian numerical control (NC) devices and exported to the Soviet Union are listed in the Norwegian Police Service report.
See also
History of computing in the Soviet Union
Soviet computing technology smuggling
References
Further reading
1987 in Japan
1987 in the United States
Computing in the Soviet Union
Espionage scandals and incidents
Foreign trade of the Soviet Union
Japan–Soviet Union relations
Japan–United States relations
Norway–Soviet Union relations
Political scandals in the United States
Smuggling
Toshiba | Toshiba–Kongsberg scandal | Technology | 1,489 |
2,324,714 | https://en.wikipedia.org/wiki/FERET%20database | The Facial Recognition Technology (FERET) database is a dataset used for facial recognition system evaluation as part of the Face Recognition Technology (FERET) program. It was established in 1993 as a collaborative effort between Harry Wechsler at George Mason University and Jonathon Phillips at the Army Research Laboratory in Adelphi, Maryland. The FERET database serves as a standard database of facial images for researchers to use when developing algorithms and reporting results. The use of a common database also allows researchers to compare the effectiveness of different methodological approaches and gauge their strengths and weaknesses.
The facial images in the database were collected between December 1993 and August 1996, accumulating a total of 14,126 images of 1,199 individuals, along with 365 duplicate sets of images taken on a different day. In 2003, the Defense Advanced Research Projects Agency (DARPA) released a high-resolution, 24-bit color version of these images. The color dataset includes 2,413 still facial images representing 856 individuals. The FERET database has been used by more than 460 research groups and is managed by the National Institute of Standards and Technology (NIST).
References
External links
Official website about the gray-scale version
Official website about the color version
More official information
IEEE Transactions on Pattern Analysis and Machine Intelligence, VOL. 22, NO. 10, October 2000
More documents about FERET
Biometric databases
Datasets in computer vision
Facial recognition
Scientific databases
Test items
Machine learning task
Automatic identification and data capture
Surveillance | FERET database | Technology | 308 |
10,407,605 | https://en.wikipedia.org/wiki/Phosphatidylinositol%20phosphate | Phosphatidylinositol phosphate may refer to:
Phosphatidylinositol 3-phosphate, also known as PI(3)P
Phosphatidylinositol 4-phosphate, also known as PI(4)P
Phosphatidylinositol 5-phosphate, also known as PI(5)P
Phospholipids | Phosphatidylinositol phosphate | Chemistry | 81 |
69,787,990 | https://en.wikipedia.org/wiki/Goldberg%20drum | A Goldberg drum is a piece of laboratory equipment used in the study of aerosols. It was described in 1958 by Leonard J. Goldberg of the Naval Biological Laboratory, School of Public Health, University of California, Berkeley. It is used to contain airborne aerosols and particles.
References
Aerosols
Laboratory equipment | Goldberg drum | Chemistry | 64 |
66,727,176 | https://en.wikipedia.org/wiki/Neofavolus%20suavissimus | Neofavolus suavissimus is a species of fungus belonging to the family Polyporaceae.
Synonym:
Panus suavissimus (Fr.) Singer, 1951
References
Polyporaceae
Fungus species | Neofavolus suavissimus | Biology | 44 |
56,051,175 | https://en.wikipedia.org/wiki/Essential%20Biodiversity%20Variables | Essential Biodiversity Variables (EBVs) are a proposed set of parameters intended to be the minimum, broadly agreed set of biodiversity variables necessary and sufficient for monitoring, researching, and forecasting biodiversity at national to global scales. They are being developed by an interdisciplinary group of governmental and academic research partners. The initiative aims for a harmonised global biodiversity monitoring system. EBVs would be used to inform biodiversity change indicators, such as the CBD Biodiversity Indicators for the Aichi Targets.
The concept is partly based on the earlier Essential Climate Variables. It can be generalised as the minimum set of variables for describing and predicting a system's state and dynamics. Areas with more developed EV lists include climate, ocean, and biodiversity.
EBV Classes / Categories
The current candidate EBVs occupy six classes of Essential Biodiversity Variable: genetic composition, species populations, species traits, community composition, ecosystem structure, and ecosystem function. Within each class are a few to several variables.
Associated projects and organisations
As of 2017, participants in the project consist of the GlobDiversity project (funded by the European Space Agency) under GEO BON (Group on Earth Observations Biodiversity Observation Network; a cooperative project of international universities), and the GLOBIS-B project (Global Infrastructures for Supporting Biodiversity Research; funded by the EU Horizon 2020 programme).
Development
The concept was first proposed in 2012 and developed in the following years.
The GLOBIS-B global cooperation project, aimed to advance the challenge of practical implementation of EBVs by supporting interoperability and cooperation activities among diverse biodiversity infrastructures, started in 2015. The GlobDiversity project of GEO BON, led by the University of Zurich, started in 2017, focusing on specification and engineering of three RS-enabled EBVs.
The scope and screening of potential variables are under ongoing discussion.
This includes the definition of the species distribution and population abundance EBVs; operationalisation of the EBV framework; data, tools, and workflows for building EBV data products; metadata and data-sharing standards; and the possible integration of abiotic variables (e.g. those emphasised in the Ecosystem Integrity framework) with the biotic variables emphasised in the EBV framework, to achieve comprehensive ecosystem monitoring.
"EBV data products" refers to the end product of the EBV information supply chain, which runs from raw observations, to EBV-usable data, to EBV-ready data, to EBV data products. Each of these three types of EBV dataset could be used to produce indicators. Data sources for EBVs are categorised into four types: extensive monitoring schemes, intensive monitoring schemes, ecological field studies, and remote sensing. Each has its own, often complementary, properties, implying that data integration will be important for the creation of representative EBVs, as well as for identifying and filling data gaps.
References
Biodiversity
International scientific organizations
Earth observation | Essential Biodiversity Variables | Biology | 608 |
804,255 | https://en.wikipedia.org/wiki/Roman%20shower | A Roman shower is a type of architecturally designed shower stall that does not require a door or curtain.
These showers are often used as disabled-accessible showers in hotels. They may also be known as "roll-in showers".
References
Plumbing
Bathing | Roman shower | Engineering | 52 |
41,269,797 | https://en.wikipedia.org/wiki/Prostaglandin%20E3 | Prostaglandin E3 (PGE3) is a naturally formed prostaglandin and is formed via the cyclooxygenase (COX) metabolism of eicosapentaenoic acid.
See also
Prostaglandin E1 (PGE1)
Prostaglandin E2 (PGE2)
References
Prostaglandins
Carboxylic acids
Secondary alcohols | Prostaglandin E3 | Chemistry | 87 |
77,828,177 | https://en.wikipedia.org/wiki/Sepetaprost | Sepetaprost is an investigational new drug that is being evaluated for the treatment of open angle glaucoma and ocular hypertension. It is an agonist of the prostaglandin EP3 and F receptors.
References
Oxepanes
2-Fluorophenyl compounds
Isopropyl esters
Diols | Sepetaprost | Chemistry | 71 |
28,263,672 | https://en.wikipedia.org/wiki/The%20World%27s%2025%20Most%20Endangered%20Primates | The World's 25 Most Endangered Primates is a list of highly endangered primate species selected and published by the International Union for Conservation of Nature (IUCN) Species Survival Commission (SSC) Primate Specialist Group (PSG), the International Primatological Society (IPS), Global Wildlife Conservation (GWC), and Bristol Zoological Society (BZS). The IUCN/SSC PSG worked with Conservation International (CI) to start the list in 2000, but in 2002, during the 19th Congress of the International Primatological Society, primatologists reviewed and debated the list, resulting in the 2002–2004 revision and the endorsement of the IPS. The publication was a joint project between the three conservation organizations until the 2012–2014 list, when BZS was added as a publisher. The 2018–2020 list was the first time Conservation International was not among the publishers, replaced instead by GWC. The list has been revised every two years following the biannual Congress of the IPS. Starting with the 2004–2006 report, the title changed to "Primates in Peril: The World's 25 Most Endangered Primates". That same year, the list began to provide information about each species, including their conservation status and the threats they face in the wild. The species text is written in collaboration with experts from the field, with 60 people contributing to the 2006–2008 report and 85 people contributing to the 2008–2010 report. The 2004–2006 and 2006–2008 reports were published in the IUCN/SSC PSG journal Primate Conservation; since then they have been published as independent publications.
The 25 species on the 2018–2020 list are distributed between 32 countries. The country with the most species on the list is Madagascar with five species, Indonesia, Brazil, Ghana, and Côte d'Ivoire each have three. The list is broken into four distinct regions: the island of Madagascar, the continent of Africa, the continent of Asia including the islands of Indonesia, and the Neotropics (Central and South America).
The purpose of the list, according to Russell Mittermeier, the president of CI, is "to highlight those [primate species] that are most at risk, to attract the attention of the public, to stimulate national governments to do more, and especially to find the resources to implement desperately needed conservation measures." Species are selected for the list based on two primary reasons: extremely small population sizes and very rapid drops in numbers. These reasons are heavily influenced by habitat loss and hunting, the two greatest threats primates face. More specifically, threats listed in the report include deforestation due to slash-and-burn agriculture, clearing for pasture or farmland, charcoal production, firewood production, illegal logging, selective logging, mining, land development, and cash crop production; forest fragmentation; small population sizes; live capture for the exotic pet trade; and hunting for bushmeat and traditional medicine. Twelve species were dropped from the 2018–2020 list; Mittermeier notes this was not because their situation had improved but to focus attention on other species that also have "bleak prospects for their survival".
Key
Current list
Former list members
With each new publication, species are both added and removed from the list. In some cases, removal from the list signifies improvement for the species. With the publication of the 2006–2008 list, four species were removed because of increased conservation efforts: the black lion tamarin (Leontopithecus chrysopygus), golden lion tamarin (Leontopithecus rosalia), mountain gorilla (Gorilla beringei beringei), and Perrier's sifaka (Propithecus perrieri). In 2008, the black lion tamarin went from critically endangered to endangered and the golden lion tamarin was similarly promoted in 2003 after three decades of collaborative conservation efforts by zoos and other institutions. Well-protected species such as these still have very small populations, and due to deforestation, new habitat is still needed for their long-term survival. The Hainan black crested gibbon (Nomascus hainanus), which was removed from the 2008–2010 list, still has fewer than 20 individuals left, but significant efforts to protect it are now being made. Mittermeier claimed in 2007 that all 25 species could be elevated off the list within five to ten years if conservation organizations had the necessary resources.
Unlike the changes in the 2006–2008 report, not all species were removed from the 2008–2010 list due to improvement in their situation. Instead, new species were added to bring attention to other closely related species with very small populations that are also at risk of extinction. For example, the highly endangered eastern black crested gibbon (Nomascus nasutus) replaced the Hainan black crested gibbon. The Javan slow loris (Nycticebus javanicus) replaced the Horton Plains slender loris (Loris tardigradus nycticeboides) because the former has been hit the hardest of Asian lorises, all of which are declining rapidly due primarily to capture for the exotic pet trade, as well as use in traditional medicines and forest loss. In another case, the brown-headed spider monkey (Ateles fusciceps fusciceps) was omitted from the list since no spokesperson could be found for the species. The same approach was taken with the 2012–2014 list.
List history
With the exception of the 2000–2002 publication, which was written collaboratively by the IUCN/SSC PSG and CI, the list has been revised every two years following the biannual Congress of the IPS. The 2002–2004 list resulted from the 19th Congress of the IPS in Beijing, China; the 2004–2006 list followed the 20th Congress of the IPS, held in Torino, Italy; the 2006–2008 list after the 21st Congress in Entebbe, Uganda; the 2008–2010 list followed the 22nd Congress held in Edinburgh, UK; the 2010-2012 list followed the 23rd Congress in Kyoto, Japan; the 2012–2014 list after the 24th Congress in Cancún, Mexico; the 2014–2016 list after the 25th Congress in Hanoi, Vietnam; the 2016–2018 list after the 26th Congress in Chicago, US; the 2018–2020 list after the 27th Congress in Nairobi, Kenya; and the 2022–2023 list after the 28th Congress in Quito, Ecuador.
The 2008 IUCN Red List of Threatened Species offered assessments of 634 primate taxa, of which 303 (47.8%) were listed as threatened (vulnerable, endangered, or critically endangered). A total of 206 primate species were ranked as either critically endangered or endangered, 54 (26%) of which have been included at least once in The World's 25 Most Endangered Primates since 2000.
See also
The world's 100 most threatened species
List of primates by population
Notes
References
External links
IUCN Primate Specialist Group's Special Reports containing the latest and historic reports.
Endangered species
Environmental reports
International Union for Conservation of Nature
Lists of placental mammals
Primates, status
Primate conservation | The World's 25 Most Endangered Primates | Biology | 1,456 |
15,892,469 | https://en.wikipedia.org/wiki/Ancestral%20relation | In mathematical logic, the ancestral relation (often shortened to ancestral) of a binary relation R is its transitive closure, defined, however, in a different way; see below.
Ancestral relations make their first appearance in Frege's Begriffsschrift. Frege later employed them in his Grundgesetze as part of his definition of the finite cardinals. Hence the ancestral was a key part of his search for a logicist foundation of arithmetic.
Definition
The numbered propositions below are taken from his Begriffsschrift and recast in contemporary notation.
A property P is called R-hereditary if, whenever x is P and xRy holds, then y is also P (69): $\forall x \forall y \, ((Px \land xRy) \to Py)$
An individual b is said to be an R-ancestor of a, written aR*b, if b has every R-hereditary property that all objects x such that aRx have (76): $aR^*b \;\leftrightarrow\; \forall P \, [(\forall x \, (aRx \to Px) \land \forall x \forall y \, ((Px \land xRy) \to Py)) \to Pb]$
The ancestral is a transitive relation (98): $(aR^*b \land bR^*c) \to aR^*c$
Let the notation I(R) denote that R is functional (Frege calls such relations "many-one") (115): $I(R) \;\leftrightarrow\; \forall x \forall y \forall z \, ((xRy \land xRz) \to y = z)$
If R is functional, then the ancestral of R is what nowadays is called connected (133): $I(R) \to ((aR^*b \land aR^*c) \to (bR^*c \lor b = c \lor cR^*b))$
Relationship to transitive closure
The ancestral relation $R^*$ is equal to the transitive closure $R^+$ of $R$. Indeed, $R^*$ is transitive (see 98 above), contains $R$ (if aRb then, of course, b has every R-hereditary property that all objects x such that aRx have, because b is one of them), and finally, $R^*$ is contained in $R^+$ (indeed, assume $aR^*b$; take the property $P$ to be "$aR^+x$"; then the two premises, $\forall x \, (aRx \to aR^+x)$ and the $R$-heredity of $P$, are obviously satisfied; therefore $Pb$, which means $aR^+b$, by our choice of $P$). See also Boolos's book below, page 8.
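The equivalence can be checked mechanically over a small finite domain. The following is a minimal Python sketch (all names are illustrative, not from any source cited here): it computes the transitive closure with Warshall's algorithm and computes the ancestral directly from the second-order definition by enumerating every subset of the domain as a candidate property. Enumerating all 2^n subsets is only feasible for tiny domains, which is one reason the ancestral is best seen as a logical definition rather than an algorithm.
```python
from itertools import combinations

def transitive_closure(domain, R):
    """Transitive closure R+ of a finite relation, via Warshall's algorithm."""
    closure = set(R)
    for k in domain:
        for a in domain:
            for b in domain:
                if (a, k) in closure and (k, b) in closure:
                    closure.add((a, b))
    return closure

def ancestral(domain, R):
    """Frege's ancestral R*: aR*b iff b has every R-hereditary property
    that all immediate R-successors of a have (proposition 76)."""
    def hereditary(P):
        # P is R-hereditary iff x in P and xRy together imply y in P
        return all(y in P for (x, y) in R if x in P)

    # every subset of the domain plays the role of a candidate property P
    properties = [set(s) for r in range(len(domain) + 1)
                  for s in combinations(sorted(domain), r)]
    return {(a, b)
            for a in domain for b in domain
            if all(b in P for P in properties
                   if hereditary(P) and all(y in P for (x, y) in R if x == a))}

domain = {1, 2, 3, 4}
R = {(1, 2), (2, 3)}
assert ancestral(domain, R) == transitive_closure(domain, R)  # {(1,2),(2,3),(1,3)}
```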
Discussion
Principia Mathematica made repeated use of the ancestral, as does Quine's (1951) Mathematical Logic.
However, the ancestral relation cannot be defined in first-order logic. It is controversial whether second-order logic with standard semantics is really "logic" at all; Quine famously claimed that it was really 'set theory in sheep's clothing.' In his books setting out formal systems related to PM and capable of modelling significant portions of mathematics (in order of publication: 'A System of Logistic', 'Mathematical Logic' and 'Set Theory and its Logic'), Quine's ultimate view on the proper cleavage between logical and extralogical systems appears to be that once axioms that allow incompleteness phenomena to arise are added to a system, the system is no longer purely logical.
See also
Begriffsschrift
Gottlob Frege
Transitive closure
References
George Boolos, 1998. Logic, Logic, and Logic. Harvard Univ. Press.
Ivor Grattan-Guinness, 2000. In Search of Mathematical Roots. Princeton Univ. Press.
Willard Van Orman Quine, 1951 (1940). Mathematical Logic. Harvard Univ. Press. .
External links
Stanford Encyclopedia of Philosophy: "Frege's Logic, Theorem, and Foundations for Arithmetic" -- by Edward N. Zalta. Section 4.2.
Binary relations | Ancestral relation | Mathematics | 637 |
21,292,715 | https://en.wikipedia.org/wiki/Bear%20Swamp%20%28New%20Jersey%29 | Bear Swamp is a swamp in Cumberland County, southwestern New Jersey, notable for its of old-growth forests and the birds they contain. It is divided into two areas, Bear Swamp East and Bear Swamp West, separated from each other by gravel mines and roads.
Bear Swamp West
Bear Swamp West contains broadleaf swamp forest dominated by black gum, American sweetgum, red maple, and sweetbay magnolia. Other trees present are American beech, swamp white oak, and American holly. Some of this forest is old-growth filled with trees of impressive sizes and ages. The black gum are nearly in diameter and 600 years old. The sweetgum again nearly in diameter, and 300 years old. The red maple are over in diameter. The American holly are particularly large, reaching in diameter and tall.
Bear Swamp East
Bear Swamp East is in Belleplain State Forest. It covers and contains of old-growth forest. It has forests similar to Bear Swamp West, but with large Tulip Poplar on hummocks, some reaching in diameter and 400 years of age.
Habitat for birds
As many as 30 bald eagles nest in the swamp, and it is home to the oldest continuously occupied bald eagle nest in New Jersey. It is a breeding site for red-shouldered hawks, barred owls, and Cooper's hawks, all species of concern in the state. It is one of just two known breeding sites in southern New Jersey for pileated woodpeckers.
See also
List of old growth forests
Glades Wildlife Refuge
References
Old-growth forests
Landforms of Cumberland County, New Jersey
Swamps of New Jersey | Bear Swamp (New Jersey) | Biology | 322 |
50,711,232 | https://en.wikipedia.org/wiki/SIA%20S.p.A. | SIA S.p.A. is an Italian company operating in the area of ICT, providing services to the banking and finance sector in addition to platforms for financial markets and e-payment services.
History
The company was founded in 1977 as Società Interbancaria per l'Automazione by Banca d'Italia, ABI and a pool of Italian banks. During the 1980s, SIA created the Rete Nazionale Interbancaria (RNI – national interbank network) and contributed to developing the interbank payments system in compliance with the white paper on payment systems in Italy, published by Banca d’Italia in 1987.
In 1983 SIA launched Bancomat and in 1987 introduced POS payments.
In 1992, from a branch of SIA's business, SSB - Società per i Servizi Bancari was born, a firm specializing in services in the field of electronic money. The new company worked on further developments in payment cards: Bancomat/Pagobancomat, FASTpay, borsellino elettronico (e-purse with the MINIpay product) and the Microcircuito project for the migration from magnetic stripe cards to microchip cards.
In 1999, SIA merged with Cedborsa, changing its name to "Società Interbancaria per l'Automazione - Cedborsa S.p.A.", working on the automation of the Borsa Italiana markets and the launch of the Italian gross payments markets e-MID and MTS.
In 2003, SIA developed the interbank payments system in Romania, a condition for the country's entry into the European Union.
In the early 2000s, SIA created and managed the technology platform for the STEP2 project, the first continental ACH (automated clearing house) for retail payments in euros.
In the same period, SIA created RTGS (Real Time Gross Settlement System) platforms for the central banks of Sweden, Norway, Egypt and Palestine.
In May 2007, the merger between SIA and SSB gave birth to SIA-SSB S.p.A., a name simplified in 2011 to SIA S.p.A.
In 2012, in partnership with Colt, SIA was awarded the tender announced by the European Central Bank and became a licensed Network Service Provider appointed to create the network infrastructure connecting central securities depositories (CSDs), the central banks in the Eurosystem and major European banking groups to TARGET2-Securities (T2S).
A year later, together with the same partner, SIA won the tender announced by Deutsche Bundesbank (which was also operating on behalf of Banca d’Italia, Banque de France and Banco de España) to create the network infrastructure linking the four central banks charged with managing the single platform of the Eurosystem to settle large payments and securities transactions.
Over the course of 2013, SIA incorporated its Belgian subsidiary SiNSYS, a company operating in the processing of payment cards, and acquired Emmecom, an Italian firm in the sector of fixed, mobile and satellite telecommunications networks.
In the area of mobile payments, during 2014 SIA launched a service called Jiffy for sending and receiving cash in real time to and from a user's contacts by smartphone.
At the end of 2014, SIA incorporated its subsidiary RA Computer and the payments gateway business of its subsidiary TSP. This gave SIA direct control of the payment institution PI4PAY (effective from July 2011).
In January 2016, SIA acquired 69% of UBIQ, a startup born in 2012 from a spin-off of Parma University, specialized in designing and developing innovative technological solutions, particularly in the field of promotions, where it operates with the brand Ti Frutta.
In April 2016, the Reserve Bank of New Zealand (RBNZ), New Zealand's central bank, chose SIA to develop the new Real-Time Gross Settlement (RTGS) system, to create the new domestic interbank payments system, replacing the Exchange Settlement Account System (ESAS).
On June 2, 2016, SIA and Raphaels Bank, an issuing bank known for enabling innovation in payments, agreed to a partnership for the development and launch of payment solutions in the UK and throughout Europe.
In August 2016, ČSOB, one of the largest commercial banks in the Czech Republic and part of the Belgian KBC Group, and SIA launched the first mobile wallet for NFC payments in the Czech Republic that supports both the MasterCard and VISA circuits.
During the same month, Unicredit Business Integrated Solutions (UBIS), a company in the Unicredit Group, and SIA signed an agreement for the sale to the latter, for the sum of €500 million, of the processing activities of around 13.5 million payment cards and the management of 206,000 POS terminals and 12,000 ATMs in Italy, Germany and Austria. UBIS also signed a ten-year outsourcing contract with SIA for the supply of processing services for transactions made using debit, credit and prepaid cards, and for the management of POS and ATM terminals.
In the final part of 2016, SIA launched a series of partnerships in the e-payments market: American Express Italia chose SIA to launch a completely digital, paperless service to support requests for new credit cards, using authentication and digital signature systems. On Friday 23 December 2016, the acquisition from UBIS described above was completed, and on January 1, 2017, SIA established two new companies to manage the payment card processing activities: P4cards S.r.l., based in Verona, and Pforcards GmbH, based in Vienna.
In the spring of 2017, the Central Bank of Iceland (CBI) chose SIA to implement and support its new real-time gross settlement (RTGS) system and a new instant payment platform. These technology infrastructures, planned to go live in 2018, will replace CBI's mainframe-based real-time solutions for high- and low-value payment systems, in operation since 2001. The Central Bank of Iceland manages all interbank payments in the country; despite the small population, it processes a significant daily volume of transactions: up to 1 million payments, with a peak of 160,000 per hour.
In May, Thomson Reuters launched the “SIABookbuilding” application on its flagship desktop platform Eikon, offering sell-side professionals a new fully integrated application for the IPO syndication and distribution process. SIABookbuilding is available through App Studio, Eikon's third-party development platform.
A couple of weeks later, Poste Italiane and SIA have signed a deal that allows holders of debit cards and Postepay cards to use the Extra Sconti App, which is based on a cash-back mechanism to credit consumers' postal current accounts for supermarket purchases of brands recommended by the Extra Sconti App.
On July 27, SIA celebrated its first 40 years of existence.
On May 25, 2018, SIA and First Data Corporation have signed an agreement for SIA to acquire First Data's card processing businesses in parts of Central and Southeastern Europe for €375 million. In 2017, these businesses generated a combined revenue of approximately €100 million for First Data. This acquisition by SIA provides card processing, card production, call center and back-office services, including 13.3 million payment cards, 1.4 billion transactions, in addition to the management of POS terminals and ATMs. These businesses are primarily located in 7 countries: Greece, Croatia, Czech Republic, Hungary, Romania, Serbia and Slovakia.
On November 29, 2018 the Board of Directors of SIA, meeting under the chairmanship of Giuliano Asperti, appointed Nicola Cordone to the position of Chief Executive Officer of the Company, after having co-opted him as Director.
On 5 October 2020, it was announced that SIA would merge with Nexi, creating one of Europe's largest fintech groups.
On December 16, 2021, Nexi signed the merger deed to create one of Europe's largest payment groups. The merger with SIA became effective on January 1, 2022.
Business areas
Payment systems: clearing and settlement of gross payments, management of interbank collections and payments (e.g. credit transfers, payment orders, direct debits, bank checks), contactless payments, mobile payments, multichannel payments, procurement supporting treasury processes, document management (electronic orders, e-billing and digital custody), reconciliation of accounting flows, payment terminal handling in the domestic and international market.
Payment cards: issuing and acquiring of debit, credit and prepaid cards for all domestic and international circuits, fraud and dispute prevention and management.
Services to financial markets: trading and post trading technology platforms for financial markets and their access systems, systems for market surveillance and for monitoring and transparency of trading.
Management of databases: Interbank register of bad cheques and payment cards, CAB bank branch database, Bancomat card block service, ATM procedural register and monitoring service.
Network: connectivity and data transport services to banks and financial firms, management of the Rete Nazionale Interbancaria (RNI – national interbank network) which links Banca d'Italia, datacenters of all financial institutions, capital markets, the Public Connectivity System, credit and debit card processors and the premises and sales outlets of businesses.
Business data
In 2017, the SIA Group processed the clearing of 13.1 billion transactions overall (+7% compared to 2016), 6.1 billion card transactions (+41.1%) and 3.3 billion payment transactions (+7.1%) relating to credit transfers and collections.
On the financial markets, the number of trading and post-trading transactions rose to 56.2 billion from 47.4 billion in 2016, an increase of 18.8%.
SIA handled a traffic volume of over 784 terabytes of data, up 19.8% compared to 2016, on the 174,000 km of the SIAnet network, with 100% infrastructure availability and service levels.
Main economic and financial results
In 2017, SIA's revenues rose to €403.4 million, a growth of €12.6 million (+3.2%).
EBITDA fell to €114.6 million from €118.6 million in 2016 (-3.4%), and the operating result reached €88.5 million (-12.1%). Net profit was €63.4 million, down by €6.4 million (-9.1%) compared to the previous financial year. These results, as well as SIA's net financial position, were affected by the acquisition of the cards business unit from UBIS for a sum of €500 million.
Shareholders as at 31 December 2017
FSIA Investimenti - 57.42%
CDP Equity (Cassa Depositi e Prestiti) - 25.69%
Banco BPM - 5.33%
Mediolanum S.p.A. - 2.85%
Deutsche Bank S.p.A. - 2.58%
Others - 6.13%
SIA Group subsidiaries as at 1 January 2020
New SIA Greece Single Member S.A. - 100%
Perago FSE L.t.d. - 100%
PforCards GmbH - 100%
P4cards S.r.l. - 100%
SIAadvisor S.r.l. - 51%
SIApay S.r.l.- 100%
SIA Central Europe, a.s. - 100%
References
Italian companies established in 1977
Companies based in Milan
Business services companies established in 1977
Payment systems
Privately held companies of Italy
Real-time gross settlement | SIA S.p.A. | Technology | 2,527 |
65,505,892 | https://en.wikipedia.org/wiki/Italian%20Union%20of%20Chemical%20and%20Allied%20Industries | The Italian Union of Chemical and Allied Industries (, UILCID) was a trade union representing workers in the chemical and mining industries in Italy.
The union was founded in 1950, as the Italian Union of Chemical Workers, and was a founding affiliate of the Italian Labour Union. It grew steadily, and by 1953, had 22,006 members. In 1962, it absorbed the National Union of Mine and Quarry Workers, and renamed itself as the "Italian Union of Chemical and Allied Industries", and by 1964, it had 45,237 members.
In the summer of 1994, the union merged with the Italian Union of Oil and Gas Workers, to form the Italian Union of Chemical, Energy and Resource Workers. By 1997, the union had 61,815 members, of whom 70% worked in the chemical industry, and most of the remainder in ceramics and glass.
References
Chemical industry in Italy
Chemical industry trade unions
Mining trade unions
Trade unions in Italy
Trade unions established in 1950
Trade unions disestablished in 1994 | Italian Union of Chemical and Allied Industries | Chemistry | 204 |
55,199,641 | https://en.wikipedia.org/wiki/Edward%20Merewether%20%28physician%29 | Dr Edward Rowland Alworth Merewether FRSE CB CBE (1892–1970) was a British barrister and physician, a unique combination of the two fields. He was an expert in industrial medicine and the law connected with it, working especially on asbestosis. In 1944 he was appointed Honorary Physician to King George VI. Close colleagues called him "Uncle M". In authorship he is known as E. R. A. Merewether.
Life
He was born in Durham on 2 March 1892 the son of Alworth Edward Merewether, a naval surgeon.
He studied medicine at Durham University graduating MB BS in 1914. In the First World War he served in the Royal Navy. He received the Order of St Sava for his work in Serbia. After the war he started specialising in chest diseases.
In 1927 he joined the staff of the Factory Department of the Home Office. Here he was one of the first to identify the dangers of breathing asbestos fibre, and he also identified silicosis in sandblast operators. In 1928 he worked with Dr H. E. Seiler, Medical Officer of Health in Glasgow, examining cases of pulmonary fibrosis in asbestos workers. Merewether conclusively proved a link between asbestos and the disease.
In 1940 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Sir Thomas Oliver, Stuart McDonald, Edwin Bramwell and David Murray Lyon.
In 1943 he succeeded Dr J. C. Bridge as His Majesty's Senior Inspector of Factories in Great Britain.
He retired in 1957 and died on 13 February 1970.
Family
His great-grandfather was John Merewether.
In 1918 he married Ruth Annie Hayton Waddell. They had three daughters.
Publications
Report on the Effects of Asbestos Dust on Lungs (1930)
Industrial Medicine and Hygiene
References
1892 births
1970 deaths
People from Durham, England
20th-century British medical doctors
Fellows of the Royal Society of Edinburgh
Asbestos
Alumni of Durham University College of Medicine
Military personnel from Durham, England
Royal Navy personnel of World War I
Royal Navy sailors | Edward Merewether (physician) | Environmental_science | 410 |
3,996,332 | https://en.wikipedia.org/wiki/Plunge%20pool | A plunge pool (or plunge basin or waterfall lake) is a deep depression in a stream bed at the base of a waterfall or shut-in. It is created by the erosional forces of cascading water on the rocks at the formation's base where the water impacts. The term may refer to the water occupying the depression, or the depression itself.
Formation
Plunge pools are formed by the natural force of falling water, such as at a waterfall or cascade; they also result from man-made structures such as some spillway designs. Plunge pools are often very deep, generally related to the height of the fall, the volume of water, the resistance of the rock below the pool and other factors. The impacting and swirling water, sometimes carrying rocks within it, abrades the riverbed into a basin, which often features rough and irregular sides. Plunge pools can remain long after the waterfall has ceased flow or the stream has been diverted. Several examples of former plunge pools exist at Dry Falls in the Channeled Scablands of eastern Washington. They can also be found underwater in areas that were formerly above sea level, for example, Perth Canyon off the coast of Western Australia.
Plunge pools are fluvial features of erosion which occur in the youthful stage of river development, characterized by steeper gradients and faster water flows. Where softer or fractured rock has been eroded back to a knickpoint, water continues to bombard its base. Because this rock is often less resistant than overlying strata, the water from the higher elevation continues eroding downward until an equilibrium is achieved.
A somewhat similar bowl-shaped feature developed by flowing water, as opposed to falling water, is known as a scour hole. These occur both naturally and as a result of bridge building.
See also
Stream pool
References
External links
USGS: Stream Modeling website
Bodies of water
Erosion landforms
Fluvial landforms
Garden features
Geomorphology
Hydrology
Natural pools
Swimming pools
Water streams | Plunge pool | Chemistry,Engineering,Environmental_science | 398 |
57,350,509 | https://en.wikipedia.org/wiki/Denis%20Albert%20Bardou | Denis Albert Bardou (15 February 1841 – 14 March 1893) was a French manufacturer of precision optical instruments.
Early life
He was born in Paris, the son of Pierre Gabriel Bardou, optician, and Gertrude Aglaé Anna Guichard. Denis Albert's grandfather had founded the Maison Bardou in 1819, an optical company in Paris, which had then passed to his father.
Career
In 1865, Denis Albert assumed control of the family business. The company was located at his residence at 55, rue de Chabrol.
The company manufactured and sold astronomical telescopes, spyglasses, binoculars, microscopes and opera glasses. The telescopes included both equatorial and azimuthal models with silvered glass mirrors (10, 16, 20 cm). Between 1867 and 1891 the Bardou company won numerous awards at expositions of Le Havre, Philadelphia, and Paris, including a gold medal at the Exposition Universelle in Paris in 1889. It furnished optical instruments to the French Ministère de la Guerre, Ministère de la Marine and to foreign governments.
The Maison Bardou and its fellow Paris-based competitors the Secrétan and Mailhat companies were among the leading French precision optics manufacturers of the early twentieth century. Bardou telescopes and optical products were widely exported to Europe, the United States and further afield.
Other activities
Bardou became a member of the Société astronomique de France in 1888 (only one year after it was established). Advertisements for his company's telescopes appeared frequently in the pages of the society's bulletin.
Death and legacy
Bardou died on 14 March 1893 in his home in Paris.
In 1896, Jules Vial, an engineer, became the successor to the Maison Bardou. He continued manufacturing telescopes under the name “Bardou” or “Bardou-Vial” for at least the next 15 years. By 1899, the company had moved to 59, rue Caulaincourt, Paris.
Notable telescopes
Besides manufacturing small telescopes, Bardou also built large ones upon request.
When Camille Flammarion built his observatory in Juvisy-sur-Orge in 1883, he commissioned Bardou to construct the large equatorial mount refracting telescope of 240 mm diameter and 3600 mm focal length.
In 1889, the Société Astronomique de France commissioned Bardou to build an equatorial mount refractor with a 108 mm diameter for the Observatory of the rue Serpente atop its new headquarters in the 6th arrondissement of Paris.
References
Telescope manufacturers
1841 births
1893 deaths | Denis Albert Bardou | Astronomy | 507 |
2,083,415 | https://en.wikipedia.org/wiki/Navigation%20mesh | A navigation mesh, or navmesh, is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces. This approach has been known since at least the mid-1980s in robotics, where it has been called a meadow map, and was popularized in video game AI in 2000.
Description
A navigation mesh is a collection of two-dimensional convex polygons (a polygon mesh) that define which areas of an environment are traversable by agents. In other words, a character in a game could freely walk around within these areas unobstructed by trees, lava, or other barriers that are part of the environment. Adjacent polygons are connected to each other in a graph.
Pathfinding within one of these polygons can be done trivially in a straight line because the polygon is convex and traversable. Pathfinding between polygons in the mesh can be done with any of a large number of graph search algorithms, such as A*. Agents on a navmesh can thus avoid computationally expensive collision detection checks with obstacles that are part of the environment.
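To make the two-level scheme concrete, here is a minimal Python sketch of A* over the polygon-adjacency graph. The details are illustrative assumptions, not a fixed convention: each polygon is summarized by its centroid, edge costs are centroid-to-centroid distances, and the straight-line distance to the goal centroid serves as an admissible heuristic. A real system would typically refine the returned polygon corridor into an actual walk path, for example with a funnel algorithm.
```python
import heapq
import math

def astar(centroids, neighbors, start, goal):
    """Return a sequence of polygon ids from start to goal, or None."""
    def h(p):  # straight-line distance to the goal centroid (admissible)
        return math.dist(centroids[p], centroids[goal])

    open_set = [(h(start), start)]      # priority queue of (f-score, polygon)
    g = {start: 0.0}                    # best known cost from start
    came_from = {}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:             # reconstruct the polygon corridor
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for nxt in neighbors[current]:
            cost = g[current] + math.dist(centroids[current], centroids[nxt])
            if cost < g.get(nxt, math.inf):
                g[nxt] = cost
                came_from[nxt] = current
                heapq.heappush(open_set, (cost + h(nxt), nxt))
    return None

# Four convex polygons: a chain 0-1-2-3 plus a direct doorway 0-3.
centroids = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(astar(centroids, neighbors, 0, 3))  # -> [0, 3]
```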
Representing traversable areas in a 2D-like form simplifies calculations that would otherwise need to be done in the "true" 3D environment, yet unlike a 2D grid it allows traversable areas that overlap above and below at different heights. The polygons of various sizes and shapes in navigation meshes can represent arbitrary environments with greater accuracy than regular grids can.
Creation
Navigation meshes can be created manually, automatically, or by some combination of the two. In video games, a level designer might manually define the polygons of the navmesh in a level editor. This approach can be quite labor intensive. Alternatively, an application could be created that takes the level geometry as input and automatically outputs a navmesh.
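One step such a tool must perform is deriving the adjacency graph from the generated polygons. Below is a minimal sketch under a simplifying assumption (adjacent polygons share an edge with exactly matching vertex coordinates, so an edge can be keyed by its unordered vertex pair); the input format is an illustrative choice.
```python
from collections import defaultdict

def build_adjacency(polygons):
    """polygons: dict id -> list of (x, y) vertices in boundary order.
    Two polygons are adjacent when they share an undirected edge."""
    edge_owners = defaultdict(list)
    for pid, verts in polygons.items():
        for i in range(len(verts)):
            # key each boundary edge by its unordered endpoint pair
            edge = frozenset((verts[i], verts[(i + 1) % len(verts)]))
            edge_owners[edge].append(pid)
    neighbors = defaultdict(set)
    for owners in edge_owners.values():
        for a in owners:
            for b in owners:
                if a != b:
                    neighbors[a].add(b)
    return dict(neighbors)

# Two triangles sharing the edge (1, 0)-(0, 1):
polys = {0: [(0, 0), (1, 0), (0, 1)], 1: [(1, 0), (1, 1), (0, 1)]}
print(build_adjacency(polys))  # -> {0: {1}, 1: {0}}
```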
It is commonly assumed that the environment represented by a navmesh is static – it does not change over time – and thus the navmesh can be created offline and be immutable. However, there has been some investigation of online updating of navmeshes for dynamic environments.
History
In robotics, using linked convex polygons in this manner has been called "meadow mapping", coined in a 1986 technical report by Ronald C. Arkin.
Navigation meshes in video game artificial intelligence are usually credited to Greg Snook's 2000 article "Simplified 3D Movement and Pathfinding Using Navigation Meshes" in Game Programming Gems. In 2001, J.M.P. van Waveren described a similar structure with convex and connected 3D polygons, dubbed the "Area Awareness System", used for bots in Quake III Arena.
Notes
References
External links
UDK: Navigation Mesh Reference
Unity: Navigation Overview
Source Engine: Navigation Meshes
Urho3D: Navigation
Godot Engine Navigation
Cry Engine Navigation And AI
Graph data structures
Video game development
Computational physics
Robotics engineering | Navigation mesh | Physics,Technology,Engineering | 601 |
1,785,141 | https://en.wikipedia.org/wiki/ZTE | ZTE Corporation is a Chinese partially state-owned technology company that specializes in telecommunications. Founded in 1985, ZTE is listed on both the Hong Kong and Shenzhen Stock Exchanges.
ZTE's core business is wireless, exchange, optical transmission, data telecommunications gear, telecommunications software, and mobile phones. ZTE primarily sells products under its own name, but it is also an OEM.
The company has faced criticism in the United States, India, and Sweden over ties to the Chinese government that could enable mass surveillance. In 2017, ZTE was fined for illegally exporting U.S. technology to Iran and North Korea in violation of economic sanctions. In April 2018, after the company failed to properly reprimand the employees involved, the U.S. Department of Commerce banned U.S. companies from exporting products, including semiconductors, to ZTE for seven years. The ban was lifted in July 2018 after ZTE replaced its senior management and agreed to pay additional fines and establish an internal compliance team for 10 years. In June 2020, the Federal Communications Commission (FCC) designated ZTE a national security threat. In 2023, the European Commission banned ZTE from providing telecommunication services to the Commission itself.
History
ZTE, initially founded as Zhongxing Semiconductor Co., Ltd in Shenzhen, Guangdong province, in 1985, was incorporated by a group of investors associated with China's Ministry of Aerospace Industry. In March 1993, Zhongxing Semiconductor changed its name to Zhongxing New Telecommunications Equipment Co., Ltd with capital of RMB 3 million, and created a new business model as a "state-owned and private-operating" economic entity. ZTE made an initial public offering (IPO) on the Shenzhen stock exchange in 1997 and another on the Hong Kong stock exchange in December 2004.
While the company initially profited from domestic sales, it vowed to use the proceeds of its 2004 Hong Kong IPO to further expand R&D, overseas sales to developed nations, and overseas production. Making headway in the international telecom market in 2006, it took 40% of new global orders for CDMA networks, topping the world CDMA equipment market by number of shipments. That same year, ZTE gained the Canadian carrier Telus as a customer and joined the Wi-Fi Alliance.
By 2009, the company had become the third-largest vendor of GSM telecom equipment worldwide, and about 20% of all GSM gear sold throughout the world that year was ZTE branded. As of 2011, it holds around 7% of the LTE patents.
In 2023, the World Intellectual Property Organization (WIPO)'s Annual PCT Review ranked ZTE 11th in the world by number of patent applications published under the PCT System, with 1,738 patent applications published during the year.
U.S. sanctions and import ban
In March 2017, ZTE pleaded guilty to illegally exporting U.S. technology to Iran and North Korea in violation of trade sanctions, and was fined a total of US$1.19 billion by the U.S. Department of Commerce. It was the largest-ever U.S. fine for export control violations.
ZTE was allowed to continue working with U.S. companies, provided that it properly reprimand all employees involved in the violations. However, the Department of Commerce found that ZTE had violated these terms and made false statements regarding its compliance, having fired only 4 senior officials and still providing bonuses to 35 other employees involved in the violations. On 16 April 2018, the Department of Commerce banned U.S. companies from providing exports to ZTE for seven years. At least 25% of components on recent ZTE smartphones originated from the U.S., including Qualcomm processors and certified Android software with Google Mobile Services. An analyst stated that it would take a significant amount of effort for ZTE to redesign its products as to not use U.S.-originated components.
On 9 May 2018, ZTE announced that, although it was "actively communicating with the relevant U.S. government departments" to reverse the export ban, it had suspended its "major operating activities" (including manufacturing) and trading of its shares. On 13 May 2018, U.S. president Donald Trump stated that he would be working with Chinese president Xi Jinping to reverse the ban. It was argued that the export ban was being used as leverage by the United States as part of an ongoing trade dispute with China. On 7 June 2018, ZTE agreed to a settlement with the Department of Commerce in order to lift the export ban. The company agreed to pay a US$1 billion fine, place an additional US$400 million of suspended penalty money in escrow, replace its entire senior management, and establish a compliance department selected by the department.
Later that month, the U.S. Senate passed a version of the National Defense Authorization Act for Fiscal Year 2019 that blocked the settlement, and banned the federal government from purchasing equipment from Huawei and ZTE (citing them as national security risks due to risks of Chinese government surveillance). The settlement was criticized by Senators as being "personal favors" between Trump and the Chinese government, as the Chinese government issued a loan for an Indonesian theme park project with a Trump golf course following the May 2018 announcement. However, the House version of the bill, signed by Trump, did not include the provision blocking the settlement, but still included the ban on federal purchase of Huawei and ZTE products.
On 13 July 2018, the denial order was officially lifted.
In January 2019, it became public that ZTE has retained the services of former senator Joe Lieberman as a lobbyist.
In June 2020, the Federal Communications Commission (FCC) designated ZTE as a threat to U.S. communications networks. In July 2020, the U.S. government banned companies that use ZTE from receiving federal contracts. The FCC denied the company's appeal of the decision in November 2020.
In September 2020, the U.S. Department of Justice filed a criminal complaint against ZTE accusing it of using two shell companies named Ryer International Trading and Rensy International Trading to violate sanctions against North Korea. In December 2020, the U.S. Congress included $1.9 billion to help telecom carriers in rural areas of the U.S. to remove ZTE equipment and networks they had previously purchased.
In January 2021, Gina Raimondo, President Joe Biden's nominee for United States Secretary of Commerce, said in her confirmation hearings that she would protect U.S. networks from interference by Chinese companies including ZTE. In June 2021, the FCC voted unanimously to prohibit approvals of ZTE gear in U.S. telecommunication networks on national security grounds.
In March 2022, ZTE was accused of violating its probation from its guilty plea for sanctions violations. After President Joe Biden signed into law the Secure Equipment Act of 2021, in November 2022, the FCC banned sales or import of equipment made by ZTE for national security reasons.
Bribery investigation
In 2020, it was disclosed that the United States Department of Justice opened an investigation into ZTE for potential violations of the Foreign Corrupt Practices Act.
Ownership
Zhongxing Xin (aka ZTE Holdings), an intermediate holding company, owned a 27.40% stake in ZTE. The shareholders of ZTE Holdings were Xi'an Microelectronics (a subsidiary of the state-owned China Academy of Aerospace Electronics Technology) with 34%, Aerospace Guangyu (a subsidiary of the state-owned China Aerospace Science and Industry Corporation Shenzhen Group) with 14.5%, Zhongxing WXT (aka Zhongxing Weixiantong) with 49%, and the private equity fund Guoxing Ruike with 2.5%. The first two shareholders are state-owned enterprises and nominate 5 of the 9 directors of ZTE Holdings, while Zhongxing WXT, owned by the founders of ZTE including Hou Weigui, nominates the remaining 4 directors.
The mixed ownership model of ZTE was described as "a firm is an SOE from the standpoint of ownership, but a POE [privately owned enterprises] from the standpoint of management" by an article in The Georgetown Law Journal. ZTE described itself as "state-owned and private-run". The South China Morning Post and the Financial Times have both described ZTE as state-owned. Other scholars have noted the links between ZTE's state-owned shareholders and the People's Liberation Army.
Subsidiaries
ZTE has several international subsidiaries in countries including Indonesia, Australia, Germany, the United States, India, Brazil, Sri Lanka, Myanmar, Singapore, and Romania.
ZTEsoft engages in the ICT industry and specializes in providing BSS/OSS, big data products and services to telecom operators, and ICT, smart city and industry products and services to enterprises and governments.
Nubia Technology was a fully owned subsidiary of ZTE Corporation. ZTE subsequently disposed of the majority of its equity in Nubia, reducing its stake to 49.9% in 2017.
Zonergy is a renewables company with interests in electricity generation through solar parks in China and Pakistan and palm oil cultivation in Indonesia to produce biofuels. ZTE is a major shareholder and was instrumental in the creation of the company in 2007 but holds a minority of the shares in the entity.
ZTE agreed to take over a 48% stake in Turkish company Netaş Telekomünikasyon A.Ş. for $101.3 million from the American private equity firm One Equity Partners in December 2016. Following the acquisition in August 2017, ZTE became its largest shareholder, while Netaş remains an independent company.
Products
ZTE operates in three business segments: carrier networks, government and corporate business, and consumer business. In October 2010, ZTE's unified encryption module received U.S./Canada FIPS 140-2 security certification.
ZTE was also reported to have developed identification cards for Venezuela that were allegedly used for tracking and social control.
Customers
During the 2000s, the majority of ZTE's customers were mobile network operators in the developing world, but ZTE products also saw use in developed countries. Operators including Britain's Vodafone, Canada's Telus and Fido, Australia's Telstra, and France Telecom have all purchased equipment from ZTE.
Many Chinese telecommunications operators are also clients of ZTE, including China Netcom, China Mobile, China Satcom, China Telecom, and China Unicom.
ZTE began to offer smartphones in the United States in 2011. The company elected to focus its efforts on low-cost products for discount and prepaid wireless carriers, including devices with premium features typically associated with high-end products, such as large high-resolution screens and fingerprint readers.
Sponsorship
In May 2016, ZTE became the co-sponsor of the German soccer team Borussia Mönchengladbach.
Since 2015, several U.S.-based National Basketball Association teams have had sponsorship deals with ZTE, including the Houston Rockets, Golden State Warriors, and New York Knicks.
Controversies
Bans
ZTE has been banned in multiple countries over national security concerns and alleged spying.
Bribes for contracts
Norway
Norwegian telecommunications giant Telenor, one of the world's largest mobile operators, banned ZTE from "participating in tenders and new business opportunities for 6 months because of an alleged breach of its code of conduct in a procurement proceeding" during a five-month time span ending in March 2009.
Philippines
Contracts with ZTE to build a broadband network for the Philippine government reportedly involved kickbacks to government officials. The project was later cancelled.
West Africa
Court documents filed in the US show that ZTE had a practice of handing over “brown paper bags” of cash to win contracts in West Africa. The company had an entire department dedicated to bribe management.
Surveillance system sale
In December 2010, ZTE sold systems for eavesdropping on phone and Internet communications to the government-controlled Telecommunication Company of Iran. This system may help Iran monitor and track political dissidents.
Security
At least one ZTE mobile phone (sold as the ZTE Score in the United States by Cricket and MetroPCS) can be remotely accessed by anyone with an easily obtained password.
ZTE, as well as Huawei, has faced scrutiny by the U.S. federal government over allegations that Chinese government surveillance could be performed through its handsets and infrastructure equipment. In 2012, the House Permanent Select Committee on Intelligence issued a report recommending that the government be prohibited from purchasing equipment from the firms, citing them as possible threats to national security. A ban on government purchases of Huawei and ZTE equipment was formalized in a defense funding bill passed in August 2018.
Following the 2020–2021 China–India skirmishes, India announced that ZTE would be blocked from participating in the country's 5G network for national security reasons. Sweden has also banned the use of ZTE telecommunications equipment in its 5G network on the advice of its military and security service, which said China is "one of the biggest threats against Sweden."
Operations in Russia
During the Russian invasion of Ukraine, ZTE refused to withdraw from the Russian market. Research from Yale University published on 10 August 2022 identified ZTE among the companies defying demands to exit Russia or reduce business activities.
References
External links
1997 initial public offerings
Chinese brands
Chinese companies established in 1985
Civilian-run enterprises of China
Companies formerly in the Hang Seng China Enterprises Index
Companies listed on the Hong Kong Stock Exchange
Companies listed on the Shenzhen Stock Exchange
Companies in the CSI 100 Index
Computer companies of China
Computer hardware companies
Defence companies of the People's Republic of China
Electronics companies established in 1985
Electronics companies of China
Government-owned companies of China
Mobile phone companies of China
Mobile phone manufacturers
Multinational companies headquartered in China
Networking hardware companies
Telecommunication equipment companies of China
Telecommunications equipment vendors
Mass surveillance in China
1985 in Shenzhen | ZTE | Technology | 2,874 |
18,112,972 | https://en.wikipedia.org/wiki/Prince%20Philip%20Medal | The Prince Philip Medal is named after Prince Philip, Duke of Edinburgh, who was the Senior Fellow of the Royal Academy of Engineering (RAE). In 1989 Prince Philip agreed to the commissioning of solid gold medals to be "awarded periodically to an engineer of any nationality who has made an exceptional contribution to engineering as a whole through practice, management or education." The first of these medals was awarded in 1991 to Air Commodore Sir Frank Whittle.
Background
The Prince Philip medal is awarded through the Royal Academy of Engineering. Nominations are opened around September each year.
Candidates can be of any nationality, making it an international award. Although it is an annual award, the medal is not awarded in years when there is no suitably qualified candidate. Winners have come from both industry and academia.
Winners
Previous recipients of the Prince Philip medal were:
Others
A different medal, also known as the Prince Philip Medal, is the gold medal awarded by the City & Guilds of London Institute.
See also
List of engineering awards
Notes
International awards
Awards established in 1991
Awards of the Royal Academy of Engineering
1991 establishments in the United Kingdom
Medal | Prince Philip Medal | Technology | 225 |
3,200,976 | https://en.wikipedia.org/wiki/Etioplast | Etioplasts are an intermediate type of plastid that develop from proplastids that have not been exposed to light, and convert into chloroplasts upon exposure to light. They are usually found in stem and leaf tissue of flowering plants (Angiosperms) grown either in complete darkness, or in extremely low-light conditions.
Etymology
The word "etiolated" (from French word étioler — "straw") was first coined by Erasmus Darwin in 1791 to describe the white and straw-like appearance of dark-grown plants. However, the term "etioplast" did not exist until 1967 when it was invented by John T. O. Kirk and Richard A. E. Tilney-Bassett to distinguish etioplasts from proplastids, their precursors.
Structure
Etioplasts are characterized by the absence of chlorophyll and the presence of a complicated structure called a prolamellar body (PLB). Usually, a single one is present in each etioplast. The PLB is composed of symmetrically arranged, tetrahedrally branched tubules and may contain ribosomes and plastoglobules inside. The latter are rich in carotenoids, especially lutein and violaxanthin, which may help in the transition to chloroplasts. Due to the higher presence of carotenoids than protochlorophyllide, etiolated leaves appear pale yellow instead of just white.
Transition to chloroplast
Every PLB contains protochlorophyllide which is rapidly converted into chlorophyllide by the enzyme protochlorophyllide reductase upon exposure to light. Following this, chlorophyllide is converted to chlorophyll through enzymatic processes. This is stimulated by plant growth hormones: cytokinins and gibberellins. The structure of PLB itself is almost immediately disrupted, and thylakoid and grana development is started in reaction to light: photosystem I activates within 15 minutes, photosystem II within 2 hours, and after approximately 3 hours an etioplast is completely converted into a functional chloroplast.
The transitional stage between an etioplast and a chloroplast which still contains small PLBs interconnected with developing thylakoids, but already has chlorophyll is sometimes called an "etio-chloroplast". Etioplasts were once thought to be laboratory artefacts not found in nature, but that has since been disproven: in cabbage heads, developing inner leaves contain etioplasts due to being shaded by outer leaves; seedlings that naturally germinate underground may also contain etioplasts.
See also
Plastid
Chloroplast
Chromoplast
Leucoplast
Amyloplast
Elaioplast
Proteinoplast
Gerontoplast
References
Organelles
Plant physiology | Etioplast | Biology | 625 |
23,950,388 | https://en.wikipedia.org/wiki/N-slit%20interferometer | The N-slit interferometer is an extension of the double-slit interferometer also known as Young's double-slit interferometer. One of the first known uses of N-slit arrays in optics was illustrated by Newton. In the first part of the twentieth century, Michelson described various cases of N-slit diffraction.
Feynman described thought experiments that explored two-slit quantum interference of electrons, using Dirac's notation. This approach was extended to N-slit interferometers by Duarte and colleagues in 1989, using narrow-linewidth laser illumination, that is, illumination by indistinguishable photons. The first application of the N-slit interferometer was the generation and measurement of complex interference patterns. These interferograms are accurately reproduced, or predicted, by the N-slit interferometric equation for either even (N = 2, 4, 6,...), or odd (N = 3, 5, 7,...), numbers of slits.
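The interference patterns can be sketched numerically. The following is a minimal idealised model that treats the N slits as point sources and sums their phasor contributions in the far field; the slit count, spacing, and wavelength are illustrative values, and this is not Duarte's interferometric formulation:

```python
import numpy as np

N = 5                   # number of slits (illustrative)
d = 30e-6               # slit spacing in metres (illustrative)
wavelength = 632.8e-9   # He-Ne laser wavelength in metres (illustrative)

theta = np.linspace(-0.05, 0.05, 2001)   # observation angles in radians

# Phase difference between adjacent slits at each angle
delta = 2.0 * np.pi * d * np.sin(theta) / wavelength

# Coherent sum of N unit-amplitude phasors, one per slit
amplitude = sum(np.exp(1j * n * delta) for n in range(N))
intensity = np.abs(amplitude) ** 2 / N**2   # normalised so principal maxima equal 1

print(f"peak normalised intensity: {intensity.max():.3f}")
```

Between adjacent principal maxima this idealised model yields the familiar N − 2 secondary maxima that distinguish the even- and odd-N patterns.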
N-slit laser interferometer
The N-slit laser interferometer, introduced by Duarte, uses prismatic beam expansion to illuminate a transmission grating, or N-slit array, and a photoelectric detector array (such as a CCD or CMOS) at the interference plane to register the interferometric signal. The expanded laser beam illuminating the N-slit array is single-transverse-mode and narrow-linewidth. This beam can also take the shape, via the introduction of a convex lens prior to the prismatic expander, of a beam extremely elongated in the propagation plane and extremely thin in the orthogonal plane. This use of one-dimensional (or line) illumination eliminates the need for point-by-point scanning in microscopy and microdensitometry. Thus, these instruments can be used as straightforward N-slit interferometers or as interferometric microscopes.
The disclosure of this interferometric configuration introduced the use of digital detectors to N-slit interferometry.
Applications
Secure optical communications
These interferometers, originally introduced for applications in imaging, are also useful in optical metrology and have been proposed for secure optical communications in free space, between spacecraft. This is due to the fact that propagating N-slit interferograms suffer catastrophic collapse from interception attempts using macroscopic optical methods such as beam splitting. Recent experimental developments include terrestrial intra-interferometric path lengths of 35 meters and 527 meters.
These large, and very large, N-slit interferometers are used to study various propagation effects including microscopic disturbances on propagating interferometric signals. This work has yielded the first observation of diffraction patterns superimposed over propagating interferograms.
These diffraction patterns are generated by inserting a spider web fiber (or spider silk thread) into the propagation path of the interferogram. The position of the spider web fiber is perpendicular to the propagation plane.
Clear air turbulence
N-slit interferometers, using large intra interferometric distances, are detectors of clear air turbulence. The distortions induced by clear air turbulence upon the interferometric signal are different, in both character and magnitude, from the catastrophic collapse resulting from attempted interception of optical signals using macroscopic optical elements.
Expanded beam interferometric microscopy
The original application of the N-slit laser interferometer was interferometric imaging. In particular, the one dimensionally expanded laser beam (with a cross section 25-50 mm wide by 10-25 μm high) was used to illuminate imaging surfaces (such as silver-halide films) to measure the microscopic density of the illuminated surface. Hence the term interferometric microdensitometer. Resolution down to the nano regime can be provided via the use of interinterferometric calculations. When used as a microdensitometer the N-slit interferometer is also known as a laser microdensitometer.
The multiple-prism expanded laser beam is also described as an extremely elongated laser beam. The elongated dimension of the beam (25-50 mm) is in the propagation plane while the very thin dimension (in the μm regime) of the beam is in the orthogonal plane. This was demonstrated, for imaging and microscopy applications, in 1993. Alternative descriptions of this type of extremely elongated illumination include the terms line illumination, linear illumination, thin light sheet illumination (in light sheet microscopy), and plane illumination (in selective plane illumination microscopy).
Other applications
N-slit interferometers are of interest to researchers working in atom optics, Fourier imaging, optical computing, and quantum computing.
See also
Beam expander
Clear air turbulence
Diffraction from slits
Double-slit experiment
Free-space optical communication
Laser communication in space
Microscopy
Microdensitometer
N-slit interferometric equation
List of laser articles
References
Interference
Interferometry
Interferometers
Optical instruments
American inventions | N-slit interferometer | Technology,Engineering | 1,033 |
32,523,543 | https://en.wikipedia.org/wiki/Puppis%20in%20Chinese%20astronomy | According to traditional Chinese uranography, the modern constellation Puppis is located within the southern quadrant of the sky, which is symbolized as the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què).
The name of the western constellation in modern Chinese is 船尾座 (chuán wěi zuò), meaning "the ship's stern constellation".
Stars
The map of the Chinese constellations in the area of Puppis consists of:
See also
Chinese astronomy
Traditional Chinese star names
Chinese constellations
References
External links
Puppis – Chinese associations
香港太空館研究資源
中國星區、星官及星名英譯表
天象文學
台灣自然科學博物館天文教育資訊網
中國古天文
中國古代的星象系統
Astronomy in China
Puppis | Puppis in Chinese astronomy | Astronomy | 174 |
2,157,145 | https://en.wikipedia.org/wiki/Rollin%20film | A Rollin film, named after Bernard V. Rollin, is a -thick liquid film of helium in the helium II state. It exhibits a "creeping" effect in response to surfaces extending past the film's level (wave propagation). Helium II can escape from any non-closed container via creeping toward and eventually evaporating from capillaries of or greater.
Rollin films are involved in the fountain effect where superfluid helium leaks out of a container in a fountain-like manner. They have high thermal conductivity.
The ability of superfluid liquids to cross obstacles that lie at a higher level is often referred to as the Onnes effect, named after Heike Kamerlingh Onnes. The Onnes effect is enabled by the capillary forces dominating gravity and viscous forces.
Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. The film suffers a change in chemical potential when the thickness varies. These waves are known as third sound.
Thickness of the film
The thickness of the film can be calculated by an energy balance. Consider a small fluid volume element which is located at a height $h$ above the free surface. The potential energy per unit volume due to the gravitational force acting on the fluid element is $\rho g h$, where $\rho$ is the total density and $g$ is the gravitational acceleration. The quantum kinetic energy per particle is $\hbar^2/(2 m d^2)$, where $d$ is the thickness of the film and $m$ is the mass of the particle. Therefore, the net kinetic energy per unit volume is given by $n_0 (\rho/m)\,\hbar^2/(2 m d^2)$, where $n_0$ is the fraction of atoms which are in the Bose–Einstein condensate. Minimizing the total energy with respect to the thickness provides the value of the thickness:
$$d = \sqrt{\frac{n_0 \hbar^2}{2 m^2 g h}}.$$
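A rough numerical evaluation of this formula (the condensate fraction $n_0 = 0.1$ is an assumed illustrative value; in helium II it depends on temperature):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 6.6464731e-27        # mass of a helium-4 atom, kg
g = 9.81                 # gravitational acceleration, m/s^2
n0 = 0.1                 # assumed Bose-Einstein condensate fraction (illustrative)
h = 0.01                 # height above the free surface, m

# d = sqrt(n0 * hbar^2 / (2 * m^2 * g * h))
d = math.sqrt(n0 * hbar**2 / (2.0 * m**2 * g * h))
print(f"estimated film thickness at h = 1 cm: {d * 1e9:.1f} nm")
```

The result is of order 10 nm, in line with the nanometre-scale film thicknesses observed for helium II.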
See also
Zero sound
Second sound
References
External links
Video of the property in action
Video: Liquid Helium, Superfluid: demonstrating Lambda point transition/viscosity paradox/two fluid model/fountain effect/Rollin film/second sound (Alfred Leitner, 1963, 38 min.)
Helium
Bose–Einstein condensates
Fluid mechanics
Superfluidity
de:Rollin-Film | Rollin film | Physics,Chemistry,Materials_science,Engineering | 430 |
1,022,471 | https://en.wikipedia.org/wiki/Solarium%20%28myrmecology%29 | In myrmecology, a solarium is an above-ground earthen structure constructed by some ant species for the purpose of nest thermoregulation and brood incubation. Solaria are usually dome-shaped and fashioned from a paper-thin layer of soil, connected to the main nest by way of subterranean runs. Some species, such as Formica candida, construct solaria using plant materials.
Tapinoma erraticum is an example of a solaria-constructing species whose skill at so doing was noted by Horace Donisthorpe in the early 20th century in his book British Ants, their Life Histories and Classification.
References
Myrmecology
Shelters built or used by animals | Solarium (myrmecology) | Biology | 141 |
13,290,757 | https://en.wikipedia.org/wiki/Metagame%20analysis | Metagame analysis involves framing a problem situation as a strategic game in which participants try to realise their objectives by means of the options available to them. The subsequent meta-analysis of this game gives insight in possible strategies and their outcome.
Origin
Metagame theory was developed by Nigel Howard in the 1960s as a reconstruction of mathematical game theory on a non-quantitative basis, hoping that it would thereby make more practical and intuitive sense. Metagame analysis reflects on a problem in terms of decision issues, and stakeholders who may exert different options to gain control over these issues. The analysis reveals what likely scenarios exist, and who has the power to control the course of events. The practical application of metagame theory is based on the analysis of options method, first applied to study problems like the strategic arms race and nuclear proliferation.
Method
Metagame analysis proceeds in three phases: analysis of options, scenario development, and scenario analysis.
Analysis of options
The first phase of analysis of options consists of the following four steps:
Structure the problem by identifying the issues to be decided.
Identify the stakeholders who control the issues, either directly or indirectly.
Make an inventory of policy options by means of which the stakeholders control the issues.
Determine the dependencies between the policy options.
The dependencies between options should typically be formulated as "option X can only be implemented if option Y is also implemented", or "options Y and Z are mutually exclusive". The result is a metagame model, which can then be analysed in different ways.
Scenario development
The possible outcomes of the game, based on the combination of options, are called scenarios. In theory, in a game with N stakeholders $s_1, \ldots, s_N$ who control $O_i$ options each ($i = 1, \ldots, N$), there are $O_1 \times \cdots \times O_N$ possible outcomes. As the number of stakeholders and the number of options they have increase, the number of scenarios increases steeply due to a combinatorial explosion. Conversely, the dependencies between options reduce the number of scenarios, because they rule out those containing logically or physically impossible combinations of options.
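As a minimal sketch of this bookkeeping (the options and dependency rules below are invented for illustration), one can enumerate every combination of option selections and keep only those satisfying the stated dependencies:

```python
from itertools import product

options = ["X", "Y", "Z"]   # hypothetical options; each is taken (True) or not (False)

def feasible(selection):
    """Dependency rules of the kind used in metagame models."""
    s = dict(zip(options, selection))
    if s["X"] and not s["Y"]:   # "X can only be implemented if Y is also implemented"
        return False
    if s["Y"] and s["Z"]:       # "Y and Z are mutually exclusive"
        return False
    return True

scenarios = [sel for sel in product([False, True], repeat=len(options)) if feasible(sel)]
print(f"{len(scenarios)} feasible scenarios out of {2 ** len(options)} combinations")
for sel in scenarios:
    print(dict(zip(options, sel)))
```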
If the set of feasible scenarios is too large to be analysed in full, some combinations may be eliminated because the analyst judges them to be not worth considering. When doing so, the analyst should take care to preserve these particular types of scenarios:
The Status Quo, representing the future as it was previously expected.
The present scenario, which may differ from the Status Quo as it incorporates the intentions that are expressed by the stakeholders to change their plans; the Status Quo necessarily remains the same, but the present scenario may change as stakeholders interact and influence each other's plans.
The positions of different stakeholders, being the scenarios they would like others to agree to. Similar to the present scenario, positions may change through interaction.
Compromises between two stakeholders, defined as scenarios that, while not the position of either, are preferred by both to the other's position. A compromise does not necessarily have to involve all stakeholders.
Conflict points, defined as scenarios that stakeholders might move to in trying to force others to accept their positions.
Scenario analysis
The next step in the metagame analysis consists of the actual analysis of the scenarios generated so far. This analysis centres around stability and is broken down in the following four steps:
Choose a particular scenario to analyse for stability. A scenario is stable if "each stakeholder expects to do its part and expects others to do theirs." Note that stable scenarios are accepted by all stakeholders, but that acceptance does not need to be voluntary. There may be more than one stable scenario, the stability of a scenario may change, and unstable scenarios can also happen.
Identify all unilateral improvements for stakeholders and subsets of stakeholders from the particular scenario. These are all the scenarios that are both preferred by all members of a certain subset and 'reachable' by them alone changing their selection of individual options.
Identify all sanctions that exist to deter the unilateral improvements. A sanction against an improvement is a possible reaction to an improvement by the stakeholders who were not involved in the improvement. It is such that the stakeholder who was involved in the improvement finds the sanction not preferred to the particular scenario, making it not worthwhile for that stakeholder to have helped with the improvement. The general "law of stability" to be used in scenario analysis is: for a scenario to be stable, it is necessary for each credible improvement to be deterred by a credible sanction. Steps 1 to 3 need to be repeated to analyse additional scenarios. When a number of scenarios have been analysed, one can proceed to the next step:
Draw a strategic map, laying out all the threats and promises stakeholders can make to try to stabilise the situation at scenarios they prefer. Strategic maps are diagrams in which scenarios are shown by balloons, with arrows from one balloon to another representing unilateral improvements. Dotted arrows from improvement arrows to balloons represent sanctions by which the improvements may be deterred, thus changing the destination of the improvement arrow.
This analysis procedure shows that the credibility of threats and promises (sanctions and improvements) is of central importance in metagame analysis. A threat or promise that the stakeholder prefers to carry out for its own sake is inherently credible. Sometimes a stakeholder may want to make an 'involuntary' threat or promise credible, to use this to move the situation in the desired direction. Such threats and promises can be made credible in three basic ways: preference change, irrationality, and deceit.
Development
Metagame analysis is still used as a technique in its own right. However it has been further developed in distinct ways as the basis of more recent approaches:
the graph model
confrontation analysis
References
Game theory | Metagame analysis | Mathematics | 1,170 |
254,789 | https://en.wikipedia.org/wiki/Database%20administrator | A database administrator (DBA) manages computer databases. The role may include capacity planning, installation, configuration, database design, migration, performance monitoring, security, troubleshooting, as well as backup and data recovery.
Skills
Required skills for database administrators include knowledge of SQL, database queries, database theory, database design, specific databases, such as Oracle, Microsoft SQL Server, or MySQL, storage technologies, distributed computing architectures, operating systems, routine maintenance, recovery, and replication/failover.
Certification
Training for DBAs with accompanying certifications is widely available, offered by database vendors and third parties. Offerings include:
IBM Certified Advanced Database Administrator – DB2 10.1 for Linux, Unix and Windows
IBM Certified Database Administrator – DB2 10.1 for Linux, Unix, and Windows
Oracle Database 12c Administrator Certified Professional
Oracle MySQL 5.6 Database Administrator Certified Professional
MCSA SQL Server 2012
MCSE Data Platform Solutions Expert
See also
Comparison of database administration tools
Database administration
Database management
References
Computer occupations
Data management
Database management systems | Database administrator | Technology | 213 |
45,119,474 | https://en.wikipedia.org/wiki/Bioclaustration | Bioclaustration is kind of interaction when one organism (usually soft bodied) is embedded in a living substrate (i.e. skeleton of another organism); it means "biologically walled-up". In case of symbiosis the walling-up is not complete and both organisms stay alive (Palmer and Wilson, 1988).
References
Ecology
Ecology terminology
Symbiosis
Trace fossils | Bioclaustration | Biology | 84 |
16,845,493 | https://en.wikipedia.org/wiki/ABHD2 | Abhydrolase domain-containing protein 2 is a serine hydrolase enzyme that is strongly expressed in human spermatozoa. It is a key controller of sperm hyperactivation, which is a necessary step in allowing sperm to fertilize an egg. It is encoded by the ABHD2 gene.
Function
In the presence of progesterone (or pregnenolone sulfate), it cleaves 2-arachidonoylglycerol (2AG) into glycerol and arachidonic acid (AA). 2AG inhibits the sperm calcium channel CatSper, so when ABHD2 removes 2AG, calcium flows into the cell through the CatSper channel, leading to hyperactivation.
ABHD2 is inhibited by testosterone (as well as hydrocortisone and the plant triterpenoids lupeol and pristimerin), which may prevent premature hyperactivation.
Structure
This gene encodes a protein containing an alpha/beta hydrolase fold, which is a catalytic domain found in a very wide range of enzymes. Alternative splicing of this gene results in two transcript variants encoding the same protein.
Role in disease
The ABHD2 gene is downregulated in the lungs of people with emphysema. Analysis of ABHD2 deficiency in mice found a decrease in phosphatidylcholine levels. The mice developed emphysema, which was attributed to an increase in macrophage infiltration, increased inflammatory cytokine levels, an imbalance of protease/anti-protease, and an increase in cell death. This research suggests that ABHD2 is important in maintaining the structural integrity of the lungs, and that disruption of phospholipid metabolism in the alveoli may lead to the development of emphysema. Increased expression has also been seen in the lungs of smokers.
ABHD2 is also expressed in atherosclerotic lesions. Expression has been found to be higher in patients with unstable angina than in patients with stable angina.
Up-regulation of ABHD2 has been observed in cells transfected with Hepatitis B virus (HBV) DNA (HepG2.2.15 cells). Expression was down-regulated by the drug lamivudine, used in the treatment of hepatitis B. It has been suggested that ABHD2 has an important role in HBV propagation and could be a potential drug target in the treatment of hepatitis B.
Suppression of ABHD2 has been linked to poor prognoses in ovarian cancer and resistance to platinum-based chemotherapy drugs.
References
External links
Further reading
Enzymes
Enzymes of unknown structure
Hydrolases
Membrane proteins
Genes on human chromosome 15 | ABHD2 | Biology | 562 |
52,089,502 | https://en.wikipedia.org/wiki/North%20Wind%27s%20Weir | North Wind's Weir or North Wind's Fish Weir south of Seattle on the Duwamish River in Tukwila, Washington is a site that figures prominently in the oral traditions of the Salish people of the Puget Sound region. The legends describe battles between North Wind and South Wind for control of the region.
Salish tradition
According to Salish tradition, North Wind stretched a weir of ice across the Duwamish River at this site; no fish could pass, starving the people up the valley, the people of the Chinook Wind, who was married to North Wind's daughter Mountain Beaver Woman. The mother of Mountain Beaver Woman survived the starvation, but retreated to the mountain. Mountain Beaver Woman's son, the child Storm Wind, also survived.
The people of the North Wind warned Storm Wind to stay away from the mountain, trying to keep from him the knowledge of what had happened to his people, but eventually he defied them and found his grandmother living in misery. He heard her story and helped her out of her misery; she, in return, aided him with a flood that shattered the weir and turned it to stone. Storm Wind and his grandmother defeated North Wind, who only occasionally and briefly torments the area with snow and ice.
Location and environs
North Wind's Weir is just east of Cecil Moses Memorial Park, in a zone where fresh and salt waters mix, creating a key transition zone for young Chinook salmon swimming downstream to Puget Sound. A pedestrian and bicycle bridge coming out of the park on the Green River Trail crosses the Duwamish River just south of the weir, allowing a view of the rock formation in the river, except when there is a high tide. The United States Army Corps of Engineers, King County, and construction contractor Doyon Project Services completed a habitat restoration project at the site in December 2009, restoring mudflat and vegetated marsh habitat.
Notes
Locations in Native American mythology
Landforms of King County, Washington
Rock formations of Washington (state)
Weirs | North Wind's Weir | Environmental_science | 413 |
2,288,158 | https://en.wikipedia.org/wiki/William%20Duncan%20MacMillan | William Duncan MacMillan (July 24, 1871 – November 14, 1948) was an American mathematician and astronomer on the faculty of the University of Chicago. He published research on the applications of classical mechanics to astronomy, and is noted for pioneering speculations on physical cosmology. For the latter, Helge Kragh noted, "the cosmological model proposed by MacMillan was designed to lend support to a cosmic optimism, which he felt was threatened by the world view of modern physics."
Biography
He was born in La Crosse, Wisconsin, to D. D. MacMillan, who was in the lumber business, and Mary Jane McCrea. His brother, John H. MacMillan, headed the Cargill Corporation from 1909 to 1936. MacMillan graduated from La Crosse High School in 1888. In 1889, he attended Lake Forest College, then entered the University of Virginia. Later in 1898, he earned an A.B. degree from Fort Worth University, which was then a Methodist university in Texas. He performed his graduate work at the University of Chicago, earning a master's degree in 1906 and a PhD in astronomy in 1908. In 1907, prior to completing his PhD, he joined the staff of the University of Chicago as a research assistant in geology. In 1908, he became an associate in mathematics, then in 1909, he began instruction in astronomy at the same institution. His career as a professor began in 1912 when he became an assistant professor. In 1917, when the U.S. declared war on Germany, Dr. MacMillan served as a major in the U.S. army's ordnance department during World War I. Following the war, he became associate professor in 1919, then full professor in 1924. MacMillan retired in 1936.
In a 1958 paper about MacMillan's work on cosmology, Richard Schlegel introduced MacMillan as "best known to physicists for his three-volume Classical Mechanics" that remained in print for decades after MacMillan's 1936 retirement. MacMillan published extensively on the mathematics of the orbits of planets and stars. In the 1920s, MacMillan developed a cosmology that presumed an unchanging, steady-state model of the universe. This was uncontroversial at the time, and indeed in 1918, Albert Einstein had also sought to adapt his relativity theories to the model using a cosmological constant. MacMillan accepted that the radiance of stars came from then unknown processes that converted their mass into radiant energy. This perspective suggested that individual stars and the universe itself would ultimately go dark, which was called the "heat death" of the universe. MacMillan avoided the conclusion about the universe through a mechanism later known as the "tired-light hypothesis". He speculated that the light emitted by stars might recreate matter in its travels through space.
MacMillan's work on cosmology lost influence in the 1930s after Hubble's law became accepted. Edwin Hubble's 1929 publication, and earlier work by Georges Lemaître, reported on observations of entire galaxies far from the earth and its galaxy. The further away a galaxy is, the faster it is apparently moving away from the earth. Hubble's law strongly suggested that universe is expanding. In 1948, a new version of a steady-state cosmology was proposed by Bondi, Gold, and Hoyle that was consistent with the measurements on distant galaxies. While the authors were apparently not aware of MacMillan's earlier work, substantial similarities exist. With the observation of the cosmic microwave background (CMB) in 1965, steady-state models of the universe have been rejected by most astronomers and physicists. The CMB is a prediction of the Big Bang model of an expanding universe.
MacMillan also had a distaste for Einstein's relativity theories. In a published debate in 1927, Macmillan invoked "postulates of normal intuition" to argue against them. He objected to the theories' inconsistency with an absolute scale of time. Einstein's theories predict that an observer will see that rapidly moving clocks tick more slowly than the observer's own clock. Later experiments amply confirmed this "time dilation" prediction of relativity theory.
In an Associated Press report, MacMillan speculated on the nature of interstellar civilizations, believing that they would be vastly more advanced than our own. "Out in the heavens, perhaps, are civilizations as far above ours as we are above the single cell, since they are so much older than ours."
The crater MacMillan on the Moon is named in his honor.
Selected publications
Statics and the Dynamics of a Particle (1927); later reprinted by Dover, 1958.
The Theory of the Potential (1930); reprinted by Dover, 1958.
Dynamics of Rigid Bodies (1936); later reprinted by Dover, 1960.
See also
Sitnikov problem
Static universe
References
1871 births
1948 deaths
American astronomers
Lake Forest College alumni
Relativity critics
University of Chicago alumni
United States Army officers
University of Chicago faculty | William Duncan MacMillan | Physics | 974 |
553,525 | https://en.wikipedia.org/wiki/Norman%20Haworth | Sir Walter Norman Haworth FRS (19 March 1883 – 19 March 1950) was a British chemist best known for his groundbreaking work on ascorbic acid (vitamin C) while working at the University of Birmingham. He received the 1937 Nobel Prize in Chemistry "for his investigations on carbohydrates and vitamin C". The prize was shared with Swiss chemist Paul Karrer for his work on other vitamins.
Haworth worked out the correct structure of a number of sugars, and is known among organic chemists for his development of the Haworth projection that translates three-dimensional sugar structures into convenient two-dimensional graphical form.
Academic career
Having worked from the age of fourteen in the local Ryland's linoleum factory managed by his father, he studied for and passed the entrance examination to the University of Manchester in 1903 to study chemistry. He pursued chemistry in spite of active discouragement from his parents. He gained his first-class honours degree in 1906. After gaining his master's degree under William Henry Perkin Jr., he was awarded an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851 and studied at the University of Göttingen, earning his PhD in Otto Wallach's laboratory after only one year of study. A DSc from the University of Manchester followed in 1911, after which he served a short time at the Imperial College of Science and Technology as Senior Demonstrator in Chemistry.
In 1912 Haworth became a lecturer at United College of University of St Andrews in Scotland and became interested in carbohydrate chemistry, which was being investigated at St Andrews by Thomas Purdie (1843–1916) and James Irvine (1877–1952). Haworth began his work on simple sugars in 1915 and developed a new method for the preparation of the methyl ethers of sugars using methyl sulfate and alkali (now called Haworth methylation). He then began studies on the structural features of the disaccharides. Haworth organised the laboratories at St Andrews University for the production of chemicals and drugs for the British government during World War I (1914–1918).
He was appointed Professor of Organic Chemistry at the Armstrong College (Newcastle upon Tyne) of Durham University in 1920. The next year Haworth was appointed Head of the Chemistry Department at the college. It was during his time in the North East of England that he married Violet Chilton Dobbie.
In 1925 he was appointed Mason Professor of Chemistry at the University of Birmingham (a position he held until 1948). Among his lasting contributions to science was the confirmation of a number of structures of optically active sugars: by 1928, he had deduced and confirmed, among others, the structures of maltose, cellobiose, lactose, gentiobiose, melibiose, gentianose, raffinose, as well as the glucoside ring tautomeric structure of aldose sugars. He published a classic text in 1929, The Constitution of Sugars.
In 1933, working with the then Assistant Director of Research (later Sir) Edmund Hirst and a team led by post-doctoral student Maurice Stacey (who in 1956 rose to the same Mason Chair), having properly deduced the correct structure and optical-isomeric nature of vitamin C, Haworth reported the synthesis of the vitamin. Haworth had been given his initial reference sample of "water-soluble vitamin C" or "hexuronic acid" (the previous name for the compound as extracted from natural products) by Hungarian physiologist Albert Szent-Györgyi, who had codiscovered its vitamin properties along with Charles Glen King, and had more recently discovered that it could be extracted in bulk from Hungarian paprika. In honour of the compound's antiscorbutic properties, Haworth and Szent-Györgyi now proposed the new name of "a-scorbic acid" for the molecule, with L-ascorbic acid as its formal chemical name. During World War II, he was a member of the MAUD Committee which oversaw research on the British atomic bomb project.
Recognition
Haworth is commemorated at the University of Birmingham in the Haworth Building, which houses most of the University of Birmingham School of Chemistry. The School has a Haworth Chair of Chemistry, held by Professor Nigel Simpkins from 2007 until his retirement in 2017, and by Professor Neil Champness since 2021.
In 1977 the Royal Mail issued a postage stamp (one of a series of four) featuring Haworth's achievement in synthesising vitamin C and his Nobel prize.
He also developed a simple method of representing on paper the three-dimensional structure of sugars. The representation, using perspective, now known as a Haworth projection, is still widely used in biochemistry.
Personal life
In 1922 he married Violet Chilton Dobbie, daughter of Sir James Johnston Dobbie. They had two sons, James and David.
He was elected a Fellow of the Royal Society (FRS) in 1928.
He was knighted in the 1947 New Years Honours list.
He died suddenly from a heart attack on 19 March 1950, his 67th birthday.
References
External links
including the Nobel Lecture on 11 December 1937 The Structure of Carbohydrates and of Vitamin C
1883 births
1950 deaths
University of Göttingen alumni
20th-century British chemists
Stereochemists
Nobel laureates in Chemistry
British Nobel laureates
Alumni of the University of Manchester
Fellows of the Royal Society
Academics of Imperial College London
Academics of Durham University
Academics of the University of Birmingham
Academics of the University of St Andrews
People from Chorley
Knights Bachelor
Royal Medal winners
English Nobel laureates
Manchester Literary and Philosophical Society
Vitamin researchers | Norman Haworth | Chemistry | 1,159 |
2,286,702 | https://en.wikipedia.org/wiki/Kaonic%20hydrogen | Kaonic hydrogen is an exotic atom consisting of a negatively charged kaon orbiting a proton.
Such particles were first identified, through their X-ray spectrum, at the KEK proton synchrotron in Tsukuba, Japan in 1997.
More detailed studies have been performed at DAFNE in Frascati, Italy.
Kaonic hydrogen has been created in very low energy collisions of kaons with the protons in a gaseous hydrogen target. At DAFNE, kaons are produced by the decay of φ mesons which are in turn created in collisions between electrons and positrons. The experiments analyzed X-rays from several electronic transitions in kaonic hydrogen.
Unlike in the hydrogen atom, where the binding between electron and proton is dominated by the electromagnetic interaction, kaons and protons interact also to a large extent by the strong interaction.
In kaonic hydrogen this strong contribution was found to be repulsive, shifting the ground state energy by 283 ± 36 (statistical) ± 6 (systematic) eV, thus making the system unstable with a resonance width of 541 ± 89 (stat) ± 22 (syst) eV (decay into Λπ and Σπ).
Kaonic hydrogen is studied mainly because of its importance for the understanding of kaon-nucleon interactions and for testing quantum chromodynamics.
See also
Kaonium
Pionic helium
References
External links
Article in CERN Courier
Exotic atoms
Atomic physics
Hydrogen physics
Mesons
Nuclear physics
Quantum chromodynamics
Substances discovered in the 1990s
Strange quark | Kaonic hydrogen | Physics,Chemistry | 312 |
40,872 | https://en.wikipedia.org/wiki/Circuit%20reliability | Circuit reliability (also time availability) (CiR) is the percentage of time an electronic circuit was available for use in a specified period of scheduled availability. Circuit reliability is given by where T o is the circuit total outage time, Ts is the circuit total scheduled time, and T a is the circuit total available time.
In addition, circuit reliability is the expected lifespan of operation of a functioning system under nominal conditions.
References
Electrical engineering | Circuit reliability | Engineering | 89 |
23,676,098 | https://en.wikipedia.org/wiki/Canadian%20Society%20of%20Landscape%20Architects | The Canadian Society of Landscape Architects (; CSLA-AAPC) is the national organization representing 1600 landscape architects in Canada's ten provinces and three territories. The organization was founded in 1934. Its mission is to "advance the art, science and business of landscape architecture."
One of the founding members was Lorrie Dunington-Grubb, co-founder with her husband Howard of the Sheridan Nurseries. In 1944 she became president of the society.
Members of the College of Fellows
Cornelia Oberlander
Don Vaughan (landscape architect)
Peter Jacobs (landscape architect)
Janet Rosenberg
References
External links
Canadian Society of Landscape Architects Official Site - English
Canadian Society of Landscape Architects Official Site - Francais
Architecture associations based in Canada
Landscape architecture organizations
1934 establishments in Canada
Organizations established in 1934 | Canadian Society of Landscape Architects | Engineering | 157 |
23,498,248 | https://en.wikipedia.org/wiki/Quasiregular%20element | This article addresses the notion of quasiregularity in the context of ring theory, a branch of modern algebra. For other notions of quasiregularity in mathematics, see the disambiguation page quasiregular.
In mathematics, specifically ring theory, the notion of quasiregularity provides a computationally convenient way to work with the Jacobson radical of a ring. In this article, we primarily concern ourselves with the notion of quasiregularity for unital rings. However, one section is devoted to the theory of quasiregularity in non-unital rings, which constitutes an important aspect of noncommutative ring theory.
Definition
Let R be a ring (with unity) and let r be an element of R. Then r is said to be quasiregular if 1 − r is a unit in R; that is, invertible under multiplication. The notions of right or left quasiregularity correspond to the situations where 1 − r has a right or left inverse, respectively.
An element x of a non-unital ring R is said to be right quasiregular if there exists y in R such that $x + y - xy = 0$. The notion of a left quasiregular element is defined in an analogous manner. The element y is sometimes referred to as a right quasi-inverse of x. If the ring is unital, this definition of quasiregularity coincides with that given above. If one writes $x \circ y = x + y - xy$, then this binary operation $\circ$ is associative. In fact, in the unital case, the map $x \mapsto 1 - x$ from $(R, \circ)$ to $(R, \times)$ (where × denotes the multiplication of the ring R) is a monoid isomorphism. Therefore, if an element possesses both a left and right quasi-inverse, they are equal.
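The isomorphism claim amounts to a one-line computation, spelled out here for convenience:

$$(1 - x)(1 - y) = 1 - x - y + xy = 1 - (x \circ y),$$

so the map $x \mapsto 1 - x$ carries the operation $\circ$ to ring multiplication, and $x \circ y = 0$ precisely when $(1 - x)(1 - y) = 1$.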
Note that some authors use different definitions. They call an element x right quasiregular if there exists y such that $x + y + xy = 0$, which is equivalent to saying that 1 + x has a right inverse when the ring is unital. If we write $x \cdot y = x + y + xy$, then $(-x) \cdot (-y) = -(x \circ y)$, so we can easily go from one set-up to the other by changing signs. For example, x is right quasiregular in one set-up if and only if −x is right quasiregular in the other set-up.
Examples
If R is a ring, then the additive identity of R is always quasiregular.
If is right (resp. left) quasiregular, then is right (resp. left) quasiregular.
If R is a rng, every nilpotent element of R is quasiregular. This fact is supported by an elementary computation:
If $x^n = 0$, then
$$x + y - xy = 0 \quad \text{with} \quad y = -(x + x^2 + \cdots + x^{n-1})$$
(or $x + y + xy = 0$ with $y = -x + x^2 - \cdots + (-1)^{n-1}x^{n-1}$ if we follow the second convention).
From this we see easily that the quasi-inverse of x is $-(x + x^2 + \cdots + x^{n-1})$ (or $-x + x^2 - \cdots + (-1)^{n-1}x^{n-1}$).
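A quick numerical illustration of this computation in the unital ring of 3 × 3 real matrices (the example matrix is ours):

```python
import numpy as np

# A nilpotent element of the matrix ring: x @ x @ x = 0
x = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

# Quasi-inverse from the computation above: y = -(x + x^2)
y = -(x + x @ x)

# First convention: x + y - xy = 0 ...
assert np.allclose(x + y - x @ y, np.zeros((3, 3)))

# ... equivalently, 1 - y is a two-sided inverse of 1 - x
I = np.eye(3)
assert np.allclose((I - x) @ (I - y), I)
assert np.allclose((I - y) @ (I - x), I)
print("quasi-inverse verified")
```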
In the second convention, a matrix is quasiregular in a matrix ring if it does not possess −1 as an eigenvalue. More generally, a bounded operator is quasiregular if −1 is not in its spectrum.
In a unital Banach algebra, if $\|x\| < 1$, then the geometric series $\sum_{n=0}^{\infty} x^n$ converges to $(1 - x)^{-1}$. Consequently, every such x is quasiregular.
If R is a ring and S = R[[X1, ..., Xn]] denotes the ring of formal power series in n indeterminates over R, an element of S is quasiregular if and only if its constant term is quasiregular as an element of R.
Properties
Every element of the Jacobson radical of a (not necessarily commutative) ring is quasiregular. In fact, the Jacobson radical of a ring can be characterized as the unique right ideal of the ring, maximal with respect to the property that every element is right quasiregular. However, a right quasiregular element need not necessarily be a member of the Jacobson radical. This justifies the remark in the beginning of the article – "bad elements" are quasiregular, although quasiregular elements are not necessarily "bad". Elements of the Jacobson radical of a ring are often deemed to be "bad".
If an element of a ring is nilpotent and central, then it is a member of the ring's Jacobson radical. This is because the principal right ideal generated by that element consists of quasiregular (in fact, nilpotent) elements only.
If an element, r, of a ring is idempotent, it cannot be a member of the ring's Jacobson radical. This is because idempotent elements cannot be quasiregular. This property, as well as the one above, justify the remark given at the top of the article that the notion of quasiregularity is computationally convenient when working with the Jacobson radical.
Generalization to semirings
The notion of quasiregular element readily generalizes to semirings. If a is an element of a semiring S, then an affine map from S to itself is the map $\mu_a(r) = ra + 1$. An element a of S is said to be right quasiregular if $\mu_a$ has a fixed point, which need not be unique. Each such fixed point is called a left quasi-inverse of a. If b is a left quasi-inverse of a and additionally b = ab + 1, then b is called a quasi-inverse of a; any element of the semiring that has a quasi-inverse is said to be quasiregular. It is possible that some but not all elements of a semiring be quasiregular; for example, in the semiring of nonnegative reals with the usual addition and multiplication of reals, $\mu_a$ has the fixed point $1/(1-a)$ for all a < 1, but has no fixed point for a ≥ 1. If every element of a semiring is quasiregular then the semiring is called a quasi-regular semiring, closed semiring, or occasionally a Lehmann semiring (the latter honoring the paper of Daniel J. Lehmann).
Examples of quasi-regular semirings are provided by the Kleene algebras (prominently among them, the algebra of regular expressions), in which the quasi-inverse is lifted to the role of a unary operation (denoted by a*) defined as the least fixedpoint solution. Kleene algebras are additively idempotent but not all quasi-regular semirings are so. We can extend the example of nonnegative reals to include infinity and it becomes a quasi-regular semiring with the quasi-inverse of any element a ≥ 1 being the infinity. This quasi-regular semiring is not additively idempotent however, so it is not a Kleene algebra. It is however a complete semiring. More generally, all complete semirings are quasiregular. The term closed semiring is actually used by some authors to mean complete semiring rather than just quasiregular.
Conway semirings are also quasiregular; the two Conway axioms are actually independent, i.e. there are semirings satisfying only the product-star [Conway] axiom, (ab)* = 1+a(ba)*b, but not the sum-star axiom, (a+b)* = (a*b)*a* and vice versa; it is the product-star [Conway] axiom that implies that a semiring is quasiregular. Additionally, a commutative semiring is quasiregular if and only if it satisfies the product-star Conway axiom.
Quasiregular semirings appear in algebraic path problems, a generalization of the shortest path problem.
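As an illustration of that connection (our sketch, with an invented three-node graph): in the min-plus semiring, where "addition" is min (identity ∞) and "multiplication" is ordinary + (identity 0), the quasi-inverse A* of an edge-weight matrix is the least solution of X = AX + I, and its entries are all-pairs shortest-path costs. The Floyd–Warshall recurrence computes it for nonnegative weights:

```python
import numpy as np

INF = float("inf")   # additive identity of the min-plus semiring ("no edge")

# Edge weights of a small directed graph (invented example)
A = np.array([[INF, 4.0, INF],
              [INF, INF, 1.0],
              [2.0, 7.0, INF]])

def star(A):
    """Min-plus matrix star A*: least fixed point of X = AX + I."""
    n = A.shape[0]
    D = A.copy()
    # Adding the identity I means zero-cost self-loops on the diagonal
    np.fill_diagonal(D, np.minimum(np.diag(D), 0.0))
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i, j] = min(D[i, j], D[i, k] + D[k, j])
    return D

print(star(A))   # entry (i, j) is the cheapest path cost from i to j
```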
See also
inverse element
Notes
References | Quasiregular element | Mathematics | 1,595 |
42,526,327 | https://en.wikipedia.org/wiki/Q-Weibull%20distribution | In statistics, the q-Weibull distribution is a probability distribution that generalizes the Weibull distribution and the Lomax distribution (Pareto Type II). It is one example of a Tsallis distribution.
Characterization
Probability density function
The probability density function of a q-Weibull random variable is:
$$f(x; q, \lambda, \kappa) = \begin{cases} (2-q)\,\dfrac{\kappa}{\lambda}\left(\dfrac{x}{\lambda}\right)^{\kappa-1} e_q\!\left(-\left(\dfrac{x}{\lambda}\right)^{\kappa}\right), & x \ge 0, \\ 0, & x < 0, \end{cases}$$
where q < 2 and κ > 0 are shape parameters, λ > 0 is the scale parameter of the distribution, and
$$e_q(x) = \begin{cases} \left[1 + (1-q)x\right]^{\frac{1}{1-q}}, & 1 + (1-q)x > 0, \\ 0, & \text{otherwise}, \end{cases}$$
is the q-exponential (equal to the ordinary exponential when q = 1).
Cumulative distribution function
The cumulative distribution function of a q-Weibull random variable is:
$$F(x) = \begin{cases} 1 - e_{q'}\!\left(-\left(\dfrac{x}{\lambda'}\right)^{\kappa}\right), & x \ge 0, \\ 0, & x < 0, \end{cases}$$
where
$$q' = \frac{1}{2-q}, \qquad \lambda' = \frac{\lambda}{(2-q)^{1/\kappa}}.$$
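Because the CDF above inverts in closed form, samples can be drawn by inverse-transform sampling. The sketch below is ours, not from the source; it is valid for q ≠ 1, and as q → 1 it reduces to the standard Weibull inverse $\lambda(-\ln u)^{1/\kappa}$.

```python
import numpy as np

def q_weibull_sample(q, kappa, lam, size=1, rng=None):
    """Inverse-transform sampling of the q-Weibull (q < 2, q != 1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)                 # uniform survival probabilities
    qp = 1.0 / (2.0 - q)                       # q' appearing in the CDF
    lamp = lam / (2.0 - q) ** (1.0 / kappa)    # lambda' appearing in the CDF
    # Solve [1 - (1 - q') * (x / lam')**kappa]**(1 / (1 - q')) = u for x
    return lamp * ((1.0 - u ** (1.0 - qp)) / (1.0 - qp)) ** (1.0 / kappa)

# Example: 10^5 draws with q = 1.5, kappa = 2, lambda = 1
samples = q_weibull_sample(1.5, 2.0, 1.0, size=100_000, rng=np.random.default_rng(0))
print(samples.mean(), samples.max())
```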
Mean
The mean of the q-Weibull distribution is finite for $q < 1 + \frac{\kappa}{\kappa+1}$ and can be written in closed form in terms of the Beta function B and the Gamma function Γ. The expression for the mean is a continuous function of q over the range of definition for which it is finite.
Relationship to other distributions
The q-Weibull is equivalent to the Weibull distribution when q = 1 and equivalent to the q-exponential when κ = 1.
The q-Weibull is a generalization of the Weibull, as it extends this distribution to the cases of finite support (q < 1) and to include heavy-tailed distributions (q > 1).
The q-Weibull is a generalization of the Lomax distribution (Pareto Type II), as it extends this distribution to the cases of finite support and adds the shape parameter κ. For κ = 1 and q > 1, the q-Weibull reduces to the Lomax distribution with parameters:
$$\alpha = \frac{2-q}{q-1}, \qquad \lambda_{\text{Lomax}} = \frac{\lambda}{q-1}.$$
As the Lomax distribution is a shifted version of the Pareto distribution, the q-Weibull for κ = 1 is a shifted reparameterized generalization of the Pareto. When q > 1, the q-exponential is equivalent to the Pareto shifted to have support starting at zero. Specifically, if X is q-exponentially distributed with parameters q and λ, then $X + \frac{\lambda}{q-1}$ is Pareto-distributed with minimum $x_m = \frac{\lambda}{q-1}$ and index $\alpha = \frac{2-q}{q-1}$.
See also
Constantino Tsallis
Tsallis statistics
Tsallis entropy
Tsallis distribution
q-Gaussian
References
Statistical mechanics
Continuous distributions
Probability distributions with non-finite variance | Q-Weibull distribution | Physics | 368 |
33,127,408 | https://en.wikipedia.org/wiki/VFTS%20682 | VFTS 682 is a Wolf–Rayet star in the Large Magellanic Cloud. It is located over north-east of the massive cluster R136 in the Tarantula Nebula. It is 138 times the mass of the Sun and 3.2 million times more luminous, which makes it one of the most massive and most luminous stars known.
Discovery
VFTS 682 is a prominent infrared source in the Large Magellanic Cloud and has been catalogued numerous times. In 1992 it was identified as entry 153 in a list of possible protostars. In 2009 it was again classified as a probable young stellar object on account of its exceptional infrared luminosity.
The VLT-FLAMES Tarantula survey (VFTS) examined 800 massive stars in detail and determined a spectral type of WN5h for VFTS 682. It is heavily reddened and visually several magnitudes fainter than other stars of similar luminosity and temperature in the 30 Doradus region.
Runaway
VFTS 682 is in the large star-forming region of the Tarantula Nebula, but is not within a dense massive cluster. The existence of an extremely massive and extremely young star in some isolation is unexpected since these stars are expected to form only from the most massive and dense molecular clouds and hence to form in large groups such as R136 as the result of competitive accretion or stellar mergers. The formation of isolated massive star would require different models to allow monolithic disk accretion of very massive stars.
VFTS 682 is close enough to R136 that it might have formed there and been ejected. No bow shock has been detected and it has a space velocity lower than most runaways, but large enough and in the right direction that it could be from R136.
Properties
The star's mass, 138 times that of the Sun, compresses its core to a high temperature and causes very rapid fusion via the CNO cycle, leading to its extremely high luminosity of about 3.2 million times the Sun's. The star is 22 times the radius of the Sun, but because of its high temperature it emits 3.2 million times more energy, mostly at ultraviolet wavelengths, so it is only 43,000 times as bright as the Sun visually. Nearly 99% (AV = 4.5) of the ultraviolet and visual radiation is then blocked by intervening interstellar material. The luminosity, intense UV radiation, and chemical makeup of the star's surface layers result in a fast stellar wind.
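As a rough consistency check (our arithmetic, not from the source), the quoted radius and luminosity imply an effective temperature via the Stefan–Boltzmann law:

$$\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T}{T_\odot}\right)^{4} \quad\Longrightarrow\quad T \approx 5772\,\mathrm{K} \times \left(\frac{3.2\times 10^{6}}{22^{2}}\right)^{1/4} \approx 5\times 10^{4}\,\mathrm{K},$$

a surface temperature characteristic of a very hot WN5h star and consistent with the bulk of its output emerging in the ultraviolet.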
Evolution
Stars as massive as VFTS 682 with metallicity typical of the Large Magellanic Cloud will maintain near-homogeneous chemical structure due to strong convection and rotational mixing. This produces strong helium and nitrogen surface abundance enhancement even during core hydrogen burning. Their rotation rates will also decrease significantly due to mass loss and envelope inflation, so that gamma-ray bursts are unlikely when this type of star reaches core collapse.
Very massive stars are expected to develop directly from hydrogen-rich young stars showing an Of or WNh spectrum into classical hydrogen-poor Wolf–Rayet stars, possibly with a short period as a luminous blue variable. They will continue to lose mass rapidly, passing through WN, WC, and WO stages before exploding as a type Ic supernova and leaving behind a black hole. It is unclear whether the resulting supernova would be under-luminous, or even invisible, as the result of collapsing into the black hole, or over-luminous due to a large mass of ejected radioactive Ni56.
The total lifetime would be around 2–3 million years, with the last half million years or so spent as a Wolf–Rayet star burning helium at the core and a very short period burning heavier elements.
References
Stars in the Large Magellanic Cloud
Wolf–Rayet stars
Dorado
Extragalactic stars
Large Magellanic Cloud
J05385552-6904267
Tarantula Nebula | VFTS 682 | Astronomy | 793 |
15,874,481 | https://en.wikipedia.org/wiki/Sulfadoxine | Sulfadoxine (also spelled sulphadoxine) is an ultra-long-lasting sulfonamide used in combination with pyrimethamine to treat malaria.
It is also used to prevent malaria but due to high levels of sulphadoxine-pyrimethamine resistance, this use has become less common.
It is also used, usually in combination with other drugs, to treat or prevent various infections in livestock.
Mechanism of action
Sulfadoxine competitively inhibits dihydropteroate synthase, interfering with folate synthesis.
See also
Sulfadoxine/pyrimethamine
References
4-Aminophenyl compounds
Ethers
Pyrimidines
Sulfonamide antibiotics
Dihydropteroate synthetase inhibitors
Antimalarial agents | Sulfadoxine | Chemistry | 169 |
35,835,916 | https://en.wikipedia.org/wiki/WISE%200535%E2%88%927500 | WISE J053516.80−750024.9 (designation abbreviated to WISE 0535−7500) is either a sub-brown dwarf or a free planet. It has spectral class Y1 and is located in constellation Mensa. It is estimated to be 47 light-years from Earth.
In 2017, a more accurate analysis found it to be a binary system made up of two substellar objects of spectral class ≥Y1 orbiting less than one astronomical unit from each other.
Discovery
WISE 0535−7500 was discovered in 2012 by J. Davy Kirkpatrick et al. from data collected by the Wide-field Infrared Survey Explorer (WISE), a NASA Earth-orbiting infrared space telescope whose mission lasted from December 2009 to February 2011. In 2012 Kirkpatrick et al. published a paper in The Astrophysical Journal presenting the discovery of seven new brown dwarfs of spectral type Y found by WISE, among which was WISE 0535−7500.
Distance
The trigonometric parallax of WISE 0535−7500 is 0.070 ± 0.005 arcsec, corresponding to a distance of about 14 pc (47 ly).
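The conversion follows from the definition of the parsec, $d[\text{pc}] = 1/p[\text{arcsec}]$; a short worked example using the values above:

```python
parallax_arcsec = 0.070            # trigonometric parallax from the text
d_parsec = 1.0 / parallax_arcsec   # ~14.3 pc
d_lightyears = d_parsec * 3.2616   # 1 pc = 3.2616 ly -> ~47 ly
print(f"{d_parsec:.1f} pc = {d_lightyears:.0f} ly")
```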
Y dwarf
Brown dwarfs are defined as substellar objects that have at some time in their lives burnt deuterium in their interior. The borderline between a brown dwarf and a planet is conventionally taken to be 13 times the mass of Jupiter. All brown dwarfs are either M dwarfs, L dwarfs, T dwarfs or Y dwarfs, in order of decreasing temperature. An increasing number after the letter in the spectral type also means decreasing temperature: a Y2 dwarf is cooler than a Y1 dwarf, which is in turn cooler than a Y0 dwarf. Planets can also be L dwarfs, T dwarfs or Y dwarfs.
JWST observation
WISE 0535−7500 was studied with JWST by Beiler et al. in 2024, together with 22 other late-T and Y dwarfs. WISE 0535−7500 stands out because it has no discernible CO2 band and an almost undetectable CO band. This could be due to a low metallicity or a high surface gravity. These features make this object extremely red in Spitzer colors. This object also showed stronger NH3 absorption compared to objects of the same temperature. Other common prominent features such as H2O and CH4 are present in its spectrum, but like other late-T and Y dwarfs it is missing PH3, which is predicted to occur for these objects.
See also
List of star systems within 45–50 light-years
List of Y-dwarfs
References
Brown dwarfs
Y-type brown dwarfs
Mensa (constellation)
20120509
WISE objects | WISE 0535−7500 | Astronomy | 549 |
14,756,913 | https://en.wikipedia.org/wiki/TFDP1 | Transcription factor Dp-1 is a protein that in humans is encoded by the TFDP1 gene.
Function
The E2F transcription factor family (see MIM 189971) regulates the expression of various cellular promoters, particularly those involved in the cell cycle. E2F factors bind to DNA as homodimers or heterodimers in association with dimerization partner DP1. TFDP1 may be the first example of a family of related transcription factors; see TFDP2 (MIM 602160).[supplied by OMIM]
Interactions
TFDP1 has been shown to interact with:
E2F1,
E2F5, and
P53.
References
Further reading | TFDP1 | Chemistry | 147 |
56,741,386 | https://en.wikipedia.org/wiki/NGC%204586 | NGC 4586 is a spiral galaxy located about 50 million light-years away in the constellation Virgo. The galaxy was discovered by astronomer William Herschel on February 2, 1786. Although listed in the Virgo Cluster Catalog, NGC 4586 is considered to be a member of the Virgo II Groups which form a southern extension of the Virgo cluster. NGC 4586 is currently in the process of infalling into the Virgo Cluster and is predicted to enter the cluster in about 500 million years.
Boxy/Peanut bulge
NGC 4586 has a boxy or peanut-shaped bulge. The bulge has been interpreted to be a bar viewed edge-on.
See also
List of NGC objects (4001–5000)
NGC 4469
NGC 4013
References
External links
Virgo (constellation)
Unbarred spiral galaxies
4586
42241
7804
Astronomical objects discovered in 1786
Discoveries by William Herschel | NGC 4586 | Astronomy | 183 |
39,086,554 | https://en.wikipedia.org/wiki/Wrightoporia%20lenta | Wrightoporia lenta is a species of fungus in the family Bondarzewiaceae. First described as a species of Poria in 1946, Czech mycologist Zdeněk Pouzar transferred it to Wrightoporia in 1966.
References
External links
Russulales
Fungi described in 1946
Fungus species | Wrightoporia lenta | Biology | 63 |
3,772,202 | https://en.wikipedia.org/wiki/Rev%20limiter | A rev limiter is a device fitted in modern vehicles that have internal combustion engines. They are intended to protect an engine by restricting its maximum rotational speed, measured in revolutions per minute (RPM).
Rev limiters are pre-set by the engine manufacturer. There are also aftermarket units where a separate controller is installed using a custom RPM setting. A limiter prevents a vehicle's engine from being pushed beyond the manufacturer's limit, known as the redline (literally the red line marked on the tachometer). At some point beyond the redline, engine damage may occur.
Operation
Limiters usually work by shutting off a component necessary for the combustion processes to occur, whether it be fuel, air or spark. Compression-ignition engines use mechanical governors or limiters to shut off electronic fuel injectors. A spark-ignition engine may also shut off fuel or stop the spark ignition and some just reduce the engine's power by changing the spark timing.
In the case of an automatic transmission in "drive" mode, the engine RPM stays safely within the range that the transmission chooses. Only when over revving the engine in "park", "neutral" or "manual" modes is there any need for a rev limiter. These vehicles often did not include a tachometer until the turn of the millennium. Without this gauge, the redline cannot be seen but there is so little risk of excessive engine speed with fully automatic transmissions that engine RPM is not a concern.
However, with a manual transmission engine RPM can redline in "neutral", or by shifting to a higher gear too late, or by shifting to a lower gear too early. In the case of "neutral" or shifting up too late, a rev limiter can easily keep engine RPM below the redline.
If a manual transmission is shifted down too early, the speed of the vehicle will drive the engine over the redline. In this case, a rev limiter will cut engine power but it cannot prevent the engine's RPM from going beyond the redline.
Perhaps the worst situation occurs when a shift is "missed". For example, at high RPM it is possible to "miss" a shift from 2nd to 3rd gear and select 1st instead. This will result in exceeding the redline, and there is nothing to prevent the engine from being severely damaged by valvetrain failure or connecting rod failure. Disengaging the clutch as quickly as possible may avoid engine damage.
Most small engines, such as on lawn mowers have a speed governor. As the RPM of the engine increases, the throttle plate in the carburetor is gradually closed, reducing the amount of fuel and air admitted to the engine, until the engine RPM is stable. If RPM drops below the desired value, the throttle plate will automatically open, admitting more fuel/air mix to the engine. Adjusting the throttle generally adjusts spring tension on the governor, which in turn allows the engine to run faster or slower, as desired. While the redline cannot be seen on most small engines, due to their lack of a tachometer, the risk of excessive engine speed is not generally a concern.
Types of control
Fuel control
Fuel-cutting rev limiters are the most common in road cars because they cause less wear on exhaust components, particularly the catalytic converter. These systems usually arrest an overspeed by shutting off the fuel injectors, and they are the only practical system on diesel engines. This approach is less popular in high-performance or racing engines due to high temperatures in lean operation and the lack of a catalytic converter.
Spark control
Ignition control rev limiting systems work by shutting off the spark plugs once the engine overspeeds.
This is less common in production vehicles because the system still injects fuel into the cylinder and consequently releases unburned fuel which may ignite at a turbo charger or in the exhaust pipe. This can affect the temperatures in the exhaust, causing premature wear on the catalytic converter.
Throttle control
Vehicles equipped with drive-by-wire systems allow the ECU to modulate throttle position to keep engine RPM in a safe range. This is by far the safest method of limiting engine speed and is used on most modern production cars, as they do not use a throttle cable.
Hard-cut vs. soft-cut limiters
Hard-cut limiters
Hard-cut limiters completely cut fuel or spark to the engine. These types of limiters activate at the set RPM and "bounce" off of it if throttle is applied. This phenomenon is referred to as hysteresis. The "bouncing" occurs because the limiter cuts off fuel or spark at the set RPM, which causes the RPM to drop; if the throttle is still open when the RPM drops, the RPM rises back to the limit. This causes the engine to cycle its power on and off. A longer hysteresis period allows a larger drop in RPM before fuel or spark is re-engaged, and a shorter hysteresis period reduces that drop. In racing applications, extremely short hysteresis is desired so that engine power is not lost suddenly when the limiter engages.
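A toy sketch of the hard-cut logic described above (illustrative only; the RPM thresholds, hysteresis band, and engine response are invented for the example and do not describe any real ECU):

```python
REV_LIMIT = 7000   # hypothetical hard-cut RPM
REENGAGE = 6800    # hypothetical re-engagement RPM (hysteresis band)

def limiter_step(rpm: float, cut_active: bool) -> bool:
    """Return whether fuel/spark should be cut for this RPM sample."""
    if cut_active:
        return rpm > REENGAGE      # stay cut until RPM falls below the band
    return rpm >= REV_LIMIT        # engage the cut at the hard limit

# Simulate "bouncing" off the limiter at wide-open throttle.
rpm, cut = 6500.0, False
for _ in range(20):
    cut = limiter_step(rpm, cut)
    rpm += -150.0 if cut else 100.0   # crude stand-in for engine response
    print(f"rpm={rpm:6.0f} cut={cut}")
```

A wider gap between the two thresholds corresponds to the longer hysteresis period described above.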
Soft-cut limiters
Soft-cut limiters are a type of rev limiter that partially cuts off fuel to the engine. These limiters may also retard the ignition timing. If using a soft-cut rev limiter, the engine will start to cut fuel or retard ignition timing before the set RPM until it slowly reaches it and remains there. If the engine over-revs anyway, a soft-cut limiter may progressively shut off each cylinder one by one until engine RPMs drop to safe levels. These types of rev limiters are often conflated with "soft limiters", which are a separate, lower RPM limit for when a vehicle is not in gear.
Physical limiters
The maximum RPM of an engine is limited by the airflow through the engine, the displacement of the engine, and the mass and balance of the rotating parts, along with the bore and stroke of the pistons. Formula One engines can rev up to 15,000 rpm as per Formula One rules because of their smaller displacement, low mass, and short stroke.
Engines with hydraulic tappets (such as the Buick/Rover V8) often have in effect a rev limiter by virtue of their design. The tappet clearances are maintained by the flow of the engine's lubricating oil. At high engine speeds, the oil pressure rises to such an extent that the tappets 'pump up', causing valve float. This sharply reduces engine power, causing speed to drop.
Racing uses
The RPM level that results with the spark timing being arrested can be a constant level, or, with the proper ignition control modules, variable. Variable rate ignition modules can be adjusted quickly and easily to achieve the appropriate RPM limit for different situations, such as street racing, drag racing, road course racing and highway driving.
Multiple stage ignition modules offer greater RPM limit control. The first stage can be used to limit RPM levels when launching a vehicle from a stationary position, providing maximum power and traction. The second stage is activated after launch to set a higher RPM limit for wide-open-throttle acceleration.
Engine damage beyond the redline
There is considerable variation between manufacturers on where to set the redline for their engines: from 100 to 12,000 RPM. If an engine goes overspeed, commonly called "over-revving", damage to the piston and valvetrain may occur when a valve stays open longer than usual. Valve float can possibly result in loss of compression, misfire, or a valve and piston colliding with each other. It is also possible the engine will throw a connecting rod, the link between the crankshaft and piston. The engine will then need to be repaired or replaced entirely.
See also
Redline
Overspeed
References
Engine technology | Rev limiter | Technology | 1,625 |
43,747,041 | https://en.wikipedia.org/wiki/Open%20coopetition | In R&D management and systems development, open coopetition or open-coopetition is a neologism to describe cooperation among competitors in the open-source arena. The term was first coined by the scholars Jose Teixeira and Tingting Lin to describe how rival firms that, while competing with similar products in the same markets, cooperate which each other in the development of open-source projects (e.g., Apple, Samsung, Google, Nokia) in the co-development of WebKit.
Open-coopetition is a compound-word term bridging coopetition and open-source. Coopetition refers to a paradoxical relationship between two or more actors simultaneously involved in cooperative and competitive interactions. Open-source is understood both as a development method that emphasizes transparency and collaboration, and as a "private-collective" innovation model with features of both private investment and collective action: firms contribute towards the creation of public goods while giving up associated intellectual property rights such as patents, copyright, licenses, or trade secrets.
By exploring coopetition in the particular context of open-source, open-coopetition emphasizes transparency in the co-development of technological artifacts that become available to the public under an open-source license, allowing anyone to freely obtain, study, modify and redistribute them. Within open-coopetition, development transparency and sense of community are maximized, while managerial control and IP enforcement are minimized. Open-coopetitive relationships are paradoxical because the core managerial concepts of property, contract and price play an outlier role.
The openness characteristic of open-source projects also distinguishes open-coopetition from other forms of cooperative arrangements by its inclusiveness: Everybody can contribute. Users or other contributors do not need to hold a supplier contract or sign a legal intellectual property arrangement to contribute. Moreover, neither to be a member of a particular firm or affiliated with a particular joint venture or consortia to be able to contribute. In the words of Massimo Banzi, "You don't need anyone's permission to make something great".
More recently, open-coopetition has been used to describe open innovation among competitors more broadly, with many cases outside the software industry. While some authors use open-coopetition to emphasize the production of open-source software among competitors, others use it to emphasize open innovation among competitors.
History
2008
In a large-scale study involving multiple European-based software intensive firms, the scholars Pär Ågerfalk and Brian Fitzgerald revealed a shift from "open-source as a community of individual developers to open-source as a community of commercial organizations, primarily small and medium-sized enterprises, operating as a symbiotic ecosystem in a spirit of coopetition".
Even though they were exploring open-sourcing as "a novel and unconventional approach to global sourcing and coopetition", they captured the following quote, which highlights that competition in the open-source arena is not business as usual.
"In a traditional market you don't call up your competitor and be like, oh, well tell me what your stuff
does. But in open source you do." [Open Source Program Director, at IONA]
2012
Also in the academic world, and after following a software company based in Norway for over five years, and while theorizing on the concept of software ecosystem, the academic Geir K. Hanssen noted that the characteristic networks of a software ecosystem, open-source or proprietary ones, can embed competing organizations.
"Software ecosystems have a networked character. CSoft and its external environment constitute a network of customers and
third party organizations. Even competitors may be considered a part of this network, although this aspect has not been studied
in particular here."
In an opinion article entitled Open Source Coopetition Fueled by Linux Foundation Growth, the journalist and market analyst Jay Lyman highlights that "working with direct rivals may have been unthinkable 10 years ago, but Linux, open-source and organizations such as the Linux Foundation have highlighted how solving common problems and easing customer pain and friction in using and choosing different technologies can truly drive innovation and traction in the market." The term "open source coopetition" was employed to highlight the role of the Linux Foundation as a mediator of collaboration among rival firms.
2013
At the OpenStack summit in Hong Kong, the co-founder of Mirantis Boris Renski talked about his job on figuring out how to co-opete in the crowded OpenStack open-source community. In a 43-minute broadcast video, Boris Renski shed some light on OpenStack coopetition politics and shared a subjective view on strategies of individual players within the OpenStack community (e.g., Rackspace, Mirantis, IBM, HP and Red Hat among others). The Mirantis co-founder provided a rich description of an open-source community working in co-opetition.
Along with this lines, the pioneering scholarly work of Germonprez et al. (2013) reported on how key business actors within the financial services industry that traditionally viewed open-source software with skepticism, tied up an open-source ‘community of competitors’. By taking the case of OpenMAMA, a Middleware Agnostic Messaging API used by some of the world's largest financial players, they show that corporate market rivals (e.g., J. P. Morgan, Bank of America, IBM and BMC) can coexist in open-source communities, and intentionally coordinate activities or mutual benefits in precise, market focused, and non-differentiating engagements. Their work pointed out that high-competitive capital-oriented industries do not epitomize the traditional and grassroots idea that open-source software was originally born from. Furthermore, they argued that open-source communities can be deliberately designed to include competing vendors and customers under neutral institutional structures (e.g., foundations and steering committees).
2014
In an academic paper entitled "Collaboration in the open-source arena: The WebKit case", the scholars Jose Teixeira and Tingting Lin executed an ethnographic informed social network analysis on the development of the WebKit open-source web browsing technologies. Among a set of the reported findings, they pointed out that even if Apple and Samsung were involved in expensive patent wars in the courts at the time, they still collaborated in the open-source arena. As some of the research results did not confirm prior research in coopetition, the authors proposed and coined the "open-coopetition" term while emphasizing the openness of collaborating with competitors in the open-source arena.
2015
By turning to OpenStack, the scholars Teixeira et al. (2015) went further and modeled and analyzed both collaborative and competitive networks from the OpenStack open-source project (a large and complex cloud computing infrastructure for big data). Somewhat surprisingly, the results point out that competition for the same revenue model (i.e., operating conflicting business models) does not necessarily affect collaboration within the OpenStack ecosystem; in other words, competition among firms did not significantly influence collaboration among software developers affiliated with them. Furthermore, the expected social tendency of developers to work with developers from the same firm (i.e., homophily) did not hold within the OpenStack ecosystem. The case of OpenStack revealed much genuine collaboration in software development despite ubiquitous competition among the firms that produce and use the software.
2016
A related study by Linåker et al. (2016) analyzed the Apache Hadoop ecosystem in a quantitative longitudinal case study to investigate changing stakeholder influence and collaboration patterns. They found that the collaborative network had a quite stable number of network components (i.e., number of sub-communities within the community) with many unconnected stakeholders. Furthermore, such components were dominated by a core set of stakeholders that engaged in most of the collaborative relationships. As in OpenStack, there was much cooperation among competing and non-competing actors within the Apache Hadoop ecosystem; in other words, firms with competing business models collaborate as openly as non-rivaling firms. Finally, they also argued that because the openness of software ecosystems decreases the distance to competitors within the same ecosystem, it becomes both possible and important to track what competitors do within it. Knowing about their existing collaborations, contributions, and interests in specific features offers valuable information about competitors' strategies and tactics.
In a study addressing coopetition in the cloud computing industry, Teixeira et al. analyzed not only coopetition among individuals and organizations but also among cohesive inter-organizational networks. Relationships among individuals were modeled and visualized in 2D longitudinal visualizations and relationships among inter-organizational networks (e.g., alliances, consortium or ecosystem) were modeled and visualized in 3D longitudinal visualizations. The author added evidence to prior research suggesting that competition is a multi-level phenomenon that is influenced by individual-level, organizational-level, and network-level factors.
By noting that many firms engaging into open-coopetition actively manage multiple portfolios of alliances in the software industry (i.e., many strategically contribute to multiple open-source software ecosystems) and by analyzing the co-evolution of OpenStack and the CloudStack cloud computing platforms, the same authors propose that development transparency and the weak intellectual property rights, two well-known characteristics of open-source ecosystems, allow an easier transfer of information and resources from one alliance to another. Even if openness enables a focal firm to transfer information and resources more easily between multiple alliances, such 'ease of transfer' should not be seen as a source of competitive advantage as competitors can do the same.
2017
In a study explicitly addressing coopetition in open-source software ecosystems, Nguyen Duc et al. (2017) identified a number of situations in which different actors within the software ecosystem deal with collaborative-competitive issues:
Central actors that act as a bridge between the community and the companies contributing to it (e.g., a lead developer or a maintainer) need to act as gatekeepers (aka boundary spanners) for bugs reported against specific products sold by the participating firms. As the software is integrated downstream into specific products often sold by competing firms, it matters to sort out what bugs are the responsibility of a specific firm or the community as a whole. In parallel, such 'bridging' actors also act as gatekeepers in flows of code and information (e.g., what code should, or should not, be included in the official community-releases and what information should circulate among the ecosystem participants).
Contributors affiliated with firms need to balance the interests of their employers with the interests of the community as a whole. Therefore, their work encompasses filtering what is to be kept private (hidden and/or property of the firm) and what is to be open (transparent and publicly available under the open-source community terms). Such filtering is affected by many factors, ranging from technical, legal, and bureaucratic issues to organizational strategy.
Competitive behavior within open-source software ecosystems frictions with the more purist view of free and open-source software. The same authors reported on some working practices that conflict with the more traditional values of free and open-source software.
Developers occasionally establish private communication channels. Some open-source purists would prefer that all communication remains transparent and publicly available to the overall community.
Developers limit sensitive information to certain partners. While purists would prefer all relevant information to remain available to all ecosystem participants, legal or security issues are often discussed in private and secure communication channels.
The same study also unfolded a number of benefits that organization can rip by actively contributing to open-source software ecosystems that encompass both cooperative and competitive relationships:
Keeping the differences between their packaged software and the upstream software to a minimum. This allows firms to more easily benefit from the newest developments in the community. By implementing an upstream first policy, organizations can more easily catch updates, fixes, and changes from upstream.
Sharing maintenance responsibilities. Organizations working only downstream (i.e., just taking the software without contributing back) become solely responsible for maintaining their solution without the benefit of the overall community.
Reducing maintenance costs by revealing their own developments. If organizations extensively modify the software and opt to close some parts (keep it private or obfuscated) they will need to maintain such closed parts by themselves in the future without the benefit of the overall community.
Faster integration of new contributions. Active contributors will get their work integrated upstream more easily due to an improved social position within the community.
Receiving help. Active contributors with an improved social position within the community are more likely to benefit from the help from others members in the community. Given the complex nature of software development, help from other members in the ecosystem can be very valuable.
A sense of friendly competitiveness. Besides being competitors, the ecosystem participants develop a sense of community. Developers employed by competing firms can perceive others as partners and/or friends rather than competitors. In their work, developers often think of others as developers, partners or colleagues (individual persons) over the firms that they are representing.
Mutual co-creation of value. Even if many of competing firms are competitors they are often also customers/suppliers of each other. Furthermore, they might be competing in different geographical areas or different business domains creating a heterogeneous and heterophilous environment for reciprocal learning and value co-creation. Given the complex nature of software development, value creation should benefit from the involvement of multiple and heterogeneous actors holding complementary skills and resources.
2018
In the last chapter of book dedicated to coopetition strategies, scholars Frédéric Le Roy and Henry Chesbrough developed the concept of open-coopetition by combining insights from both the open innovation and coopetition literatures.
They departed from open-coopetition in the specific realm of open-source software to the more broader context of open innovation among competitors. Their work defines open-coopetition as "open innovation between competitors including collaboration", outline key success factors of open innovation based on collaboration with a competitors, and calls for further research on the topic.
2019
While proposing a research agenda for open-coopetition, Roth et al. (2019) argued that there is no need to narrow the concept of open coopetition to the software industry. More broadly, they redefined the concept as "simultaneously collaborative and competitive open innovation between competitors and third parties such as networks, platforms, communities or ecosystems". Furthermore, they also argued that open-coopetition not only takes place in a growing number of industries but also constitutes both a management challenge at the individual or inter-firm level and an organizing principle of many regional or national innovation systems. While prior work explored open-coopetition among individuals, firms, platforms and ecosystems, Roth et al. (2019) discussed open-coopetition among public–private partnerships and within the Triple Helix model of innovation, which refers to a set of interactions between academia (the university), industry and government.
2020
An editorial review of a special issue on "coopetition strategies" pointed out the popularity of the open-coopetition strategy among firms. The scholars pinpointed that from a strategic management perspective "it seems very important to know why, how and for which outcomes they follow this kind of strategy".
2023
Empirical work investigating open-coopetition in the automotive industry by Jose Teixeira suggested that cooperating with competitors in the open-source arena is not only about saving money but also about saving time.
The same author also pointed out the practical benefits of open-source software and reduction of duplication efforts in both production and maintenance of software. Furthermore, the inclusiveness and openness of open-source software projects encourages contributions from enthusiasts, students, hackers, and academics.
The same author also suggested industrial convergence and increased competition as antecedents of open-coopetition.
Cases
Cases of open coopetition are common in the software industry in general. Some cases also occur in the electronics, semiconductors, automotive, financial, telecommunications, retail, education, healthcare, defense, aerospace, and additive manufacturing industries. Cases of open coopetition are often associated with high-tech corporations and startups based in the USA (mostly on the West Coast). Cases can be also recognized in Cuba, Brazil, Europe (predominantly on Western Europe), India, South-Korea, China, Vietnam, Australia, and Japan.
Many of the software projects encompassing open coopetition are legally governed by foundations such as the Linux Foundation, the Free Software Foundation, the Apache Software Foundation, the Eclipse Foundation, the Cloud Native Computing Foundation, and the X.Org Foundation among many others. Most of the Linux Foundation collaborative projects are coopetitive in nature: the Linux Foundation claims to be "a neutral home for collaborative development". Furthermore, many coopetitive open-source projects dealing with both software and hardware (e.g., computer graphics, data storage) are bounded by standard organizations such as the Khronos Group, W3C and the Open Compute Project.
Software-intensive domains
Beyond software
See also
Co-Opetition: A Revolution Mindset That Combines Competition and Cooperation
References
External links
Research project website addressing open-coopetition in the WebKit open-source project
Research project website addressing open-coopetition in the OpenStack open-source project
Strategic alliances
Research and development
Business terms
Public commons
Strategic management
Business models
Systems engineering
Free and open-source software
Criticism of intellectual property | Open coopetition | Engineering | 3,595 |
415,895 | https://en.wikipedia.org/wiki/Lyman%20series | In physics and chemistry, the Lyman series is a hydrogen spectral series of transitions and resulting ultraviolet emission lines of the hydrogen atom as an electron goes from n ≥ 2 to n = 1 (where n is the principal quantum number), the lowest energy level of the electron (groundstate). The transitions are named sequentially by Greek letters: from n = 2 to n = 1 is called Lyman-alpha, 3 to 1 is Lyman-beta, 4 to 1 is Lyman-gamma, and so on. The series is named after its discoverer, Theodore Lyman. The greater the difference in the principal quantum numbers, the higher the energy of the electromagnetic emission.
History
The first line in the spectrum of the Lyman series was discovered in 1906 by physicist Theodore Lyman IV, who was studying the ultraviolet spectrum of electrically excited hydrogen gas. The rest of the lines of the spectrum (all in the ultraviolet) were discovered by Lyman from 1906 to 1914.
The spectrum of radiation emitted by hydrogen is non-continuous, or discrete; the Lyman lines constitute the first such series of hydrogen emission lines.
Historically, explaining the nature of the hydrogen spectrum was a considerable problem in physics. Nobody could predict the wavelengths of the hydrogen lines until 1885, when the Balmer formula gave an empirical formula for the visible hydrogen spectrum. Within five years Johannes Rydberg came up with an empirical formula that solved the problem, presented first in 1888 and in final form in 1890. Rydberg managed to find a formula to match the known Balmer series emission lines, and also predicted those not yet discovered. Different versions of the Rydberg formula with different simple numbers were found to generate different series of lines.
On December 1, 2011, it was announced that Voyager 1 detected the first Lyman-alpha radiation originating from the Milky Way galaxy. Lyman-alpha radiation had previously been detected from other galaxies, but due to interference from the Sun, the radiation from the Milky Way was not detectable.
The Lyman series
The version of the Rydberg formula that generated the Lyman series was:

$$\frac{1}{\lambda} = R_\text{H} \left( 1 - \frac{1}{n^2} \right)$$

where $n$ is a natural number greater than or equal to 2 (i.e., $n = 2, 3, 4, \ldots$).

Therefore, the lines of the series correspond to $n = 2$ at the long-wavelength end down to $n \to \infty$ at the short-wavelength end. There are infinitely many spectral lines, but they become very dense as they approach $n \to \infty$ (the Lyman limit), so only some of the first lines and the last one appear.
The wavelengths in the Lyman series are all in the ultraviolet, ranging from 121.6 nm for Lyman-alpha (n = 2) down to the Lyman limit at 91.2 nm (n → ∞).
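A short numerical check of the series (the value of the hydrogen Rydberg constant is the standard one, not taken from this article):

```python
R_H = 1.0967758e7   # Rydberg constant for hydrogen, in m^-1

def lyman_wavelength_nm(n: int) -> float:
    """Wavelength in nm of the n -> 1 transition, from 1/lambda = R_H (1 - 1/n^2)."""
    return 1e9 / (R_H * (1.0 - 1.0 / n**2))

for n in (2, 3, 4, 5):
    print(f"n={n}: {lyman_wavelength_nm(n):.1f} nm")   # 121.6, 102.6, 97.3, 95.0
print(f"Lyman limit: {1e9 / R_H:.1f} nm")              # 91.2 nm
```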
Explanation and derivation
In 1913, when Niels Bohr produced his Bohr model theory, the reason why hydrogen spectral lines fit Rydberg's formula was explained. Bohr found that the electron bound to the hydrogen atom must have quantized energy levels described by the following formula:

$$E_n = -\frac{m_e e^4}{2\left(4\pi\varepsilon_0\hbar\right)^2}\,\frac{1}{n^2} = -\frac{13.6\ \text{eV}}{n^2}.$$
According to Bohr's third assumption, whenever an electron falls from an initial energy level $E_\text{i}$ to a final energy level $E_\text{f}$, the atom must emit radiation with a wavelength of

$$\lambda = \frac{hc}{E_\text{i} - E_\text{f}}.$$
There is also a more convenient notation when dealing with energy in units of electronvolts and wavelengths in units of angstroms:

$$\lambda = \frac{12398.4}{E_\text{i} - E_\text{f}}\ \text{Å}.$$
Replacing the energy in the above formula with the expression for the energy in the hydrogen atom, where the initial energy corresponds to energy level $n$ and the final energy corresponds to energy level $m$:

$$\frac{1}{\lambda} = \frac{E_\text{i} - E_\text{f}}{hc} = R_\text{H}\left(\frac{1}{m^2} - \frac{1}{n^2}\right)$$

where $R_\text{H}$ is the same Rydberg constant for hydrogen from Rydberg's long-known formula. This also means that the inverse of the Rydberg constant is equal to the Lyman limit.

For the connection between Bohr, Rydberg, and Lyman, one must replace $m$ with 1 to obtain

$$\frac{1}{\lambda} = R_\text{H}\left(1 - \frac{1}{n^2}\right),$$
which is Rydberg's formula for the Lyman series. Therefore, each wavelength of the emission lines corresponds to an electron dropping from a certain energy level (greater than 1) to the first energy level.
See also
Bohr model
H-alpha
Hydrogen spectral series
K-alpha
Lyman-alpha line
Lyman continuum photon
Moseley's law
Rydberg formula
Balmer series
References
Emission spectroscopy
Hydrogen physics | Lyman series | Physics,Chemistry | 798 |
237,305 | https://en.wikipedia.org/wiki/Abel%27s%20theorem | In mathematics, Abel's theorem for power series relates a limit of a power series to the sum of its coefficients. It is named after Norwegian mathematician Niels Henrik Abel, who proved it in 1826.
Theorem
Let the Taylor series

$$G(x) = \sum_{k=0}^{\infty} a_k x^k$$

be a power series with real coefficients $a_k$ with radius of convergence $1.$ Suppose that the series

$$\sum_{k=0}^{\infty} a_k$$

converges. Then $G(x)$ is continuous from the left at $x = 1,$ that is,

$$\lim_{x \to 1^-} G(x) = \sum_{k=0}^{\infty} a_k.$$
The same theorem holds for complex power series

$$G(z) = \sum_{k=0}^{\infty} a_k z^k,$$

provided that $z \to 1$ entirely within a single Stolz sector, that is, a region of the open unit disk where

$$|1 - z| \leq M (1 - |z|)$$

for some fixed finite $M > 1$. Without this restriction, the limit may fail to exist: there are power series that converge at $z = 1$ yet are unbounded near points of the open unit disk arbitrarily close to $1,$ so that the value at $z = 1$ is not the limit as $z$ tends to 1 in the whole open disk.
Note that $G(x)$ is continuous on the real closed interval $[0, r]$ for $r < 1,$ by virtue of the uniform convergence of the series on compact subsets of the disk of convergence. Abel's theorem allows us to say more, namely that the restriction of $G(x)$ to $[0, 1]$ is continuous.
Stolz sector
The Stolz sector with parameter $M$ is the set of points $z$ of the open unit disk satisfying $|1 - z| \leq M(1 - |z|)$; its shape varies with $M$. The left end of the sector is the real point $x = -\frac{M-1}{M+1}$, and the right end is $z = 1$. Near the right end it becomes a cone with vertex $1$ and half-angle $\theta$, where $\cos\theta = \frac{1}{M}$.
Remarks
As an immediate consequence of this theorem, if $z_0$ is any nonzero complex number for which the series

$$\sum_{k=0}^{\infty} a_k z_0^k$$

converges, then it follows that

$$\lim_{t \to 1^{-}} G(t z_0) = \sum_{k=0}^{\infty} a_k z_0^k,$$

in which the limit is taken from below.
The theorem can also be generalized to account for sums which diverge to infinity. If

$$\sum_{k=0}^{\infty} a_k = \infty,$$

then

$$\lim_{x \to 1^{-}} G(x) \to \infty.$$

However, if the series is only known to be divergent, but for reasons other than diverging to infinity, then the claim of the theorem may fail: take, for example, the power series for

$$\frac{1}{1+x}.$$

At $x = 1$ the series is equal to $1 - 1 + 1 - 1 + \cdots,$ which diverges, but $\frac{1}{1+1} = \tfrac{1}{2}.$
We also remark that the theorem holds for radii of convergence other than $1$: let

$$G(x) = \sum_{k=0}^{\infty} a_k x^k$$

be a power series with radius of convergence $R,$ and suppose the series converges at $x = R.$ Then $G(x)$ is continuous from the left at $x = R,$ that is,

$$\lim_{x \to R^-} G(x) = G(R).$$
Applications
The utility of Abel's theorem is that it allows us to find the limit of a power series as its argument (that is, $x$) approaches $1$ from below, even in cases where the radius of convergence, $R,$ of the power series is equal to $1$ and we cannot be sure whether the limit should be finite or not. See for example, the binomial series. Abel's theorem allows us to evaluate many series in closed form. For example, when

$$a_k = \frac{(-1)^k}{k+1},$$

we obtain

$$G(x) = \frac{\ln(1+x)}{x}, \qquad 0 < x < 1,$$

by integrating the uniformly convergent geometric power series term by term on $[0, x]$; thus the series

$$\sum_{k=0}^{\infty} \frac{(-1)^k}{k+1}$$

converges to $\ln 2$ by Abel's theorem. Similarly,

$$\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}$$

converges to $\arctan 1 = \frac{\pi}{4}.$
$G(x)$ is called the generating function of the sequence $a_0, a_1, a_2, \ldots$ Abel's theorem is frequently useful in dealing with generating functions of real-valued and non-negative sequences, such as probability-generating functions. In particular, it is useful in the theory of Galton–Watson processes.
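A quick numerical illustration of the $\ln 2$ example above (a sketch; the truncation length and sample points are arbitrary):

```python
import math

def G(x: float, terms: int = 100_000) -> float:
    """Partial sum of sum_k (-1)^k x^k / (k+1), the power series of ln(1+x)/x."""
    return sum((-x) ** k / (k + 1) for k in range(terms))

# As x -> 1 from below, G(x) approaches ln 2, as Abel's theorem predicts.
for x in (0.9, 0.99, 0.999):
    print(f"G({x}) = {G(x):.6f}")
print(f"ln 2   = {math.log(2):.6f}")
```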
Outline of proof
After subtracting a constant from $a_0,$ we may assume that $\sum_{k=0}^\infty a_k = 0.$ Let $s_n = \sum_{k=0}^n a_k$ denote the partial sums, so that $s_n \to 0.$ Then substituting $a_k = s_k - s_{k-1}$ and performing a simple manipulation of the series (summation by parts) results in

$$G(z) = (1 - z) \sum_{k=0}^{\infty} s_k z^k.$$

Given $\varepsilon > 0,$ pick $n$ large enough so that $|s_k| < \varepsilon$ for all $k \geq n,$ and note that

$$\left|(1 - z) \sum_{k=n}^{\infty} s_k z^k\right| \leq \varepsilon\, |1 - z| \sum_{k=n}^{\infty} |z|^k \leq \varepsilon\, \frac{|1 - z|}{1 - |z|} \leq M \varepsilon$$

when $z$ lies within the given Stolz angle. Whenever $z$ is sufficiently close to $1$ we have

$$\left|(1 - z) \sum_{k=0}^{n-1} s_k z^k\right| < \varepsilon,$$

so that $|G(z)| < (M + 1)\varepsilon$ when $z$ is both sufficiently close to $1$ and within the Stolz angle.
Related concepts
Converses to a theorem like Abel's are called Tauberian theorems: there is no exact converse, only partial results conditional on additional hypotheses. The field of divergent series, and their summation methods, contains many theorems of abelian type and of tauberian type.
See also
Further reading
- Ahlfors called it Abel's limit theorem.
References
External links
(a more general look at Abelian theorems of this type)
Theorems in real analysis
Theorems in complex analysis
Mathematical series
Niels Henrik Abel
Summability methods | Abel's theorem | Mathematics | 770 |
71,443,545 | https://en.wikipedia.org/wiki/ISAM-140 | ISAM-140 is a selective non-xanthinic adenosine A2B receptor antagonist. Discovered in 2016, it has a Ki of 3.49 nM on the A2B receptor and >1000-fold selectivity with respect to the other three adenosine receptor subtypes. It has been shown to help the immune system to attack cancer cells in in vitro assays by rescuing T and NK cell proliferation, cytokine release, and TIL infiltration.
References
2-Furyl compounds
Esters
Nitrogen heterocycles
Heterocyclic compounds with 3 rings | ISAM-140 | Chemistry | 120 |
55,963,248 | https://en.wikipedia.org/wiki/Dengvaxia%20controversy | The Dengvaxia controversy () occurred in the Philippines when the dengue vaccine Dengvaxia was found to increase the risk of disease severity for some people who had received it.
A vaccination program run by the Philippine Department of Health (DOH) administered Sanofi Pasteur's Dengvaxia to schoolchildren. The program was stopped when Sanofi Pasteur advised the government that the vaccine could put previously uninfected people at a somewhat higher risk of a severe case of dengue fever through antibody-dependent enhancement. A political controversy erupted over whether the program was run with sufficient care and who should be held responsible for the alleged harm to the vaccinated children.
In late November 2017, the DOH suspended the school-based vaccination program. The DOH subsequently banned the vaccine's use and sale in the Philippines. The scare caused by the controversy has been suggested as a factor in the country's loss of confidence in vaccines and low immunization rates, resulting in an infectious disease crisis in the country in 2019, including a measles outbreak.
Events
On December 1, 2015, former President Benigno Aquino III met with executives of Sanofi Pasteur in a courtesy call in Paris, making the Philippines the first Asian country to approve the commercial sale of Dengvaxia.
In April 2016, the DOH launched the dengue vaccination campaign in Central Luzon, Calabarzon and Metro Manila, where about 700,000 individuals received at least one dose of the vaccine. The government paid P3.5-billion for the vaccine.
On November 29, 2017, French drugmaker Sanofi Pasteur released a statement saying that its dengue vaccine, Dengvaxia, posed a risk to individuals vaccinated without a prior dengue infection. Soon after, the Philippine Department of Health (DOH) suspended the school-based vaccination program owing to this concern, with DOH Secretary Francisco Duque saying "In the light of this new analysis, the DOH will place the dengue vaccination on hold while review and consultation is ongoing with experts, key stakeholders and the World Health Organization." On December 2, 2017, the government of Makati immediately suspended its anti-dengue vaccination drive following the statement.
In its statement, pharmaceutical company Sanofi Pasteur reported concerns that Filipinos, mostly schoolchildren, could potentially be at risk of more severe disease in cases where the recipient of the vaccine had not had a previous dengue infection; however, a medical director of Sanofi said that the dengue vaccination would not cause "severe dengue." On December 4, 2017, Sanofi also denied that they made Filipinos into “guinea pigs,” explaining that the vaccine program was conducted by the DOH and not by them. Senator JV Ejercito, chair of the Senate Committee on Health and Demography, sought to identify by January 2018 whether there was an irregularity in the procurement of the vaccine, while Senator Risa Hontiveros urged the government to address the health threats posed by the vaccine. The Food and Drug Administration of the Philippines ordered Sanofi to stop distributing Dengvaxia in the country. Former Health Secretary Janette Garin said she welcomed the investigation to be conducted by the Philippine Department of Justice. Presidential spokesperson Harry Roque told the media that 10 percent of the 733,000 to 830,000 vaccinated schoolchildren were at risk of more severe dengue infection. Since then, the Philippine Department of Education has closely monitored the students who received the vaccine. Hontiveros said that Sanofi should take liability for the medical expenses of those who contracted severe dengue fever after receiving doses of the vaccine.
Sanofi representative Thomas Triomphe "was forced to apologize" during the House of Representatives hearing on the Dengvaxia dengue vaccine. Former President Benigno Aquino III, who approved the vaccination program in 2016, expressed interest in attending the Senate hearing. On December 16, Aquino told reporters that "With the announcement of Sanofi and the reactions to it, there has been a lot of tension building up and I think it is incumbent upon me even as a citizen to try and allay certain fears, to put it in the proper perspective, to put it on the proper level."
Secretary Duque reminded the public, especially parents, that "the vaccine is not a 'standalone' preventive measure against dengue." On December 15, 2017, former Education and Skills Development chief Augusto Syjuco Jr filed "mass murder and plunder" complaints against Aquino and former health secretary Janette Garin over the controversial vaccination program. Former health secretary Enrique Ona blamed his successor Janette Garin, who had advised former president Benigno Aquino III to purchase Dengvaxia.
On February 2, 2018, the University of the Philippines-Philippine General Hospital (UP-PGH) issued a report stating that three out of 14 children who died after receiving Dengvaxia indicated dengue despite immunization. On February 3, a group of doctors, including former health secretary Esperanza Cabral, urged the Public Attorney's Office (PAO) to stop conducting autopsies.
On February 5, 2018, during a probe at the House of Representatives, mothers of children who took part in a mass vaccination program confronted Garin, screaming at her and accusing her of killing their children. The women would later admit to the media that none of their children died after vaccination.
On February 21, 2018, Senator Richard Gordon said that the DOH must be liable for the controversy. On March 13, Senator Gordon formally terminated the investigation of the controversy.
On February 26, 2018, Aquino appeared for the first time at a House inquiry about the controversy; he said that the controversy has been "politicized," but the Malacañang Palace distanced itself from Aquino's allegations.
On February 27, 2018, the opposition Representatives such as Gary Alejano of Magdalo and Edcel Lagman of Albay urged President Rodrigo Duterte to intervene in the dispute between the PAO and DOH. On March 3, about 200 families of Dengvaxia vaccines joined the advocacy run held in Quezon Memorial Circle.
Dengvaxia was approved in Europe in 2018 and in the United States in 2019, but only for use in people who have been infected with dengue virus before and who live in areas where the infection is endemic.
Charges
On April 5, 2018, the Public Attorney's Office (PAO) filed criminal charges of reckless imprudence resulting in homicide under Article 365 of the Revised Penal Code and of violation of Republic Act No. 9745 (the Anti-Torture Act) against former Health Secretary Janette Garin and other former officials. However, Garin said that the charges had no basis and vowed to file a counter-charge against the PAO. The families of four children — Aejay Bautista (11), Lenard Baldonado (10), Zandro Colite (11), and Angelica Pestilos (10), whose deaths had been linked to Dengvaxia — also filed charges.
On April 19, 2018, PAO filed criminal complaints before the Department of Justice (DOJ), including the incumbent Health Secretary Francisco Duque III, following the death of the 13-year-old girl after receiving Dengvaxia on November 17, 2017. Duque described the charges against him as "malicious and oppressive" and he also said that he has nothing to do with the implementation of the dengue immunization program since he was seated as the secretary in October 2017.
Gordon's draft report
On April 15, 2018, Gordon said he expected at least 10 senators to sign his report holding former President Benigno Aquino III and other officials liable. Senator Panfilo Lacson said he would not sign the report due to "unreasonable comments" about him. By April 17, aside from Gordon, who had already signed the report, Senators Ralph Recto, Manny Pacquiao, Win Gatchalian, Tito Sotto, Gregorio Honasan, Migz Zubiri, JV Ejercito, Nancy Binay, and Grace Poe had signed. On April 20, Senator Sonny Angara also signed.
Aftermath
Approximately 800,000 schoolchildren received the Dengvaxia vaccine and benefited from the protection it grants against dengue fever. However, around 10% of those 800,000 had not had dengue fever before and were therefore at risk of more severe infection because of the vaccine.
In the Philippines, the Dengvaxia controversy has contributed to overall vaccine hesitancy because of heightened concerns about vaccine safety. While concerns about vaccine safety are often unfounded, in the case of Dengvaxia there was a basis in evidence. Many parents of children who died blamed the vaccine.
Most of the deaths were caused by internal bleeding in the heart, lungs and brain, which are symptoms of hemorrhagic dengue.
According to the DOH, 729,105 grade 4 students from selected regions were covered by the first phase of the program. Of these, 534,303 students had approved parental consent, but only 491,990 students received the first dose of the vaccine.
Effects on COVID-19 vaccination program
A study published by the University of the Philippines College of Medicine directly attributes the Dengvaxia controversy as one of the major factors for the vaccine hesitancy of Filipinos affecting the COVID-19 vaccination program.
Reactions
Citizens, as well as Senator Ejercito, expressed frustration on February 5, 2018, blaming the PAO for panic in dengue vaccination. Attorney Persida Acosta of the PAO said that the PAO should not be blamed for that panic but Sanofi Pasteur itself.
Allegations of corruption
Employees of the Public Attorneys Office have asked the Office of the Ombudsman to issue a preventive suspension order against PAO chief Persida Acosta and her forensics chief Dr. Erwin Erfe for alleged corruption in the agency. It was alleged that Acosta has two "loyal" certified public accountants named Lira Hosea Suangco and Maveric Sales who are tasked to maintain office supplies such as bond paper to be used for the Dengvaxia cases. The funds, however, were used for other purposes. It was also alleged that Acosta and Erfe are using PAO funds to purchase tarpaulins, t-shirts, and coffins to be used in rallies.
In January 2021, the Ombudsman cleared Acosta and Erfe of criminal and administrative charges relating to the Dengvaxia issue, saying there was "no probable cause for malversation of public funds or property and illegal use of public funds or property."
See also
2019 Philippines measles outbreak – attributed to the aftermath of the controversy
References
Vaccine controversies
2017 in the Philippines
2018 in the Philippines
2017 controversies
2018 controversies
Health disasters in the Philippines
2018 health disasters | Dengvaxia controversy | Chemistry,Biology | 2,255 |
17,973 | https://en.wikipedia.org/wiki/Liquid%20crystal | Liquid crystal (LC) is a state of matter whose properties are between those of conventional liquids and those of solid crystals. For example, a liquid crystal can flow like a liquid, but its molecules may be oriented in a common direction as in a solid. There are many types of LC phases, which can be distinguished by their optical properties (such as textures). The contrasting textures arise due to molecules within one area of material ("domain") being oriented in the same direction but different areas having different orientations. An LC material may not always be in an LC state of matter (just as water may be ice or water vapor).
Liquid crystals can be divided into three main types: thermotropic, lyotropic, and metallotropic. Thermotropic and lyotropic liquid crystals consist mostly of organic molecules, although a few minerals are also known. Thermotropic LCs exhibit a phase transition into the LC phase as temperature changes. Lyotropic LCs exhibit phase transitions as a function of both temperature and concentration of molecules in a solvent (typically water). Metallotropic LCs are composed of both organic and inorganic molecules; their LC transition additionally depends on the inorganic-organic composition ratio.
Examples of LCs exist both in the natural world and in technological applications. Lyotropic LCs abound in living systems; many proteins and cell membranes are LCs, as is the tobacco mosaic virus. LCs in the mineral world include solutions of soap and various related detergents, as well as some clays. Widespread liquid-crystal displays (LCDs) use liquid crystals.
History
In 1888, Austrian botanical physiologist Friedrich Reinitzer, working at the Karl-Ferdinands-Universität, examined the physico-chemical properties of various derivatives of cholesterol which now belong to the class of materials known as cholesteric liquid crystals. Previously, other researchers had observed distinct color effects when cooling cholesterol derivatives just above the freezing point, but had not associated them with a new phenomenon. Reinitzer perceived that the color changes in the derivative cholesteryl benzoate were not its most peculiar feature. He found that cholesteryl benzoate does not melt in the same manner as other compounds, but has two melting points. At 145.5 °C it melts into a cloudy liquid, and at 178.5 °C it melts again and the cloudy liquid becomes clear. The phenomenon is reversible. Seeking help from a physicist, on March 14, 1888, he wrote to Otto Lehmann, at that time a Privatdozent in Aachen. They exchanged letters and samples. Lehmann examined the intermediate cloudy fluid, and reported seeing crystallites. Reinitzer's Viennese colleague von Zepharovich also indicated that the intermediate "fluid" was crystalline. The exchange of letters with Lehmann ended on April 24, with many questions unanswered. Reinitzer presented his results, with credits to Lehmann and von Zepharovich, at a meeting of the Vienna Chemical Society on May 3, 1888.
By that time, Reinitzer had discovered and described three important features of cholesteric liquid crystals (the name coined by Otto Lehmann in 1904): the existence of two melting points, the reflection of circularly polarized light, and the ability to rotate the polarization direction of light.
After his accidental discovery, Reinitzer did not pursue studying liquid crystals further. The research was continued by Lehmann, who realized that he had encountered a new phenomenon and was in a position to investigate it: In his postdoctoral years he had acquired expertise in crystallography and microscopy. Lehmann started a systematic study, first of cholesteryl benzoate, and then of related compounds which exhibited the double-melting phenomenon. He was able to make observations in polarized light, and his microscope was equipped with a hot stage (sample holder equipped with a heater) enabling high temperature observations. The intermediate cloudy phase clearly sustained flow, but other features, particularly the signature under a microscope, convinced Lehmann that he was dealing with a solid. By the end of August 1889 he had published his results in the Zeitschrift für Physikalische Chemie.
Lehmann's work was continued and significantly expanded by the German chemist Daniel Vorländer, who from the beginning of the 20th century until he retired in 1935, had synthesized most of the liquid crystals known. However, liquid crystals were not popular among scientists and the material remained a pure scientific curiosity for about 80 years.
After World War II, work on the synthesis of liquid crystals was restarted at university research laboratories in Europe. George William Gray, a prominent researcher of liquid crystals, began investigating these materials in England in the late 1940s. His group synthesized many new materials that exhibited the liquid crystalline state and developed a better understanding of how to design molecules that exhibit the state. His book Molecular Structure and the Properties of Liquid Crystals became a guidebook on the subject. One of the first U.S. chemists to study liquid crystals was Glenn H. Brown, starting in 1953 at the University of Cincinnati and later at Kent State University. In 1965, he organized the first international conference on liquid crystals, in Kent, Ohio, with about 100 of the world's top liquid crystal scientists in attendance. This conference marked the beginning of a worldwide effort to perform research in this field, which soon led to the development of practical applications for these unique materials.
Liquid crystal materials became a focus of research in the development of flat panel electronic displays beginning in 1962 at RCA Laboratories. When physical chemist Richard Williams applied an electric field to a thin layer of a nematic liquid crystal at 125 °C, he observed the formation of a regular pattern that he called domains (now known as Williams Domains). This led his colleague George H. Heilmeier to perform research on a liquid crystal-based flat panel display to replace the cathode ray vacuum tube used in televisions. But the para-azoxyanisole that Williams and Heilmeier used exhibits the nematic liquid crystal state only above 116 °C, which made it impractical to use in a commercial display product. A material that could be operated at room temperature was clearly needed.
In 1966, Joel E. Goldmacher and Joseph A. Castellano, research chemists in Heilmeier's group at RCA, discovered that mixtures made exclusively of nematic compounds that differed only in the number of carbon atoms in the terminal side chains could yield room-temperature nematic liquid crystals. A ternary mixture of Schiff base compounds resulted in a material that had a nematic range of 22–105 °C. Operation at room temperature enabled the first practical display device to be made. The team then proceeded to prepare numerous mixtures of nematic compounds, many of which had much lower melting points. This technique of mixing nematic compounds to obtain a wide operating temperature range eventually became the industry standard and is still used to tailor materials to meet specific applications.
In 1969, Hans Kelker succeeded in synthesizing a substance that had a nematic phase at room temperature, N-(4-methoxybenzylidene)-4-butylaniline (MBBA), which is one of the most popular subjects of liquid crystal research. The next step toward the commercialization of liquid-crystal displays was the synthesis of further chemically stable substances (cyanobiphenyls) with low melting temperatures by George Gray. That work with Ken Harrison and the UK MOD (RRE Malvern), in 1973, led to the design of new materials, resulting in the rapid adoption of small-area LCDs within electronic products.
These molecules are rod-shaped, some created in the laboratory and some appearing spontaneously in nature. Since then, two new types of LC molecules have been synthesized: disc-shaped (by Sivaramakrishna Chandrasekhar in India in 1977) and cone or bowl shaped (predicted by Lui Lam in China in 1982 and synthesized in Europe in 1985).
In 1991, when liquid crystal displays were already well established, Pierre-Gilles de Gennes working at the Université Paris-Sud received the Nobel Prize in physics "for discovering that methods developed for studying order phenomena in simple systems can be generalized to more complex forms of matter, in particular to liquid crystals and polymers".
Design of liquid crystalline materials
A large number of chemical compounds are known to exhibit one or several liquid crystalline phases. Despite significant differences in chemical composition, these molecules have some common features in chemical and physical properties. There are three types of thermotropic liquid crystals: discotic, conic (bowlic), and rod-shaped molecules. Discotics are disc-like molecules consisting of a flat core of adjacent aromatic rings, whereas the core in a conic LC is not flat, but is shaped like a rice bowl (a three-dimensional object). This allows for two dimensional columnar ordering, for both discotic and conic LCs. Rod-shaped molecules have an elongated, anisotropic geometry which allows for preferential alignment along one spatial direction.
The molecular shape should be relatively thin, flat or conic, especially within rigid molecular frameworks.
The molecular length should be at least 1.3 nm, consistent with the presence of long alkyl group on many room-temperature liquid crystals.
The structure should not be branched or angular, except for the conic LC.
A low melting point is preferable in order to avoid metastable, monotropic liquid crystalline phases. Low-temperature mesomorphic behavior in general is technologically more useful, and alkyl terminal groups promote this.
An extended, structurally rigid, highly anisotropic shape seems to be the main criterion for liquid crystalline behavior, and as a result many liquid crystalline materials are based on benzene rings.
Liquid-crystal phases
The various liquid-crystal phases (called mesophases together with plastic crystal phases) can be characterized by the type of ordering. One can distinguish positional order (whether molecules are arranged in any sort of ordered lattice) and orientational order (whether molecules are mostly pointing in the same direction). Liquid crystals are characterized by orientational order, but only partial or completely absent positional order. In contrast, materials with positional order but no orientational order are known as plastic crystals. Most thermotropic LCs will have an isotropic phase at high temperature: heating will eventually drive them into a conventional liquid phase characterized by random and isotropic molecular ordering and fluid-like flow behavior. Under other conditions (for instance, lower temperature), a LC might inhabit one or more phases with significant anisotropic orientational structure and short-range orientational order while still having an ability to flow.
The ordering of liquid crystals extends up to the entire domain size, which may be on the order of micrometers, but usually not to the macroscopic scale as often occurs in classical crystalline solids. However some techniques, such as the use of boundaries or an applied electric field, can be used to enforce a single ordered domain in a macroscopic liquid crystal sample. The orientational ordering in a liquid crystal might extend along only one dimension, with the material being essentially disordered in the other two directions.
Thermotropic liquid crystals
Thermotropic phases are those that occur in a certain temperature range. If the temperature rise is too high, thermal motion will destroy the delicate cooperative ordering of the LC phase, pushing the material into a conventional isotropic liquid phase. At too low temperature, most LC materials will form a conventional crystal. Many thermotropic LCs exhibit a variety of phases as temperature is changed. For instance, a particular type of LC molecule (called a mesogen) may exhibit various smectic phases followed by the nematic phase and finally the isotropic phase as temperature is increased. An example of a compound displaying thermotropic LC behavior is para-azoxyanisole.
Nematic phase
The simplest liquid crystal phase is the nematic. In a nematic phase, organic molecules lack a crystalline positional order, but do self-align with their long axes roughly parallel. The molecules are free to flow and their center of mass positions are randomly distributed as in a liquid, but their orientation is constrained to form a long-range directional order.
The word nematic comes from the Greek νῆμα (nēma), which means "thread". This term originates from the disclinations: thread-like topological defects observed in nematic phases.
Nematics also exhibit so-called "hedgehog" topological defects. In two dimensions, there are topological defects with topological charges +1/2 and −1/2. Due to hydrodynamics, the +1/2 defect moves considerably faster than the −1/2 defect. When placed close to each other, the defects attract; upon collision, they annihilate.
Most nematic phases are uniaxial: they have one axis (called a directrix) that is longer and preferred, with the other two being equivalent (can be approximated as cylinders or rods). However, some liquid crystals are biaxial nematic, meaning that in addition to orienting their long axis, they also orient along a secondary axis. Nematic crystals have fluidity similar to that of ordinary (isotropic) liquids but they can be easily aligned by an external magnetic or electric field. Aligned nematics have the optical properties of uniaxial crystals and this makes them extremely useful in liquid-crystal displays (LCD).
Nematic phases are also known in non-molecular systems: at high magnetic fields, electrons flow in bundles or stripes to create an "electronic nematic" form of matter.
Smectic phases
The smectic phases, which are found at lower temperatures than the nematic, form well-defined layers that can slide over one another in a manner similar to that of soap. The word "smectic" originates from the Latin word "smecticus", meaning cleaning, or having soap-like properties.
The smectics are thus positionally ordered along one direction. In the smectic A phase, the molecules are oriented along the layer normal, while in the smectic C phase they are tilted away from it. These phases are liquid-like within the layers. There are many different smectic phases, all characterized by different types and degrees of positional and orientational order. Beyond organic molecules, smectic ordering has also been reported to occur within colloidal suspensions of 2-D materials or nanosheets. One example of a smectic LC is p,p′-dinonylazobenzene.
Chiral phases or twisted nematics
The chiral nematic phase exhibits chirality (handedness). This phase is often called the cholesteric phase because it was first observed for cholesterol derivatives. Only chiral molecules can give rise to such a phase. This phase exhibits a twisting of the molecules perpendicular to the director, with the molecular axis parallel to the director. The finite twist angle between adjacent molecules is due to their asymmetric packing, which results in longer-range chiral order. In the smectic C* phase (an asterisk denotes a chiral phase), the molecules have positional ordering in a layered structure (as in the other smectic phases), with the molecules tilted by a finite angle with respect to the layer normal. The chirality induces a finite azimuthal twist from one layer to the next, producing a spiral twisting of the molecular axis along the layer normal, hence they are also called twisted nematics.
The chiral pitch, p, refers to the distance over which the LC molecules undergo a full 360° twist (but note that the structure of the chiral nematic phase repeats itself every half-pitch, since in this phase directors at 0° and ±180° are equivalent). The pitch, p, typically changes when the temperature is altered or when other molecules are added to the LC host (an achiral LC host material will form a chiral phase if doped with a chiral material), allowing the pitch of a given material to be tuned accordingly. In some liquid crystal systems, the pitch is of the same order as the wavelength of visible light. This causes these systems to exhibit unique optical properties, such as Bragg reflection and low-threshold laser emission, and these properties are exploited in a number of optical applications. For the case of Bragg reflection only the lowest-order reflection is allowed if the light is incident along the helical axis, whereas for oblique incidence higher-order reflections become permitted. Cholesteric liquid crystals also exhibit the unique property that they reflect circularly polarized light when it is incident along the helical axis and elliptically polarized if it comes in obliquely.
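For light incident along the helical axis, the center of the reflection band sits near the average refractive index times the pitch, and the bandwidth is set by the birefringence. A minimal numerical sketch of this relation (the pitch and indices below are illustrative assumptions, not values from the article):

```python
# Selective (Bragg) reflection of a cholesteric liquid crystal for light
# incident along the helical axis: central wavelength lambda0 ~ n_avg * p,
# bandwidth delta_lambda ~ delta_n * p. All numbers are illustrative.

def cholesteric_reflection(pitch_nm, n_o, n_e):
    """Return (central wavelength, bandwidth) in nm at normal incidence."""
    n_avg = (n_o + n_e) / 2.0
    return n_avg * pitch_nm, abs(n_e - n_o) * pitch_nm

# A ~350 nm pitch with typical indices reflects in the green-yellow.
lam0, dlam = cholesteric_reflection(pitch_nm=350.0, n_o=1.5, n_e=1.7)
print(f"central wavelength ~ {lam0:.0f} nm, bandwidth ~ {dlam:.0f} nm")
```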
Blue phases
Blue phases are liquid crystal phases that appear in the temperature range between a chiral nematic phase and an isotropic liquid phase. Blue phases have a regular three-dimensional cubic structure of defects with lattice periods of several hundred nanometers, and thus they exhibit selective Bragg reflections in the wavelength range of visible light corresponding to the cubic lattice. It was theoretically predicted in 1981 that these phases can possess icosahedral symmetry similar to quasicrystals.
Although blue phases are of interest for fast light modulators or tunable photonic crystals, they exist in a very narrow temperature range, usually less than a few kelvins. Recently, the stabilization of blue phases over a temperature range of more than 60 K, including room temperature (260–326 K), has been demonstrated. Blue phases stabilized at room temperature allow electro-optical switching with response times of the order of 10⁻⁴ s. In May 2008, the first blue phase mode LCD panel was developed.
Blue phase crystals, being a periodic cubic structure with a bandgap in the visible wavelength range, can be considered as 3D photonic crystals. Producing ideal blue phase crystals in large volumes is still problematic, since the produced crystals are usually polycrystalline (platelet structure) or the single crystal size is limited (in the micrometer range). Recently, blue phases obtained as ideal 3D photonic crystals in large volumes have been stabilized and produced with different controlled crystal lattice orientations.
Discotic phases
Disk-shaped LC molecules can orient themselves in a layer-like fashion known as the discotic nematic phase. If the disks pack into stacks, the phase is called a discotic columnar. The columns themselves may be organized into rectangular or hexagonal arrays. Chiral discotic phases, similar to the chiral nematic phase, are also known.
Conic phases
Conic LC molecules, like discotic ones, can form columnar phases. Other phases, such as nonpolar nematic, polar nematic, stringbean, donut and onion phases, have been predicted. Conic phases, except the nonpolar nematic, are polar phases.
Lyotropic liquid crystals
A lyotropic liquid crystal consists of two or more components that exhibit liquid-crystalline properties in certain concentration ranges. In the lyotropic phases, solvent molecules fill the space around the compounds to provide fluidity to the system. In contrast to thermotropic liquid crystals, these lyotropics have another degree of freedom of concentration that enables them to induce a variety of different phases.
A compound that has two immiscible hydrophilic and hydrophobic parts within the same molecule is called an amphiphilic molecule. Many amphiphilic molecules show lyotropic liquid-crystalline phase sequences depending on the volume balances between the hydrophilic part and hydrophobic part. These structures are formed through the micro-phase segregation of two incompatible components on a nanometer scale. Soap is an everyday example of a lyotropic liquid crystal.
The content of water or other solvent molecules changes the self-assembled structures. At very low amphiphile concentration, the molecules will be dispersed randomly without any ordering. At slightly higher (but still low) concentration, amphiphilic molecules will spontaneously assemble into micelles or vesicles. This is done so as to 'hide' the hydrophobic tail of the amphiphile inside the micelle core, exposing a hydrophilic (water-soluble) surface to aqueous solution. These spherical objects do not order themselves in solution, however. At higher concentration, the assemblies will become ordered. A typical phase is a hexagonal columnar phase, where the amphiphiles form long cylinders (again with a hydrophilic surface) that arrange themselves into a roughly hexagonal lattice. This is called the middle soap phase. At still higher concentration, a lamellar phase (neat soap phase) may form, wherein extended sheets of amphiphiles are separated by thin layers of water. For some systems, a cubic (also called viscous isotropic) phase may exist between the hexagonal and lamellar phases, wherein spheres are formed that create a dense cubic lattice. These spheres may also be connected to one another, forming a bicontinuous cubic phase.
The objects created by amphiphiles are usually spherical (as in the case of micelles), but may also be disc-like (bicelles), rod-like, or biaxial (all three micelle axes are distinct). These anisotropic self-assembled nano-structures can then order themselves in much the same way as thermotropic liquid crystals do, forming large-scale versions of all the thermotropic phases (such as a nematic phase of rod-shaped micelles).
For some systems, at high concentrations, inverse phases are observed. That is, one may generate an inverse hexagonal columnar phase (columns of water encapsulated by amphiphiles) or an inverse micellar phase (a bulk liquid crystal sample with spherical water cavities).
A generic progression of phases, going from low to high amphiphile concentration, is:
Discontinuous cubic phase (micellar cubic phase)
Hexagonal phase (hexagonal columnar phase) (middle phase)
Lamellar phase
Bicontinuous cubic phase
Reverse hexagonal columnar phase
Inverse cubic phase (Inverse micellar phase)
Even within the same phases, their self-assembled structures are tunable by the concentration: for example, in lamellar phases, the layer distances increase with the solvent volume. Since lyotropic liquid crystals rely on a subtle balance of intermolecular interactions, it is more difficult to analyze their structures and properties than those of thermotropic liquid crystals.
Similar phases and characteristics can be observed in immiscible diblock copolymers.
Metallotropic liquid crystals
Liquid crystal phases can also be based on low-melting inorganic phases like ZnCl2 that have a structure formed of linked tetrahedra and easily form glasses. The addition of long chain soap-like molecules leads to a series of new phases that show a variety of liquid crystalline behavior both as a function of the inorganic-organic composition ratio and of temperature. This class of materials has been named metallotropic.
Laboratory analysis of mesophases
Thermotropic mesophases are detected and characterized by two major methods. The original method is thermal optical microscopy, in which a small sample of the material is placed between two crossed polarizers and then heated and cooled. As the isotropic phase does not significantly affect the polarization of the light, it appears very dark, whereas the crystal and liquid crystal phases both polarize the light in a uniform way, leading to brightness and color gradients. This method allows the particular phase to be characterized, as the different phases are defined by their particular order, which must be observed. The second method, differential scanning calorimetry (DSC), allows more precise determination of phase transitions and transition enthalpies. In DSC, a small sample is heated in a way that generates a very precise change in temperature with respect to time. During phase transitions, the heat flow required to maintain this heating or cooling rate changes. These changes can be observed and attributed to various phase transitions, such as key liquid crystal transitions.
Lyotropic mesophases are analyzed in a similar fashion, though these experiments are somewhat more complex, as the concentration of mesogen is a key factor. These experiments are run at various concentrations of mesogen in order to analyze that impact.
Biological liquid crystals
Lyotropic liquid-crystalline phases are abundant in living systems, the study of which is referred to as lipid polymorphism. Accordingly, lyotropic liquid crystals attract particular attention in the field of biomimetic chemistry. In particular, biological membranes and cell membranes are a form of liquid crystal. Their constituent molecules (e.g. phospholipids) are perpendicular to the membrane surface, yet the membrane is flexible. These lipids vary in shape (see page on lipid polymorphism). The constituent molecules can inter-mingle easily, but tend not to leave the membrane due to the high energy requirement of this process. Lipid molecules can flip from one side of the membrane to the other, this process being catalyzed by flippases and floppases (depending on the direction of movement). These liquid crystal membrane phases can also host important proteins such as receptors freely "floating" inside, or partly outside, the membrane, e.g. CTP:phosphocholine cytidylyltransferase (CCT).
Many other biological structures exhibit liquid-crystal behavior. For instance, the concentrated protein solution that is extruded by a spider to generate silk is, in fact, a liquid crystal phase. The precise ordering of molecules in silk is critical to its renowned strength. DNA and many polypeptides, including actively-driven cytoskeletal filaments, can also form liquid crystal phases. Monolayers of elongated cells have also been described to exhibit liquid-crystal behavior, and the associated topological defects have been associated with biological consequences, including cell death and extrusion. Together, these biological applications of liquid crystals form an important part of current academic research.
Mineral liquid crystals
Examples of liquid crystals can also be found in the mineral world, most of them being lyotropic. The first discovered was vanadium(V) oxide, by Zocher in 1925. Since then, few others have been discovered and studied in detail. The existence of a true nematic phase in the case of the smectite clays family was raised by Langmuir in 1938, but remained an open question for a very long time and was only confirmed recently.
With the rapid development of nanosciences, and the synthesis of many new anisotropic nanoparticles, the number of such mineral liquid crystals is increasing quickly, with, for example, carbon nanotubes and graphene. A lamellar phase was even discovered, H3Sb3P2O14, which exhibits hyperswelling up to ~250 nm for the interlamellar distance.
Pattern formation in liquid crystals
Anisotropy of liquid crystals is a property not observed in other fluids. This anisotropy makes flows of liquid crystals behave differently from those of ordinary fluids. For example, injection of a flux of a liquid crystal between two close parallel plates (viscous fingering) causes the orientation of the molecules to couple with the flow, with the resulting emergence of dendritic patterns. This anisotropy is also manifested in the interfacial energy (surface tension) between different liquid crystal phases. This anisotropy determines the equilibrium shape at the coexistence temperature, and is so strong that usually facets appear. When the temperature is changed, one of the phases grows, forming different morphologies depending on the temperature change. Since growth is controlled by heat diffusion, anisotropy in thermal conductivity favors growth in specific directions, which also affects the final shape.
Theoretical treatment of liquid crystals
Microscopic theoretical treatment of fluid phases can become quite complicated, owing to the high material density, meaning that strong interactions, hard-core repulsions, and many-body correlations cannot be ignored. In the case of liquid crystals, anisotropy in all of these interactions further complicates analysis. There are a number of fairly simple theories, however, that can at least predict the general behavior of the phase transitions in liquid crystal systems.
Director
As we already saw above, nematic liquid crystals are composed of rod-like molecules with the long axes of neighboring molecules aligned approximately parallel to one another. To describe this anisotropic structure, a dimensionless unit vector n, called the director, is introduced to represent the direction of preferred orientation of the molecules in the neighborhood of any point. Because there is no physical polarity along the director axis, n and −n are fully equivalent.
Order parameter
The description of liquid crystals involves an analysis of order. A second-rank symmetric traceless tensor order parameter, the Q tensor, is used to describe the orientational order of the most general biaxial nematic liquid crystal. However, to describe the more common case of uniaxial nematic liquid crystals, a scalar order parameter is sufficient. To make this quantitative, an orientational order parameter is usually defined based on the average of the second Legendre polynomial:

$S = \langle P_2(\cos\theta) \rangle = \left\langle \frac{3\cos^2\theta - 1}{2} \right\rangle$

where $\theta$ is the angle between the liquid-crystal molecular axis and the local director (which is the 'preferred direction' in a volume element of a liquid crystal sample, also representing its local optical axis). The brackets denote both a temporal and spatial average. This definition is convenient, since for a completely random and isotropic sample S = 0, whereas for a perfectly aligned sample S = 1. For a typical liquid crystal sample, S is on the order of 0.3 to 0.8, and generally decreases as the temperature is raised. In particular, a sharp drop of the order parameter to 0 is observed when the system undergoes a phase transition from an LC phase into the isotropic phase. The order parameter can be measured experimentally in a number of ways; for instance, diamagnetism, birefringence, Raman scattering, NMR and EPR can be used to determine S.
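Because S is just an ensemble average of the second Legendre polynomial, it can be estimated directly from sampled molecular orientations. A minimal sketch checking the two limiting values quoted above (the angle samples are synthetic):

```python
import numpy as np

# Estimate the nematic order parameter S = <P2(cos theta)> from a set of
# angles theta between molecular axes and the director. The samples here
# are synthetic; in practice they would come from experiment or simulation.

rng = np.random.default_rng(0)

def order_parameter(theta):
    """S = <(3 cos^2(theta) - 1) / 2> averaged over the sample."""
    c = np.cos(theta)
    return np.mean((3.0 * c**2 - 1.0) / 2.0)

# Perfectly aligned sample (all angles zero): S = 1.
print(order_parameter(np.zeros(1000)))          # 1.0

# Isotropic sample: cos(theta) uniform in [-1, 1] gives axes uniform on
# the sphere, so S ~ 0.
theta_iso = np.arccos(rng.uniform(-1.0, 1.0, 100_000))
print(round(order_parameter(theta_iso), 3))     # ~ 0.0
```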
The order of a liquid crystal could also be characterized by using other even Legendre polynomials (all the odd polynomials average to zero since the director can point in either of two antiparallel directions). These higher-order averages are more difficult to measure, but can yield additional information about molecular ordering.
A positional order parameter is also used to describe the ordering of a liquid crystal. It is characterized by the variation of the density of the center of mass of the liquid crystal molecules along a given vector. In the case of positional variation along the z-axis, the density is often given by:

$\rho(z) = \rho_0 + \rho_1 \cos(q_s z - \varphi)$

The complex positional order parameter is defined as $\psi(\mathbf{r}) = \rho_1(\mathbf{r}) e^{i\varphi(\mathbf{r})}$, with $\rho_0$ the average density. Typically only the first two terms are kept and higher-order terms are ignored, since most phases can be described adequately using sinusoidal functions. For a perfect nematic $\psi = 0$, and for a smectic phase $\psi$ takes on complex values. The complex nature of this order parameter allows for many parallels between nematic-to-smectic phase transitions and conductor-to-superconductor transitions.
Onsager hard-rod model
A simple model which predicts lyotropic phase transitions is the hard-rod model proposed by Lars Onsager. This theory considers the volume excluded from the center-of-mass of one idealized cylinder as it approaches another. Specifically, if the cylinders are oriented parallel to one another, there is very little volume that is excluded from the center-of-mass of the approaching cylinder (it can come quite close to the other cylinder). If, however, the cylinders are at some angle to one another, then there is a large volume surrounding the cylinder which the approaching cylinder's center-of-mass cannot enter (due to the hard-rod repulsion between the two idealized objects). Thus, this angular arrangement sees a decrease in the net positional entropy of the approaching cylinder (there are fewer states available to it).
The fundamental insight here is that, whilst parallel arrangements of anisotropic objects lead to a decrease in orientational entropy, there is an increase in positional entropy. In some cases, therefore, greater positional order is entropically favorable. The theory thus predicts that a solution of rod-shaped objects will undergo a phase transition, at sufficient concentration, into a nematic phase. Although this model is conceptually helpful, its mathematical formulation makes several assumptions that limit its applicability to real systems. An extension of Onsager's theory was proposed by Flory to account for non-entropic effects.
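The geometric core of the argument is the excluded volume between two long hard rods, which for rods of length L and diameter D (with L much larger than D) crossing at angle γ scales as 2L²D|sin γ|; parallel rods exclude far less volume than perpendicular ones. A small sketch of that scaling (rod dimensions are arbitrary illustrative units):

```python
import numpy as np

# Excluded volume of two long hard rods (length L, diameter D, L >> D)
# whose axes make an angle gamma: V_excl ~ 2 * L^2 * D * |sin(gamma)|.
# The steep drop toward gamma = 0 is the entropic driving force for
# nematic ordering in Onsager's theory.

def excluded_volume(L, D, gamma_rad):
    return 2.0 * L**2 * D * abs(np.sin(gamma_rad))

L, D = 100.0, 1.0  # illustrative rod dimensions (arbitrary units)
for deg in (0, 30, 60, 90):
    v = excluded_volume(L, D, np.radians(deg))
    print(f"gamma = {deg:2d} deg -> excluded volume = {v:8.0f}")
```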
Maier–Saupe mean field theory
This statistical theory, proposed by Alfred Saupe and Wilhelm Maier, includes contributions from an attractive intermolecular potential arising from induced dipole moments between adjacent rod-like liquid crystal molecules. The anisotropic attraction stabilizes parallel alignment of neighboring molecules, and the theory then considers a mean-field average of the interaction. Solved self-consistently, this theory predicts thermotropic nematic-isotropic phase transitions, consistent with experiment. Maier–Saupe mean field theory has been extended to high molecular weight liquid crystals by incorporating the bending stiffness of the molecules and using the method of path integrals in polymer science.
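In its simplest form the theory reduces to a single self-consistency condition, S = ⟨P₂(cos θ)⟩, with the orientational weight exp(U₀ S P₂(cos θ)/k_BT). A minimal sketch solving it by fixed-point iteration (grid resolution, starting guess and the sampled temperatures are illustrative choices):

```python
import numpy as np

# Maier-Saupe self-consistency: S = <P2(cos theta)> with single-particle
# weight f(theta) ~ exp(U0 * S * P2(cos theta) / kT). The classic
# mean-field result is a weakly first-order transition near kT/U0 ~ 0.22,
# where S drops from roughly 0.43 to zero.

x = np.linspace(-1.0, 1.0, 2001)     # x = cos(theta)
P2 = 0.5 * (3.0 * x**2 - 1.0)

def solve_S(kT_over_U0, S0=0.8, iters=800):
    """Fixed-point iteration; on a uniform grid the dx factors cancel."""
    S = S0
    for _ in range(iters):
        w = np.exp(P2 * S / kT_over_U0)
        S = (P2 * w).sum() / w.sum()
    return S

for t in (0.18, 0.21, 0.22, 0.23):
    print(f"kT/U0 = {t:.2f} -> S = {solve_S(t):.3f}")  # S falls to ~0 above the transition
```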
McMillan's model
McMillan's model, proposed by William McMillan, is an extension of the Maier–Saupe mean field theory used to describe the phase transition of a liquid crystal from a nematic to a smectic A phase. It predicts that the phase transition can be either continuous or discontinuous depending on the strength of the short-range interaction between the molecules. As a result, it allows for a triple critical point where the nematic, isotropic, and smectic A phases meet. Although it predicts the existence of a triple critical point, it does not successfully predict its value. The model utilizes two order parameters that describe the orientational and positional order of the liquid crystal. The first is simply the average of the second Legendre polynomial, and the second order parameter is given by:

$\sigma = \left\langle \cos\!\left(\frac{2\pi z_i}{d}\right) \frac{3\cos^2\theta_i - 1}{2} \right\rangle$

The values $z_i$, $\theta_i$, and $d$ are the position of the molecule, the angle between the molecular axis and the director, and the layer spacing. The postulated potential energy of a single molecule is given by:

$U_i(\theta_i, z_i) = -U_0 \left( S + \alpha\sigma \cos\!\left(\frac{2\pi z_i}{d}\right) \right) \frac{3\cos^2\theta_i - 1}{2}$

Here the constant $\alpha$ quantifies the strength of the interaction between adjacent molecules. The potential is then used to derive the thermodynamic properties of the system, assuming thermal equilibrium. It results in two self-consistency equations that must be solved numerically, the solutions of which are the three stable phases of the liquid crystal.
Elastic continuum theory
In this formalism, a liquid crystal material is treated as a continuum; molecular details are entirely ignored. Rather, this theory considers perturbations to a presumed oriented sample. The distortions of the liquid crystal are commonly described by the Frank free energy density. One can identify three types of distortions that could occur in an oriented sample: (1) twists of the material, where neighboring molecules are forced to be angled with respect to one another, rather than aligned; (2) splay of the material, where bending occurs perpendicular to the director; and (3) bend of the material, where the distortion is parallel to the director and molecular axis. All three of these types of distortions incur an energy penalty. They are distortions that are induced by the boundary conditions at domain walls or the enclosing container. The response of the material can then be decomposed into terms based on the elastic constants corresponding to the three types of distortions. Elastic continuum theory is an effective tool for modeling liquid crystal devices and lipid bilayers.
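For reference, the Frank free energy density mentioned above has the standard textbook form (quoted as the general expression, not verbatim from this article), with elastic constants K₁, K₂ and K₃ penalizing splay, twist and bend respectively:

```latex
% Frank elastic free energy density of a nematic with director n
f_{\mathrm{Frank}} = \tfrac{1}{2} K_1 (\nabla \cdot \mathbf{n})^2
                   + \tfrac{1}{2} K_2 \left( \mathbf{n} \cdot (\nabla \times \mathbf{n}) \right)^2
                   + \tfrac{1}{2} K_3 \left| \mathbf{n} \times (\nabla \times \mathbf{n}) \right|^2
```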
External influences on liquid crystals
Scientists and engineers are able to use liquid crystals in a variety of applications because external perturbations can cause significant changes in the macroscopic properties of the liquid crystal system. Both electric and magnetic fields can be used to induce these changes. The magnitude of the fields, as well as the speed at which the molecules align, are important characteristics that industry deals with. Special surface treatments can be used in liquid crystal devices to force specific orientations of the director.
Electric and magnetic field effects
The ability of the director to align along an external field is caused by the electric nature of the molecules. Permanent electric dipoles result when one end of a molecule has a net positive charge while the other end has a net negative charge. When an external electric field is applied to the liquid crystal, the dipole molecules tend to orient themselves along the direction of the field.
Even if a molecule does not form a permanent dipole, it can still be influenced by an electric field. In some cases, the field produces slight re-arrangement of electrons and protons in molecules such that an induced electric dipole results. While not as strong as permanent dipoles, orientation with the external field still occurs.
The response of any system to an external electric field is

$D_i = \epsilon_0 E_i + P_i$

where $E_i$, $D_i$ and $P_i$ are the components of the electric field, electric displacement field and polarization density. The electric energy per volume stored in the system is

$f_{elec} = -\frac{1}{2} D_i E_i$

(summation over the doubly appearing index $i$). In nematic liquid crystals, the polarization and electric displacement both depend linearly on the direction of the electric field. The polarization should be even in the director $\mathbf{n}$, since liquid crystals are invariant under the reflection $\mathbf{n} \to -\mathbf{n}$. The most general form of the electric displacement is

$D_i = \epsilon_0 \epsilon_\perp E_i + \epsilon_0 (\epsilon_\parallel - \epsilon_\perp) n_i n_j E_j$

(summation over the index $j$), with $\epsilon_\parallel$ and $\epsilon_\perp$ the electric permittivity parallel and perpendicular to the director $\mathbf{n}$. The density of energy is then (ignoring the constant terms that do not contribute to the dynamics of the system)

$f_{elec} = -\frac{\epsilon_0 (\epsilon_\parallel - \epsilon_\perp)}{2} (\mathbf{n} \cdot \mathbf{E})^2$

If $\epsilon_\parallel - \epsilon_\perp$ is positive, the minimum of the energy is achieved when $\mathbf{E}$ and $\mathbf{n}$ are parallel. This means that the system will favor aligning the liquid crystal with the externally applied electric field. If $\epsilon_\parallel - \epsilon_\perp$ is negative, the minimum of the energy is achieved when $\mathbf{E}$ and $\mathbf{n}$ are perpendicular (in nematics the perpendicular orientation is degenerate, making possible the emergence of vortices).

The difference $\Delta\epsilon = \epsilon_\parallel - \epsilon_\perp$ is called the dielectric anisotropy and is an important parameter in liquid crystal applications. There are both $\Delta\epsilon > 0$ and $\Delta\epsilon < 0$ commercial liquid crystals; 5CB and the E7 liquid crystal mixture are two commonly used $\Delta\epsilon > 0$ liquid crystals, while MBBA is a common $\Delta\epsilon < 0$ material.
The effects of magnetic fields on liquid crystal molecules are analogous to electric fields. Because magnetic fields are generated by moving electric charges, permanent magnetic dipoles are produced by electrons moving about atoms. When a magnetic field is applied, the molecules will tend to align with or against the field. Electromagnetic radiation, e.g. UV-Visible light, can influence light-responsive liquid crystals which mainly carry at least a photo-switchable unit.
Surface preparations
In the absence of an external field, the director of a liquid crystal is free to point in any direction. It is possible, however, to force the director to point in a specific direction by introducing an outside agent to the system. For example, when a thin polymer coating (usually a polyimide) is spread on a glass substrate and rubbed in a single direction with a cloth, it is observed that liquid crystal molecules in contact with that surface align with the rubbing direction. The currently accepted mechanism for this is believed to be an epitaxial growth of the liquid crystal layers on the partially aligned polymer chains in the near surface layers of the polyimide.
Several liquid crystal chemicals also align to a 'command surface', which is in turn aligned by the electric field of polarized light. This process is called photoalignment.
Fréedericksz transition
The competition between orientation produced by surface anchoring and by electric field effects is often exploited in liquid crystal devices. Consider the case in which liquid crystal molecules are aligned parallel to the surface and an electric field is applied perpendicular to the cell. At first, as the electric field increases in magnitude, no change in alignment occurs. However at a threshold magnitude of electric field, deformation occurs. Deformation occurs where the director changes its orientation from one molecule to the next. The occurrence of such a change from an aligned to a deformed state is called a Fréedericksz transition and can also be produced by the application of a magnetic field of sufficient strength.
The Fréedericksz transition is fundamental to the operation of many liquid crystal displays because the director orientation (and thus the properties) can be controlled easily by the application of a field.
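For the splay geometry described above, the threshold expressed as a voltage depends on one elastic constant and the dielectric anisotropy but not on the cell thickness: V_th = π√(K₁/(ε₀Δε)). A minimal sketch with roughly 5CB-like constants (the numerical values are illustrative assumptions):

```python
import math

# Threshold voltage of the splay Freedericksz transition in a planar cell
# with positive dielectric anisotropy:
#     V_th = pi * sqrt(K1 / (eps0 * delta_eps))
# Material constants below are illustrative, roughly 5CB-like.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
K1 = 6.2e-12          # splay elastic constant, N (illustrative)
delta_eps = 11.5      # dielectric anisotropy (illustrative)

V_th = math.pi * math.sqrt(K1 / (EPS0 * delta_eps))
print(f"threshold voltage ~ {V_th:.2f} V")   # ~ 0.8 V
```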
Effect of chirality
As already described, chiral liquid-crystal molecules usually give rise to chiral mesophases. This means that the molecule must possess some form of asymmetry, usually a stereogenic center. An additional requirement is that the system not be racemic: a mixture of right- and left-handed molecules will cancel the chiral effect. Due to the cooperative nature of liquid crystal ordering, however, a small amount of chiral dopant in an otherwise achiral mesophase is often enough to select out one domain handedness, making the system overall chiral.
Chiral phases usually have a helical twisting of the molecules. If the pitch of this twist is on the order of the wavelength of visible light, then interesting optical interference effects can be observed. The chiral twisting that occurs in chiral LC phases also makes the system respond differently from right- and left-handed circularly polarized light. These materials can thus be used as polarization filters.
It is possible for chiral LC molecules to produce essentially achiral mesophases. For instance, in certain ranges of concentration and molecular weight, DNA will form an achiral line hexatic phase. An interesting recent observation is the formation of chiral mesophases from achiral LC molecules. Specifically, bent-core molecules (sometimes called banana liquid crystals) have been shown to form liquid crystal phases that are chiral. In any particular sample, various domains will have opposite handedness, but within any given domain, strong chiral ordering will be present. The appearance mechanism of this macroscopic chirality is not yet entirely clear. It appears that the molecules stack in layers and orient themselves in a tilted fashion inside the layers. These liquid crystal phases may be ferroelectric or anti-ferroelectric, both of which are of interest for applications.
Chirality can also be incorporated into a phase by adding a chiral dopant, which may not form LCs itself. Twisted-nematic or super-twisted nematic mixtures often contain a small amount of such dopants.
Applications of liquid crystals
Liquid crystals find wide use in liquid crystal displays, which rely on the optical properties of certain liquid crystalline substances in the presence or absence of an electric field. In a typical device, a liquid crystal layer (typically 4 μm thick) sits between two polarizers that are crossed (oriented at 90° to one another). The liquid crystal alignment is chosen so that its relaxed phase is a twisted one (see Twisted nematic field effect). This twisted phase reorients light that has passed through the first polarizer, allowing its transmission through the second polarizer (and reflection back to the observer if a reflector is provided). The device thus appears transparent. When an electric field is applied to the LC layer, the long molecular axes tend to align parallel to the electric field, gradually untwisting in the center of the liquid crystal layer. In this state, the LC molecules do not reorient light, so the light polarized at the first polarizer is absorbed at the second polarizer, and the device loses transparency with increasing voltage. In this way, the electric field can be used to make a pixel switch between transparent and opaque on command. Color LCD systems use the same technique, with color filters used to generate red, green, and blue pixels. Chiral smectic liquid crystals are used in ferroelectric LCDs, which are fast-switching binary light modulators. Similar principles can be used to make other liquid crystal based optical devices.
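The off-state optics of such a 90° twisted cell between crossed polarizers is often summarized by the Gooch–Tarry expression; the sketch below uses that standard textbook formula (not taken from this article), with illustrative cell parameters:

```python
import math

# Normal-incidence transmission of a 90-degree twisted nematic cell
# between crossed polarizers (Gooch-Tarry):
#     T = 1 - sin^2((pi/2) * sqrt(1 + u^2)) / (1 + u^2),  u = 2*d*dn/lambda
# T = 1 exactly at the Gooch-Tarry minima u = sqrt(3), sqrt(15), ...

def tn_transmission(d_um, delta_n, lam_um):
    u = 2.0 * d_um * delta_n / lam_um
    a = 1.0 + u * u
    return 1.0 - math.sin(0.5 * math.pi * math.sqrt(a)) ** 2 / a

# Choose the thickness for the first Gooch-Tarry minimum, 2*d*dn/lambda = sqrt(3).
lam, dn = 0.55, 0.1                       # wavelength (um) and birefringence
d = math.sqrt(3.0) * lam / (2.0 * dn)     # ~ 4.76 um
print(f"d = {d:.2f} um, off-state T = {tn_transmission(d, dn, lam):.4f}")  # T = 1.0000
```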
Liquid crystal tunable filters are used as electro-optical devices, e.g., in hyperspectral imaging.
Thermotropic chiral LCs whose pitch varies strongly with temperature can be used as crude liquid crystal thermometers, since the color of the material will change as the pitch is changed. Liquid crystal color transitions are used on many aquarium and pool thermometers as well as on thermometers for infants or baths. Other liquid crystal materials change color when stretched or stressed. Thus, liquid crystal sheets are often used in industry to look for hot spots, map heat flow, measure stress distribution patterns, and so on. Liquid crystal in fluid form is used to detect electrically generated hot spots for failure analysis in the semiconductor industry.
Liquid crystal lenses converge or diverge incident light by adjusting the refractive index of the liquid crystal layer with applied voltage or temperature. Generally, liquid crystal lenses generate a parabolic refractive index distribution by arranging molecular orientations; a plane wave is thus reshaped into a parabolic wavefront by the lens. The focal length of liquid crystal lenses can be continuously tuned when the external electric field is properly adjusted. Liquid crystal lenses are a kind of adaptive optics: imaging systems can benefit from them for focusing correction, image plane adjustment, or changing the range of depth-of-field or depth of focus. The liquid crystal lens is one of the candidates for vision correction devices for myopia and presbyopia (e.g., tunable eyeglasses and smart contact lenses). Being an optical phase modulator, a liquid crystal lens features a space-variant optical path length (i.e., optical path length as a function of its pupil coordinate). In different imaging systems, the required function of optical path length varies. For example, to converge a plane wave into a diffraction-limited spot with a physically planar liquid crystal structure, the refractive index profile of the liquid crystal layer should be spherical or paraboloidal under the paraxial approximation. For projecting images or sensing objects, a liquid crystal lens with an aspheric distribution of optical path length across its aperture may instead be required. Liquid crystal lenses with electrically tunable refractive index (addressed by different magnitudes of electric field across the liquid crystal layer) can in principle realize an arbitrary function of optical path length for modulating an incoming wavefront; current liquid crystal freeform optical elements have been extended from liquid crystal lenses using the same optical mechanisms. Applications of liquid crystal lenses include pico-projectors, prescription lenses (eyeglasses or contact lenses), smartphone cameras, augmented reality, and virtual reality.
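A thin-lens estimate follows directly from the parabolic profile: the maximum optical path difference between lens center and edge is d·Δn, and matching the parabolic profile r²/(2f) at the aperture edge gives f ≈ r²/(2dΔn). A minimal sketch (aperture, thickness and achievable Δn are illustrative assumptions):

```python
# Thin-lens estimate for a gradient-index liquid crystal lens with a
# parabolic optical-path-difference profile: OPD(r) = r^2 / (2 f), and the
# center-to-edge OPD is at most thickness * delta_n, hence
#     f ~ r^2 / (2 * thickness * delta_n).
# All parameters are illustrative.

def lc_lens_focal_length(aperture_radius_m, thickness_m, delta_n):
    return aperture_radius_m**2 / (2.0 * thickness_m * delta_n)

# 2 mm aperture radius, 50 um thick LC layer, index modulation up to 0.2:
f = lc_lens_focal_length(2e-3, 50e-6, 0.2)
print(f"focal length ~ {f:.2f} m")   # ~ 0.2 m; a larger delta_n shortens f
```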
Liquid crystal lasers use a liquid crystal in the lasing medium as a distributed feedback mechanism instead of external mirrors. Emission at a photonic bandgap created by the periodic dielectric structure of the liquid crystal gives a low-threshold high-output device with stable monochromatic emission.
Polymer dispersed liquid crystal (PDLC) sheets and rolls are available as adhesive backed Smart film which can be applied to windows and electrically switched between transparent and opaque to provide privacy.
Many common fluids, such as soapy water, are in fact liquid crystals. Soap forms a variety of LC phases depending on its concentration in water.
Liquid crystal films have revolutionized the world of technology. Currently they are used in the most diverse devices, such as digital clocks, mobile phones, calculators and televisions. The use of liquid crystal films in optical memory devices, with a process similar to the recording and reading of CDs and DVDs, may be possible.
Liquid crystals are also used as basic technology to imitate quantum computers, using electric fields to manipulate the orientation of the liquid crystal molecules, to store data and to encode a different value for every different degree of misalignment with other molecules.
See also
References
External links
Definitions of basic terms relating to low-molar-mass and polymer liquid crystals (IUPAC Recommendations 2001)
An intelligible introduction to liquid crystals from Case Western Reserve University
Liquid Crystal Physics tutorial from the Liquid Crystals Group, University of Colorado
Liquid Crystals & Photonics Group – Ghent University (Belgium) , good tutorial
Simulation of light propagation in liquid crystals, free program
Liquid Crystals Interactive Online
Liquid Crystal Institute Kent State University
Liquid Crystals a journal by Taylor&Francis
Molecular Crystals and Liquid Crystals a journal by Taylor & Francis
Hot-spot detection techniques for ICs
What are liquid crystals? from Chalmers University of Technology, Sweden
Progress in liquid crystal chemistry Thematic series in the Open Access Beilstein Journal of Organic Chemistry
DoITPoMS Teaching and Learning Package- "Liquid Crystals"
Bowlic liquid crystal from San Jose State University
Phase calibration of a Spatial Light Modulator
Soft matter
Optical materials
Phase transitions
Phases of matter | Liquid crystal | Physics,Chemistry,Materials_science | 10,071 |
50,960,623 | https://en.wikipedia.org/wiki/Nonpathogenic%20organisms | Nonpathogenic organisms are those that do not cause disease, harm or death to another organism. The term is usually used to describe bacteria, and it describes a property of a bacterium: its inability to cause disease. Most bacteria are nonpathogenic. The term can also describe non-disease-causing bacteria that normally reside on the surface of vertebrates and invertebrates as commensals. Some nonpathogenic microorganisms are commensals on and inside the body of animals and are called microbiota. Some of these same nonpathogenic microorganisms have the potential to cause disease, that is, to be pathogenic, if they enter the body, multiply and cause symptoms of infection. Immunocompromised individuals are especially vulnerable to bacteria that are typically nonpathogenic; because of a compromised immune system, disease occurs when these bacteria gain access to the body's interior. Genes have been identified that predispose a small number of persons to disease and infection with nonpathogenic bacteria. Nonpathogenic Escherichia coli strains normally found in the gastrointestinal tract have the ability to stimulate the immune response in humans, though further studies are needed to determine clinical applications.
A particular strain of bacteria can be nonpathogenic in one species but pathogenic in another. One species of bacterium can have many different types or strains. One strain of a bacterium species can be nonpathogenic and another strain of the same species can be pathogenic.
References
Bacteriology
Gram-positive bacteria
Gram-negative bacteria
Immune system | Nonpathogenic organisms | Biology | 314 |
960,826 | https://en.wikipedia.org/wiki/Messier%2023 | Messier 23, also known as NGC 6494, is an open cluster of stars in the northwest of the southern constellation of Sagittarius. It was discovered by Charles Messier in 1764. It can be found in good conditions with binoculars or a modestly sized telescope. It lies in front of "an extensive gas and dust network", with which it may have no physical association. It lies within 5° of the Sun's position in mid-December, so it can be occulted by the Moon.
The cluster is centered about 2,050 light years away. Estimates for the number of its members range from 169 up to 414, with a directly-counted mass of ; by application of the virial theorem. The cluster is around 330 million years old, with a near-solar metallicity of [Fe/H] = −0.04. The brightest component (lucida) is of magnitude 9.3. Five of the cluster members are candidate red giants, while the orange variable VV Sgr, in the far south of the cluster, is a candidate asymptotic giant branch star.
A 6th-magnitude foreground star, HD 163245 (HR 6679), figures in the far north-west of the field. Its parallax shift is , having taken proper motion into account, which means it is about away.
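The conversion behind "parallax shift, therefore distance" is d[pc] = 1/ϖ[arcsec]. A minimal sketch of the arithmetic (the parallax value used is hypothetical, since the actual figure is missing from the text above):

```python
# Distance from trigonometric parallax: d [parsec] = 1 / (parallax [arcsec]).
# The 4 mas parallax below is hypothetical, purely to show the arithmetic.

PC_IN_LY = 3.2616                  # light-years per parsec

def parallax_to_distance(parallax_mas):
    d_pc = 1000.0 / parallax_mas   # milliarcseconds -> parsecs
    return d_pc, d_pc * PC_IN_LY

d_pc, d_ly = parallax_to_distance(4.0)
print(f"{d_pc:.0f} pc ~ {d_ly:.0f} ly")   # 250 pc ~ 815 ly
```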
Gallery
See also
List of Messier objects
Footnotes and References
Footnotes
References
External links
Messier 23, SEDS Messier pages
Messier 023
Orion–Cygnus Arm
Messier 023
023
Messier 023
17640620
Discoveries by Charles Messier | Messier 23 | Astronomy | 338 |
42,095,159 | https://en.wikipedia.org/wiki/Sponge%20isolates | Lacking an immune system, protective shell, or mobility, sponges have developed the ability to synthesize a variety of unusual compounds for survival. C-nucleosides isolated from the Caribbean sponge Cryptotethya crypta were the basis for the synthesis of zidovudine (AZT), aciclovir (Cyclovir), cytarabine (Depocyt), and the cytarabine derivative gemcitabine (Gemzar).
Semisynthetic analogs of the sponge isolate jasplakinolide were submitted to the National Cancer Institute's Biological Evaluation Committee in 2011.
Other marine isolates
Trabectedin, aplidine, and didemnin were isolated from sea squirts. Monomethyl auristatin E is a derivative of dolastatin 10, a compound made by Dolabella auricularia. Bryostatins were first isolated from Bryozoa.
Salinosporamides are derived from Salinispora tropica. Ziconotide is derived from the sea snail Conus magus.
See also
Bacillus isolates
Biotechnology in pharmaceutical manufacturing
Fungal isolates
Marine pharmacognosy
Medicinal molds
Streptomyces isolates
References
Pharmaceutical isolates
Marine biology | Sponge isolates | Chemistry,Biology | 264 |
41,688,269 | https://en.wikipedia.org/wiki/List%20of%20dimensionless%20quantities | This is a list of well-known dimensionless quantities illustrating their variety of forms and applications. The tables also include pure numbers, dimensionless ratios, or dimensionless physical constants; these topics are discussed in the article.
Biology and medicine
Chemistry
Physics
Physical constants
Fluids and heat transfer
Solids
Optics
Other
Mathematics and statistics
Geography, geology and geophysics
Sport
Other fields
References
Bibliography | List of dimensionless quantities | Physics,Mathematics | 76 |
48,677,845 | https://en.wikipedia.org/wiki/2D%20silica | Two-dimensional silica (2D silica) is a layered polymorph of silicon dioxide. Two varieties of 2D silica, both of hexagonal crystal symmetry, have been grown so far on various metal substrates. One is based on SiO4 tetrahedra, which are covalently bonded to the substrate. The second comprises graphene-like fully saturated sheets, which interact with the substrate via weak van der Waals bonds. One sheet of the second 2D silica variety is also called hexagonal bilayer silica (HBS); it can have either ordered or disordered (amorphous) structure.
2D silica has potential applications in electronics as the thinnest gate dielectric. It can also be used for isolating graphene sheets from the substrate. 2D silica is a wide band gap semiconductor whose band gap and geometry can be engineered by an external electric field. It has been shown to be a member of the auxetic materials family, with a negative Poisson's ratio.
References
Two-dimensional nanomaterials
Silicon dioxide
Silica polymorphs | 2D silica | Materials_science,Engineering | 228 |
28,381,767 | https://en.wikipedia.org/wiki/Drosometer | A drosometer (from Classical Greek δρόσος, drosos, dew + μέτρον, metron, measure) is an apparatus for measuring the quantity of dew formed in a unit of time per unit area of surface.
Description
The surface may be either a horizontal metal plate or a leaf hanging naturally, or a bit of wool or cotton representing a large surface of fine fibres. The unit of time is usually one hour, and the measurement is made in the early morning, before the rising sun evaporates the dew. When the apparatus is made self-registering, the surface, with its accumulating dew, hangs at one end of a delicate balance, or from a delicate spiral metallic spring, and by its gradual sinking moves the index that makes the record on a moving sheet of paper.
Shortcomings
Although many forms have been suggested, yet none have been considered to give results that are comparable with each other from day to day, owing largely to the fact that the slightest change in the surface that receives the moisture alters the quantity of dew that is caught. Even the same bit of wool, when used day after day, changes its nature in this respect. If a metallic surface is used, its behavior must be compared frequently with a standard, partly because different metals have different properties, but principally because the same surface, when it becomes greasy, dirty, or scratched, has different properties.
Many peculiarities of the deposition of dew on different objects are explained in detail in the popular work by Charles Tomlinson, entitled The Dew-drop and the Mist (London, 1860). Moreover, the case of the natural deposition of dew on the grass and other plants very near the surface of the ground is not at all parallel to that where it is deposited upon metal plates or other bodies used as drosometers, partly because of the location, partly because of the difference in the substances, and largely because of the influence of slight local currents of air.
References
Meteorological instrumentation and equipment | Drosometer | Technology,Engineering | 399 |
964,630 | https://en.wikipedia.org/wiki/CATH%20database | The CATH Protein Structure Classification database is a free, publicly available online resource that provides information on the evolutionary relationships of protein domains. It was created in the mid-1990s by Professor Christine Orengo and colleagues including Janet Thornton and David Jones, and continues to be developed by the Orengo group at University College London. CATH shares many broad features with the SCOP resource, however there are also many areas in which the detailed classification differs greatly.
Hierarchical organization
Experimentally determined protein three-dimensional structures are obtained from the Protein Data Bank and split into their consecutive polypeptide chains, where applicable. Protein domains are identified within these chains using a mixture of automatic methods and manual curation.
The domains are then classified within the CATH structural hierarchy: at the Class (C) level, domains are assigned according to their secondary structure content, i.e. all alpha, all beta, a mixture of alpha and beta, or little secondary structure; at the Architecture (A) level, information on the secondary structure arrangement in three-dimensional space is used for assignment; at the Topology/fold (T) level, information on how the secondary structure elements are connected and arranged is used; assignments are made to the Homologous superfamily (H) level if there is good evidence that the domains are related by evolution i.e. they are homologous.
Additional sequence data for domains with no experimentally determined structures are provided by CATH's sister resource, Gene3D, which are used to populate the homologous superfamilies. Protein sequences from UniProtKB and Ensembl are scanned against CATH HMMs to predict domain sequence boundaries and make homologous superfamily assignments.
Releases
The CATH team releases new data both as daily snapshots, and official releases approximately annually. The latest release of CATH-Gene3D (v4.3) was released in December 2020 and consists of:
500,238 structural protein domain entries
151 million non-structural protein domain entries
5,481 homologous superfamily entries
212,872 functional family entries
Open-source software
CATH is an open-source software project whose developers build and maintain a number of open-source tools, which are publicly available on GitHub.
References
Protein structure databases
Protein structure
Protein folds
Protein classification
Protein superfamilies
University College London | CATH database | Chemistry,Biology | 479 |
55,055,036 | https://en.wikipedia.org/wiki/NGC%204993 | NGC 4993 (also catalogued as NGC 4994 in the New General Catalogue) is a lenticular galaxy located about 140 million light-years away in the constellation Hydra. It was discovered on 26 March 1789 by William Herschel and is a member of the NGC 4993 Group.
NGC 4993 was the site of GW170817, a collision of two neutron stars, the first astronomical event detected in both electromagnetic and gravitational radiation, a discovery given the Breakthrough of the Year award for 2017 by the journal Science. Detecting a gravitational wave event associated with the gamma-ray burst provided direct confirmation that binary neutron star collisions produce short gamma-ray bursts.
Physical characteristics
NGC 4993 has several concentric shells of stars and a large dust lane—with a diameter of approximately a few kiloparsecs—which surrounds the nucleus and is stretched out into an "s" shape. The dust lane appears to be connected to a smaller dust ring. These features in NGC 4993 may be the result of a recent merger with a gaseous late-type galaxy that occurred about 400 million years ago. However, Palmese et al. suggest that the galaxy involved in the merger was a gas-poor galaxy.
Dark matter content
NGC 4993 has a dark matter halo.
Globular clusters
NGC 4993 has an estimated population of 250 globular clusters.
The luminosity of NGC 4993 indicates that the globular cluster system surrounding the galaxy may be dominated by metal-poor globular clusters.
Supermassive black hole
NGC 4993 has a supermassive black hole with an estimated mass of roughly 80 to 100 million solar masses.
Galactic nucleus activity
The presence of weak [O III], [N II] and [S II] emission lines in the nucleus of NGC 4993 and the relatively high [N II]λ6583/Hα ratio suggest that NGC 4993 is a low-luminosity AGN (LLAGN). The activity may have been triggered by gas from the late-type galaxy as it merged with NGC 4993.
Neutron star merger observations
In August 2017, rumors circulated regarding a short gamma-ray burst designated GRB 170817A, of the type conjectured to be emitted in the collision of two neutron stars. On 16 October 2017, the LIGO and Virgo collaborations announced that they had detected a gravitational wave event, designated GW170817. The gravitational wave signal matched predictions for the merger of two neutron stars and arrived two seconds before the gamma-ray burst. The gravitational wave signal, which had a duration of about 100 seconds, was the first gravitational wave detection of the merger of two neutron stars.
An optical transient, AT 2017gfo (also known as SSS 17a), was detected in NGC 4993 11 hours after the gravitational wave and gamma-ray signals, allowing the location of the merger to be determined. The optical emission is thought to be due to a kilonova. The discovery of AT 2017gfo was the first observation (and first localisation) of an electromagnetic counterpart to a gravitational wave source.
GRB 170817A was a gamma-ray burst (GRB) detected by NASA's Fermi and ESA's INTEGRAL on 17 August 2017. Although only localized to a large area of the sky, it is believed to correspond to the other two observations, in part due to its arrival time 1.7 seconds after the GW event.
See also
Gravitational-wave astronomy
List of gamma-ray bursts
List of gravitational wave observations
Neil Gehrels Swift Observatory
Ultra-Fast Flash Observatory Pathfinder
References
External links
GRB 170817A – NASA/IPAC Extragalactic Database (NED)
GRB 170817A – Max Planck Institute for Extraterrestrial Physics (MPE)
GRB 170817A - INTEGRAL Science Data Center (ISDC)
The galaxy NGC 4993 in the constellation of Hydra Starmap
Active galaxies
Astronomical objects discovered in 1789
Elliptical galaxies
Lenticular galaxies
Shell galaxies
Hydra (constellation)
4993
45657 | NGC 4993 | Astronomy | 840 |
14,199,931 | https://en.wikipedia.org/wiki/Edeleanu%20process | The Edeleanu process is a type of extraction process in the petroleum refining industry, whereby liquid sulfur dioxide is used to extract aromatics from kerosene. Liquid SO2 selectively dissolves the aromatics leaving behind the low aromatic content kerosene as the finished product. It is named after the Romanian chemist Lazăr Edeleanu. The aromatic extract is then separated from SO2 through rectification and SO2 recirculation. Temperature is maintained at -20°C.
Some improvement can be effected by using a blend of sulfur dioxide and benzene; nowadays, more suitable solvents such as furfural, phenol and N-methyl-2-pyrrolidone are used instead.
References
Oil refining | Edeleanu process | Chemistry | 150 |
899,908 | https://en.wikipedia.org/wiki/Chess%20piece%20relative%20value | In chess, a relative value (or point value) is a standard value conventionally assigned to each piece. Piece valuations have no role in the rules of chess but are useful as an aid to evaluating a position.
The best-known system assigns 1 point to a pawn, 3 points to a knight or bishop, 5 points to a rook and 9 points to a queen. Valuation systems, however, provide only a rough guide; the true value of a piece can vary significantly depending on position.
Standard valuations
Piece values exist because calculating to checkmate in most positions is beyond reach even for top computers. Thus, players aim primarily to create a material advantage; to pursue this goal, it is normally helpful to quantitatively approximate the strength of an army of pieces. Such piece values are valid for, and conceptually averaged over, tactically "quiet" positions where immediate tactical gain of material will not happen.
The following table is the most common assignment of point values.
The oldest derivation of the standard values is due to the Modenese School (Ercole del Rio, Giambattista Lolli, and Domenico Lorenzo Ponziani) in the 18th century and is partially based on the earlier work of Pietro Carrera. The value of the king is undefined as it cannot be captured or traded during the course of the game. Chess engines usually assign the king an arbitrary large value such as 200 points or more to indicate that the inevitable loss of the king due to checkmate trumps all other considerations. During the endgame, as there is less danger of checkmate, the king will often assume a more active role. It is better at defending nearby pieces and pawns than the knight is and better at attacking them than the bishop is. Overall, this makes the king more powerful than a minor piece but less powerful than a rook, so its fighting value is about four points.
This system has some shortcomings: namely, combinations of pieces do not always equal the sum of their parts; for instance, two bishops on opposite colours are usually worth slightly more than a bishop plus a knight, and three minor pieces (nine points) are often slightly stronger than two rooks (ten points) or a queen (nine points). Chess-variant theorist Ralph Betza identified the 'leveling effect', which causes reduction of the value of stronger pieces in the presence of opponent weaker pieces, due to the latter interdicting access to part of the board for the former in order to prevent the value difference from evaporating by 1-for-1 trading. This effect causes 3 queens to badly lose against 7 knights (when both start behind a wall of pawns), even though the added piece values predict that the player with the seven knights is two knights short of equality. In a less exotic case, it explains why trading rooks in the presence of a queen-vs-3-minors imbalance favours the player with the queen, as the rooks hinder the movement of the queen more than of the minor pieces. Adding piece values is thus a first approximation, because piece cooperation must also be considered (e.g. opposite-coloured bishops cooperate very well) alongside each piece's mobility (e.g. a short-range piece far away from the action on a large board is almost worthless).
The evaluation of the pieces depends on many parameters. Edward Lasker stated that "It is difficult to compare the relative value of different pieces, as so much depends on the peculiarities of the position". Nevertheless, he valued the bishop and knight equally, the rook as worth a minor piece plus one or two pawns, and the queen as worth three minor pieces or two rooks. Larry Kaufman suggests the following values in the middlegame:
The bishop pair is worth 7.5 pawns – half a pawn more than the values of its constituent bishops combined. Although it would be a very theoretical situation, there is no such bonus for a pair of same-coloured bishops. Per investigations by H. G. Muller, three light-squared bishops and one dark-squared bishop would receive only a 0.5-point bonus, while two on each colour would receive a 1-point bonus. More imbalanced combinations like 3:0 or 4:0, however, were not tested. The position of each piece also makes a significant difference: pawns near the edges are worth less than those near the centre, pawns close to promotion are worth far more, pieces controlling the centre are worth more than average, trapped pieces are worth less, etc.
Alternative valuations
Although the 1-3-3-5-9 system of point totals is the most commonly given, many other systems of valuing pieces have been proposed. Several systems treat the bishop as slightly more powerful than a knight.
Note: Where a value for the king is given, this is used when considering piece development, its power in the endgame, etc.
Larry Kaufman's 2021 system
Larry Kaufman in 2021 gives a more detailed system based on his experience working with chess engines, depending on the presence or absence of queens. He uses "middlegame" to mean positions where both queens are on the board, "threshold" for positions where there is an imbalance (one queen versus none, or two queens versus one), and "endgame" for positions without queens. (Kaufman did not give the queen's value in the middlegame or endgame cases, since in these cases both sides have the same number of queens and their values cancel.)
The file of a pawn is also important, because this cannot change except by capture. According to Kaufman, the difference is small in the endgame (when queens are absent), but in the middlegame (when queens are present) the difference is substantial:
In conclusion:
an unpaired bishop is slightly stronger than a knight;
a knight is superior to three average pawns, even in the endgame (situations like three passed pawns, especially if they are connected, would be exceptions);
with queens on the board, a knight is worth four pawns (as commented by Vladimir Kramnik for a full board);
the bishop pair is an advantage (as one can hide from one bishop by fixing king and pawns on the opposite colour, but not from both), and this advantage increases in the endgame;
an extra rook is helpful in the "threshold" case, but not otherwise (because two rooks fighting against a queen benefit from the ability to defend each other, but minor pieces against a rook need a rook's help more than the rook needs the help of another rook);
a second queen has lower value than normal.
In the endgame:
R = B (unpaired) + 2P, and R > N + 2P (slightly); but if a rook is added on both sides, the situation favours the minor piece side
2N are only trivially better than R + P in the endgame (slightly worse if there are no other pieces), but adding a rook on both sides gives the knights a big advantage
2B ≈ R + 2P; adding a rook on both sides makes the bishops superior
R + 2B + P ≈ 2R + N
In the threshold case (queen versus other pieces):
Q ≥ 2R with all minor pieces still on the board, but Q + P = 2R with none of them (because the queen derives more advantage from cooperating with minor pieces than the rooks do)
Q > R + N (or unpaired B) + P, even if another pair of rooks is added
Q + minor ≈ R + 2B + P (slightly favouring the rook side)
3 minors > Q, especially if the minors include the bishop pair. The difference is about a pawn if rooks are still on the board (because in this case they help the minors more than the queen); with all rooks still on the board, 2B + N > Q + P (slightly).
In the middlegame case:
B > N (slightly)
N = 4P
The exchange is worth:
just under 2 pawns if it is unpaired R vs N, but less if the rook is paired, and a bit less still if the minor piece is an unpaired bishop
one pawn if it is paired R vs paired B
2B + P = R + N with extra rooks on the board
2N > R + 2P, especially with an extra pair of rooks
2B = R + 3P with extra rooks on the board
The above is written for around ten pawns on the board (a typical number); the value of the rooks goes down as pawns are added, and goes up as pawns are removed.
Finally, Kaufman proposes a simplified version that avoids decimals: use the traditional values P = 1, N = 3, B = 3+, and R = 5 with queens off the board, but use P = 1, N = 4, B = 4+, R = 6, Q = 11 when at least one player has a queen. The point is to show that two minor pieces equal rook and two pawns with queens on the board, but only rook and one pawn without queens.
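A minimal Python sketch of this simplified scheme; the small "+" bonus on the bishop is approximated here as +0.25 of a pawn, which is an illustrative assumption rather than a value stated above.

```python
# Kaufman's simplified piece values; "B+" bonus approximated as +0.25 (assumption).
MIDDLEGAME = {"P": 1, "N": 4, "B": 4.25, "R": 6, "Q": 11}  # at least one queen on board
ENDGAME    = {"P": 1, "N": 3, "B": 3.25, "R": 5}           # queens off the board

def material(pieces: dict, table: dict) -> float:
    """Total value of a piece census such as {'R': 1, 'P': 2}."""
    return sum(table[piece] * count for piece, count in pieces.items())

minors = {"N": 1, "B": 1}
# With queens on, two minors roughly equal rook and two pawns:
print(material(minors, MIDDLEGAME), material({"R": 1, "P": 2}, MIDDLEGAME))  # 8.25 8
# Without queens, they roughly equal rook and one pawn:
print(material(minors, ENDGAME), material({"R": 1, "P": 1}, ENDGAME))       # 6.25 6
```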
Hans Berliner's system
World Correspondence Chess Champion Hans Berliner gives the following valuations, based on experience and computer experiments:
There are adjustments for the rank and file of a pawn and adjustments for the pieces depending on how open or closed the position is. Bishops, rooks, and queens gain up to 10 percent more value in open positions and lose up to 20 percent in closed positions. Knights gain up to 50 percent in closed positions and lose up to 30 percent in the corners and edges of the board. The value of a good bishop may be at least 10 percent higher than that of a bad one.
There are different types of doubled pawns; see the diagram. White's doubled pawns on the b-file are the best situation in the diagram, since advancing the pawns and exchanging can get them un-doubled and mobile. The doubled b-pawn is worth 0.75 points. If the black pawn on a6 were on c6, it would not be possible to dissolve the doubled pawn, and it would be worth only 0.5 points. The doubled pawn on f2 is worth about 0.5 points. The second white pawn on the h-file is worth only 0.33 points, and additional pawns on the file would be worth only 0.2 points.
Changing valuations in the endgame
As already noted when the standard values were first formulated, the relative strength of the pieces will change as a game progresses to the endgame. Pawns gain value as their path towards promotion becomes clear, and strategy begins to revolve around either defending or capturing them before they can promote. Knights lose value as their unique mobility becomes a detriment to crossing an empty board. Rooks and (to a lesser extent) bishops gain value as their lines of movement and attack are less obstructed. Queens slightly lose value as their high mobility becomes less proportionally useful when there are fewer pieces to attack and defend. Some examples follow.
A queen versus two rooks
In the middlegame, they are equal
In the endgame, the two rooks are somewhat more powerful. With no other pieces on the board, two rooks are equal to a queen and a pawn
A rook versus two minor pieces
In the opening and middlegame, a rook and two pawns are weaker than two bishops; equal to or slightly weaker than a bishop and knight; and equal to two knights
In the endgame, a rook and one pawn are equal to two knights; and equal to or slightly weaker than a bishop and knight. A rook and two pawns are equal to two bishops.
Bishops are often more powerful than rooks in the opening. Rooks are usually more powerful than bishops in the middlegame, and rooks dominate the minor pieces in the endgame.
As the tables in Berliner's system show, the values of pawns change dramatically in the endgame. In the opening and middlegame, pawns on the central files are more valuable. In the late middlegame and endgame the situation reverses, and pawns on the wings become more valuable due to their likelihood of becoming an outside passed pawn and threatening to promote. When there are about fourteen points of material on both sides, the value of pawns on any file is about equal. After that, wing pawns become more valuable.
C.J.S. Purdy gave a value of points in the opening and middlegame but 3 points in the endgame.
Shortcomings of piece valuation systems
There are shortcomings of giving each type of piece a single, static value.
Two minor pieces plus two pawns are sometimes as good as a queen. Two rooks are sometimes better than a queen and pawn.
Many of the systems have a 2-point difference between the rook and a minor piece, but most theorists put that difference at about 1½ points (see the exchange).
In some open positions, a rook plus a pair of bishops are stronger than two rooks plus a knight.
Example 1
Positions in which a bishop and knight can be exchanged for a rook and pawn are fairly common (see diagram). In this position, White should not do that, e.g.:
1. Nxf7 Rxf7
2. Bxf7+ Kxf7
This seems like an even exchange (6 points for 6 points), but it is not, as two minor pieces are better than a rook and pawn in the middlegame.
In most openings, two minor pieces are better than a rook and pawn and are usually at least as good as a rook and two pawns until the position is greatly simplified (i.e. late middlegame or endgame). Minor pieces get into play earlier than rooks, and they coordinate better, especially when there are many pieces and pawns on the board. On the other hand, rooks are usually blocked by pawns until later in the game. Pachman also notes that the bishop pair is almost always better than a rook and pawn.
Example 2
In this position, White has exchanged a queen and a pawn (10 points) for three minor pieces (9 points). White is better because three minor pieces are usually better than a queen because of their greater mobility, and Black's extra pawn is not important enough to change the situation. Three minor pieces are almost as strong as two rooks.
Example 3
In this position, Black is ahead in material, but White is better. White's queenside is completely defended, and Black's additional queen has no target; additionally, White is much more active than Black and can gradually build up pressure on Black's weak kingside.
Fairy pieces
In general, the approximate value in centipawns of a short-range leaper with N moves on an 8 × 8 board is 33N + 0.7N². The quadratic term reflects the possibility of cooperation between moves.
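A quick numeric check of this formula in Python (assuming the 33N + 0.7N² form given above; note that the crude formula depends only on the move count, so it cannot distinguish pieces with equal numbers of moves):

```python
# Approximate centipawn value of a short-range leaper with n_moves moves.
def leaper_value(n_moves: int) -> float:
    return 33 * n_moves + 0.7 * n_moves ** 2

for name, n in [("knight (8 moves)", 8), ("ferz (4 diagonal steps)", 4)]:
    print(f"{name}: {leaper_value(n):.0f} centipawns")
# knight: 309 centipawns -- close to its conventional value of about 3 pawns
```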
If pieces are asymmetrical, moves going forward are about twice as valuable as moves going sideways or backward, presumably because enemy pieces can generally be found in the forward direction. Similarly, capturing moves are usually twice as valuable as noncapturing moves (of relevance for pieces that do not capture the same way they move). There also seems to be significant value in reaching different squares (e.g. ignoring the board edges, a king and knight both have 8 moves, but in one or two moves a knight can reach 40 squares whereas a king can only reach 24). It is also valuable for a piece to have moves to squares that are orthogonally adjacent, as this enables it to wipe out lone passed pawns (and also checkmate the king, but this is less important as usually enough pawns survive to the late endgame to allow checkmate to be achieved via promotion). As many games are decided by promotion, the effectiveness of a piece in opposing or supporting pawns is a major part of its value.
An unexpected result from empirical computer studies is that the princess (a bishop-knight compound) and empress (a rook-knight compound) have almost exactly the same value, even though the lone rook is two pawns stronger than the lone bishop. The empress is about 50 centipawns weaker than the queen, and the princess 75 centipawns weaker than the queen. This does not appear to have much to do with the bishop's colourboundedness being masked in the compound, because adding a non-capturing backward step turns out to benefit the bishop about as much as the knight; and it also does not have much to do with the bishop's lack of mating potential being so masked, because adding a backward step (capturing and non-capturing) to the bishop benefits it about as much as adding such a step to the knight as well. A more likely explanation seems to be the large number of orthogonal contacts in the move pattern of the princess, with 16 such contacts for the princess compared to 8 for the empress and queen each: such orthogonal contacts would explain why even in cylindrical chess, the rook is still stronger than the bishop even though they now have the same mobility. This makes the princess extremely good at annihilating pawn chains, because it can attack a pawn as well as the square in front of it.
See also
Chess endgame has material which justifies the common valuation system
Compensation (chess)
Evaluation function
The exchange (chess) discusses the difference between a rook and a minor piece
References
Bibliography
External links
Relative Value of Chess Pieces
Relative Value of Pieces and Principles of Play from The Modern Chess Instructor by Wilhelm Steinitz
About the Values of Chess Pieces by Ralph Betza, 1996.
The Evaluation of Material Imbalances by Larry Kaufman
“The Value of the Chess Pieces” by Edward Winter
Mathematical chess problems | Chess piece relative value | Mathematics | 3,633 |
13,525,027 | https://en.wikipedia.org/wiki/Dual%20norm | In functional analysis, the dual norm is a measure of size for a continuous linear function defined on a normed vector space.
Definition
Let X be a normed vector space with norm ‖·‖ and let X* denote its continuous dual space. The dual norm of a continuous linear functional f belonging to X* is the non-negative real number defined by any of the following equivalent formulas:
‖f‖ = sup{ |f(x)| : x ∈ X, ‖x‖ ≤ 1 } = sup{ |f(x)| : x ∈ X, ‖x‖ < 1 } = inf{ c ∈ [0, ∞) : |f(x)| ≤ c ‖x‖ for all x ∈ X } = sup{ |f(x)| : x ∈ X, ‖x‖ = 1 } = sup{ |f(x)| / ‖x‖ : x ∈ X, x ≠ 0 },
where sup and inf denote the supremum and infimum, respectively. (The last two formulas require X ≠ {0}.)
The constant map 0 is the origin of the vector space X* and it always has norm ‖0‖ = 0.
If X = {0} then the only linear functional on X is the constant map 0 and, moreover, the sets in the last two formulas will both be empty and consequently their supremums will equal −∞ instead of the correct value of 0.
Importantly, a linear function f is not, in general, guaranteed to achieve its norm ‖f‖ = sup{ |f(x)| : ‖x‖ ≤ 1 } on the closed unit ball, meaning that there might not exist any vector u with ‖u‖ ≤ 1 such that ‖f‖ = |f(u)| (if such a vector does exist and if f ≠ 0, then u would necessarily have unit norm ‖u‖ = 1).
R.C. James proved James's theorem in 1964, which states that a Banach space is reflexive if and only if every bounded linear function achieves its norm on the closed unit ball.
It follows, in particular, that every non-reflexive Banach space has some bounded linear functional that does not achieve its norm on the closed unit ball.
However, the Bishop–Phelps theorem guarantees that the set of bounded linear functionals that achieve their norm on the unit sphere of a Banach space is a norm-dense subset of the continuous dual space.
The map f ↦ ‖f‖ defines a norm on X*. (See Theorems 1 and 2 below.)
The dual norm is a special case of the operator norm defined for each (bounded) linear map between normed vector spaces.
Since the ground field of X (ℝ or ℂ) is complete, X* is a Banach space.
The topology on X* induced by ‖·‖ turns out to be stronger than the weak-* topology on X*.
The double dual of a normed linear space
The double dual (or second dual) X** of X is the dual of the normed vector space X*. There is a natural map φ : X → X**. Indeed, for each w* in X* define
φ(v)(w*) := w*(v).
The map φ is linear, injective, and distance preserving. In particular, if X is complete (i.e. a Banach space), then φ is an isometry onto a closed subspace of X**.
In general, the map φ is not surjective. For example, if X is the Banach space L∞ consisting of bounded functions on the real line with the supremum norm, then the map φ is not surjective. (See Lp space.) If φ is surjective, then X is said to be a reflexive Banach space. If 1 < p < ∞, then the space Lp is a reflexive Banach space.
Examples
Dual norm for matrices
The Frobenius norm, defined by
‖A‖_F = ( Σᵢ Σⱼ |aᵢⱼ|² )^(1/2),
is self-dual, i.e., its dual norm is ‖·‖_F.
The spectral norm, a special case of the induced norm when p = 2, is defined by the maximum singular value of a matrix, that is,
‖A‖₂ = σ_max(A).
It has the nuclear norm as its dual norm, which is defined by
‖B‖* = Σᵢ σᵢ(B)
for any matrix B, where σᵢ(B) denote the singular values.
If p, q ∈ [1, ∞] satisfy 1/p + 1/q = 1, the Schatten p-norm on matrices is dual to the Schatten q-norm.
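A small numerical illustration of the spectral/nuclear duality, using NumPy (assumed available); both norms are computed directly from the singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

sigma = np.linalg.svd(A, compute_uv=False)  # singular values of A
spectral = sigma.max()                      # spectral norm = largest singular value
nuclear = sigma.sum()                       # nuclear norm = sum of singular values

print(np.isclose(spectral, np.linalg.norm(A, 2)))     # True
print(np.isclose(nuclear, np.linalg.norm(A, "nuc")))  # True
```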
Finite-dimensional spaces
Let ‖·‖ be a norm on ℝⁿ. The associated dual norm, denoted ‖·‖*, is defined as
‖z‖* = sup{ zᵀx : ‖x‖ ≤ 1 }.
(This can be shown to be a norm.) The dual norm can be interpreted as the operator norm of zᵀ, interpreted as a 1 × n matrix, with the norm ‖·‖ on ℝⁿ and the absolute value on ℝ.
From the definition of dual norm we have the inequality
zᵀx ≤ ‖x‖ ‖z‖*,
which holds for all x and z. The dual of the dual norm is the original norm: we have ‖x‖** = ‖x‖ for all x. (This need not hold in infinite-dimensional vector spaces.)
The dual of the Euclidean norm is the Euclidean norm, since sup{ zᵀx : ‖x‖₂ ≤ 1 } = ‖z‖₂.
(This follows from the Cauchy–Schwarz inequality; for nonzero z, the value of x that maximises zᵀx over ‖x‖₂ ≤ 1 is z / ‖z‖₂.)
The dual of the ℓ∞-norm is the ℓ₁-norm:
sup{ zᵀx : ‖x‖∞ ≤ 1 } = Σᵢ |zᵢ| = ‖z‖₁,
and the dual of the ℓ₁-norm is the ℓ∞-norm.
More generally, Hölder's inequality shows that the dual of the ℓ_p-norm is the ℓ_q-norm, where q satisfies 1/p + 1/q = 1, that is, q = p/(p − 1).
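A rough Monte Carlo sketch of this duality in Python: maximizing zᵀx over random points of the ℓ_p unit sphere approaches the ℓ_q norm of z from below (agreement is approximate, since the supremum is only sampled).

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(5)
p, q = 3.0, 1.5  # conjugate exponents: 1/3 + 1/1.5 = 1

X = rng.standard_normal((200_000, 5))
X /= np.linalg.norm(X, ord=p, axis=1, keepdims=True)  # project onto the l_p unit sphere
estimate = (X @ z).max()                              # sampled sup of z^T x

print(estimate, np.linalg.norm(z, ord=q))  # estimate lies slightly below ||z||_q
```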
As another example, consider the ℓ₂- or spectral norm on ℝ^(m×n). The associated dual norm is
‖Z‖₂* = sup{ tr(ZᵀX) : ‖X‖₂ ≤ 1 },
which turns out to be the sum of the singular values,
‖Z‖₂* = σ₁(Z) + ⋯ + σ_r(Z),
where r = rank Z. This norm is sometimes called the nuclear norm.
Lp and ℓp spaces
For p ∈ [1, ∞), the p-norm (also called ℓ_p-norm) of a vector x = (x₁, …, xₙ) is
‖x‖_p = ( Σᵢ |xᵢ|^p )^(1/p).
If p, q ∈ [1, ∞] satisfy 1/p + 1/q = 1, then the ℓ_p and ℓ_q norms are dual to each other and the same is true of the L_p and L_q norms, where (X, Σ, μ) is some measure space.
In particular the Euclidean norm is self-dual since p = q = 2.
For the quadratic norm ‖x‖ = (xᵀQx)^(1/2), the dual norm is ‖z‖* = (zᵀQ⁻¹z)^(1/2), with Q positive definite.
For p = 2, the ℓ₂-norm is even induced by a canonical inner product ⟨·,·⟩, meaning that ‖x‖₂ = √⟨x, x⟩ for all vectors x. This inner product can be expressed in terms of the norm by using the polarization identity.
On ℝⁿ this is the Euclidean inner product defined by
⟨x, y⟩ = Σᵢ xᵢ yᵢ,
while for the space L²(X, μ) associated with a measure space (X, Σ, μ), which consists of all square-integrable functions, this inner product is
⟨f, g⟩ = ∫ f ḡ dμ.
The norms of the continuous dual spaces of ℓ² and L² satisfy the polarization identity, and so these dual norms can be used to define inner products. With this inner product, this dual space is also a Hilbert space.
Properties
Given normed vector spaces X and Y, let B(X, Y) be the collection of all bounded linear mappings (or operators) of X into Y. Then B(X, Y) can be given a canonical norm.
A subset of a normed space is bounded if and only if it lies in some multiple of the unit sphere; thus, if f ∈ B(X, Y) and λ is a scalar, then ‖λf‖ = |λ| ‖f‖.
The triangle inequality in Y shows that
‖(f + g)(x)‖ ≤ ‖f(x)‖ + ‖g(x)‖ ≤ ‖f‖ + ‖g‖
for every x ∈ X satisfying ‖x‖ ≤ 1. This fact together with the definition of the canonical norm implies the triangle inequality:
‖f + g‖ ≤ ‖f‖ + ‖g‖.
Since { ‖f(x)‖ : ‖x‖ ≤ 1 } is a non-empty set of non-negative real numbers, ‖f‖ = sup{ ‖f(x)‖ : ‖x‖ ≤ 1 } is a non-negative real number.
If f ≠ 0 then f(x₀) ≠ 0 for some x₀ ∈ X, which implies that ‖f(x₀)‖ > 0 and consequently ‖f‖ > 0. This shows that (B(X, Y), ‖·‖) is a normed space.
Assume now that Y is complete, and let us show that B(X, Y) is complete. Let (f_n) be a Cauchy sequence in B(X, Y), so by definition ‖f_n − f_m‖ → 0 as n, m → ∞. This fact together with the relation
‖f_n(x) − f_m(x)‖ = ‖(f_n − f_m)(x)‖ ≤ ‖f_n − f_m‖ ‖x‖
implies that (f_n(x)) is a Cauchy sequence in Y for every x ∈ X. It follows that for every x ∈ X, the limit lim_n f_n(x) exists in Y and so we will denote this (necessarily unique) limit by f(x), that is:
f(x) = lim_(n→∞) f_n(x).
It can be shown that f is linear. If ε > 0, then ‖f_n − f_m‖ ‖x‖ ≤ ε ‖x‖ for all sufficiently large integers n and m. It follows that
‖f(x) − f_m(x)‖ ≤ ε ‖x‖
for all sufficiently large m. Hence ‖f(x)‖ ≤ (‖f_m‖ + ε) ‖x‖, so that f ∈ B(X, Y) and ‖f − f_m‖ ≤ ε. This shows that f_m → f in the norm topology of B(X, Y). This establishes the completeness of B(X, Y).
When Y is a scalar field (i.e. Y = ℂ or Y = ℝ), B(X, Y) is the dual space X* of X.
Let B denote the closed unit ball of a normed space X.
When Y is the scalar field, B(X, Y) = X*, so part (a) is a corollary of Theorem 1. Fix x ∈ X. There exists x* in the closed unit ball B* of X* such that
x*(x) = ‖x‖,
but |x*(x)| ≤ ‖x‖ ‖x*‖ ≤ ‖x‖
for every x* ∈ B*. (b) follows from the above. Since the open unit ball U of X is dense in the closed unit ball B, the definition of the dual norm shows that x* ∈ B* if and only if |x*(x)| ≤ 1 for every x ∈ U. The proof for (c) now follows directly.
As usual, let d(x, y) := ‖x − y‖ denote the canonical metric induced by the norm on X, and denote the distance from a point x to the subset S ⊆ X by d(x, S) := inf{ d(x, s) : s ∈ S }.
If f is a bounded linear functional on a normed space X, then for every vector x ∈ X,
|f(x)| = ‖f‖ d(x, ker f),
where ker f denotes the kernel of f.
See also
Notes
References
External links
Notes on the proximal mapping by Lieven Vandenberghe
Functional analysis
Linear algebra
Mathematical optimization
Linear functionals | Dual norm | Mathematics | 1,432 |
1,312,556 | https://en.wikipedia.org/wiki/The%20Last%20Night%20of%20a%20Jockey | "The Last Night of a Jockey" is an episode of the American television anthology series The Twilight Zone. In this episode, a diminutive jockey's wish to be a big man is granted. Rod Serling wrote the episode specifically for Mickey Rooney, who is the only actor to appear in it.
Opening narration
Plot
A jockey named Michael Grady is lying alone in his room after being banned from horse racing for life for fixing races by horse doping. He drinks in his depression, and rues his five-foot height, which horse riding had served to compensate for. He then hears a voice. The voice introduces himself as "the alter ego" and claims to live in Grady's head. He argues with the alter ego, trying to justify his life and his actions, even lying about his crimes, but the alter ego knows all about him. Grady is offered the chance to change his life with one wish. Grady says his greatest wish is to be big. After Grady wakes from a nap he finds his wish has been granted; he is now close to eight feet tall.
Ecstatic, Grady calls his ex-girlfriend over the phone, but she dismisses him. He boasts that he can find more girls who will appreciate him because of his newfound height. The alter ego remains unimpressed, feeling Grady has not made good on any of his promises. He derides his dumb and "cheap" wish, and says that Grady could have wished to win the Kentucky Derby fairly, or perform a heroic act.
A telephone call from the racing commission informs Grady that he has been reinstated and can jockey again. Grady joyfully thanks everyone who petitioned to give him a second chance, but the alter ego laughs at him. Grady realizes he has become even larger, about 10 feet tall — too tall to ride a horse, or properly fit in his own apartment. Devastated, the now-giant Grady wrecks his room and pleads with the alter ego to make him small again. The alter ego denies the request, and instead replies, "You are small, Mr. Grady. You see, every time you won an honest race, that's when you were a giant. But right now, they just don't come any smaller."
Closing narration
Censorship
CBS's Program Practices department criticized this episode for use of the word "dwarf" in a negative context, suggesting that instead the terms "half-pint", "runt" or "shrimp" could be used.
Mickey Rooney and Rod Serling
Although this was Mickey Rooney's sole appearance on The Twilight Zone, he had earlier co-starred in two dramas written by Rod Serling — "The Comedian", a live 1957 episode of the 90-minute anthology series Playhouse 90, as well as the theatrical feature Requiem for a Heavyweight, a 1962 remake of the same-titled 1956 episode of Playhouse 90. The making of the as-yet-unreleased production was the subject of a discussion on the December 21, 1961 episode of the late-night talk show PM East, with guests Mickey Rooney, Rod Serling and the film's star Anthony Quinn. A decade later, in October 1972, Mickey Rooney co-starred in one additional Rod Serling teleplay — "Rare Objects" — a half-hour episode of the horror anthology series Night Gallery.
References
DeVoe, Bill. (2008). Trivia from The Twilight Zone. Albany, GA: Bear Manor Media.
Grams, Martin. (2008). The Twilight Zone: Unlocking the Door to a Television Classic. Churchville, MD: OTR Publishing.
Zicree, Marc Scott: The Twilight Zone Companion. Sillman-James Press, 1982 (second edition)
External links
1963 American television episodes
The Twilight Zone (1959 TV series) season 5 episodes
Fiction about size change
Television episodes written by Rod Serling
Television episodes about termination of employment | The Last Night of a Jockey | Physics,Mathematics | 787 |
13,401,263 | https://en.wikipedia.org/wiki/Copper%28I%29%20fluoride | Copper(I) fluoride or cuprous fluoride is an inorganic compound with the chemical formula CuF. Its existence is uncertain. It was reported in 1933 to have a sphalerite-type crystal structure. Modern textbooks state that CuF is not known, since fluorine is so electronegative that it will always oxidise copper to its +2 oxidation state. Complexes of CuF such as [(Ph3P)3CuF] are, however, known and well characterised.
Synthesis and reactivity
Unlike other copper(I) halides like copper(I) chloride, copper(I) fluoride tends to disproportionate into copper(II) fluoride and copper in a one-to-one ratio at ambient conditions, unless it is stabilised through complexation as in the example of [Cu(N2)F].
2CuF → Cu + CuF2
See also
Copper(II) fluoride, the other simple fluoride of copper
References
Fluorides
Metal halides
Copper(I) compounds
Zincblende crystal structure
Hypothetical_chemical_compounds | Copper(I) fluoride | Chemistry | 233 |
8,705,069 | https://en.wikipedia.org/wiki/Pure%20%28company%29 | Pure International Ltd. is a British consumer electronics company, based in Kings Langley, Hertfordshire, founded in 2002. They are best known for designing and manufacturing digital audio broadcasting (DAB) and DAB+ radios. In recent years the company has moved away from being a digital radio company with more broad-based audio products in the radio, bluetooth and wireless speaker market.
The imprint on the devices' casing states that they were designed in the UK and manufactured in China.
Pure have sold over five million products worldwide.
Pure products are available in the United Kingdom, Australia, Denmark, France, Germany, Ireland, Italy, Netherlands, Norway and Switzerland, and via online suppliers.
History
2002
Pure was formerly a division of another Hertfordshire-based company, Imagination Technologies, which primarily designs central processing units and graphics processing units. Imagination did not originally set out to sell consumer electronics and the first Pure radio was merely a demonstration platform for its DAB decoding chip. The success of the first sub-£100 DAB receiver, the Evoke-1, led to the development of further products.
2003
In 2003, Pure launched the PocketDAB 1000. It was the world's first pocket digital radio.
2004
Pure released the Bug, the first-ever digital radio with EPG, pause, rewind and record.
2005
Sonus-1XT was launched by Pure and became the world's first digital radio for the blind and visually impaired.
2007
Pure released Highway, the world's first in-car digital radio adapter, in 2007.
2008
Pure launched the first Energy Saving Trust approved radio range. Under the name EcoPlus, products had reduced power consumption, packaging materials from recycled and sustainable sources and components selected to minimise their environmental impact.
2009
Pure launched Sensia, the world's first high-resolution touchscreen digital radio.
2012
Pure celebrated its tenth anniversary in 2012 with a brand revamp. They also commemorated the landmark with the launch of the Evoke Mio Union Jack Edition.
2014
Pure introduced its three-year warranty.
2015
Pure shipped in excess of five million digital radios worldwide, positioning itself as the best-selling digital radio manufacturer.
2016
Pure became Pure International Ltd.
Parent company Imagination Technologies sold the Pure brand to AVenture AT in September 2016.
2019
Pure acquired the license for Braun Audio from Braun, a division of Procter & Gamble.
In 2019 Pure launched the Braun Audio range of design speakers, referencing the LE1 speakers designed by Dieter Rams.
References
Audio equipment manufacturers of the United Kingdom
Electronics companies established in 2002
Companies based in Three Rivers District
British brands
Manufacturing companies of the United Kingdom
2002 establishments in England
Radio manufacturers
Loudspeaker manufacturers | Pure (company) | Engineering | 546 |
36,698,981 | https://en.wikipedia.org/wiki/Aureoboletus%20innixus | Aureoboletus innixus is a species of bolete fungus in the family Boletaceae. Found in eastern North America, it was first described scientifically by Charles Christopher Frost in 1874, from collections made in New England. An edible mushroom, the convex cap grows to wide and is dull reddish brown to yellow brown. The stem is long by thick, but often swollen at the apex with a tapered base. It has a bright yellow pore surface when young that dulls in color when mature. There are about 1 to 3 pores per mm when young, but they expand as they mature to about 2 mm wide. The spore print is olive-brown, and the spores are ellipsoid, smooth, and measure 8–11 by 3–5 μm.
The mushroom is often confused with the similar (also edible) Aureoboletus auriporus, which has a pinkish cinnamon to dark reddish brown cap.
See also
List of North American boletes
References
External links
innixus
Edible fungi
Fungi described in 1874
Fungi of North America
Fungus species | Aureoboletus innixus | Biology | 222 |
15,744,107 | https://en.wikipedia.org/wiki/Virtuality%20%28software%20design%29 | Virtuality is a term used by Ted Nelson for what he considers one of the central issues of software design. "Virtuality" refers to the seeming of anything, as opposed to its reality. (This has been the dictionary meaning of "virtuality" since at least the 18th century.) Everything has a reality and a virtuality. Nelson divides virtuality into two parts: conceptual structure and feel; these play different roles in every field. The conceptual structure of all cars is the same, but the conceptual structure of every movie is different. The reality of a car is important, but the reality of a movie is unimportant—how a shot was made is of interest only to movie buffs.
The feel of software, like the feel of a car, is a matter of late-stage fine-tuning (if it is worked on at all). But Nelson regards the design of software conceptual structure—the constructs we imagine as we sit at the screen—as the center of the computer field. However, the conceptual structure of almost all software has been determined by what Nelson calls the PARC User Interface, or PUI, on which Windows, Macintosh and Linux are all based. The feel is only icing on top of that.
In relation to new media, Steve Woolgar has proposed 'five rules of virtuality' that are drawn from in-depth research in the UK on uses of the so-called 'New Media':
Both the uptake and uses of new media are critically dependent on the non-ICT-related contexts in which people are situated (gender, age, employment, income, education, nationality).
Fears and risks associated with new media are unevenly socially distributed, particularly in relation to security and surveillance.
CMC-mediated or 'virtual' interactions supplement rather than substitute for 'real' activities.
The introduction of more scope for 'virtual' interaction acts as a stimulus for more face-to-face or 'real' interaction.
The capacity of 'virtual' communication to promote globalization throughout communication that is spatially disembedded encourages, perhaps paradoxically, new forms of 'localism' and the embedding, rather than the transcendence, of identities grounded in a sense of place, belief, experience, or practice.
References
Further reading
Cyberspace and Human Nature, Howard Rheingold. 1991
External links
Web Studies, and other new media studies resources
VoS: Voice of the Shuttle
Software design
Ted Nelson
Technology neologisms | Virtuality (software design) | Engineering | 505 |
8,266,455 | https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Diederich | François Diederich (9 July 1952 – 23 September 2020) was a Luxembourgish chemist specializing in organic chemistry.
Education
He obtained both his diploma and PhD (first synthesis of Kekulene) from the University of Heidelberg in 1977 and 1979, respectively.
Career and research
After postdoctoral studies with Orville L. Chapman at the University of California, Los Angeles (UCLA) and habilitation at the Max Planck Institute for Medical Research in Heidelberg, he became Full Professor of Organic and Bioorganic Chemistry at UCLA in 1989. In 1992 he was appointed Professor of Organic Chemistry at ETH Zurich. He retired on 31 July 2017, and remained a research-active professor at ETH Zurich. On 16 March 2019, the German Chemical Society (Gesellschaft Deutscher Chemiker, GDCh) bestowed him with their highest recognition, Honorary Membership.
Diederich died on 23 September 2020 after a battle with cancer.
His research interests cover a wide range of topics:
Molecular recognition in chemistry and biology.
Modern medicinal chemistry: molecular recognition studies with biological receptors and X-ray structure-based design of nonpeptidic enzyme inhibitors. Examples of targets: plasmepsin II, IspE and IspF in the non-mevalonate pathway of isoprenoid biosynthesis (malaria); t-RNA guanine transglycosylase (shigellosis); trypanothione reductase (African sleeping sickness).
Supramolecular nanosystems and nano-patterned surfaces.
Advanced materials based on carbon-rich acetylenic molecular architecture: new organic super-acceptors and their inter- and intramolecular charge-transfer complexes, opto-electronic materials for molecular electronic circuitry, chiral macrocyclic and acyclic alleno-acetylenes, amplification of chirality and its transfer from the molecular to the macroscopic scale.
Honors and awards
Source:
Otto Hahn Medal of the Max Planck Society (1979)
Dreyfus Teacher Scholar Award (1987)
ACS Arthur C. Cope Scholar Award (1992)
Otto Bayer Award (1993)
Janssen Prize for Creativity in Organic Synthesis (2000)
Havinga Medal (2000)
Myron L. Bender & Muriel S. Bender Distinguished Summer Lecturer at Northwestern University (2004)
Humboldt Prize (2005)
Burckhardt Helferich Prize (2005)
August Wilhelm von Hofmann-Denkmünze of the German Chemical Society (2006)
ACS Ronald Breslow Award for Achievements in Biomimetic Chemistry (2007)
of the German Chemical Society (2011)
Honorary Doctoral Degree, Technion, Haifa (2012)
Ernst Hellmut Vits-Preis (2014)
Prix Paul Metz by the Institut Grand Ducal, Luxembourg (2014)
EFMC Nauta Award for Pharmacochemistry and for outstanding results of scientific research in the field of Medicinal Chemistry (2016)
Honorary Membership of the German Chemical Society (Gesellschaft Deutscher Chemiker, GDCh) (2019)
Memberships in scientific academies
Deutsche Akademie der Naturforscher Leopoldina
Berlin-Brandenburgische Akademie der Wissenschaften
American Academy of Arts and Sciences (Foreign Honorary Member)
Real Academia de Ciencias Exactas, Físicas y Naturales (Spain, foreign member)
US National Academy of Sciences (Foreign Associate)
Books
References
1952 births
2020 deaths
Luxembourgian chemists
Organic chemists
Carbon scientists
Academic staff of ETH Zurich
People from Ettelbruck
Foreign associates of the National Academy of Sciences | François Diederich | Chemistry | 724 |
1,629,687 | https://en.wikipedia.org/wiki/Carry%20%28arithmetic%29 | In elementary arithmetic, a carry is a digit that is transferred from one column of digits to another column of more significant digits. It is part of the standard algorithm to add numbers together by starting with the rightmost digits and working to the left. For example, when 6 and 7 are added to make 13, the "3" is written to the same column and the "1" is carried to the left. When used in subtraction the operation is called a borrow.
Carrying is emphasized in traditional mathematics, while curricula based on reform mathematics do not emphasize any specific method to find a correct answer.
Carrying makes a few appearances in higher mathematics as well. In computing, carrying is an important function of adder circuits.
Manual arithmetic
A typical example of carry is in the following pencil-and-paper addition:
1
27
+ 59
----
86
7 + 9 = 16, and the digit 1 is the carry.
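A minimal Python sketch of this right-to-left procedure (function name illustrative):

```python
# Pencil-and-paper addition: add digits right to left, carrying 1 whenever
# a column sum reaches 10.
def add_with_carry(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))  # digit written in this column
        carry = total // 10             # e.g. 7 + 9 = 16: write 6, carry 1
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("27", "59"))  # 86
```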
The opposite is a borrow, as in
−1
47
− 19
----
28
Here, 7 − 9 is negative, so try 17 − 9 = 8; the 10 is obtained by taking ("borrowing") 1 from the next digit to the left. There are two ways in which this is commonly taught:
The ten is moved from the next digit left, leaving in this example 3 − 1 in the tens column. According to this method, the term "borrow" is a misnomer, since the ten is never paid back.
The ten is copied from the next digit left, and then 'paid back' by adding it to the subtrahend in the column from which it was 'borrowed', giving in this example 4 − 2 in the tens column.
Mathematics education
Traditionally, carry is taught in the addition of multi-digit numbers in the 2nd or late first year of elementary school. However, since the late 20th century, many widely adopted curricula developed in the United States such as TERC omitted instruction of the traditional carry method in favor of invented arithmetic methods, and methods using coloring, manipulatives, and charts. Such omissions were criticized by such groups as Mathematically Correct, and some states and districts have since abandoned this experiment, though it remains widely used.
Higher mathematics
Kummer's theorem states that the number of carries involved in adding two numbers in base p is equal to the exponent of the highest power of p dividing a certain binomial coefficient.
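A short Python check of Kummer's theorem on a few sample cases, counting carries in base p and comparing against the exponent of p in the binomial coefficient:

```python
from math import comb

def carries(m: int, n: int, p: int) -> int:
    """Number of carries when adding m and n in base p."""
    count = carry = 0
    while m or n or carry:
        carry = 1 if (m % p) + (n % p) + carry >= p else 0
        count += carry
        m //= p
        n //= p
    return count

def v_p(x: int, p: int) -> int:
    """Exponent of the highest power of p dividing x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

for m, n, p in [(6, 7, 2), (10, 17, 3), (24, 25, 5)]:
    assert carries(m, n, p) == v_p(comb(m + n, m), p)
print("Kummer's theorem holds on the sample cases")
```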
When several random numbers of many digits are added, the statistics of the carry digits bears an unexpected connection with Eulerian numbers and the statistics of riffle shuffle permutations.
In abstract algebra, the carry operation for two-digit numbers can be formalized using the language of group cohomology. This viewpoint can be applied to alternative characterizations of the real numbers.
Mechanical calculators
Carry represents one of the basic challenges facing designers and builders of mechanical calculators. They face two basic difficulties: the first stems from the fact that a carry can require several digits to change: in order to add 1 to 999, the machine has to increment 4 different digits. The second is the fact that the carry can "develop" before the next digit has finished the addition operation.
Most mechanical calculators implement carry by executing a separate carry cycle after the addition itself. During the addition, each carry is "signaled" rather than performed, and during the carry cycle, the machine increments the digits above the "triggered" digits. This operation has to be performed sequentially, starting with the ones digit, then the tens, the hundreds, and so on, since adding the carry can generate a new carry in the next digit.
Some machines, notably Pascal's calculator, the second known calculator to be built, and the oldest surviving, use a different method: incrementing the digit from 0 to 9, cocks a mechanical device to store energy, and the next increment, which moves the digit from 9 to 0, releases this energy to increment the next digit by 1. Pascal used weights and gravity in his machine. Another notable machine using similar method is the highly successful 19th century Comptometer, which replaced the weights with springs.
Some innovative machines use continuous transmission: adding 1 to any digit advances the next one by 1/10 (which in turn advances the next one by 1/100, and so on). Some innovative early calculators, notably the Chebyshev calculator from 1870 and a design by Selling from 1886, used this method, but neither was successful. In the early 1930s, the Marchant calculator implemented continuous transmission with great success, starting with the aptly named "Silent Speed" calculator. Marchant (later to become SCM Corporation) continued to use and improve it, and made continuous-transmission calculators with unmatched speed into the late 1960s, to the end of the mechanical calculator era.
Computing
When speaking of a digital circuit like an adder, the word carry is used in a similar sense.
In most computers, the carry from the most significant bit of an arithmetic operation (or bit shifted out from a shift operation) is placed in a special carry bit which can be used as a carry-in for multiple precision arithmetic or tested and used to control execution of a computer program. The same carry bit is also generally used to indicate borrows in subtraction, though the bit's meaning is inverted due to the effects of two's complement arithmetic. Normally, a carry bit value of "1" signifies that an addition overflowed the ALU, and must be accounted for when adding data words of lengths greater than that of the CPU. For subtractive operations, two (opposite) conventions are employed as most machines set the carry flag on borrow while some machines (such as the 6502 and the PIC) instead reset the carry flag on borrow (and vice versa).
A carry can lead to integer overflow.
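A Python sketch of how a carry bit chains word-by-word additions in multiple-precision arithmetic, assuming 32-bit words stored least-significant first and equal-length operands for simplicity:

```python
MASK = 0xFFFFFFFF  # low 32 bits of a word

def multiword_add(a: list[int], b: list[int]) -> list[int]:
    """Add two little-endian lists of 32-bit words, propagating the carry."""
    result, carry = [], 0
    for wa, wb in zip(a, b):
        total = wa + wb + carry
        result.append(total & MASK)  # the 32-bit sum written to this word
        carry = total >> 32          # the carry flag fed into the next word
    if carry:
        result.append(carry)         # final carry extends the result
    return result

# 0xFFFFFFFF + 1 overflows the low word; the carry creates a new high word:
print([hex(w) for w in multiword_add([0xFFFFFFFF], [0x1])])  # ['0x0', '0x1']
```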
References
External links
Carrying - nLab
Elementary arithmetic
Computer arithmetic
fr:Retenue
ja:ステータスレジスタ#キャリー | Carry (arithmetic) | Mathematics | 1,236 |
4,115,260 | https://en.wikipedia.org/wiki/Mobile%20marketing | Mobile marketing is a multi-channel online marketing technique focused at reaching a specific audience on their smartphones, feature phones, tablets, or any other related devices through websites, e-mail, SMS and MMS, social media, or mobile applications. Mobile marketing can provide customers with time and location sensitive, personalized information that promotes goods, services, appointment reminders and ideas. In a more theoretical manner, academic Andreas Kaplan defines mobile marketing as "any marketing activity conducted through a ubiquitous network to which consumers are constantly connected using a personal mobile device".
SMS marketing
Marketing through cellphones SMS (Short Message Service) became increasingly popular in the early 2000s in Europe and some parts of Asia when businesses started to collect mobile phone numbers and send off wanted (or unwanted) content. On average, SMS messages have a 98% open rate and are read within 3 minutes, making them highly effective at reaching recipients quickly.
Over the past few years, SMS marketing has become a legitimate advertising channel in some parts of the world. This is because, unlike email over the public internet, the carriers who police their own networks have set guidelines and best practices for the mobile media industry (including mobile advertising). The IAB (Interactive Advertising Bureau) has established guidelines and is evangelizing the use of the mobile channel for marketers. While this has been fruitful in developed regions such as North America, Western Europe and some other countries, mobile SPAM messages (SMS sent to mobile subscribers without a legitimate and explicit opt-in by the subscriber) remain an issue in many other parts of the world, partly due to the carriers selling their member databases to third parties. In India, however, the government's efforts to create the National Do Not Call Registry have helped cellphone users stop SMS advertisements by sending a simple SMS or calling 1909.
Mobile marketing approaches through SMS have expanded rapidly in Europe and Asia as a new channel to reach the consumer. SMS initially received negative media coverage in many parts of Europe for being a new form of spam as some advertisers purchased lists and sent unsolicited content to consumer's phones; however, as guidelines are put in place by the mobile operators, SMS has become the most popular branch of the Mobile Marketing industry with several 100 million advertising SMS sent out every month in Europe alone. This is thanks in part to SMS messages being hardware agnostic—they can be delivered to practically any mobile phone, smartphone or feature phone and accessed without a Wi-Fi or mobile data connection. This is important to note since there were over 5 billion unique mobile phone subscribers worldwide in 2017, which is about 66% of the world population.
Nowadays, the mobile phone has become a focal device in people's lives, and many people cannot live without it. Mobile technologies connect businesses and consumers at any time and place, creating new business opportunities. Because of this, digital marketing has become more essential, and mobile marketing is one of its newest channels: it lets consumers learn about the features of goods they are interested in without having to visit a physical store.
SMS marketing has both inbound and outbound marketing strategies. Inbound marketing focuses on lead generation, and outbound marketing focuses on sending messages for sales, promotions, contests, donations, television program voting, appointments and event reminders.
There are 5 key components to SMS marketing: sender ID, message size, content structure, spam compliance, and message delivery.
Sender ID
A sender ID is a name or number that identifies who the sender is. For commercial purposes, virtual numbers, short codes, SIM hosting, and custom names are most commonly used and can be leased through bulk SMS providers.
Shared Virtual Numbers
As the name implies, shared virtual numbers are shared by many different senders. They're usually free, but they can't receive SMS replies, and the number changes from time to time without notice or consent. Senders may have different shared virtual numbers on different days, which may make it confusing or untrustworthy for recipients depending on the context. For example, shared virtual numbers may be suitable for 2-factor authentication text messages, as recipients are often expecting these text messages, which are often triggered by actions that the recipients make. But for text messages that the recipient isn't expecting, like a sales promotion, a dedicated virtual number may be preferred.
Dedicated Virtual Numbers
To avoid sharing numbers with other senders, and for brand recognition and number consistency, leasing a dedicated virtual number, which are also known as a long code or long number (international number format, e.g. +44 7624 805000 or US number format, e.g. 757 772 8555), is a viable option. Unlike a shared number, it can receive SMS replies. Senders can choose from a list of available dedicated virtual numbers from a bulk SMS provider. Prices for dedicated virtual numbers can vary. Some numbers, often called Gold numbers, are easier to recognize, and therefore more expensive to lease. Senders may also get creative and choose a vanity number. These numbers spell out a word or phrase using the keypad, like +1-(123)-ANUMBER.
Short codes
Shortcodes offer very similar features to a dedicated virtual number but are short mobile numbers that are usually 5-6 digits. Their length and availability are different in each and every country. These are usually more expensive and are commonly used by enterprises and governmental organizations. For mass messaging, shortcodes are preferred over a dedicated virtual number because of their higher throughput and are great for time-sensitive campaigns and emergencies.
In Europe the first cross-carrier SMS shortcode campaign was run by Txtbomb in 2001 for an Island Records release. In North America, it was the Labatt Brewing Company in 2002. Over the past few years, mobile short codes have been increasingly popular as a new channel to communicate to the mobile consumer. Brands have begun to treat the mobile shortcode as a mobile domain name, allowing the consumer to text message the brand at an event, in-store and off any traditional media.
Short codes provide a direct line between a brand and their customer base. Once a company has a dedicated short code, they are able to directly message their audience without worrying if the messages are being delivered, unlike long code D.I.D.s (Direct Inward Dial, another term for phone number). Whereas long code texts face a higher level of scrutiny, short codes give you unrivalled throughput without triggering red flags from the carriers.
SIM hosting
Physical and virtual SIM hosting allows a mobile number sourced from a carrier to be used for receiving SMS as part of a marketing campaign. The SIM associated with the number is hosted by a bulk SMS provider. With physical SIM hosting, a SIM is physically hosted in a GSM modem and SMS received by the SIM are relayed to the customer. With virtual SIM hosting, the SIM is roamed onto the Bulk SMS provider's partner mobile network and SMS sent to the mobile number are routed from the mobile network's SS7 network to an SMSC or virtual mobile gateway, and then onto the customer.
Custom Sender ID
A custom sender ID, also known as an alphanumeric sender ID, enables users to set a business name as the sender ID for one-way organization-to-consumer messages. Custom sender IDs are only supported in certain countries and are up to 11 characters long, and support uppercase and lowercase ASCII letters and digits 0-9. Senders are not allowed to use digits only as this would mimic a shortcode or virtual number that they do not have access to. Reputable bulk SMS providers will check customer sender IDs beforehand to make sure senders are not misusing or abusing them.
Message Size
The message size will then determine the number of SMS messages that are sent, which then determines the amount of money spent on marketing a product or service. Not all characters in a message are the same size.
A single SMS message has a maximum size of 1120 bits. This is important because there are two types of character encodings, GSM and Unicode. Latin-based languages like English use the GSM encoding, at 7 bits per character; this is where text messages get their typical limit of 160 characters per SMS. Long messages that exceed this limit are concatenated: they are split into smaller messages, which are recombined by the receiving phone.
Concatenated messages can only fit 153 characters instead of 160. For example, a 177 character message is sent as 2 messages. The first is sent with 153 characters and the second with 24 characters. The process of SMS concatenation can happen up to 4 times for most bulk SMS providers, which allows senders a maximum of 612 character messages per campaign.
Non-Latin based languages, like Chinese, and also emojis use a different encoding process called Unicode or Unicode Transformation Format (UTF-8). It is meant to encompass all characters for efficiency but has a caveat. Each Unicode character is 16 bits in size, which takes more information to send, therefore limiting SMS messages to 70 characters. Messages that are larger than 70 characters are also concatenated. These messages can fit 67 characters and can be concatenated up to 4 times for a maximum of 268 characters.
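A minimal sketch of segment counting under these rules follows (it ignores GSM extension characters such as the euro sign, which occupy two septets, so it slightly undercounts some GSM messages):

```python
# A minimal sketch of SMS segment counting under the limits described
# above (GSM: 160 single / 153 concatenated; Unicode: 70 / 67).
import math

GSM_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def sms_segments(text: str) -> int:
    if all(ch in GSM_BASIC for ch in text):
        single, multi = 160, 153   # 7-bit GSM encoding
    else:
        single, multi = 70, 67     # 16-bit Unicode (UCS-2) encoding
    return 1 if len(text) <= single else math.ceil(len(text) / multi)

print(sms_segments("x" * 177))    # 2 segments (153 + 24 characters)
print(sms_segments("你好" * 40))   # 80 Unicode characters -> 2 segments
```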
Content Structure
Special elements that can be placed inside a text message include:
Unicode characters: Send SMS in different languages, with special characters or emojis
Keywords: Use keywords to trigger an automated response
Links: Track campaigns easily by using shortened URLs to custom landing pages
Interactive Elements: Pictures, animations, audio, or video
Texting is simple; when it comes to SMS marketing, however, there are many different content structures that can be implemented. Popular message types include sale alerts, reminders, keywords, and multimedia messaging service (MMS) messages.
SMS Sales Alerts
Sale alerts are the most basic form of SMS marketing. They are generally used for clearance, flash sales, and special promotions. Typical messages include coupon codes, and information like expiration dates, products, and website links for additional information.
SMS Transaction Alerts
Transaction alerts are used by financial institutions to notify customers about transactions on their accounts. Some alerts state only the amount transacted, while others also include the remaining account balance.
SMS Reminders
Reminders are commonly used in appointment-based industries or for recurring events. Some senders choose to ask their recipients to respond to the reminder text with an SMS keyword to confirm their appointment. This can improve the sender's workflow and reduce missed appointments, leading to improved productivity and revenue.
SMS Keywords
This allows people to text a custom keyword to a dedicated virtual number or short code. Through custom keywords, users can opt in to a service with minimal effort. Once a keyword is triggered, an autoresponder can be set to guide the user to the next step. Keywords can also activate other functions, including entering a contest, forwarding to an email or mobile number, group chat, and sending an auto-response.
Keywords also allow users to opt in to receive further marketing correspondence. Long code numbers face higher levels of scrutiny from telecom companies; for example, messages sent through a long code cannot contain a link in the first message. This is enforced at the carrier level to help cut down on spam. Using keyword responses, a company can create a bridge between itself and the user: carriers recognize a user responding to an SMS with a keyword as a conversation and will then allow links to be delivered.
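As a concrete illustration, a minimal keyword autoresponder might look like the following sketch (keywords, replies, and the gateway hand-off are illustrative):

```python
# A minimal sketch of an SMS keyword autoresponder as described above:
# the first word of an inbound message selects an automated reply.
RESPONSES = {
    "JOIN": "Welcome! You are now subscribed. Reply STOP to opt out.",
    "WIN":  "You're entered in the contest. Good luck!",
    "STOP": "You have been unsubscribed. No further messages will be sent.",
}

def handle_inbound(sender: str, body: str) -> str:
    words = body.strip().split()
    keyword = words[0].upper() if words else ""
    reply = RESPONSES.get(keyword, "Sorry, we didn't recognize that keyword.")
    # a real system would hand `reply` to the SMS gateway, addressed to `sender`
    return reply

print(handle_inbound("+15550100", "join"))
```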
Spam Compliance
Similar to email, SMS has anti-spam laws which differ from country to country. As a general rule, it's important to obtain the recipient's permission before sending any text message, especially an SMS marketing type of message. Permission can be obtained in a myriad of ways, including allowing prospects or customers to tick a permission checkbox on a website, filling in a form, or getting a verbal agreement.
In most countries, SMS senders need to identify themselves as their business name inside their initial text message. Identification can be placed in either the sender ID or within the message body copy. Spam prevention laws may also apply to SMS marketing messages, which must include a method to opt out of messages.
One key criterion for provisioning is that the consumer opts in to the service. The mobile operators demand a double opt-in from the consumer and the ability for the consumer to opt-out of the service at any time by sending the word STOP via SMS. These guidelines are established in the CTIA Playbook and the MMA Consumer Best Practices Guidelines which are followed by all mobile marketers in the United States. In Canada, opt-in became mandatory once the Fighting Internet and Wireless Spam Act came into force in 2014.
Message Delivery
Simply put, SMS infrastructure is made up of special servers that talk to each other, using software called a Short Message Service Centre (SMSC) and a protocol called Short Message Peer-to-Peer (SMPP).
Through these SMPP connections, bulk SMS providers (also known as SMS gateways) can send text messages and process SMS replies and delivery receipts.
When a user sends messages through a bulk SMS provider, they are delivered to the recipient's carrier via an ON-NET connection or the international SS7 network.
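For illustration, submitting one message through a hypothetical provider's HTTP API might look like this (the endpoint, field names, API key, and response shape are invented for the sketch; each real provider defines its own interface on top of its SMPP connections to the carriers):

```python
# A minimal sketch of submitting one message through a hypothetical bulk
# SMS provider's HTTP API.
import requests

API_URL = "https://api.example-sms-provider.com/v1/messages"  # hypothetical
API_KEY = "YOUR_API_KEY"  # placeholder credential

def send_sms(to: str, sender_id: str, body: str) -> dict:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"to": to, "from": sender_id, "body": body},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # providers typically return an ID for delivery receipts

# send_sms("+15550100", "AcmeStore", "Flash sale: 50% off today only!")
```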
SS7 Network
Operators around the world are connected by a network known as Signaling System #7 (SS7). It is used to exchange information related to phone calls, number translations and prepaid billing systems, and it is the backbone of SMS. SS7 is what carriers around the world use to talk to each other.
ON-NET Routing
ON-NET routing is the most popular form of messaging globally. It is the most reliable and preferable way for telecommunications companies/carriers to receive messages, as the messages from the bulk SMS provider are sent to them directly. For senders that need consistency and reliability, a provider that uses ON-NET routing is the preferred option.
Grey Routing
Grey routing is a term for messages that are sent to carriers (often offshore) that have low-cost interconnect agreements with other carriers. Instead of sending the messages directly to the intended carrier, some bulk SMS providers send them to an offshore carrier, which relays them to the intended carrier. This roundabout route is cheaper, but at the cost of consistency and reliability: grey routes are slower and can disappear without notice. Many carriers dislike this type of routing and will often block it with filters set up in their SMSCs.
Hybrid Routing
Some bulk SMS providers offer the option of combining cheaper grey routes to lower-value destinations with their more reliable ON-NET offerings. If the routes are managed well, messages can be delivered reliably. Hybrid routing is more common for SMS marketing messages, where timeliness and guaranteed delivery are less of an issue.
SMS Service Providers
The easiest and most efficient way of sending an SMS marketing campaign is through a bulk SMS service provider. Enterprise-grade SMS providers usually allow new customers to sign up for a free trial account before committing to the platform. Reputable companies also offer spam compliance tooling, real-time reporting, link tracking, an SMS API, multiple integration options, and delivery guarantees. Most providers can provide link shorteners and built-in analytics to help track the return on investment of each campaign.
Depending on the service provider and country, each text message can cost up to a few cents. Senders intending to send large volumes of messages per month or per year may get discounts from service providers.
Since spam laws differ from country to country, SMS service providers are usually location-specific. Message pricing, message delivery, and service offerings also differ substantially from country to country.
MMS
MMS mobile marketing can contain a timed slideshow of images, text, audio and video. This mobile content is delivered via MMS (Multimedia Message Service). Nearly all new phones produced with a color screen are capable of sending and receiving standard MMS messages. Brands are able to both send (mobile terminated) and receive (mobile originated) rich content through MMS A2P (application-to-person) mobile networks to mobile subscribers. In some networks, brands are also able to sponsor messages that are sent P2P (person-to-person).
A typical MMS message based on the GSM encoding can have up to 1500 characters, whereas one based on Unicode can have up to 500 characters. Messages that are longer than the limit are truncated and not concatenated like an SMS.
Good examples of mobile-originated MMS marketing campaigns are Motorola's ongoing campaigns at House of Blues venues, where the brand allows the consumer to send their mobile photos to the LED board in real-time as well as blog their images online.
Push notifications
Push notifications were first introduced to smartphones by Apple with the Apple Push Notification Service in 2009. For Android devices, Google developed Android Cloud to Device Messaging (C2DM) in 2010 and replaced it with Google Cloud Messaging (GCM) in 2013. GCM improved authentication and delivery, added new API endpoints and messaging parameters, and removed the limitations on API send-rates and message sizes. A push notification is a message that pops up on a mobile device: information delivered from a software application to a computing device without any request from the client or the user. Push notifications look like SMS notifications, but they reach only the users who have installed the app, and the specifications differ for iOS and Android users. SMS and push notifications can both be part of a well-developed inbound mobile marketing strategy.
According to mobile marketing company Leanplum, Android sees open rates nearly twice as high as those on iOS. Android sees open rates of 3.48 percent for push notification, versus iOS which has open rates of 1.77 percent.
App-based marketing
With the strong growth in the use of smartphones, app usage has also greatly increased. The annual number of mobile app downloads has grown exponentially over the last few years, with hundreds of billions of downloads in 2018 and the number expected to climb further by 2022. Therefore, mobile marketers have increasingly taken advantage of smartphone apps as a marketing resource. Marketers aim to optimize the visibility of an app in a store, which will maximize the number of downloads. This practice is called App Store Optimization (ASO).
There is a lot of competition in this field as well. However, just as with other services, it is no longer easy to dominate the mobile application market.
Most companies have acknowledged the potential of mobile apps to increase the interaction between a company and its target customers. With the fast progress and growth of the smartphone market, high-quality mobile app development is essential to obtain a strong position in a mobile app store.
The term app marketing has not yet been given a unified scientific definition and is used in various ways in practice. On the one hand, the term refers to activities that serve to generate app downloads and thus attract new users for a mobile app. In some cases, it is also used to describe the promotional sending of push notifications and in-app messages.
Several models for app marketing are described below.
1. Content-embedded mode. At present, most apps in app stores are free to download, so app developers need a way to monetize them. Embedded advertising that combines marketing content with the app's own content and characters can integrate seamlessly into the user experience and improve advertising click-through rates.
With these free-to-download apps, developers use in-app purchases or subscriptions to profit.
2. Advertising mode. Advertisement implantation is a common marketing mode in most apps. Through banner ads, consumer announcements, or full-screen advertising, users jump to a specified page displaying the advertising content when they click. This model is intuitive and can attract users' attention quickly.
3. User participation mode. This mode is mainly applied to website transplantation and brand apps. The company publishes its own branded app in the app store for users to download, so that users can gain a more intuitive understanding of the enterprise or product. As a practical tool, such an app brings convenience to users' lives, helps users understand the product, enhances the brand image of the enterprise, and wins users' loyalty.
4. Shopping website embedded mode. This is the traditional Internet e-commerce platform offered as a mobile app, making it convenient for users to browse commodity information anytime and anywhere, place orders, and track them. This model has promoted the transformation of traditional e-commerce enterprises from web shopping to mobile Internet channels and is a necessary step in using mobile apps for online and offline interactive development; Amazon and eBay are examples. The above patterns are among the more popular app marketing methods.
In-game mobile marketing
There are essentially three major trends in mobile gaming right now: interactive real-time 3D games, massively multiplayer games, and social networking games. This means a trend towards more complex, more sophisticated, richer gameplay. On the other side are the so-called casual games, i.e. games that are very simple and very easy to play. Most mobile games today are such casual games, and this will probably stay so for quite a while to come.
Brands are now delivering promotional messages within mobile games or sponsoring entire games to drive consumer engagement. This is known as mobile advergaming or ad-funded mobile game.
In in-game mobile marketing, advertisers pay to have their name or products featured in the mobile games. For instance, racing games can feature real cars made by Ford or Chevy. Advertisers have been both creative and aggressive in their attempts to integrate ads organically in the mobile games.
Although investment in mobile marketing strategies like advergaming is slightly more expensive than what is intended for a mobile app, a good strategy can generate substantial revenue for the brand. Games that use advergaming help users remember the brand better. This memorability increases the virality of the content, so that users tend to recommend the games to their friends and acquaintances and share them via social networks.
One form of in-game mobile advertising is playable content, which lets players actually try the advertised product. As a new and effective form of advertising, it allows consumers to try out the content before they actually install it, which is particularly attractive to casual players. Such advertisements blur the line between game and advertisement, and provide players with a richer experience in which they spend their time interacting with the advertising.
This kind of advertisement is not only interesting, it also brings benefits to marketers: because it is interactive, in-game mobile marketing can achieve higher conversion rates and faster conversions than general advertising. Moreover, games can offer a stronger lifetime value, qualifying consumers in advance and providing a more in-depth experience. This type of advertising can therefore be more effective at improving user stickiness than channels such as stories and video.
QR codes
QR codes are two-dimensional barcodes that are scanned with a mobile phone camera. They can take a user to the particular advertising webpage to which the QR code is linked. QR codes are often used in mobile gamification, where they appear as surprises during a mobile app game and direct users to a specific landing page. Such codes are also a bridge between physical media and online content: businesses print QR codes on promotional posters, brochures, postcards, and other physical advertising materials.
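Generating such a code is straightforward; for example, with the third-party Python qrcode package (with Pillow) installed, and an illustrative campaign URL:

```python
# A minimal sketch of generating a QR code that links printed material to
# a campaign landing page. The URL and filename are illustrative.
import qrcode

img = qrcode.make("https://example.com/spring-sale?src=flyer")
img.save("spring_sale_qr.png")  # print this image on the physical material
```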
Bluetooth
Bluetooth technology is a short-range wireless digital communication standard that allows devices to communicate without cables; it was originally conceived as a wireless alternative to the now superseded RS-232 data cables.
Proximity systems
Mobile marketing via proximity systems, or proximity marketing, relies on GSM 03.41, which defines the Short Message Service - Cell Broadcast. SMS-CB allows messages (such as advertising or public information) to be broadcast to all mobile users in a specified geographical area. In the Philippines, GSM-based proximity broadcast systems are used by select government agencies to disseminate information on government-run community-based programs, taking advantage of the technology's reach and popularity (the Philippines has the world's highest SMS traffic). It is also used for a commercial service known as Proxima SMS. Bluewater, a super-regional shopping centre in the UK, has a GSM-based system supplied by NTL to improve its GSM coverage for calls; the system also allows each customer with a mobile phone to be tracked through the centre, recording which shops they go into and for how long, and enables special-offer texts to be sent to the phone. For example, a retailer could send a mobile text message to those customers in its database who have opted in and who happen to be walking in a mall: "Save 50% in the next 5 minutes only when you purchase from our store." The snacks company Mondelez International, maker of Cadbury and Oreo products, has committed to exploring proximity-based messaging, citing significant gains in point-of-purchase influence.
Location-based services
Location-based services (LBS) are offered by some cell phone networks as a way to send custom advertising and other information to cell-phone subscribers based on their current location. The cell-phone service provider gets the location from a GPS chip built into the phone, or using radiolocation and trilateration based on the signal-strength of the closest cell-phone towers (for phones without GPS features). In the United Kingdom, which launched location-based services in 2003, networks do not use trilateration; LBS uses a single base station, with a "radius" of inaccuracy, to determine a phone's location.
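The trilateration step can be sketched numerically, as below. In this minimal example, the tower coordinates and range estimates are invented (chosen to be consistent with a handset near (2, 3)); a real system must cope with noisy signal-strength-to-distance conversion:

```python
# A minimal numerical sketch of the trilateration described above:
# estimating a phone's position from ranges to three cell towers
# inferred from signal strength.
import numpy as np

towers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # tower (x, y), km
dists = np.array([3.61, 4.24, 2.83])                     # estimated ranges, km

# Each tower i defines a circle (x - x_i)^2 + (y - y_i)^2 = d_i^2.
# Subtracting tower 0's equation from the others gives linear equations
# in (x, y), which least squares can solve.
A = 2 * (towers[1:] - towers[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2))
xy, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated position: ({xy[0]:.2f} km, {xy[1]:.2f} km)")  # ~ (2, 3)
```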
Some location-based services work without GPS tracking technique, instead transmitting content between devices peer-to-peer.
There are various methods for companies to utilize a device's location.
1. Store locators.
Using location-based feedback, retail clients can rapidly find the nearest store location.
2. Proximity-based marketing.
Companies can deliver advertisements only to individuals in a given geographical location.
Location-based services send advertisements to prospective customers in the area who may actually act on the information.
3. Travel information.
Location-based services can provide real-time information to smartphones, such as traffic conditions and weather forecasts, so that customers can plan accordingly.
4. Roadside assistance.
In the event of a sudden traffic accident, a roadside assistance company can use an app to track the customer's real-time location without the customer having to describe where they are.
Ringless voicemail
The advancement of mobile technologies has made it possible to leave a voicemail message on a mobile phone without ringing the line. The technology was pioneered by VoAPP, which used it in conjunction with live operators as a debt collection service. The FCC has ruled that the technology is compliant with all regulations. CPL expanded on the existing technology to allow for a completely automated process, including the replacement of live operators with pre-recorded messages.
User-controlled media
Mobile marketing differs from most other forms of marketing communication in that it is often initiated by the user (consumer) as a mobile-originated (MO) message, and it requires the express consent of the consumer to receive future communications. A message delivered from a server (business) to a user (consumer) is called a mobile-terminated (MT) message. This infrastructure points to a trend, set by mobile marketing, towards consumer-controlled marketing communications.
Due to the demand for more user-controlled media, mobile messaging infrastructure providers have responded by developing architectures that offer applications to operators with more freedom for the users, as opposed to network-controlled media. Along with these advances towards user-controlled Mobile Messaging 2.0, blog events throughout the world have been held to promote the latest advances in mobile technology. In June 2007, Airwide Solutions became the official sponsor of the Mobile Messaging 2.0 blog, which provides the opinions of many through the discussion of mobility with freedom.
Privacy concerns
Mobile advertising has become more and more popular. However, some mobile advertising is sent without the required permission from the consumer, causing privacy violations. It should be understood that no matter how well advertising messages are designed and how many additional possibilities they provide, if consumers do not have confidence that their privacy will be protected, widespread deployment will be hindered. But if the messages originate from a source where the user is enrolled in a relationship or loyalty program, privacy is not considered violated, and even interruptions can generate goodwill.
The privacy issue became even more salient than it was before with the arrival of mobile data networks. A number of important new concerns emerged, mainly stemming from the fact that mobile devices are intimately personal and are always with the user. Four major concerns can be identified: mobile spam, personal identification, location information and wireless security. Aggregate presence of mobile phone users can, however, be tracked in a privacy-preserving fashion.
Classification
Kaplan categorizes mobile marketing along the degree of consumer knowledge and the trigger of communication into four groups: strangers, groupies, victims, and patrons. Consumer knowledge can be high or low, and according to its degree organizations can customize their messages to each individual user, similar to the idea of one-to-one marketing. Regarding the trigger of communication, Kaplan differentiates between push communication, initiated by the organization, and pull communication, initiated by the consumer. Within the first group (low knowledge/push), organizations broadcast a general message to a large number of mobile users. Given that the organization cannot know which customers have ultimately been reached by the message, this group is referred to as "strangers". Within the second group (low knowledge/pull), customers opt to receive information but do not identify themselves when doing so. The organization therefore does not know exactly which specific clients it is dealing with, which is why this cohort is called "groupies". In the third group (high knowledge/push), referred to as "victims", organizations know their customers and can send them messages and information without first asking permission. The last group (high knowledge/pull), the "patrons", covers situations where customers actively give permission to be contacted and provide personal information about themselves, which allows for one-to-one communication without running the risk of annoying them.
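The four groups can be summarized in a two-by-two matrix (a restatement of the classification above):

| Consumer knowledge | Push (organization-initiated) | Pull (consumer-initiated) |
| Low | Strangers | Groupies |
| High | Victims | Patrons |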
References
Mobile content | Mobile marketing | Technology | 6,327 |
10,429,786 | https://en.wikipedia.org/wiki/Medact | Medact is a non-profit organization and registered charity, whose mission is "to support health professionals from all disciplines to work together towards a world in which everyone can truly achieve and exercise their human right to health".
Medact was formed in 1992 following the merger of the Medical Association for the Prevention of War (MAPW) and the Medical Campaign Against Nuclear Weapons (MCANW). Following the merger of these not-for-profit medical peace organizations, Medact broadened its mission to include the health threats posed by climate change and economic inequality.
Medact is affiliated with International Physicians for the Prevention of Nuclear War.
Notable work
Between 2001 and 2012, Medact produced a number of reports on the health impact of the war in Iraq. They have issued three reports and two shorter "updates", have defended the Lancet surveys of casualties of the Iraq War and, as part of the Count the Casualties campaign, have called for an independent investigation into increased mortality in Iraq.
Medact has produced reports documenting the phenomenon of health worker migration from less economically developed nations to rich countries, which they describe as a "perverse subsidy".
Medact also works on the health of refugees and migrants in the UK, in particular documenting and challenging barriers to healthcare.
Medact has been involved in the Global Health Watch, a civil society project aiming to produce alternative versions of the World Health Organization's annual World Health Report.
Collaborators
Quakers
Médecins Sans Frontières
Campaign Against Arms Trade
Saferworld
International Campaign to Abolish Nuclear Weapons
British Medical Association
Royal Society of Medicine
Health Poverty Action
People's Health Movement
Friends of the Earth
War on Want
Tax Justice Network
New Economics Foundation
Oxford Research Group
Queen Mary University
See also
Right to health
References
External links
The archive of Medact (and its predecessor body the Medical Campaign Against Nuclear Weapons) is held at Wellcome Collection, and is searchable in the library catalogue (SA/MED).
Health charities in the United Kingdom
Organizations established in 1992
Anti-nuclear organizations
Medical associations based in the United Kingdom
Environmental organisations based in the United Kingdom
Public health in the United Kingdom
English health activists | Medact | Engineering | 427 |
22,619,813 | https://en.wikipedia.org/wiki/Cyclopentadienylcobalt%20dicarbonyl | Cyclopentadienylcobalt dicarbonyl is an organocobalt compound with formula (C5H5)Co(CO)2, abbreviated CpCo(CO)2. It is an example of a half-sandwich complex. It is a dark red air sensitive liquid. This compound features one cyclopentadienyl ring that is bound in an η5-manner and two carbonyl ligands. The compound is soluble in common organic solvents.
Preparation
CpCo(CO)2 was first reported in 1954 by Piper, Cotton, and Wilkinson who produced it by the reaction of cobalt carbonyl with cyclopentadiene. It is prepared commercially by the same method:
Co2(CO)8 + 2 C5H6 → 2 C5H5Co(CO)2 + H2 + 4 CO
Alternatively, it is generated by the high pressure carbonylation of bis(cyclopentadienyl)cobalt (cobaltocene) at elevated temperature and pressures:
Co(C5H5)2 + 2 CO → C5H5Co(CO)2 + "C5H5"
The compound is identified by strong bands in its IR spectrum at 2030 and 1960 cm−1.
Reactions
CpCo(CO)2 catalyzes the cyclotrimerization of alkynes. The catalytic cycle begins with dissociation of one CO ligand, forming a bis(alkyne) intermediate.
CpCo(CO)2 + 2 R2C2 → CpCo(R2C2)2 + 2 CO
This reaction proceeds by formation of metal-alkyne complexes by dissociation of CO. Although monoalkyne complexes CpCo(CO)(R1C2R2) have not been isolated, their analogues, CpCo(PPh3)(R1C2R2) are made by the following reactions:
CpCo(CO)2 + PR3 → CO + CpCo(CO)(PR3)
CpCoL(PR3) + R2C2 → L + CpCo(PR3)(R2C2) (where L = CO or PR3)
CpCo(CO)2 catalyzes the formation of pyridines from a mixture of alkynes and nitriles. Reduction of CpCo(CO)2 with sodium yields the dinuclear radical [Cp2Co2(CO)2]−, which reacts with alkyl halides to give the dialkyl complexes [Cp2Co2(CO)2R2]. Ketones are produced by carbonylation of these dialkyl complexes, regenerating CpCo(CO)2.
Related compounds
The pentamethylcyclopentadienyl analogue Cp*Co(CO)2 (CAS RN#12129-77-0) is well studied. The Rh and Ir analogues, CpRh(CO)2 (CAS RN#12192-97-1) and CpIr(CO)2 (CAS RN#12192-96-0), are also well known.
References
Cyclopentadienyl complexes
Carbonyl complexes
Organocobalt compounds
Half sandwich compounds | Cyclopentadienylcobalt dicarbonyl | Chemistry | 664 |
61,594,784 | https://en.wikipedia.org/wiki/Ovine%20gammaherpesvirus%202 | Ovine gammaherpesvirus 2 (OvHV-2) is a species of virus in the genus Macavirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales.
References
External links
Gammaherpesvirinae | Ovine gammaherpesvirus 2 | Biology | 57 |
40,615,969 | https://en.wikipedia.org/wiki/In%20ovo | In ovo is Latin for in the egg. In medical usage it refers to the growth of live virus in chicken egg embryos for vaccine development for human use, as well as an effective method for vaccination of poultry against various Avian influenza and coronaviruses. During the incubation period, the virus replicates in the cells that make up the chorioallantoic membrane.
Advantages
In human vaccine development, the main advantages are rapid propagation and high yield of viruses for vaccine production. This method is most commonly used to grow influenza virus, in both attenuated vaccine and inactivated vaccine forms. It is recommended by the World Health Organization for managing influenza pandemics because it is high-yield and cost-effective.
In poultry, in ovo vaccination improves hatchability and provides efficient protection against avian influenza (AI), Newcastle disease (ND) and coronaviruses (Av-CoV). Seroconversion rates of chickens vaccinated as embryos ranged from 27% to 100% for ND vaccination and 85% to 100% for AI vaccination. The birds are protected before delivery to a commercial operation such as a farm, thus preventing the spread of avian viruses.
Vaccination
In ovo vaccination is carried out by machines. These machines perform a number of actions to ensure good vaccination of the chick inside the egg. Benefits of In ovo vaccination include avoidance of bird stress, controlled hygienic conditions, and earlier immunity with less interference from maternal antibodies.
Feeding
In ovo feeding is considered a potential tool to provide nutrients to the embryo as well as to modulate the performance and gut health of pre- and post-hatch chicks. Depending on the purpose, in ovo injection can be classified as in ovo stimulation or in ovo feeding.
See also
Vaccine
Polio vaccine
List of vaccine ingredients
List of vaccine topics
Virosome
References
Vaccination | In ovo | Biology | 398 |
1,586,599 | https://en.wikipedia.org/wiki/Sceptrum%20Brandenburgicum | Sceptrum Brandenburgicum (or Sceptrum Brandenburgium – Latin for scepter of Brandenburg) was a constellation created in 1688 by Gottfried Kirch, astronomer of the Prussian Royal Society of Sciences. It represented the scepter used by the royal family of Brandenburg. It lay west of the constellation Lepus. The constellation was quickly forgotten and is no longer in use. Its name was, however, partially inherited by one of its brightest stars, Sceptrum, which today is designated 53 Eridani; that name is still in use.
External links
Sceptrum Brandenburgium
Star Tales – Sceptrum Brandenburgicum
Former constellations
Brandenburg-Prussia | Sceptrum Brandenburgicum | Astronomy | 138 |
19,283,589 | https://en.wikipedia.org/wiki/Urinal%20%28health%20care%29 | A urinal, urine bottle, or male urinal is a bottle for urination. It is most frequently used in health care for patients who find it impossible or difficult to get out of bed during sleep. Urinals allow the patient who has cognition and movement of their arms to urinate without the help of staff. A urinal bottle can also be used by travelers or transportation workers who are unable to immediately use a public restroom as part of an emergency kit, or in areas where restroom facilities are too distant.
Urinals are used as part of input and output measurement and feature embedded markings to measure the fluid.
Generally, patients who are able to are encouraged to walk to the toilet or use a bedside commode as opposed to a urinal. The prolonged use of a urinal has been shown to lead to constipation or difficulty urinating.
Urinals are most frequently used for male patients, since they are easier to use with male anatomy. While female urinals exist, they are more difficult to use, and the common practice for females is to use a bedpan. Female urinals require a wider opening and must be placed between the legs. For many women, female urinals are more practical in a wheelchair than in a bed. The opening of a urinal can harbour germs and can be disinfected with various chemicals.
References
Incontinence
Medical equipment
Urine | Urinal (health care) | Biology | 295 |
13,502,050 | https://en.wikipedia.org/wiki/Cold%20filter%20plugging%20point | Cold filter plugging point (CFPP) is the lowest temperature, expressed in degrees Celsius (°C), at which a given volume of diesel-type fuel still passes through a standardized filtration device in a specified time when cooled under certain conditions. The test gives an estimate of the lowest temperature at which a fuel will flow trouble-free in certain fuel systems. This is important because, in cold temperate countries, fuel with a high cold filter plugging point will clog vehicle fuel systems more easily.
The test is important in relation to the use of additives that allow extending the use of winter diesel to temperatures below the cloud point. Tests according to EN 590 show that fuel with a cloud point of +1 °C can have a CFPP of −10 °C. Current additives allow a CFPP of −20 °C based on diesel fuel with a cloud point of −7 °C.
The trustworthiness of EN 590 has been criticized as too low for modern diesel engines – the German ADAC ran a test series on commercially available winter diesel in a cold chamber. In the laboratory, all diesel brands exceeded the legal minimum by 3 to 11 degrees according to the legal DIN test. One of the real diesel engines, however, stopped working even before the legal minimum was reached, presumably due to an undersized filter heater. Notably, the experiments did not show a direct correlation between the CFPP value of the mineral oil and the cold-start capability of the diesel engines – hence the automobile club suggested the creation of a new test standard.
Test method
The ASTM no. for the test method to define cold filter plugging point is ASTM D6371.
See also
Cloud point
Petroleum
Pour point
References
External links
BP information
Chemical properties
Fuel technology | Cold filter plugging point | Chemistry | 351 |
248,042 | https://en.wikipedia.org/wiki/Weissenberg%20number | The Weissenberg number (Wi) is a dimensionless number used in the study of viscoelastic flows. It is named after Karl Weissenberg. The dimensionless number compares the elastic forces to the viscous forces. It can be variously defined, but it is usually given by the relation of the stress relaxation time of the fluid and a specific process time. For instance, in simple steady shear, the Weissenberg number, often abbreviated as Wi or We, is defined as the shear rate times the relaxation time λ. Using the Maxwell model and the Oldroyd-B model, the elastic forces can be written in terms of the first normal stress difference (N1).
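In symbols (the definition, plus the standard steady-shear result for the upper-convected Maxwell and Oldroyd-B models, with η the viscosity and σ the shear stress):

```latex
\mathrm{Wi} = \dot{\gamma}\,\lambda , \qquad
\sigma = \eta\,\dot{\gamma} , \qquad
N_1 = 2\,\eta\,\lambda\,\dot{\gamma}^{2} = 2\,\mathrm{Wi}\,\sigma .
```

Read this way, the Weissenberg number is the ratio of the elastic normal stress to twice the viscous shear stress.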
Since this number is obtained from scaling the evolution of the stress, it contains choices for the shear or elongation rate and for the length-scale. Therefore the exact definition of any such dimensionless number should be given along with the number itself.
While Wi is similar to the Deborah number and is often confused with it in technical literature, they have different physical interpretations. The Weissenberg number indicates the degree of anisotropy or orientation generated by the deformation, and is appropriate to describe flows with a constant stretch history, such as simple shear. In contrast, the Deborah number should be used to describe flows with a non-constant stretch history, and physically represents the rate at which elastic energy is stored or released.
References
Dimensionless numbers of fluid mechanics
Fluid dynamics
Non-Newtonian fluids
Rheology | Weissenberg number | Chemistry,Engineering | 294 |
7,778,580 | https://en.wikipedia.org/wiki/Wamani | Wamani is a non-governmental organisation working on ICT issues in Argentina. In 2004, Wamani built a regional information system for the Latin American chapters of Amnesty International.
Human rights organisations in that part of the world (Latin America), such as Madres de la Plaza de Mayo have been using the service platform offered by Wamani, which is targeted towards non-profit organisations.
Wamani has also helped boost the online communication systems of six different sites, related to Amnesty sections in as many countries. It has also worked on themes relating to intranets, virtual campuses and strategic alliances.
Wamani has also been involved in implementing an intranet and a campus for the whole of human rights watchdog Amnesty International in the Latin American and Caribbean region. It has built links with researchers and experts in distance education, and organisational development, management and the development of human resources, according to the 2005 annual report of the Association for Progressive Communications, of which Wamani is a member: "Throughout 2005, Wamani's technical team work on the development, implementation and experimentation of a group of tools to support training and distance education processes, as well as support systems (intranets) for the internal and external operations of organisations or networks".
References
External links
Wamani website
Information and communication technologies for development
Non-profit technology
Information technology organisations based in Argentina
Non-profit organisations based in Argentina | Wamani | Technology | 282 |
22,856,302 | https://en.wikipedia.org/wiki/Infosys%20Prize | The Infosys Prize is an annual award granted to scientists, researchers, engineers and social scientists of Indian origin (not necessarily born in India) by the Infosys Science Foundation and ranks among the highest monetary awards for research in India. The prize for each category includes a gold medallion, a citation certificate, and prize money of US$100,000 (or equivalent in Indian Rupees). The prize purse is tax free for winners living in India. The winners are selected by the jury of their respective categories, headed by the jury chairs.
In 2008, the prize was jointly awarded by the Infosys Science Foundation and National Institute of Advanced Studies for mathematics. The following year, three additional categories were added: Life Sciences, Mathematical Sciences, Physical Sciences and Social Sciences. In 2010, Engineering and Computer Science was added as a category. In 2012, a sixth category, Humanities, was added.
Laureates in Engineering and Computer Science
The Infosys Prize in Engineering and Computer Science has been awarded annually since 2010.
Laureates in Humanities
The Infosys Prize in Humanities has been awarded annually since 2012.
Laureates in Life Sciences
The Infosys Prize in Life Sciences has been awarded annually since 2009.
Laureates in Mathematical Sciences
The Infosys Prize in Mathematical Sciences has been awarded annually since 2008.
Laureates in Physical Sciences
The Infosys Prize in Physical Sciences has been awarded annually since 2009.
Laureates in Social Sciences
The Infosys Prize in Social Sciences has been awarded annually since 2009.
Trustees
N. R. Narayana Murthy
S. Gopalakrishnan
K. Dinesh
S. D. Shibulal
T.V. Mohandas Pai
Srinath Batni
Nandan Nilekani
Controversies
Lawrence Liang, a professor of law awarded the Infosys Prize, was found guilty by an internal university inquiry committee of sexually harassing a doctoral student on multiple occasions. Following the adverse finding, prominent activists, academics and gender rights groups issued a public statement on social media condemning Liang and criticising the award of the Infosys Prize to Liang.
See also
List of Infosys Prize laureates
List of chemistry awards
List of engineering awards
List of mathematics awards
List of physics awards
List of social sciences awards
Notes
External links
Indian science and technology awards
Infosys
Awards established in 2008
Mathematics awards
Physics awards
2008 establishments in India | Infosys Prize | Technology | 475 |
23,807,785 | https://en.wikipedia.org/wiki/Surface%20Evolver | Surface Evolver is an interactive program for the study of surfaces shaped by surface tension and other energies, and subject to various constraints. A surface is implemented as a simplicial complex. The user defines an initial surface in a datafile. The Evolver evolves the surface toward minimal energy by a gradient descent method. The aim can be to find a minimal energy surface, or to model the process of evolution by mean curvature. The energy in the Evolver can be a combination of surface tension, gravitational energy, squared mean curvature, user-defined surface integrals, or knot energies. The Evolver can handle arbitrary topology, volume constraints, boundary constraints, boundary contact angles, prescribed mean curvature, crystalline integrands, gravity, and constraints expressed as surface integrals. The surface can be in an ambient space of arbitrary dimension, which can have a Riemannian metric, and the ambient space can be a quotient space under a group action.
Evolver was written at The Geometry Center, sponsored by the National Science Foundation, the Department of Energy, Enterprise Minnesota, and the University of Minnesota.
References
Mathematical software
Physics software
Science software | Surface Evolver | Physics,Mathematics | 234 |
2,163,601 | https://en.wikipedia.org/wiki/Carbon%20monoxide%20%28data%20page%29 | This page provides supplementary chemical data on carbon monoxide.
Material safety data sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the material safety data sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions.
MSDS from Advanced Gas Technologies in the SDSdata.org database
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Carbon monoxide
Chemical data pages cleanup | Carbon monoxide (data page) | Chemistry | 98 |
70,950,228 | https://en.wikipedia.org/wiki/Veronica%20M.%20Bierbaum | Veronica Marie Bierbaum is an emeritus professor of chemistry at the University of Colorado Boulder, specialising in mass spectrometry in the areas of atmospheric chemistry and stellar chemistry.
Education
Bierbaum studied for a BA in chemistry at the Catholic University of America in Washington, D.C., graduating in 1970. She then studied for a PhD at the University of Pittsburgh, graduating in 1974. Bierbaum demonstrated an aptitude for chemistry whilst in secondary school, winning first place in the Senior High chemistry section of the Pennsylvania Junior Academy of Science competition.
Career
Bierbaum joined the University of Colorado, Boulder, in 1974 as a postdoctoral research associate. Her early work at Boulder involved collaborations with Charles H. DePuy and Stephen Leone. Her research interests strongly overlapped with theirs, focusing on the use of mass spectrometry to understand gas-phase ion-molecule interactions, an area of research relevant to the fields of atmospheric chemistry and stellar chemistry. She remains at the University of Colorado, Boulder, where she was appointed research professor in the department of chemistry and biochemistry and joined JILA (formerly the Joint Institute for Laboratory Astrophysics), which is based at Boulder.
Bierbaum served as the president of the American Society for Mass Spectrometry between 1996 and 1998, and prior to this served as secretary and vice-president. She was also an associate editor for the Journal of the American Society for Mass Spectrometry, standing down in 2019 after 20 years.
She was awarded the Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry in 2021.
Bierbaum is also active at the University of Colorado, Boulder, in enabling access to higher education for minorities.
References
Year of birth missing (living people)
University of Pittsburgh alumni
University of Colorado Boulder faculty
Mass spectrometrists
American women chemists
20th-century American women scientists
20th-century American chemists
Living people | Veronica M. Bierbaum | Physics,Chemistry | 399 |
1,430,977 | https://en.wikipedia.org/wiki/Polydeuces%20%28moon%29 | Polydeuces , also designated Saturn XXXIV, is a small trojan moon of Saturn occupying the trailing Lagrange point of Dione. It was discovered by the Cassini Imaging Science Team in images taken by the Cassini space probe on 21 October 2004. With a mean diameter of about , Polydeuces is thought to have a smooth surface coated with fine, icy particles accumulated from the cryovolcanic plumes of Enceladus. In its orbit around Saturn, Polydeuces periodically drifts away from Dione's Lagrange point due to gravitational perturbations by other nearby moons of Saturn. Of the four known trojan moons of Saturn, Polydeuces exhibits the largest displacement from its Lagrange point.
Discovery
Polydeuces was discovered by the Cassini Imaging Science Team on 24 October 2004 while routinely investigating images taken by the Cassini space probe earlier on 21 October 2004. The images were visually inspected through the blink comparison technique, which revealed any potential moons that moved relative to the background stars. The discovery images consisted of four frames taken with Cassini's wide-angle camera over less than six minutes, which showed Polydeuces moving 3–6 pixels per frame. The observed motion of Polydeuces immediately suggested that it could be orbiting Saturn at the distance of one of the large moons, Dione, possibly sharing its orbit in a co-orbital configuration.
By 4 November 2004, the Cassini Imaging Science Team obtained more Cassini images of Polydeuces, including two frames taken on 2 November 2004 and another two predating the discovery images by three hours. Preliminary orbit determinations using these images confirmed that Polydeuces was a co-orbital trojan moon residing around Dione's trailing (L5) Lagrange point. With the aid of ephemeris predictions from Polydeuces's newly determined orbit, the Cassini Imaging Science Team was able to identify 52 pre-discovery detections of Polydeuces in Cassini's narrow-angle camera images taken between 9 April 2004 and 9 May 2004. The International Astronomical Union (IAU) announced the discovery of Polydeuces on 8 November 2004. Besides Polydeuces, Cassini discovered five other objects orbiting Saturn in 2004: Methone, Pallene, S/2004 S 3, S/2004 S 4, and S/2004 S 6.
After the discovery announcement, Cassini was retasked to begin targeted observations of Polydeuces in January 2005 to better determine its orbit. In 2006, researchers found even earlier Cassini pre-discovery images of Polydeuces taken on 2 April 2004.
Name
The name Polydeuces was approved and announced by the IAU Working Group on Planetary System Nomenclature on 21 January 2005. In Greek mythology, Polydeuces is another name for Pollux, who is the twin brother of Castor and the son of Zeus and Leda. Polydeuces is also known by its official Roman numeral designation Saturn XXXIV (34th moon of Saturn discovered) and was previously known by its provisional designation , which was given by the IAU when it announced the moon's discovery.
Orbit
Polydeuces is an inner moon of Saturn in a co-orbital configuration with Dione, meaning they share the same orbit. Together with Dione and its other co-orbital companion Helene, Polydeuces orbits Saturn in 2.74 days at an average distance of from the planet's center, between the orbits of Tethys and Rhea. Due to gravitational perturbations by other nearby moons of Saturn, Polydeuces's orbital radius can vary by ± over time. Its orbit is closely aligned with Saturn's equatorial plane with a low orbital inclination of 0.2°.
Polydeuces has a slightly elliptical orbit with an eccentricity of 0.019, which is unusually higher than Dione's eccentricity of 0.002. While Dione's eccentricity is known to result from its 1:2 mean-motion orbital resonance with Enceladus, the effects of this resonance are too weak to explain Polydeuces's relatively high eccentricity. One possible explanation is that Polydeuces always had an eccentric orbit since its formation because its orbit did not change much over billions of years.
Polydeuces resides around Dione's L5 Lagrange point, trailing 60° behind Dione in its orbit, which makes Polydeuces a trojan moon of Dione. The Lagrange points are locations where the gravitational pulls of Dione and Saturn balance out, allowing for stable co-orbital configurations in Dione's trojans. Dione's other co-orbital moon, Helene, is a trojan residing around the L4 Lagrange point, leading 60° ahead of Dione. Trojan moons are not unique to Dione; another large moon of Saturn, Tethys, also has two trojans, named Telesto and Calypso, which reside in its L4 and L5 Lagrange points, respectively.
Because of perturbations by other moons of Saturn, Polydeuces does not stay exactly 60° behind Dione; its angular distance from Dione oscillates or librates over time. Of Saturn's four known trojan moons, Polydeuces librates the farthest from its Lagrange point: its angular distance behind Dione oscillates from 33.9° to 91.4° with a period of . In a rotating reference frame with respect to Dione's orbit, Polydeuces appears to travel in a looping path around Dione's L5 point due to its varying relative speed and radial distance from Saturn in its perturbed eccentric orbit. Polydeuces's apparent looping motion combined with its librating angular distance from Dione forms a tadpole orbit about Dione's L5 point.
Origin
Polydeuces is thought to have formed by accreting out of leftover debris trapped in Dione's Lagrange point, in a similar process experienced by Saturn's other trojan moons. This process likely took place at an intermediate stage of the formation of Saturn's moons, when Tethys and Dione have not finished forming and gases have become depleted in Saturn's circumplanetary disk. Mean-motion orbital resonances by other nearby moons did not appear to play a significant role in the formation of the trojan moons.
Dynamical modeling of the trojan moons' formation suggests that Tethys's and Dione's L4 and L5 Lagrange points should have started with similar amounts of material, so trojan moons should have formed with roughly similar sizes. However, this is not the case for Dione's trojans, Helene and Polydeuces, whose masses differ by more than an order of magnitude. As of yet, this mass asymmetry between Dione's L4 and L5 trojans remains unexplained.
Physical characteristics
The most recent estimate for Polydeuces's dimensions is , based on resolved Cassini imagery of the moon from 2015. These dimensions correspond to a volume-equivalent mean diameter of for Polydeuces. Cassini's highest-resolution images of Polydeuces from 2015 show that it has an elongated shape, with a relatively smooth limb deviating from a simple ellipsoid. Polydeuces presumably rotates synchronously with its orbital period, similar to the rest of Saturn's trojan moons.
Little is known about Polydeuces's other physical properties because it was never approached up close by Cassini or any other space mission to Saturn. Because of its very small size, Polydeuces's gravitational perturbations on the trajectory of the Cassini spacecraft and other Saturnian moons are negligible, which prevents the measurement of the moon's mass and density. In spite of this, researchers assume that Polydeuces has a density similar to those of Saturn's small inner moons, whose average density is .
Polydeuces's small size makes it prone to disruption by impact events. Depending on the size-frequency of impactors in the Saturnian system, Polydeuces is predicted to have suffered at least one disruptive impact in the last one billion years. This implies that Polydeuces is either very young with an age of less than one billion years, or it is a primordial moon that has consistently reaccreted from each disruptive impact over the Saturnian system's 4.5 billion-year lifespan.
Polydeuces has a bright and likely smooth surface due to the accumulation of fine water ice particles from the surrounding E Ring, which is generated by the cryovolcanic plumes of Enceladus. Because of its small size, any craters on Polydeuces would be completely buried in E Ring material, giving it a craterless appearance resembling Methone or Pallene. Its geometric albedo is unknown since it has never been observed at low phase angles. Cassini imagery shows that Polydeuces has a uniform surface brightness across its leading and trailing hemispheres. Its surface is about as bright as Dione's but darker than Helene's. The trojan moons of Tethys exhibit a similar difference in surface brightness, where Calypso is brighter than Telesto and Tethys. The reason for these brightness asymmetries in the trojan moons of Dione and Tethys remains unknown; possible explanations include an asymmetric distribution of E Ring particles or recent impacts that brightened Helene and Calypso.
Exploration
Cassini is the only space mission to Saturn that has made targeted observations of Polydeuces. Over the 13-year span of Cassini's mission in orbit around Saturn, the spacecraft made 22 close approaches within of Polydeuces. Cassini's closest encounter with Polydeuces took place on 17 February 2005, when it passed from Polydeuces while moving outbound from periapse. However, Cassini did not take any images of Polydeuces on that date. The only encounters where Cassini took resolved images of Polydeuces were on 22 May 2006, 10 May 2015, and 16 June 2015, at closest approach distances of , , and , respectively. Cassini's two close encounters in 2015 provided the first images where Polydeuces was larger than 10 pixels across.
See also
Telesto and Calypso, trojan moons of Tethys at its L4 and L5 Lagrange points, respectively
Janus and Epimetheus, two inner moons of Saturn in a co-orbital exchange orbit with each other
Notes
References
External links
Polydeuces In Depth, NASA Solar System Exploration, updated 19 December 2019
PIA08209: New Moon, NASA Photojournal, 28 June 2006
Cassini finds treasures among Saturn's rings, moons, Cassini news release via Spaceflight Now, 24 February 2005
Moons of Saturn
Trojan moons
20041021
Moons with a prograde orbit
Castor and Pollux | Polydeuces (moon) | Astronomy | 2,266 |
1,782,065 | https://en.wikipedia.org/wiki/1%2C8-Diazabicyclo%285.4.0%29undec-7-ene | 1,8-Diazabicyclo[5.4.0]undec-7-ene, or more commonly DBU, is a chemical compound and belongs to the class of amidine compounds. It is used in organic synthesis as a catalyst, a complexing ligand, and a non-nucleophilic base.
Occurrence
Although all commercially available DBU is produced synthetically, it may also be isolated from the sea sponge Niphates digitalis. The biosynthesis of DBU has been proposed to begin with adipaldehyde and 1,3-diaminopropane.
Uses
As a reagent in organic chemistry, DBU is used as a ligand and base. As a base, protonation occurs at the imine nitrogen. Lewis acids also attach to the same nitrogen.
These properties recommend DBU for use as a catalyst, for example as a curing agent for epoxy resins and polyurethane.
It is used in the separation of fullerenes in conjunction with trimethylbenzene. It reacts with C70 and higher fullerenes, but not with C60.
It is useful for dehydrohalogenations.
See also
1,5-Diazabicyclo[4.3.0]non-5-ene
DABCO
References
Amidines
Reagents for organic chemistry
Non-nucleophilic bases | 1,8-Diazabicyclo(5.4.0)undec-7-ene | Chemistry | 290 |
6,742,890 | https://en.wikipedia.org/wiki/Keyword-driven%20testing | Keyword-driven testing, also known as action word based testing (not to be confused with action driven testing), is a software testing methodology suitable for both manual and automated testing. This method separates the documentation of test cases (including both the data and the functionality to use) from the prescription of the way the test cases are executed. As a result, it separates the test creation process into two distinct stages: a design and development stage, and an execution stage. The design substage covers the requirement analysis and assessment and the data analysis, definition, and population.
Overview
This methodology uses keywords (or action words) to symbolize a functionality to be tested, such as Enter Client. The keyword Enter Client is defined as the set of actions that must be executed to enter a new client in the database. Its keyword documentation would contain:
the starting state of the system under test (SUT)
the window or menu to start from
the keys or mouse clicks to get to the correct data entry window
the names of the fields to find and which arguments to enter
the actions to perform in case additional dialogs pop up (like confirmations)
the button to click to submit
an assertion about what the state of the SUT should be after completion of the actions
Keyword-driven testing syntax lists test cases (data and action words) using a table format (see example below). The first column (column A) holds the keyword, Enter Client, which is the functionality being tested. Then the remaining columns, B-E, contain the data needed to execute the keyword: Name, Address, Postcode and City.
To enter another client, the tester would create another row in the table with Enter Client as the keyword and the new client's data in the following columns. There is no need to relist all the actions included.
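A minimal illustration of the tabular syntax just described, with the keyword in column A and its argument data in columns B-E; the client records shown here are invented for illustration:

Enter Client    John Doe    32 Plough Lane    5327 KW    Harwich
Enter Client    Jane Roe    194 Oak Avenue    3521 QT    Ipswich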
Within this methodology, you can design your test cases by:
Indicating the high-level steps needed to interact with the application and the system in order to perform the test.
Indicating how to validate and certify the features are working properly.
Specifying the preconditions for the test.
Specifying the acceptance criteria for the test.
Given the iterative nature of software development, the test design is typically more abstract (less specific) than a manual implementation of a test, but it can easily evolve into one.
Advantages
Keyword-driven testing reduces the maintenance burden caused by changes in the System/Software Under Test (SUT). If screen layouts change or the system is migrated to another OS, hardly any changes have to be made to the test cases: the changes are made in the keyword documentation, one document for every keyword, no matter how many times the keyword is used in test cases. This does, however, presuppose a thorough test-design process.
Also, because the keyword documentation describes in detail how each keyword is executed, the test can be performed by almost anyone. Thus keyword-driven testing can be used for both manual testing and automated testing.
Furthermore, this approach is an open and extensible framework that unites all the tools, assets, and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward. It is where the team defines the plan it will implement to meet those goals. And, most importantly, it provides the entire team with one place to go to determine the state of the system at any time.
Testing is the feedback mechanism in the software development process. It tells you where corrections need to be made to stay on course at any given iteration of a development effort. It also tells you about the current quality of the system being developed. The activity of implementing tests involves the design and development of reusable test scripts that implement the test case. After the implementation, it can be associated with the test case.
Implementation is different in every testing project. In one project, you might decide to build both automated test scripts and manual test scripts. Designing tests, instead, is an iterative process. You can start designing tests before any system implementation by basing the test design on use case specifications, requirements, prototypes, and so on. As the system becomes more clearly specified, and you have builds of the system to work with, you can elaborate on the details of the design. The activity of designing tests answers the question, “How am I going to perform the testing?” A complete test design informs readers about what actions need to be taken with the system and what behaviors and characteristics they should expect to observe if the system is functioning properly.
A test design is different from the design work that should be done in determining how to build your test implementation.
Methodology
The keyword-driven testing methodology divides test process execution into several stages:
Model basis/prototyping: analysis and assessment of requirements.
Test model definition: based on the results of the requirements assessment, derive a dedicated software model.
Test data definition: based on the defined model, define the keywords and the main and complementary data.
Test preparation: intake of the test basis, etc.
Test design: analysis of test basis, test case/procedure design, test data design.
Manual test execution: manual execution of the test cases using keyword documentation as execution guideline.
Automation of test execution: creation of automated script that perform actions according to the keyword documentation.
Automated test execution.
Definition
A Keyword or Action Word is a defined combination of actions on a test object which describes how test lines must be executed. An action word contains arguments and is defined by a test analyst.
Testing is a key step in any development process and applies a series of tests or checks to a test object (the system or software under test, SUT). It should always be remembered that testing can only show the presence of errors, not their absence. In real-time (RT) system testing, it is not sufficient to check whether the SUT produces the correct outputs. It must also verify that the time taken to produce that output is as expected. Furthermore, the timing of these outputs may also depend on the timing of the inputs. In turn, the timing of applicable future inputs may be determined from the outputs.
Automation of the test execution
The implementation stage differs depending on the tool or framework.
Often, automation engineers implement a framework that provides keywords like “check” and “enter”. Testers or test designers (who do not need to know how to program) write test cases based on the keywords defined in the planning stage that have been implemented by the engineers. The test is executed using a driver that reads the keywords and executes the corresponding code.
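A minimal sketch of such a driver in Python, assuming the keywords have been implemented as ordinary functions and that test lines are rows consisting of a keyword followed by its arguments (all names and data here are hypothetical, not from any particular framework):

# Keyword implementations, written once by automation engineers.
def enter_client(name, address, postcode, city):
    print(f"Entering client: {name}, {address}, {postcode}, {city}")

def check_client(name):
    print(f"Checking that client {name} exists")

# The planning stage maps keyword names to their implementations.
KEYWORDS = {
    "Enter Client": enter_client,
    "Check Client": check_client,
}

def run(test_table):
    # The driver reads each test line and dispatches to the keyword's code.
    for keyword, *arguments in test_table:
        KEYWORDS[keyword](*arguments)

run([
    ("Enter Client", "John Doe", "32 Plough Lane", "5327 KW", "Harwich"),
    ("Check Client", "John Doe"),
])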
Other methodologies use an all-in-one implementation stage. Instead of separating the tasks of test design and test engineering, the test design is the test automation. Keywords, such as “edit” or “check” are created using tools in which the necessary code has already been written. This removes the necessity for extra engineers in the test process, because the implementation for the keywords is already a part of the tool. Examples include GUIdancer and QTP.
Pros
Maintenance is low in the long run:
Test cases are concise
Test cases are readable for the stakeholders
Test cases are easy to modify
New test cases can reuse existing keywords more easily
Keyword re-use across multiple test cases
Not dependent on a specific tool or programming language
Division of Labor
Test case construction needs stronger domain expertise - lesser tool / programming skills
Keyword implementation requires stronger tool/programming skill - with relatively lower domain skill
Abstraction of Layers
Cons
Longer time to market (as compared to manual testing or record and replay technique)
Moderately high learning curve initially
See also
Data-driven testing
Test Automation Framework
Test-Driven Development
References
External links
Action based testing
Success Factors for Keyword Driven Testing, by Hans Buwalda
SAFS (Software Automation Framework Support)
Test automation frameworks
Automation Framework - gFast: generic Framework for Automated Software Testing - QTP Framework
Software testing | Keyword-driven testing | Engineering | 1,634 |
45,019,735 | https://en.wikipedia.org/wiki/Niobium%20nanowire | Niobium nanowires are nanowires made of the element niobium, which is a transition metal. Niobium nanowires, in oxide or nitride form, are used to detect single photons at low temperatures. The superconducting nanowire single-photon detector is an example of a device made from these nano-structured materials.
References
Nanoelectronics
Niobium | Niobium nanowire | Materials_science | 83 |
1,549,715 | https://en.wikipedia.org/wiki/Polysulfide | Polysulfides are a class of chemical compounds derived from anionic chains of sulfur atoms. There are two main classes of polysulfides: inorganic and organic. The inorganic polysulfides have the general formula Sn2−. These anions are the conjugate bases of polysulfanes H2Sn. Organic polysulfides generally have the formula RSnR, where R is an alkyl or aryl group.
Polysulfide salts and complexes
The alkali metal polysulfides arise by treatment of a solution of the sulfide with elemental sulfur, e.g. sodium sulfide to sodium polysulfide:
Na2S + n S → Na2Sn+1
In some cases, these anions have been obtained as organic salts, which are soluble in organic solvents.
The energy released in the reaction of sodium and elemental sulfur is the basis of battery technology. The sodium–sulfur battery and the lithium–sulfur battery require high temperatures to maintain liquid polysulfide and Na+-conductive membranes that are unreactive toward sodium, sulfur, and sodium sulfide.
Polysulfides are ligands in coordination chemistry. Examples of transition metal polysulfido complexes include , , and . Main group elements also form polysulfides.
Organic polysulfides
In commerce, the term "polysulfide" usually refers to a class of polymers with alternating chains of several sulfur atoms and hydrocarbons. They have the formula . In this formula n indicates the number of sulfur atoms (or "rank"). Polysulfide polymers can be synthesized by condensation polymerization reactions between organic dihalides and alkali metal salts of polysulfide anions:
Dihalides used in this condensation polymerization are dichloroalkanes such as 1,2-dichloroethane, bis(2-chloroethoxy)methane, and 1,3-dichloropropane. The polymers are called thiokols. In some cases, polysulfide polymers can be formed by ring-opening polymerization reactions.
Polysulfide polymers are also prepared by the addition of polysulfanes to alkenes. An idealized equation is:
In reality, homogeneous samples of are difficult to prepare.
Polysulfide polymers are insoluble in water, oils, and many other organic solvents. Because of their solvent resistance, these materials find use as sealants to fill the joints in pavement, automotive window glass, and aircraft structures.
Polymers containing one or two sulfur atoms separated by hydrocarbon sequences are usually not classified as polysulfides, e.g. poly(p-phenylene) sulfide.
Polysulfides in vulcanized rubber
Many commercial elastomers contain polysulfides as crosslinks. These crosslinks interconnect neighboring polymer chains, thereby conferring rigidity. The degree of rigidity is related to the number of crosslinks. Elastomers, therefore, have a characteristic ability to return to their original shape after being stretched or compressed. Because of this memory for their original cured shape, elastomers are commonly referred to as rubbers. The process of crosslinking the polymer chains in these polymers with sulfur is called vulcanization. The sulfur chains attach themselves to the allylic carbon atoms, which are adjacent to C=C linkages. Vulcanization is a step in the processing of several classes of rubbers, including polychloroprene (Neoprene), styrene-butadiene, and polyisoprene, which is chemically similar to natural rubber. Charles Goodyear's discovery of vulcanization, involving the heating of polyisoprene with sulfur, was revolutionary because it converted a sticky and almost useless material into an elastomer that could be fabricated into useful products.
Occurrence in gas giants
In addition to water and ammonia, the clouds in the atmospheres of the gas giant planets contain ammonium sulfides. The reddish-brownish clouds are attributed to polysulfides, arising from the exposure of the ammonium sulfides to light.
Properties
Polysulfides, like sulfides, can induce stress corrosion cracking in carbon steel and stainless steel.
See also
References
Sulfur compounds
Anions
Inorganic polymers
Corrosion
Polysulfides | Polysulfide | Physics,Chemistry,Materials_science | 872 |
46,779,473 | https://en.wikipedia.org/wiki/S%20Trianguli%20Australis | S Trianguli Australis is a yellow-white hued variable star in the constellation Triangulum Australe. It is a dim star near the lower limit of visibility with the naked eye, having a typical apparent visual magnitude of 6.41. Based upon an annual parallax shift of , it is located 3,030 light years from the Earth.
A Classical Cepheid variable, its apparent magnitude ranges from 5.95 to 6.81 over 6.32344 days. It is a bright giant with a nominal stellar classification of F8 II, that pulsates between spectral types F6II-G2. The star has 2.8 times the mass of the Sun and 39.2 times the Sun's radius. It is losing mass at the estimated rate of .
References
F-type bright giants
Classical Cepheid variables
Triangulum Australe
Durchmusterung objects
078476
142941
5939
Trianguli Australis, S | S Trianguli Australis | Astronomy | 208 |
29,963,420 | https://en.wikipedia.org/wiki/Squamanita%20schreieri | Squamanita schreieri is a species of fungus in the order Agaricales and the type species of the genus Squamanita. It is parasitic on basidiocarps (fruit bodies) of the ectomycorrhizal fungi Amanita solitaria and A. strobiliformis, replacing their caps with its own. The species was first described scientifically by Swiss mycologist Emil J. Imbach in 1946. It is only known from a few sites in central mainland Europe and threats to its habitat (hardwood forests) have resulted in the species being assessed as globally "endangered" on the IUCN Red List of Threatened Species.
References
Fungi described in 1946
Fungi of Europe
Fungus species
Agaricales | Squamanita schreieri | Biology | 154 |
34,557,525 | https://en.wikipedia.org/wiki/NucleaRDB | The NucleaRDB is a database of nuclear receptors. It contains data about the sequences, ligand binding constants and mutations of those proteins.
See also
Nuclear receptor
References
External links
https://web.archive.org/web/20120409204749/http://www.receptors.org/nucleardb/.
Biological databases
Intracellular receptors
Protein families
Transcription factors | NucleaRDB | Chemistry,Biology | 79 |
76,000,797 | https://en.wikipedia.org/wiki/New%20materialism | New materialism is a term which refers to several theoretical perspectives within contemporary philosophy that attempt to rework the conventional ontological understanding of the material world. While many philosophical tendencies are associated with new materialism, in such a way that the movement resists a single definition, its common characteristics include a rejection of essentialism, representationalism, and anthropocentrism as well as the dualistic boundaries between nature/culture; subject/object; and human/non-human. Instead, new materialists emphasize how fixed entities and apparently closed systems are produced through dynamic relations and processes, considering the distribution of agency through the interaction of heterogeneous forces. The movement has influenced a wide variety of new articulations between intellectual currents in science and philosophy, in fields such as science and technology studies, as well as systems science.
Origin
The term was independently coined by Manuel DeLanda and Rosi Braidotti during the second half of the 1990s to identify an emerging body of interdisciplinary theory that sought to overcome the post-structuralist emphasis on discourse, while drawing on the work of Gilles Deleuze, Félix Guattari, and Gilbert Simondon in seeking to establish a materialist ontology that prioritizes processes of individuation.
Reception
As of 2024, new materialism has been well-received in a wide range of disciplines in contemporary academia, from environmental studies to philosophy. Frequently referenced works include Karen Barad's Meeting the Universe Halfway and Jane Bennett's Vibrant Matter. New materialists emphasise how Cartesian binaries between human and nature have caused many issues in the world by ignoring social complexity. New materialism has been championed for its more integrated approach, which considers material and immaterial, biological, and social aspects as interconnected processes rather than distinct entities.
Criticism
Ecologist Andreas Malm has called New Materialism 'idealism of the most useless sort', stating that the approach has little use for climate action or changing our relationship with nature, since it denies distinctions between humanity and nature. Malm argues that this supports the status quo rather than challenging it. He also expresses frustration with the writing style of many New Materialists, claiming that they resist distinctions between things, making their writing impenetrable.
Associated theorists
Karen Barad
Jane Bennett
Rosi Braidotti
Donna Haraway
Isabelle Stengers
Rick Dolphijn
Manuel DeLanda
Catherine Malabou
Quentin Meillassoux
Bruno Latour
Arturo Escobar
Levi Bryant
Drew M. Dalton
Thomas Nail
Tim Ingold
See also
Posthumanism
Assemblage
Agential realism
Speculative realism
References
Bibliography
Materialism
Political ecology
Contemporary philosophy
Philosophical schools and traditions | New materialism | Physics,Environmental_science | 536 |
374,002 | https://en.wikipedia.org/wiki/Limit%20cardinal | In mathematics, limit cardinals are certain cardinal numbers. A cardinal number λ is a weak limit cardinal if λ is neither a successor cardinal nor zero. This means that one cannot "reach" λ from another cardinal by repeated successor operations. These cardinals are sometimes called simply "limit cardinals" when the context is clear.
A cardinal λ is a strong limit cardinal if λ cannot be reached by repeated powerset operations. This means that λ is nonzero and, for all κ < λ, 2^κ < λ. Every strong limit cardinal is also a weak limit cardinal, because κ⁺ ≤ 2^κ for every cardinal κ, where κ⁺ denotes the successor cardinal of κ.
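In symbols, the two definitions can be restated compactly; a sketch in LaTeX notation, with \kappa ranging over cardinals and \kappa^{+} denoting the successor cardinal as above:

\lambda \text{ is a weak limit} \iff \lambda \neq 0 \;\wedge\; \forall\kappa\, (\lambda \neq \kappa^{+})
\lambda \text{ is a strong limit} \iff \lambda \neq 0 \;\wedge\; \forall\kappa < \lambda\, (2^{\kappa} < \lambda)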
The first infinite cardinal, ℵ_0 (aleph-naught), is a strong limit cardinal, and hence also a weak limit cardinal.
Constructions
One way to construct limit cardinals is via the union operation: ℵ_ω is a weak limit cardinal, defined as the union of all the alephs before it; and in general ℵ_λ for any limit ordinal λ is a weak limit cardinal.
The ℶ (beth) operation can be used to obtain strong limit cardinals. This operation is a map from ordinals to cardinals defined as
ℶ_0 = ℵ_0,
ℶ_{α+1} = 2^{ℶ_α} (the smallest ordinal equinumerous with the powerset),
ℶ_λ = sup{ℶ_α : α < λ} if λ is a limit ordinal.
The cardinal
ℶ_ω = sup{ℶ_0, ℶ_1, ℶ_2, ...}
is a strong limit cardinal of cofinality ω. More generally, given any ordinal α, the cardinal
ℶ_{α+ω}
is a strong limit cardinal. Thus there are arbitrarily large strong limit cardinals.
Relationship with ordinal subscripts
If the axiom of choice holds, every cardinal number has an initial ordinal. If that initial ordinal is ω_λ, then the cardinal number is of the form ℵ_λ for the same ordinal subscript λ. The ordinal λ determines whether ℵ_λ is a weak limit cardinal. Because ℵ_{α+1} = (ℵ_α)⁺, if λ is a successor ordinal, say λ = α + 1, then ℵ_λ is not a weak limit. Conversely, if a cardinal κ is a successor cardinal, say κ = (ℵ_α)⁺ = ℵ_{α+1}, then its subscript is a successor ordinal. Thus, in general, ℵ_λ is a weak limit cardinal if and only if λ is zero or a limit ordinal.
Although the ordinal subscript tells us whether a cardinal is a weak limit, it does not tell us whether a cardinal is a strong limit. For example, ZFC proves that ℵ_ω is a weak limit cardinal, but neither proves nor disproves that ℵ_ω is a strong limit cardinal (Hrbacek and Jech 1999:168). The generalized continuum hypothesis states that 2^κ = κ⁺ for every infinite cardinal κ. Under this hypothesis, the notions of weak and strong limit cardinals coincide.
The notion of inaccessibility and large cardinals
The preceding defines a notion of "inaccessibility": we are dealing with cases where it is no longer enough to do finitely many iterations of the successor and powerset operations; hence the phrase "cannot be reached" in both of the intuitive definitions above. But the "union operation" always provides another way of "accessing" these cardinals (and indeed, such is the case of limit ordinals as well). Stronger notions of inaccessibility can be defined using cofinality. For a weak (respectively strong) limit cardinal κ the requirement is that cf(κ) = κ (i.e. κ be regular) so that κ cannot be expressed as a sum (union) of fewer than κ smaller cardinals. Such a cardinal is called a weakly (respectively strongly) inaccessible cardinal. The preceding examples both are singular cardinals of cofinality ω and hence they are not inaccessible.
ℵ_0 would be an inaccessible cardinal of both "strengths" except that the definition of inaccessible requires that they be uncountable. Standard Zermelo–Fraenkel set theory with the axiom of choice (ZFC) cannot even prove the consistency of the existence of an inaccessible cardinal of either kind above ℵ_0, due to Gödel's incompleteness theorem. More specifically, if κ is weakly inaccessible then L_κ ⊨ ZFC. These form the first in a hierarchy of large cardinals.
See also
Cardinal number
References
External links
http://www.ii.com/math/cardinals/ Infinite ink on cardinals
Set theory
Cardinal numbers | Limit cardinal | Mathematics | 830 |
10,946,528 | https://en.wikipedia.org/wiki/Activating%20transcription%20factor | Activating transcription factor, ATF, is a group of bZIP transcription factors, which act as homodimers or heterodimers with a range of other bZIP factors. They were first described as members of the CREB/ATF family, though it later turned out that some of them may be more similar to AP-1-like factors such as c-Jun or c-Fos. In general, ATFs are known to respond to extracellular signals, which suggests an important role in maintaining homeostasis. Some of these ATFs, such as ATF3, ATF4, and ATF6, are known to play a role in stress responses. Another example of ATF function is ATFx, which can suppress apoptosis.
Genes include ATF1, ATF2, ATF3, ATF4, ATF5, ATF6, ATF7, ATFx.
References
External links
Transcription factors | Activating transcription factor | Chemistry,Biology | 204 |
54,280,883 | https://en.wikipedia.org/wiki/NGC%207077 | NGC 7077 is a lenticular blue compact dwarf galaxy located about 56 million light-years away from Earth in the constellation Aquarius. Discovered by astronomer Albert Marth on August 11, 1863, the galaxy lies within the Local Void.
See also
List of NGC objects (7001–7840)
References
External links
Peculiar galaxies
Lenticular galaxies
Aquarius (constellation)
7077
11755
66860
Astronomical objects discovered in 1863
Markarian galaxies | NGC 7077 | Astronomy | 92 |
37,464,575 | https://en.wikipedia.org/wiki/Russula%20rugulosa | Russula rugulosa is a species of agaric fungus in the family Russulaceae. It was first described by American mycologist Charles Horton Peck in 1902.
See also
List of Russula species
References
External links
rugulosa
Fungi described in 1902
Fungi of North America
Taxa named by Charles Horton Peck
Fungus species | Russula rugulosa | Biology | 66 |
7,066,954 | https://en.wikipedia.org/wiki/Low-energy%20transfer | A low-energy transfer, or low-energy trajectory, is a route in space that allows spacecraft to change orbits using significantly less fuel than traditional transfers. These routes work in the Earth–Moon system and also in other systems, such as between the moons of Jupiter. The drawback of such trajectories is that they take longer to complete than higher-energy (more-fuel) transfers, such as Hohmann transfer orbits.
Low-energy transfers are also known as Weak Stability Boundary trajectories, and include ballistic capture trajectories.
Low-energy transfers follow special pathways in space, sometimes referred to as the Interplanetary Transport Network. Following these pathways allows for long distances to be traversed for little change in velocity, or delta-v (Δv).
Example missions
Missions that have used low-energy transfers include:
Hiten, from JAXA
SMART-1, from ESA
Genesis, from NASA.
GRAIL, from NASA.
Danuri from KARI
Ongoing missions that use low-energy transfers include:
BepiColombo, from ESA/JAXA
CAPSTONE from NASA
SLIM, from JAXA
Proposed missions using low-energy transfers include:
European Student Moon Orbiter (ESMO)
Mars Direct
History
Low-energy transfers to the Moon were first demonstrated in 1991 by the Japanese spacecraft Hiten, which was designed to swing by the Moon but not to enter orbit. The Hagoromo subsatellite was released by Hiten on its first swing-by and may have successfully entered lunar orbit, but suffered a communications failure.
Edward Belbruno and James Miller of the Jet Propulsion Laboratory had heard of the failure, and helped to salvage the mission by developing a ballistic capture trajectory that would enable the main Hiten probe to itself enter lunar orbit. The trajectory they developed for Hiten used Weak Stability Boundary Theory and required only a small perturbation to the elliptical swing-by orbit, sufficiently small to be achievable by the spacecraft's thrusters. This course would result in the probe being captured into temporary lunar orbit using zero Δv, but required five months instead of the usual three days for a Hohmann transfer.
Delta-v savings
From low Earth orbit to lunar orbit, the savings approach 25% on the burn applied after leaving low Earth orbit, compared to the retrograde burn applied near the Moon in the traditional Hohmann transfer approach, and allow for a doubling of payload.
Robert Farquhar had described a 9-day route from low Earth orbit to lunar capture that takes 3.5 km/s. Belbruno's routes from low Earth orbit require a 3.1 km/s burn for trans-lunar injection, a delta-v saving of not more than 0.4 km/s. However, the latter require no large delta-v change after leaving low Earth orbit, which may have operational benefits if using an upper stage with limited restart or in-orbit endurance capability, which would otherwise require the spacecraft to have a separate main propulsion system for capture.
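The payload impact of such savings follows from the Tsiolkovsky rocket equation. A minimal sketch in Python; the 800 m/s and 600 m/s capture burns and the 320 s specific impulse are illustrative assumptions, not figures from the missions above:

import math

def propellant_fraction(delta_v, isp=320.0, g0=9.80665):
    # Tsiolkovsky rocket equation: m_prop / m_initial = 1 - exp(-dv / (isp * g0))
    return 1.0 - math.exp(-delta_v / (isp * g0))

# A hypothetical capture burn, with and without a 25% low-energy saving.
for dv in (800.0, 600.0):
    print(f"dv = {dv:.0f} m/s -> propellant fraction = {propellant_fraction(dv):.3f}")

Every kilogram of capture propellant that the lower burn makes unnecessary can instead be carried as payload, which is how a modest delta-v saving can translate into a large payload gain.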
For rendezvous with the Martian moons, the savings are 12% for Phobos and 20% for Deimos. Rendezvous is targeted because the stable pseudo-orbits around the Martian moons do not spend much time within 10 km of the surface.
See also
Bi-elliptic transfer
Gravity assist
Interplanetary Transport Network
Orbital mechanics
References
External links
Celestial Mechanics Theory Meets the Nitty-Gritty of Trajectory Design
Earth-to-Moon Low Energy Transfers Targeting L1 Hyperbolic Transit Orbit June 2005
Low Energy Trajectories and Chaos: Applications to Astrodynamics and Dynamical Astronomy
Navigating Celestial Currents
Astrodynamics | Low-energy transfer | Engineering | 725 |
46,674,466 | https://en.wikipedia.org/wiki/Peryton%20%28astronomy%29 | In radio astronomy, perytons are short man-made radio signals of a few milliseconds resembling fast radio bursts (FRB). A peryton differs from ordinary radio frequency interference in that it is a pulse of several to tens of milliseconds' duration which sweeps down in frequency. Perytons are further distinguished by the fact that they occur at the same time in many beams, indicating that they come from Earth, whereas FRBs occur in only one or two of the beams, indicating that they are of astronomical origin. The first signal occurred in 2001 but was not discovered until 2007. First detected at the Parkes Observatory, data gathered by the telescope also suggested the source was local. The signals were found to be caused by premature opening of a microwave oven door nearby.
Naming
Due to the initially unclear origin of the detections, the radio signals were named after the peryton, a mythical winged stag that casts the shadow of a man, an allusion to "strangeness made by man". The name was chosen for these signals because they are man-made but have characteristics that mimic the natural phenomenon of FRBs. It was coined by Sarah Burke-Spolaor et al. in 2011.
Detection
Perytons were observed at the Parkes Observatory and Bleien Radio Observatory. After the discovery of the first FRB in 2007, Dr. Burke searched through old telescope data looking for similar signals. She found what she was looking for, with a small difference. The 16 signals that she found seemed to fill the entire patch of the sky visible to the telescope. The lack of directionality in the new signals led Burke to conclude that the signals were man-made and terrestrial in origin. Between 1998 and 2015, old data showed 46 perytons that were identified at the Parkes Observatory. On June 23, 1998, 16 perytons were detected at that same location within 7 minutes. In January 2015, 3 perytons were detected at the Parkes Observatory. As of 2015, 25 perytons had been the subject of scientific publications.
Origin hypotheses
These signals mimicked some aspects of FRBs that appeared to be coming from outside the Milky Way galaxy, but the possibility of their having an astronomical origin was soon excluded. To track activities near the telescope, the Commonwealth Scientific and Industrial Research Organization (CSIRO) installed a radio frequency interference (RFI) monitor at the Parkes site in December 2014. This form of monitoring became more common as radio-emitting devices became more prevalent on radio telescope sites, including mobile phones, Wi-Fi, and digital televisions. Important information was disclosed by the RFI monitor data, which had not been accessible for earlier peryton discoveries. Each peryton event was accompanied by a period of radio emission at a frequency of 2.5 GHz that was outside the telescope's field of view. These spikes were probably related to the perytons. Hypothesized potential sources of perytons included:
Signals from aircraft
Flashes in the ionosphere
Lightning
Solar flares
Terrestrial gamma-ray flashes
Narrow bipolar pulse (electrical discharges between clouds at high altitude with a capacity of several hundred gigawatts).
Identification of origin
In 2015, perytons were found to be the result of the premature opening of microwave oven doors at the Parkes Observatory. On March 17, 2015, three perytons were deliberately produced in an experiment by microwaving ceramic mugs filled with water and opening the door before the microwave had stopped operating. The microwave oven releases a frequency-swept radio pulse that mimics an FRB as the magnetron turns off. Two Matsushita microwave ovens were deemed responsible for most of the perytons. Both were functional and over 27 years old. Perytons were found to be produced about 50% of the times that the microwave door was opened before the timer expired.
References
External links
What is a peryton? at Physics Stack Exchange
See also
Fast radio burst
Wow! signal
Astrophysics
Radio astronomy
Microwave transmission | Peryton (astronomy) | Physics,Astronomy | 821 |
37,035 | https://en.wikipedia.org/wiki/Conway%27s%20Game%20of%20Life | The Game of Life, also known as Conway's Game of Life or simply Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine.
Rules
The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead (or populated and unpopulated, respectively). Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
Any live cell with fewer than two live neighbours dies, as if by underpopulation.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overpopulation.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed, live or dead; births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick. Each generation is a pure function of the preceding one. The rules continue to be applied repeatedly to create further generations.
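The four rules translate almost line-for-line into code. A minimal sketch in Python, representing the universe as a set of (x, y) coordinates of live cells (the coordinate-pair representation discussed under Algorithms below):

from collections import Counter

def step(live):
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly three neighbours; survival on two or three.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker, the simplest oscillator, returns to itself after two ticks.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker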
Origins
Stanisław Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model. At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems. Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model. As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and of the great cost in providing the robot with a "sea of parts" from which to build its replicant. Von Neumann wrote a paper entitled "The general and logical theory of automata" for the Hixon Symposium in 1948. Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication. Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbours' behaviours. Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighbourhood (only those cells that touch are neighbours; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000 cell configuration that could do so. This design is known as the tessellation model, and is called a von Neumann universal constructor.
Motivated by questions in mathematical logic and in part by work on simulation games by Ulam, among others, John Conway began doing experiments in 1968 with a variety of different two-dimensional cellular automaton rules. Conway's initial goal was to define an interesting and unpredictable cellular automaton. According to Martin Gardner, Conway experimented with different rules, aiming for rules that would allow for patterns to "apparently" grow without limit, while keeping it difficult to prove that any given pattern would do so. Moreover, some "simple initial patterns" should "grow and change for a considerable period of time" before settling into a static configuration or a repeating loop. Conway later wrote that the basic motivation for Life was to create a "universal" cellular automaton.
The game made its first public appearance in the October 1970 issue of Scientific American, in Martin Gardner's "Mathematical Games" column, which was based on personal conversations with Conway. Theoretically, the Game of Life has the power of a universal Turing machine: anything that can be computed algorithmically can be computed within the Game of Life. Gardner wrote, "Because of Life's analogies with the rise, fall, and alterations of a society of living organisms, it belongs to a growing class of what are called 'simulation games' (games that resemble real-life processes)."
Since its publication, the Game of Life has attracted much interest because of the surprising ways in which the patterns can evolve. It provides an example of emergence and self-organization. A version of Life that incorporates random fluctuations has been used in physics to study phase transitions and nonequilibrium dynamics. The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that design and organization can spontaneously emerge in the absence of a designer. For example, philosopher Daniel Dennett has used the analogy of the Game of Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws which might govern our universe.
The popularity of the Game of Life was helped by its coming into being at the same time as increasingly inexpensive computer access. The game could be run for hours on these machines, which would otherwise have remained unused at night. In this respect, it foreshadowed the later popularity of computer-generated fractals. For many, the Game of Life was simply a programming challenge: a fun way to use otherwise wasted CPU cycles. For some, however, the Game of Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Game of Life board.
Examples of patterns
Many different types of patterns occur in the Game of Life, which are classified according to their behaviour. Common pattern types include: still lifes, which do not change from one generation to the next; oscillators, which return to their initial state after a finite number of generations; and spaceships, which translate themselves across the grid.
The earliest interesting patterns in the Game of Life were discovered without the use of computers. The simplest still lifes and oscillators were discovered while tracking the fates of various small starting configurations using graph paper, blackboards, and physical game boards, such as those used in Go. During this early research, Conway discovered that the R-pentomino failed to stabilize in a small number of generations. In fact, it takes 1103 generations to stabilize, by which time it has a population of 116 and has generated six escaping gliders; these were the first spaceships ever discovered.
Frequently occurring examples (in that they emerge frequently from a random starting configuration of cells) of the three aforementioned pattern types are shown below, with live cells shown in black and dead cells in white. Period refers to the number of ticks a pattern must iterate through before returning to its initial configuration.
The pulsar is the most common period-3 oscillator. The great majority of naturally occurring oscillators have a period of 2, like the blinker and the toad, but oscillators of all periods are known to exist, and oscillators of periods 4, 8, 14, 15, 30, and a few others have been seen to arise from random initial conditions. Patterns which evolve for long periods before stabilizing are called Methuselahs, the first-discovered of which was the R-pentomino. Diehard is a pattern that disappears after 130 generations. Starting patterns of eight or more cells can be made to die after an arbitrarily long time. Acorn takes 5,206 generations to generate 633 cells, including 13 escaped gliders.
Conway originally conjectured that no pattern can grow indefinitely—i.e. that for any initial configuration with a finite number of living cells, the population cannot grow beyond some finite upper limit. In the game's original appearance in "Mathematical Games", Conway offered a prize of fifty dollars to the first person who could prove or disprove the conjecture before the end of 1970. The prize was won in November by a team from the Massachusetts Institute of Technology, led by Bill Gosper; the "Gosper glider gun" produces its first glider on the 15th generation, and another glider every 30th generation from then on. For many years, this glider gun was the smallest one known. In 2015, a gun called the "Simkin glider gun", which releases a glider every 120th generation, was discovered that has fewer live cells but which is spread out across a larger bounding box at its extremities.
Smaller patterns were later found that also exhibit infinite growth. All three of the patterns shown below grow indefinitely. The first two create a single block-laying switch engine: a configuration that leaves behind two-by-two still life blocks as it translates itself across the game's universe. The third configuration creates two such patterns. The first has only ten live cells, which has been proven to be minimal. The second fits in a five-by-five square, and the third is only one cell high.
Later discoveries included other guns, which are stationary, and which produce gliders or other spaceships; puffer trains, which move along leaving behind a trail of debris; and rakes, which move and emit spaceships. Gosper also constructed the first pattern with an asymptotically optimal quadratic growth rate, called a breeder or lobster, which worked by leaving behind a trail of guns.
It is possible for gliders to interact with other objects in interesting ways. For example, if two gliders are shot at a block in a specific position, the block will move closer to the source of the gliders. If three gliders are shot in just the right way, the block will move farther away. This sliding block memory can be used to simulate a counter. It is possible to construct logic gates such as AND, OR, and NOT using gliders. It is possible to build a pattern that acts like a finite-state machine connected to two counters. This has the same computational power as a universal Turing machine, so the Game of Life is theoretically as powerful as any computer with unlimited memory and no time constraints; it is Turing complete. In fact, several different programmable computer architectures have been implemented in the Game of Life, including a pattern that simulates Tetris.
Oblique spaceships
Until the 2010s, all known spaceships could only move orthogonally or diagonally. Spaceships which move neither orthogonally nor diagonally are commonly referred to as oblique spaceships. On May 18, 2010, Andrew J. Wade announced the first oblique spaceship, dubbed "Gemini", that creates a copy of itself displaced along the (5, 1) direction while destroying its parent. This pattern replicates in 34 million generations, and uses an instruction tape made of gliders oscillating between two stable configurations made of Chapman–Greene construction arms. These, in turn, create new copies of the pattern, and destroy the previous copy. In December 2015, diagonal versions of the Gemini were built.
A more specific case is a knightship, a spaceship that moves two squares left for every one square it moves down (like a knight in chess), whose existence had been predicted by Elwyn Berlekamp since 1982. The first elementary knightship, Sir Robin, was discovered in 2018 by Adam P. Goucher. This is the first new spaceship movement pattern for an elementary spaceship found in forty-eight years. "Elementary" means that it cannot be decomposed into smaller interacting patterns such as gliders and still lifes.
Self-replication
A pattern can contain a collection of guns that fire gliders in such a way as to construct new objects, including copies of the original pattern. A universal constructor can be built which contains a Turing complete computer, and which can build many types of complex objects, including more copies of itself. On November 23, 2013, Dave Greene built the first replicator in the Game of Life that creates a complete copy of itself, including the instruction tape. In October 2018, Adam P. Goucher finished his construction of the 0E0P metacell, a metacell capable of self-replication. This differed from previous metacells, such as the OTCA metapixel by Brice Due, which only worked with already constructed copies near them. The 0E0P metacell works by using construction arms to create copies that simulate the programmed rule. The actual simulation of the Game of Life or other Moore neighbourhood rules is done by simulating an equivalent rule using the von Neumann neighbourhood with more states. The name 0E0P is short for "Zero Encoded by Zero Population", which indicates that instead of a metacell being in an "off" state simulating empty space, the 0E0P metacell removes itself when the cell enters that state, leaving a blank space.
Undecidability
Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination.
The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. Given that the Game of Life is Turing-complete, this is a corollary of the halting problem: the problem of determining whether a given program will finish running or continue to run forever from an initial input.
Iteration
From most random initial patterns of living cells on the grid, observers will find the population constantly changing as the generations tick by. The patterns that emerge from the simple rules may be considered a form of mathematical beauty. Small isolated subpatterns with no initial symmetry tend to become symmetrical. Once this happens, the symmetry may increase in richness, but it cannot be lost unless a nearby subpattern comes close enough to disturb it. In a very few cases, the society eventually dies out, with all living cells vanishing, though this may not happen for a great many generations. Most initial patterns eventually burn out, producing either stable figures or patterns that oscillate forever between two or more states; many also produce one or more gliders or spaceships that travel indefinitely away from the initial location. Because of the nearest-neighbour based rules, no information can travel through the grid at a greater rate than one cell per unit time, so this velocity is said to be the cellular automaton speed of light and denoted c.
Algorithms
Early patterns with unknown futures, such as the R-pentomino, led computer programmers to write programs to track the evolution of patterns in the Game of Life. Most of the early algorithms were similar: they represented the patterns as two-dimensional arrays in computer memory. Typically, two arrays are used: one to hold the current generation, and one to calculate its successor. Often 0 and 1 represent dead and live cells, respectively. A nested for loop considers each element of the current array in turn, counting the live neighbours of each cell to decide whether the corresponding element of the successor array should be 0 or 1. The successor array is displayed. For the next iteration, the arrays may swap roles so that the successor array in the last iteration becomes the current array in the next iteration, or one may copy the values of the second array into the first array then update the second array from the first array again.
A variety of minor enhancements to this basic scheme are possible, and there are many ways to save unnecessary computation. A cell that did not change at the last time step, and none of whose neighbours changed, is guaranteed not to change at the current time step as well, so a program that keeps track of which areas are active can save time by not updating inactive zones.
To avoid decisions and branches in the counting loop, the rules can be rearranged from an egocentric approach of the inner field regarding its neighbours to a scientific observer's viewpoint: if the sum of all nine fields in a given neighbourhood is three, the inner field state for the next generation will be life; if the all-field sum is four, the inner field retains its current state; and every other sum sets the inner field to death.
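A sketch of this observer's-viewpoint rule in Python with NumPy, assuming the field is stored as a 2-D array of 0s and 1s with toroidal (wrap-around) edges:

import numpy as np

def step(grid):
    # Sum of each cell's nine-field neighbourhood (the cell itself plus
    # its eight neighbours), with wrap-around boundaries via np.roll.
    total = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1))
    # All-field sum of three: life; sum of four: keep state; otherwise: death.
    return np.where(total == 3, 1, np.where(total == 4, grid, 0))

glider = np.zeros((8, 8), dtype=int)
glider[[0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1
print(step(glider))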
To save memory, the storage can be reduced to one array plus two line buffers. One line buffer is used to calculate the successor state for a line, then the second line buffer is used to calculate the successor state for the next line. The first buffer is then written to its line and freed to hold the successor state for the third line. If a toroidal array is used, a third buffer is needed so that the original state of the first line in the array can be saved until the last line is computed.
In principle, the Game of Life field is infinite, but computers have finite memory. This leads to problems when the active area encroaches on the border of the array. Programmers have used several strategies to address these problems. The simplest strategy is to assume that every cell outside the array is dead. This is easy to program but leads to inaccurate results when the active area crosses the boundary. A more sophisticated trick is to consider the left and right edges of the field to be stitched together, and the top and bottom edges also, yielding a toroidal array. The result is that active areas that move across a field edge reappear at the opposite edge. Inaccuracy can still result if the pattern grows too large, but there are no pathological edge effects. Techniques of dynamic storage allocation may also be used, creating ever-larger arrays to hold growing patterns. The Game of Life on a finite field is sometimes explicitly studied; some implementations, such as Golly, support a choice of the standard infinite field, a field infinite only in one dimension, or a finite field, with a choice of topologies such as a cylinder, a torus, or a Möbius strip.
Alternatively, programmers may abandon the notion of representing the Game of Life field with a two-dimensional array, and use a different data structure, such as a vector of coordinate pairs representing live cells. This allows the pattern to move about the field unhindered, as long as the population does not exceed the size of the live-coordinate array. The drawback is that counting live neighbours becomes a hash-table lookup or search operation, slowing down simulation speed. With more sophisticated data structures this problem can also be largely solved.
For exploring large patterns at great time depths, sophisticated algorithms such as Hashlife may be useful. There is also a method for implementation of the Game of Life and other cellular automata using arbitrary asynchronous updates while still exactly emulating the behaviour of the synchronous game. Source code examples that implement the basic Game of Life scenario in various programming languages, including C, C++, Java and Python can be found at Rosetta Code.
Variations
Since the Game of Life's inception, new, similar cellular automata have been developed. The standard Game of Life is symbolized in rule-string notation as B3/S23. A cell is born if it has exactly three neighbours, survives if it has two or three living neighbours, and dies otherwise. The first number, or list of numbers, is what is required for a dead cell to be born. The second set is the requirement for a live cell to survive to the next generation. Hence B6/S16 means "a cell is born if there are six neighbours, and lives on if there are either one or six neighbours". Cellular automata on a two-dimensional grid that can be described in this way are known as Life-like cellular automata. Another common automaton, Highlife, is described by the rule B36/S23, because having six neighbours, in addition to the original game's B3/S23 rule, causes a birth. HighLife is best known for its frequently occurring replicators.
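Rule strings of this form are straightforward to parse and plug into a generic step function; a sketch in Python, reusing the sparse-set representation from the earlier example:

from collections import Counter

def parse_rulestring(rule):
    # Split e.g. "B36/S23" into birth and survival neighbour-count sets.
    birth, survival = rule.upper().split("/")
    return {int(d) for d in birth[1:]}, {int(d) for d in survival[1:]}

def step(live, rule="B3/S23"):
    birth, survival = parse_rulestring(rule)
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Live cells follow the survival set; dead cells follow the birth set.
    return {cell for cell, n in counts.items()
            if (n in survival if cell in live else n in birth)}

print(step({(0, 1), (1, 1), (2, 1)}, rule="B36/S23"))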
Additional Life-like cellular automata exist. The vast majority of these 2^18 = 262,144 different rules produce universes that are either too chaotic or too desolate to be of interest, but a large subset do display interesting behaviour. A further generalization produces the isotropic rulespace, with 2^102 possible cellular automaton rules (the Game of Life again being one of them). These are rules that use the same square grid as the Life-like rules and the same eight-cell neighbourhood, and are likewise invariant under rotation and reflection. However, in isotropic rules, the positions of neighbour cells relative to each other may be taken into account in determining a cell's future state—not just the total number of those neighbours.
Some variations on the Game of Life modify the geometry of the universe as well as the rules. The above variations can be thought of as a two-dimensional square, because the world is two-dimensional and laid out in a square grid. One-dimensional square variations, known as elementary cellular automata, and three-dimensional square variations have been developed, as have two-dimensional hexagonal and triangular variations. A variant using aperiodic tiling grids has also been made.
Conway's rules may also be generalized such that instead of two states, live and dead, there are three or more. State transitions are then determined either by a weighting system or by a table specifying separate transition rules for each state; for example, Mirek's Cellebration's multi-coloured Rules Table and Weighted Life rule families each include sample rules equivalent to the Game of Life.
Patterns relating to fractals and fractal systems may also be observed in certain variations. For example, the automaton B1/S12 generates four very close approximations to the Sierpinski triangle when applied to a single live cell. The Sierpinski triangle can also be observed in the Game of Life by examining the long-term growth of an infinitely long single-cell-thick line of live cells, as well as in Highlife, Seeds (B2/S), and Wolfram's Rule 90.
Immigration is a variation that is very similar to the Game of Life, except that there are two on states, often expressed as two different colours. Whenever a new cell is born, it takes on the on state that is the majority in the three cells that gave it birth. This feature can be used to examine interactions between spaceships and other objects within the game. Another similar variation, called QuadLife, involves four different on states. When a new cell is born from three different on neighbours, it takes the fourth value, and otherwise, like Immigration, it takes the majority value. Except for the variation among on cells, both of these variations act identically to the Game of Life.
Music
Various musical composition techniques use the Game of Life, especially in MIDI sequencing. A variety of programs exist for creating sound from patterns generated in the Game of Life.
Notable programs
Computers have been used to follow and simulate the Game of Life since it was first publicized. When John Conway was first investigating how various starting configurations developed, he tracked them by hand using a go board with its black and white stones. This was tedious and prone to errors. The first interactive Game of Life program was written in an early version of ALGOL 68C for the PDP-7 by M. J. T. Guy and S. R. Bourne. The results were published in the October 1970 issue of Scientific American, along with the statement: "Without its help, some discoveries about the game would have been difficult to make."
A color version of the Game of Life was written by Ed Hall in 1976 for Cromemco microcomputers, and a display from that program filled the cover of the June 1976 issue of Byte. The advent of microcomputer-based color graphics from Cromemco has been credited with a revival of interest in the game.
Two early implementations of the Game of Life on home computers were by Malcolm Banthorpe written in BBC BASIC. The first was in the January 1984 issue of Acorn User magazine, and Banthorpe followed this with a three-dimensional version in the May 1984 issue. Susan Stepney, Professor of Computer Science at the University of York, followed this up in 1988 with Life on the Line, a program that generated one-dimensional cellular automata.
There are now thousands of Game of Life programs online, so a full list will not be provided here. The following is a small selection of programs with some special claim to notability, such as popularity or unusual features. Most of these programs incorporate a graphical user interface for pattern editing and simulation, the capability for simulating multiple rules including the Game of Life, and a large library of interesting patterns in the Game of Life and other cellular automaton rules.
Golly is a cross-platform (Windows, Macintosh, Linux, iOS, and Android) open-source simulation system for the Game of Life and other cellular automata (including all Life-like cellular automata, the Generations family of cellular automata from Mirek's Cellebration, and John von Neumann's 29-state cellular automaton) by Andrew Trevorrow and Tomas Rokicki. It includes the Hashlife algorithm for extremely fast generation, and Lua or Python scriptability for both editing and simulation.
Mirek's Cellebration is a freeware one- and two-dimensional cellular automata viewer, explorer, and editor for Windows. It includes powerful facilities for simulating and viewing a wide variety of cellular automaton rules, including the Game of Life, and a scriptable editor.
Xlife is a cellular-automaton laboratory by Jon Bennett. The standard UNIX X11 Game of Life simulation application for a long time, it has also been ported to Windows. It can handle cellular automaton rules with the same neighbourhood as the Game of Life, and up to eight possible states per cell.
Dr. Blob's Organism is a shoot 'em up based on Conway's Life. In the game, Life continually generates on a group of cells within a "petri dish". The patterns formed are smoothed and rounded to look like a growing amoeba spewing smaller ones (actually gliders). Special "probes" zap the "blob" to keep it from overflowing the dish while destroying its nucleus.
Google implemented an easter egg of the Game of Life in 2012. Users who search for the term are shown an implementation of the game in the search results page.
The visual novel Anonymous;Code includes a basic implementation of the Game of Life that is connected to the plot of the novel. Near the end of Anonymous;Code, a certain pattern that appears throughout the game as a tattoo on the heroine Momo Aizaki (Kok's galaxy, the same pattern used as the logo for the open-source Game of Life program Golly) has to be entered into the Game of Life to complete the game.
See also
, is set in a future society where the Game of Life is played in a competitive two-player mode
, a "human" Game of Life.
; the novel 'OX' features a cellular automaton lifeform based on Game of Life
Boids (simulation of flocking birds)
Notes
References
External links
Life Lexicon, extensive lexicon with many patterns
LifeWiki
Catagolue, an online database of objects in Conway's Game of Life and similar cellular automata
Cellular Automata FAQ – Conway's Game of Life
Algebraic formula, recurrence relation for iterating Conway's Game of Life.
Cellular automaton rules
Self-organization
Games and sports introduced in 1970
John Horton Conway | Conway's Game of Life | Mathematics | 5,798 |
11,128,582 | https://en.wikipedia.org/wiki/Podosphaera%20macularis | Podosphaera macularis (formerly Sphaerotheca macularis) is a plant pathogen infecting several hosts including chamomile, caneberries, strawberries, hop, hemp, and Cineraria. It causes powdery mildew of hops.
Host range and symptoms of Podosphaera macularis
The pathogen that causes powdery mildew of hops was once considered to be Sphaerotheca macularis, which is capable of infecting many plants; however, in recent years the pathogen has been taxonomically reclassified as Podosphaera macularis. This ascomycete is only known to be pathogenic on hop plants, including both ornamental and wild hops, and Cannabis sativa. The host range of many Podosphaera macularis strains is restricted by the existence of resistant hop varieties, such as the “Nugget” variety of Washington state and Oregon, although in recent years this variety's resistance has been overcome in the laboratory. When disease does occur, early symptoms include chlorotic spots on the leaves of hop plants. Spots may fade to gray or white as the season progresses. Signs include white clusters of hyphae, which are often present on the leaves and in some cases can infect the cone itself. If cone infection occurs, a brown, necrotic lesion may develop. When both mating types exist within a population, chasmothecia can form; they are visible as small, black dots on the undersides of leaves.
Disease cycle
Podosphaera macularis overwinters on the soil surface in debris as fungal survival structures (chasmothecia) or as mycelia in plant buds. The chasmothecia form toward the end of the growing season; those of hop powdery mildew are characteristically spherical, black structures with spiked appendages. When favorable conditions arrive in early spring, the asci (sac-like structures) within the chasmothecia rupture and discharge ascospores. Favorable conditions for ascospore release include low light, excess fertility, and high soil moisture, and infection is optimal between 18 and 25 °C. The ascospores act as the primary inoculum and are dispersed passively by wind. Upon encountering a susceptible host plant, the ascospores germinate and cause infection. Following infection, masses of asexual spores (conidia) are produced during the season; these conidial masses give infected plants their characteristic white, powdery appearance. The lower leaves are the most affected, but the disease can appear on any above-ground part of the plant. The conidia are also wind-dispersed, making Podosphaera macularis a polycyclic pathogen: conidia produced and dispersed during the growing season can infect additional host plants. Disease becomes noticeable on infected plants as soon as hop shoots emerge, with a latent period of approximately 10 days at 12–15 °C compared to 5 days at 18–27 °C. The spore-covered shoots that emerge from infected buds, called “flag shoots”, are stunted with distorted leaves. Periods of rapid plant growth are the most favorable for infection, and the period of lateral branch development is also very vulnerable to the disease. Because Podosphaera macularis causes local infections, disease develops only where spores have landed on host tissue.
Optimal environment
Under optimal conditions, this polycyclic disease can complete as many as 20 generations in a growing season. Favorable environmental conditions for Podosphaera macularis fecundity include low sun exposure, high soil moisture, and excessive fertilization. The optimal temperature range for spore and mycelium growth is 18 to 25 °C. Periods with small temperature differences between night and day, with a minimum of 10 °C at night and a daily high of 20 °C, also increase the risk of infection. High humidity and optimal temperatures are necessary for primary infection between the middle and end of May, when the cleistothecia swell and burst from increased turgor pressure, releasing ascospores. During the secondary infection period from mid-July to August, conidial infectivity and germination are highest around 18 °C. Leaf wetness is not essential for the formation and germination of conidia; rather, light rain has an indirect effect through the high humidity and low sunlight that accompany it. Since the life cycle takes place mainly on the plant surface, with only haustoria inside the host, supra-optimal temperatures and low relative humidity are unfavorable for germination, infection, and sporulation of powdery mildew. Temperatures exceeding 30 °C for more than three hours reduce the chance of infection by up to 50%. Periods of intense rain and wind, during which spores are blown throughout the hop yard, also limit powdery mildew fecundity. In addition, solar irradiation can kill released spores, but as hops grow, sunlight cannot penetrate the dense canopy.
Management
The two primary ways to control Podosphaera macularis are cultural and chemical control, and the most effective management is preventative. Cultural controls include growing powdery-mildew-tolerant or resistant varieties of the host plant, carefully monitoring water and nutrients, reducing initial inoculum, and removing basal growth. Pruning, crowning, and scratching aid in further reduction of the disease: pruning removes shoots before training; crowning removes the top 1–2 inches of the crown before budbreak; and scratching disturbs the soil surface to remove the top 1–2 inches of buds. All of these methods disrupt the overwintering stage of the life cycle of Podosphaera macularis. Chemical control primarily consists of early, continuous fungicide sprays during the growing season; such prophylactic fungicide programs can be very effective at preventing the disease. Because fungicides are a preventative measure, they are of little use against an established infection; their role is to disrupt spore release and further infection within the disease cycle. Although several fungicides are effective against powdery mildew, they must be applied at specific times: if powdery mildew is known to be present, spray programs should begin as soon as the shoots emerge. Because powdery mildew can quickly develop resistance to fungicides, the fungicides used should be rotated. However, few or no fungicide applications should be made during burr development, as burrs are especially vulnerable to damage; in this case, basal growth should be removed before flowering and a protectant fungicide with long-term residual action applied.
Disease importance
In 1997, hop powdery mildew was reported for the first time in hop yards in the United States Pacific Northwest. In Washington, severe infections led to the loss of 800 hectares of crops (US $10 million). At the time, sulfur was the only registered pesticide for hop that was effective against powdery mildew. In 1998, the disease was confirmed in Idaho and Oregon. As a result, Yakima Valley growers managed the disease using approaches developed in Europe, such as labor-intensive cultural practices, mechanical or chemical removal of spring growth, and intensive fungicide programs, despite the small number of fungicides available for hop at the time. Although these methods successfully limited disease development, the depressed market for hops could not sustain the expensive production costs ($1,400/ha annually in 1998). In 2001, a contracting brewery rejected 50% of an aroma hop grown in Oregon because of cone browning after drying, resulting in an additional US $5 million in losses that year. These losses have contributed to economic depression in the hop market and have forced several growers to declare bankruptcy. Hop powdery mildew now occurs annually in all production regions in the United States. While more research is needed to understand and control Podosphaera macularis, current management has restored profitability to the hop industry: disease levels have decreased, and control costs have fallen to $740/ha on average. Unlike in New York and California, hop production in the Pacific Northwest is likely to continue.
Pathogenesis of Podosphaera macularis
For pathogenesis to occur, a viable pathogen, a susceptible host, and a conducive environment must be present simultaneously. The germ tube of P. macularis plays an important role in determining the pathogen's viability, because it can penetrate the host in approximately 15 hours. The germ tube then branches, producing as many as three potentially conidia-forming germ tubes. As the pathogen invades host tissue, it establishes a haustorium to collect nutrients from host cells. Despite this invasion, only certain hosts are susceptible, because hop varieties carry seven R genes that can be activated in response to infection. Many of these operate either by causing the initial haustorium to lyse or by preventing the pathogen from spreading; the spread is stopped by a hypersensitive response, which is often associated with large callose and lignin deposits surrounding infected cells. Although susceptible plants can also increase callose and lignin deposits in response to infection, the hypersensitive response is found only in resistant varieties. Finally, although powdery mildew can grow in a relatively hot, dry environment compared to downy mildew, conidia production peaks at approximately 20 °C; conidia can be produced above 25 °C, but their infectivity is often reduced.
References
External links
Fungal plant pathogens and diseases
Food plant pathogens and diseases
Small fruit diseases
Fungal strawberry diseases
Hemp diseases
Ornamental plant pathogens and diseases
macularis
Fungi described in 2000
Fungus species | Podosphaera macularis | Biology | 2,154 |
1,158,925 | https://en.wikipedia.org/wiki/Anal%20stage | The anal stage is the second stage in Sigmund Freud's theory of psychosexual development, taking place approximately between the ages of 18 months and three years. In this stage, the anal erogenous zone becomes the primary focus of the child's libidinal energy. The main social context for the experience is the process of toilet training, where anal pleasure becomes associated with the control of bowel movements.
According to Freud's theory, personality is developed through a series of stages, focused on erogenous areas, throughout childhood.
A healthy personality in adulthood is dependent upon all of these childhood stages being resolved successfully. If issues are not resolved in a stage, then fixation can occur, potentially resulting in neurotic tendencies or psychological disturbance. A fixation at this stage can result in a personality that is too rigid or one that is too disordered.
General information
The anal stage, in Freudian psychology, is the period of human development occurring at about one to three years of age. Around this age, the child begins toilet training, which brings about a fascination with the erogenous zone of the anus. The erogenous zone is focused on bowel and bladder control, and Freud therefore believed that the libido was mainly focused on controlling bladder and bowel movements. The anal stage coincides with the start of the child's ability to control their anal sphincter, and therefore their ability to pass or withhold feces at will. If children can overcome the conflict of this stage, it results in a sense of accomplishment and independence.
Conflict
This is the second of Freud's psychosexual stages and represents a conflict among the id, ego, and superego. The child confronts this conflict in the face of the parents' demands. Successful completion of this stage depends on how the parents interact with the child during toilet training. If a parent praises the child and gives rewards for using the toilet properly and at the right times, the child will pass through the stage successfully. However, if a parent ridicules or punishes the child during this stage, the child can respond in negative ways.
Parents' role
As mentioned above, a child's success in this stage depends largely on the parents and the approach they take to toilet training. Freud believed that parents should encourage toilet training with praise and rewards: positive reinforcement after using the toilet at the appropriate times encourages positive outcomes and reinforces the child's feeling of being capable of controlling their bladder. Parents who make the outcome of this stage a positive experience help produce a competent, productive, and creative adult. This stage is also important for the child's future relationships with authority.
According to Freud's psychosexual theory, parents need to be very careful in how they react to their children during this sensitive stage. During this stage, children test how much power their parents, the authority figures, really have, as opposed to how much room the children have to make their own decisions.
Anal-retentive personality
Negative parent-child interactions in the anal stage, including early or harsh toilet training, can lead to the development of an anal-retentive personality. If the parents are too forceful or harsh in training the child to control their own bowel movements, the child may react by deliberately retaining their bowel movements in rebellion. Such children grow into adults who hate mess and are obsessively tidy, punctual, and respectful of authority; they can also be stubborn and very careful with their money.
Anal-expulsive personality
Overly passive parent-child interactions in the anal stage lead to the development of an anal-expulsive personality. Because the child's parents were inconsistent or neglectful in teaching the child to control their own bowel movements, the child may relieve themselves at inappropriate times and soil their pants in rebellion against using the toilet. As adults, they will want to share things with their peers and give things away. They can sometimes be messy, disorganized, and rebellious. They may also be inconsiderate of others' feelings.
See also
Psychosexual development
Oral stage
Phallic stage
Latency stage
Genital stage
References
External links
Freud's Psychosexual Stages
Freudian psychology
Toilet training | Anal stage | Biology | 903 |