https://en.wikipedia.org/wiki/Amoeba
Amoeba
An amoeba (less commonly spelled ameba or amœba; plural amoebas or amoebae), often called an amoeboid, is a type of cell or unicellular organism with the ability to alter its shape, primarily by extending and retracting pseudopods. Amoebae do not form a single taxonomic group; instead, they are found in every major lineage of eukaryotic organisms. Amoeboid cells occur not only among the protozoa, but also in fungi, algae, and animals. Microbiologists often use the terms "amoeboid" and "amoeba" interchangeably for any organism that exhibits amoeboid movement. In older classification systems, most amoebae were placed in the class or subphylum Sarcodina, a grouping of single-celled organisms that possess pseudopods or move by protoplasmic flow. However, molecular phylogenetic studies have shown that Sarcodina is not a monophyletic group whose members share common descent. Consequently, amoeboid organisms are no longer classified together in one group. The best known amoeboid protists are Chaos carolinense and Amoeba proteus, both of which have been widely cultivated and studied in classrooms and laboratories. Other well known species include the so-called "brain-eating amoeba" Naegleria fowleri, the intestinal parasite Entamoeba histolytica, which causes amoebic dysentery, and the multicellular "social amoeba" or slime mould Dictyostelium discoideum.

Biology

Pseudopods and movement

Amoebae do not have cell walls, which allows for free movement. Amoebae move and feed by using pseudopods, which are bulges of cytoplasm formed by the coordinated action of actin microfilaments pushing out the plasma membrane that surrounds the cell. The appearance and internal structure of pseudopods are used to distinguish groups of amoebae from one another. Amoebozoan species, such as those in the genus Amoeba, typically have bulbous (lobose) pseudopods, rounded at the ends and roughly tubular in cross-section. Cercozoan amoeboids, such as Euglypha and Gromia, have slender, thread-like (filose) pseudopods. Foraminifera emit fine, branching pseudopods that merge with one another to form net-like (reticulose) structures. Some groups, such as the Radiolaria and Heliozoa, have stiff, needle-like, radiating axopodia (actinopoda) supported from within by bundles of microtubules.

Free-living amoebae may be "testate" (enclosed within a hard shell) or "naked" (also known as gymnamoebae, lacking any hard covering). The shells of testate amoebae may be composed of various substances, including calcium, silica, chitin, or agglutinations of found materials like small grains of sand and the frustules of diatoms.

To regulate osmotic pressure, most freshwater amoebae have a contractile vacuole which expels excess water from the cell. This organelle is necessary because freshwater has a lower concentration of solutes (such as salt) than the amoeba's own internal fluids (cytosol). Because the surrounding water is hypotonic with respect to the contents of the cell, water is transferred across the amoeba's cell membrane by osmosis. Without a contractile vacuole, the cell would fill with excess water and, eventually, burst. Marine amoebae do not usually possess a contractile vacuole because the concentration of solutes within the cell is in balance with the tonicity of the surrounding water.

Nutrition

The food sources of amoebae vary. Some amoebae are predatory and live by consuming bacteria and other protists. Some are detritivores and eat dead organic material. Amoebae typically ingest their food by phagocytosis, extending pseudopods to encircle and engulf live prey or particles of scavenged material. Amoeboid cells do not have a mouth or cytostome, and there is no fixed place on the cell at which phagocytosis normally occurs. Some amoebae also feed by pinocytosis, imbibing dissolved nutrients through vesicles formed within the cell membrane.

Size range

The size of amoeboid cells and species is extremely variable. The marine amoeboid Massisteria voersi is just 2.3 to 3 micrometres in diameter, within the size range of many bacteria. At the other extreme, the shells of deep-sea xenophyophores can attain 20 cm in diameter. Most of the free-living freshwater amoebae commonly found in pond water, ditches, and lakes are microscopic, but some species, such as the so-called "giant amoebae" Pelomyxa palustris and Chaos carolinense, can be large enough to see with the naked eye.

Sexual reproduction

Recent evidence indicates that several Amoebozoa lineages undergo meiosis. Orthologs of genes employed in meiosis of sexual eukaryotes have recently been identified in the Acanthamoeba genome. These genes included Spo11, Mre11, Rad50, Rad51, Rad52, Mnd1, Dmc1, Msh and Mlh. This finding suggests that Acanthamoeba is capable of some form of meiosis and may be able to undergo sexual reproduction. The meiosis-specific recombinase Dmc1 is required for efficient meiotic homologous recombination, and Dmc1 is expressed in Entamoeba histolytica. Purified Dmc1 from E. histolytica forms presynaptic filaments and catalyses ATP-dependent homologous DNA pairing and DNA strand exchange over at least several thousand base pairs. The DNA pairing and strand exchange reactions are enhanced by the eukaryotic meiosis-specific recombination accessory factor (heterodimer) Hop2-Mnd1. These processes are central to meiotic recombination, suggesting that E. histolytica undergoes meiosis.

Studies of Entamoeba invadens found that, during the conversion from the tetraploid uninucleate trophozoite to the tetranucleate cyst, homologous recombination is enhanced. Expression of genes with functions related to the major steps of meiotic recombination also increases during encystation. These findings in E. invadens, combined with evidence from studies of E. histolytica, indicate the presence of meiosis in Entamoeba.

Dictyostelium discoideum, in the supergroup Amoebozoa, can undergo mating and sexual reproduction, including meiosis, when food is scarce. Since the Amoebozoa diverged early from the eukaryotic family tree, these results suggest that meiosis was present early in eukaryotic evolution. Furthermore, these findings are consistent with the proposal of Lahr et al. that the majority of amoeboid lineages are anciently sexual.

Ecology

Pathogenic amoebae

Some amoebae can infect other organisms pathogenically, causing disease:

- Entamoeba histolytica is the cause of amoebiasis, or amoebic dysentery.
- Naegleria fowleri (the "brain-eating amoeba") is a fresh-water-native species that can be fatal to humans if introduced through the nose.
- Acanthamoeba can cause amoebic keratitis and encephalitis in humans.
- Balamuthia mandrillaris is the cause of (often fatal) granulomatous amoebic meningoencephalitis.

Amoebae have been found to harbor and grow the bacteria implicated in plague. Amoebae can likewise play host to microorganisms that are pathogenic to humans and can help to spread such microbes. Bacterial pathogens (for example, Legionella) can resist digestion when ingested by amoebae. The most widely used and best-studied amoebae that host other organisms are Acanthamoeba castellanii and Dictyostelium discoideum. Microorganisms that can overcome the defenses of these single-celled hosts can shelter and multiply inside them, where they are shielded from hostile external conditions.

History of knowledge and classification

Conceptual origins

The earliest record of an amoeboid organism was produced in 1755 by August Johann Rösel von Rosenhof, who named his discovery "Der Kleine Proteus" ("the Little Proteus"). Rösel's illustrations show an unidentifiable freshwater amoeba, similar in appearance to the common species now known as Amoeba proteus. The term "Proteus animalcule" remained in use throughout the 18th and 19th centuries, as an informal name for any large, free-living amoeboid.

In 1822, the genus Amiba (from the Greek ἀμοιβή amoibe, meaning "change") was erected by the French naturalist Bory de Saint-Vincent. Bory's contemporary, C. G. Ehrenberg, adopted the genus in his own classification of microscopic creatures, but changed the spelling to Amoeba. In 1841, Félix Dujardin coined the term "sarcode" (from Greek σάρξ sarx, "flesh," and εἶδος eidos, "form") for the "thick, glutinous, homogeneous substance" which fills protozoan cell bodies. Although the term originally referred to the protoplasm of any protozoan, it soon came to be used in a restricted sense to designate the gelatinous contents of amoeboid cells. Thirty years later, the Austrian zoologist Ludwig Karl Schmarda used "sarcode" as the conceptual basis for his division Sarcodea, a phylum-level group made up of "unstable, changeable" organisms with bodies largely composed of "sarcode". Later workers, including the influential taxonomist Otto Bütschli, amended this group to create the class Sarcodina, a taxon that remained in wide use throughout most of the 20th century.

Traditional classification

For convenience, all amoebae were grouped as Sarcodina and generally divided into morphological categories, on the basis of the form and structure of their pseudopods. Amoebae with pseudopods supported by regular arrays of microtubules (such as the freshwater Heliozoa and marine Radiolaria) were classified as Actinopoda, whereas those with unsupported pseudopods were classified as Rhizopoda. The Rhizopods were further subdivided into lobose, filose, plasmodial and reticulose, according to the morphology of their pseudopods. During the 1980s, taxonomists reached the following classification, based exclusively on morphological comparisons:

- Sarcodina: all amoebae.
  - Rhizopoda: amorphous amoebae that lack axopodia and move through pseudopodia.
    - Heterolobosea: amoebae with eruptive pseudopodia, similar to the lobose ones but with a distinct movement, and usually with flagellate life stages. It was traditionally divided into those which aggregate to form fruiting bodies (Acrasida) and those that do not (Schizopyrenida).
    - Lobosea: amoebae with lobose pseudopodia. This paraphyletic group is now represented by a large portion of the current phylum Amoebozoa, in particular the classes Tubulinea, Discosea and Cutosea.
      - Gymnamoebia: lobose naked amoebae. This polyphyletic group included the classic amorphous amoebae with big, blunt pseudopodia, such as Euamoebida, Leptomyxida, Acanthopodida, Echinamoebida, Entamoebida, etc.
      - Testacealobosia: lobose testate amoebae. This polyphyletic group included three unrelated lineages of amoebozoans enclosed by tests or other complex coverings: Arcellinida, Himatismenida and Trichosida.
    - Caryoblastea: amoebae with sparse, non-motile flagella on the surface. This group only includes the order Pelobiontida, which now belongs to the amoebozoan group Archamoebae together with some naked amoebae.
    - Eumycetozoea: plasmodial amoebae with filiform subpseudopodia that produce fruiting bodies.
    - Plasmodiophorea: endoparasitic plasmodial amoebae with minute pseudopodia. This group is now an order within Rhizaria, closely related to the endoparasitic Phagomyxida.
    - Filosea: amoebae with filose pseudopodia.
      - Aconchulinia: filose naked amoebae, sometimes covered in scales. This group included two unrelated taxa: the nucleariid amoebae, closely related to fungi; and most of the Vampyrellida, found in Rhizaria.
      - Testaceafilosia: filose testate amoebae. This group included taxa now found throughout Rhizaria, such as Gromiida and Euglyphida.
    - Granuloreticulosea: amoebae with delicate granular pseudopodia. This group included both the Foraminifera (now in Rhizaria) and some members of Vampyrellida.
    - Xenophyophorea: plasmodial amoebae enclosed in a branched-tube system composed of a transparent organic substance. This group is now fully integrated into the Foraminifera.
  - Actinopoda: spherical amoebae that float in the water column. This group included those organisms that have a heliozoan-type appearance, with radially positioned filopodia, reticulopodia or axopodia surrounding the cell body. These were the Radiolaria, Phaeodaria, Proteomyxidea (all three now in Rhizaria), Centroplasthelida (now in Haptista), and Actinophryida (now in Stramenopiles).

Transitional period

In the final decades of the 20th century, a series of molecular phylogenetic analyses confirmed that Sarcodina was not a monophyletic group, and that amoebae evolved from flagellate ancestors. The protozoologist Thomas Cavalier-Smith proposed that the ancestor of most eukaryotes was an amoeboflagellate much like modern heteroloboseans, which in turn gave rise to a paraphyletic Sarcodina from which other groups (e.g., alveolates, animals, plants) evolved by a secondary loss of the amoeboid phase. In his scheme, the Sarcodina were divided into the more primitive Eosarcodina (with the phyla Reticulosa and Mycetozoa) and the more derived Neosarcodina (with the phyla Amoebozoa for lobose amoebae and Rhizopoda for filose amoebae). Shortly after, phylogenetic analyses disproved this hypothesis, as non-amoeboid zooflagellates and amoeboflagellates were found to be completely intermingled with amoebae. With the addition of many flagellates to Rhizopoda and the removal of some amoebae, the name was rejected in favour of the new name Cercozoa. As such, both names Rhizopoda and Sarcodina were finally abandoned as formal taxa, but they remained useful as descriptive terms for amoebae. The phylum Amoebozoa was conserved, as it still primarily included amoeboid organisms, and now included the Mycetozoa.

Current classification

Today, amoebae are dispersed among many high-level taxonomic groups. The majority of traditional sarcodines are placed in two eukaryote supergroups: Amoebozoa and Rhizaria. The rest have been distributed among the excavates, opisthokonts, stramenopiles and minor clades.

- Amoebozoa: includes all naked and testate lobose amoebae (traditional Lobosea) as well as the pelobionts and eumycetozoans, and a few flagellates.
- Rhizaria: includes amoebae bearing reticulose or filose pseudopodia, the majority of which were traditionally classified as Filosea, Granuloreticulosea and Actinopoda, such as Euglyphida, Gromiida, Radiolaria, Proteomyxidea, Phaeodarea and Foraminifera (including Xenophyophorea). It also houses a large diversity of free-living flagellates, amoeboflagellates and parasites like the Plasmodiophorida.
- Heterolobosea: amoebae with lobose pseudopodia but eruptive flow of cytoplasm. Currently it includes the aggregative Acrasida, as well as several other amoeboflagellates. They are a class of excavates closely related to Euglenozoa, with which they share their characteristic discoidal mitochondrial cristae.
- Stramenopiles: although primarily composed of flagellates, this group contains a few groups of amoebae, for example the Actinophryida, an order with typical heliozoan morphology; the amoeboid Rhizochromulina, a genus of chrysophytes; and Synchroma, a genus of amoeboid algae with reticulate axopodia.
- Rotosphaerida: also known as nucleariids, includes a few filose amoebae traditionally classified within the Filosea, positioned as the sister group of Fungi.
- Centroplasthelida: heliozoans with a centroplast from which axopodia arise. They are closely related to the haptophyte algae inside the supergroup Haptista.
- Rigifilida: a small order of filose amoebae previously interpreted as nucleariids. Together with the flagellate orders Mantamonadida and Diphylleida, it composes the CRuMs clade, positioned closest to Amorphea.
- Breviatea: includes enigmatic free-living amoeboflagellates related to opisthokonts.

Molecular phylogenetic analyses place these amoeboid groups at scattered positions throughout the eukaryote tree.

Amoeboid cells in other organisms

Amoeboid cell types in multicellular organisms

Some multicellular organisms have amoeboid cells only in certain phases of life, or use amoeboid movements for specialized functions. In the immune system of humans and other animals, amoeboid white blood cells pursue invading organisms, such as bacteria and pathogenic protists, and engulf them by phagocytosis. Sponges exhibit a totipotent cell type known as archaeocytes, capable of transforming into the feeding cells, or choanocytes.

Amoeboid dispersal stages

Amoeboid stages also occur in the multicellular fungus-like protists, the so-called slime moulds. Both the plasmodial slime moulds, currently classified in the class Myxogastria, and the cellular slime moulds of the groups Acrasida and Dictyosteliida, live as amoebae during their feeding stage. The amoeboid cells of the former combine to form a giant multinucleate organism, while the cells of the latter live separately until food runs out, at which time the amoebae aggregate to form a multicellular migrating "slug" which functions as a single organism. Other organisms may also present amoeboid cells during certain life-cycle stages, e.g., the gametes of some green algae (Zygnematophyceae) and pennate diatoms, the spores (or dispersal phases) of some Mesomycetozoea, and the sporoplasm stage of Myxozoa and of Ascetosporea.
https://en.wikipedia.org/wiki/TikTok
TikTok
TikTok, known in mainland China and Hong Kong as Douyin (抖音), is a short-form video-hosting service owned by Chinese internet company ByteDance. It hosts user-submitted videos, which may range in duration from three seconds to 60 minutes, and can be accessed through a mobile app or through its website. Since its launch, TikTok has become one of the world's most popular social media platforms, using recommendation algorithms to connect content creators and influencers with new audiences. In April 2020, TikTok surpassed two billion mobile downloads worldwide. Cloudflare ranked TikTok the most popular website of 2021, surpassing Google. The popularity of TikTok has allowed viral trends in food, fashion, and music to take off and increase the platform's cultural impact worldwide.

TikTok has come under scrutiny due to data privacy violations, mental health concerns, misinformation, offensive content, and its role during the Israel–Hamas war. Countries have fined, banned, or attempted to restrict TikTok to protect children or out of national security concerns over possible collection of user data by the government of the People's Republic of China through ByteDance.

Corporate structure

TikTok Ltd was incorporated in the Cayman Islands in the Caribbean and is based in both Singapore and Los Angeles. It owns four entities, based respectively in the United States, Australia (which also runs the New Zealand business), the United Kingdom (which also owns subsidiaries in the European Union), and Singapore (which owns operations in Southeast Asia and India). Its parent company, Beijing-based ByteDance, is owned by its founders and Chinese investors, other global investors, and employees. One of ByteDance's main domestic subsidiaries is owned by Chinese state funds and entities through a 1% golden share. Employees have reported that multiple overlaps exist between TikTok and ByteDance in terms of personnel management and product development. TikTok says that since 2020 its US-based CEO has been responsible for making important decisions, and has downplayed its China connection.

History

Douyin

Douyin was launched by ByteDance in September 2016, originally under the name A.me, before rebranding to Douyin in December 2016. Douyin was developed in 200 days and within a year had 100 million users, with more than one billion videos viewed every day. While TikTok and Douyin share a similar user interface, the platforms operate separately. Douyin includes an in-video search feature that can search by people's faces for more videos of them, along with other features such as buying, booking hotels, and making geo-tagged reviews.

TikTok

ByteDance planned for Douyin to expand overseas. The founder of ByteDance, Zhang Yiming, stated that "China is home to only one-fifth of Internet users globally. If we don't expand on a global scale, we are bound to lose to peers eyeing the four-fifths. So, going global is a must." ByteDance created TikTok as an overseas version of Douyin, and launched TikTok in the international market in September 2017. On 9 November 2017, ByteDance spent nearly $1 billion to purchase Musical.ly, a startup headquartered in Shanghai with an overseas office in Santa Monica, California, near Los Angeles. Musical.ly, initially released in August 2014, was a social media video platform that allowed users to create short lip-sync and comedy videos. TikTok merged with Musical.ly on 2 August 2018, with existing accounts and data consolidated into one app that kept the title TikTok.

On 23 January 2018, the TikTok app ranked first among free application downloads on app stores in Thailand and other countries. TikTok has been downloaded more than 130 million times in the United States and has reached 2 billion downloads worldwide, according to data from mobile research firm Sensor Tower (figures that exclude Android users in China). In the United States, Jimmy Fallon, Tony Hawk, and other celebrities began using the app in 2018; other celebrities, including Jennifer Lopez, Jessica Alba, Will Smith, and Justin Bieber, joined TikTok as well. In January 2019, TikTok allowed creators to embed merchandise sale links into their videos. On 3 September 2019, TikTok and the U.S. National Football League (NFL) announced a multi-year partnership. The agreement came just two days before the NFL's 100th season kick-off at Soldier Field in Chicago, where TikTok hosted activities for fans in honor of the deal. The partnership entailed the launch of an official NFL TikTok account, intended to bring about new marketing opportunities such as sponsored videos and hashtag challenges. In July 2020, TikTok, excluding Douyin, reported close to 800 million monthly active users worldwide after less than four years of existence.

In May 2021, TikTok appointed Shou Zi Chew as its new CEO; he took over from interim CEO Vanessa Pappas, who had held the role since Kevin A. Mayer resigned on 27 August 2020. In September 2021, TikTok reported that it had reached 1 billion users. In 2021, TikTok earned $4 billion in advertising revenue.

In October 2022, TikTok was reported to be planning an expansion into the e-commerce market in the US, following the launch of TikTok Shop in the United Kingdom. The company posted job listings for staff for a series of order fulfillment centers in the U.S. and was reportedly planning to start the new live shopping business before the end of the year. The Financial Times reported that TikTok would launch a video gaming channel, but the report was denied in a statement to Digiday, with TikTok instead aiming to be a social hub for the gaming community. According to data from app analytics group Sensor Tower, advertising on TikTok in the U.S. grew by 11% in March 2023, with companies including Pepsi, DoorDash, Amazon, and Apple among the top spenders. According to estimates from research group Insider Intelligence, TikTok was projected to generate $14.15 billion in revenue in 2023, up from $9.89 billion in 2022. In March 2024, The Wall Street Journal reported that TikTok's growth in the U.S. had stagnated.

Since at least 2020, following calls to ban TikTok in the country, the Committee on Foreign Investment in the United States (CFIUS) has been investigating the company's 2017 merger with Musical.ly but has not finalized any of its negotiations with TikTok, such as the Project Texas proposal, waiting instead for Congress to act. In January 2025, Chinese officials began preliminary talks about potentially selling TikTok's U.S. operations to Elon Musk if the app faced an impending ban due to national security concerns. While Beijing preferred that TikTok remain under ByteDance's control, the sale could happen through a competitive process or with U.S. government involvement. One possibility involved Musk's platform, X, taking over TikTok's U.S. business. The move came ahead of a Supreme Court case that upheld the constitutionality of a law that would force a sale or ban of TikTok in the U.S. by 19 January 2025, due to national security concerns regarding its ties to China. Other potential buyers included Project Liberty's "The People's Bid For TikTok", a consortium of billionaire real estate mogul Frank McCourt and Shark Tank investor Kevin O'Leary; former Trump Treasury Secretary Steve Mnuchin; and YouTuber Jimmy Donaldson (MrBeast) together with former Activision Blizzard CEO Bobby Kotick. The seriousness of these potential buyers was unclear. The day before the impending ban, California-based conversational search engine company Perplexity AI submitted a bid for a merger with TikTok US.

Expansion in other markets

TikTok was downloaded over 104 million times on Apple's App Store during the first half of 2018, according to data provided to CNBC by Sensor Tower. After merging with Musical.ly in August, downloads increased and TikTok became the most downloaded app in the U.S. in October 2018, which Musical.ly had done once before. In February 2019, TikTok, together with Douyin, hit one billion downloads globally, excluding Android installs in China. In 2019, media outlets cited TikTok as the 7th-most-downloaded mobile app of the decade, from 2010 to 2019. It was also the most-downloaded app on Apple's App Store in 2018 and 2019, surpassing Facebook, YouTube and Instagram. In September 2020, a deal was confirmed between ByteDance and Oracle in which the latter would serve as a partner to provide cloud hosting. In November 2020, TikTok signed a licensing deal with Sony Music, and in December 2020, Warner Music Group signed a licensing deal with TikTok.

Advertising revenue from short video clips is lower than on other social media: although users spend more time on the platform, TikTok's American audience is monetized at a rate of about $0.31 per hour, a third of Facebook's rate and a fifth of Instagram's, or about $67 per user per year, while Instagram makes more than $200.

In July 2023, Iran's Mehr News Agency reported that "experts from Douyin" would meet Iranian businesses in Tehran to enable Iranian exports to China. In 2023, several high-level executives transferred from ByteDance to TikTok to focus on moneymaking operations; some moved from Beijing to the US. According to sources for The Wall Street Journal, the personnel moves led to concerns from some TikTok employees and were reported to the office of U.S. Senator Ted Cruz for further investigation. In December 2023, TikTok invested $1.5 billion in GoTo's Indonesian e-commerce business, Tokopedia. In March 2024, The Information reported that it is an open secret among investors that TikTok loses billions of dollars annually.

Competition with other platforms

Although the size of its user base falls short of that of Facebook, Instagram, or YouTube, TikTok reached 1 billion monthly active users faster than any of them. Competition from TikTok prompted Instagram, which is owned by Facebook, to spend $120 million as of 2022 to entice more content creators to its Reels service, although engagement levels remained low. Snapchat had likewise paid out $250 million in 2021 to its creators. Many platforms and services, including YouTube Shorts, began to imitate TikTok's format and recommendation page. Those changes caused a backlash from users of Instagram, Spotify, and Twitter.

In March 2022, The Washington Post reported that Facebook's owner Meta Platforms paid Targeted Victory, a consulting firm backed by supporters of the U.S. Republican Party, to coordinate lobbying and media campaigns against TikTok and portray it as "a danger to American children and society". Its efforts included asking local reporters to serve as "back channels" for anti-TikTok messages; writing opinion articles and letters to the editor, including one in the name of a concerned parent; amplifying stories about TikTok trends, such as "devious licks" and "Slap a Teacher", that actually originated on Facebook; and promoting Facebook's own corporate initiatives. Ties to Meta were not disclosed to the other parties involved. Targeted Victory said that it was "proud of the work". A Meta spokesperson said that all platforms, including TikTok, should face scrutiny. The Wall Street Journal reported that Silicon Valley executives met with U.S. lawmakers to build an "anti-China alliance" before the TikTok CEO's congressional hearing in March 2023.
https://en.wikipedia.org/wiki/Shrimp
Shrimp
A shrimp (plural: shrimp (US) or shrimps (UK)) is a crustacean (a form of shellfish) with an elongated body and a primarily swimming mode of locomotion – typically belonging to the Caridea or Dendrobranchiata of the order Decapoda, although some crustaceans outside of this order are also referred to as "shrimp". More narrow definitions may be restricted to Caridea, to smaller species of either group, or to only the marine species. Under a broader definition, shrimp may be synonymous with prawn, covering stalk-eyed swimming crustaceans with long, narrow muscular tails (abdomens), long whiskers (antennae), and slender legs. Any small crustacean which resembles a shrimp tends to be called one. They swim forward by paddling with swimmerets on the underside of their abdomens, although their escape response is typically repeated flicks with the tail driving them backwards very quickly. Crabs and lobsters have strong walking legs, whereas shrimp have thin, fragile legs which they use primarily for perching.

Shrimp are widespread and abundant. There are thousands of species adapted to a wide range of habitats. They can be found feeding near the seafloor on most coasts and estuaries, as well as in rivers and lakes. To escape predators, some species flip off the seafloor and dive into the sediment. They usually live from one to seven years. Shrimp are often solitary, though they can form large schools during the spawning season. They play important roles in the food chain and are an important food source for larger animals ranging from fish to whales. The muscular tails of many shrimp are edible to humans, and they are widely caught and farmed for human consumption. Commercial shrimp species support an industry worth 50 billion dollars a year, and in 2010 the total commercial production of shrimp was nearly 7 million tonnes. Shrimp farming became more prevalent during the 1980s, particularly in China, and by 2007 the harvest from shrimp farms exceeded the capture of wild shrimp. There are significant issues with excessive bycatch when shrimp are captured in the wild, and with pollution damage done to estuaries when they are used to support shrimp farming. Many shrimp species are small, as the term shrimp suggests, though some species grow considerably larger. Larger shrimp are more likely to be targeted commercially and are often referred to as prawns, particularly in the Commonwealth of Nations and former British colonies.

Classification

Shrimp are swimming crustaceans with long narrow muscular abdomens and long antennae. Unlike crabs and lobsters, shrimp have well-developed pleopods (swimmerets) and slender walking legs; they are more adapted for swimming than walking. Historically, it was the distinction between walking and swimming that formed the primary taxonomic division into the former suborders Natantia and Reptantia. Members of the Natantia (shrimp in the broader sense) were adapted for swimming, while the Reptantia (crabs, lobsters, etc.) were adapted for crawling or walking. Some other groups also have common names that include the word "shrimp"; any small swimming crustacean resembling a shrimp tends to be called one.

Description

The following description refers mainly to the external anatomy of the common European shrimp, Crangon crangon, as a typical example of a decapod shrimp. The body of the shrimp is divided into two main parts: the head and thorax, which are fused together to form the cephalothorax, and a long narrow abdomen. The shell which protects the cephalothorax is harder and thicker than the shell elsewhere on the shrimp and is called the carapace. The carapace typically surrounds the gills, through which water is pumped by the action of the mouthparts. The rostrum, eyes, whiskers and legs also issue from the carapace. The rostrum, from the Latin rōstrum meaning beak, looks like a beak or pointed nose at the front of the shrimp's head. It is a rigid forward extension of the carapace and can be used for attack or defense. It may also stabilize the shrimp when it swims backward. Two bulbous eyes on stalks sit on either side of the rostrum. These are compound eyes which have panoramic vision and are very good at detecting movement. Two pairs of whiskers (antennae) also issue from the head. One of these pairs is very long and can be twice the length of the shrimp, while the other pair is quite short. The antennae have sensors on them which allow the shrimp to feel where they touch, and also allow them to "smell" or "taste" things by sampling the chemicals in the water. The long antennae help the shrimp orient itself with regard to its immediate surroundings, while the short antennae help assess the suitability of prey.

Eight pairs of appendages issue from the cephalothorax. The first three pairs, the maxillipeds, Latin for "jaw feet", are used as mouthparts. In Crangon crangon, the first pair, the maxillula, pumps water into the gill cavity. After the maxillipeds come five more pairs of appendages, the pereiopods. These form the ten decapod legs. In Crangon crangon, the first two pairs of pereiopods have claws, or chelae. The chelae can grasp food items and bring them to the mouth. They can also be used for fighting and grooming. The remaining four pairs of legs are long and slender, and are used for walking or perching.

The muscular abdomen has six segments and has a thinner shell than the carapace. Each segment has a separate overlapping shell, which can be transparent. The first five segments each have a pair of appendages on the underside, which are shaped like paddles and are used for swimming forward. The appendages are called pleopods, or swimmerets, and can be used for purposes other than swimming. Some shrimp species use them for brooding eggs, others have gills on them for breathing, and the males in some species use the first pair or two for insemination. The sixth segment terminates in the telson, flanked by two pairs of appendages called the uropods. The uropods allow the shrimp to swim backward, and function like rudders, steering the shrimp when it swims forward. Together, the telson and uropods form a splayed tail fan. If a shrimp is alarmed, it can flex its tail fan in a rapid movement. This results in a backward dart called the caridoid escape reaction (lobstering).

Habitat

Shrimp are widespread, and can be found near the seafloor of most coasts and estuaries, as well as in rivers and lakes. There are numerous species, and usually there is a species adapted to any particular habitat. Most shrimp species are marine, although about a quarter of the described species are found in fresh water. Marine species are found from shallow waters down to great depths, and from the tropics to the polar regions. Although shrimp are almost entirely fully aquatic, the two species of Merguia are semi-terrestrial and spend a significant part of their life on land in mangroves.

Behaviour

There are many variations in the ways different types of shrimp look and behave. Even within the core group of caridean shrimp, the small delicate Pederson's shrimp looks and behaves quite unlike the large commercial pink shrimp or the snapping pistol shrimp. The caridean family of pistol shrimp are characterized by big asymmetrical claws, the larger of which can produce a loud snapping sound. The family is diverse and worldwide in distribution, consisting of about 600 species. Colonies of snapping shrimp are a major source of noise in the ocean and can interfere with sonar and underwater communication. The small emperor shrimp has a symbiotic relationship with sea slugs and sea cucumbers, and may help keep them clear of ectoparasites. Most shrimp are omnivorous, but some are specialised for particular modes of feeding. Some are filter feeders, using their setose (bristly) legs as a sieve; some scrape algae from rocks. Cleaner shrimp feed on the parasites and necrotic tissue of the reef fish they groom. Some species of shrimp are known to cannibalize others if other food sources are not readily available. In turn, shrimp are eaten by various animals, particularly fish and seabirds, and frequently host bopyrid parasites.

Mating

Females of the freshwater shrimp Caridina ensifera are capable of storing sperm from multiple partners, and thus can produce progeny with different paternities. Reproductive success of sires was found to correlate inversely with their genetic relatedness to the mother. This finding suggests that sperm competition and/or pre- and post-copulatory female choice occurs. Female choice may increase the fitness of progeny by reducing inbreeding depression that ordinarily results from the expression of homozygous deleterious recessive mutations.

Species

Decapods

There is little agreement among taxonomists concerning the phylogeny of crustaceans. Within the decapods, "every study gives totally different results. Nor do even one of these studies match any of the rival morphology studies". Some taxonomists identify shrimp with the infraorder Caridea and prawns with the suborder Dendrobranchiata. While different experts give different answers, there is no disagreement that the caridean species are shrimp. There are over 3000 caridean species, and occasionally they are referred to as "true shrimp".

Traditionally, decapods were divided into two suborders: the Natantia (or swimmers) and the Reptantia (or walkers). The Natantia, or swimmers, included the shrimp. They were defined by their abdomen which, together with its appendages, was well adapted for swimming. The Reptantia, or walkers, included the crabs and lobsters. These species have small abdominal appendages, but robust legs well adapted for walking. The Natantia was thought to be paraphyletic; that is, it was thought that originally all decapods were like shrimp. However, classifications are now based on clades, and the paraphyletic suborder Natantia has been discontinued. On this basis, taxonomic classifications now divide the order Decapoda into two suborders: Dendrobranchiata for the largest shrimp clade, and Pleocyemata for all other decapods; the Pleocyemata are in turn divided into half a dozen infraorders. The taxonomists De Grave and Fransen (2011) recognise four major groups of shrimp: the suborder Dendrobranchiata and the infraorders Procarididea, Stenopodidea and Caridea. This group is identical to the traditional Natantia group, and contains decapods only. All shrimp of commercial interest belong to the Natantia.

The FAO determines the categories and terminology used in the reporting of global fisheries, and defines a shrimp as a "decapod crustacean of the suborder Natantia". According to the Codex Alimentarius Commission of the FAO and WHO: "The term shrimp (which includes the frequently used term prawn) refers to the species covered by the most recent edition of the FAO listing of shrimp, FAO Species Catalogue, Volume 1, Shrimps and prawns of the world, an annotated catalogue of species of interest to fisheries FAO Fisheries Synopsis No. 125." In turn, the Species Catalogue says the highest category it deals with is "the suborder Natantia of the order Crustacea Decapoda to which all shrimps and prawns belong". Other decapod crustaceans also called shrimp are the ghost or mud shrimp belonging to the infraorder Thalassinidea; in Australia they are called yabbies. The monophyly of the group is not certain; recent studies have suggested dividing the group into two infraorders, Gebiidea and Axiidea.

Non-decapods

A wide variety of non-decapod crustaceans are also commonly referred to as shrimp. These include the brine shrimp, clam shrimp, fairy shrimp and tadpole shrimp belonging to the branchiopods; the lophogastridan shrimp, opossum shrimp and skeleton shrimp belonging to the Malacostraca; and seed shrimp, which are ostracods. Many of these species look quite unlike the commercial decapod shrimp that are eaten as seafood. For example, skeleton shrimp have short legs and a slender tail like a scorpion tail, fairy shrimp swim upside down with swimming appendages that look like leaves, and the tiny seed shrimp have bivalved carapaces which they can open or close. Krill resemble miniature shrimp, and are sometimes called "krill shrimp". Some mantis shrimp are a foot long, and have bulging eyes, a flattened tail and formidable claws equipped with clubs or sharp spikes, which they can use to knock out their opponents.

Human uses

History

In 1991, archeologists suggested that ancient raised paved areas near the coast in Chiapas, Mexico, were platforms used for drying shrimp in the sun, and that adjacent clay hearths were used to dry the shrimp when there was no sun. The evidence was circumstantial, because the chitinous shells of shrimp are so thin they degrade rapidly, leaving no fossil remains. In 1985, Quitmyer and others found direct evidence dating back to 600 AD for shrimping off the southeastern coast of North America, by successfully identifying shrimp from the archaeological remains of their mandibles (jaws). Clay vessels with shrimp decorations have been found in the ruins of Pompeii. In the 3rd century AD, the Greek author Athenaeus wrote in his literary work Deipnosophistae: "... of all fish the daintiest is a young shrimp in fig leaves."

In North America, indigenous peoples of the Americas captured shrimp and other crustaceans in fishing weirs and traps made from branches and Spanish moss, or used nets woven with fibre beaten from plants. At the same time, early European settlers, oblivious to the "protein-rich coasts" all about them, starved from lack of protein. In 1735, beach seines were imported from France, and Cajun fishermen in Louisiana started catching white shrimp and drying them in the sun, as they still do today. In the mid nineteenth century, Chinese immigrants arrived for the California Gold Rush, many from the Pearl River Delta where netting small shrimp had been a tradition for centuries. Some immigrants started catching shrimp local to San Francisco Bay, particularly the small, inch-long Crangon franciscorum. These shrimp burrow into the sand to hide, and can be present in high numbers without appearing to be so. The catch was dried in the sun and was exported to China or sold to the Chinese community in the United States. This was the beginning of the American shrimping industry. Overfishing and pollution from gold mine tailings resulted in the decline of the fishery. It was replaced by a penaeid white shrimp fishery on the South Atlantic and Gulf coasts. These shrimp were so abundant that beaches were piled with windrows from their moults. Modern industrial shrimping methods originated in this area. "For shrimp to develop into one of the world's most popular foods, it took the simultaneous development of the otter trawl... and the internal combustion engine."

Shrimp trawling can capture shrimp in huge volumes by dragging a net along the seafloor. Trawling was first recorded in England in 1376, when King Edward III received a request that he ban this new and destructive way of fishing. In 1583, the Dutch banned shrimp trawling in estuaries. In the 1920s, diesel engines were adapted for use in shrimp boats. Power winches were connected to the engines, and only small crews were needed to rapidly lift heavy nets on board and empty them. Shrimp boats became larger, faster, and more capable. New fishing grounds could be explored, trawls could be deployed in deeper offshore waters, and shrimp could be tracked and caught year-round, instead of seasonally as in earlier times. Larger boats trawled offshore and smaller boats worked bays and estuaries. By the 1960s, steel and fibreglass hulls further strengthened shrimp boats, so they could trawl heavier nets, and steady advances in electronics, radar, sonar, and GPS resulted in more sophisticated and capable shrimp fleets.

As shrimp fishing methods industrialised, parallel changes were happening in the way shrimp were processed: "In the 19th century, sun dried shrimp were largely replaced by canneries. In the 20th century, the canneries were replaced with freezers." In the 1970s, significant shrimp farming was initiated, particularly in China. The farming accelerated during the 1980s as demand for shrimp exceeded supply, and as excessive bycatch and threats to endangered sea turtles became associated with trawling for wild shrimp. In 2007, the production of farmed shrimp exceeded the capture of wild shrimp.

Commercial species

Although there are thousands of species of shrimp worldwide, only about 20 of these species are commercially significant. The principal commercial shrimp are the seven most harvested species. All of them are decapods; most of them belong to the Dendrobranchiata, and four of them are penaeid shrimp.

Fishing

Commercial techniques for catching wild shrimp include otter trawls, seines and shrimp baiting. A system of nets is used when trawling. Baited traps are common in parts of the Pacific Northwest. Shrimp trawling can result in very high incidental catch rates of non-target species. In 1997, the FAO found discard rates up to 20 pounds for every pound of shrimp; the world average was 5.7 pounds for every pound of shrimp. Trawl nets in general, and shrimp trawls in particular, have been identified as sources of mortality for species of finfish and cetaceans. Bycatch is often discarded dead or dying by the time it is returned to the sea, and may alter the ecological balance in the regions where it is discarded. Worldwide, shrimp trawl fisheries generate about 2% of the world's catch of fish by weight, but result in more than one third of the global bycatch total. The most extensively fished species are the akiami paste shrimp, the northern prawn, the southern rough shrimp, and the giant tiger prawn. Together these four species account for nearly half of the total wild capture. In recent years, the global capture of wild shrimp has been overtaken by the harvest from farmed shrimp.

Farming

A shrimp farm is an aquaculture business for the cultivation of marine shrimp or prawns for human consumption. Commercial shrimp farming began in the 1970s, and production grew steeply, particularly to match the market demands of the United States, Japan and Western Europe. The total global production of farmed shrimp reached more than 1.6 million tonnes in 2003, representing a value of nearly 9 billion U.S. dollars. About 75% of farmed shrimp are produced in Asia, in particular in China, Thailand, Indonesia, India and Vietnam. The other 25% are produced mainly in Latin America, where Brazil is the largest producer. By 2016, the largest exporting nation was India, followed by Ecuador, Thailand, Indonesia and China. Significant aquaculture production started slowly in the 1970s and then rapidly expanded during the 1980s. After a lull in growth during the 1990s, due to pathogens, production took off again and by 2007 exceeded the capture from wild fisheries. By 2010, the aquaculture harvest was 3.9 million tonnes, compared to 3.1 million tonnes for the capture of wild shrimp.

In the earlier years of marine shrimp farming the preferred species was the large giant tiger prawn. This species is reared in circular holding tanks where, behaving as if they were in the open ocean, the shrimp swim in a "never ending migration" around the circumference of the tank. In 2000, global production was 630,984 tonnes, compared to only 146,362 tonnes for whiteleg shrimp. Subsequently, these positions reversed: by 2010 the production of giant tiger prawn had increased modestly to 781,581 tonnes, while whiteleg shrimp had rocketed nearly twenty-fold to 2,720,929 tonnes. The whiteleg shrimp is currently the dominant species in shrimp farming. It is a moderately large shrimp, reaching a total length of 230 mm (9"), and is particularly suited to farming because it "breeds well in captivity, can be stocked at small sizes, grows fast and at uniform rates, has comparatively low protein requirements... and adapts well to variable environmental conditions." In China, prawns are cultured along with sea cucumbers and some fish species in integrated multi-trophic systems.

The major producer of farmed shrimp is China. Other significant producers are Thailand, Indonesia, India, Vietnam, Brazil, Ecuador and Bangladesh. Most farmed shrimp is exported to the United States, the European Union and Japan, as well as to other Asian markets including South Korea, Hong Kong, Taiwan and Singapore. Investigations by The Guardian in 2014 and The Associated Press in 2015 found human rights abuses on fishing boats operated by Thailand. The boats were manned with slaves, and caught shrimp and fish (including fish for the production of fishmeal, which is fed to farmed prawns). Greenpeace has challenged the sustainability of tropical shrimp farming practices on the grounds that farming these species "has led to the destruction of vast areas of mangroves in several countries [and] over-fishing of juvenile shrimp from the wild to supply farms." Greenpeace has placed a number of the prominent tropical shrimp species that are farmed commercially on its seafood red list, including the whiteleg shrimp, Indian prawn and giant tiger shrimp. There is also evidence that shrimp, like other crustaceans, can feel pain. Common welfare concerns include water pollution, high population densities, and the spread of diseases. Additionally, many female shrimp have their eyestalks cut (ablated) without anesthetic in order to induce maturation of the ovaries.

As food

Shrimp are marketed and commercialized with several issues in mind. Most shrimp are sold frozen and marketed based on their categorization of presentation, grading, colour, and uniformity. Shrimp have high levels of omega-3 fatty acids and low levels of mercury. Usually shrimp is sold whole, though sometimes only the meat of shrimp is marketed. As with other seafood, shrimp is high in calcium, iodine and protein but low in food energy. A shrimp-based meal is also a significant source of cholesterol, from 122 mg to 251 mg per 100 g of shrimp, depending on the method of preparation. Shrimp consumption, however, is considered healthy for the circulatory system because the lack of significant levels of saturated fat in shrimp means that the high cholesterol content improves the ratio of LDL to HDL cholesterol and lowers triglycerides. Ebiko (shrimp roe, sometimes translated as "shrimp flakes") is used as one of the ingredients in the preparation of sushi. Shrimp and other shellfish are among the most common food allergens. They are not kosher and thus are forbidden in Jewish cuisine. Since the early 2020s, plant-based and cultured alternatives to shrimp have emerged and have been rapidly improving.

Aquaria

Several types of shrimp are kept in home aquaria. Some are purely ornamental, while others are useful in controlling algae and removing debris. Freshwater shrimp commonly available for aquaria include the bamboo shrimp, Japanese marsh shrimp (Caridina multidentata, also called "Amano shrimp", as their use in aquaria was pioneered by Takashi Amano), cherry shrimp (Neocaridina heteropoda), and ghost or glass shrimp (Palaemonetes spp.). Popular saltwater shrimp include the cleaner shrimp Lysmata amboinensis, the fire shrimp (Lysmata debelius) and the harlequin shrimp (Hymenocera picta).

Shrimp versus prawn

The terms shrimp and prawn are common names, not scientific names. They are vernacular or colloquial terms which lack the formal definition of scientific terms. They are not taxa, but are terms of convenience with little circumscriptional significance. There is no reason to avoid using the terms shrimp or prawn when convenient, but it is important not to confuse them with the names or relationships of actual taxa. According to the crustacean taxonomist Tin-Yam Chan, "The terms shrimp and prawn have no definite reference to any known taxonomic groups. Although the term shrimp is sometimes applied to smaller species, while prawn is more often used for larger forms, there is no clear distinction between both terms and their usage is often confused or even reversed in different countries or regions." Writing in 1980, L. B. Holthuis noted that the terms prawn and shrimp were used inconsistently "even within a single region", generalising that larger species fished commercially were generally called shrimp in the United States, and prawns in other English-speaking countries, although not without exceptions.

A lot of confusion surrounds the scope of the term shrimp. Part of the confusion originates with the association of smallness. That creates problems with shrimp-like species that are not small. The expression "jumbo shrimp" can be viewed as an oxymoron, a problem that does not exist with the commercial designation "jumbo prawns". The term shrimp originated around the 14th century from a Middle English word akin to a Middle Low German word meaning to contract or wrinkle, and to Old Norse words meaning to shrivel up or, in the case of skreppa, a thin person. It is not clear where the term prawn originated, but early forms of the word surfaced in England in the early 15th century as prayne, praine and prane. According to the linguist Anatoly Liberman, it is unclear how shrimp, in English, came to be associated with small: "No Germanic language associates the shrimp with its size... The same holds for Romance... it remains unclear in what circumstances the name was applied to the crustacean."

Taxonomic studies in Europe on shrimp and prawns were shaped by the common shrimp and the common prawn, both found in huge numbers along the European coastlines. The common shrimp, Crangon crangon, was categorised in 1758 by Carl Linnaeus, and the common prawn, Palaemon serratus, was categorised in 1777 by Thomas Pennant. The common shrimp is a small burrowing species aligned with the notion of a shrimp as being something small, whereas the common prawn is much larger. The terms true shrimp or true prawn are sometimes used to mean what a particular person thinks is a shrimp or prawn. This varies with the person using the terms, but such terms are not normally used in the scientific literature, because the terms shrimp and prawn themselves lack scientific standing. Over the years the way shrimp and prawn are used has changed, and nowadays the terms are almost interchangeable. Although from time to time some biologists declare that certain common names should be confined to specific taxa, the popular use of these names seems to continue unchanged.

Fossils

Only 57 exclusively fossil species are known in the shrimp fossil record. The earliest dates from the Lower Jurassic, followed by specimens from the Cretaceous.
https://en.wikipedia.org/wiki/Google%20Cloud%20Platform
Google Cloud Platform
Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google that provides a series of modular cloud services including computing, data storage, data analytics, and machine learning, alongside a set of management tools. It runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, and Google Docs, according to Verma et al. Registration requires a credit card or bank account details.

Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless computing environments. In April 2008, Google announced App Engine, a platform for developing and hosting web applications in Google-managed data centers, which was the first cloud computing service from the company. The service became generally available in November 2011. Since the announcement of App Engine, Google has added multiple cloud services to the platform.

Google Cloud Platform is a part of Google Cloud, which includes the Google Cloud Platform public cloud infrastructure, as well as Google Workspace (G Suite), enterprise versions of Android and ChromeOS, and application programming interfaces (APIs) for machine learning and enterprise mapping services. Since at least 2022, Google's official materials have stated that "Google Cloud" is the new name for "Google Cloud Platform", which may cause naming confusion.

Challenges

Like other cloud computing solutions, applications hosted on Google Cloud Platform are subject to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.

Products

Google lists over 100 products under the Google Cloud brand. Some of the key services are listed below.

Compute

- App Engine – Platform as a Service to deploy applications developed with the Java, PHP, Node.js, Python, C#, .NET, Ruby and Go programming languages.
- Compute Engine – Infrastructure as a Service to run Microsoft Windows and Linux virtual machines.
- Google Kubernetes Engine (GKE), or GKE on-prem offered as part of the Anthos platform – Containers as a Service based on Kubernetes.
- Cloud Functions – Functions as a Service to run event-driven code written in Node.js, Java, Python, or Go.
- Cloud Run – Compute execution environment based on Knative. Offered as Cloud Run (fully managed) or as Cloud Run for Anthos. Currently supports GCP, AWS and VMware management.

Storage and databases

- Cloud Storage – Object storage with integrated edge caching to store unstructured data.
- Cloud SQL – Database as a Service based on MySQL, PostgreSQL and Microsoft SQL Server.
- Cloud Bigtable – Managed NoSQL database service.
- Cloud Spanner – Horizontally scalable, strongly consistent, relational database service.
- Cloud Datastore – NoSQL database for web and mobile applications.
- Persistent Disk – Block storage for Compute Engine virtual machines.
- Cloud Memorystore – Managed in-memory data store based on Redis and Memcached.
- Local SSD – High-performance, transient, local block storage.
- Filestore – High-performance file storage for Google Cloud users.
- AlloyDB – Fully managed PostgreSQL database service.
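As a brief illustration of the storage services above, the sketch below writes and reads back an object in Cloud Storage using the google-cloud-storage Python client library. The project ID, bucket name, and object path are hypothetical placeholders, and the snippet assumes Application Default Credentials are already configured; treat it as a minimal sketch rather than a complete reference.

```python
# Minimal sketch: writing and reading an object in Cloud Storage.
# Assumes `pip install google-cloud-storage` and Application Default
# Credentials (e.g. via `gcloud auth application-default login`).
from google.cloud import storage

# "my-project" and "my-example-bucket" are hypothetical placeholders.
client = storage.Client(project="my-project")
bucket = client.bucket("my-example-bucket")

# Upload a small text object, then read it back.
blob = bucket.blob("notes/hello.txt")
blob.upload_from_string("Hello from Cloud Storage!")
print(blob.download_as_text())
```

Because Cloud Storage is object storage rather than a filesystem, each blob is addressed by a flat name within its bucket; the "notes/" prefix above is only a naming convention, not a directory.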
Cloud Interconnect – Service to connect a data center with Google Cloud Platform.
Cloud DNS – Managed, authoritative DNS hosting service running on the same infrastructure as Google.
Network Service Tiers – Option to choose between Premium and Standard network tiers; the Premium tier offers a higher-performing network.
Big data
BigQuery – Scalable, managed enterprise data warehouse for analytics.
Cloud Dataflow – Managed service based on Apache Beam for stream and batch data processing.
Cloud Data Fusion – Managed ETL service based on the open-source Cask Data Application Platform.
Dataproc – Big data platform for running Apache Hadoop and Apache Spark jobs.
Cloud Composer – Managed workflow orchestration service built on Apache Airflow.
Cloud Datalab – Tool for data exploration, analysis, visualization and machine learning. This is a fully managed Jupyter Notebook service.
Cloud Dataprep – Data service based on Trifacta to visually explore, clean, and prepare data for analysis.
Cloud Pub/Sub – Scalable event ingestion service based on message queues.
Looker Studio – Business intelligence tool to visualize data through dashboards and reports.
Looker – Business intelligence platform.
Cloud AI
Cloud AutoML – Service to train and deploy custom machine learning models. As of September 2018, the service is in beta.
Cloud TPU – Accelerators used by Google to train machine learning models.
Cloud Machine Learning Engine – Managed service for training and building machine learning models based on mainstream frameworks.
Cloud Talent Solution (formerly Cloud Job Discovery) – Service based on Google's search and machine learning capabilities for the recruiting ecosystem.
Dialogflow Enterprise – Development environment based on Google's machine learning for building conversational interfaces.
Cloud Natural Language – Text analysis service based on Google deep learning models.
Cloud Speech-to-Text – Speech-to-text conversion service based on machine learning.
Cloud Text-to-Speech – Text-to-speech conversion service based on machine learning.
Cloud Translation API – Service to dynamically translate between thousands of available language pairs.
Cloud Vision API – Image analysis service based on machine learning.
Cloud Video Intelligence – Video analysis service based on machine learning.
Management tools
Operations suite (formerly Stackdriver) – Monitoring, logging, tracing, and diagnostics for applications on Google Cloud Platform.
Cloud Deployment Manager – Tool to deploy Google Cloud Platform resources defined in templates created in YAML, Python or Jinja2.
Cloud Console – Web interface to manage Google Cloud Platform resources.
Cloud Shell – Browser-based shell command-line access to manage Google Cloud Platform resources.
Cloud Console Mobile App – Android and iOS application to manage Google Cloud Platform resources.
Cloud APIs – APIs to programmatically access Google Cloud Platform resources.
Identity and security
Cloud Identity – Single sign-on (SSO) service based on SAML 2.0 and OpenID.
Cloud IAM – Identity & Access Management (IAM) service for defining policies based on role-based access control.
Cloud Identity-Aware Proxy – Service to control access to cloud applications running on Google Cloud Platform without using a VPN.
Cloud Data Loss Prevention API – Service to automatically discover, classify, and redact sensitive data.
Security Key Enforcement – Two-step verification service based on a security key.
Cloud Key Management Service – Cloud-hosted key management service integrated with IAM and audit logging.
Cloud Resource Manager – Service to manage resources by project, folder, and organization based on the hierarchy.
Cloud Security Command Center – Security and data risk platform for data and services running in Google Cloud Platform.
Cloud Security Scanner – Automated vulnerability scanning service for applications deployed in App Engine.
Access Transparency – Near real-time audit logs providing visibility to Google Cloud Platform administrators.
VPC Service Controls – Service to manage security perimeters for sensitive data in Google Cloud Platform services.
Internet of things (IoT)
Cloud IoT Core – Secure device connection and management service for the Internet of Things.
Edge TPU – Purpose-built ASIC designed to run inference at the edge. As of September 2018, this product is in private beta.
Cloud IoT Edge – Brings AI to the edge computing layer.
API platform
Maps Platform – APIs for maps, routes, and places based on Google Maps.
Apigee API Platform – Lifecycle management platform to design, secure, deploy, monitor, and scale APIs.
API Monetization – Tool for API providers to create revenue models, reports, payment gateways, and developer portal integrations.
Developer Portal – Self-service platform for developers to publish and manage APIs.
API Analytics – Service to analyze API-driven programs through monitoring, measuring, and managing APIs.
Apigee Sense – Enables API security by identifying and alerting administrators to suspicious API behaviors.
Cloud Endpoints – An NGINX-based proxy to deploy and manage APIs.
Service Infrastructure – A set of foundational services for building Google Cloud products.
Regions and zones
A region is a specific geographical location where users can deploy cloud resources. Each region is an independent geographic area that consists of zones. A zone is a deployment area for Google Cloud Platform resources within a region. Zones should be considered a single failure domain within a region. Most regions have three zones. Google Cloud Platform is available in 40 regions and 121 zones.
Similarity to services by other cloud service providers
For those familiar with other notable cloud service providers, a comparison of similar services may be helpful in understanding Google Cloud Platform's offerings.
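To make the service list above more concrete, the following is a minimal, illustrative sketch of how one of these products (Cloud Storage) is typically consumed from the official Python client library. The bucket and object names are hypothetical placeholders, and application default credentials are assumed to be configured.

```python
# Minimal sketch: storing and reading an object with the official
# google-cloud-storage Python client. Assumes the library is installed
# (pip install google-cloud-storage) and application default credentials
# are configured; "example-bucket" is a hypothetical placeholder.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")

blob = bucket.blob("greeting.txt")
blob.upload_from_string("Hello from Cloud Storage")

print(blob.download_as_text())  # prints: Hello from Cloud Storage
```

The other products follow the same pattern: each has a dedicated client library (google-cloud-bigquery, google-cloud-pubsub, and so on) layered over the public Cloud APIs listed under Management tools.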
Timeline
April 2008 – Google App Engine announced in preview.
May 2010 – Google Cloud Storage launched.
May 2010 – Google BigQuery and Prediction API announced in preview.
October 2011 – Google Cloud SQL announced in preview.
June 2012 – Google Compute Engine launched in preview.
May 2013 – Google Compute Engine released to general availability.
August 2013 – Cloud Storage begins automatically encrypting each Storage object's data and metadata under the 128-bit Advanced Encryption Standard (AES-128); each encryption key is itself encrypted with a regularly rotated set of master keys.
February 2014 – Google Cloud SQL becomes generally available.
May 2014 – Stackdriver acquired by Google.
June 2014 – Kubernetes announced as an open-source container manager.
June 2014 – Cloud Dataflow announced in preview.
October 2014 – Google acquires Firebase.
November 2014 – Alpha release of Google Kubernetes Engine (formerly Container Engine) announced.
January 2015 – Google Cloud Monitoring, based on Stackdriver, goes into beta.
March 2015 – Google Cloud Pub/Sub becomes available in beta.
April 2015 – Google Cloud DNS becomes generally available.
April 2015 – Google Dataflow launched in beta.
July 2015 – Google releases v1 of Kubernetes and hands it over to the Cloud Native Computing Foundation.
August 2015 – Google Cloud Dataflow, Google Cloud Pub/Sub, Google Kubernetes Engine, and Deployment Manager graduate to general availability.
November 2015 – Bebop acquired; Diane Greene joins Google.
February 2016 – Google Cloud Functions becomes available in alpha.
September 2016 – Apigee, an application programming interface (API) management company, acquired by Google.
September 2016 – Stackdriver becomes generally available.
November 2016 – Qwiklabs, an EdTech company, acquired by Google.
February 2017 – Cloud Spanner, a highly available, globally distributed database, released into beta.
March 2017 – Google acquires Kaggle, the world's largest community of data scientists and machine learning enthusiasts.
April 2017 – MIT professor Andrew Sutherland breaks the record for the largest-ever Compute Engine cluster, with 220,000 cores on preemptible VMs.
May 2017 – Google Cloud IoT Core launched in beta.
November 2017 – Google Kubernetes Engine certified by the CNCF.
February 2018 – Google Cloud IoT Core becomes generally available.
February 2018 – Google announces its intent to acquire Xively.
February 2018 – Cloud TPUs, ML accelerators for TensorFlow, become available in beta.
May 2018 – Google Cloud Memorystore becomes available in beta.
April 2019 – Google Cloud Run (fully managed) beta release.
April 2019 – Google Anthos announced.
November 2019 – Google Cloud Run (fully managed) general availability release.
March 2020 – Due to the COVID-19 pandemic, Google Cloud postpones the online streaming version of its Google Cloud Next mega-conference, two weeks after cancelling the in-person version.
October 2020 – Google Cloud announces that it will become a block producer candidate for the EOS network and EOS.IO protocol. At the time, the top block producers were cryptocurrency exchanges such as OKEx and Binance.
February 2021 – Google Kubernetes Engine Autopilot introduced.
May 2021 – Vertex AI announced at Google I/O.
June 2021 – Apple reported to be Google Cloud's biggest customer.
April 2022 – MobiledgeX acquired and joins Google Cloud.
March 2023 – Google brings generative AI capabilities to Google Cloud.
May 2024 – Google Cloud partners with Airtel.
Public customers
Customers announced in 2023 include Kingfisher plc, the Government of Kuwait, Deutsche Börse Group, Unity Technologies, Uber, FanCode, HackerNoon, and Daimler.
Technology
Cloud server
null
34215399
https://en.wikipedia.org/wiki/Cliffed%20coast
Cliffed coast
A cliffed coast, also called an abrasion coast, is a form of coast where the action of marine waves has formed steep cliffs that may or may not be precipitous. It contrasts with a flat or alluvial coast.
Formation
In coastal areas in which the land surface dips at a relatively steep angle below the waterline, the continuous action of marine waves on the coastline, known as abrasion, may create a steep declivity known as a cliff, the slope angle of which depends on a variety of factors, including the jointing, bedding and hardness of the materials making up the cliff, as well as the erosional processes themselves. The slope is constantly being eroded. The waves attacking the cliff foot form a wave-cut notch by constant abrasion, producing an overhang. This overhang grows in size as the cliff is undercut, until it collapses under its own weight. The loose debris that has broken off is gradually carried away from the area in front of the cliff by the action of the sea. As the coastal cliffs collapse, the shoreline recedes inland. The speed at which this happens depends, in particular, on the strength of the surf, the height of the cliff, the frequency of storm surges and the hardness of the bedrock. Thus, the Mecklenburg coast in Germany recedes by about 25 centimetres per year, whereas the chalk cliffs of southern England retreat by just half a centimetre each year. A cliffed coast may be made of loose material, as at the Red Cliff on the German island of Sylt, but can also occur in hard rock, such as the red sandstone cliffs of Heligoland. There are, however, differences between the former and the latter regarding some peculiarities of the coastline. In the case of the large and widespread coastal cliffs of the Atacama Desert, the modern cliffs originated through scarp retreat from a fault scarp, so at present the cliffs do not follow any geological fault.
Rocky cliffed coast
On a rocky cliffed coast made up of material which is relatively resistant to erosion, such as sandstone, limestone or granite, a flat rocky wave-cut platform (or abrasion platform) is formed in front of the cliff. It represents the foot of the cliff, preserved at and below the waterline. If there is tectonic uplift of the coast, these abrasion platforms can be raised to form coastal terraces, whose elevation relative to sea level can be used to calculate the amount of uplift, taking into account any eustatic sea-level changes. On a cliffed coast made up of material which is only moderately or poorly resistant to erosion, a beach rather than a wave-cut platform forms in front of the sea cliff. If waves carve notches at a narrow point on both sides of a promontory on a rocky cliffed coast, a natural arch may be formed. When the arch collapses as the coastline recedes further, a stack is left behind on the wave-cut platform. The best-known example in Germany is the Lange Anna on Heligoland, while, in England, a prominent example is Old Harry Rocks in Dorset. Furthermore, on a rocky cliffed coast, wave action is not the only driving force behind coastline retreat. General weathering of the bedrock is almost equally important.
Living and dead cliffs
"Living cliffs" are those on a coast that is still active, i.e. that is being eroded and is receding. A "dead cliff", by contrast, is only reached by very high marine waves and is therefore subjected to very little change.
A clear indication of a lack of activity at a dead cliff is a covering of vegetation, which appears on the cliff as wave action against it subsides. Well-known coasts with living cliffs in Germany are the Red Cliff (Rote Kliff) in Kampen on the island of Sylt and the chalk cliffs on the Jasmund Peninsula. The Königsstuhl on the island of Rügen is a good example of a dead cliff. Others may be found in the regions of the present-day Wadden Sea coast of the North Sea, a few kilometres inland. These show the former coastline from which the sea retreated as the level of water in the North Sea fell.
Other processes
Steep sea cliffs can also be caused by catastrophic debris avalanches. These have been common on the submerged flanks of ocean island volcanoes such as the Hawaiian Islands and the Cape Verde Islands.
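To put the retreat rates quoted in the Formation section into perspective, here is a small back-of-the-envelope calculation, a sketch using only the two rates given above:

```python
# Back-of-the-envelope comparison of the retreat rates quoted above:
# ~25 cm/year for the Mecklenburg coast vs ~0.5 cm/year for the
# chalk cliffs of southern England.
rates_cm_per_year = {
    "Mecklenburg coast": 25.0,
    "Southern England chalk cliffs": 0.5,
}
years = 100
for coast, rate in rates_cm_per_year.items():
    metres = rate * years / 100.0  # convert cm to m
    print(f"{coast}: about {metres:g} m of retreat in {years} years")
```

Over a century, the difference compounds to roughly 25 m of lost land on the Mecklenburg coast against only half a metre on the English chalk, which is why coastline recession is a pressing planning issue on some coasts and negligible on others.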
Physical sciences
Oceanic and coastal landforms
Earth science
60261153
https://en.wikipedia.org/wiki/Ictonychinae
Ictonychinae
Ictonychinae is a subfamily of the mammal family Mustelidae found mainly in the Neotropics (three species) and Africa (three species), with one Eurasian member. It includes the grisons, Patagonian weasel, striped polecats, African striped weasel, and marbled polecat. These genera were formerly included within a paraphyletic definition of the mustelid subfamily Mustelinae. Most members have a mask-like bar or larger dark marking across their faces; the African representatives of the group are striped. A defense mechanism common to the group is the use of a chemical spray similar to (but not necessarily as strong as) that of skunks.
Species
Subfamily Ictonychinae
Fossil genera
Cernictis, Enhydrictis, Martellictis, Lutravus, Oriensictis, Pannonictis, Sminthosinis, Stipanicicia, Trigonictis, Trochictis
Biology and health sciences
Mustelidae
Animals
38199788
https://en.wikipedia.org/wiki/Large%20quasar%20group
Large quasar group
A large quasar group (LQG) is a collection of quasars (a form of active galactic nucleus powered by a supermassive black hole) that forms what are thought to be among the largest astronomical structures in the observable universe. LQGs are thought to be precursors to the sheets, walls and filaments of galaxies found in the relatively nearby universe.
Prominent LQGs
On January 11, 2013, the discovery of the Huge-LQG was announced by the University of Central Lancashire as the largest known structure in the universe at that time. It is composed of 73 quasars and has a minimum diameter of 1.4 billion light-years, but is over 4 billion light-years across at its widest point. According to researcher and author Roger Clowes, the existence of structures of the size of LQGs had been believed to be theoretically impossible: cosmological structures had been believed to have a size limit of approximately 1.2 billion light-years.
List of LQGs
Redshift, denoted "z", is a fundamental concept in astrophysics used to measure the shift of spectral lines in light emitted by celestial objects such as quasars as a result of their motion away from Earth. Higher redshift values correspond to greater cosmic distances.
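As an illustration of how a quasar redshift maps to distance in practice, the sketch below uses astropy's built-in Planck18 cosmology. That parameter set is an assumption made here for illustration (published LQG studies adopt their own cosmological parameters), so the values are indicative only:

```python
# Sketch: converting a quasar group's mean redshift to distances.
# Planck18 is assumed for illustration; discovery papers use their
# own cosmological parameters, so these values are indicative only.
from astropy.cosmology import Planck18

z = 1.27  # mean redshift of the Huge-LQG (see the Huge-LQG entry below)
print(Planck18.comoving_distance(z))  # comoving distance, ~3.9 Gpc
print(Planck18.lookback_time(z))      # ~8.8 Gyr: the light we observe
                                      # left roughly 9 billion years ago
```

Note that "distance" at cosmological redshifts depends on the convention: the light-travel figure of roughly 9 billion years corresponds to the lookback time, while the comoving distance is larger because the universe has expanded since the light was emitted.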
Physical sciences
Large-scale structures
Astronomy
34227079
https://en.wikipedia.org/wiki/Flat%20coast
Flat coast
At a flat coast or flat shoreline, the land descends gradually into the sea. Flat coasts can be formed either as a result of the sea advancing into gently sloping terrain or through the abrasion of loose rock. They may be basically divided into two parallel strips: the shoreface and the beach. Flat coasts consist of loose material such as sand and gravel. Wind transports finer grains of sand inland over the dunes. The sea washes pebbles and sand away from the coast and deposits them at other locations.
Flat coast littoral series
The typical sequence of landforms created by the sea is described as a "littoral series".
Sandbars, runnels and creeks
The littoral series of a flat coast starts in the permanently flooded shallow-water region, or shoreface, with a sand or gravel reef (also called a bar). The longshore bar is an elongated ridge of sand found parallel to the shore in the surf zone on many flat coasts. It consists mainly of sand or gravel, depending on the material available along the coast. The sides of the sandbar fall gently away. The basin between a sandbar and the shore zone is called the runnel or swale. The presence of a bar clearly indicates that the movement of waves is transporting and depositing material on the seabed. There may be several bars whose longitudinal axes all run parallel to the beach and which are separated by equally parallel runnels or creeks. The drainage troughs in areas of tidal flats also run parallel to the coast.
Shoreface and beach
The shoreface (or underwater platform) on flat coasts encompasses, in its narrow sense, the area which is subject to the constant action of moving water. This means that the landward boundary between shoreface and beach is the line of the average low-water mark. However, this definition is not universal and varies from author to author in the literature. While some define the beach as the landward transition from the shoreface, extending from the low-water mark to the highest high-water mark, i.e. the zone that is only periodically or episodically (after a storm surge) flooded by water, other authors do not use the term "beach" for the landward element of a flat coast at all. They describe the region between the mean low-water mark and the mean high-water mark of the tides as the intertidal zone or foreshore, and the area above the average high-water mark as the supratidal zone or backshore, which is only directly attacked by water during storms. Because the backshore is often considerably flatter in appearance than the foreshore, which slopes clearly down towards the sea, it is also often referred to as a beach platform, which is why this part of the shore can be considered in practice to be the actual beach. The farthest point inland that is reached by storm surges is bounded by a belt of dunes, where floods can form a dune cliff.
Berm
On the beach (the beach platform) there is very often a bank of sand or a gravel ridge parallel to the shoreline and a few tens of centimetres high, known as the berm. On its landward side there is often a shallow runnel. The berm is formed from material transported by the breaking waves and thrown beyond the average level of the sea. The coarse-grained material that can no longer be washed away by the backwash remains behind. The location and size of the berm are subject to seasonal changes. For example, a winter berm that has been thrown up by storm surges in winter is usually much more prominent and higher up the beach than berms formed by summer high tides.
A similar landform is a beach ridge.
Beach losses and gains
Beaches are usually heavily eroded during storm surges and the beach profile steepened, whereas normal wave action on flat coasts tends to build the beach up. Not infrequently, a whole series of parallel berms is formed, one behind the other, with a consequent gradual increase in height, so that, over time, the shoreline advances seawards. A striking example of a land-forming system of berms is Skagen Odde on the northern tip of Vendsyssel in the extreme north of Denmark. This headland is still growing today as more berms are added. Coastal defences against erosion include groynes, stone walls and concrete tetrapods, which act as breakwaters. The first plants to colonise the dunes include sea buckthorn and beach grass, which prevent wind erosion.
Physical sciences
Oceanic and coastal landforms
Earth science
52568792
https://en.wikipedia.org/wiki/Epinephrine%20%28medication%29
Epinephrine (medication)
Epinephrine, also known as adrenaline, is a medication and hormone. As a medication, it is used to treat several conditions, including anaphylaxis, cardiac arrest, asthma, and superficial bleeding. Inhaled epinephrine may be used to improve the symptoms of croup. It may also be used for asthma when other treatments are not effective. It is given intravenously, by injection into a muscle, by inhalation, or by injection just under the skin. Common side effects include shakiness, anxiety, and sweating. A fast heart rate and high blood pressure may occur. Occasionally, it may result in an abnormal heart rhythm. While the safety of its use during pregnancy and breastfeeding is unclear, the benefits to the mother must be taken into account. Epinephrine is normally produced by both the adrenal glands and a small number of neurons in the brain, where it acts as a neurotransmitter. It plays an essential role in the fight-or-flight response by increasing blood flow to muscles, heart output, pupil dilation, and blood sugar. Epinephrine does this through its effects on alpha and beta receptors. It is found in many animals and some single-celled organisms, but the medication is produced synthetically and is not harvested from animals. Jōkichi Takamine first isolated epinephrine in 1901, and it came into medical use in 1905. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 240th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Etymology
The word epinephrine is formed from the Ancient Greek ἐπι- (epi-, "on") and νεφρός (nephros, "kidney") because the adrenal glands, where it is produced, are located just above the kidneys.
Medical uses
Epinephrine is used to treat a number of conditions, including cardiac arrest, anaphylaxis, and superficial bleeding. It has been used historically for bronchospasm and low blood sugar, but newer treatments for these conditions, such as the β2-selective salbutamol in the case of bronchospasm, are preferred.
Heart problems
While epinephrine is often used to treat cardiac arrest, it has not been shown to improve long-term survival or mental function after recovery. It does, however, improve the return of spontaneous circulation.
Anaphylaxis
Epinephrine is the only life-saving treatment for anaphylaxis. The commonly used epinephrine autoinjector delivers a 0.3 mg epinephrine injection (0.3 mL, 1:1000). It is indicated for the emergency treatment of allergic reactions, including anaphylaxis to stings, contrast agents, and medicines, as well as for people with a history of anaphylactic reactions to known triggers. A lower-strength product is available for children. Intramuscular injection can be complicated by the varying depth of subcutaneous fat, which may result in inadvertent subcutaneous injection; the drug may also be injected intravenously in error, or the wrong strength may be used. Intramuscular injection gives a faster and higher pharmacokinetic profile compared to subcutaneous injection. In August 2024, an epinephrine nasal spray (brand name Neffy) was approved in the United States for the emergency treatment of allergic reactions (type I), including those that are life-threatening (anaphylaxis), in people who weigh at least 30 kilograms (66 lb). It is the first nasal spray for the treatment of anaphylaxis approved by the US Food and Drug Administration (FDA).
The approval of epinephrine nasal spray was based on four studies in 175 healthy adults, without anaphylaxis, that measured the epinephrine concentrations in the blood following administration of the nasal spray or of approved epinephrine injection products. Results from these studies showed comparable epinephrine blood concentrations between the nasal spray and approved injection products. The nasal spray also demonstrated increases in blood pressure and heart rate similar to those produced by injection products, two critical effects of epinephrine in the treatment of anaphylaxis. A study of the nasal spray in children weighing more than 66 pounds showed that epinephrine concentrations in children were similar to those in adults who received the nasal spray. The most common side effects of epinephrine nasal spray include throat irritation, tingling nose (intranasal paresthesia), headache, nasal discomfort, feeling jittery, tingling sensation (paresthesia), fatigue, tremor, runny nose (rhinorrhea), itchiness inside the nose (nasal pruritus), sneezing, abdominal pain, gum (gingival) pain, numbness in the mouth (hypoesthesia oral), nasal congestion, dizziness, nausea and vomiting. The FDA granted the application for epinephrine nasal spray fast track designation and granted the approval of Neffy to ARS Pharmaceuticals.
Asthma
Epinephrine is also used as a bronchodilator for asthma if specific β2 agonists are unavailable or ineffective. Because of the high intrinsic efficacy (receptor binding ability) of epinephrine, high drug concentrations cause adverse side effects when treating asthma. The value of using nebulized epinephrine in acute asthma is unclear.
Croup
Racemic epinephrine has been used for the treatment of croup. Racemic adrenaline is a 1:1 mixture of the two enantiomers of adrenaline; the L-form is the active component. Racemic adrenaline works by stimulating the alpha-adrenergic receptors in the airway, with resultant mucosal vasoconstriction and decreased subglottic edema, and by stimulating the β-adrenergic receptors, with resultant relaxation of the bronchial smooth muscle.
Bronchiolitis
There is a lack of consensus as to whether inhaled nebulized epinephrine is beneficial in the treatment of bronchiolitis, with most guidelines recommending against its use.
Local anesthetics
When epinephrine is mixed with local anesthetics, such as bupivacaine or lidocaine, and used for local anesthesia or intrathecal injection, it prolongs the numbing and motor-block effects of the anesthetic by up to an hour. Epinephrine is frequently combined with local anesthetics and can cause panic attacks. Epinephrine is mixed with cocaine to form Moffett's solution, used in nasal surgery.
Upper airway obstruction
Upper airway obstruction with edema and stridor can be treated with racemic epinephrine.
Adverse effects
Adverse reactions to adrenaline include palpitations, tachycardia, arrhythmia, anxiety, panic attack, headache, anorexia, tremor, hypertension, and acute pulmonary edema. The use of epinephrine-based eye drops, commonly used to treat glaucoma, may also lead to a buildup of adrenochrome pigments in the conjunctiva, iris, lens, and retina. Rarely, exposure to medically administered epinephrine may cause Takotsubo cardiomyopathy. Use is contraindicated in people on nonselective β-blockers, because severe hypertension and even cerebral hemorrhage may result.
Mechanism of action
Epinephrine acts by binding to a variety of adrenergic receptors. Epinephrine is a nonselective agonist of all adrenergic receptors, including the major subtypes α1, α2, β1, β2, and β3. Epinephrine's binding to these receptors triggers several metabolic changes. Binding to α-adrenergic receptors inhibits insulin secretion by the pancreas, stimulates glycogenolysis in the liver and muscle, and stimulates glycolysis and inhibits insulin-mediated glycogenesis in muscle. β-adrenergic receptor binding triggers glucagon secretion in the pancreas, increased adrenocorticotropic hormone (ACTH) secretion by the pituitary gland, and increased lipolysis by adipose tissue. Together, these effects increase blood glucose and fatty acids, providing substrates for energy production within cells throughout the body. In the heart, the coronary arteries have a predominance of β2 receptors, which cause vasodilation of the coronary arteries in the presence of epinephrine. Its actions increase peripheral resistance via α1 receptor-dependent vasoconstriction and increase cardiac output via its binding to β1 receptors. The goal of reducing peripheral circulation is to increase coronary and cerebral perfusion pressures and therefore increase oxygen exchange at the cellular level. While epinephrine does increase aortic, cerebral, and carotid circulation pressure, it lowers carotid blood flow and end-tidal CO2 (ETCO2) levels. It appears that epinephrine may improve macrocirculation at the expense of the capillary beds where perfusion takes place.
Chemistry
Epinephrine, or adrenaline, also known as 3,4,β-trihydroxy-N-methylphenethylamine, is a substituted phenethylamine and catecholamine. It is the N-methylated analogue of norepinephrine (noradrenaline; 3,4,β-trihydroxyphenethylamine) and the N-methylated and β-hydroxylated analogue of dopamine (3,4-dihydroxyphenethylamine).
History
Extracts of the adrenal gland were first obtained by Polish physiologist Napoleon Cybulski in 1895. These extracts, which he called nadnerczyna, contained adrenaline and other catecholamines. American ophthalmologist William H. Bates discovered adrenaline's usage for eye surgeries prior to 20 April 1896. Japanese chemist Jōkichi Takamine and his assistant Keizo Uenaka independently discovered adrenaline in 1900. In 1901, Takamine successfully isolated and purified the hormone from the adrenal glands of sheep and oxen. Adrenaline was first synthesized in the laboratory by Friedrich Stolz and Henry Drysdale Dakin, independently, in 1904.
Society and culture
Names
Epinephrine is the drug's generic name, its United States Adopted Name (USAN), and its International Nonproprietary Name (INN), while adrenaline is its British Approved Name (BAN). Epinephrine is sold under various brand names including Asthmanefrin, Micronefrin, Neffy, Nephron, VapoNefrin, and Primatene Mist, among others.
Legal status
In June 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Eurneffy, intended for the emergency treatment of allergic reactions (anaphylaxis) due to insect stings or bites, foods, medicinal products, and other allergens, as well as idiopathic or exercise-induced anaphylaxis. The applicant for this medicinal product is ARS Pharmaceuticals IRL Limited. Eurneffy was approved for medical use in the European Union in August 2024.
Delivery forms
Epinephrine is available in an autoinjector delivery system and as a nasal spray. An epinephrine metered-dose inhaler, introduced in 1963 by Armstrong Pharmaceuticals, is sold over the counter in the United States to relieve bronchial asthma. A common concentration for epinephrine is 2.25% w/v in solution, which contains 22.5 mg/mL, while a 1% solution is typically used for aerosolization.
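The dilution and percentage notations quoted above convert to mg/mL with simple arithmetic. The following sketch merely checks the figures given in the text and is not dosing guidance:

```python
# Checking the concentration figures quoted in the text.
# "1:1000" means 1 g of epinephrine per 1000 mL of solution.
mg_per_ml_1_1000 = 1000 / 1000           # 1 g = 1000 mg -> 1.0 mg/mL
dose_mg = 0.3 * mg_per_ml_1_1000         # a 0.3 mL autoinjector dose
print(dose_mg, "mg per 0.3 mL dose")     # 0.3 mg, as stated above

# "2.25% w/v" means 2.25 g per 100 mL of solution.
mg_per_ml_225 = 2.25 * 1000 / 100        # -> 22.5 mg/mL
print(mg_per_ml_225, "mg/mL for 2.25% w/v")  # 22.5, as stated above
```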
Biology and health sciences
Specific drugs
Health
38203359
https://en.wikipedia.org/wiki/Reverse%20genetics
Reverse genetics
Reverse genetics is a method in molecular genetics that is used to help understand the function(s) of a gene by analysing the phenotypic effects caused by genetically engineering specific nucleic acid sequences within the gene. The process proceeds in the opposite direction to the forward genetic screens of classical genetics. While forward genetics seeks to find the genetic basis of a phenotype or trait, reverse genetics seeks to find what phenotypes are controlled by particular genetic sequences. Automated DNA sequencing generates large volumes of genomic sequence data relatively rapidly. Many genetic sequences are discovered in advance of other, less easily obtained, biological information. Reverse genetics attempts to connect a given genetic sequence with specific effects on the organism. Reverse genetics systems can also allow the recovery and generation of infectious or defective viruses with desired mutations, which makes it possible to study the virus in vitro and in vivo.
Techniques used
In order to learn the influence a sequence has on phenotype, or to discover its biological function, researchers can engineer a change in, or disrupt, the DNA. After this change has been made, a researcher can look for its effect in the whole organism. There are several different methods of reverse genetics:
Directed deletions and point mutations
Site-directed mutagenesis is a sophisticated technique that can either change regulatory regions in the promoter of a gene or make subtle codon changes in the open reading frame to identify amino acid residues that are important for protein function. Alternatively, the technique can be used to create null alleles so that the gene is not functional. For example, deletion of a gene by gene targeting (gene knockout) can be done in some organisms, such as yeast, mice and moss. Unique among plants, in Physcomitrella patens, gene knockout via homologous recombination to create knockout moss is nearly as efficient as in yeast. In the case of the yeast model system, directed deletions have been created in every non-essential gene in the yeast genome. In the case of the plant model system, huge mutant libraries have been created based on gene disruption constructs. In gene knock-in, the endogenous exon is replaced by an altered sequence of interest. In some cases, conditional alleles can be used so that the gene has normal function until the conditional allele is activated. This might entail 'knocking in' recombinase sites (such as lox or frt sites) that will cause a deletion at the gene of interest when a specific recombinase (such as CRE or FLP) is induced. Cre or Flp recombinases can be induced with chemical treatments or heat shock treatments, or be restricted to a specific subset of tissues. Another technique that can be used is TILLING. This is a method that combines a standard and efficient technique of mutagenesis using a chemical mutagen, such as ethyl methanesulfonate (EMS), with a sensitive DNA-screening technique that identifies point mutations in a target gene. In the field of virology, reverse-genetics techniques can be used to recover full-length infectious viruses with desired mutations or insertions in the viral genomes or in specific virus genes. Technologies that allow these manipulations include circular polymerase extension reaction (CPER), which was first used to generate infectious cDNA for Kunjin virus, a close relative of West Nile virus.
CPER has also been successfully utilised to generate a range of positive-sense RNA viruses, such as SARS-CoV-2, the causative agent of COVID-19.
Gene silencing
The discovery of gene silencing using double-stranded RNA, also known as RNA interference (RNAi), and the development of gene knockdown using Morpholino oligos, have made disrupting gene expression an accessible technique for many more investigators. This method is often referred to as a gene knockdown, since the effects of these reagents are generally temporary, in contrast to gene knockouts, which are permanent. RNAi creates a specific knockout effect without actually mutating the DNA of interest. In C. elegans, RNAi has been used to systematically interfere with the expression of most genes in the genome. RNAi acts by directing cellular systems to degrade target messenger RNA (mRNA). RNAi, specifically gene silencing, has become a useful tool to silence the expression of genes and to identify and analyze their loss-of-function phenotype. When mutations occur in an allele, the function it represents and encodes is also mutated and lost; this is generally called a loss-of-function mutation. The ability to analyze the loss-of-function phenotype allows analysis of gene function when there is no access to mutant alleles. While RNA interference relies on cellular components for efficacy (e.g. the Dicer proteins, the RISC complex), a simple alternative for gene knockdown is Morpholino antisense oligos. Morpholinos bind to and block access to the target mRNA without requiring the activity of cellular proteins and without necessarily accelerating mRNA degradation. Morpholinos are effective in systems ranging in complexity from cell-free translation in a test tube to in vivo studies in large animal models.
Interference using transgenes
A molecular genetic approach is the creation of transgenic organisms that overexpress a normal gene of interest. The resulting phenotype may reflect the normal function of the gene. Alternatively, it is possible to overexpress mutant forms of a gene that interfere with the normal (wildtype) gene's function. For example, overexpression of a mutant gene may result in high levels of a non-functional protein, resulting in a dominant-negative interaction with the wildtype protein. In this case, the mutant version will compete with the wildtype protein for its partners, resulting in a mutant phenotype. Other mutant forms can result in a protein that is abnormally regulated and constitutively active ('on' all the time). This might be due to removing a regulatory domain or mutating a specific amino acid residue that is reversibly modified (by phosphorylation, methylation, or ubiquitination). Either change is critical for modulating protein function and often results in informative phenotypes.
Vaccine synthesis
Reverse genetics plays a large role in vaccine synthesis. Vaccines can be created by engineering novel genotypes of infectious viral strains which diminish their pathogenic potency enough to facilitate immunity in a host. The reverse genetics approach to vaccine synthesis utilizes known viral genetic sequences to create a desired phenotype: a virus with both a weakened pathological potency and a similarity to the currently circulating virus strain. Reverse genetics provides a convenient alternative to the traditional method of creating inactivated vaccines: viruses which have been killed using heat or other chemical methods.
Vaccines created through reverse genetics methods are known as attenuated vaccines, named because they contain weakened (attenuated) live viruses. Attenuated vaccines are created by combining genes from a novel or current virus strain with previously attenuated viruses of the same species. Attenuated viruses are created by propagating a live virus under novel conditions, such as in a chicken egg. This produces a viral strain that is still live but not pathogenic to humans, as these viruses are rendered defective in that they cannot replicate their genome enough to propagate and sufficiently infect a host. However, the viral genes are still expressed in the host's cells through a single replication cycle, allowing for the development of immunity.
Influenza vaccine
A common way to create a vaccine using reverse-genetic techniques is to utilize plasmids to synthesize attenuated viruses. This technique is most commonly used in the yearly production of influenza vaccines, where an eight-plasmid system can rapidly produce an effective vaccine. The entire genome of the influenza A virus consists of eight RNA segments, so the combination of six attenuated viral cDNA plasmids with two wild-type plasmids allows an attenuated vaccine strain to be constructed. For the development of influenza vaccines, the fourth and sixth RNA segments, encoding the hemagglutinin (HA) and neuraminidase (NA) proteins respectively, are taken from the circulating virus, while the other six segments are derived from a previously attenuated master strain. The HA and NA proteins exhibit high antigenic variety and are therefore taken from the current strain for which the vaccine is being produced, to create a well-matching vaccine. The plasmid used in this eight-plasmid system contains three major components that allow for vaccine development. Firstly, the plasmid contains restriction sites that enable the incorporation of influenza genes into the plasmid. Secondly, the plasmid contains an antibiotic resistance gene, allowing the selection of only those plasmids containing the correct gene. Lastly, the plasmid contains two promoters, the human Pol I and Pol II promoters, which transcribe genes in opposite directions. cDNA sequences of viral RNA are synthesized from attenuated master strains by using RT-PCR. This cDNA can then be inserted between an RNA polymerase I (Pol I) promoter and terminator sequence through restriction enzyme digestion. The cDNA and Pol I sequence are then, in turn, surrounded by an RNA polymerase II (Pol II) promoter and a polyadenylation site. This entire sequence is then inserted into a plasmid. Six plasmids derived from attenuated master strain cDNA are cotransfected into a target cell, often a chicken egg, alongside two plasmids of the currently circulating wild-type influenza strain. Inside the target cell, the two "stacked" Pol I and Pol II enzymes transcribe the viral cDNA to synthesize both negative-sense viral RNA and positive-sense mRNA, effectively creating an attenuated virus. The result is a defective vaccine strain that is similar to the current virus strain, allowing a host to build immunity. This synthesized vaccine strain can then be used as a seed virus to create further vaccines.
Advantages and disadvantages
Vaccines engineered through reverse genetics carry several advantages over traditional vaccine designs. Most notable is speed of production. Due to the high antigenic variation in the HA and NA glycoproteins, a reverse-genetic approach allows the necessary genotype (i.e.
one containing HA and NA proteins taken from currently circulating virus strains) to be formulated rapidly. Additionally, since the final product of reverse-genetics attenuated vaccine production is a live virus, a higher immunogenicity is exhibited than in traditional inactivated vaccines, which must be killed using chemical procedures before being transferred as a vaccine. However, due to the live nature of attenuated viruses, complications may arise in immunodeficient patients. There is also the possibility that a mutation in the virus could result in the vaccine turning back into a live, unattenuated virus.
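As a toy illustration of the RNA-to-cDNA step described in the influenza vaccine section above, sequence transformations of this kind can be modelled with Biopython. The viral RNA fragment below is invented for demonstration; the real workflow is laboratory RT-PCR on actual viral genomes, not software:

```python
# Toy illustration of the RNA -> cDNA step, modelled with Biopython.
# The RNA fragment is invented for demonstration; real vaccine
# workflows synthesize cDNA from actual viral genomes by RT-PCR.
from Bio.Seq import Seq

vrna = Seq("AUGGCCAUUGUAAUGGGCCGCUGA")          # hypothetical RNA fragment
cdna_sense = vrna.back_transcribe()             # same-sense DNA copy (U -> T)
cdna_template = cdna_sense.reverse_complement() # the complementary strand

print(cdna_sense)     # ATGGCCATTGTAATGGGCCGCTGA
print(cdna_template)  # TCAGCGGCCCATTACAATGGCCAT
```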
Biology and health sciences
Genetics
Biology
42430399
https://en.wikipedia.org/wiki/Grey%20parrot
Grey parrot
The grey parrot (Psittacus erithacus), also known as the Congo grey parrot, African grey parrot or Congo African grey parrot, is an African parrot in the family Psittacidae. The Timneh parrot (Psittacus timneh) was previously treated as a subspecies of the grey parrot, but has since been elevated to a full species.
Taxonomy
The grey parrot was formally described in 1758 by Swedish naturalist Carl Linnaeus in the tenth edition of Systema Naturae. He placed it with all other parrots in the genus Psittacus and coined the binomial name Psittacus erithacus. Linnaeus erroneously specified the type locality as "Guinea"; the locality was later designated as Ghana in West Africa. The genus name is Latin for "parrot". The specific epithet erithacus is Latin and is derived from the Ancient Greek εριθακος (erithakos) for an unknown bird that was said to mimic human sounds, perhaps the black redstart. The species is monotypic; no subspecies are accepted. The Timneh parrot was formerly treated as a subspecies of the grey parrot, but is now considered a separate species, based mainly on the results of a genetic and morphological study published in 2007. Although Linnaeus placed all the parrots known to him in the genus Psittacus, only the grey parrot and the Timneh parrot are now assigned to this genus. The Clements Checklist classifies the Príncipe grey parrot as a subspecies of the grey parrot. It therefore recognises two subspecies:
P. e. erithacus Linnaeus, 1758 – Ivory Coast to Kenya, Tanzania, Príncipe, São Tomé, and Bioko
Príncipe grey parrot (P. e. principe) Alexander, 1909 – Príncipe, an island off the west coast of Africa
This is in contrast to the IOC, which instead classifies it as a subspecies of the Timneh parrot.
Description
The grey parrot is medium-sized, predominantly grey and black-billed. Its typical weight is , with an approximate length of , and a wingspan of . The head and wings are generally darker than the body. The head and body feathers have slight white edges; the tail feathers are red. Both sexes appear similar. Juvenile colouration is similar to that of adults, but typically their eyes are dark grey to black, compared to the adults' yellow irises around dark pupils, and their undertail coverts are tinged with grey. Adults weigh . Grey parrots may live for 40–60 years in captivity, although their mean lifespan in the wild appears to be shorter, at approximately 23 years. They start breeding at an age of 3–5 years and lay 3–5 eggs per brood.
Distribution and habitat
The grey parrot is native to equatorial Africa, including Angola, Cameroon, the Congo, Gabon, Ivory Coast, Ghana, Kenya, and Uganda. The species is found within a range from Kenya to the eastern part of Ivory Coast. Current estimates for the global population are uncertain and range from 630,000 to 13 million birds. Populations are decreasing worldwide. The species seems to favour dense forests, but can also be found at forest edges and in more open vegetation types, such as gallery and savanna forests. A population study published in 2015 found that the species had been "virtually eliminated" from Ghana, with numbers declining 90 to 99% since 1992. They were found in only 10 of 42 forested areas, and three roosts that once held 700–1,200 birds each now held only 18 in total. Local people mainly blamed the pet trade and the felling of timber for the decline. Populations are thought to be stable in Cameroon.
In the Congo, an estimated 15,000 grey parrots are taken from the eastern part of the country every year for the pet trade, although the annual quota is stated to be 5,000. The Florida Fish and Wildlife Conservation Commission has indicated that grey parrots escaped or released into South Florida have been observed since at least 1984, but no evidence indicates that the population is breeding naturally.
Behaviour and ecology in the wild
Little is known about the behaviour and activities of these birds in the wild. In addition to a lack of research funding, it can be particularly difficult to study these birds in wild situations due to their status as prey animals, which leads them to have rather secretive personalities. It has been shown that wild grey parrots may also imitate a wide variety of the sounds they hear, much like their captive relatives. In the Democratic Republic of the Congo, two greys sound-recorded while roosting reportedly had a repertoire of over 200 different calls, including nine imitations of other wild bird songs and one of a bat.
Feeding
Grey parrots are mainly frugivorous; most of their diet consists of fruit, nuts, and seeds, including oil palm fruit. They sometimes also eat flowers and tree bark, as well as insects and snails. In the wild, the grey parrot is partly a ground feeder.
Breeding
Grey parrots are monogamous breeders that nest in tree cavities. Each mated pair of parrots needs its own tree for its nest. The hen lays three to five eggs, which she incubates for 30 days while being fed by her mate. The adults defend their nesting sites. Grey parrot chicks require feeding and care from their parents in the nest. The parents take care of them until 4–5 weeks after they have fledged. Young leave the nest at the age of 12 weeks. Little is known about the courtship behaviour of this species in the wild. They weigh at hatching and when they leave their parents.
Conservation
Natural predators of this species include palm-nut vultures and several raptors. Monkeys target eggs and the young for food. Humans are by far the largest threat to wild grey populations. Between 1994 and 2003, more than 359,000 grey parrots were traded on the international market. Approximately 21% of the wild population was being harvested every year. Mortality rates are extremely high between the time the birds are captured and the time they reach the market, ranging from 60 to 66%. This species is also hunted for its meat and for its body parts, which are used in traditional medicines. As a result of the extensive harvest of wild birds, in addition to habitat loss, this species is believed to be undergoing a rapid decline in the wild and has therefore been rated as endangered by the International Union for Conservation of Nature. In October 2016, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) extended the highest level of protection to grey parrots by listing the species under Appendix I, which regulates international trade in the species. In 2021, the Kenyan government held a short amnesty during which grey parrot owners could pay a fee to obtain a permit for their birds and facilitate legal ownership. Following the expiry of this period, it is now illegal to own this species in Kenya without a permit.
In captivity
The species is common in captivity and regularly kept by humans as a companion parrot, prized for its ability to mimic human speech, which makes it one of the most popular avian pets.
An escaped pet in Japan was returned to its owner after repeating the owner's name and address. Grey parrots are notorious for mimicking noises heard in their environment and using them tirelessly. They are highly intelligent birds, needing extensive behavioural and social enrichment, as well as extensive attention, in captivity; otherwise they may become distressed. Feather plucking is a common symptom seen among distressed grey parrots, affecting up to 40% of captive individuals. They may also be prone to behavioural problems due to their sensitive nature. Social isolation hastens stress and aging. The grey parrot is a highly social species which relies on a flock-type structure, even when raised in captivity. Because they are so dependent on the other birds within their flock, much of their speech and vocal ability is acquired through interaction with the humans with whom they reside. Both wild and captive parrots have been shown to use contact calls, which allow them to interact with their flock mates and communicate information about their location, detection of predators, availability of food, and safety status. In addition, contact calls are used to form strong social bonds with their flock mates, or, in the case of captive greys, with their human housemates. In captivity, they have been shown to display communicative competence, meaning they not only use human language correctly, but also in a way that is appropriate to the social situation they are in. In January 2024, keepers at the Lincolnshire Wildlife Park in Friskney, England, moved a notorious group of eight swearing parrots into a group of more than 100 birds in an attempt to "dilute" their excessive profane mimicry.
Diet
In captivity, grey parrots may be fed bird pellets, a variety of fruits such as pear, orange, pomegranate, apple, and banana, and vegetables such as carrot, cooked sweet potato, celery, fresh kale, green beans and peas. The birds also need a source of calcium.
Disease
Grey parrots in captivity have been observed to be susceptible to fungal infections, bacterial infections, nutritional insufficiency, malignant tumors, psittacine beak and feather disease, tapeworms, and blood-worms. Young grey parrots are more commonly infected by psittacine beak and feather disease than adults. Infected birds show symptoms such as loss of appetite, fluffy feathers, sluggishness, and reduced walking ability due to brittle bones. Grey parrots are also prone to rhinitis, an inflammatory and infectious disease of the nasal cavity. Affected birds may exhibit signs such as wheezing, sneezing, nasal snuffling, and swelling or occlusion of the nares. Treatment options include gentle debridement and nasal irrigation.
Intelligence and cognition
Grey parrots are highly intelligent and are considered to be one of the most intelligent species of psittacines. Many individuals have been shown to perform some tasks at the cognitive level of a four- to six-year-old human child. Several studies have been conducted indicating a suite of higher-level cognitive abilities. Experiments have shown grey parrots can learn number sequences and can learn to associate human voices with the faces of the humans who produce them. It has been reported that grey parrots are capable of using existing English words to create new labels for objects whose names they do not know, for example "banerry" ("banana" + "cherry") for "apple", "banana crackers" for "dried banana chips" or "yummy bread" for "cake".
American scientist Irene Pepperberg's research with Alex the parrot showed his ability to learn more than 100 words, differentiating between objects, colours, materials and shapes. Pepperberg spent several decades working with Alex and wrote numerous scientific papers on the experiments performed, indicating the bird's advanced cognitive abilities. One such study found that Alex had the ability to add numbers, as well as having a zero-like concept similar to that of young children and apes. In addition to their striking cognitive abilities, grey parrots have displayed altruistic behaviour and concern for others. Researchers found that while blue-headed macaws were unlikely to share a nut with other members of their own species, grey parrots would actively give their conspecific partner a nut, even if it meant they would not be able to get one themselves. When the roles were reversed, their partners were overwhelmingly likely to return the favour, forgoing their own nut to their partner's benefit. This indicates not only a display of selflessness but also an act of reciprocity. A 2012 study demonstrated that captive grey parrots have individual musical preferences. When presented with the opportunity to choose between two different pieces of music via a touch-screen monitor located in their cage, the two birds in the test consistently chose different songs, to which they then danced and sang along. Some pet grey parrots have also been observed using the music feature of smart speakers (such as Google Nest or Amazon Echo) to verbally request playback of specific favoured songs. Some research has shown that foot preference can be linked to the number of words a particular parrot may know and use. Researchers found that grey parrots who prefer to use their right foot showed a marked increase in the number of words in their lexicon compared with left-footed parrots. Scientists postulate that parrots may have lateralization of brain function, much as mammals do. In two murder trials, consideration was given to using the deceased victim's pet grey parrot's "testimony" as evidence, because the parrot had witnessed and repeated the victim's last words. In the 1993 murder trial of Gary Joseph Rasp, the defendant was accused of murdering Jane Gill. Public defender Charles Ogulnik wanted to use Jane's pet grey parrot Max to prove Gary's innocence, because Max kept repeating Jane's last words, "Richard, no, no, no!". In the 2017 murder trial of Glenna Duram, the defendant was accused of murdering her husband, Martin Duram. The prosecutor explored the possibility of using the couple's pet parrot Bud as evidence when Bud kept repeating Martin's last words, "Don't fucking shoot."
Mutations
Grey parrot mutations occur naturally in the wild, such as the Blue Ino (albino), the Incomplete Ino, and the Blue varieties. The Blue Ino is all white. The Incomplete Ino has light pigmentation. The Blue has a white tail. Due to selection by breeders, some captive grey parrots are partly or completely red. Breeders from South Africa, Australia, New Zealand, and Scandinavia have bred grey parrots intensively since the 1800s. These bred varieties include the Red Pied, F2 Pied, Grizzles, Ino, Incomplete, Parino, Lutino, Cinnamon, and Red Factor. South African bird breeder Von van Antwerpen and New Zealand partner Jaco Bosman selected F2 Pieds and created the first Red Factor Greys. They are rare, may be predominantly red-pigmented, and vary in price depending upon the extent of the red plumage displayed.
History The domestication of grey parrots dates back to 2000 B.C.: Egyptian hieroglyphics from that period depict native birds kept as pets. They were considered valuable by the Greeks and the Romans, who kept them in cages. Due to heavy illegal trading in recent years, the grey parrot was listed as Endangered on the IUCN Red List in 2016.
Biology and health sciences
Psittaciformes
Animals
38214506
https://en.wikipedia.org/wiki/Cervalces
Cervalces
Cervalces is an extinct deer genus that lived during the Pliocene and Pleistocene epochs. Cervalces gallicus is either classified as a species of the related Libralces, or an ancestral species to other members of Cervalces. It lived in Europe from the Pliocene to the Pleistocene. Cervalces scotti, the stag-moose, lived in Pleistocene North America. Cervalces latifrons, the broad-fronted moose, and Cervalces carnutorum were found in Pleistocene Europe and Asia. The genus has been suggested to be paraphyletic and ancestral with respect to Alces, the genus which contains the modern moose, and as such, some authors synonymise Cervalces with Alces.
Biology and health sciences
Deer
Animals
38220608
https://en.wikipedia.org/wiki/Huge-LQG
Huge-LQG
The Huge Large Quasar Group (Huge-LQG, also called U1.27) is a possible structure or pseudo-structure of 73 quasars, referred to as a large quasar group, that measures about 4 billion light-years across. At its discovery, it was identified as the largest and most massive known structure in the observable universe, though it has since been superseded by the Hercules–Corona Borealis Great Wall at 10 billion light-years. There are also issues about its structure (see Dispute section below). Discovery Roger G. Clowes, together with colleagues from the University of Central Lancashire in Preston, United Kingdom, reported on January 11, 2013, a grouping of quasars within the vicinity of the constellation Leo. They used data from the DR7QSO catalogue of the comprehensive Sloan Digital Sky Survey, a major multi-imaging and spectroscopic redshift survey of the sky. They reported that the grouping was the largest known structure in the observable universe. The structure was initially discovered in November 2012 and took two months of verification before its announcement. News of the announcement spread worldwide and received great attention from the scientific community. Characteristics The Huge-LQG was estimated to be about 1.24 Gpc in length, by 640 Mpc and 370 Mpc in the other two dimensions, and contains 73 quasars. Quasars are very luminous active galactic nuclei, thought to be supermassive black holes feeding on matter. Since they are found mainly in dense regions of the universe, quasars can be used to find overdensities of matter within the universe. It has an approximate binding mass of 6.1×10^18 solar masses (6.1 trillion on the long scale, or 6.1 quintillion on the short scale). The Huge-LQG was initially named U1.27 due to its average redshift of 1.27 (where the "U" refers to a connected unit of quasars), placing its distance at about 9 billion light-years from Earth. The Huge-LQG is 615 Mpc from the Clowes–Campusano LQG (U1.28), a group of 34 quasars also discovered by Clowes, in 1991. Cosmological principle In his initial announcement of the structure, Clowes reported that it contradicted the cosmological principle. The cosmological principle implies that at sufficiently large scales, the universe is approximately homogeneous, meaning that the statistical fluctuations in quantities such as the matter density between different regions of the universe are small. However, different definitions exist for the homogeneity scale above which these fluctuations may be considered sufficiently small, and the appropriate definition depends on the context in which it is used. Jaswant Yadav et al. have suggested a definition of the homogeneity scale based on the fractal dimension of the universe; they conclude that, according to this definition, an upper limit for the homogeneity scale in the universe is 260/h Mpc. Some studies that have attempted to measure the homogeneity scale according to this definition have found values in the range 70–130/h Mpc. The Sloan Great Wall, discovered in 2003, has a length of 423 Mpc, which is marginally larger than the homogeneity scale as defined above. The Huge-LQG is three times longer than, and twice as wide as, the Yadav et al. upper limit to the homogeneity scale, and has therefore been claimed to challenge our understanding of the universe on large scales.
However, due to the existence of long-range correlations, it is known that structures can be found in the distribution of galaxies in the universe that extend over scales larger than the homogeneity scale. Dispute One of the questions that arose after the discovery of the Huge-LQG concerned the method used in its identification. In the initial paper by Clowes et al., the standard statistical friends-of-friends method was used, which has also been used to identify other similar LQGs. This method was called into question in a paper by Seshadri Nadathur of the University of Bielefeld. Using a new map that includes all the quasars in the region (not just the 73 quasars of the group), he found that the presence of a structure became less noticeable. After performing a number of statistical analyses on the quasar data, and finding extreme changes in the Huge-LQG membership and shape with small changes in the cluster-finding parameters, he determined the probability that apparent clusters the size of the Huge-LQG would appear in a random assortment of quasars, using the same friends-of-friends method originally employed. Using a Monte Carlo method of at least a thousand runs, he generated sets of random points in three-dimensional space, identified 10,000 regions identical in size to that studied by Clowes, and filled them with randomly distributed quasars having the same position statistics as the actual quasars in the sky. The original method by Clowes produced at least a thousand clusterings comparable to the Huge-LQG, even in regions where one should expect the distribution to be truly random. These data support the study of the homogeneity scale by Yadav et al., implying that there is no challenge to the cosmological principle. The identification of the Huge-LQG, together with the clusterings identified by Nadathur, is therefore regarded as a false-positive identification, an artefact of the statistical measure used, leading to the conclusion that the Huge-LQG is not a real structure at all. Nevertheless, Clowes et al. found independent support for the reality of the structure from its coincidence with Mg II absorbers (once-ionised magnesium gas, commonly used to probe distant galaxies). The Mg II gas suggests that the Huge-LQG is associated with an enhancement of mass, rather than being a false-positive identification. This point is not discussed by the critical paper. Further support for the reality of the Huge-LQG comes from the work of Hutsemékers et al. in September 2014. They measured the polarization of quasars in the Huge-LQG and found "a remarkable correlation" of the polarization vectors on scales larger than 500 Mpc.
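To make the disputed method concrete, here is a minimal, illustrative sketch in Python of friends-of-friends linkage, the cluster-finding approach named above. The box size, linking length, and random catalogue are assumptions chosen for demonstration only, not the parameters used by Clowes et al. or Nadathur:

```python
import numpy as np

def friends_of_friends(points, linking_length):
    """Group points into clusters: two points are 'friends' if they lie
    within linking_length of each other, and clusters are the connected
    components of the resulting friendship graph."""
    n = len(points)
    # Pairwise distances (fine for the small illustrative sample used here).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adjacency = d <= linking_length
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # Depth-first search over the friendship graph.
        stack, labels[i] = [i], cluster
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adjacency[j]):
                if labels[k] == -1:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

# Illustrative test: 73 random 'quasars' in a 500 Mpc box with a 100 Mpc
# linking length. Chance alignments alone already produce groups, which
# is the crux of Nadathur's objection.
rng = np.random.default_rng(0)
quasars = rng.uniform(0, 500, size=(73, 3))
labels = friends_of_friends(quasars, linking_length=100.0)
sizes = np.bincount(labels)
print(f"{labels.max() + 1} groups; largest has {sizes.max()} members")
```

Even on uniformly random points such a linker returns groups, which is why the null tests on random catalogues, rather than the detection itself, carry the statistical weight in the dispute.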
Physical sciences
Notable patches of universe
Astronomy
39667666
https://en.wikipedia.org/wiki/Carborane%20acid
Carborane acid
Carborane acids H(CXB11Y5Z6) (X, Y, Z = H, Alk, F, Cl, Br, CF3) are a class of superacids, some of which are estimated to be at least one million times stronger than 100% pure sulfuric acid in terms of their Hammett acidity function values (H0 ≤ –18) and possess computed pKa values well below –20, establishing them as some of the strongest known Brønsted acids. The best-studied example is the highly chlorinated derivative H(CHB11Cl11). The acidity of H(CHB11Cl11) was found to vastly exceed that of triflic acid, CF3SO3H, and bistriflimide, (CF3SO2)2NH, compounds previously regarded as the strongest isolable acids. Their high acidities stem from the extensive delocalization of their conjugate bases, carboranate anions (CXB11Y5Z6−), which are usually further stabilized by electronegative groups like Cl, F, and CF3. Due to their lack of oxidizing properties and the exceptionally low nucleophilicity and high stability of their conjugate bases, they are the only superacids known to protonate C60 fullerene without decomposing it. Additionally, they form stable, isolable salts with protonated benzene, C6H7+, the parent compound of the Wheland intermediates encountered in electrophilic aromatic substitution reactions. The fluorinated carborane acid, H(CHB11F11), is even stronger than the chlorinated carborane acid. It is able to protonate butane to form the tert-butyl cation at room temperature and is the only known acid to protonate carbon dioxide to give the bridged cation [H(CO2)2]+, making it possibly the strongest known acid. In particular, CO2 does not undergo observable protonation when treated with the mixed superacids HF-SbF5 or HSO3F-SbF5. As a class, the carborane acids form the most acidic group of well-defined, isolable substances known, far more acidic than previously known single-component strong acids like triflic acid or perchloric acid. In certain cases, like the nearly perhalogenated derivatives mentioned above, their acidities rival (and possibly exceed) those of the traditional mixed Lewis–Brønsted superacids like magic acid and fluoroantimonic acid. (However, a head-to-head comparison has not been possible thus far, due to the lack of a measure of acidity that is suitable for both classes of acids: pKa values are ill-defined for the chemically complex mixed acids, while H0 values cannot be measured for the very high-melting carborane acids.) Acidity A Brønsted–Lowry acid's strength corresponds with its ability to release a hydrogen ion. One common measure of acid strength for concentrated, superacidic liquid media is the Hammett acidity function, H0. Based on its ability to quantitatively protonate benzene, the chlorinated carborane acid was conservatively estimated to have an H0 value at or below −18, leading to the common assertion that carborane acids are at least a million times stronger than 100% sulfuric acid (H0 = −12). However, since the H0 value measures the protonating ability of a liquid medium, the crystalline and high-melting nature of these acids precludes direct measurement of this parameter. In terms of pKa, a slightly different measure of acidity defined as the ability of a given solute to undergo ionization in a solvent, carborane acids are estimated to have pKa values below −20, even without electron-withdrawing substituents on the boron atoms (the unsubstituted parent acid, for example, is estimated to have a pKa of −24), with the (yet unknown) fully fluorinated analog having a calculated pKa of −46. The known acid with one fewer fluorine is expected to be only slightly weaker (pKa < −40).
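Since H0 enters logarithmically, the "million times" figure is simple arithmetic. As a worked illustration using the values quoted above (B denotes a weak indicator base; this derivation is a standard property of the Hammett function, not a claim from the original text):

$$\log_{10}\frac{[\mathrm{BH^+}]}{[\mathrm{B}]} = \mathrm{p}K_{\mathrm{BH^+}} - H_0 \quad\Longrightarrow\quad \frac{\bigl([\mathrm{BH^+}]/[\mathrm{B}]\bigr)_{H_0=-18}}{\bigl([\mathrm{BH^+}]/[\mathrm{B}]\bigr)_{H_0=-12}} = 10^{(-12)-(-18)} = 10^{6}.$$

That is, a medium at H0 = −18 protonates a given indicator base about a million times more completely than 100% sulfuric acid at H0 = −12.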
In the gas phase, H(CHB11F11) has a computed acidity of 216 kcal/mol, compared to an experimentally determined acidity of 241 kcal/mol (in reasonable agreement with the computed value of 230 kcal/mol) for H(CHB11Cl11). In contrast, HSbF6 (a simplified model for the proton-donating species in fluoroantimonic acid) has a computed gas-phase acidity of 255 kcal/mol, while the previous experimentally determined record holder was (C4F9SO2)2NH, a congener of bistriflimide, at 291 kcal/mol. (In gas-phase acidity, a lower value, the enthalpy required to remove the proton, indicates a stronger acid.) Thus, H(CHB11F11) is likely the most acidic substance so far synthesized in bulk, in terms of its gas-phase acidity. In view of its unique reactivity, it is also a strong contender for being the most acidic substance in the condensed phase (see above). Some even more strongly acidic derivatives have been predicted, with gas-phase acidities < 200 kcal/mol. Carborane acids differ from classical superacids in being well-defined, single-component substances. In contrast, classical superacids are often mixtures of a Brønsted acid and a Lewis acid (e.g. HF/SbF5). Despite being among the strongest acids known, the boron-based carborane acids are described as being "gentle", cleanly protonating weakly basic substances without further side reactions. Whereas conventional superacids decompose fullerenes due to their strongly oxidizing Lewis acidic component, carborane acid protonates fullerenes at room temperature to yield an isolable salt. Furthermore, the anion that forms as a result of proton transfer is nearly completely inert. This property makes the carborane acids the only substances comparable in acidity to the mixed superacids that can also be stored in a glass bottle, as fluoride-donating species (which attack glass) are not present or generated. History Carborane acid was first synthesized by Professor Christopher Reed and his colleagues in 2004 at the University of California, Riverside. The parent molecule from which carborane acid is derived, the icosahedral carboranate anion [HCB11H11]−, was first synthesized at DuPont in 1967 by Walter Knoth. Research into this molecule's properties was put on hiatus until the mid-1980s, when the Czech group of boron scientists Plešek, Štíbr, and Heřmánek improved the process for halogenation of carborane molecules. These findings were instrumental in developing the current procedure for carborane acid synthesis. The process consists of treating Cs+[HCB11H11]− with a chlorinating agent such as sulfuryl chloride (SO2Cl2), refluxing under dry argon to fully chlorinate the cluster, although full chlorination has been shown to occur only under select conditions. In 2010, Reed published a guide giving detailed procedures for the synthesis of carborane acids and their derivatives. Nevertheless, the synthesis of carborane acids remains lengthy and difficult, requiring a well-maintained glovebox and some specialized equipment. The starting material is commercially available decaborane(14), a highly toxic substance. The most well-studied carborane acid is prepared in 13 steps. The last few steps are especially sensitive and require a glovebox at < 1 ppm H2O without any weakly basic solvent vapors, since bases as weak as benzene or dichloromethane will react with carborane-based electrophiles and Brønsted acids.
The final step of the synthesis is the metathesis of the μ-hydridodisilylium carboranate salt with excess liquid, anhydrous hydrogen chloride, presumably driven by the formation of strong Si–Cl and H–H bonds in the volatile byproducts: [Et3Si–H–SiEt3]+[HCB11Cl11]− + 2 HCl → H[HCB11Cl11] + 2 Et3SiCl + H2. The product was isolated by evaporation of the byproducts and was characterized by its infrared (νCH = 3023 cm−1) and nuclear magnetic resonance (δ 4.55 (s, 1H, CH), 20.4 (s, 1H, H+) in liquid SO2) spectra (note the extremely downfield chemical shift of the acidic proton). Although the reactions used in the synthesis are analogous, obtaining a pure sample of the more acidic H(CHB11F11) turned out to be even more difficult, requiring extremely rigorous procedures to exclude traces of weakly basic impurities. Structure Carborane acid consists of 11 boron atoms, each bound to a chlorine atom. The chlorine atoms serve to enhance acidity and act as shields against attack from the outside, owing to the steric hindrance they create around the cluster. The cluster, consisting of the 11 borons, 11 chlorines, and a single carbon atom, is paired with a hydrogen atom bound to the carbon atom. The boron and carbon atoms can each form six bonds because of boron's ability to form three-center, two-electron bonds. Although the structure of carborane acid differs greatly from that of conventional acids, both distribute charge and gain stability in a similar fashion. The carboranate anion distributes its charge by delocalizing the electrons throughout the 12 cage atoms. This was shown in a single-crystal X-ray diffraction study revealing shortened bond lengths in the heterocyclic portion of the cage, suggesting electronic delocalization. The chlorinated carba-closo-dodecaborate anion is an outstandingly stable anion with what has previously been described as "substitutionally inert" B–Cl vertices. The descriptor closo indicates that the molecule is formally derived (by B-to-C+ replacement) from a borane of stoichiometry and charge [BnHn]2− (n = 12 for known carborane acids). The cage-like structure formed by the 11 boron atoms and 1 carbon atom allows the electrons to be highly delocalized through the 3D cage (the special stabilization of the carborane system has been termed "σ-aromaticity"), and the high energy required to disrupt the boron-cluster portion of the molecule is what gives the anion its remarkable stability. Because the anion is extremely stable, it will not behave as a nucleophile toward the protonated substrate, while the acid itself is completely non-oxidizing, unlike the Lewis acidic components of many superacids such as antimony pentafluoride. Hence, sensitive molecules like C60 can be protonated without decomposition. Usage There are many proposed applications for the boron-based carborane acids. For instance, they have been proposed as catalysts for hydrocarbon cracking and for isomerization of n-alkanes to form branched isoalkanes ("isooctane", for example). Carborane acids may also be used as strong, selective Brønsted acids for fine chemical synthesis, where the low nucleophilicity of the counteranion may be advantageous. In mechanistic organic chemistry, they may be used in the study of reactive cationic intermediates. In inorganic synthesis, their unparalleled acidity may allow for the isolation of exotic species such as salts of protonated xenon.
Physical sciences
Specific acids
Chemistry
51080574
https://en.wikipedia.org/wiki/Chemical%20safety
Chemical safety
Chemical safety includes all safety policies, procedures and practices designed to minimize the risk of exposure to potentially hazardous chemicals. This includes the risks of exposure to persons handling the chemicals, to the surrounding environment, and to the communities and ecosystems within that environment. Manufactured chemicals, whether pure or in mixtures, solutions and emulsions, are ubiquitous in modern society, at industrial, occupational and private scale. However, some chemicals should not be mixed or brought into contact with others, as they can produce byproducts that may be toxic, carcinogenic or explosive, or can be dangerous in themselves. To avoid disasters and mishaps, maintaining safety is paramount. Chemical safety refers to safety issues surrounding the use, production, transport and handling of chemicals at large or small manufacturing facilities, laboratories, non-chemical sites that use manufactured chemicals for their business, or homes during everyday activities. While there is some overlap, it is different from process safety, which is concerned with more than just hazardous chemicals (extending, for example, to refined and unrefined hydrocarbons). Process safety is specific to industrial process plants and focuses primarily on major accidents rather than on both immediate and long-term effects (such as chemical carcinogenicity). The hazardous nature of many chemicals may be increased when they are mixed with other chemicals, heated, or handled inappropriately. In a chemically safe environment, users are able to take appropriate actions in case of accidents, although many incidents of exposure to chemical hazards occur outside controlled environments such as manufacturing plants or laboratories. It is estimated that 1.6 million human deaths occur each year from contact with hazardous chemicals, and that in 2016, 45 million disability-adjusted life-years were lost, a significant increase from 2012. Risks and hazards Chemicals in use in industry and research have a range of properties which can make them hazardous to life. These include explosiveness, flammability, toxicity, carcinogenicity and teratogenicity. Corrosive substances such as strong alkalis or acids can cause chemical burns. Any one chemical or mixture may exhibit several of these properties. Toxic materials may be solids (in powdered or finely divided form), liquids or gases, and may be absorbed by inhalation, directly through the skin, or by contact with mucous membranes in the nose or eyes. Some chemicals may persist in the body for substantial periods and continue to exhibit toxicity; examples include mercury, arsenic, dioxins and many organic solvents, which can be stored in fat cells. Environmental risks may be difficult to evaluate and may take years to become apparent. The risk to the Earth's ozone layer from the release of CFCs required the investigative powers of scientists throughout the world to understand fully. Science is still working out the seriousness of the effects of persistent halogenated organics on the marine food chain, with some of these chemicals becoming concentrated in the fatty deposits of top predators at levels that appear to affect their reproductive success. Management and control The management and control of chemical safety is widely developed through primary legislation, and orders derived from such legislation, in the Western world and in Australasia.
The implementation of such legislation follows a variety of patterns, from the European model of detailed directives and orders implemented through country-specific legislation, to the US model of wide-ranging federal enactments in which control is divided between state legislation and the federal government. Examples from these areas are described below. European Union The Chemical Agents Directive, a daughter directive of Directive 89/391/EEC, provides the framework for managing chemical safety. The European Chemicals Agency is the responsible agency: it implements the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation, sets standards, ensures compliance across the European Union, and manages the technical and administrative aspects of the regulation's implementation. New Zealand The administrative framework is based on the Health and Safety at Work (Hazardous Substances) Regulations 2017 and is implemented and enforced by WorkSafe, a government agency. Although this legislation is comprehensive in its coverage, it does not extend beyond the workplace and imposes no duties or responsibilities in regard to hazardous materials in domestic or educational establishments. United Kingdom In the UK, measures to address chemical safety have been incorporated into many pieces of legislation, from the early Factories Acts onwards. The current Health and Safety at Work etc. Act 1974 provides an all-encompassing piece of legislation covering chemical safety among a wide range of other measures designed to improve safety in the workplace in the UK. Enforcement of chemical safety is the responsibility of the Health and Safety Executive (HSE), which implements relevant sections of the Health and Safety at Work etc. Act 1974, formulates regulations, provides safety advice and guidance, and investigates major chemical incidents. United States In the US, the U.S. Chemical Safety and Hazard Investigation Board is responsible for investigating major chemical accidents and making recommendations to mitigate such events in the future. Risk areas Manufacturing The manufacture and purification of chemicals can involve a range of reagents which may themselves be hazardous, and a range of products which equally may be hazardous. For example, in order to produce the herbicide precursor 2,4,5-trichlorophenol, chlorine, an acutely toxic gas, is reacted with phenol, a hazardous organic liquid. The output is typically a mixture of chlorinated organic compounds, only some of which are the desired product. In this example, contaminants can include 2,3,7,8-tetrachlorodibenzodioxin, a dioxin and one of the most toxic synthetic chemicals known, which is both acutely and chronically toxic as well as teratogenic, and whose release on one occasion led to the abandonment of Times Beach, Missouri. An uncontrolled runaway of this same reaction caused the Seveso disaster, which spread dioxin-contaminated material over the surrounding area. (The infamous Bhopal disaster, by contrast, involved the release of the highly poisonous gas methyl isocyanate during pesticide manufacture.) Major chemical accidents (and events that have the potential to escalate to major accidents) are covered in the specialized domain of process safety. Laboratories Laboratories in schools, universities, research establishments and manufacturing typically store and handle a wide range of chemicals.
Safety standards for such areas are high, and most laboratories provide specific infrastructure to minimise risk, including fume cupboards, impervious and inert work surfaces, emergency shower stations, emergency eye-wash stations, and strict policies on the wearing of appropriate PPE. Domestic use There are many hazardous chemicals in routine use in the domestic environment, including cleaning agents such as bleach and caustic soda. Some modern cleaning formulations also contain sodium silicate and other highly alkaline components. Modern packaging into "pods" may increase the risk of misuse, particularly by small children. Waste disposal Surplus hazardous materials often reach the waste stream, whether by being placed in the solid waste stream or by being flushed down sinks, basins or toilets. Although dilution may reduce the immediate risk, the long-term environmental risk remains and can become more serious as more hazardous material is disposed of in the waste-water stream. Disposal with solid waste poses risks to those handling the waste and may pose unexpected risks to uninformed members of the public. Some industrial chemical waste dumps have been known to ignite spontaneously, years after the waste was deposited. Aluminium dross processing can produce a flux-rich waste that evolves ammonia gas if wetted, and can also spontaneously ignite when stored in bulk. Common safety practices Personal protective equipment (PPE) Basic chemical safety practice includes wearing personal protective equipment such as safety goggles. Personal protective equipment alone does not provide sufficient protection from the risks posed by hazardous chemicals, but it helps minimize the risk of exposure in controlled environments. Safety goggles are required when handling chemicals to prevent them from getting into the eyes. Wearing standard gloves, closed-toed shoes, long trousers, and laboratory coats to protect the torso and forearms is usually required in laboratories, with similar provisions for other workplaces. Regulation of the use of PPE varies by country. Labelling For most of the world, a standard set of illustrative pictograms has been adopted to indicate where hazards exist and the type of hazard present. These pictograms are routinely displayed on containers, transport vehicles, safety advice and anywhere the material occurs. They have been extended and standardized as the Globally Harmonized System of Classification and Labelling of Chemicals and are now used throughout much of the world. In the U.S., an NFPA diamond is used to identify chemical hazards such as flammability, corrosivity, toxicity, and reactivity. This label is made up of four colour-coded fields: red (flammability), blue (health hazard), yellow (chemical reactivity), and white (special hazard). The numbering ranges from 0 to 4 (for all colours except white), with 0 meaning there is no potential hazard and 4 indicating the chemical is extremely hazardous. Material safety data sheets (MSDS) Manufacturers provide a material safety data sheet (MSDS; also 'safety data sheet', SDS) for each hazardous chemical they produce. MSDSs are typically required by regulation to be available at worksites handling those chemicals. An MSDS includes workplace health aspects, restrictions, emergency numbers, and other safety information. Transportation In a number of countries, the Hazchem system is used whenever a potentially hazardous cargo is transported, whether by road, rail, sea or air.
A standardized Hazchem sign affixed to the vehicle provides details of the material being transported, the nature of the hazard and the approved emergency response.
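As a rough illustration of the NFPA 704 "fire diamond" described under Labelling above, here is a minimal sketch in Python; the sample ratings are illustrative only and should be taken from the relevant SDS in practice:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Nfpa704:
    """NFPA 704 'fire diamond' rating: each numeric field runs from
    0 (no hazard) to 4 (extreme hazard); 'special' holds the white-field
    symbols such as OX (oxidizer)."""
    health: int        # blue field
    flammability: int  # red field
    reactivity: int    # yellow field
    special: str = ""  # white field

    def __post_init__(self):
        for value in (self.health, self.flammability, self.reactivity):
            if not 0 <= value <= 4:
                raise ValueError("NFPA ratings must be between 0 and 4")

# Illustrative entry: acetone is commonly rated health 1, flammability 3,
# reactivity 0 -- consult the manufacturer's SDS for authoritative values.
acetone = Nfpa704(health=1, flammability=3, reactivity=0)
print(acetone)
```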
Physical sciences
Basics: General
Chemistry
36813475
https://en.wikipedia.org/wiki/Prawn
Prawn
Prawn is a common name for small aquatic crustaceans with an exoskeleton and ten legs (members of the order of decapods), some of which are edible. The term prawn is used particularly in the United Kingdom, Ireland, and Commonwealth nations for large swimming crustaceans or shrimp, especially those with commercial significance in the fishing industry. Shrimp in this category often belong to the suborder Dendrobranchiata. In North America, the term is used less frequently, typically for freshwater shrimp. The terms shrimp and prawn themselves lack scientific standing. Over the years, the way they are used has changed, and in contemporary usage the terms are almost interchangeable. Shrimp vs. prawn Regional distinctions The terms shrimp and prawn originated in Britain. In the use of common names for species, shrimp is applied to smaller species, particularly species that are dorsoventrally depressed (wider than deep) with a shorter rostrum. It is the only term used for species in the family Crangonidae, such as the common shrimp or brown shrimp, Crangon crangon. Prawn is never applied to very small species. It is applied to most of the larger forms, particularly species that are laterally compressed (deeper than wide) and have a long rostrum. However, the terms are not used consistently. For example, some authors refer to Pandalus montagui as an Aesop shrimp, while others refer to it as an Aesop prawn. Commonwealth countries and Ireland tend to follow British usage. Some exceptions occur in Australia, where some authors refer to small species of the Palaemonidae as prawns and call the Alpheidae pistol shrimp. Other Australian authors have given the name banded coral shrimp to the prawn-like Stenopus hispidus and listed "the Processidae and Atyidae as shrimps, the Hippolytidae, Alpheidae, Pandalidae and Campylonotoidea as prawns". New Zealand broadly follows British usage. A rule of thumb given by some New Zealand authors states: "In common usage, shrimp are small, some three inches or less in length, taken for food by netting, usually from shallow water. Prawn are larger, up to 12 inches long, taken by trapping and trawling." In Canada, the terms are often used interchangeably as in New Zealand (larger species are prawns, and smaller are often shrimp), but regional variations exist. South Africa and the former British colonies in Asia also seem to follow British usage generally. Shrimp is the more general term in the United States. The term prawn is less commonly used in the United States, being applied mainly to larger shrimp and those living in freshwater.
Biology and health sciences
Shrimps and prawns
Animals
43856084
https://en.wikipedia.org/wiki/Icosahedron
Icosahedron
In geometry, an icosahedron is a polyhedron with 20 faces. The name comes from the Greek words for "twenty" and "seat". The plural can be either "icosahedra" or "icosahedrons". There are infinitely many non-similar shapes of icosahedra, some of them being more symmetrical than others. The best known is the (convex, non-stellated) regular icosahedron, one of the Platonic solids, whose faces are 20 equilateral triangles. Regular icosahedra There are two objects, one convex and one nonconvex, that can both be called regular icosahedra. Each has 30 edges and 20 equilateral triangle faces with five meeting at each of its twelve vertices. Both have icosahedral symmetry. The term "regular icosahedron" generally refers to the convex variety, while the nonconvex form is called a great icosahedron. Convex regular icosahedron The convex regular icosahedron is usually referred to simply as the regular icosahedron, one of the five regular Platonic solids, and is represented by its Schläfli symbol {3, 5}, containing 20 triangular faces, with 5 faces meeting around each vertex. Its dual polyhedron is the regular dodecahedron {5, 3}, which has three regular pentagonal faces around each vertex. Great icosahedron The great icosahedron is one of the four regular star Kepler–Poinsot polyhedra. Its Schläfli symbol is {3, 5/2}. Like the convex form, it also has 20 equilateral triangle faces, but its vertex figure is a pentagram rather than a pentagon, leading to geometrically intersecting faces. The intersections of the triangles do not represent new edges. Its dual polyhedron is the great stellated dodecahedron {5/2, 3}, which has three regular star pentagonal faces around each vertex. Stellated icosahedra Stellation is the process of extending the faces or edges of a polyhedron until they meet to form a new polyhedron. It is done symmetrically so that the resulting figure retains the overall symmetry of the parent figure. In their book The Fifty-Nine Icosahedra, Coxeter et al. enumerated 59 such stellations of the regular icosahedron. Of these, many have a single face in each of the 20 face planes and so are also icosahedra. The great icosahedron is among them. Other stellations have more than one face in each plane or form compounds of simpler polyhedra. These are not strictly icosahedra, although they are often referred to as such. Pyritohedral symmetry A regular icosahedron can be distorted or marked up as a lower pyritohedral symmetry, and is then called a snub octahedron, snub tetratetrahedron, snub tetrahedron, or pseudo-icosahedron. This can be seen as an alternated truncated octahedron. If all the triangles are equilateral, the symmetry can also be distinguished by colouring the 8 and 12 triangle sets differently. Pyritohedral symmetry has the symbol (3*2), [3+,4], with order 24. Tetrahedral symmetry has the symbol (332), [3,3]+, with order 12. These lower symmetries allow geometric distortions from 20 equilateral triangular faces, instead having 8 equilateral triangles and 12 congruent isosceles triangles. Each of these lower symmetries has its own Coxeter diagram, representing a subsymmetry of the full (*532), [5,3] icosahedral symmetry of order 120 of the regular icosahedron. Cartesian coordinates The Cartesian coordinates of the 12 vertices can be given by all the possible cyclic permutations and sign-flips of coordinates of the form (2, 1, 0). These coordinates represent the truncated octahedron with alternated vertices deleted.
This construction is called a snub tetrahedron in its regular icosahedron form, generated by the same operations carried out starting with the vector (ϕ, 1, 0), where ϕ is the golden ratio. Jessen's icosahedron In Jessen's icosahedron, sometimes called Jessen's orthogonal icosahedron, the 12 isosceles faces are arranged differently so that the figure is non-convex and has right dihedral angles. It is scissors-congruent to a cube, meaning that it can be sliced into smaller polyhedral pieces that can be rearranged to form a solid cube. Cuboctahedron A regular icosahedron is topologically identical to a cuboctahedron with its 6 square faces bisected on diagonals with pyritohedral symmetry. The icosahedra with pyritohedral symmetry constitute an infinite family of polyhedra which includes the cuboctahedron, regular icosahedron, Jessen's icosahedron, and double cover octahedron. Cyclical kinematic transformations among the members of this family exist. Other icosahedra Rhombic icosahedron The rhombic icosahedron is a zonohedron made up of 20 congruent rhombs. It can be derived from the rhombic triacontahedron by removing 10 middle faces. Even though all the faces are congruent, the rhombic icosahedron is not face-transitive. Pyramid and prism symmetries Common icosahedra with pyramid and prism symmetries include: a 19-sided pyramid (plus 1 base = 20); an 18-sided prism (plus 2 ends = 20); a 9-sided antiprism (2 sets of 9 sides + 2 ends = 20); a 10-sided bipyramid (2 sets of 10 sides = 20); and a 10-sided trapezohedron (2 sets of 10 sides = 20). Johnson solids Several Johnson solids are icosahedra.
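The coordinate construction described above is straightforward to check numerically. A minimal Python sketch (not part of the original article) that builds the 12 vertices from cyclic permutations and sign flips of (0, ±1, ±ϕ), the scaled form of the (ϕ, 1, 0) construction, and verifies the expected edge structure:

```python
import itertools
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# All cyclic permutations and sign flips of (0, 1, phi).
vertices = set()
for s1, s2 in itertools.product((1.0, -1.0), repeat=2):
    base = (0.0, s1, s2 * phi)
    for shift in range(3):  # the three cyclic permutations
        vertices.add(tuple(base[(i + shift) % 3] for i in range(3)))

assert len(vertices) == 12
verts = sorted(vertices)

# In this scaling the edge length is 2, and each vertex of a regular
# icosahedron has exactly 5 nearest neighbours at that distance.
edge = min(math.dist(a, b) for a, b in itertools.combinations(verts, 2))
for v in verts:
    neighbours = sum(1 for w in verts
                     if w != v and abs(math.dist(v, w) - edge) < 1e-9)
    assert neighbours == 5
print(f"12 vertices, edge length {edge:.6f}")  # prints 2.000000
```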
Mathematics
Three-dimensional space
null
53946357
https://en.wikipedia.org/wiki/Stokes%20problem
Stokes problem
In fluid dynamics, Stokes problem, also known as Stokes' second problem or, in reference to the resulting flow, the Stokes boundary layer or oscillating boundary layer, is a problem of determining the flow created by an oscillating solid surface, named after Sir George Stokes. It is considered one of the simplest unsteady problems that have an exact solution for the Navier–Stokes equations. In turbulent flow, this is still named a Stokes boundary layer, but now one has to rely on experiments, numerical simulations or approximate methods in order to obtain useful information on the flow. Flow description (Lagerstrom, Paco Axel, Laminar Flow Theory, Princeton University Press, 1996) Consider an infinitely long plate oscillating with velocity $U \cos \omega t$ in the $x$ direction, located at $y = 0$ in an infinite domain of fluid, where $\omega$ is the frequency of the oscillations. The incompressible Navier–Stokes equations reduce to $$\frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial y^2},$$ where $\nu$ is the kinematic viscosity. The pressure gradient does not enter into the problem. The no-slip condition on the wall is $u(0, t) = U \cos \omega t$, and the second boundary condition, $u(y \to \infty, t) \to 0$, reflects the fact that the motion of the plate is not felt at infinity. The flow is due only to the motion of the plate; there is no imposed pressure gradient. Solution (Landau, Lev Davidovich, and Evgenii Mikhailovich Lifshitz, Fluid Mechanics, 1987) An initial condition is not required because of periodicity. Since both the equation and the boundary conditions are linear, the velocity can be written as the real part of a complex function, $u = U \operatorname{Re}\{f(y)\, e^{i\omega t}\}$, because $\cos \omega t = \operatorname{Re}\, e^{i\omega t}$. Substituting this into the partial differential equation reduces it to the ordinary differential equation $$i\omega f = \nu f'',$$ with boundary conditions $f(0) = 1$ and $f(\infty) = 0$. The solution to the above problem is $$u(y, t) = U e^{-y\sqrt{\omega/2\nu}} \cos\!\left(\omega t - y\sqrt{\frac{\omega}{2\nu}}\right).$$ The disturbance created by the oscillating plate travels as a transverse wave through the fluid, but it is strongly damped by the exponential factor. The depth of penetration of this wave, $\delta = \sqrt{2\nu/\omega}$, decreases with the frequency of the oscillation but increases with the kinematic viscosity of the fluid. The force per unit area exerted on the plate by the fluid is $$F = \mu \left.\frac{\partial u}{\partial y}\right|_{y=0} = \sqrt{\mu\rho\omega}\, U \cos\!\left(\omega t - \frac{3\pi}{4}\right),$$ so there is a phase shift between the oscillation of the plate and the force created. Vorticity oscillations near the boundary An important observation from Stokes' solution for the oscillating Stokes flow is that vorticity oscillations are confined to a thin boundary layer and damp exponentially when moving away from the wall. This observation is also valid for the case of a turbulent boundary layer. Outside the Stokes boundary layer – which is often the bulk of the fluid volume – the vorticity oscillations may be neglected. To good approximation, the flow velocity oscillations are irrotational outside the boundary layer, and potential flow theory can be applied to the oscillatory part of the motion. This significantly simplifies the solution of these flow problems, and is often applied in the irrotational flow regions of sound waves and water waves. Fluid bounded by an upper wall If the fluid domain is bounded by an upper, stationary wall located at a height $y = h$, the flow velocity is given by $$u(y, t) = U \operatorname{Re}\left\{\frac{\sinh k(h - y)}{\sinh kh}\, e^{i\omega t}\right\}, \qquad k = \sqrt{\frac{i\omega}{\nu}}.$$ Fluid bounded by a free surface Suppose the fluid domain extends over $0 < y < h$, with $y = h$ representing a shear-free surface. Then the solution, as shown by Chia-Shun Yih in 1968, is given by $$u(y, t) = U \operatorname{Re}\left\{\frac{\cosh k(h - y)}{\cosh kh}\, e^{i\omega t}\right\},$$ with $k = \sqrt{i\omega/\nu}$ as before. Flow due to an oscillating pressure gradient near a plane rigid plate The case of an oscillating far-field flow, with the plate held at rest, can easily be constructed from the previous solution for an oscillating plate by using linear superposition of solutions.
Consider a uniform velocity oscillation $u(y \to \infty, t) = U \cos \omega t$ far away from the plate and a vanishing velocity at the plate, $u(0, t) = 0$. Unlike the stationary fluid in the original problem, the pressure gradient at infinity here must be a harmonic function of time. The solution is then given by $$u(y, t) = U \cos \omega t - U e^{-y\sqrt{\omega/2\nu}} \cos\!\left(\omega t - y\sqrt{\frac{\omega}{2\nu}}\right),$$ which is zero at the wall $y = 0$, corresponding to the no-slip condition for a wall at rest. This situation is often encountered in sound waves near a solid wall, or for the fluid motion near the sea bed in water waves. The vorticity of the oscillating flow near a wall at rest is equal in magnitude to that in the case of an oscillating plate, but of opposite sign. Stokes problem in cylindrical geometry Torsional oscillation Consider an infinitely long cylinder of radius $a$ exhibiting torsional oscillation with angular velocity $\Omega \cos \omega t$, where $\omega$ is the frequency. After the initial transient phase, the azimuthal velocity approaches $$u_\theta(r, t) = \Omega a \operatorname{Re}\left\{\frac{K_1(r\sqrt{i\omega/\nu})}{K_1(a\sqrt{i\omega/\nu})}\, e^{i\omega t}\right\},$$ where $K_1$ is the modified Bessel function of the second kind. This solution can be expressed with real argument in terms of the Kelvin functions $\ker$ and $\mathrm{kei}$, using the dimensionless oscillatory Reynolds number $R = \omega a^2/\nu$, with $\nu$ being the kinematic viscosity. Axial oscillation If the cylinder oscillates in the axial direction with velocity $U \cos \omega t$, then the velocity field is $$u_z(r, t) = U \operatorname{Re}\left\{\frac{K_0(r\sqrt{i\omega/\nu})}{K_0(a\sqrt{i\omega/\nu})}\, e^{i\omega t}\right\},$$ where $K_0$ is the modified Bessel function of the second kind. Stokes–Couette flow In Couette flow, instead of the translational motion of one of the plates, an oscillation of one plane is executed. If the bottom wall is at rest at $y = 0$ and the upper wall at $y = h$ executes an oscillatory motion with velocity $U \cos \omega t$, then the velocity field is given by $$u(y, t) = U \operatorname{Re}\left\{\frac{\sinh ky}{\sinh kh}\, e^{i\omega t}\right\}, \qquad k = \sqrt{\frac{i\omega}{\nu}}.$$ The frictional force per unit area on the moving plane is $\mu U \operatorname{Re}\{k \coth kh\, e^{i\omega t}\}$, and on the fixed plane it is $\mu U \operatorname{Re}\{k\, \mathrm{csch}\, kh\, e^{i\omega t}\}$.
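A short numerical sketch in Python (with assumed parameter values, not taken from the article) that evaluates the oscillating-plate solution above and spot-checks that it satisfies the reduced Navier–Stokes equation:

```python
import numpy as np

# Stokes' second problem: plate at y = 0 oscillating as U*cos(omega*t).
# Parameter values below are assumptions chosen for illustration.
U = 1.0            # plate velocity amplitude, m/s
omega = 2 * np.pi  # oscillation frequency, rad/s
nu = 1e-3          # kinematic viscosity, m^2/s

k = np.sqrt(omega / (2 * nu))  # inverse decay length sqrt(omega/2nu)
delta = 1 / k                  # penetration depth sqrt(2nu/omega)

def u(y, t):
    """Exact solution u(y, t) = U exp(-k y) cos(omega t - k y)."""
    return U * np.exp(-k * y) * np.cos(omega * t - k * y)

# The amplitude decays by a factor e per penetration depth delta.
print(f"penetration depth delta = {delta:.4f} m")
for m in (1, 2, 3):
    print(f"amplitude at y = {m}*delta: {U * np.exp(-m):.3f}")

# Spot-check the PDE du/dt = nu * d2u/dy2 with central differences.
y0, t0, h = 0.02, 0.7, 1e-5
lhs = (u(y0, t0 + h) - u(y0, t0 - h)) / (2 * h)
rhs = nu * (u(y0 + h, t0) - 2 * u(y0, t0) + u(y0 - h, t0)) / h**2
assert abs(lhs - rhs) < 1e-6
print(f"PDE residual at (y, t) = ({y0}, {t0}): {lhs - rhs:.2e}")
```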
Physical sciences
Fluid mechanics
Physics
38233680
https://en.wikipedia.org/wiki/Peatland
Peatland
A peatland is a type of wetland whose soils consist of organic matter from decaying plants, forming layers of peat. Peatlands arise because of incomplete decomposition of organic matter, usually litter from vegetation, due to water-logging and subsequent anoxia. Peatlands are unusual landforms that derive mostly from biological rather than physical processes, and can take on characteristic shapes and surface patterning. The formation of peatlands is primarily controlled by climatic conditions such as precipitation and temperature, although terrain relief is a major factor, as waterlogging occurs more easily on flatter ground and in basins. Peat formation typically begins with the paludification of mineral-soil forests, the terrestrialisation of lakes, or primary peat formation on bare soils in previously glaciated areas. A peatland that is actively forming peat is called a mire. All types of mires share the common characteristic of being saturated with water, at least seasonally, with actively forming peat, while having their own ecosystems. Peatlands are the largest natural carbon store on land. Covering around 3 million km2 globally, they sequester 0.37 gigatonnes (Gt) of carbon dioxide (CO2) a year. Peat soils store over 600 Gt of carbon, more than the carbon stored in all other vegetation types, including forests. This substantial carbon storage represents about 30% of the world's soil carbon, underscoring their critical importance in the global carbon cycle. In their natural state, peatlands provide a range of ecosystem services, including minimising flood risk and erosion, purifying water and regulating climate. Peatlands are under threat from commercial peat harvesting, drainage and conversion for agriculture (notably palm oil in the tropics), and fires, which are predicted to become more frequent with climate change. The destruction of peatlands results in the release of stored greenhouse gases into the atmosphere, further exacerbating climate change. Types For botanists and ecologists, the term peatland is a general term for any terrain dominated by a sufficiently deep layer of peat (commonly at least 30 cm), even if it has been completely drained (i.e., a peatland can be dry). A peatland that is still capable of forming new peat is called a mire, while drained and converted peatlands may still have a peat layer but are not considered mires, as the formation of new peat has ceased. There are two types of mire: bog and fen. A bog is a mire that, due to its raised location relative to the surrounding landscape, obtains all its water solely from precipitation (ombrotrophic). A fen is located on a slope, flat, or in a depression and gets most of its water from the surrounding mineral soil or from groundwater (minerotrophic). Thus, while a bog is always acidic and nutrient-poor, a fen may be slightly acidic, neutral, or alkaline, and either nutrient-poor or nutrient-rich. All mires are initially fens when the peat starts to form, and may turn into bogs once the height of the peat layer rises above the surrounding land. A quagmire is a floating (quaking) mire, bog, or any peatland in a stage of hydrosere or hydrarch (hydroseral) succession, in which floating mats of vegetation yield underfoot. Ombrotrophic types of quagmire may be called quaking bogs (quivering bogs); minerotrophic types can be named with the term quagfen. Some swamps can also be peatlands (e.g. peat swamp forests), while marshes are generally not considered to be peatlands.
Swamps are characterized by their forest canopy or the presence of other tall and dense vegetation like papyrus. Like fens, swamps are typically of higher pH and nutrient availability than bogs. Some bogs and fens can support limited shrub or tree growth on hummocks. A marsh is a type of wetland within which vegetation is rooted in mineral soil. Global distribution Peatlands are found around the globe, although they are at their greatest extent at high latitudes in the Northern Hemisphere. Peatlands are estimated to cover around 3% of the globe's surface, although estimating the extent of their cover worldwide is difficult due to the varying accuracy and methodologies of land surveys in many countries. Mires occur wherever conditions are right for peat accumulation: largely where organic matter is constantly waterlogged. Hence the distribution of mires depends on topography, climate, parent material, biota and time. The type of mire (bog, fen, marsh or swamp) also depends on each of these factors. The largest accumulation of mires, around 64% of global peatlands, is found in the temperate, boreal and subarctic zones of the Northern Hemisphere. Mires are usually shallow in polar regions because of the slow rate of accumulation of dead organic matter, and often contain permafrost and palsas. Very large swathes of Canada, northern Europe and northern Russia are covered by boreal mires. In temperate zones mires are typically more scattered due to historical drainage and peat extraction, but can cover large areas. One example is blanket bog, which forms where precipitation is very high, i.e. in maritime climates near the coasts of the north-east and south Pacific, and the north-west and north-east Atlantic. In the subtropics, mires are rare and restricted to the wettest areas. Mires can be extensive in the tropics, typically underlying tropical rainforest (for example, in Kalimantan, the Congo Basin and the Amazon basin). Tropical peat formation is known to occur in coastal mangroves as well as in areas of high altitude. Tropical mires largely form where high precipitation is combined with poor conditions for drainage. Tropical mires account for around 11% of peatlands globally (more than half of which are found in Southeast Asia), and are most commonly found at low altitudes, although they can also be found in mountainous regions, for example in South America, Africa and Papua New Guinea. Indonesia, particularly on the islands of Sumatra, Kalimantan and Papua, has one of the largest areas of peatland in the world, about 24 million hectares. These peatlands play an important role in global carbon storage and have very high biodiversity. However, peatlands in Indonesia also face major threats from deforestation and forest fires. In the early 21st century, the world's largest tropical mire was found in the Central Congo Basin, covering 145,500 km2 and storing up to 10^13 kg of carbon. The total area of mires has declined globally due to drainage for agriculture, forestry and peat harvesting. For example, more than 50% of the original European mire area (over 300,000 km2) has been lost. Some of the largest losses have been in Russia, Finland, the Netherlands, the United Kingdom, Poland and Belarus. A catalog of the peat research collection at the University of Minnesota Duluth provides references to research on worldwide peat and peatlands. Biochemical processes Peatlands have unusual chemistry that influences, among other things, their biota and water outflow.
Peat has a very high cation-exchange capacity due to its high organic matter content: cations such as Ca2+ are preferentially adsorbed onto the peat in exchange for H+ ions. Water passing through peat declines in nutrients and in pH. Therefore, mires are typically nutrient-poor and acidic unless the inflow of groundwater (bringing in supplementary cations) is high. Generally, peat forms whenever the input of carbon into the soil from dead organic matter exceeds the carbon output via organic matter decomposition. This occurs because of the anoxic state of water-logged peat, which slows decomposition. Peat-forming vegetation is typically also recalcitrant (poorly decomposing) due to its high lignin and low nutrient content. Topographically, accumulating peat elevates the ground surface above the original topography. Mires can reach considerable heights above the underlying mineral soil or bedrock: peat depths above 10 m are commonly recorded in temperate regions (many temperate and most boreal mires were removed by ice sheets in the last Ice Age), and above 25 m in tropical regions.[7] When the absolute decay rate of peat in the catotelm (the lower, water-saturated zone of the peat layer) matches the rate of input of new peat into the catotelm, the mire stops growing in height.[8] Carbon storage and methanogenesis Despite accounting for just 3% of Earth's land surface, peatlands are collectively a major carbon store, containing between 500 and 700 billion tonnes of carbon. The carbon stored within peatlands equates to over half the amount of carbon found in the atmosphere. Peatlands interact with the atmosphere primarily through the exchange of carbon dioxide, methane and nitrous oxide, and can be damaged by excess nitrogen from agriculture or rainwater. The sequestration of carbon dioxide takes place at the surface via photosynthesis, while losses of carbon dioxide occur through living plants via autotrophic respiration and from the litter and peat via heterotrophic respiration. In their natural state, mires are a small atmospheric carbon dioxide sink, through the photosynthesis of peat vegetation, which outweighs their release of greenhouse gases. On the other hand, most mires are net emitters of methane and nitrous oxide. Owing to their continued sequestration over millennia, and because of the longer atmospheric lifespan of carbon dioxide compared with methane and nitrous oxide, peatlands have had a net cooling effect on the atmosphere. The water table position of a peatland is the main control on its carbon release to the atmosphere. When the water table rises after a rainstorm, the peat and its microbes are submerged under water, inhibiting access to oxygen and reducing release via respiration. Carbon dioxide release increases when the water table falls, such as during a drought, as this increases the availability of oxygen to aerobic microbes, accelerating peat decomposition. Levels of methane emission also vary with the water table position and temperature. A water table near the peat surface gives anaerobic microorganisms the opportunity to flourish. Methanogens are strictly anaerobic organisms and produce methane from organic matter in anoxic conditions below the water table level, while some of that methane is oxidised by methanotrophs above the water table level. Therefore, changes in water table level influence the size of these methane production and consumption zones.
Increased soil temperatures also contribute to increased seasonal methane flux. A study in Alaska found that seasonal methane flux may vary by as much as 300% with the wetter and warmer soil conditions brought by climate change. Peatlands are important for studying past climate because they are sensitive to changes in the environment and can preserve records of isotopes, pollutants, macrofossils, atmospherically deposited metals and pollen. For example, carbon-14 dating can reveal the age of the peat. The dredging and destruction of a peatland releases its stored carbon dioxide and destroys an archive that could otherwise reveal irreplaceable information about past climatic conditions. Many kinds of microorganisms inhabit peatlands, owing to the regular supply of water and the abundance of peat-forming vegetation; they include, but are not limited to, methanogens, algae, bacteria and zoobenthos, while among the peat-forming plants Sphagnum species are the most abundant. Humic substances Peat contains a substantial amount of organic matter, dominated by humic acid. Humic materials can store large amounts of water, making them an essential component of the peat environment and contributing to increased carbon storage through the resulting anaerobic conditions. If the peatland is dried out by long-term cultivation and agricultural use, the water table is lowered and the increased aeration releases carbon. Upon extreme drying, the ecosystem can undergo a state shift, turning the mire into barren land with lower biodiversity and richness. Humic acid forms during the biogeochemical degradation of vegetation debris and animal residue, and this load of organic matter is a source of coal precursors. Prematurely exposing the organic matter to the atmosphere promotes its conversion to carbon dioxide, which is released into the atmosphere. Use by humans Records of past human behaviour and environments can be contained within peatlands. These may take the form of human artefacts, or palaeoecological and geochemical records. Peatlands are used by humans in modern times for a range of purposes, the most dominant being agriculture and forestry, which account for around a quarter of global peatland area. This involves cutting drainage ditches to lower the water table, with the intended purpose of enhancing the productivity of forest cover or of allowing use as pasture or cropland. Agricultural uses of mires include the use of natural vegetation for hay cropping or grazing, and the cultivation of crops on a modified surface. In addition, the commercial extraction of peat for energy production is widely practised in Northern European countries, such as Russia, Sweden, Finland, Ireland and the Baltic states. Tropical peatlands comprise 0.25% of Earth's terrestrial land surface but store 3% of all soil and forest carbon stocks. Their use by humans, including the draining and harvesting of tropical peat forests, results in the emission of large amounts of carbon dioxide into the atmosphere. In addition, fires occurring on peatland dried by the draining of peat bogs release even more carbon dioxide. The economic value of a tropical peatland was once derived from raw materials such as wood, bark, resin and latex, whose extraction did not contribute to large carbon emissions. In Southeast Asia, peatlands are drained and cleared for a variety of human uses, including the production of palm oil and timber for export, primarily in developing nations.
This releases stored carbon dioxide and prevents the system from sequestering carbon again. Tropical peatlands The global distribution of tropical peatlands is concentrated in Southeast Asia, where agricultural use of peatlands has increased in recent decades. Large areas of tropical peatland have been cleared and drained for the production of food and cash crops such as palm oil. Large-scale drainage of these plantations often results in subsidence, flooding, fire and deterioration of soil quality. Small-scale encroachment, on the other hand, is linked to poverty and is so widespread that it, too, negatively impacts these peatlands. The biotic and abiotic factors controlling Southeast Asian peatlands are interdependent: the soil, hydrology and morphology are created by the present vegetation through the accumulation of its own organic matter, which builds a favourable environment for this specific vegetation. The system is therefore vulnerable to changes in hydrology or vegetation cover. These peatlands are mostly located in developing regions with impoverished and rapidly growing populations, and the lands have become targets for commercial logging, paper pulp production and conversion to plantations through clear-cutting, drainage and burning. Drainage of tropical peatlands alters their hydrology and increases their susceptibility to fire and soil erosion, as a consequence of changes in physical and chemical composition. The change in soil strongly affects the sensitive vegetation, and forest die-off is common. The short-term effect is a decrease in biodiversity, but the long-term effect, since such encroachments are hard to reverse, is a loss of habitat. Poor knowledge of peatlands' sensitive hydrology and scarce nutrients often leads to failing plantations, which in turn increases pressure on the remaining peatlands. Biology and peat characteristics Tropical peatland vegetation varies with climate and location. Three broad types can be distinguished: mangrove woodlands in the littoral zones and saline deltas; swamp forests further inland, occurring on the margins of peatlands, with a palm-rich flora and trees up to 70 m tall and 8 m in girth, accompanied by ferns and epiphytes; and padang (from the Malay and Indonesian word for forest), consisting of shrubs and tall thin trees, in the centre of large peatlands. The diversity of woody species, such as trees and shrubs, is far greater in tropical peatlands than in peatlands of other types. Peat in the tropics is therefore dominated by woody material from the trunks of trees and shrubs, and contains little to none of the Sphagnum moss that dominates boreal peatlands. It is only partly decomposed, and the surface consists of a thick layer of leaf litter. Forestry in peatlands leads to drainage and rapid carbon losses, since it decreases inputs of organic matter and accelerates decomposition. In contrast to temperate wetlands, tropical peatlands are home to several species of fish. Many new, often endemic, species have been discovered, but many of them are considered threatened. Greenhouse gases and fires The tropical peatlands of Southeast Asia cover only 0.2% of Earth's land area, but their CO2 emissions are estimated at 2 Gt per year, equal to 7% of global fossil-fuel emissions. These emissions grow with the drainage and burning of peatlands, and a severe fire can release up to 4,000 t of CO2/ha.
Burning events in tropical peatlands are becoming more frequent as a result of large-scale drainage and land clearance; in the past ten years more than 2 million hectares have burnt in Southeast Asia alone. These fires typically last for 1–3 months and release large amounts of CO2. Indonesia is one of the countries suffering from peatland fires, especially during years with ENSO-related drought, an increasing problem since 1982 as a result of expanding land use and agriculture. During the El Niño event of 1997–1998, more than 24,400 km2 of peatland was lost to fires in Indonesia alone, of which 10,000 km2 burnt in Kalimantan and Sumatra. The output of CO2 was estimated at 0.81–2.57 Gt of carbon, equal to 13–40% of that year's global output from fossil fuel burning (implying a global output of roughly 6.2–6.4 Gt of carbon that year). Indonesia is now considered the third-biggest contributor to global CO2 emissions, caused primarily by these fires. The 2015 El Niño event further worsened the condition of these peatlands, as wildfires burned approximately 3 million hectares of forests and peatlands on the east coast of Sumatra and in Central Kalimantan, emitting 11.3 teragrams of CO2 per day during September and October of that year. With a warming climate, these burnings are expected to increase in intensity and number. The severity of the Kalimantan fires was partly a result of a dry climate combined with an extensive rice farming project, the Mega Rice Project, started in the 1990s, which converted 1 Mha of peatland to rice paddies: forest and land were cleared by burning, and 4,000 km of channels drained the area. Drought and acidification of the land led to poor harvests, and the project was abandoned in 1999. Similar projects in China have led to immense losses of tropical marshes and fens as a result of rice production. Drainage, which also increases the risk of burning, can cause additional CO2 emissions of 30–100 t/ha/year if the water table is lowered by only 1 m. The draining of peatlands is likely the most important and long-lasting threat to peatlands globally, and it is especially prevalent in the tropics. Peatlands also release methane, a greenhouse gas with a strong global warming potential; however, subtropical wetlands have shown high CO2 binding per mole of methane released, which counteracts the warming effect. Tropical peatlands are suggested to contain about 100 Gt of carbon, corresponding to more than 50% of the carbon present as CO2 in the atmosphere. Accumulation rates of carbon during the last millennium were close to 40 g C/m2/yr. Northern peatlands Northern peatlands are associated with boreal and subarctic climates. They were mostly built up during the Holocene, after the retreat of the Pleistocene glaciers; tropical peatlands, in contrast, are much older. Total northern peat carbon stocks are estimated at 1,055 Gt of carbon. Of all northern circumpolar countries, Russia has the largest area of peatlands, and it contains the largest peatland in the world, the Great Vasyugan Mire. The Nakaikemi Wetland in southwest Honshu, Japan, is more than 50,000 years old and has a depth of 45 m. The Philippi peatland in Greece probably has one of the deepest peat layers, with a depth of 190 m. Impacts on global climate According to the IPCC Sixth Assessment Report, the conservation and restoration of wetlands and peatlands have large economic potential to mitigate greenhouse gas emissions, providing benefits for adaptation, mitigation and biodiversity.
Wetlands provide an environment in which organic carbon is stored in living plants, dead plants and peat, and from which it is also converted to carbon dioxide and methane. Three main factors give wetlands the ability to sequester and store carbon: high biological productivity, a high water table and low decomposition rates. Suitable meteorological and hydrological conditions are necessary to provide an abundant water source for the wetland. Fully water-saturated wetland soils allow anaerobic conditions to develop, storing carbon but releasing methane. Wetlands make up about 5–8% of Earth's terrestrial land surface but contain about 20–30% (roughly 500–750 Gt) of the planet's 2,500 Gt of soil carbon. Peatlands contain the highest amounts of soil organic carbon of all wetland types. Wetlands can become sources of carbon, rather than sinks, as decomposition within the ecosystem emits methane. Natural peatlands do not always have a measurable cooling effect on the climate over short time spans, because the cooling effect of carbon sequestration is offset by the emission of methane, a strong greenhouse gas. However, given the short atmospheric "lifetime" of methane (about 12 years), it is often said that, over a time frame of about 300 years, methane emissions are unimportant compared with the carbon sequestered by wetlands. Within that time frame or less, most wetlands become net sinks of both carbon and radiative forcing. Hence, peatlands do cool the Earth's climate over longer periods, because methane is oxidised quickly and removed from the atmosphere, whereas atmospheric carbon dioxide is continuously absorbed. Throughout the Holocene (the past 12,000 years), peatlands have been persistent terrestrial carbon sinks and have had a net cooling effect, sequestering 5.6 to 38 g of carbon per square metre per year. Today, northern peatlands are estimated to sequester on average 20 to 30 g of carbon per square metre per year. Peatlands also insulate the permafrost in subarctic regions, delaying thawing during summer and inducing the formation of permafrost. As the global climate continues to warm, wetlands could become major carbon sources, since higher temperatures cause higher carbon dioxide emissions. Compared with untilled cropland, wetlands can sequester around twice as much carbon. Carbon sequestration occurs in constructed wetlands as well as natural ones; estimates of greenhouse gas fluxes indicate that natural wetlands have lower fluxes, but man-made wetlands have a greater carbon sequestration capacity. The carbon sequestration abilities of wetlands can be improved through restoration and protection strategies, but it takes several decades for these restored ecosystems to become comparable in carbon storage to peatlands and other forms of natural wetland. Studies also highlight the critical role of peatlands in biodiversity conservation and hydrological stability: these ecosystems are unique habitats for diverse species, including specific insects and amphibians, and act as natural water reservoirs, releasing water during dry periods to sustain nearby freshwater ecosystems and agriculture. Drainage for agriculture and forestry The exchange of carbon between peatlands and the atmosphere is of current concern in ecology and biogeochemistry. The drainage of peatlands for agriculture and forestry has resulted in the emission of extensive greenhouse gases into the atmosphere, most notably carbon dioxide and methane.
By allowing oxygen to enter the peat column within a mire, drainage disrupts the balance between peat accumulation and decomposition, and the subsequent oxidative degradation releases carbon into the atmosphere. Drainage of mires for agriculture thus transforms them from net carbon sinks into net carbon emitters. Although methane emissions from mires have been observed to decrease following drainage, the net effect of drainage is a greater warming contribution, because rates of peat accumulation are low. Peatland carbon has been described as "irrecoverable", meaning that, if lost through drainage, it cannot be recovered on time scales relevant to climate mitigation. When undertaken in a way that preserves the hydrological state of a mire, the anthropogenic use of its resources can avoid significant greenhouse gas emissions; continued drainage, however, will result in an increased release of carbon, contributing to global warming. As of 2016, drained peatlands were estimated to account for around 10% of all greenhouse gas emissions from agriculture and forestry. Palm oil plantations Palm oil has increasingly become one of the world's largest crops. In comparison with alternatives, palm oil is considered among the most efficient sources of vegetable oil and biofuel, requiring only 0.26 hectares of land to produce 1 ton of oil (a yield of roughly 3.8 tons per hectare). Palm oil has therefore become a popular cash crop in many low-income countries and has provided economic opportunities for communities. With palm oil a leading export in countries such as Indonesia and Malaysia, many smallholders have found economic success in palm oil plantations. However, the land selected for plantations is typically a substantial carbon store supporting biodiverse ecosystems. Palm oil plantations have replaced much of the forested peatland in Southeast Asia; by 2006 an estimated 12.9 Mha, or about 47%, of Southeast Asian peatlands had been deforested. In their natural state, peatlands are waterlogged, with high water tables, making for unsuitable agricultural soil. To create soil viable for plantation, mires in the tropical regions of Indonesia and Malaysia are drained and cleared. The peatland forests harvested for palm oil production serve as above- and below-ground carbon stores, containing at least 42,069 million metric tonnes (Mt) of soil carbon. Exploitation of this land raises many environmental concerns, namely increased greenhouse gas emissions, risk of fires and a decrease in biodiversity. Greenhouse gas emissions from palm oil planted on peatland are estimated at the equivalent of 12.4 (best case) to 76.6 t CO2/ha (worst case). Tropical peatland converted to palm oil plantation can remain a net source of carbon to the atmosphere after 12 years. In their natural state, peatlands are resistant to fire; drainage for palm oil plantations creates a dry layer of flammable peat. Because peat is carbon-dense, fires in compromised peatlands release extreme amounts of both carbon dioxide and toxic smoke into the air, adding to greenhouse gas emissions and causing thousands of deaths every year. Decreased biodiversity due to deforestation and drainage makes these ecosystems more vulnerable and less resilient to change; homogeneous ecosystems are at increased risk from extreme climate conditions and are less likely to recover from fires. Fires Some peatlands are being dried out by climate change.
Drainage of peatlands due to climatic factors may also increase the risk of fires, presenting a further risk of carbon and methane release into the atmosphere. Owing to their naturally high moisture content, pristine mires generally have a low risk of fire ignition, but once this waterlogged state is dried out, the carbon-dense vegetation becomes vulnerable to fire. In addition, because of the oxygen-deficient nature of the peat, fires can smoulder beneath the surface, causing incomplete combustion of the organic matter and resulting in extreme emission events. In recent years the occurrence of wildfires in peatlands has increased significantly worldwide, particularly in tropical regions. This can be attributed to a combination of drier weather and changes in land use that involve draining water from the landscape. The resulting loss of biomass through combustion has led to significant emissions of greenhouse gases in both tropical and boreal/temperate peatlands, and fire events are predicted to become more frequent as the global climate warms and dries. Management and rehabilitation The United Nations Convention on Biological Diversity highlights peatlands as key ecosystems to be conserved and protected. The convention requires governments at all levels to present action plans for the conservation and management of wetland environments. Wetlands are also protected under the 1971 Ramsar Convention. Often, restoration is done by blocking drainage channels in the peatland and allowing the natural vegetation to recover. Rehabilitation projects undertaken in North America and Europe usually focus on rewetting peatlands and revegetating native species; this mitigates carbon release in the short term, before the new vegetation growth provides a fresh source of organic litter to fuel peat formation in the long term. UNEP, which leads the Global Peatlands Initiative, is supporting peatland restoration in Indonesia. Peat extraction has been forbidden in Chile since April 2024.
Computational astrophysics
Computational astrophysics refers to the methods and computing tools developed and used in astrophysics research. Like computational chemistry or computational physics, it is both a specific branch of theoretical astrophysics and an interdisciplinary field relying on computer science, mathematics, and wider physics. Computational astrophysics is most often studied through an applied mathematics or astrophysics programme at PhD level. Well-established areas of astrophysics employing computational methods include magnetohydrodynamics, astrophysical radiative transfer, stellar and galactic dynamics, and astrophysical fluid dynamics. A recently developed field with interesting results is numerical relativity. Research Many astrophysicists use computers in their work, and a growing number of astrophysics departments now have research groups specially devoted to computational astrophysics. Important research initiatives include the US Department of Energy (DoE) SciDAC collaboration for astrophysics and the now defunct European AstroSim collaboration. A notable active project is the international Virgo Consortium, which focuses on cosmology. In August 2015, during the general assembly of the International Astronomical Union, a new Commission C.B1 on Computational Astrophysics was inaugurated, thereby recognizing the importance of astronomical discovery by computing. Important techniques of computational astrophysics include particle-in-cell (PIC) and the closely related particle-mesh (PM) methods, N-body simulations, Monte Carlo methods, as well as grid-free (with smoothed particle hydrodynamics (SPH) as an important example) and grid-based methods for fluids. In addition, methods from numerical analysis for solving ODEs and PDEs are also used. Simulation of astrophysical flows is of particular importance, as many objects and processes of astronomical interest, such as stars and nebulae, involve gases. Fluid computer models are often coupled with radiative transfer, (Newtonian) gravity, nuclear physics and (general) relativity to study highly energetic phenomena such as supernovae, relativistic jets, active galaxies and gamma-ray bursts, and are also used to model stellar structure, planetary formation, the evolution of stars and of galaxies, and exotic objects such as neutron stars, pulsars, magnetars and black holes. Computer simulations are often the only means to study stellar collisions, galaxy mergers, and galactic and black hole interactions. In recent years the field has made increasing use of parallel and high-performance computers. Tools Computational astrophysics as a field makes extensive use of software and hardware technologies. These systems are often highly specialized and made by dedicated professionals, and so generally find limited popularity in the wider (computational) physics community. Hardware Like other similar fields, computational astrophysics makes extensive use of supercomputers and computer clusters. Even on the scale of a normal desktop it is possible to accelerate the hardware. Perhaps the most notable computer architecture built specially for astrophysics is the GRAPE (gravity pipe) in Japan. As of 2010, the biggest N-body simulations, such as DEGIMA, do general-purpose computing on graphics processing units. Software Many codes and software packages exist, along with the various researchers and consortia that maintain them. Most codes tend to be n-body packages or fluid solvers of some sort. Examples of n-body codes include ChaNGa, MODEST, nbodylab.org and Starlab.
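To make the n-body idea concrete, here is a minimal sketch of a direct-summation N-body step with kick-drift-kick leapfrog integration, the basic scheme underlying many such codes; it is an illustration written for this article, not taken from any of the packages named above.

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass, eps=1e-3):
    """Direct O(N^2) summation of pairwise gravitational accelerations.

    eps is a softening length that prevents the force from diverging
    during close encounters (standard practice in collisionless codes).
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                      # vectors from body i to all others
        r2 = (dr ** 2).sum(axis=1) + eps ** 2  # softened squared distances
        r2[i] = 1.0                            # avoid division by zero at self
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                        # no self-force
        acc[i] = G * (mass[:, None] * dr * inv_r3[:, None]).sum(axis=0)
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step (second-order, symplectic)."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    pos = pos + dt * vel                             # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    return pos, vel

# Example: a near-circular two-body orbit in code units.
mass = np.array([1.0, 1e-3])
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
for _ in range(1000):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
print(pos[1])  # the secondary stays near radius ~1 from the primary
```

Production codes replace the O(N^2) loop with tree, mesh or GPU methods, but the time-integration structure is essentially this.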
For hydrodynamics there is usually a coupling between codes, as the motion of the fluid usually has some other effect (such as gravity, or radiation) in astrophysical situations. For example, for SPH/N-body there are GADGET and SWIFT; for grid-based/N-body there are RAMSES, ENZO, FLASH, and ART. AMUSE takes a different approach (called Noah's Ark) from the other packages by providing an interface structure to a large number of publicly available astronomical codes for addressing stellar dynamics, stellar evolution, hydrodynamics and radiative transport.
EciRGB
eciRGB is one of the standard RGB color spaces. It is recommended by the ECI (European Color Initiative) as a working color space for professional image editing, and it covers virtually all printing processes as well as all common display techniques. It thereby fulfills the requirements for color-accurate production in the graphics industry. The corresponding ICC profile, for inclusion in image-editing programs or for the calibration of monitors, can be obtained free of charge from the ECI website.
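The text above does not spell out the encoding; as a point of reference, the current eciRGB v2 profile is generally described as using the CIE L* lightness function as its tone curve. The following sketch assumes that description and is illustrative only, not extracted from the ICC profile itself.

```python
# Hypothetical illustration: assumes eciRGB v2 uses the CIE L* curve as its
# tone response, as it is commonly described; not taken from the ECI profile.

EPS = (6 / 29) ** 3    # CIE threshold, ~0.008856
KAPPA = (29 / 3) ** 3  # ~903.3

def lstar_encode(y: float) -> float:
    """Map a linear channel value in [0, 1] to L*/100 (assumed tone curve)."""
    lstar = 116 * y ** (1 / 3) - 16 if y > EPS else KAPPA * y
    return lstar / 100.0

def lstar_decode(v: float) -> float:
    """Inverse mapping: nonlinear [0, 1] back to linear."""
    lstar = 100.0 * v
    return ((lstar + 16) / 116) ** 3 if lstar > KAPPA * EPS else lstar / KAPPA

print(lstar_encode(0.18))                # mid-grey ~0.495 (L* ~49.5)
print(lstar_decode(lstar_encode(0.18)))  # ~0.18 round trip
```

The appeal of an L* curve for print work is that equal code-value steps correspond roughly to equal perceived lightness steps.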
Discovery of the neutron
The discovery of the neutron and its properties was central to the extraordinary developments in atomic physics in the first half of the 20th century. Early in the century, Ernest Rutherford developed a crude model of the atom, based on the gold foil experiment of Hans Geiger and Ernest Marsden. In this model, atoms had their mass and positive electric charge concentrated in a very small nucleus. By 1920, isotopes of chemical elements had been discovered, the atomic masses had been determined to be (approximately) integer multiples of the mass of the hydrogen atom, and the atomic number had been identified as the charge on the nucleus. Throughout the 1920s, the nucleus was viewed as composed of combinations of protons and electrons, the two elementary particles known at the time, but that model presented several experimental and theoretical contradictions. The essential nature of the atomic nucleus was established with the discovery of the neutron by James Chadwick in 1932 and the determination that it was a new elementary particle, distinct from the proton. The uncharged neutron was immediately exploited as a new means to probe nuclear structure, leading to such discoveries as the creation of new radioactive elements by neutron irradiation (1934) and the fission of uranium atoms by neutrons (1938). The discovery of fission led to the creation of both nuclear power and nuclear weapons by the end of World War II. Both the proton and the neutron were presumed to be elementary particles until the 1960s, when they were determined to be composite particles built from quarks. Discovery of radioactivity At the start of the 20th century, the vigorous debate as to the existence of atoms had not yet been resolved. Philosophers such as Ernst Mach and Wilhelm Ostwald denied that atoms were real, viewing them as a convenient mathematical construct, while scientists such as Arnold Sommerfeld and Ludwig Boltzmann saw that physical theories required the existence of atoms. Radioactivity was discovered in 1896 by the French scientist Henri Becquerel, while working with phosphorescent materials. In 1898, Ernest Rutherford at the Cavendish Laboratory distinguished two types of radioactivity, alpha rays and beta rays, which differed in their ability to penetrate, or travel into, ordinary objects or gases. Two years later, Paul Villard discovered gamma rays, which possessed even more penetrating power. These radiations were soon identified with known particles: beta rays were shown to be electrons by Walter Kaufmann in 1902; alpha rays were shown to be helium ions by Rutherford and Thomas Royds in 1907; and gamma rays were shown to be electromagnetic radiation, that is, a form of light, by Rutherford and Edward Andrade in 1914. These radiations had also been identified as emanating from atoms, hence they provided clues to the processes occurring within atoms. Conversely, the radiations were also recognized as tools that could be exploited in scattering experiments to probe the interior of atoms. Gold foil experiment and the discovery of the atomic nucleus At the University of Manchester between 1908 and 1913, Rutherford directed Hans Geiger and Ernest Marsden in a series of experiments to determine what happens when alpha particles scatter from metal foil. Now called the Rutherford gold foil experiment, or the Geiger–Marsden experiment, these measurements led to the extraordinary discovery that although most alpha particles passing through a thin gold foil experienced little deflection, a few scattered to a high angle.
The scattering indicated that some of the alpha particles ricocheted back from a small, but dense, component inside the atoms. Based on these measurements, by 1911 it was apparent to Rutherford that the atom consisted of a small, massive nucleus with positive charge surrounded by a much larger cloud of negatively charged electrons. The concentrated atomic mass was required to provide the observed deflection of the alpha particles, and Rutherford developed a mathematical model that accounted for the scattering. While the Rutherford model was largely ignored at the time, when Niels Bohr joined Rutherford's group he developed the Bohr model for electrons orbiting the nucleus in 1913, and this eventually led to an atomic model based on quantum mechanics by the mid-1920s. Discovery of isotopes Concurrent with the work of Rutherford, Geiger, and Marsden, the radiochemist Frederick Soddy at the University of Glasgow was studying chemical problems related to radioactive materials. Soddy had worked with Rutherford on radioactivity at McGill University. By 1910, about 40 different radioactive elements, referred to as radioelements, had been identified between uranium and lead, although the periodic table only allowed for 11 elements. Soddy and Kazimierz Fajans independently found in 1913 that an element undergoing alpha decay will produce an element two places to the left in the periodic system and an element undergoing beta decay will produce an element one place to the right in the periodic system. Also, those radioelements that reside in the same places in the periodic system are chemically identical. Soddy called these chemically identical elements isotopes. For his study of radioactivity and the discovery of isotopes, Soddy was awarded the 1921 Nobel Prize in Chemistry. Building from work by J. J. Thomson on the deflection of positively charged atoms by electric and magnetic fields, Francis Aston built the first mass spectrograph at the Cavendish Laboratory in 1919. He was then able to separate the two isotopes of neon, neon-20 and neon-22. Aston discovered the whole number rule: the masses of all the isotopes have whole-number relationships to oxygen-16, which he took to have a mass of exactly 16. (Today the whole-number rule is expressed in multiples of the atomic mass unit (amu) relative to carbon-12.) Significantly, the one exception to this rule was hydrogen itself, which had a mass value of 1.008. The excess mass was small, but well outside the limits of experimental uncertainty. Since Einstein's mass-energy equivalence had been known since 1905, Aston and others quickly realized that the mass discrepancy is due to the binding energy of atoms. When the contents of a number of hydrogen atoms are bound into a single atom, the energy of the single atom must be less than the sum of the energies of the separate hydrogen atoms, and therefore the mass of the single atom is less than the sum of the hydrogen atom masses. Aston's work on isotopes won him the 1922 Nobel Prize in Chemistry for the discovery of isotopes in a large number of non-radioactive elements, and for his enunciation of the whole number rule. Noting Aston's recent discovery of nuclear binding energy, in 1920 Arthur Eddington suggested that stars may obtain their energy by fusing hydrogen (protons) into helium and that the heavier elements may form in stars.
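To see the size of the effect Aston measured, here is a worked example using modern mass values (not Aston's own figures):

```latex
% Mass defect when four hydrogen atoms combine into one helium-4 atom
\Delta m = 4\,m(^{1}\mathrm{H}) - m(^{4}\mathrm{He})
         \approx 4(1.00783\,\mathrm{u}) - 4.00260\,\mathrm{u}
         = 0.02872\,\mathrm{u}
% Corresponding binding energy (1 u = 931.5 MeV/c^2)
E_b = \Delta m\,c^{2} \approx 0.02872 \times 931.5\ \mathrm{MeV}
    \approx 26.7\ \mathrm{MeV}
```

That is, about 0.7% of the original mass is converted to energy, the margin Eddington invoked as a stellar energy source.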
Atomic number and Moseley's law Rutherford and others had noted the disparity between the mass of an atom, computed in atomic mass units, and the approximate charge required on the nucleus for the Rutherford model to work. The required charge of the atomic nucleus was usually about half its atomic mass. Antonius van den Broek boldly hypothesized that the required charge, denoted by Z, was not half of the atomic weight for elements, but instead was exactly equal to the element's ordinal position in the periodic table. At that time, the positions of the elements in the periodic table were not known to have any physical significance. If the elements were ordered based on increasing atomic mass, however, periodicity in chemical properties was exhibited. Exceptions to this periodicity were apparent, however, such as cobalt and nickel. At the University of Manchester in 1913, Henry Moseley discussed the new Bohr model of the atom with the visiting Bohr. The model accounted for the electromagnetic emission spectrum of the hydrogen atom, and Moseley and Bohr wondered whether the electromagnetic emission spectra of heavier elements such as cobalt and nickel would follow their ordering by weight, or by their position in the periodic table. In 1913–1914 Moseley tested the question experimentally by using X-ray diffraction techniques. He found that the most intense short-wavelength line in the X-ray spectrum of a particular element, known as the K-alpha line, was related to the element's position in the periodic table, that is, its atomic number, Z. Indeed, Moseley introduced this nomenclature. Moseley found that, for a large number of elements, the frequencies of this radiation were related in a simple way to the atomic number: the square root of the K-alpha frequency is proportional to Z − 1. Within a year it was noted that the equation for the relation, now called Moseley's law, could be explained in terms of the 1913 Bohr model, with reasonable extra assumptions about atomic structure in other elements. Moseley's result, by Bohr's later account, not only established atomic number as a measurable experimental quantity, but gave it a physical meaning as the positive charge on the atomic nucleus. The elements could be ordered in the periodic system in order of atomic number, rather than atomic weight. The result tied together the organization of the periodic table, the Bohr model for the atom, and Rutherford's model for alpha scattering from nuclei. It was cited by Rutherford, Bohr, and others as a critical advance in understanding the nature of the atomic nucleus. Further research in atomic physics was interrupted by the outbreak of World War I. Moseley was killed in 1915 at the Battle of Gallipoli, while Rutherford's student James Chadwick was interned in Germany for the duration of the war, 1914–1918. In Berlin, Lise Meitner's and Otto Hahn's research work on determining the radioactive decay chains of radium and uranium by precise chemical separation was interrupted. Meitner spent much of the war working as a radiologist and medical X-ray technician near the Austrian front, while Hahn, a chemist, worked on research in poison gas warfare. Rutherford nucleus In 1920 Rutherford gave a Bakerian lecture at the Royal Society entitled the "Nuclear Constitution of Atoms", a summary of recent experiments on atomic nuclei and conclusions as to the structure of atomic nuclei. By 1920, the existence of electrons within the atomic nucleus was widely assumed. It was assumed the nucleus consisted of hydrogen nuclei in number equal to the atomic mass.
But since each hydrogen nucleus had charge +1, the nucleus required a smaller number of "internal electrons" each of charge −1 to give the nucleus its correct total charge. The mass of protons is about 1800 times greater than that of electrons, so the mass of the electrons is incidental in this computation. Such a model was consistent with the scattering of alpha particles from heavy nuclei, as well as the charge and mass of the many isotopes that had been identified. There were other motivations for the proton–electron model. As noted by Rutherford at the time, "We have strong reason for believing that the nuclei of atoms contain electrons as well as positively charged bodies...", namely, it was known that beta radiation was electrons emitted from the nucleus. In that lecture, Rutherford conjectured the existence of new particles. The alpha particle was known to be very stable, and it was assumed to retain its identity within the nucleus. The alpha particle was presumed to consist of four protons and two closely bound electrons to give it +2 charge and mass 4. In a 1919 paper, Rutherford had reported the apparent discovery of a new doubly charged particle of mass 3, denoted the X++, interpreted to consist of three protons and a closely bound electron. This result suggested to Rutherford the likely existence of two new particles: one of two protons with a closely bound electron, and another of one proton and a closely bound electron. The X++ particle was later determined to have mass 4 and to be just a low-energy alpha particle. Nevertheless, Rutherford had conjectured the existence of the deuteron, a +1 charge particle of mass 2, and the neutron, a neutral particle of mass 1. The former is the nucleus of deuterium, discovered in 1931 by Harold Urey. The mass of the hypothetical neutral particle would be little different from that of the proton. Rutherford determined that such a zero-charge particle would be difficult to detect by available techniques. About the time of Rutherford's lecture, other publications appeared with similar suggestions of a proton–electron composite in the nucleus, and in 1921 William Harkins, an American chemist, named the uncharged particle the neutron. About that same time the word proton was adopted for the hydrogen nucleus. Neutron was apparently constructed from the Latin root for neutral and the Greek ending -on (by imitation of electron and proton).
Spinel group
The spinels are any of a class of minerals of general formulation AB2X4 which crystallise in the cubic (isometric) crystal system, with the X anions (typically chalcogens, like oxygen and sulfur) arranged in a cubic close-packed lattice and the cations A and B occupying some or all of the octahedral and tetrahedral sites in the lattice. Although the charges of A and B in the prototypical spinel structure are +2 and +3, respectively (A2+B3+2X2−4), other combinations incorporating divalent, trivalent, or tetravalent cations, including magnesium, zinc, iron, manganese, aluminium, chromium, titanium, and silicon, are also possible. The anion is normally oxygen; when other chalcogenides constitute the anion sublattice the structure is referred to as a thiospinel. A and B can also be the same metal with different valences, as is the case with magnetite, Fe3O4 (as Fe2+Fe3+2O4), which is the most abundant member of the spinel group. Spinels are grouped in series by the B cation. The group is named for spinel, MgAl2O4, which was once known as "spinel ruby". (Today the term ruby is used only for corundum.) Spinel group members Members of the spinel group include: Aluminium spinels: Spinel: MgAl2O4, after which this class of minerals is named Gahnite: ZnAl2O4 Hercynite: FeAl2O4 Galaxite: MnAl2O4 Pleonaste: (Mg,Fe)Al2O4 Iron spinels: Cuprospinel: CuFe2O4 Franklinite: (Zn,Mn,Fe)(Fe,Mn)2O4 Jacobsite: MnFe2O4 Magnesioferrite: MgFe2O4 Magnetite: Fe3O4, where one Fe is +2 and two Fe's are +3 Trevorite: NiFe2O4 Ulvöspinel: Fe2TiO4 Zinc ferrite: (Zn,Fe)Fe2O4 Chromium spinels: Chromite: FeCr2O4 Magnesiochromite: MgCr2O4 Zincochromite: ZnCr2O4 Cobalt spinels: Manganesecobaltite Vanadium spinels: Coulsonite: FeV2O4 Magnesiocoulsonite: MgV2O4 Others with the spinel structure: Ringwoodite: Mg2SiO4, an abundant olivine polymorph within the Earth's mantle from about 520 to 660 km depth, and a rare mineral in meteorites Musgravite: a type of "multi-spinel". There are many more compounds with a spinel structure, e.g. the thiospinels and selenospinels, that can be synthesized in the lab or in some cases occur as minerals. The heterogeneity of spinel group members varies based on composition, with ferrous and magnesium based members varying greatly, as in solid solution, which requires similarly sized cations. However, ferric and aluminium based spinels are almost entirely homogeneous due to their large size difference. The spinel structure The space group of a spinel group mineral may be Fd3m (the same as for diamond), but in some cases (such as spinel itself, MgAl2O4, beyond 452.6 K) it is actually the tetrahedral F43m. Normal spinel structures have oxygen ions closely approximating a cubic close-packed lattice with eight tetrahedral and four octahedral sites per formula unit (but eight times as many per unit cell). The tetrahedral spaces are smaller than the octahedral spaces. B ions occupy half the octahedral holes, while A ions occupy one-eighth of the tetrahedral holes (per unit cell there are 32 anions, hence 64 tetrahedral and 32 octahedral holes, of which the 8 A and 16 B cations fill one-eighth and one-half, respectively). The mineral spinel has a normal spinel structure.
In a normal spinel structure, the ions are in the following positions, where i, j, and k are arbitrary integers and δ, ε, and ζ are small real numbers (note that the unit cell can be chosen differently, giving different coordinates). Every position is understood modulo the face-centring translations ((i+j)/2, (j+k)/2, (i+k)/2):

X: (1/4−δ, δ, δ); (δ, 1/4−δ, δ); (δ, δ, 1/4−δ); (1/4−δ, 1/4−δ, 1/4−δ); (3/4+ε, 1/2−ε, 1/2−ε); (1−ε, 1/4+ε, 1/2−ε); (1−ε, 1/2−ε, 1/4+ε); (3/4+ε, 1/4+ε, 1/4+ε)

A: (1/8, 1/8, 1/8); (7/8, 3/8, 3/8)

B: (1/2+ζ, ζ, ζ); (1/2+ζ, 1/4−ζ, 1/4−ζ); (3/4−ζ, 1/4−ζ, ζ); (3/4−ζ, ζ, 1/4−ζ)

The first four X positions form a tetrahedron around the first A position, and the last four form one around the second A position. When the space group is Fd3m, then δ = ε and ζ = 0. In this case, a three-fold rotoinversion with its axis in the 111 direction is centred on the point (0, 0, 0) (where there is no ion) and can also be centred on the B ion at (1/2, 1/2, 1/2); in fact, every B ion is the centre of a three-fold rotoinversion (point group D3d). Under this space group the two A positions are equivalent. If the space group is F43m, then the three-fold rotoinversions become simple three-fold rotations (point group C3v) because the inversion disappears, and the two A positions are no longer equivalent. Every ion is on at least three mirror planes and at least one three-fold rotation axis. The structure has tetrahedral symmetry around each A ion, and the A ions are arranged just like the carbon atoms in diamond. There are another eight tetrahedral sites per unit cell that are empty, each one surrounded by a tetrahedron of B ions as well as a tetrahedron of X ions. Inverse spinel structures have a different cation distribution, in that all of the A cations and half of the B cations occupy octahedral sites, while the other half of the B cations occupy tetrahedral sites. An example of an inverse spinel is magnetite, Fe3O4, in which the Fe2+ (A2+) ions are d6 high-spin and the Fe3+ (B3+) ions are d5 high-spin. In addition, intermediate cases exist where the cation distribution can be described as (A1−xBx)[Ax/2B1−x/2]2O4, where parentheses () and brackets [] are used to denote tetrahedral and octahedral sites, respectively. The so-called inversion degree x adopts values between 0 (normal) and 1 (inverse), and is equal to 2/3 for a completely random cation distribution. The cation distribution in spinel structures is related to the crystal field stabilization energies (CFSE) of the constituent transition metals. Some ions may have a distinct preference for the octahedral site, depending on the d-electron count. If the A2+ ions have a strong preference for the octahedral site, they will displace half of the B3+ ions from the octahedral sites to tetrahedral sites. Similarly, if the B3+ ions have a low or zero octahedral site stabilization energy (OSSE), then they will occupy tetrahedral sites, leaving the octahedral sites for the A2+ ions. Burdett and co-workers proposed an alternative treatment of the problem of spinel inversion, using the relative sizes of the s and p atomic orbitals of the two types of atom to determine their site preferences.
This is because the dominant stabilizing interaction in the solids is not the crystal field stabilization energy generated by the interaction of the ligands with the d electrons, but the σ-type interactions between the metal cations and the oxide anions. This rationale can explain anomalies in the spinel structures that crystal-field theory cannot, such as the marked preference of Al3+ cations for octahedral sites or of Zn2+ for tetrahedral sites, for which crystal field theory would predict no site preference for either. Only in cases where this size-based approach indicates no preference for one structure over another do crystal field effects make any difference; in effect they are just a small perturbation that can sometimes affect the relative preferences, but often does not. Common uses in industry and technology Spinels commonly form in high-temperature processes. Either the native oxide scales of metals or intentionally deposited spinel coatings can be used to protect base metals from oxidation or corrosion. The spinels thereby serve as thin (a few micrometres thick) functional layers that prevent the diffusion of oxygen (or other atmospheric) ions or of specific metal ions such as chromium, which otherwise diffuses rapidly at high temperatures.
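As an illustration of the coordinate scheme above, the following is a minimal sketch that expands the ideal positions (δ = ε, ζ = 0, Fd3m setting) into one cubic unit cell; the parameter u stands for the small displacement δ and its value here is an assumed input, as is the normal-spinel occupancy.

```python
# Ideal normal-spinel positions per the coordinate list above (delta = eps = u,
# zeta = 0), expanded over one cubic unit cell in fractional coordinates.

def spinel_unit_cell(u=0.01):
    """Return {'X': [...], 'A': [...], 'B': [...]} fractional coordinates."""
    base = {
        "X": [(0.25 - u, u, u), (u, 0.25 - u, u), (u, u, 0.25 - u),
              (0.25 - u, 0.25 - u, 0.25 - u),
              (0.75 + u, 0.5 - u, 0.5 - u), (1 - u, 0.25 + u, 0.5 - u),
              (1 - u, 0.5 - u, 0.25 + u), (0.75 + u, 0.25 + u, 0.25 + u)],
        "A": [(0.125, 0.125, 0.125), (0.875, 0.375, 0.375)],
        "B": [(0.5, 0.0, 0.0), (0.5, 0.25, 0.25),
              (0.75, 0.25, 0.0), (0.75, 0.0, 0.25)],
    }
    # The face-centring translations ((i+j)/2, (j+k)/2, (i+k)/2) reduce,
    # modulo 1, to these four distinct vectors:
    fcc = [(0, 0, 0), (0, 0.5, 0.5), (0.5, 0, 0.5), (0.5, 0.5, 0)]
    cell = {}
    for species, sites in base.items():
        cell[species] = sorted(
            {tuple(round((c + t) % 1.0, 6) for c, t in zip(site, shift))
             for site in sites for shift in fcc}
        )
    return cell

cell = spinel_unit_cell()
# 32 X, 8 A and 16 B sites per unit cell, as the text states.
print({k: len(v) for k, v in cell.items()})  # {'X': 32, 'A': 8, 'B': 16}
```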
Rec. 2020
ITU-R Recommendation BT.2020, more commonly known by the abbreviations Rec. 2020 or BT.2020, defines various aspects of ultra-high-definition television (UHDTV) with standard dynamic range (SDR) and wide color gamut (WCG), including picture resolutions, frame rates with progressive scan, bit depths, color primaries, RGB and luma-chroma color representations, chroma subsamplings, and an opto-electronic transfer function. The first version of Rec. 2020 was posted on the International Telecommunication Union (ITU) website on August 23, 2012, and two further editions have been published since then. Rec. 2020 is extended for high-dynamic-range (HDR) by Rec. 2100, which uses the same color primaries as Rec. 2020. Technical details Resolution Rec. 2020 defines two standard image formats of 3840 × 2160 ("4K") and 7680 × 4320 ("8K"). These both have an aspect ratio of 16:9 and use square pixels. Frame rate Rec. 2020 specifies the following frame rates: 120p, 119.88p, 100p, 60p, 59.94p, 50p, 30p, 29.97p, 25p, 24p, 23.976p. Only progressive scan frame rates are allowed. Digital representation Rec. 2020 defines a bit depth of either 10 bits per sample or 12 bits per sample. 10 bits per sample Rec. 2020 uses video levels where the black level is defined as code 64 and the nominal peak is defined as code 940. Codes 0–3 and 1,020–1,023 are used for the timing reference. Codes 4 through 63 provide video data below the black level while codes 941 through 1,019 provide video data above the nominal peak. 12 bits per sample Rec. 2020 uses video levels where the black level is defined as code 256 and the nominal peak is defined as code 3760. Codes 0–15 and 4,080–4,095 are used for the timing reference. Codes 16 through 255 provide video data below the black level while codes 3,761 through 4,079 provide video data above the nominal peak. System colorimetry The Rec. 2020 (UHDTV/UHD-1/UHD-2) color space can reproduce colors that cannot be shown with the Rec. 709 (HDTV) color space. The RGB primaries used by Rec. 2020 are equivalent to monochromatic light sources on the CIE 1931 spectral locus. The wavelength of the Rec. 2020 primary colors is 630 nm for the red primary color, 532 nm for the green primary color, and 467 nm for the blue primary color. In coverage of the CIE 1931 color space, the Rec. 2020 color space covers 75.8%, the DCI-P3 digital cinema color space covers 53.6%, the Adobe RGB color space covers 52.1%, and the Rec. 709 color space covers 35.9%. During the development of the Rec. 2020 color space it was decided that it would use real colors, instead of imaginary colors, so that it would be possible to show the Rec. 2020 color space on a display without the need for conversion circuitry. Since a larger color space increases the difference between colors, an increase of 1 bit per sample is needed for Rec. 2020 to equal or exceed the color precision of Rec. 709. The NHK measured contrast sensitivity for the Rec. 2020 color space using Barten's equation which had previously been used to determine the bit depth for digital cinema. 11 bits per sample for the Rec. 2020 color space is below the visual modulation threshold, the ability to discern a one-value difference in luminance, for the entire luminance range. The NHK is planning for their UHDTV system, Super Hi-Vision, to use 12 bits per sample RGB. Transfer characteristics Rec. 2020 defines a nonlinear transfer function for gamma correction that is the same nonlinear transfer function that is used by Rec. 
709, except that its parameters are given with higher precision (for 12-bit systems only): E′ = 4.5E for 0 ≤ E < β, and E′ = αE^0.45 − (α − 1) for β ≤ E ≤ 1, where E is the signal proportional to camera-input light intensity, E′ is the corresponding nonlinear signal, α = 1 + 5.5β ≈ 1.09929682680944, and β ≈ 0.018053968510807 (values chosen to achieve a continuous function with a continuous first derivative). The standard says that for practical purposes, the following values of α and β can be used: α = 1.099 and β = 0.018 for a 10-bit-per-sample system (the values given in Rec. 709); α = 1.0993 and β = 0.0181 for a 12-bit-per-sample system. While the Rec. 2020 transfer function can be used for encoding, it is expected that most productions will use a reference monitor with the appearance of an equivalent gamma 2.4 transfer function, as defined in ITU-R BT.1886, and that the reference monitor will be evaluated under viewing conditions as defined in Rec. ITU-R BT.2035. RGB and luma-chroma formats Rec. 2020 allows for RGB and luma-chroma signal formats with 4:4:4 full-resolution sampling, and luma-chroma signal formats with 4:2:2 and 4:2:0 chroma subsampling. It supports two types of luma-chroma signals, called YCbCr and YcCbcCrc. YCbCr may be used when the top priority is compatibility with existing SDTV and HDTV operating practices. The luma (Y′) signal for YCbCr is calculated as the weighted average Y′ = KR⋅R′ + KG⋅G′ + KB⋅B′, using the gamma-corrected RGB values (denoted R′G′B′) and the weighting coefficients KR = 0.2627, KG = 1−KR−KB = 0.678, and KB = 0.0593. As in similar schemes, the chroma components in YCbCr are calculated as C′B = 0.5⋅(B′−Y′)/(1−KB) = (B′−Y′)/1.8814 and C′R = 0.5⋅(R′−Y′)/(1−KR) = (R′−Y′)/1.4746, and for digital representation the Y′, C′B, and C′R signals are scaled, offset by constants, and rounded to integers (a numerical sketch of these conversions is given at the end of this article). The YcCbcCrc scheme is a "constant luminance" luma-chroma representation. YcCbcCrc may be used when the top priority is the most accurate retention of luminance information. The luma component in YcCbcCrc is calculated using the same coefficient values as for YCbCr, but it is computed from linear RGB and then gamma corrected, rather than being calculated from gamma-corrected R′G′B′: Y′ = (KR⋅R + KG⋅G + KB⋅B)′. The chroma components in YcCbcCrc are calculated from the Y′, B′, and R′ signals with equations that depend on the range of values of B′−Y′ and R′−Y′. Color management Just like standard-definition content that uses SMPTE C or NTSC 1953 primaries, BT.2020 content should be color managed to the primaries of the display; this is different from changing the YCbCr matrix. HD content is color managed to BT.709 primaries on linear values. BT.2020 and BT.2100 are usually color managed to P3-D65. The reference color bars for BT.2020 are defined in ARIB STD-B66. Implementations HDMI 2.0 supports the Rec. 2020 color space. HDMI 2.0 can transmit 12-bit-per-sample RGB at a resolution of 2160p and a frame rate of 24/25/30 fps, or 12-bit-per-sample 4:2:2/4:2:0 YCbCr at a resolution of 2160p and a frame rate of 50/60 fps. The Rec. 2020 color space is supported by H.264/MPEG-4 AVC and H.265/High Efficiency Video Coding (HEVC). The Main 10 profile in HEVC was added based on proposal JCTVC-K0109, which proposed that a 10-bit profile be added to HEVC for consumer applications. The proposal stated that this was to allow for improved video quality and to support the Rec. 2020 color space that will be used by UHDTV.
On September 11, 2013, ViXS Systems announced the XCode 6400 SoC, which supports 4K resolution at 60 fps, the Main 10 profile of HEVC, and the Rec. 2020 color space. 2014 On May 22, 2014, Nanosys announced that, using a quantum dot enhancement film (QDEF), a current LCD TV had been modified so that it could cover 91% of the Rec. 2020 color space. Nanosys engineers believe that with improved LCD color filters it is possible to make an LCD that covers 97% of the Rec. 2020 color space. On September 4, 2014, Canon Inc. released a firmware upgrade that added support for the Rec. 2020 color space to their EOS C500 and EOS C500 PL camera models and their DP-V3010 4K display. On September 5, 2014, the Blu-ray Disc Association revealed that the future 4K Blu-ray Disc format would support 4K UHD (3840 × 2160 resolution) video at frame rates up to 60 fps, encoded under the High Efficiency Video Coding standard. 4K Blu-ray Discs support both higher color precision, by increasing the color depth to 10 bits per color, and a greater color gamut, by using the Rec. 2020 color space. The 4K Blu-ray specification allows for three disc sizes: 50 GB, 66 GB and 100 GB. Depending on the disc size and physical configuration, the data rate can reach up to 128 Mbit/s. The first Ultra HD Blu-ray titles were officially released by four studios on March 1, 2016. On November 6, 2014, Google added support for the Rec. 2020 color space to VP9. On November 7, 2014, DivX developers announced that DivX265 version 1.4.21 had added support for the Main 10 profile of HEVC and the Rec. 2020 color space. On December 22, 2014, Avid Technology released an update for Media Composer that added support for 4K resolution, the Rec. 2020 color space, and a bit rate of up to 3,730 Mbit/s with the DNxHD codec. 2015 On January 6, 2015, the MHL Consortium announced the release of the superMHL specification, which will support 8K resolution at 120 fps, 48-bit video, the Rec. 2020 color space, high dynamic range, a 32-pin reversible superMHL connector, and power charging of up to 40 watts. On January 7, 2015, Ateme added support for the Rec. 2020 color space to their TITAN File video platform. On March 18, 2015, Arri announced the SXT line of Arri Alexa cameras, which support Apple ProRes recording at 4K resolution and the Rec. 2020 color space. On April 8, 2015, Canon Inc. announced the DP-V2410 4K display and EOS C300 Mark II camera with support for the Rec. 2020 color space. On May 26, 2015, the NHK announced a 4K LCD with a laser-diode backlight that covers 98% of the Rec. 2020 color space; using a laser allows the generation of almost monochromatic light, and the NHK stated that at the time of the announcement this 4K LCD had the widest color gamut of any display in the world. On June 17, 2015, Digital Projection International presented a 4K LED projector with support for the Rec. 2020 color space. 2016 On January 4, 2016, the UHD Alliance announced their specifications for Ultra HD Premium, which include support for the Rec. 2020 color space. On January 27, 2016, VESA announced that DisplayPort version 1.4 would support the Rec. 2020 color space. On April 17, 2016, Sony presented a 4K OLED display with support for the Rec. 2020 color space. On April 18, 2016, the Ultra HD Forum announced industry guidelines for UHD Phase A, which include support for the Rec. 2020 color space. 2017 At SID Display Week 2017, AUO displayed a 5" foldable 720p HD AMOLED display able to show 95% of the Rec. 2020 color space.
Although 720p is not specified by Rec. 2020, the color space coverage is of note. The Ultra HD Forum guidelines for UHD Phase A include support for SDR formats with 10 bits of color bit depth, based on both the Rec. 709 and Rec. 2020 color gamuts, and also for both the HDR10 and HLG formats of Rec. 2100, with services expected to start by 2017. 2018 At SID Display Week 2018, various companies showcased displays able to cover over 90% of the Rec. 2020 color space. JDI showcased an improved version of their 17.3" 8K LCD broadcast monitor, powered by an RGB laser backlight system, which allows the display to reproduce 97% of the Rec. 2020 color space. Web browsers Rec. 2020 colors are supported in CSS Color Level 4 by the Safari (since version 15.1, 2022) and Google Chrome (since version 111, 2023) browsers. Rec. 2100 Rec. 2100 is an ITU-R Recommendation, released in July 2016, that defines high-dynamic-range (HDR) formats for both HDTV 1080p and 4K/8K UHDTV resolutions. These formats use the same color primaries as Rec. 2020 but different transfer functions for HDR use. Rec. 2100 does not support the YcCbcCrc scheme of Rec. 2020.
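Tying together the transfer characteristics, luma-chroma weights, and 10-bit code levels described above, here is a minimal sketch of the non-constant-luminance encoding chain. The constants follow the published values; the function names and the packaging of the pipeline are illustrative only, and the 10-bit scaling factors (876 for luma, 896 for chroma) are those implied by the stated narrow-range code levels.

```python
# Sketch of the Rec. 2020 SDR encoding chain for one linear RGB sample.
# Constants are from the sections above; function names are illustrative.

ALPHA = 1.09929682680944   # high-precision alpha (= 1 + 5.5*beta)
BETA = 0.018053968510807
KR, KB = 0.2627, 0.0593
KG = 1.0 - KR - KB         # 0.678

def oetf(e: float) -> float:
    """Rec. 2020 opto-electronic transfer function (linear -> nonlinear)."""
    return 4.5 * e if e < BETA else ALPHA * e ** 0.45 - (ALPHA - 1.0)

def rgb_to_ycbcr(r: float, g: float, b: float):
    """Non-constant-luminance Y'CbCr from gamma-corrected R'G'B'."""
    rp, gp, bp = oetf(r), oetf(g), oetf(b)
    y = KR * rp + KG * gp + KB * bp
    cb = (bp - y) / 1.8814   # = 0.5*(B' - Y')/(1 - KB)
    cr = (rp - y) / 1.4746   # = 0.5*(R' - Y')/(1 - KR)
    return y, cb, cr

def quantize10(v: float, chroma: bool = False) -> int:
    """10-bit narrow-range code: luma maps 0..1 to 64..940,
    chroma maps -0.5..0.5 to 64..960 around a 512 midpoint."""
    return round(896 * v + 512) if chroma else round(876 * v + 64)

y, cb, cr = rgb_to_ycbcr(0.18, 0.18, 0.18)  # mid-grey: 18% linear reflectance
print(quantize10(y), quantize10(cb, chroma=True), quantize10(cr, chroma=True))
# -> 422 512 512 (grey has zero chroma; luma sits between black 64 and peak 940)
```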
Strigolactone
Strigolactones are a group of chemical compounds produced by the roots of plants. Because of their mechanism of action, these molecules have been classified as plant hormones, or phytohormones. So far, strigolactones have been identified as responsible for three different physiological processes: first, they promote the germination of parasitic organisms that grow in the host plant's roots, such as Striga lutea and other plants of the genus Striga; second, they are fundamental for the recognition of the plant by symbiotic fungi, especially arbuscular mycorrhizal fungi, which establish a mutualistic association with these plants and provide phosphate and other soil nutrients; and third, they have been identified as branching-inhibition hormones in plants, which, when present, prevent excess bud growth in stem terminals and so stop the branching mechanism. Strigolactones comprise a diverse group, but they all share a common core chemical structure, as shown in the image to the right. The structure is based on a tricyclic lactone linked to a hydroxymethyl butenolide; the former is represented in the figure as the A-B-C part, while the latter is the D part of the molecule. Most strigolactones present variations in the ABC part, but the D ring is quite constant across species, which has led researchers to suspect that the biological activity relies on this part of the molecule. Different studies have demonstrated that the activity of the molecules is lost when the C-D section is modified. Since strigolactones are involved in the signaling pathway required for the germination of parasitic species (such as Striga sp.), they have been proposed as a target for controlling these parasitic pests and their overgrowth: a molecule similar to the strigolactones could be the key to designing a chemical and biological mechanism to stop the colonization of the plant root by parasitic plants. Discovery and functions Germination of parasitic plants Strigolactones were first isolated in 1966 from cotton plants, specifically from the roots, but their role in the germination of other organisms was not determined until later. Previous studies with Striga lutea had already shown that root extracts from host plants were necessary for the parasitic seeds to start germinating, which made it clear that a substance produced in the roots was stimulating this process. The isolation of strigolactones led to a series of tests that proved that this compound was the molecule necessary to induce germination of Striga species. Later, similar compounds were shown to produce the same effect: sorgolactone and alectrol, both of which present the characteristic lactone group and so were classified as strigolactones. To induce the germination of parasitic plants, strigolactones need to be present only in trace amounts, on the order of 5 parts per million. Shoot branching hormone inhibition The role of strigolactones as branching-inhibition hormones was discovered through the use of a new set of mutant plants. These mutants presented excessive growth of the axillary buds, which induced their terminal stems to start branching abnormally. Previously, cytokinins were thought to be the only molecules involved in the regulation of stem branching, but these mutants presented normal production and signaling of cytokinins, leading to the conclusion that another substance was acting on the axillary buds.
Grafting tests, in which parts of mutant plants were inserted into wild specimens (and vice versa), demonstrated that the mutants either were not able to recognize a signal molecule coming from the roots and the lower part of the plant, or were not able to produce the molecules required to inhibit branching. This root-derived molecule involved in branching regulation was later identified as a strigolactone. The conclusion was that, in the presence of strigolactones, the plant is prevented from developing excessive branches, whereas when they are not present the axillary buds induce abnormal branching. Chemistry Properties Although strigolactones vary in some of their functional groups, their melting point is usually between 200 and 202 degrees Celsius. Decomposition of the molecule occurs after reaching 195 °C. They are highly soluble in polar solvents such as acetone, soluble in benzene, and almost insoluble in hexane. Chemical structures Examples of strigolactones include strigol, sorgolactone, alectrol and 5-deoxystrigol. Biosynthesis Carotenoid pathway via carlactone The biosynthetic pathway of the strigolactones has not been fully elucidated, but different steps have been identified, including the enzymes required to carry out each chemical transformation. The first step is the isomerization of the 9th chemical bond of all-trans-β-carotene, changing it from the trans to the cis configuration. This step is carried out by the enzyme β-carotene isomerase, also called DWARF27 (or D27 for short), which requires iron as a cofactor. The second step is the chemical separation of 9-cis-β-carotene into two different compounds: a 9-cis-configured apocarotenal aldehyde and β-ionone. This second step is catalyzed by carotenoid cleavage dioxygenase 7 (CCD7). In the third step, another carotenoid cleavage dioxygenase, CCD8 (from the same family as CCD7), catalyzes the conversion and rearrangement of the aldehyde created in the previous step, producing carlactone. Recent research has identified two parallel strigolactone biosynthetic pathways in maize, both of which produce the major maize strigolactone, zealactone. The enzyme ZmCYP706C37 catalyzes several consecutive oxidative reactions, with 3-hydroxy-MeCLA and 3-oxo-MeCLA as putative intermediates, to form zealactone from MeCLA; both 3-hydroxy-MeCLA and 3-oxo-MeCLA were successfully converted to zealactone by ZmCYP706C37. The balance between zealactone and two other strigolactones, zealactol and zealactonoic acid (ZA), can be altered by changes in flux through these pathways. This discovery has implications for breeding Striga resistance in maize by modifying the strigolactone blend, potentially reducing the devastating effects of this parasitic weed in Africa. It is still not clear exactly how carlactone is transformed into the different strigolactones identified so far, but several studies have shown that carlactone is the precursor of the strigolactones. This last step of the biosynthesis should involve the addition of at least two oxygen molecules to convert carlactone into 5-deoxystrigol, a simple strigolactone, and further oxidation should be required to produce other, more complex strigolactones. The protein MAX1 has been proposed to catalyze the last step of strigolactone biosynthesis, owing to its role in the oxidative metabolism of plants.
Role of ABA in biosynthesis Abscisic acid (ABA) and the strigolactones share a common group of enzymes that carry out their synthesis. A correlation between the two biosynthetic pathways had been demonstrated previously and has been supported by different studies. ABA biosynthesis relies on a set of enzymes called 9-cis-epoxycarotenoid dioxygenases (NCED). Mutant plants that were defective in the production of NCED enzymes not only presented low levels of ABA, but also presented low levels of strigolactones, specifically in the root extracts where this hormone is mostly synthesized; this finding provided the basis for the existence of a common enzymatic machinery. Other experiments, consisting of blocking the NCED enzymes and of using mutants unable to detect ABA changes, have been used to support this theory. So far there is a clear correlation between the two syntheses, related to the use of NCED enzymes in both, but the exact mechanism by which they are connected remains unclear. Molecular perception In plants, strigolactones are perceived by the dual receptor/hydrolase protein DWARF14 (D14), a member of the α/β hydrolase superfamily. Although D14 is considered a hydrolase with poor substrate turnover, an intact catalytic triad is required for the protein's biological function. Molecular dynamics studies have suggested that the ligand-binding pocket is flexible and that the catalytic triad plays an important role in ligand binding and positioning. Several (in part competing) models have been proposed for the involvement of the catalytic triad in ligand perception: hydrolysis of strigolactone, resulting in the D-ring being covalently attached to the active-site serine; hydrolysis of strigolactone, resulting in a free D-ring that serves as a molecular glue at the entrance of the receptor, mediating interaction with another protein; binding of non-hydrolyzed, intact strigolactone, generating an altered DWARF14 protein surface that mediates interaction with another protein; hydrolysis of strigolactone, resulting in the D-ring being covalently attached to the active-site histidine; and hydrolysis of strigolactone, resulting in the D-ring being covalently attached to the active-site serine and histidine at the same time, inducing a conformational change of DWARF14 that leads to interaction with another protein. Kinetic results have suggested that the intact strigolactone triggers a signaling cascade, after which hydrolysis is carried out as a final step to inactivate the strigolactone molecule. Mechanism of action Germination of arbuscular mycorrhizae Strigolactones are known to stimulate the germination of arbuscular mycorrhizal spores. Since they produce this effect at extremely low concentrations, it has been proposed that the mechanism of activation must be a signaling pathway. Different studies with diverse types of fungi have found that, after stimulation with strigolactones, the fungal cells present a higher number of mitochondria and an increase in their oxidative activity. Because of the role of mitochondria in the oxidative metabolism of macronutrients, it is thought that the spores remain inactive before finding the host plant, and that once they are stimulated with strigolactones the oxidative machinery of the mitochondria is activated to produce the energy and nutrients necessary for the germination of the spore and for fungal branching.
Studies with root extracts support this hypothesis, and so far strigolactones are the candidate molecules that best explain this increase in mitochondrial activity. Auxin-mediated secondary growth It has been established that secondary growth in plants is mainly regulated by the phytohormone auxin. However, the mechanism of auxin secretion is itself regulated by strigolactones, so the latter can control secondary growth through auxin. When applied to the terminal buds of the stem, strigolactones can block the expression of the transport proteins required to move auxin across the buds, known as PIN1 proteins. Accordingly, strigolactone-deficient mutants were found to over-express the PIN1 protein, which facilitates the transport of auxin in the terminal buds; auxin suppressed the mitotic activity of these buds, stopping the plant from initiating secondary growth and branching. In conclusion, plants depend on auxin transport for the initiation or inhibition of secondary growth, but this transport mechanism depends on the production of strigolactones, which can easily travel from their site of production (the roots) to the terminal buds of the stem through the xylem. Ecology Plant-fungi interaction Strigolactones play a fundamental role in plant-fungi interaction. One of the first studies, conducted in Lotus japonicus, demonstrated that compounds extracted from the root were necessary for the development of the arbuscular mycorrhizal fungi that establish a symbiotic relationship with the plant's roots. The same findings held for other plants, such as maize and sorghum. The compounds responsible for the branching of the arbuscular fungi were later isolated; they include 5-deoxystrigol, strigol and sorgolactone, all belonging to the strigolactone family of compounds. The process of branching is crucial to establishing the symbiosis. Since this branching only occurs after the germination of the spores and the initial growth of the hyphae, the strigolactones required for germination have to be secreted by the plant and reach the fungi, meaning that strigolactones are also part of the recognition process by the fungi. Because arbuscular mycorrhizae can form symbiotic associations with the majority of the angiosperms, and many gymnosperms, different strigolactone compounds are expected to be distributed across a wide variety of plants. Unfortunately, while strigolactones are thought to occur in most plants, studies of strigolactones and AM fungi have so far covered only a very limited range of plant species, mostly because these compounds are difficult to extract and degrade readily in solution. Strigolactones are not only necessary for the recognition of the plant by the fungi; they are also required for the recognition of the fungi by the plant. The mechanism of fungal recognition occurs in a similar fashion to the recognition of bacteria such as Rhizobia sp. It has been proposed that the recognition mechanism for bacteria evolved from the mechanism for recognizing fungi, because the latter is known to be more primitive and ancient. Just as bacteria use Nod factors, the fungi use a set of molecules known as Myc factors. These fungal products can be recognized by different plants and are not plant-specific. When these Myc factors are recognized by the plant's roots, they stimulate the expression of different genes involved in the initiation of the symbiotic association. 
However, the secretion of the Myc factor by the fungi occurs only after prior stimulation by strigolactones from the plant, demonstrating the necessary role of these compounds in recognition in both directions (of the plant by the fungi, and of the fungi by the plant). Strigolactones have also been reported to produce other changes in fungal cells, such as an increase in the concentration of intracellular calcium and an increase in lipochitooligosaccharides (LCOs); the latter have been shown to be among the Myc factors produced by the fungi for their recognition by the plant. One of the main roles of arbuscular fungi in symbiotic association with plants is to provide soil nutrients, especially phosphate, to the plant. When phosphate in the depletion zone becomes very low, the plant depends mainly on the AM fungi to fulfill its phosphate demands. Studies with tomato plants have shown that, under phosphate deficit, plants produce higher amounts of strigolactones, which in turn increase the branching of AM fungi. This extra development of the fungi is expected to provide the additional phosphate required by the plant, since the fungi can now spread over a larger soil area. However, since strigolactones also stimulate the germination of parasitic plants, these phosphate-deficient plants also suffer greater invasion by parasitic species such as Striga sp. Providing adequate phosphate through soil fertilization has been shown to reduce the proliferation of these parasites, because they require strigolactones for germination.
Biology and health sciences
Plant hormone
Biology
57307464
https://en.wikipedia.org/wiki/Word%20processor
Word processor
A word processor (WP) is a device or computer program that provides for input, editing, formatting, and output of text, often with some additional features. Early word processors were stand-alone devices dedicated to the function, but current word processors are word processor programs running on general purpose computers. The functions of a word processor program fall somewhere between those of a simple text editor and a fully featured desktop publishing program. While the distinction between a text editor and a word processor is clear—namely the capability of editing rich text—the distinction between a word processor and a desktop publishing program has become unclear as word processing software has gained features such as the ligature support added to the 2010 version of Microsoft Word. Common word processor programs include LibreOffice Writer, Google Docs and Microsoft Word. Background Word processors developed from mechanical machines, later merging with computer technology. The history of word processing is the story of the gradual automation of the physical aspects of writing and editing, and then of the refinement of the technology to make it available to corporations and individuals. The term word processing appeared in American offices in the early 1970s, centered on the idea of streamlining the work of typists, but the meaning soon shifted toward the automation of the whole editing cycle. At first, the designers of word processing systems combined existing technologies with emerging ones to develop stand-alone equipment, creating a new business distinct from the emerging world of the personal computer. The concept of word processing arose from the more general data processing, which since the 1950s had been the application of computers to business administration. Through history, there have been three types of word processors: mechanical, electronic and software. Mechanical word processing The first word processing device (a "Machine for Transcribing Letters" that appears to have been similar to a typewriter) was patented in 1714 by Henry Mill; the machine was said to be capable of "writing so clearly and accurately you could not distinguish it from a printing press". More than a century later, another patent appeared in the name of William Austin Burt for the typographer. In the late 19th century, Christopher Latham Sholes created the first recognizable typewriter, which was described as a "literary piano". The only "word processing" these mechanical systems could perform was to change where letters appeared on the page, to fill in spaces that were previously left on the page, or to skip over lines. It was not until decades later that the introduction of electricity and electronics into typewriters began to help the writer with the mechanical part. The term "word processing" (a translation of the German word Textverarbeitung) was possibly coined in the 1950s by Ulrich Steinhilper, a German IBM typewriter sales executive, or by an American electro-mechanical typewriter executive, George M. Ryan, who obtained a trademark registration in the USPTO for the phrase. However, it did not make its appearance in 1960s office management or computing literature (an example of grey literature), though many of the ideas, products, and technologies to which it would later be applied were already well known. Nonetheless, by 1971, the term was recognized by the New York Times as a business "buzz word". 
Word processing paralleled the more general "data processing", or the application of computers to business administration. Thus, by 1972, discussion of word processing was common in publications devoted to business office management and technology; by the mid-1970s, the term would have been familiar to any office manager who consulted business periodicals. Electromechanical and electronic word processing By the late 1960s, IBM had developed the IBM MT/ST (Magnetic Tape/Selectric Typewriter). It was based on the IBM Selectric typewriter of 1961, but built into its own desk and integrated with magnetic tape recording and playback facilities, along with controls and a bank of electrical relays. The MT/ST automated word wrap, but it had no screen. This device allowed a user to rewrite text that had been written on another tape, and it also allowed limited collaboration in the sense that a user could send the tape to another person to let them edit the document or make a copy. It was a revolution for the word processing industry. In 1969, the tapes were replaced by magnetic cards. These memory cards were inserted into an extra device that accompanied the MT/ST, able to read and record users' work. Throughout the 1960s and 70s, word processing began to slowly shift from glorified typewriters augmented with electronic features to become fully computer-based (although only with single-purpose hardware) with the development of several innovations. Just before the arrival of the personal computer (PC), IBM developed the floppy disk. In the 1970s, the first proper word-processing systems appeared, which allowed display and editing of documents on CRT screens. During this era, these early stand-alone word processing systems were designed, built, and marketed by several pioneering companies. Linolex Systems was founded in 1970 by James Lincoln and Robert Oleksiak. Linolex based its technology on microprocessors, floppy drives and software. It was a computer-based system for application in the word processing businesses and it sold systems through its own sales force. With a base of installed systems in over 500 sites, Linolex Systems sold 3 million units in 1975 — a year before the Apple computer was released. At that time, the Lexitron Corporation also produced a series of dedicated word-processing microcomputers. Lexitron was the first to use a full-sized video display screen (CRT) in its models by 1978. Lexitron also used 5¼-inch floppy diskettes, which became the standard in the personal computer field. The program disk was inserted in one drive, and the system booted up. The data diskette was then put in the second drive. The operating system and the word processing program were combined in one file. Another of the early word processing adopters was Vydec, which in 1973 created the first modern text processor, the "Vydec Word Processing System". It had multiple built-in functions, such as the ability to share content by diskette and to print it. The Vydec Word Processing System sold for $12,000 at the time (about $60,000 adjusted for inflation). The Redactron Corporation (organized by Evelyn Berezin in 1969) designed and manufactured editing systems, including correcting/editing typewriters, cassette and card units, and eventually a word processor called the Data Secretary. The Burroughs Corporation acquired Redactron in 1976. A CRT-based system by Wang Laboratories became one of the most popular systems of the 1970s and early 1980s. 
The Wang system displayed text on a CRT screen, and incorporated virtually every fundamental characteristic of word processors as they are known today. While early computerized word processing systems were often expensive and hard to use (like the computer mainframes of the 1960s), the Wang system was a true office machine, affordable to organizations such as medium-sized law firms, and easily mastered and operated by secretarial staff. The phrase "word processor" rapidly came to refer to CRT-based machines similar to Wang's. Numerous machines of this kind emerged, typically marketed by traditional office-equipment companies such as IBM, Lanier (re-badged AES Data machines), CPT, and NBI. All were specialized, dedicated, proprietary systems, with prices in the $10,000 range. Cheap general-purpose personal computers were still the domain of hobbyists. Japanese word processor devices In Japan, even though typewriters using the Japanese writing system had been widely used by businesses and governments, they were limited to specialists and required special skills due to the wide variety of characters, until computer-based devices came onto the market. In 1977, Sharp showcased a prototype of a dedicated computer-based word processing device for the Japanese writing system at the Business Show in Tokyo. Toshiba released the first Japanese word processor in February 1979. The price was 6,300,000 JPY, equivalent to US$45,000; the device has been selected as one of the IEEE milestones. The Japanese writing system uses a large number of kanji (logographic Chinese characters), which require 2 bytes to store, so having one key per symbol is infeasible. Japanese word processing became possible with the development of the Japanese input method (a sequence of keypresses, with visual feedback, which selects a character), now widely used in personal computers. Oki launched the OKI WORD EDITOR-200 in March 1979 with this kana-based keyboard input system. In 1980 several electronics and office equipment brands entered this rapidly growing market with more compact and affordable devices; NEC and Fujitsu, for instance, introduced their own models. While the average unit price in 1980 was 2,000,000 JPY (US$14,300), it had dropped to 164,000 JPY (US$1,200) by 1985. Even after personal computers became widely available, Japanese word processors remained popular, as they tended to be more portable (an "office computer" was initially too large to carry around), and they became commonplace in business and academia, and even among private individuals, in the second half of the 1980s. The phrase "word processor" has been abbreviated as "Wa-pro" or "wapuro" in Japanese. Word processing software The final step in word processing came with the advent of the personal computer in the late 1970s and 1980s and with the subsequent creation of word processing software. Word processing software that would create much more complex and capable output was developed, and prices began to fall, making the programs more accessible to the public. By the late 1970s, computerized word processors were still primarily used by employees composing documents for large and midsized businesses (e.g., law firms and newspapers). Within a few years, the falling prices of PCs made word processing available for the first time to all writers in the convenience of their homes. The first word processing program for personal computers (microcomputers) was Electric Pencil, from Michael Shrayer Software, which went on sale in December 1976. 
In 1978, WordStar appeared and, because of its many new features, soon dominated the market. WordStar was written for the early CP/M (Control Program–Micro) operating system, ported to CP/M-86, then to MS-DOS, and was the most popular word processing program until 1985, when WordPerfect sales first exceeded WordStar sales. Early word processing software was not as intuitive as word processor devices. Most early word processing software required users to memorize semi-mnemonic key combinations rather than pressing keys such as "copy" or "bold". Moreover, CP/M lacked cursor keys; WordStar, for example, used the E-S-D-X-centered "diamond" for cursor navigation. A notable exception was the software Lexitype for MS-DOS, which took inspiration from the Lexitron dedicated word processor's user interface and mapped individual functions to particular keyboard function keys; a set of stick-on "keycaps" describing the functions was provided with the software. Lexitype was popular with large organizations that had previously used the Lexitron. Eventually, the price differences between dedicated word processors and general-purpose PCs, and the value added to the latter by software such as the "killer app" spreadsheet applications VisiCalc and Lotus 1-2-3, were so compelling that personal computers and word processing software became serious competition for the dedicated machines and soon dominated the market. The late 1980s brought innovations such as the advent of laser printers; a "typographic" approach to word processing (WYSIWYG, What You See Is What You Get) using bitmap displays with multiple fonts, pioneered by the Xerox Alto computer and Bravo word processing program; and graphical user interface features such as "copy and paste" (another Xerox PARC innovation, with the Gypsy word processor). These were popularized by MacWrite on the Apple Macintosh in 1984, and Microsoft Word on the IBM PC in 1983; they were probably the first true WYSIWYG word processors to become known to many people. Of particular interest also is the standardization of TrueType fonts used in both Macintosh and Windows PCs. While the publishers of the operating systems provide TrueType typefaces, they are largely gathered from traditional typefaces converted by smaller font publishing houses to replicate standard fonts. Demand developed for new and interesting fonts, which can be found free of copyright restrictions or commissioned from font designers. The growing popularity of the Windows operating system in the 1990s later took Microsoft Word along with it. Originally called "Microsoft Multi-Tool Word", this program quickly became a synonym for "word processor". Early in the 21st century, Google Docs popularized the transition to online or offline web-browser-based word processing. This was enabled by the widespread adoption of suitable internet connectivity in businesses and domestic households, and later by the popularity of smartphones. Google Docs enabled word processing from within any vendor's web browser, which could run on any vendor's operating system on any physical device type, including tablets and smartphones, although offline editing is limited to a few Chromium-based web browsers. Google Docs also enabled significant growth in the use of information technology such as remote access to files and collaborative real-time editing, both becoming simple to do with little or no need for costly software and specialist IT support.
Technology
Computer software
42472109
https://en.wikipedia.org/wiki/Law%20of%20reciprocal%20proportions
Law of reciprocal proportions
The law of reciprocal proportions, also called the law of equivalent proportions or the law of permanent ratios, is one of the basic laws of stoichiometry. It relates the proportions in which elements combine across a number of different elements. It was first formulated by Jeremias Richter in 1791. A simple statement of the law is: If element A combines with element B and also with C, then, if B and C combine together, the proportion by weight in which they do so will be simply related to the weights of B and C which separately combine with a constant weight of A. As an example, 1 gram of sodium (Na = A) is observed to combine with either 1.54 grams of chlorine (Cl = B) or 5.52 grams of iodine (I = C). (These ratios correspond to the modern formulas NaCl and NaI.) The ratio of these two weights is 5.52/1.54 = 3.58. It is also observed that 1 gram of chlorine reacts with 1.19 g of iodine. This ratio of 1.19 obeys the law because it is a simple fraction (1/3) of 3.58. (This is because it corresponds to the formula ICl3, which is one known compound of iodine and chlorine.) Similarly, hydrogen, carbon, and oxygen follow the law of reciprocal proportions. The acceptance of the law allowed tables of element equivalent weights to be drawn up. These equivalent weights were widely used by chemists in the 19th century. The other laws of stoichiometry are the law of definite proportions and the law of multiple proportions. The law of definite proportions refers to the fixed composition of any compound formed between element A and element B. The law of multiple proportions describes the stoichiometric relationship between two or more different compounds formed between element A and element B. The law states that if two different elements combine separately with a fixed mass of a third element, the ratio of the masses in which they do so is either the same as, or a simple multiple of, the ratio of the masses in which they combine with each other. History The law of reciprocal proportions was proposed in essence by Richter, following his determination of the neutralisation ratios of metals with acids. In the early 19th century it was investigated by Berzelius, who formulated it as follows: When two substances, A and B, have an affinity for two others, C and D, the ratio of the quantities C and D which saturate the same amount of A is the same as that between the quantities C and D which saturate the same amount of B. Later Jean Stas showed that, within experimental error, the stoichiometric laws were correct.
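The arithmetic in the sodium/chlorine/iodine example can be verified directly. Below is a minimal Python sketch using only the combining weights quoted above; the variable names are illustrative:
<syntaxhighlight lang="python">
# Weights observed to combine with 1 gram of sodium (values from the text)
cl_per_na = 1.54   # grams of chlorine per gram of sodium (NaCl)
i_per_na = 5.52    # grams of iodine per gram of sodium (NaI)

# Weight of iodine observed to combine with 1 gram of chlorine (ICl3)
i_per_cl = 1.19

ratio_via_na = i_per_na / cl_per_na   # ratio of the two combining weights
print(ratio_via_na)                   # ~3.58
print(ratio_via_na / i_per_cl)        # ~3.0 -> a simple multiple, as the law requires
</syntaxhighlight>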
Physical sciences
Reaction
Chemistry
34251640
https://en.wikipedia.org/wiki/Acid%20strength
Acid strength
Acid strength is the tendency of an acid, symbolised by the chemical formula HA, to dissociate into a proton, H+, and an anion, A-. The dissociation or ionization of a strong acid in solution is effectively complete, except in its most concentrated solutions. HA -> H+ + A- Examples of strong acids are hydrochloric acid (HCl), perchloric acid (HClO4), nitric acid (HNO3) and sulfuric acid (H2SO4). A weak acid is only partially dissociated, or is partly ionized in water, with both the undissociated acid and its dissociation products present in solution, in equilibrium with each other. HA <=> H+ + A- Acetic acid (CH3COOH) is an example of a weak acid. The strength of a weak acid is quantified by its acid dissociation constant (Ka) value. The strength of a weak organic acid may depend on substituent effects. The strength of an inorganic acid is dependent on the oxidation state of the atom to which the proton may be attached. Acid strength is solvent-dependent. For example, hydrogen chloride is a strong acid in aqueous solution, but is a weak acid when dissolved in glacial acetic acid. Measures of acid strength The usual measure of the strength of an acid is its acid dissociation constant (Ka), which can be determined experimentally by titration methods. Stronger acids have a larger Ka and a smaller logarithmic constant (pKa = -log10 Ka) than weaker acids. The stronger an acid is, the more easily it loses a proton, H+. Two key factors that contribute to the ease of deprotonation are the polarity of the H-A bond and the size of atom A, which determine the strength of the H-A bond. Acid strengths also depend on the stability of the conjugate base. While the pKa value measures the tendency of an acidic solute to transfer a proton to a standard solvent (most commonly water or DMSO), the tendency of an acidic solvent to transfer a proton to a reference solute (most commonly a weak aniline base) is measured by its Hammett acidity function, the H0 value. Although these two concepts of acid strength often amount to the same general tendency of a substance to donate a proton, the pKa and H0 values are measures of distinct properties and may occasionally diverge. For instance, hydrogen fluoride, whether dissolved in water (pKa = 3.2) or DMSO (pKa = 15), has pKa values indicating that it undergoes incomplete dissociation in these solvents, making it a weak acid. However, as the rigorously dried, neat acidic medium, hydrogen fluoride has an H0 value of -15, making it a more strongly protonating medium than 100% sulfuric acid and thus, by definition, a superacid. (To prevent ambiguity, in the rest of this article, "strong acid" will, unless otherwise stated, refer to an acid that is strong as measured by its pKa value (pKa < -1.74). This usage is consistent with the common parlance of most practicing chemists.) When the acidic medium in question is a dilute aqueous solution, the H0 is approximately equal to the pH value, which is the negative logarithm of the concentration of aqueous H+ in solution. The pH of a simple solution of an acid in water is determined by both Ka and the acid concentration. For weak acid solutions, it depends on the degree of dissociation, which may be determined by an equilibrium calculation. For concentrated solutions of acids, especially strong acids for which pH < 0, the H0 value is a better measure of acidity than the pH. 
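Since pKa is simply the negative base-10 logarithm of Ka, converting between the two measures is a one-line calculation; stronger acids show a larger Ka and a smaller pKa. A minimal Python sketch (the Ka values are taken from figures quoted in this article; the function name is illustrative):
<syntaxhighlight lang="python">
import math

def pKa(Ka: float) -> float:
    """Logarithmic acidity constant: pKa = -log10(Ka)."""
    return -math.log10(Ka)

# Acetic acid: Ka = 1.75e-5 (value quoted later in this article)
# Hydrogen fluoride in water: pKa = 3.2, i.e. Ka = 10**-3.2
print(pKa(1.75e-5))     # ~4.76 (a weak acid)
print(pKa(10 ** -3.2))  # 3.2 -> HF has the larger Ka, hence the smaller pKa
</syntaxhighlight>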
Strong acids A strong acid is an acid that dissociates according to the reaction HA + S <=> SH+ + A- where S represents a solvent molecule, such as a molecule of water or dimethyl sulfoxide (DMSO), to such an extent that the concentration of the undissociated species HA is too low to be measured. For practical purposes a strong acid can be said to be completely dissociated. An example of a strong acid is perchloric acid. HClO4 -> H+ + ClO4- (in aqueous solution) Any acid with a pKa value less than about -2 behaves as a strong acid. This results from the very high buffer capacity of solutions with a pH value of 1 or less and is known as the leveling effect. The following are strong acids in aqueous and dimethyl sulfoxide solution. As mentioned above, because the dissociation is so strongly favored, the concentrations of HA, and thus the values of Ka, cannot be measured experimentally. The pKa values in the following table are average values from as many as 8 different theoretical calculations.
{| class="wikitable"
|+Estimated pKa values
|-
!Acid!!Formula!!pKa in water!!pKa in DMSO
|-
|Hydrochloric acid||HCl||−5.9 ± 0.4||−2.0 ± 0.6
|-
|Hydrobromic acid||HBr||−8.8 ± 0.8||−6.8 ± 0.8
|-
|Hydroiodic acid||HI||−9.5 ± 1||−10.9 ± 1
|-
|Triflic acid||H[CF3SO3]||−14 ± 2||−14 ± 2
|-
|Perchloric acid||H[ClO4]||−15 ± 2||−15 ± 2
|}
Also, in water: Nitric acid HNO3 (pKa = -1.6); Sulfuric acid H2SO4 (first dissociation only, pKa ≈ -3). The following can be used as protonators in organic chemistry: Fluoroantimonic acid H[SbF6]; Magic acid H[FSO3SbF5]; Carborane superacid H[CHB11Cl11]; Fluorosulfuric acid H[FSO3] (pKa = -6.4). Sulfonic acids, such as p-toluenesulfonic acid (tosylic acid), are a class of strong organic oxyacids. Some sulfonic acids can be isolated as solids. Polystyrene functionalized into polystyrene sulfonate is an example of a substance that is a solid strong acid. Weak acids A weak acid is a substance that partially dissociates or partly ionizes when it is dissolved in a solvent. In solution, there is an equilibrium between the acid, HA, and the products of dissociation: HA <=> H+ + A- The solvent (e.g. water) is omitted from this expression when its concentration is effectively unchanged by the process of acid dissociation. The strength of a weak acid can be quantified in terms of a dissociation constant, Ka, defined as follows, where [X] signifies the concentration of a chemical moiety X: Ka = [H+][A-] / [HA] When a numerical value of Ka is known, it can be used to determine the extent of dissociation in a solution with a given concentration of the acid, CA, by applying the law of conservation of mass: CA = [A-] + [HA] where CA is the value of the analytical concentration of the acid. When all the quantities in this equation are treated as numbers, ionic charges are not shown, and this becomes a quadratic equation in the value of the hydrogen ion concentration, [H+]: [H+]^2 + Ka[H+] - KaCA = 0 This equation shows that the pH of a solution of a weak acid depends on both its Ka value and its concentration. Typical examples of weak acids include acetic acid and phosphorous acid. An acid such as oxalic acid (HOOC-COOH) is said to be dibasic because it can lose two protons and react with two molecules of a simple base. Phosphoric acid (H3PO4) is tribasic. For a more rigorous treatment of acid strength see acid dissociation constant. This includes acids such as the dibasic acid succinic acid, for which the simple method of calculating the pH of a solution, shown above, cannot be used. 
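To make the weak-acid calculation above concrete, the quadratic in [H+] can be solved directly for its positive root. A minimal Python sketch (the function name is illustrative; the acetic acid Ka value is the one quoted in the conjugate acid/base discussion below):
<syntaxhighlight lang="python">
import math

def weak_acid_pH(Ka: float, Ca: float) -> float:
    """pH of a weak monoprotic acid HA at analytical concentration Ca (mol/L).

    From Ka = [H+][A-]/[HA], mass balance Ca = [HA] + [A-], and [H+] = [A-],
    [H+] satisfies the quadratic [H+]**2 + Ka*[H+] - Ka*Ca = 0; the positive
    root is taken below. Water autoionization is neglected.
    """
    h = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * Ca)) / 2.0
    return -math.log10(h)

print(weak_acid_pH(1.75e-5, 0.10))   # acetic acid, 0.10 M -> pH ~ 2.9
</syntaxhighlight>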
Experimental determination The experimental determination of a pKa value is commonly performed by means of a titration. A typical procedure is as follows. A quantity of strong acid is added to a solution containing the acid or a salt of the acid, to the point where the compound is fully protonated. The solution is then titrated with a strong base HA + OH- -> A- + H2O until only the deprotonated species, A-, remains in solution. At each point in the titration the pH is measured using a glass electrode and a pH meter. The equilibrium constant is found by fitting calculated pH values to the observed values, using the method of least squares. Conjugate acid/base pair It is sometimes stated that "the conjugate of a weak acid is a strong base". Such a statement is incorrect. For example, acetic acid is a weak acid with Ka = 1.75 × 10^-5. Its conjugate base is the acetate ion, with Kb = 10^-14/Ka = 5.7 × 10^-10 (from the relationship Ka × Kb = 10^-14), which certainly does not correspond to a strong base. The conjugate of a weak acid is often a weak base, and vice versa. Acids in non-aqueous solvents The strength of an acid varies from solvent to solvent. An acid which is strong in water may be weak in a less basic solvent, and an acid which is weak in water may be strong in a more basic solvent. According to Brønsted–Lowry acid–base theory, the solvent S can accept a proton. HA + S <=> A- + HS+ For example, hydrochloric acid is a weak acid in solution in pure acetic acid, HO2CCH3, which is more acidic than water. HO2CCH3 + HCl <=> (HO)2CCH3+ + Cl- The extent of ionization of the hydrohalic acids decreases in the order HI > HBr > HCl. Acetic acid is said to be a differentiating solvent for the three acids, while water is not. An important example of a solvent which is more basic than water is dimethyl sulfoxide, DMSO, (CH3)2SO. A compound which is a weak acid in water may become a strong acid in DMSO; acetic acid is an example of such a substance. An extensive bibliography of pKa values in solution in DMSO and other solvents can be found at Acidity–Basicity Data in Nonaqueous Solvents. Superacids are strong acids even in solvents of low dielectric constant. Examples of superacids are fluoroantimonic acid and magic acid. Some superacids can be crystallised. They can also quantitatively stabilize carbocations. Lewis acids reacting with Lewis bases in the gas phase and in non-aqueous solvents have been classified in the ECW model, and it has been shown that there is no single order of acid strengths. The relative acceptor strength of Lewis acids toward a series of bases, versus other Lewis acids, can be illustrated by C-B plots. It has been shown that to define the order of Lewis acid strength at least two properties must be considered. For the qualitative HSAB theory the two properties are hardness and strength, while for the quantitative ECW model the two properties are electrostatic and covalent. Factors determining acid strength The inductive effect In organic carboxylic acids, an electronegative substituent can pull electron density out of an acidic bond through the inductive effect, resulting in a smaller pKa value. The effect decreases the further the electronegative element is from the carboxylate group, as illustrated by the following series of halogenated butanoic acids. Effect of oxidation state In a set of oxoacids of an element, pKa values decrease as the oxidation state of the element increases. The oxoacids of chlorine illustrate this trend.
Physical sciences
Concepts
Chemistry
34255537
https://en.wikipedia.org/wiki/Tautavel%20Man
Tautavel Man
Tautavel Man refers to the archaic humans which—from approximately 550,000 to 400,000 years ago—inhabited the Caune de l’Arago, a limestone cave in Tautavel, France. They are generally grouped as part of a long and highly variable lineage of transitional morphs which inhabited the Middle Pleistocene of Europe and would eventually evolve into the Neanderthals (Homo neanderthalensis or H. sapiens neanderthalensis). They have been variably assigned either to H. (s.?) heidelbergensis or, as a European subspecies of H. erectus, to H. e. tautavelensis. The skull is reconstructed based on the specimens Arago 21 and 47 (probably male), and it is, to a degree, more characteristic of what might be considered a typical H. erectus (sensu stricto) morphology than a typical H. heidelbergensis morphology. The brain capacity is 1,166 cc. They seem to have had an overall robust skeleton. Average height may have been . The Caune de l'Arago opens on a cliffside above a river, overlooking the Tautavel plain, with a plateau above and mountainous terrain to the sides. During and after human occupation, the area swung from temperate and humid forestland to cold and dry steppeland. Stratigraphically, humans are present from beds Q–C. Bed G, dating to roughly 455,000 years old during a forested event, has yielded the most remains. They seem to have hunted a variety of animals, including red deer, fallow deer, argali, tahr, horse, reindeer, beaver, and more. They made Acheulean stone tools, but mainly produced smaller retouched tools such as scrapers rather than more iconic macro-tools such as bifaces (hand axes). In beds G and F, they may have been practicing ritual cannibalism. Evidence of fire is absent until bed C (400,000 years ago). Research history Animal fossils were first reported from the Caune de l’Arago in 1828 by French geologist Marcel de Serres, who considered them antediluvian remains (predating the Great Flood of the biblical chronology). In 1963, a French archaeologist recovered stone tools, which inspired French archaeologist Henry de Lumley to continue excavation in search of human remains. He found such remains in 1964 and recovered the first face (Arago 21) in 1971. He and fellow archaeologist Marie-Antoinette de Lumley (his wife) formally described the remains that same year. They dated them to the Riss glaciation—that is, predating the European Neanderthals (H. neanderthalensis or H. sapiens neanderthalensis). They and subsequent researchers drew several parallels between the Arago material and Homo erectus (both sensu stricto Asian specimens and sensu lato specimens beyond Asia), as well as the Neanderthals. These fossils were initially postulated to represent an intermediate form between H. erectus and the Neanderthals, and were commonly referred to as "Pre-Neanderthals" to avoid assigning them to a specific species. Nonetheless, in 1979, the de Lumleys suggested the name "H. e. tautavelensis", but subsequent authors preferred to classify the remains into the then strictly Middle Pleistocene European H. heidelbergensis, described from the German Mauer 1 mandible in 1908. H. heidelbergensis is typically defined as the direct ancestor of the Neanderthals. The skull of the Tautavel inhabitants can only be reconstructed using the crushed (and thus distorted) partial face Arago 21 and the partial parietal bone Arago 47. The shape of the skull before crushing was inferred by reference to the skulls of contemporaneous European humans (that is, H. heidelbergensis). 
It is possible that Arago 21 and 47 represent the same individual. Several such reconstructions were done in the 1980s, with the conclusion of strong affinities to H. heidelbergensis. The first reconstruction was done by French moulder René David, with the oversight of Mrs. de Lumley, and was completed in 1982. Other reconstructions were made in 1982 and again in 1986 by Italian palaeontologist Antonio Ascenzi; in 1982 and again in 1984 by French anthropologist Éliane Spitery; in 1986 by Czech palaeoanthropologist Emanuel Vlček; in 1982 and again in 1991 by French palaeoanthropologist Dominique Grimaud-Hervé; in 2005 by French anthropologist Gaspard Guipert (digitally uncrushing and reconstructing the skull); and most recently by Mrs. de Lumley in 2015. Like his predecessors, Guipert decided to assign the material to H. heidelbergensis, which he defined as a highly polymorphic (variable) species. In 2015, Mrs. de Lumley redescribed the Arago material and decided to consider H. e. tautavelensis a completely distinct and valid subspecies (diverging from H. heidelbergensis and the Neanderthal line), using a much stricter definition of H. heidelbergensis and focusing on the parallels between the Tautavel remains and H. erectus s. s. The Tautavel remains are quite similar to other archaic human remains from Europe which have variously been classified as H. erectus or H. heidelbergensis depending on the definition of these species: Ceprano, Italy; Galería, Spain; Swanscombe, England; Vértesszőlős, Hungary; and Petralona, Greece. By 2014, a total of 148 human bones had been recovered from the Caune de l'Arago, including 123 teeth, 5 jawbones, 9 upper limb elements, and 19 lower limb elements. These represent 18 adults and 12 juveniles, 30 individuals in total. Based on dental development, about 30% of the specimens died between the ages of 7 and 12, 37% between 18 and 30, 30% between 30 and 40, and 3% over 40. This would give an average life expectancy (assuming the individual survived infancy) of 20–25 years. The perceived infant mortality rate (from ages 1–6) is 11%, which is conspicuously low. If especially robust bones are assumed to be male, then females slightly outnumber males, but the proportions are roughly equal. All the non-tooth skull specimens (Arago 21, 47, and 45) are assumed to be male. Of the jawbone specimens, only Arago 13 is assumed to be male; the others (Arago 2, 89, 119, 130, and 131) are assumed to be female. The iliac specimens (a hip bone), Arago 44 and 121, are female. Anatomy The reconstructed skull of Tautavel Man (based on Arago 21 and 47) shares many similarities with that of H. erectus s. s. These include: strongly defined brows, a receding forehead, a relatively low face, a depression between the eyebrows, post-orbital constriction, strongly defined ridges below the eye sockets, a weak chin (with developed prognathism), strong and thick jaws, U-shaped tooth rows, and marked sexual dimorphism (with males notably more robust than females). However, it differs in having a wider base, more forwardly oriented cheek bones, more massive supraorbital trigons (the triangles on the frontal bone formed by the brows and the two temporal lines), and a more defined post-orbital constriction. According to Mrs. de Lumley, they differ from the roughly contemporaneous H. 
heidelbergensis by retaining basal (archaic) characteristics, including a longer skull, a more defined brow ridge, a more receding forehead, less defined post-orbital constriction, less developed prognathism, and a smaller brain capacity—that is, by being somewhat more of an H. erectus grade than an H. neanderthalensis grade. The Arago 21 face probably belonged to a 20-year-old, as indicated by the state of the fronto-parietal suture; based on its robustness, it is assumed to be male. The reconstructed Tautavel skull measures along its long axis. This maximum measurement is similar to that of H. erectus from Sangiran and longer than that of the Sima de los Huesos (SH) hominins (which are typically assigned to H. heidelbergensis), but shorter than that of the Neanderthals, whose braincase was further developed. The length of the short axis is , which is typical of contemporaneous and more ancient H. erectus s. s. and within the exceptionally wide range reported for the SH hominins, but narrower than in more recent hominins, including more recent H. erectus s. s. For comparison, the dimensions of a modern human skull average for men and for women. Like H. erectus s. s., the Tautavel face strongly projects from back to front, with the face-to-skull-base length ratio (the ratio of the distance from the face to the base of the skull to the length of the base of the skull itself) being 48.1%. In comparison, this ratio is 44% in Neanderthals and 38% in modern humans. As for the mandibles, the robust male jawbones align more closely with H. erectus s. s., whereas the more gracile female ones cluster closer to the Mauer mandible, the SH hominins, and the Neanderthals. This demonstrates rather strong jaw musculature in the Tautavel inhabitants, as well as pronounced sexual dimorphism. The Tautavel mandibles all have strongly developed mandibular tori (bony ridges on the side of the jaw facing the tongue), subhorizontal mylohyoid lines (ridges running along the inner surface of the body of the mandible), deep and narrow submandibular foveae (below the mylohyoid lines), and a narrow and convex plane which merges into the tori. The teeth are proportionally quite large for a Middle Pleistocene European human, notably the P4 (2nd premolar) and M2 (2nd molar). The cusps of the molars retain an anterior fovea, mesial and distal trigonid crests, a cusp of Carabelli, and one or two accessory cusps, which are basal traits. In 1983, American anthropologist Ralph Holloway estimated the brain volume as 1,166 cc using the Arago 21 face, the Arago 47 parietal, and the Swanscombe occipital bone. This volume is comparable to that of Peking Man from Zhoukoudian, China, and is on the lower end of the range of variation for modern humans. Originally, Holloway theorized that Tautavel Man was a "very garrulous [talkative] individual" based on the expanded Broca's area (which is associated with speech production in modern humans), but in 2004 he admitted "this was pure speculation." Evidence regarding the speech capability of Middle Pleistocene European hominins is mixed. A 400,000-year-old hyoid bone (which supports the tongue and thus humanlike speech production) from Castel di Guido, Italy, assigned to a terminal population of H. erectus, is quite apelike, but a hyoid bone from the SH hominins (on the Neanderthal line) appears quite humanlike. As for the rest of the skeleton, the only elements of the spine and torso identified are a single atlas and a single axis (the first two neck vertebrae), and two clavicles. 
For the arms, four humeri and an ulna, which are notably massive, were discovered. For the pelvis, four pelvic bones and a sacrum were identified; the two identified iliac wings are quite robust. The acetabulum (where the leg connects to the hip) is oval-shaped, much as in H. erectus, as opposed to circular, as in modern humans and Neanderthals. For the legs, seven femora, two tibiae, and seven fibulae have been identified, and, as in H. erectus, they are quite thick; this would have constricted the medullary cavity, where bone marrow is stored. This thickening is generally explained as being due to fast bone growth in adolescence. As in Neanderthals, the leg bones are quite robust, which may be a response to the cold climate, greater physical activity, or both. Using the femora and fibulae, Tautavel Man's average height has been estimated as roughly . Culture Palaeoenvironment The Caune de l’Arago is currently long and varies in width, but the walls and roof have likely caved in significantly over the last hundreds of thousands of years. Excavation of the site is overseen by the IPH (Institut de Paléontologie Humaine) and the CERPT (Centre Européen de Recherche Préhistorique de Tautavel). This limestone cave opens along a cliff wall above the Verdouble river, overlooking the Tautavel plain. The fossil-bearing deposits go down . The deposits are stratified into the Lower Stratigraphic, Middle Stratigraphic, Upper Stratigraphic, and Upper Stalagmitic Complexes, and human remains come from the Middle and the very beginning of the Upper Stratigraphic Complexes. These deposits are further subdivided into 4 units and 17 beds (from bottom to surface): Unit 1 (beds Q, P, O, N, M, L, and K), Unit 2 (J, I, and H), Unit 3 (G, F, E, and D), and Unit 4 (C, B, and A). Beds Q–C bear human remains and span oxygen isotope stages 14–10 (roughly 550–400 thousand years ago). They are made of sand and aeolian sandy loam, overlain by a thick stalagmite layer, in turn overlain by a breccia. Almost all human remains came from bed G, which has been dated to 455,000 years ago using uranium–thorium dating. The cave is one of the earliest known inhabited caves in the Pyrenees. Throughout human occupation, the Caune de l'Arago provided access to a mountainous and riverine habitat, a plateau above, and a plain below. The plain and plateau repeatedly swung between a temperate and humid forested region, dominated by pine, deciduous, and cypress trees and mediterranean plants, and a cold and dry grassy steppe (switching from forest to steppe about 550,000 years ago; reverting 480,000 years ago; switching again 420,000 years ago; and continuing this pattern after occupation). During the forested periods, the mammal assemblage of the cave predominantly features red deer (Cervus elaphus), fallow deer (Dama clactoniana), argali (Ovis ammon antiqua), narrow-nosed rhinoceros (Stephanorhinus hemitoechus), and tahrs (Hemitragus bosali). Predators in the temperate intervals include the cave lynx (Lynx spelaeus), cave lion (Panthera spelaea), dhole (Cuon priscus), red fox (Vulpes vulpes), the wolf Canis mosbachensis, and the bear Ursus deningeri. Bears and humans possibly occupied the cave during different seasons of the year when human occupation was intermittent. In the cold events, horse (Equus mosbachensis), reindeer (Rangifer tarandus), steppe bison (Bison priscus), giant musk ox (Praeovibos priscus), and rhinoceros could be abundant. The cave also has the earliest evidence of beaver (Castor fiber) hunting, in beds G and J. 
Like many other human sites, the inhabitants in different beds preferentially targeted certain prey items over others, such as reindeer in bed L, red and fallow deer in bed J, and musk ox at the top of bed G. Argali is commonly found in all beds and may have been brought in not only by humans but also by non-human carnivores, particularly in beds O, N, and M. Looking at the dental development of animals under two years old, it is possible to tell at what time of year an animal was hunted, and thus when the inhabitants occupied the cave. On this basis, bed G featured long-term occupation; beds P, J, I, F, E, and D intermittent occupation of a few months; and bed L short occupation with little hunting at all. Similarly, human baby teeth are present in the long-term and seasonally inhabited beds, indicating that entire families with children inhabited the cave; these are absent in bed L, which could mean only brief habitation by a small hunting party. Technology To describe the stone tool industry present in the Caune de l'Arago, the de Lumleys had coined the term "Proto-Cherantian" (the Cherantian is a possible subtype of the Neanderthal Mousterian industry), defined as a tradition which produces few bifaces (hand axes). They changed this to "Mediterranean Acheulean" in 2004, and the label "Proto-Cherantian" is no longer widely used. About 63% of the tools are large stone shards, 32% retouched tools, 3% lithic cores, and 2% macro-tools. Excluding debris and simple chippings, smaller retouched tools make up 90% of the tool assemblage, and macro-tools 10%. Among the retouched tools, 36% are simple scrapers, 16% retouched notches, 11% Clactonian notches, 12% denticulate tools, 3% denticulated scrapers (with a notched edge), and 2% convergent scrapers (which have a point at the end). Among the macro-tools, 64% are elaborate choppers (pebbles with multiple flakes cleaved out to make them sharp), 13% primary choppers (with a single flake cleaved out), 9% chopping tools, 7% rabots (choppers made from a lithic flake rather than a core), 4% bifaces (hand axes), 3% unifaces, and 0.8% polyhedron- and spheroid-shaped tools. Of the elaborate choppers, about 60% have a single edge, 26% have multiple points, and 9% are converging points with two edges and a single point. Bifaces are tools which feature perfect symmetry on both sides, and are sometimes interpreted as having been produced this way for purely aesthetic purposes. Low quality quartz, sandstone, quartzose sandstone, and limestone (raw materials for tools) could have been collected from the river cobbles. Higher quality jasper, flint, quartzite, and blue translucent quartz rocks (more suitable for knapping tools) could have been collected within . The most common material used was vein quartz, probably because it was a common river cobble and because it produced the most reliable cutting edge among local minerals, the inhabitants being predominantly manufacturers of various types of simple scrapers. About 90% of the bed G tool assemblage is made of vein quartz sourced from the river below. Macro-tools and hammerstones were commonly made of more durable limestone, more complex retouched tools of higher quality flint or quartzite, and bifaces of hornfels. Evidence of fire is present only in the upper part of bed C, dating to roughly 400,000 years ago. 
Similarly, the archaeological record reports sparse and infrequent fire usage until around 400,000 years ago, which may correlate with the true domestication of fire and the invention of fire-starting technology, or simply with better fire maintenance strategies. Cannibalism Some human bones in beds G and F appear to have been cracked open while still fresh, or bear striations consistent with skinning and butchering, which may attest to the practice of cannibalism. This could explain the conspicuous absence of chest and (by and large) hand and foot bones, which should have been left behind if these individuals had been eaten by animals. If this is correct, then the Tautavel inhabitants of bed G specifically consumed the brains, tongues, and the flesh and bone marrow of the limbs of the recently deceased or killed. This would indicate ritual cannibalism as opposed to survival cannibalism; otherwise the butchers would presumably have utilised the entire body rather than only certain sections.
Biology and health sciences
Homo
Biology
48753242
https://en.wikipedia.org/wiki/Contact%20%28geology%29
Contact (geology)
A geological contact is a boundary which separates one rock body from another. A contact can be formed during deposition, by the intrusion of magma, or through faulting or other deformation of rock beds that brings distinct rock bodies into contact. The geologic subdiscipline of stratigraphy is primarily concerned with depositional contacts, while faults and shear zones are of particular interest in structural geology. Faults and shear zones can be regarded as a form of secondary structure in the rock beds. Types of contacts Depositional Conformable Conformable contacts represent no time gap in the geologic record. They are usually planar, though they may have slightly irregular topography. These contacts represent continual, uninterrupted deposition and accumulation of sedimentary rocks, or represent lava flows. A conformable contact can be abrupt, where the contact separates beds of distinct lithology. Abrupt contacts coincide with bedding planes and represent a change in depositional environment, but with only a minor hiatus in deposition and no significant erosion. A brief hiatus without erosion is known as a diastem. Gradational contacts occur where the change in depositional environment takes place over a longer period of time. They are further divided into progressive gradual contacts, where the change in lithology is more or less continuous, and intercalated contacts, where thin beds of a new lithology appear at the base of the contact zone, become thicker upwards, and completely replace the old lithology at the top of the contact zone. Closely related to intercalated contacts is interfingering (also known as intertonguing, interdigitating, or interlocking) of laterally adjacent sedimentary rock bodies. Here the contact breaks down into a series of wedges or tongues that penetrate the adjacent rock body and pinch out individually. These record fluctuations in the location of the boundary between different depositional environments where sediments were simultaneously being laid down. Non-conformable Unconformities are gaps in the geologic record within a stratigraphic unit. These gaps can be caused by periods of non-deposition or by erosion. As a result, two adjacent rock units may have significantly different ages. Intrusive Intrusive contacts are the surfaces between host (or country) rock and an intrusive magmatic body. The older country rock is crosscut by a younger magmatic body. The nature of the intruding body depends on its composition and depth. Common examples are igneous dikes, sills, plutons, and batholiths. Depending on the composition of the magma, the intrusive body may have a complex internal structure which can provide insight into its emplacement. The country rock responds to the immense heat of the intruding body in numerous ways. Chill margins are created if the magma is cooled too quickly to fully crystallise; the result is a distinct boundary of very fine-grained igneous rock along the border with the country rock. The surrounding rock may be "baked" through contact metamorphism, resulting in non-foliated metamorphic rocks. Rocks that were originally limestone, quartz sandstone, and shale become marble, quartzite, and hornfels, respectively. Fault Fault and shear zone contacts can be represented by either discrete breaks and discontinuities, or by ductile deformation without a physical break in the stratigraphy. Fault surface contacts show discrete breaks and have an attitude and position which describe the contact between two formations. 
These fault surfaces can be polished into slickensided surfaces which display striations in the direction of the fault movement. Shear zones differ in that no discrete physical break is displayed, although displacement has occurred. Significance and application Identifying and understanding the relationships between contacts is important in determining the relative ages of rocks and formations. Contacts are a key feature used to create geological maps. Cross-cutting relationships between these contacts can be used to determine the relative geological history of an area or of an outcrop. Important geological contacts The golden spikes on geologic timescales represent internationally agreed-upon reference points for the boundaries of the stages in the geologic timescale. These contacts have been identified by the International Commission on Stratigraphy and are known as GSSPs, or global boundary stratotype sections and points. Some of these boundary points are at physical locations, while others are in ice drill core sections or have been defined chronometrically. The GSSP for the Danian Stage marks the end of the Cretaceous and the beginning of the Paleocene Series. Located in Tunisia, the contact is described as a reddish layer at the base of a dark clay layer. This reddish layer is the iridium anomaly, representing the fallout of the major impact that caused the mass extinction event ending the Cretaceous.
Physical sciences
Stratigraphy
Earth science
48754621
https://en.wikipedia.org/wiki/Terra%20Australis%20Orogen
Terra Australis Orogen
The Terra Australis Orogen (TAO) was a late Neoproterozoic- to Paleozoic-age accretionary orogen that ringed the ancient, active southern margin of the supercontinents Rodinia and later Pannotia (also called Greater Gondwana). This vast orogenic belt stretched for along-strike and involved, from west to east (in the ancient, paleogeographic reference frame), landmasses belonging to the modern-day Andean margin of South America, the South African Cape, West Antarctica, Victoria Land in East Antarctica, Eastern Australia, Tasmania, and New Zealand. The formation of the Terra Australis Orogen is associated with the breakup of Rodinia at the end of the Neoproterozoic Era and the creation of Panthalassa, the paleo-Pacific Ocean, and it was succeeded by the Gondwanide orogeny with the formation of the supercontinent Pangea in the middle Paleozoic Era. Origins The Terra Australis Orogen formed in the Neoproterozoic and Paleozoic. The decline of orogenic activity in the late Paleozoic is related to the assembly of the supercontinent Pangea; the orogeny did not end in a continental collision and was succeeded by the Gondwanide orogeny. long and up to wide, the TAO was one of the longest and longest-lived active continental margins in the history of Earth, lasting from its formation during the break-up of the Neoproterozoic supercontinent Rodinia until the onset of the Gondwanide orogeny. The TAO evolved through a series of extensional back-arcs separated by compressional events when the subducting oceanic plate became locked against Gondwana's margin. As Gondwana was amalgamated in the Early Palaeozoic during the so-called Pan-African orogenies, the TAO propagated along the southern (modern coordinates) Proto-Pacific/Iapetus margin of the supercontinent. The TAO ended with the Gondwanide orogeny. This and younger orogens cover most of the outboard margin of the TAO; likewise, the inboard margin is almost entirely covered by younger deposits and ice, but remains exposed in Australia along the Torrens Hinge Line or Delamerian orogeny. One end of the TAO was a series of terranes (Avalonia–Carolina–Cadomia) which were rifted off the western margin of Gondwana and added to Laurentia in the Late Palaeozoic, while its other end probably reached past Australia into New Guinea. In 1937 Alexander du Toit proposed the Samfrau Orogeny as evidence for Gondwana. His concept includes the orogenies of West Gondwana and orogenies that are now considered separate events, but excludes those of East Gondwana.
Physical sciences
Geologic features
Earth science
48758386
https://en.wikipedia.org/wiki/Linear%20regression
Linear regression
In statistics, linear regression is a model that estimates the linear relationship between a scalar response (dependent variable) and one or more explanatory variables (regressors or independent variables). A model with exactly one explanatory variable is a simple linear regression; a model with two or more explanatory variables is a multiple linear regression. This term is distinct from multivariate linear regression, which predicts multiple correlated dependent variables rather than a single dependent variable. In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the values of the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis. Linear regression can also be viewed as a supervised machine learning algorithm: it learns from a labelled dataset and maps the data points to an optimized linear function that can then be used for prediction on new data. Linear regression was the first type of regression analysis to be studied rigorously and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters, and because the statistical properties of the resulting estimators are easier to determine. Linear regression has many practical uses. Most applications fall into one of the following two broad categories: If the goal is error (i.e., variance) reduction in prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables. After developing such a model, if additional values of the explanatory variables are collected without an accompanying response value, the fitted model can be used to make a prediction of the response. If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables, linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables, and in particular to determine whether some explanatory variables may have no linear relationship with the response at all, or to identify which subsets of explanatory variables may contain redundant information about the response. Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function, as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Using the mean squared error (MSE) as the cost function on a dataset that has many large outliers can result in a model that fits the outliers more than the bulk of the data, because MSE assigns disproportionate importance to large errors. Cost functions that are robust to outliers should therefore be used if the dataset has many large outliers, as the sketch below illustrates.
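A minimal numerical sketch of this sensitivity, using invented toy data and comparing the MSE-minimizing fit with a least-absolute-deviations fit found by a crude grid search (NumPy assumed):

```python
import numpy as np

# Toy data: five well-behaved points plus one large outlier in y.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.0, 2.1, 2.9, 4.2, 25.0])  # last response is an outlier

# Ordinary least squares minimizes the MSE and is pulled toward the outlier.
slope_ols, intercept_ols = np.polyfit(x, y, deg=1)

# Least absolute deviations minimizes the MAE and is far less sensitive.
# A crude grid search is enough to illustrate the different cost functions.
grid = [(a, b) for a in np.linspace(-2, 8, 201) for b in np.linspace(-5, 5, 101)]
slope_lad, intercept_lad = min(grid, key=lambda p: np.abs(y - (p[0] * x + p[1])).sum())

print(f"OLS fit: y = {slope_ols:.2f}x + {intercept_ols:.2f}")  # tilted by the outlier
print(f"LAD fit: y = {slope_lad:.2f}x + {intercept_lad:.2f}")  # tracks the bulk of the data
```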
Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous. Formulation Given a data set {yi, xi1, ..., xip} of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the vector of regressors x is linear. This relationship is modeled through a disturbance term or error variable ε—an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the form yi = β0 + β1xi1 + ... + βpxip + εi = xiTβ + εi for i = 1, ..., n, where T denotes the transpose, so that xiTβ is the inner product between vectors xi and β. Often these n equations are stacked together and written in matrix notation as y = Xβ + ε. Notation and terminology y = (y1, ..., yn)T is a vector of observed values of the variable called the regressand, endogenous variable, response variable, target variable, measured variable, criterion variable, or dependent variable. This variable is also sometimes known as the predicted variable, but this should not be confused with predicted values, which are denoted ŷ. The decision as to which variable in a data set is modeled as the dependent variable and which are modeled as the independent variables may be based on a presumption that the value of one of the variables is caused by, or directly influenced by, the other variables. Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality. X may be seen as a matrix of row-vectors xiT or of n-dimensional column-vectors xj, which are known as regressors, exogenous variables, explanatory variables, covariates, input variables, predictor variables, or independent variables (not to be confused with the concept of independent random variables). The matrix X is sometimes called the design matrix. Usually a constant is included as one of the regressors; in particular, xi0 = 1 for i = 1, ..., n. The corresponding element of β is called the intercept. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero. Sometimes one of the regressors can be a non-linear function of another regressor or of the data values, as in polynomial regression and segmented regression. The model remains linear as long as it is linear in the parameter vector β. The values xij may be viewed as either observed values of random variables Xj or as fixed values chosen prior to observing the dependent variable. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however, different approaches to asymptotic analysis are used in these two situations. β is a (p+1)-dimensional parameter vector, where β0 is the intercept term (if one is included in the model—otherwise β is p-dimensional). Its elements are known as effects or regression coefficients (although the latter term is sometimes reserved for the estimated effects). In simple linear regression, p = 1, and the coefficient β1 is known as the regression slope. Statistical estimation and inference in linear regression focuses on β. The elements of this parameter vector are interpreted as the partial derivatives of the dependent variable with respect to the various independent variables. ε = (ε1, ..., εn)T is a vector of values εi. This part of the model is called the error term, disturbance term, or sometimes noise (in contrast with the "signal" provided by the rest of the model). This variable captures all other factors which influence the dependent variable y other than the regressors x. The relationship between the error term and the regressors, for example their correlation, is a crucial consideration in formulating a linear regression model, as it will determine the appropriate estimation method. Fitting a linear model to a given data set usually requires estimating the regression coefficients β such that the error term ε = y − Xβ is minimized. For example, it is common to use the sum of squared errors as a measure of ε for minimization.
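A minimal sketch of this formulation in code, building the design matrix with an intercept column and estimating β by least squares (toy data invented for illustration; NumPy assumed):

```python
import numpy as np

# Toy data: n = 5 observations of p = 2 regressors plus responses.
X_raw = np.array([[1.0, 2.0],
                  [2.0, 1.0],
                  [3.0, 4.0],
                  [4.0, 3.0],
                  [5.0, 5.0]])
y = np.array([6.1, 6.9, 12.2, 13.1, 17.0])

# Design matrix X: prepend the constant regressor x_i0 = 1 for the intercept.
X = np.column_stack([np.ones(len(y)), X_raw])

# Least-squares estimate of beta; lstsq is numerically preferable to
# explicitly forming the normal-equations solution (X^T X)^{-1} X^T y.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
eps_hat = y - X @ beta_hat        # residuals: estimates of the error term
print("beta_hat:", beta_hat)
print("sum of squared errors:", float(eps_hat @ eps_hat))
```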
Example Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent hi at various moments in time ti. Physics tells us that, ignoring the drag, the relationship can be modeled as hi = β1ti + β2ti2 + εi, where β1 determines the initial velocity of the ball, β2 is proportional to the standard gravity, and εi is due to measurement errors. Linear regression can be used to estimate the values of β1 and β2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β1 and β2; if we take regressors xi = (xi1, xi2) = (ti, ti2), the model takes on the standard form hi = xiTβ + εi. Assumptions Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variable and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model. The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares): Weak exogeneity. This essentially means that the predictor variables x can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free—that is, not contaminated with measurement errors. Although this assumption is not realistic in many settings, dropping it leads to significantly more difficult errors-in-variables models. Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. This technique is used, for example, in polynomial regression, which uses linear regression to fit the response variable as an arbitrary polynomial function (up to a given degree) of a predictor variable. With this much flexibility, models such as polynomial regression often have "too much power", in that they tend to overfit the data. As a result, some kind of regularization must typically be used to prevent unreasonable solutions coming out of the estimation process. Common examples are ridge regression and lasso regression.
Bayesian linear regression can also be used, which by its nature is more or less immune to the problem of overfitting. (In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients.) Constant variance (a.k.a. homoscedasticity). This means that the variance of the errors does not depend on the values of the predictor variables. Thus the variability of the responses for given fixed values of the predictors is the same regardless of how large or small the responses are. This is often not the case, as a variable whose mean is large will typically have a greater variance than one whose mean is small. For example, a person whose income is predicted to be $100,000 may easily have an actual income of $80,000 or $120,000—i.e., a standard deviation of around $20,000—while another person with a predicted income of $10,000 is unlikely to have the same $20,000 standard deviation, since that would imply their actual income could vary anywhere between −$10,000 and $30,000. (In fact, as this shows, in many cases—often the same cases where the assumption of normally distributed errors fails—the variance or standard deviation should be predicted to be proportional to the mean, rather than constant.) The absence of homoscedasticity is called heteroscedasticity. In order to check this assumption, a plot of residuals versus predicted values (or the values of each individual predictor) can be examined for a "fanning effect" (i.e., increasing or decreasing vertical spread as one moves left to right on the plot). A plot of the absolute or squared residuals versus the predicted values (or each predictor) can also be examined for a trend or curvature. Formal tests can also be used; see Heteroscedasticity. The presence of heteroscedasticity will result in an overall "average" estimate of variance being used instead of one that takes into account the true variance structure. This leads to less precise (but in the case of ordinary least squares, not biased) parameter estimates and biased standard errors, resulting in misleading tests and interval estimates. The mean squared error for the model will also be wrong. Various estimation techniques including weighted least squares and the use of heteroscedasticity-consistent standard errors can handle heteroscedasticity in a quite general way. Bayesian linear regression techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g., fitting the logarithm of the response variable using a linear regression model, which implies that the response variable itself has a log-normal distribution rather than a normal distribution). Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold.) Some methods such as generalized least squares are capable of handling correlated errors, although they typically require significantly more data unless some sort of regularization is used to bias the model towards assuming uncorrelated errors. Bayesian linear regression is a general way of handling this issue. Lack of perfect multicollinearity in the predictors. 
For standard least squares estimation methods, the design matrix X must have full column rank p; otherwise perfect multicollinearity exists in the predictor variables, meaning a linear relationship exists between two or more predictor variables. This can be caused by accidentally duplicating a variable in the data, using a linear transformation of a variable along with the original (e.g., the same temperature measurements expressed in Fahrenheit and Celsius), or including a linear combination of multiple variables in the model, such as their mean. It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g., fewer data points than regression coefficients). Near violations of this assumption, where predictors are highly but not perfectly correlated, can reduce the precision of parameter estimates (see Variance inflation factor). In the case of perfect multicollinearity, the parameter vector β will be non-identifiable—it has no unique solution. In such a case, only some of the parameters can be identified (i.e., their values can only be estimated within some linear subspace of the full parameter space Rp). See partial least squares regression. Methods for fitting linear models with multicollinearity have been developed, some of which require additional assumptions such as "effect sparsity"—that a large fraction of the effects are exactly zero. Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in generalized linear models, do not suffer from this problem. Violations of these assumptions can result in biased estimates of β, biased standard errors, and untrustworthy confidence intervals and significance tests. Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods: The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent. The arrangement, or probability distribution, of the predictor variables x has a major influence on the precision of estimates of β. Sampling and design of experiments are highly developed subfields of statistics that provide guidance for collecting data in such a way as to achieve a precise estimate of β.
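A small sketch of the perfect-multicollinearity case described above, using the Fahrenheit/Celsius example (invented data; NumPy assumed): the design matrix loses full column rank, so β has no unique least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
celsius = rng.uniform(0.0, 30.0, size=50)
fahrenheit = celsius * 9.0 / 5.0 + 32.0   # exact linear transform of celsius

# Design matrix with intercept, Celsius, and Fahrenheit columns.
X = np.column_stack([np.ones(celsius.size), celsius, fahrenheit])

# Rank is 2 rather than 3: the Fahrenheit column is a linear combination
# of the intercept and Celsius columns, so X^T X is singular and the
# parameter vector beta is non-identifiable.
print(np.linalg.matrix_rank(X))   # -> 2
```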
Interpretation A fitted linear regression model can be used to identify the relationship between a single predictor variable xj and the response variable y when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of βj is the expected change in y for a one-unit change in xj when the other covariates are held fixed—that is, the expected value of the partial derivative of y with respect to xj. This is sometimes called the unique effect of xj on y. In contrast, the marginal effect of xj on y can be assessed using a correlation coefficient or simple linear regression model relating only xj to y; this effect is the total derivative of y with respect to xj. Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "hold ti fixed" and at the same time change the value of ti2). It is possible for the unique effect to be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in xj, so that once that variable is in the model, there is no contribution of xj to the variation in y. Conversely, the unique effect of xj can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of y, but they mainly explain variation in a way that is complementary to what is captured by xj. In this case, including the other variables in the model reduces the part of the variability of y that is unrelated to xj, thereby strengthening the apparent relationship with xj. The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study. The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.
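A numerical sketch of the unique-versus-marginal distinction above, with invented data in which x2 is almost a copy of x1 and y truly depends only on x1 (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # x2 nearly duplicates x1
y = 3.0 * x1 + rng.normal(size=n)    # y depends only on x1

# Marginal effect of x2: slope of a simple regression of y on x2 alone.
marginal = np.polyfit(x2, y, deg=1)[0]                 # close to 3

# Unique effect of x2: its coefficient in a multiple regression
# that also includes x1 (i.e., with x1 "held fixed").
X = np.column_stack([np.ones(n), x1, x2])
unique = np.linalg.lstsq(X, y, rcond=None)[0][2]       # close to 0

print(f"marginal effect of x2: {marginal:.2f}")
print(f"unique effect of x2:   {unique:.2f}")
```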
Extensions Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed. Simple and multiple linear regression The simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression (not to be confused with multivariate linear regression). Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is Yi = β0 + β1Xi1 + β2Xi2 + ... + βpXip + εi for each observation i = 1, ..., n. In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is the ith observation of the jth independent variable, j = 1, 2, ..., p. The values βj represent parameters to be estimated, and εi is the ith independent identically distributed normal error. In the more general multivariate linear regression, there is one equation of the above form for each of m > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other: Yij = β0j + β1jXi1 + β2jXi2 + ... + βpjXip + εij for all observations indexed as i = 1, ..., n and for all dependent variables indexed as j = 1, ..., m. Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression. General linear models The general linear model considers the situation when the response variable is not a scalar (for each observation) but a vector, yi. Conditional linearity of E(y | x) = xTB is still assumed, with a matrix B replacing the vector β of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models"). Heteroscedastic models Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors.
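A minimal sketch of weighted least squares under the assumption that the error variances are known up to scale (invented data; NumPy assumed). Rescaling each row by the square root of its weight reduces WLS to ordinary least squares:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i^T beta)^2
    by rescaling rows with sqrt(w_i) and solving ordinary least squares."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 100)
sigma = 0.5 * x                            # error scale grows with x (heteroscedastic)
y = 1.0 + 2.0 * x + rng.normal(scale=sigma)

X = np.column_stack([np.ones_like(x), x])
print("WLS estimate:", wls(X, y, w=1.0 / sigma**2))   # weights = inverse variances
```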
Mathematics
Statistics
null
55518323
https://en.wikipedia.org/wiki/Missing%20baryon%20problem
Missing baryon problem
In cosmology, the missing baryon problem is an observed discrepancy between the amount of baryonic matter detected from shortly after the Big Bang and from more recent epochs. Observations of the cosmic microwave background and Big Bang nucleosynthesis studies have set constraints on the abundance of baryons in the early universe, finding that baryonic matter accounts for approximately 4.8% of the energy contents of the Universe. At the same time, a census of baryons in the recent observable universe has found that observed baryonic matter accounts for less than half of that amount. This discrepancy is commonly known as the missing baryon problem. The missing baryon problem is different from the dark matter problem, as dark matter is non-baryonic in nature. Early universe measurements The abundance of baryonic matter in the early universe can be obtained indirectly from two independent methods: The theory of Big Bang nucleosynthesis, which predicts the observed relative abundance of the chemical elements in observations of the recent universe. Higher numbers of baryons in the early universe should produce higher ratios of helium, lithium, and heavier elements relative to hydrogen. Agreement with observed abundances requires that baryonic matter makes up 4–5% of the universe's critical density. Detailed analysis of the small fluctuations (anisotropies) in the cosmic microwave background (CMB), especially the second peak of the CMB power spectrum. Baryonic matter interacts with photons and therefore leaves a visible imprint on the CMB. CMB analysis also yields a baryon fraction on the order of 5%. The CMB constraint is much more precise than the BBN constraint, but the two are in agreement. Late universe observations The density of baryonic matter can be obtained directly by summing up all the known baryonic matter. This is highly nontrivial: although luminous matter such as stars and galaxies is easily summed, baryonic matter can also exist in highly non-luminous forms, such as black holes, planets, and highly diffuse interstellar gas. Nonetheless, it can still be done, using a range of techniques: Making use of the Lyman-alpha forest: clouds of diffuse, baryonic gas or dust are sometimes visible when backlit by stars. The resulting spectra can be used to infer the mass between the star and the observer. Gravitational microlensing. If a planet or other dark object moves between the observer and a faraway source, the image of the source is distorted. The mass of the dark object can be inferred from the amount of distortion. Sunyaev–Zel'dovich effect. The interaction between CMB photons and free electrons leaves an imprint in the CMB. This effect is sensitive to all free electrons independently of their temperature or the density of the surrounding medium, and thus it can be used to study baryonic matter otherwise not hot enough to be detected. Prior to 2017, the census found roughly 10% of baryons inside galaxies, 50–60% in the circum-galactic medium, and 30–40% unaccounted for, meaning only about 70% of the theoretically predicted baryons had been observed. Large-scale galaxy surveys in the 2000s revealed a baryon deficit. This led theorists to reexamine their models and predict that gas must flow between galaxies and galaxy clusters. Warm-hot intergalactic medium The Lambda-CDM model of the Big Bang predicts that matter between galaxies in the universe is distributed in web-like formations with a low density (1–10 particles per cubic meter) known as the warm-hot intergalactic medium (WHIM).
Cosmological hydrodynamical simulations predict that a fraction of the missing baryons are located in galactic haloes at temperatures around 10^6 K and in the WHIM at temperatures of 10^5–10^7 K, with recent observations providing strong support. The WHIM is composed of three states: A warm state with temperatures of 10^5–10^5.7 K, in which neutral hydrogen is present (observed via O VI absorption lines). A hot state with temperatures of 10^5.7–10^6.3 K (observed via O VII in soft X-rays). A very hot state with temperatures of 10^6.3–10^7 K, containing very little hydrogen or hydrogen-like metals and found mostly near the outskirts of galaxy clusters. The warm phase of the WHIM had been previously detected and composes around 15% of the baryon content. The WHIM is mostly composed of ionized hydrogen, which creates difficulty for astronomers trying to detect baryons in the WHIM; it is easier to detect the WHIM through absorption by highly ionized oxygen such as O VI and O VII. Universe composition The census of known baryons in the universe tallied to around 60% of total baryons until the resolution of the missing baryon problem. This is distinct from the composition of the entire universe, which includes dark energy and dark matter and of which baryonic matter composes only 5%. Around 7% of baryons exist in stars and galaxies, while most exist around galaxies or galaxy clusters. The Lyman-alpha forest contains around 28% of the baryons. The warm phase of the WHIM was detected by soft X-ray absorption in 2012, establishing 15% of the total baryon content. The intracluster medium (ICM) accounts for around 4% of the total baryon content. It is composed of mostly ionized hydrogen and is about 10% of a galaxy cluster's total mass, the rest being dark matter. The ICM is low density, with around 10^−3 particles per cm^3. The circum-galactic medium (CGM) was confirmed in 2003 by Chandra and XMM-Newton. The CGM is a large sphere surrounding a galaxy with a radius of more than 70–200 kpc. The CGM accounts for 5% of the total baryons in the universe. Detection methods There are three main methods of detecting the WHIM where the missing baryons lie: the Sunyaev–Zel'dovich effect, Lyman-alpha emission lines, and metal absorption lines. Sunyaev–Zel'dovich effect The thermal Sunyaev–Zel'dovich (tSZ) effect occurs when photons from the CMB inverse Compton scatter off ionized gas. For detecting baryons, the ionized gas from the WHIM is scattered by the CMB photons. The y-parameter quantifies the strength of the tSZ effect and is defined as y = (kB σT / me c^2) ∫ ne Te dl, where kB is the Boltzmann constant, σT is the Thomson cross-section, ne is the electron number density, me c^2 is the electron rest mass energy, and Te is the temperature. Finding the y-parameter and overlaying it with a map of cosmic filaments from millions of galaxies allows astronomers to find the weak signal from the WHIM. The y-parameter signal from a galaxy pair is overlaid on a model for galaxy halos, and the signals are subtracted to reveal a signal between the two galaxies; this resulting signal is the filament. To ensure the signal is not coming from any other source, astronomers generate a control simulation for comparison, which allows them to determine that the source must be the WHIM. Lyman-alpha emission Lyman-alpha (Lyα) emission lines are detected from ionized hydrogen in cosmic filaments. A source, such as a quasar, ionizes hydrogen in the cosmic filament, leaving detectable dips in the absorption lines.
Metal absorption lines Highly ionized oxygen, such as O+6, O+7, and O+8, produces absorption lines in the soft X-rays at energies of 0.6–0.8 keV. The column density of these lines can be derived from the abundance of the particular oxygen ion, together with Hubble's constant and the critical density. Claimed resolutions In general, the missing baryon problem is a major unsolved problem in physics. Various scientists have proposed explanations, but none have received acceptance as adequately addressing the issue. One claimed solution was published in 2017, when two groups of scientists said they found evidence for the location of missing baryons in intergalactic matter. The missing baryons had been postulated to exist as hot strands between galaxy pairs in the warm-hot intergalactic medium (WHIM). Since the strands are diffuse and not hot enough to emit X-rays, they are difficult to detect. The groups used the thermal Sunyaev–Zel'dovich effect to measure the density of the strands in the local universe. If baryons are present there, then some amount of energy should be lost when light from the cosmic microwave background scatters off them. These show up as very dim patches in the CMB. The patches are too dim to see directly, but become detectable when overlaid with the visible galaxy distribution. The density of the strands comes to about 30% of the baryonic density, which the groups said was the exact amount needed to solve the problem. Even if granted to be accurate, these works only describe the distribution of baryons between nearby galaxies and do not provide a complete picture of cosmic gas in the late universe. A 2021 article postulated that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies, and that this would explain the missing baryons not accounted for in the 2017 paper. Late 2010s and early 2020s In the late 2010s and early 2020s, several groups observed the intergalactic medium and circum-galactic medium to obtain more measurements and observations of baryons in support of the leading observations. Baryons have more or less been found, so groups are working to detect them to a higher level of significance. Methods used include soft X-ray, O VI, O VII, and O VIII absorption. In 2019, a group led by Orsolya E. Kovács detected O VII absorption in the X-ray spectrum of 17 stacked quasars, corresponding to WHIM in filaments of overdensity around 5–9 times the average cosmological density at the epochs of the individual quasars. In 2020, astrophysicists reported the first direct X-ray emission measurement of baryonic matter in cosmic web filaments. Both results are consistent with the WHIM accounting for the missing baryons.
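To give a feel for the magnitudes involved, here is a back-of-the-envelope evaluation of the y-parameter defined above for a hypothetical uniform filament; the density, temperature, and path length are invented, WHIM-like values, and the physical constants are hard-coded SI values:

```python
# Rough tSZ y-parameter for a uniform filament:
#   y = (k_B * sigma_T / (m_e c^2)) * n_e * T_e * L
k_B = 1.380649e-23        # Boltzmann constant, J/K
sigma_T = 6.6524587e-29   # Thomson cross-section, m^2
m_e_c2 = 8.1871057e-14    # electron rest mass energy, J

n_e = 10.0                # electron density, m^-3 (WHIM-like, ~10 per cubic metre)
T_e = 1.0e6               # electron temperature, K
L = 3.086e22              # path length through the filament, m (~1 megaparsec)

y = (k_B * sigma_T / m_e_c2) * n_e * T_e * L
print(f"y ≈ {y:.1e}")     # ~3e-9: a very weak signal, hence the stacking of many pairs
```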
Physical sciences
Physical cosmology
Astronomy
57313102
https://en.wikipedia.org/wiki/Parasequence
Parasequence
A parasequence is a fundamental concept of sequence stratigraphy. Parasequences are not directly related to sequences. Definition A parasequence is defined as a genetically related succession of bedsets that is bounded at top and bottom by marine flooding surfaces (or their correlative surfaces). The succession is supposed to be relatively conformable, in the sense that breaks in deposition within the parasequence are much shorter than the time of deposition of the parasequence itself. Most parasequences shallow upward, a feature that is sometimes also included in the definition. Properties Since parasequences are relatively conformable, Walther's law applies within a parasequence. This is not necessarily the case for transitions from one parasequence to another. Parasequences are typically one to tens of meters thick.
Physical sciences
Stratigraphy
Earth science
42478508
https://en.wikipedia.org/wiki/Cainotheriidae
Cainotheriidae
Cainotheriidae is an extinct family of artiodactyls known from the Late Eocene to Middle Miocene of Europe. They are mostly found preserved in karstic deposits. These animals were small, standing low at the shoulders, and ranged in size from rabbits to tragulids. For a long time, they were considered to have had a lifestyle similar to that of hares and rabbits. The dentition was full and highly selenodont, i.e. the premolars and molars had curved, crescent-shaped cutting edges (as in today's ruminants). The skull was small, with a short snout and posteriorly closed orbits located near the center of the skull. A peculiar characteristic of this group were the auditory bullae, protective structures of the ear bones: they were very large, like those found today in small mammals that live in open, dry environments. This derived cranial anatomy was in stark contrast to the rest of the skeleton, which was rather primitive: the cainotheriids still possessed four unreduced toes (an ancestral condition of the artiodactyls), even if the lateral toes were shorter; the toes ended in long claws, as in modern rabbits. The hind legs were much longer than the front ones, indicating a remarkable adaptation to running and jumping. Classification The bizarre anatomical features of the cainotheriids mark them as primitive artiodactyls. Previous research placed them near Tylopoda; more recent research places them closer to Ruminantia. Robiacina of the middle/upper Eocene was previously classified as a member of this family, but has more recently been split into a separate family as the sister taxon to Cainotheriidae. In the course of the Upper Eocene, primitive genera such as Oxacron and Paroxacron developed; these are considered the first true cainotheriids and are placed as sister taxa in the subfamily Oxacroninae. Subsequently, in the course of the Oligocene, the subfamily Cainotheriinae underwent a notable evolutionary radiation, with the rabbit-sized genera Plesiomeryx and Caenomeryx. The most specialized genus, Cainotherium, was also the last to disappear, during the Middle Miocene. Even at the beginning of the Miocene, these animals were quite common in various parts of Europe, with numerous species (e.g. C. laticurvatum, C. miocenicum, C. bavaricum). The cainotheriids died out definitively when, during the Miocene, the climate became colder. Paleoecology These small ungulates possessed exceptionally long hind legs; this characteristic, combined with their small size and large auditory bullae, has led many scholars to consider cainotheriids an ecological parallel of rabbits. It was therefore assumed that these animals moved by leaping on their long hind legs, but fossil trackways from the Lower Miocene found at Salinas de Añana in Spain, clearly left by Cainotherium, showed that the locomotion of these animals must have been very different from that of rabbits, and quite similar to that of today's small ruminants.
Biology and health sciences
Other artiodactyla
Animals
38265686
https://en.wikipedia.org/wiki/Sturtian%20glaciation
Sturtian glaciation
The Sturtian glaciation was a worldwide glaciation during the Cryogenian Period, when the Earth experienced repeated large-scale glaciations. The Sturtian glaciation is thought to have lasted from c. 717 Ma to c. 660 Ma, a time span of approximately 57 million years. It is hypothesised to have been a Snowball Earth event, or alternatively a series of regional glaciations, and is the longest and most severe known glacial event preserved in the geologic record after the much earlier Huronian glaciation. Etymology of name Current usage of the term ultimately refers to the globally significant Sturt Formation (originally Sturtian Tillite) within the Adelaide Superbasin of Australia. The Sturt Formation is named after Sturt Gorge, South Australia, itself named after the Sturt River, which was given its name in April 1831 by British military officer Captain Collet Barker, after fellow officer and explorer Charles Sturt. The Sturtian glaciation is an informal but commonly used name for the older of two worldwide glacial events (the other is known as the Marinoan/Elatina glaciation) preserved in Cryogenian rocks. The term Sturtian was originally defined as a chronostratigraphic unit (Series) and later proposed as an international chronostratigraphic division; however, this has been superseded by international nomenclature. The suggestion, during the early 20th century, that the Sturt Formation was glacial in nature prompted discussion of Neoproterozoic glaciations (thought to be Cambrian at the time) and encouraged the research that eventually resulted in the Snowball Earth hypothesis. Geology Rocks preserving evidence for the Sturtian glaciation are found on every continent. Notable sections are found in Australia, Canada, China, Ethiopia, Namibia, Siberia, and Svalbard. According to Eyles and Young, "Glaciogenic rocks figure prominently in the Neoproterozoic stratigraphy of southeastern Australia and the northern Canadian Cordillera. The Sturtian glaciogenic succession (c. 740 Ma) unconformably overlies rocks of the Burra Group." The Sturtian succession includes two major diamictite-mudstone sequences which represent glacial advance and retreat cycles. It is stratigraphically correlated with the Rapitan Group of North America. Reusch's Moraine in northern Norway may have been deposited during this period. In 2024, researchers at the University of Adelaide and the University of Sydney, combining known geological formations from the Cryogenian Period with EarthByte plate-tectonic computer models, proposed that the low temperatures resulted from low levels of degassing along mid-ocean ridges, itself a consequence of the break-up of the supercontinent Rodinia. The ice-sheet advance at the start of the Sturtian glaciation lasted less than a million years. Effects on life In the aftermath of the Sturtian glaciation, biomarkers and body fossils indicate an increase in biological complexity.
Physical sciences
Events
Earth science
48768598
https://en.wikipedia.org/wiki/DCI-P3
DCI-P3
DCI-P3 is an RGB color space defined in 2005 as part of the Digital Cinema Initiative, for use in theatrical digital motion picture distribution (DCDM). Display P3 is a variant developed by Apple Inc. for wide-gamut displays. History Development of the standard In 2005, Digital Cinema Initiatives, LLC, in Hollywood, California, released the Digital Cinema System Specification version 1.0, which defined the colorimetry of what would become known as the DCI-P3 color space. According to section 8.3.4 in the specification, the blue primary color is the same as in Rec. 709, sRGB, and Adobe RGB, with a dominant wavelength of 464.2 nm. The red primary is a deeper red than in sRGB and Adobe RGB, as it emits a longer dominant wavelength at 614.9 nm. The red primary is also farther from the white point (being nearly spectral, at least perceptually), implying that it is more chromatic than the red-orange primary of sRGB. The most significant difference is the green primary, which is much closer to the spectral locus than that of either sRGB or Adobe RGB. DCI-P3's green primary has a dominant wavelength of 544.2 nm; Adobe RGB's green primary is more blueish, with a dominant wavelength of 534.7 nm, while sRGB's is more yellowish, at 549.1 nm. DCI-P3 covers 53.6% of the CIE 1931 chromaticity diagram (see inset image), which describes the color gamut of human color vision. A smaller, practical gamut for comparison is Pointer's gamut, which consists of diffusely reflecting surface colors. DCI-P3 covers 86.9% of Pointer's gamut, while Rec. 709/sRGB covers only 69.4%. While DCI-P3 was developed by the Digital Cinema Initiatives (DCI) organization, many of the relevant technical standards are published by the Society of Motion Picture and Television Engineers (SMPTE), such as SMPTE EG 432-1 and SMPTE RP 431-2. On November 10, 2010, SMPTE published SMPTE EG 432-1:2010, which includes a variant of the color space using a D65 white point (about 6503.51 K) instead of the ~6300 K white point of DCI-P3. On April 6, 2011, SMPTE published SMPTE RP 431-2:2011, which defines the reference viewing environment. Display P3 colors (a variant of DCI-P3) are supported in CSS Color Level 4 by the Safari browser since 2017 (version 10.1) and by Google Chrome since March 2023 (version 111). Display technology Initially, DCI-P3 was available with theatrical xenon-arc projection systems. This emerging technology presented a challenge for filmmakers working with digital media on desktop workstations: how to accurately view the colorspace of the theatrical viewing environment during the production and post-production process. In 2008, HP released the first "HP DreamColor" monitor, which could display 97% of the DCI-P3 color space. In 2014, Eizo introduced the first professional 4K monitor with support for the P3 color space. In 2015, Apple's iMac desktop became the first consumer computer with a built-in wide-gamut display, supporting the P3 color space. Apple's implementation, known as Display P3, uses a D65 white point and the sRGB tone reproduction curve (sometimes referred to as gamma). In 2016, the UHD Alliance announced their specifications for Ultra HD Premium, which requires devices to display at least 90% of the DCI-P3 color space (in area, not volume). Also in 2016, Apple, Samsung, and Microsoft released mobile and desktop devices with P3 support. P3 colorimetry DCI P3 Created by the Digital Cinema Initiatives, DCI-P3 is designed for viewing in a fully darkened theater environment.
The projection system uses a simple 2.6 gamma curve, the nominal white luminance is 48 cd/m2 (14 fL), and the white point is based on a projector with a xenon bulb, giving a correlated color temperature (CCT) of ~6300 K. It is incorrect to refer to this as "D63", as this white point is not a CIE standard illuminant and is not on the Planckian locus; instead, the white point is slightly greener. This resulted from optimizing for best light output with the xenon arc lamp projectors commonly used in theaters. DCI-P3 "D60 sim" When mastering content using a display or projector in the P3 color space, users have the option of using an output transform to DCI-P3 D60-sim, which matches the nominal white point of the ACES color spaces. Display P3 Apple Inc. developed displays using the P3 primaries, and the corresponding Display P3 color space. While it uses the standard P3 RGB primaries, the white point is D65 instead of the DCI ~6300 K white point. The D65 white point is the existing standard for sRGB content and common consumer devices (Adobe RGB also uses D65). Display P3 uses the sRGB transfer curve, which is approximately equivalent to a gamma of 2.2. Display P3's gamut is approximately 50% larger than sRGB's in volume and 25% larger in surface. Since the iPhone 7, the built-in camera creates images tagged with the Display P3 ICC profile. P3-D65 for HDR Some Netflix HDR deliverables use P3-D65 in conjunction with the PQ transfer curve. DCI-P3+ Canon created an expanded-gamut color space it calls DCI-P3+, using the same ~6300 K white point as DCI-P3. Otherwise, P3+ has no relation to DCI-P3 or the Digital Cinema Initiative. Unlike the DCI-P3 color space, which defines an actual display technology, Canon's DCI-P3+ color space uses imaginary primaries which cannot be realized by any physical display technology. Criticism Because the P3 gamut is larger than sRGB's, designs created for P3 but presented on an sRGB display without proper color management will appear less saturated. Apple suggests creating a separate set of image or color elements for each color space, sRGB and Display P3, adding to complexity. Accessibility issues Because the P3 red primary emits a longer wavelength (i.e. a deeper red) than sRGB's, individuals with some forms of color vision deficiency (colorblindness) will see colors involving red as even darker than on an sRGB display. This creates an accessibility issue, for instance, for red, purple, or orange text against black.
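A small sketch contrasting the two decoding curves mentioned above, for normalized signal values in [0, 1]; the sRGB piecewise constants are standard, and the functions are illustrative rather than a full color-management pipeline:

```python
def dci_p3_to_linear(v: float) -> float:
    """Theatrical DCI-P3 uses a pure power-law transfer with gamma 2.6."""
    return v ** 2.6

def display_p3_to_linear(v: float) -> float:
    """Display P3 reuses the piecewise sRGB transfer curve (~gamma 2.2)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

for v in (0.25, 0.5, 0.75):
    print(f"{v:.2f} -> DCI {dci_p3_to_linear(v):.4f}, Display P3 {display_p3_to_linear(v):.4f}")
```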
Physical sciences
Basics
Physics
55528050
https://en.wikipedia.org/wiki/Prism%20lighting
Prism lighting
Prism lighting is the use of prisms to improve the distribution of light in a space. It is usually used to distribute daylight, and is a form of anidolic lighting. Prism lighting was popular from its introduction in the 1890s through to the 1930s, when cheap electric lights became commonplace and prism lighting became unfashionable. While mass production of prism lighting systems ended around 1940, the 2010s have seen a revival using new materials. How it works The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space; it makes it look only slightly dimmer. If light is redistributed from the brightest parts of a room to the dimmest, the room therefore appears brighter overall, and more space can be given a useful and comfortable level of illumination (see before and after images from an 1899 article, below). This can reduce the need for artificial lighting. Refraction and total internal reflection inside optical prisms can bend beams of light, and this bending allows the light to be redistributed. Many small prisms may be joined at the edges into a sheet. A prism sheet is somewhat like a linear Fresnel lens, but each ridge may be identical. Unlike a Fresnel lens, the light is not intended to be focused, but used for anidolic lighting. Types Deck prisms carried light through the upper decks of ships and spread it in the decks below. Similarly, on land, prisms in sidewalk lights were used to light basements and vaults. Prism tiles were used vertically, usually as a transom light above a window or door. They were also built into fixed and movable canopies, sloped glazing, and skylights. They bend light upwards, so that it penetrates more deeply into the room, rather than lighting the floor near the window. Modern prismatic panels are essentially an acrylic version of the old glass prism tiles. Like glass tiles, they can be mounted on adjustable canopies. Channel panels use slits that reflect light internally. Holographic optical elements can also be used to redirect light. Daylight redirecting window film (DRF) is a thin, flexible peel-and-stick sheet, with the optical layer generally made of acrylic. There are two types of film. Some film is moulded with tiny prisms, making a flexible peel-and-stick miniature prismatic panel. Other film is moulded with thin, near-horizontal voids protruding into or through the acrylic; the slits reflect light hitting their top surfaces upwards. Refraction is minimized, to avoid colouring the light. The reflection-based films are more transparent (both are translucent), but they tend to send the light up at the ceiling, not deeper into the room. Refraction-based films are translucent rather than transparent, but offer finer control over the direction of the outgoing light beam; the film can be made in a variety of prism shapes to refract light by a variety of angles. Manufacture and repair Older glass elements were cast, and might be cut and polished. Prism tiles were often made of single prisms joined with zinc, lead, or electroglazed copper strips (rather like the methods used to join traditional European stained glass). Sidewalk prisms were cast in one piece as single or multiple-prism lenses, and inserted into load-bearing frames. Daylight redirecting film is made of acrylic. Damaged prism tiles may be repaired, and as they came in standard designs, there is a salvage market in replacements. Replacements for one-piece castings can be commissioned.
Weakened prism tiles may be reinforced with hidden bars, much like those used to reinforce stained glass. Architectural design Sophisticated systems for lighting different sorts of spaces with prism tiles were developed. Generally, the goal was to send the available light across the room nearly horizontally. One company sold tiles with nine prescriptions, giving different angles of refraction. Different prescriptions were often used in different parts of the same window transom, sometimes to disperse the light vertically, and sometimes to bend light horizontally around obstacles like pillars. Prism tiles sometimes have elaborate artistic designs moulded into the outside; Frank Lloyd Wright created over forty prism tile designs. Prism lighting works more effectively in light, open spaces. Some believe that it contributed to the trend away from dark, subdivided Victorian interiors to open-plan, light-coloured ones. The removal or covering of old prism transom lights often leaves characteristically tall signage spaces over shop windows (see pictures). Daylight redirecting window film was initially made of one redirecting film and one glare-reducing diffusing film, often located on different interior surfaces of a double-glazed window, but integrated single films are now available. Some daylight redirecting films reflect incoming light upwards off tiny near-horizontal reflectors, so at high sun angles they bend it sharply, throwing it upwards to the ceiling, where a typical ceiling diffuses the daylight somewhat deeper into the space. Other daylight redirecting films refract light at any specified angle, ideally sending it nearly horizontally into the room. Redirecting films can be used as a substitute for opaque blinds.
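The refraction and total internal reflection that these systems rely on (see "How it works" above) follow Snell's law. A minimal sketch, assuming a typical glass refractive index of about 1.5:

```python
import math

def refraction_angle(theta_deg: float, n1: float = 1.0, n2: float = 1.5) -> float:
    """Snell's law, n1*sin(t1) = n2*sin(t2); returns the refracted angle in
    degrees, or NaN when the ray is totally internally reflected."""
    s = n1 * math.sin(math.radians(theta_deg)) / n2
    if abs(s) > 1.0:
        return math.nan          # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Light entering glass from air bends toward the normal...
print(refraction_angle(60.0))                    # ~35.3 degrees
# ...while inside glass, a steep enough ray cannot exit and is reflected instead.
print(refraction_angle(60.0, n1=1.5, n2=1.0))    # nan -> total internal reflection
```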
Technology
Lighting
null
55528362
https://en.wikipedia.org/wiki/Sling%20%28medicine%29
Sling (medicine)
A sling, also known as an arm sling, is a device used to limit movement of the shoulder or elbow while it heals. A sling can be created from a triangular bandage.
Technology
Devices
null
39712387
https://en.wikipedia.org/wiki/Linear%20function%20%28calculus%29
Linear function (calculus)
In calculus and related areas of mathematics, a linear function from the real numbers to the real numbers is a function whose graph (in Cartesian coordinates) is a non-vertical line in the plane. The characteristic property of linear functions is that when the input variable is changed, the change in the output is proportional to the change in the input. Linear functions are related to linear equations. Properties A linear function is a polynomial function in which the variable x has degree at most one: f(x) = ax + b. Such a function is called linear because its graph, the set of all points (x, f(x)) in the Cartesian plane, is a line. The coefficient a is called the slope of the function and of the line (see below). If the slope is a = 0, this is a constant function f(x) = b defining a horizontal line, which some authors exclude from the class of linear functions. With this definition, the degree of a linear polynomial would be exactly one, and its graph would be a line that is neither vertical nor horizontal. However, in this article, a ≠ 0 is not required, so constant functions will be considered linear. If b = 0, then the linear function is said to be homogeneous. Such a function defines a line that passes through the origin of the coordinate system, that is, the point (x, y) = (0, 0). In advanced mathematics texts, the term linear function often denotes specifically homogeneous linear functions, while the term affine function is used for the general case, which includes b ≠ 0. The natural domain of a linear function f(x), the set of allowed input values for x, is the entire set of real numbers. One can also consider such functions with x in an arbitrary field, taking the coefficients a, b in that field. The graph y = f(x) is a non-vertical line having exactly one intersection with the y-axis, its y-intercept point (0, b). The y-intercept value f(0) = b is also called the initial value of f. If a ≠ 0, the graph is a non-horizontal line having exactly one intersection with the x-axis, the x-intercept point (−b/a, 0). The x-intercept value x = −b/a, the solution of the equation f(x) = 0, is also called the root or zero of f. Slope The slope of a non-vertical line is a number that measures how steeply the line is slanted (rise-over-run). If the line is the graph of the linear function f(x) = ax + b, this slope is given by the constant a. The slope measures the constant rate of change of f(x) per unit change in x: whenever the input x is increased by one unit, the output changes by a units: f(x + 1) = f(x) + a, and more generally f(x + Δ) = f(x) + aΔ for any number Δ. If the slope is positive, a > 0, then the function f(x) is increasing; if a < 0, then it is decreasing. In calculus, the derivative of a general function measures its rate of change. A linear function f(x) = ax + b has a constant rate of change equal to its slope a, so its derivative is the constant function f′(x) = a. The fundamental idea of differential calculus is that any smooth function (not necessarily linear) can be closely approximated near a given point x = c by a unique linear function. The derivative f′(c) is the slope of this linear function, and the approximation is f(x) ≈ f(c) + f′(c)(x − c) for x ≈ c. The graph of the linear approximation is the tangent line of the graph y = f(x) at the point (c, f(c)). The derivative slope f′(c) generally varies with the point c. Linear functions can be characterized as the only real functions whose derivative is constant: if f′(x) = a for all x, then f(x) = ax + b with b = f(0).
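A tiny numerical sketch of the linear-approximation idea above, estimating the derivative by a central difference (the function and point are arbitrary illustrative choices):

```python
import math

def tangent_line(f, c, h=1e-6):
    """Return the linear function x -> f(c) + f'(c)(x - c), with the slope
    f'(c) estimated by a central difference."""
    slope = (f(c + h) - f(c - h)) / (2 * h)
    return lambda x: f(c) + slope * (x - c)

approx = tangent_line(math.sin, c=0.0)           # tangent to sin at the origin
for x in (0.01, 0.1, 0.5):
    print(x, math.sin(x), approx(x))             # agreement degrades away from c
```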
Slope-intercept, point-slope, and two-point forms A given linear function can be written in several standard formulas displaying its various properties. The simplest is the slope-intercept form f(x) = ax + b, from which one can immediately see the slope a and the initial value f(0) = b, which is the y-intercept of the graph y = f(x). Given a slope a and one known value f(x0) = y0, we write the point-slope form f(x) = y0 + a(x − x0). In graphical terms, this gives the line with slope a passing through the point (x0, y0). The two-point form starts with two known values f(x0) = y0 and f(x1) = y1. One computes the slope a = (y1 − y0)/(x1 − x0) and inserts this into the point-slope form: f(x) = y0 + ((y1 − y0)/(x1 − x0))(x − x0). Its graph is the unique line passing through the points (x0, y0) and (x1, y1). The equation may also be written to emphasize the constant slope: (f(x) − y0)/(x − x0) = (y1 − y0)/(x1 − x0). Relationship with linear equations Linear functions commonly arise from practical problems involving variables x, y with a linear relationship, that is, obeying a linear equation Ax + By = C. If B ≠ 0, one can solve this equation for y, obtaining y = −(A/B)x + C/B = ax + b, where we denote a = −A/B and b = C/B. That is, one may consider y as a dependent variable (output) obtained from the independent variable (input) x via a linear function: y = f(x) = ax + b. In the xy-coordinate plane, the possible values of (x, y) form a line, the graph of the function f. If B = 0 in the original equation, the resulting line x = C/A is vertical, and cannot be written as y = f(x). The features of the graph y = f(x) = ax + b can be interpreted in terms of the variables x and y. The y-intercept is the initial value y = f(0) = b at x = 0. The slope a measures the rate of change of the output y per unit change in the input x. In the graph, moving one unit to the right (increasing x by 1) moves the y-value up by a: that is, f(x + 1) = f(x) + a. Negative slope a indicates a decrease in y for each increase in x. For example, the linear function y = −2x + 4 has slope a = −2, y-intercept point (0, 4), and x-intercept point (2, 0). Example Suppose salami and sausage cost €6 and €3 per kilogram, and we wish to buy €12 worth. How much of each can we purchase? If x kilograms of salami and y kilograms of sausage cost a total of €12, then €6×x + €3×y = €12. Solving for y gives the point-slope form y = f(x) = −2x + 4, as above. That is, if we first choose the amount of salami x, the amount of sausage can be computed as a function y = f(x) = −2x + 4. Since salami costs twice as much as sausage, adding one kilo of salami decreases the sausage by 2 kilos: f(x + 1) = f(x) − 2, and the slope is −2. The y-intercept point (0, 4) corresponds to buying only 4 kg of sausage; while the x-intercept point (2, 0) corresponds to buying only 2 kg of salami. Note that the graph includes points with negative values of x or y, which have no meaning in terms of the original variables (unless we imagine selling meat to the butcher). Thus we should restrict our function f(x) to the domain 0 ≤ x ≤ 2. Also, we could choose y as the independent variable, and compute x by the inverse linear function x = g(y) = −(1/2)y + 2 over the domain 0 ≤ y ≤ 4.
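A minimal sketch of the two-point form applied to the example above, in pure Python; the two intercept points (0, 4) and (2, 0) come from the text:

```python
def line_through(x0, y0, x1, y1):
    """Linear function through (x0, y0) and (x1, y1), via the two-point form."""
    a = (y1 - y0) / (x1 - x0)            # slope
    return lambda x: y0 + a * (x - x0)   # point-slope form

f = line_through(0, 4, 2, 0)             # the budget line y = 4 - 2x
print(f(1.0))                            # 1 kg of salami leaves 2.0 kg of sausage
print(f(1.5))                            # 1.5 kg of salami leaves 1.0 kg
```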
If both arguments and values of a function are in the logarithmic scale (i.e., when log y is a linear function of log x), then the straight line represents a power law: log_r y = a·log_r x + b, so that y = r^b · x^a. On the other hand, the graph of a linear function in terms of polar coordinates, r = f(φ) = aφ + b, is an Archimedean spiral if a ≠ 0 and a circle otherwise.
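As a small worked illustration of the two-point form discussed above, the following Python sketch (the helper name line_through is hypothetical, not from the article) recovers the budget line of the salami-and-sausage example from its two intercept points:

    def line_through(x0, y0, x1, y1):
        """Linear function through (x0, y0) and (x1, y1) via the two-point form."""
        a = (y1 - y0) / (x1 - x0)           # slope
        return lambda x: y0 + a * (x - x0)  # point-slope form

    # The salami-and-sausage line passes through (0, 4) and (2, 0).
    f = line_through(0, 4, 2, 0)
    print(f(0), f(1), f(2))  # 4.0 2.0 0.0 -> y-intercept, midpoint, x-intercept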
Mathematics
Specific functions
null
48779551
https://en.wikipedia.org/wiki/Mathematical%20constant
Mathematical constant
A mathematical constant is a number whose value is fixed by an unambiguous definition, often referred to by a special symbol (e.g., an alphabet letter), or by mathematicians' names to facilitate using it across multiple mathematical problems. Constants arise in many areas of mathematics, with constants such as e and π occurring in such diverse contexts as geometry, number theory, statistics, and calculus. Some constants arise naturally by a fundamental principle or intrinsic property, such as the ratio between the circumference and diameter of a circle (π). Other constants are notable more for historical reasons than for their mathematical properties. The more popular constants have been studied throughout the ages and computed to many decimal places. All named mathematical constants are definable numbers, and usually are also computable numbers (Chaitin's constant being a significant exception). Basic mathematical constants These are constants which one is likely to encounter during pre-college education in many countries. Pythagoras' constant The square root of 2, often known as root 2 or Pythagoras' constant, and written as √2, is the unique positive real number that, when multiplied by itself, gives the number 2. It is more precisely called the principal square root of 2, to distinguish it from the negative number with the same property. Geometrically the square root of 2 is the length of a diagonal across a square with sides of one unit of length; this follows from the Pythagorean theorem. It is an irrational number, possibly the first number to be known as such, and an algebraic number. Its numerical value truncated to 50 decimal places is: 1.41421356237309504880168872420969807856967187537694. Alternatively, the quick approximation 99/70 (≈ 1.41429) for the square root of two was frequently used before the common use of electronic calculators and computers. Despite having a denominator of only 70, it differs from the correct value by less than 1/10,000 (approx. 7.2 × 10⁻⁵). Its simple continued fraction is periodic and given by: √2 = [1; 2, 2, 2, ...] = 1 + 1/(2 + 1/(2 + 1/(2 + ...))). Archimedes' constant The constant π (pi) has a natural definition in Euclidean geometry as the ratio between the circumference and diameter of a circle. It may be found in many other places in mathematics: for example, the Gaussian integral, the complex roots of unity, and Cauchy distributions in probability. However, its ubiquity is not limited to pure mathematics. It appears in many formulas in physics, and several physical constants are most naturally defined with π or its reciprocal factored out. For example, the ground state wave function of the hydrogen atom is ψ(r) = (1/√(πa₀³)) e^(−r/a₀), where a₀ is the Bohr radius. π is an irrational number, a transcendental number and an algebraic period. The numeric value of π is approximately: 3.14159265358979323846. Unusually good approximations are given by the fractions 22/7 and 355/113. Memorizing as well as computing increasingly more digits of π is a world record pursuit. Euler's number Euler's number e, also known as the exponential growth constant, appears in many areas of mathematics, and one possible definition of it is the value of the following expression: e = lim (n → ∞) (1 + 1/n)^n. The constant e is intrinsically related to the exponential function e^x. The Swiss mathematician Jacob Bernoulli discovered that e arises in compound interest: If an account starts at $1, and yields interest at annual rate R, then as the number of compounding periods per year tends to infinity (a situation known as continuous compounding), the amount of money at the end of the year will approach e^R dollars.
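The limit definition converges slowly but is easy to observe numerically; here is a minimal Python sketch (the sample values of n are arbitrary):

    import math

    # Approach e = lim (1 + 1/n)^n as n grows.
    for n in (1, 10, 1_000, 1_000_000):
        print(n, (1 + 1 / n) ** n)

    print("math.e =", math.e)  # 2.718281828459045...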
The constant e also has applications to probability theory, where it arises in a way not obviously related to exponential growth. As an example, suppose that a slot machine with a one in n probability of winning is played n times; then for large n (e.g., one million), the probability that nothing will be won tends to 1/e as n tends to infinity. Another application of e, discovered in part by Jacob Bernoulli along with the French mathematician Pierre Raymond de Montmort, is in the problem of derangements, also known as the hat check problem. Here, n guests are invited to a party, and at the door each guest checks his hat with the butler, who then places them into labelled boxes. The butler does not know the names of the guests, and hence must put them into boxes selected at random. The problem of de Montmort is: what is the probability that none of the hats gets put into the right box? The answer is p(n) = 1 − 1/1! + 1/2! − 1/3! + ... + (−1)^n/n!, which, as n tends to infinity, approaches 1/e. e is an irrational number and a transcendental number. The numeric value of e is approximately: 2.71828182845904523536. The imaginary unit The imaginary unit or unit imaginary number, denoted as i, is a mathematical concept which extends the real number system to the complex number system. The imaginary unit's core property is that i² = −1. The term "imaginary" was coined because there is no (real) number having a negative square. There are in fact two complex square roots of −1, namely i and −i, just as there are two complex square roots of every other real number (except zero, which has one double square root). In contexts where the symbol i is ambiguous or problematic, j or the Greek iota (ι) is sometimes used. This is in particular the case in electrical engineering and control systems engineering, where the imaginary unit is often denoted by j, because i is commonly used to denote electric current. The golden ratio The formula F(n) = (φ^n − (1 − φ)^n)/√5 gives the nth Fibonacci number explicitly in terms of the golden ratio φ. The number φ, also called the golden ratio, turns up frequently in geometry, particularly in figures with pentagonal symmetry. Indeed, the length of a regular pentagon's diagonal is φ times its side. The vertices of a regular icosahedron are those of three mutually orthogonal golden rectangles. Also, it is related to the Fibonacci sequence, related to growth by recursion. Kepler proved that φ is the limit of the ratio of consecutive Fibonacci numbers. The continued fraction expansion of the golden ratio converges more slowly than that of any other irrational number. It is, for that reason, one of the worst cases of Lagrange's approximation theorem and it is an extremal case of the Hurwitz inequality for diophantine approximations. This may be why angles close to the golden ratio often show up in phyllotaxis (the growth of plants). It is approximately equal to 1.61803398874989484820 or, more precisely, (1 + √5)/2. Constants in advanced mathematics These are constants which are encountered frequently in higher mathematics. The Euler–Mascheroni constant γ Euler's constant or the Euler–Mascheroni constant γ is defined as the limiting difference between the harmonic series and the natural logarithm: γ = lim (n → ∞) (1 + 1/2 + 1/3 + ... + 1/n − ln n). It appears frequently in mathematics, especially in number theoretical contexts such as Mertens' third theorem or the growth rate of the divisor function. It has relations to the gamma function and its derivatives as well as the zeta function, and there exist many different integrals and series involving γ. Despite the ubiquity of the Euler–Mascheroni constant, many of its properties remain unknown.
That includes the major open questions of whether it is a rational or irrational number and whether it is algebraic or transcendental. In fact, γ has been described as a mathematical constant "shadowed only by π and e in importance." The numeric value of γ is approximately: 0.57721566490153286060. Apéry's constant ζ(3) Apéry's constant is defined as the sum of the reciprocals of the cubes of the natural numbers: ζ(3) = 1/1³ + 1/2³ + 1/3³ + ... It is the special value of the Riemann zeta function at s = 3. The quest to find an exact value for this constant in terms of other known constants and elementary functions originated when Euler famously solved the Basel problem by giving ζ(2) = π²/6. To date no such value has been found and it is conjectured that there is none. However, there exist many representations of ζ(3) in terms of infinite series. Apéry's constant arises naturally in a number of physical problems, including in the second- and third-order terms of the electron's gyromagnetic ratio, computed using quantum electrodynamics. ζ(3) is known to be an irrational number, which was proven by the French mathematician Roger Apéry in 1979. It is however not known whether it is algebraic or transcendental. The numeric value of Apéry's constant is approximately: 1.202056903159594. Catalan's constant G Catalan's constant is defined by the alternating sum of the reciprocals of the odd square numbers: G = 1/1² − 1/3² + 1/5² − 1/7² + ... It is the special value of the Dirichlet beta function at s = 2. Catalan's constant appears frequently in combinatorics and number theory and also outside mathematics, such as in the calculation of the mass distribution of spiral galaxies. Questions about the arithmetic nature of this constant also remain unanswered; it has been called "arguably the most basic constant whose irrationality and transcendence (though strongly suspected) remain unproven." There exist many integral and series representations of Catalan's constant. It is named after the French and Belgian mathematician Charles Eugène Catalan. The numeric value of G is approximately: 0.915965594177219. The Feigenbaum constants α and δ Iterations of continuous maps serve as the simplest examples of models for dynamical systems. Named after mathematical physicist Mitchell Feigenbaum, the two Feigenbaum constants appear in such iterative processes: they are mathematical invariants of logistic maps with quadratic maximum points and their bifurcation diagrams. Specifically, the constant α is the ratio between the width of a tine and the width of one of its two subtines, and the constant δ is the limiting ratio of each bifurcation interval to the next between every period-doubling bifurcation. The logistic map is a polynomial mapping, often cited as an archetypal example of how chaotic behaviour can arise from very simple non-linear dynamical equations. The map was popularized in a seminal 1976 paper by the Australian biologist Robert May, in part as a discrete-time demographic model analogous to the logistic equation first created by Pierre François Verhulst. The difference equation x(n+1) = r·x(n)·(1 − x(n)) is intended to capture the two effects of reproduction and starvation. The Feigenbaum constants in bifurcation theory are analogous to π in geometry and e in calculus. Neither of them is known to be irrational, let alone transcendental. However proofs of their universality exist. The respective approximate numeric values of δ and α are 4.669201609102990 and 2.502907875095892.
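Several of the constants in this section are defined by limits or series that are simple enough to check numerically. The following Python sketch (truncation points chosen arbitrarily; an illustration, not a rigorous computation) evaluates the defining expressions for γ, ζ(3) and G:

    import math

    # Euler-Mascheroni constant: partial harmonic sum minus ln(n).
    n = 1_000_000
    gamma = sum(1.0 / k for k in range(1, n + 1)) - math.log(n)
    print(gamma)    # ~0.5772161..., near 0.5772156649...

    # Apery's constant: sum of reciprocal cubes.
    zeta3 = sum(1.0 / k**3 for k in range(1, 100_001))
    print(zeta3)    # ~1.2020569..., zeta(3)

    # Catalan's constant: alternating sum over the odd squares.
    catalan = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(200_000))
    print(catalan)  # ~0.9159655..., G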
Mathematical curiosities Simple representatives of sets of numbers Liouville's constant is a simple example of a transcendental number. Some constants, such as the square root of 2, Liouville's constant and the Champernowne constant C₁₀ = 0.1234567891011121314... (formed by concatenating the decimal representations of the successive positive integers) are not important mathematical invariants but retain interest as simple representatives of special sets of numbers: the irrational numbers, the transcendental numbers and the normal numbers (in base 10), respectively. The discovery of the irrational numbers is usually attributed to the Pythagorean Hippasus of Metapontum who proved, most likely geometrically, the irrationality of the square root of 2. As for Liouville's constant, named after French mathematician Joseph Liouville, it was the first number to be proven transcendental. Chaitin's constant Ω In the computer science subfield of algorithmic information theory, Chaitin's constant is the real number representing the probability that a randomly chosen Turing machine will halt, formed from a construction due to Argentine-American mathematician and computer scientist Gregory Chaitin. Chaitin's constant, though not computable, has been proven to be transcendental and normal. Chaitin's constant is not universal, depending heavily on the numerical encoding used for Turing machines; however, its interesting properties are independent of the encoding. Notation Representing constants It is common to express the numerical value of a constant by giving its decimal representation (or just the first few digits of it). For two reasons this representation may cause problems. First, even though rational numbers all have a finite or eventually-repeating decimal expansion, irrational numbers do not have such an expansion, making them impossible to describe completely in this manner. Also, the decimal expansion of a number is not necessarily unique. For example, the two representations 0.999... and 1 are equivalent in the sense that they represent the same number. Calculating digits of the decimal expansion of constants has been a common enterprise for many centuries. For example, German mathematician Ludolph van Ceulen of the 16th century spent a major part of his life calculating the first 35 digits of pi. Using computers and supercomputers, some of the mathematical constants, including π, e, and the square root of 2, have been computed to more than one hundred billion digits. Fast algorithms have been developed, some of which — as for Apéry's constant — are unexpectedly fast. Graham's number defined using Knuth's up-arrow notation. Some constants differ so much from the usual kind that a new notation has been invented to represent them reasonably. Graham's number illustrates this, as Knuth's up-arrow notation is used. It may be of interest to represent them using continued fractions to perform various studies, including statistical analysis. Many mathematical constants have an analytic form, that is, they can be constructed using well-known operations that lend themselves readily to calculation. Not all constants have known analytic forms, though; Grossman's constant and Foias' constant are examples. Symbolizing and naming of constants Symbolizing constants with letters is a frequent means of making the notation more concise. A common convention, instigated by René Descartes in the 17th century and Leonhard Euler in the 18th century, is to use lower case letters from the beginning of the Latin alphabet or the Greek alphabet when dealing with constants in general. However, for more important constants, the symbols may be more complex and have an extra letter, an asterisk, a number, a lemniscate or use different alphabets such as Hebrew, Cyrillic or Gothic.
Examples of different kinds of notation for constants: the Erdős–Borwein constant E, the Embree–Trefethen constant β*, Brun's constant for twin primes B₂, the Champernowne constants C_b, and the cardinal number aleph naught (ℵ₀). Sometimes, the symbol representing a constant is a whole word. For example, American mathematician Edward Kasner's 9-year-old nephew coined the names googol and googolplex. Other names are either related to the meaning of the constant (universal parabolic constant, twin prime constant, ...) or to a specific person (Sierpiński's constant, Josephson constant, and so on). Selected mathematical constants Abbreviations used: Gen – General, NuT – Number theory, ChT – Chaos theory, Com – Combinatorics, Ana – Mathematical analysis
Mathematics
Basics
null
56926421
https://en.wikipedia.org/wiki/Missing%20link%20%28human%20evolution%29
Missing link (human evolution)
"Missing link" is a recently-discovered transitional fossil. It is often used in popular science and in the media for any new transitional form. The term originated to describe the intermediate form in the evolutionary series of anthropoid ancestors to anatomically modern humans (hominization). The term was influenced by the pre-Darwinian evolutionary theory of the Great Chain of Being and the now-outdated notion (orthogenesis) that simple organisms are more primitive than complex organisms. The term "missing link" has been supported by geneticists since evolutionary trees only have data at the tips and nodes of their branches; the rest is inference and not evidence of fossils. However, it has fallen out of favor with anthropologists because it implies the evolutionary process is a linear phenomenon and that forms originate consecutively in a chain. Instead, last common ancestor is preferred since this does not have the connotation of linear evolution, as evolution is a branching process. There is no singular missing link. The scarcity of transitional fossils can be attributed to the incompleteness of the fossil record. Historical origins The term "missing link" was influenced by the 18th-century Enlightenment thinkers such as Alexander Pope and Jean-Jacques Rousseau who thought of humans as links in the Great Chain of Being, a hierarchical structure of all matter and life. Influenced by Aristotle's theory of higher and lower animals, the Great Chain of Being was created during the Medieval period in Europe and was strongly influenced by religious thought. God was at the top of the chain followed by man and then animals. It was during the 18th century that the set nature of species and their immutable place in the great chain was questioned. The dual nature of the chain, divided yet united, had always allowed for seeing creation as essentially one continuous whole, with the potential for overlap between the links. Radical thinkers like Jean-Baptiste Lamarck saw a progression of life forms from the simplest creatures striving towards complexity and perfection, a schema accepted by zoologists like Henri de Blainville. The very idea of an ordering of organisms, even if supposedly fixed, laid the basis for the idea of transmutation of species, for example Charles Darwin's theory of evolution. The earliest publication that explicitly uses the term “missing link” was in 1844 in Vestiges of the Natural History of Creation by Robert Chambers, which uses the term in an evolutionary context relating to gaps in the fossil record. Charles Lyell employed the term a few years later in 1851 in his third edition of Elements of Geology too as a metaphor for the missing gaps in the continuity of the geological column. It was used as a name for transitional types between different taxa was in 1863, in Lyell's Geological Evidences of the Antiquity of Man. Historical beliefs about the missing link Jean-Baptiste Lamarck envisioned that life is generated in the form of the simplest creatures constantly, and then strive towards complexity and perfection (i.e. humans) through a series of lower forms. In his view, lower animals were simply newcomers on the evolutionary scene. After Darwin's On the Origin of Species, the idea of "lower animals" representing earlier stages in evolution lingered, as demonstrated in Ernst Haeckel's figure of the human pedigree. 
While the vertebrates were then seen as forming a sort of evolutionary sequence, the various classes were distinct, the undiscovered intermediate forms being called "missing links". Haeckel claimed that human evolution occurred in 24 stages and that the 23rd stage was a theoretical missing link he named Pithecanthropus alalus ("ape-man lacking speech"). Haeckel claimed the origin of humanity was to be found in Asia. He theorized that the missing link was to be found on the lost continent of Lemuria located in the Indian Ocean. He believed that Lemuria was the home of the first humans and that Asia was the home of many of the earliest primates; he thus held that Asia was the cradle of hominid evolution. Haeckel argued that humans were closely related to the primates of Southeast Asia and rejected Darwin's hypothesis of human origins in Africa. The search for a fossil that connected man and ape was unproductive until the Dutch paleontologist Eugene Dubois went to Indonesia. Between 1886 and 1895 Dubois discovered remains that he later described as "an intermediate species between humans and monkeys". He named the hominin Pithecanthropus erectus (erect ape-man), which has now been reclassified as Homo erectus. In the media, the Java Man was hailed as the missing link. For instance, the headline of the Philadelphia Inquirer on February 3, 1895, was "The Missing Link: A Dutch Surgeon in Java Unearths the Needed Specimen". Famous "missing links" in human evolution Among the famous fossil finds credited as the "missing link" in human evolution are: Neanderthal (Homo neanderthalensis): A sibling human species. Java Man (Homo erectus): Discovered by Eugene Dubois in 1891 in Indonesia. Originally named Pithecanthropus erectus. Piltdown Man: A set of bones found in 1912 thought to be the "missing link" between ape and man. Eventually revealed to be a hoax. Nebraska Man: Originally described as an ape by Henry Fairfield Osborn in 1922, on the basis of a tooth found by rancher and geologist Harold Cook in Nebraska in 1917. Later, the original classification proved to be a mistake, and was retracted in 1927. It was never widely accepted. Taung Child (Australopithecus africanus): Discovered by Raymond Dart in 1924 in South Africa. Heidelberg Man: Probably ancestral to Homo sapiens and Homo neanderthalensis. Homo habilis (described in 1964) has features intermediate between Australopithecus and Homo erectus, and its classification in Homo rather than Australopithecus has been questioned. Lucy (Australopithecus afarensis): Discovered in 1974 by Donald Johanson in Ethiopia. Australopithecus sediba: A series of skeletons discovered in South Africa between 2008 and 2010.
Biology and health sciences
Human evolution: General
Biology
51137781
https://en.wikipedia.org/wiki/Goat%20farming
Goat farming
Goat farming involves the raising and breeding of domestic goats (Capra aegagrus hircus) as a branch of animal husbandry. People farm goats principally for their meat, milk, fibre and skins. Goat farming can be very suited to production alongside other livestock (such as sheep and cattle) on low-quality grazing land. Goats efficiently convert sub-quality grazing matter that is less desirable for other livestock into quality lean meat. Furthermore, goats can be farmed with a relatively small area of pasture and with limited resources. Pasture As with other herbivores, the number of animals that a goat farmer can raise and sustain is dependent on the quality of the pasture. However, since goats will eat vegetation that most other domesticated livestock decline, they will subsist even on very poor land. Therefore, goat herds remain an important asset in regions with sparse and low quality vegetation. Worldwide goat population statistics According to the Food and Agriculture Organization (FAO), the top producers of goat milk in 2008 were India (4 million metric tons), Bangladesh (2.16 million metric tons) and the Sudan (1.47 million metric tons). In the US Meat goats are farmed in all US states, although most meat goat production occurs in the Southeast. Texas is the primary producer of meat goats, representing 38% of US production. Male goats are generally not required for the dairy-goat industry and are usually slaughtered for meat soon after birth. In the UK, approximately 30,000 billy goats from the dairy industry are slaughtered each year.
Technology
Animal husbandry
null
34298991
https://en.wikipedia.org/wiki/Geological%20history%20of%20Mars
Geological history of Mars
The geological history of Mars follows the physical evolution of Mars as substantiated by observations, indirect and direct measurements, and various inference techniques. Methods dating back to 17th-century techniques developed by Nicholas Steno, including the so-called law of superposition and stratigraphy, used to estimate the geological histories of Earth and the Moon, are being actively applied to the data available from several Martian observational and measurement resources. These include landers, orbiting platforms, Earth-based observations, and Martian meteorites. Observations of the surfaces of many Solar System bodies reveal important clues about their evolution. For example, a lava flow that spreads out and fills a large impact crater is likely to be younger than the crater. On the other hand, a small crater on top of the same lava flow is likely to be younger than both the lava and the larger crater since it can be surmised to have been the product of a later, unobserved, geological event. This principle, called the law of superposition, along with other principles of stratigraphy first formulated by Nicholas Steno in the 17th century, allowed geologists of the 19th century to divide the history of the Earth into the familiar eras of Paleozoic, Mesozoic, and Cenozoic. The same methodology was later applied to the Moon and then to Mars. Another stratigraphic principle used on planets where impact craters are well preserved is that of crater number density. The number of craters greater than a given size per unit surface area (usually one million km²) provides a relative age for that surface. Heavily cratered surfaces are old, and sparsely cratered surfaces are young. Old surfaces have many big craters, and young surfaces have mostly small craters or none at all. These stratigraphic concepts form the basis for the Martian geologic timescale. Relative ages from stratigraphy Stratigraphy establishes the relative ages of layers of rock and sediment by denoting differences in composition (solids, liquids, and trapped gases). Assumptions are often incorporated about the rate of deposition, which generates a range of potential age estimates across any set of observed sediment layers. Absolute ages On Earth, the primary method for calibrating geological ages to a calendar is radiometric dating. Combining the constraints from multiple different radioisotope systems can improve the precision in an age estimate. Using stratigraphic principles, the ages of geological units can usually only be determined relative to each other. For example, Mesozoic rock strata making up the Cretaceous system lie on top of rocks of the Jurassic system, so the Cretaceous is more recent than the Jurassic. However, this tells us nothing about how long ago either the Cretaceous or Jurassic periods were, only their relative order. Absolute ages from radiometric dating are required to calibrate the stratigraphic sequence. This requires laboratory analysis of physical samples retrieved from locations with known stratigraphy, which is generally only possible for rocks on Earth. A small number of absolute ages have also been determined for rock units on the Moon, from which samples have been returned to Earth. Lunar relative ages are provided by crater counting. Although the number of calibration points is small, this has allowed the derivation of an approximate dating system for the Moon. Assigning absolute ages to rock units on Mars is far more problematic.
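The crater-density principle above lends itself to a simple relative-age comparison; the Python sketch below uses hypothetical crater counts and areas (made-up illustration values, not real survey data):

    # Compare relative ages of two surfaces by crater number density
    # (craters larger than a cutoff diameter, per million square km).

    def crater_density(n_craters, area_km2):
        """Craters per million square kilometres."""
        return n_craters / (area_km2 / 1_000_000)

    highlands = crater_density(n_craters=420, area_km2=2_500_000)
    plains = crater_density(n_craters=35, area_km2=1_800_000)

    # Higher density -> older surface (a relative age only, not an absolute date).
    print(f"highlands: {highlands:.0f} craters per 10^6 km^2")
    print(f"plains:    {plains:.0f} craters per 10^6 km^2")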
Numerous attempts have been made over the years to determine an absolute Martian chronology (timeline) by comparing estimated impact cratering rates for Mars to those on the Moon. If the rate of impact crater formation on Mars by crater size per unit area over geologic time (the production rate or flux) is known with precision, then crater densities also provide a way to determine absolute ages. Unfortunately, practical difficulties in crater counting and uncertainties in estimating the flux still create huge uncertainties in the ages derived from these methods. Martian meteorites have provided datable samples that are consistent with ages calculated thus far, but the locations on Mars from where the meteorites came (provenance) are unknown, limiting their value as chronostratigraphic tools. Absolute ages determined by crater density should therefore be taken with some skepticism. Crater density timescale Studies of impact crater densities on the Martian surface have delineated four broad periods in the planet's geologic history. The periods were named after places on Mars that had large-scale surface features, such as large craters or widespread lava flows, that date back to these time periods. The absolute ages given here are only approximate. From oldest to youngest, the time periods are: Pre-Noachian: the interval from the accretion and differentiation of the planet about 4.5 billion years ago (Gya) to the formation of the Hellas impact basin, between 4.1 and 3.8 Gya. Most of the geologic record of this interval has been erased by subsequent erosion and high impact rates. The crustal dichotomy is thought to have formed during this time, along with the Argyre and Isidis basins. Noachian Period (named after Noachis Terra): Formation of the oldest extant surfaces of Mars between 4.1 and about 3.7 Gya. Noachian-aged surfaces are scarred by many large impact craters. The Tharsis bulge is thought to have formed during the Noachian, along with extensive erosion by liquid water producing river valley networks. Large lakes or oceans may have been present. Hesperian Period (named after Hesperia Planum): 3.7 to approximately 3.0 Gya. It is marked by the formation of extensive lava plains. The formation of Olympus Mons probably began during this period. Catastrophic releases of water carved out extensive outflow channels around Chryse Planitia and elsewhere. Ephemeral lakes or seas may have formed in the northern lowlands. Amazonian Period (named after Amazonis Planitia): 3.0 Gya to present. Amazonian regions have few meteorite impact craters but are otherwise quite varied. Lava flows, glacial/periglacial activity, and minor releases of liquid water continued during this period. The date of the Hesperian/Amazonian boundary is particularly uncertain and could range anywhere from 3.0 to 1.5 Gya. Basically, the Hesperian is thought of as a transitional period between the end of heavy bombardment and the cold, dry Mars seen today. Mineral alteration timescale In 2006, researchers using data from the OMEGA Visible and Infrared Mineralogical Mapping Spectrometer on board the Mars Express orbiter proposed an alternative Martian timescale based on the predominant type of mineral alteration that occurred on Mars due to different styles of chemical weathering in the planet's past. They proposed dividing the history of Mars into three eras: the Phyllocian, Theiikian and Siderikan. 
The Phyllocian (named after the phyllosilicate or clay minerals that characterize the era) lasted from the formation of the planet until around the Early Noachian (about 4.0 Gya). OMEGA identified outcroppings of phyllosilicates at numerous locations on Mars, all in rocks that were exclusively Pre-Noachian or Noachian in age (most notably in rock exposures in Nili Fossae and Mawrth Vallis). Phyllosilicates require a water-rich, alkaline environment to form. The Phyllocian era correlates with the age of valley network formation on Mars, suggesting an early climate that was conducive to the presence of abundant surface water. It is thought that deposits from this era are the best candidates in which to search for evidence of past life on the planet. The Theiikian (named from the Greek word for sulphurous, after the sulphate minerals that were formed) lasted until about 3.5 Gya. It was an era of extensive volcanism, which released large amounts of sulphur dioxide (SO₂) into the atmosphere. The SO₂ combined with water to create a sulphuric acid-rich environment that allowed the formation of hydrated sulphates (notably kieserite and gypsum). The Siderikan (named from the Greek word for iron, after the iron oxides that formed) lasted from 3.5 Gya until the present. With the decline of volcanism and available water, the most notable surface weathering process has been the slow oxidation of the iron-rich rocks by atmospheric peroxides, producing the red iron oxides that give the planet its familiar colour.
Physical sciences
Solar System
Astronomy
60373549
https://en.wikipedia.org/wiki/Phase%20separation
Phase separation
Phase separation is the creation of two distinct phases from a single homogeneous mixture. The most common type of phase separation is between two immiscible liquids, such as oil and water. This type of phase separation is known as liquid-liquid equilibrium. Colloids are formed by phase separation, though not all phase separation forms colloids - for example, oil and water can form separated layers under gravity rather than remaining as microscopic droplets in suspension. A common form of spontaneous phase separation is termed spinodal decomposition; it is described by the Cahn–Hilliard equation. Regions of a phase diagram in which phase separation occurs are called miscibility gaps. There are two boundary curves of note: the binodal coexistence curve and the spinodal curve. On one side of the binodal, mixtures are absolutely stable. In between the binodal and the spinodal, mixtures may be metastable: staying mixed (or unmixed) absent some large disturbance. The region beyond the spinodal curve is absolutely unstable, and (if starting from a mixed state) will spontaneously phase-separate. The upper critical solution temperature (UCST) and the lower critical solution temperature (LCST) are two critical temperatures, above which or below which the components of a mixture are miscible in all proportions. It is rare for systems to have both, but some exist: the nicotine-water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C. Physical basis Mixing is governed by the Gibbs free energy, with phase separation or mixing occurring for whichever case lowers the Gibbs free energy. The free energy can be decomposed into two parts: G = H − TS, with H the enthalpy, T the temperature, and S the entropy. Thus, the change of the free energy upon mixing is ΔG_mix = ΔH_mix − TΔS_mix, combining the enthalpy of mixing and the entropy of mixing. The enthalpy of mixing is zero for ideal mixtures, and ideal mixtures are enough to describe many common solutions. Thus, in many cases, mixing (or phase separation) is driven primarily by the entropy of mixing. It is generally the case that the entropy will increase whenever a particle (an atom, a molecule) has a larger space to explore; and thus, the entropy of mixing is generally positive: the components of the mixture can increase their entropy by sharing a larger common volume. Phase separation is then driven by several distinct processes. In one case, the enthalpy of mixing is positive, and the temperature is low: the increase in entropy is insufficient to lower the free energy. In another, considerably rarer case, the entropy of mixing is "unfavorable", that is to say, it is negative. In this case, even if the change in enthalpy is negative, phase separation will occur unless the temperature is low enough. It is this second case which gives rise to the idea of the lower critical solution temperature. Phase separation in cold gases A mixture of two helium isotopes (helium-3 and helium-4) in a certain range of temperatures and concentrations separates into two parts. The initial mix of the two isotopes spontaneously separates into ⁴He-rich and ³He-rich regions. Phase separation also exists in ultracold gas systems. It has been shown experimentally in a two-component ultracold Fermi gas case.
The phase separation can compete with other phenomena such as vortex lattice formation or an exotic Fulde–Ferrell–Larkin–Ovchinnikov phase.
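Returning to the thermodynamic picture in the Physical basis section, the following Python sketch illustrates the enthalpy-entropy competition using the regular-solution model (an illustrative assumption; the article only states the general free-energy decomposition), where per mole ΔH_mix = Ωx(1 − x) and ΔS_mix = −R(x ln x + (1 − x) ln(1 − x)):

    import math

    R = 8.314        # gas constant, J/(mol K)
    omega = 12_000.0 # interaction parameter, J/mol (made-up illustrative value)

    def dG_mix(x, T):
        """Molar Gibbs free energy of mixing at composition x, temperature T."""
        dH = omega * x * (1 - x)                                 # enthalpy of mixing
        dS = -R * (x * math.log(x) + (1 - x) * math.log(1 - x))  # entropy of mixing
        return dH - T * dS

    # Positive enthalpy of mixing: at low T the entropy gain cannot pay for it.
    for T in (250.0, 900.0):
        g = dG_mix(0.5, T)
        print(f"T = {T:.0f} K: dG_mix = {g:+.0f} J/mol ->",
              "tends to phase-separate" if g > 0 else "stays mixed")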
Physical sciences
Phase separations
Chemistry
41077022
https://en.wikipedia.org/wiki/Earth%27s%20internal%20heat%20budget
Earth's internal heat budget
Earth's internal heat budget is fundamental to the thermal history of the Earth. The flow of heat from Earth's interior to the surface is estimated at 47±2 terawatts (TW) and comes from two main sources in roughly equal amounts: the radiogenic heat produced by the radioactive decay of isotopes in the mantle and crust, and the primordial heat left over from the formation of Earth. Earth's internal heat travels along geothermal gradients and powers most geological processes. It drives mantle convection, plate tectonics, mountain building, rock metamorphism, and volcanism. Convective heat transfer within the planet's high-temperature metallic core is also theorized to sustain a geodynamo which generates Earth's magnetic field. Despite its geological significance, Earth's interior heat contributes only 0.03% of Earth's total energy budget at the surface, which is dominated by 173,000 TW of incoming solar radiation. This external energy source powers most of the planet's atmospheric, oceanic, and biologic processes. Nevertheless, on land and at the ocean floor, the sensible heat absorbed from non-reflected insolation flows inward only by means of thermal conduction, and thus penetrates only a few dozen centimeters on the daily cycle and only a few dozen meters on the annual cycle. This renders solar radiation minimally relevant for processes internal to Earth's crust. Global data on heat-flow density are collected and compiled by the International Heat Flow Commission of the International Association of Seismology and Physics of the Earth's Interior. Heat and early estimate of Earth's age Based on calculations of Earth's cooling rate, which assumed constant conductivity in the Earth's interior, in 1862 William Thomson, later Lord Kelvin, estimated the age of the Earth at 98 million years, which contrasts with the age of 4.5 billion years obtained in the 20th century by radiometric dating. As pointed out by John Perry in 1895, a variable conductivity in the Earth's interior could expand the computed age of the Earth to billions of years, as later confirmed by radiometric dating. Contrary to the usual representation of Thomson's argument, the observed thermal gradient of the Earth's crust would not be explained by the addition of radioactivity as a heat source. More significantly, mantle convection alters how heat is transported within the Earth, invalidating Thomson's assumption of purely conductive cooling. Global internal heat flow Estimates of the total heat flow from Earth's interior to surface span a range of 43 to 49 terawatts (TW) (a terawatt is 10¹² watts). One recent estimate is 47 TW, equivalent to an average heat flux of 91.6 mW/m², and is based on more than 38,000 measurements. The respective mean heat flows of continental and oceanic crust are 70.9 and 105.4 mW/m². While the total internal Earth heat flow to the surface is well constrained, the relative contribution of the two main sources of Earth's heat, radiogenic and primordial heat, is highly uncertain because their direct measurement is difficult. Chemical and physical models give estimated ranges of 15–41 TW and 12–30 TW for radiogenic heat and primordial heat, respectively.
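The quoted average flux follows from simple arithmetic; a quick Python check (Earth's mean radius is a standard textbook value, not from the article; the small difference from the quoted 91.6 mW/m² comes from the exact surface area used):

    import math

    heat_flow_w = 47e12        # total internal heat flow, watts
    earth_radius_m = 6.371e6   # mean Earth radius, metres
    area_m2 = 4 * math.pi * earth_radius_m ** 2  # ~5.1e14 m^2

    flux_mw_per_m2 = heat_flow_w / area_m2 * 1000
    print(f"{flux_mw_per_m2:.1f} mW/m^2")  # ~92 mW/m^2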
The structure of Earth is a rigid outer crust that is composed of thicker continental crust and thinner oceanic crust, solid but plastically flowing mantle, a liquid outer core, and a solid inner core. The fluidity of a material is proportional to temperature; thus, the solid mantle can still flow on long time scales, as a function of its temperature and therefore as a function of the flow of Earth's internal heat. The mantle convects in response to heat escaping from Earth's interior, with hotter and more buoyant mantle rising and cooler, and therefore denser, mantle sinking. This convective flow of the mantle drives the movement of Earth's lithospheric plates; thus, an additional reservoir of heat in the lower mantle is critical for the operation of plate tectonics and one possible source is an enrichment of radioactive elements in the lower mantle. Earth heat transport occurs by conduction, mantle convection, hydrothermal convection, and volcanic advection. Earth's internal heat flow to the surface is thought to be 80% due to mantle convection, with the remaining heat mostly originating in the Earth's crust, with about 1% due to volcanic activity, earthquakes, and mountain building. Thus, about 99% of Earth's internal heat loss at the surface is by conduction through the crust, and mantle convection is the dominant control on heat transport from deep within the Earth. Most of the heat flow from the thicker continental crust is attributed to internal radiogenic sources; in contrast the thinner oceanic crust has only 2% internal radiogenic heat. The remaining heat flow at the surface would be due to basal heating of the crust from mantle convection. Heat fluxes are negatively correlated with rock age, with the highest heat fluxes from the youngest rock at mid-ocean ridge spreading centers (zones of mantle upwelling), as observed in the global map of Earth heat flow. Sources of heat Radiogenic heat The radioactive decay of elements in the Earth's mantle and crust results in production of daughter isotopes and release of geoneutrinos and heat energy, or radiogenic heat. About 50% of the Earth's internal heat originates from radioactive decay. Four radioactive isotopes are responsible for the majority of radiogenic heat because of their enrichment relative to other radioactive isotopes: uranium-238 (²³⁸U), uranium-235 (²³⁵U), thorium-232 (²³²Th), and potassium-40 (⁴⁰K). Due to a lack of rock samples from below 200 km depth, it is difficult to determine precisely the radiogenic heat throughout the whole mantle, although some estimates are available. For the Earth's core, geochemical studies indicate that it is unlikely to be a significant source of radiogenic heat due to an expected low concentration of radioactive elements partitioning into iron. Radiogenic heat production in the mantle is linked to the structure of mantle convection, a topic of much debate, and it is thought that the mantle may either have a layered structure with a higher concentration of radioactive heat-producing elements in the lower mantle, or small reservoirs enriched in radioactive elements dispersed throughout the whole mantle. Geoneutrino detectors can detect the decay of ²³⁸U and ²³²Th and thus allow estimation of their contribution to the present radiogenic heat budget, while ²³⁵U and ⁴⁰K are not thus detectable. Regardless, ⁴⁰K is estimated to contribute 4 TW of heating. However, due to their short half-lives, the decay of ²³⁵U and ⁴⁰K contributed a large fraction of radiogenic heat flux to the early Earth, which was also much hotter than at present.
Initial results from measuring the geoneutrino products of radioactive decay from within the Earth, a proxy for radiogenic heat, yielded a new estimate of half of the total Earth internal heat source being radiogenic, and this is consistent with previous estimates. Primordial heat Primordial heat is the heat lost by the Earth as it continues to cool from its original formation, and this is in contrast to its still actively-produced radiogenic heat. The Earth core's heat flow—heat leaving the core and flowing into the overlying mantle—is thought to be due to primordial heat, and is estimated at 5–15 TW. Estimates of mantle primordial heat loss range between 7 and 15 TW, which is calculated as the remainder of heat after removal of core heat flow and bulk-Earth radiogenic heat production from the observed surface heat flow. The early formation of the Earth's dense core could have caused superheating and rapid heat loss, and the heat loss rate would slow once the mantle solidified. Heat flow from the core is necessary for maintaining the convecting outer core and the geodynamo and Earth's magnetic field; because the magnetic field shields the atmosphere from erosion by the solar wind, primordial heat from the core thus helped preserve Earth's atmosphere and, with it, Earth's liquid water. Primordial heat energy comes from the potential energy released by collapsing a large amount of matter into a gravity well, and the kinetic energy of accreted matter. Heat flow and tectonic plates Controversy over the exact nature of mantle convection makes the linked evolution of Earth's heat budget and the dynamics and structure of the mantle difficult to unravel. There is evidence that the processes of plate tectonics were not active in the Earth before 3.2 billion years ago, and that early Earth's internal heat loss could have been dominated by advection via heat-pipe volcanism. Terrestrial bodies with lower heat flows, such as the Moon and Mars, conduct their internal heat through a single lithospheric plate, and higher heat flows, such as on Jupiter's moon Io, result in advective heat transport via enhanced volcanism, while the active plate tectonics of Earth occur with an intermediate heat flow and a convecting mantle.
Physical sciences
Geophysics
Earth science
41083964
https://en.wikipedia.org/wiki/Genetically%20modified%20tree
Genetically modified tree
A genetically modified tree (GMt, GM tree, genetically engineered tree, GE tree or transgenic tree) is a tree whose DNA has been modified using genetic engineering techniques. In most cases the aim is to introduce a novel trait to the plant which does not occur naturally within the species. Examples include resistance to certain pests, diseases, or environmental conditions, herbicide tolerance, or the alteration of lignin levels in order to reduce pulping costs. Genetically modified forest trees are not yet approved ("deregulated") for commercial use with the exception of insect-resistant poplar trees in China and one case of GM Eucalyptus in Brazil. Several genetically modified forest tree species are undergoing field trials for deregulation, and much of the research is being carried out by the pulp and paper industry, primarily with the intention of increasing the productivity of existing tree stock. Certain genetically modified orchard tree species have been deregulated for commercial use in the United States including the papaya and plum. The development, testing and use of GM trees remains at an early stage in comparison to GM crops. Research Research into genetically modified trees has been ongoing since 1988. Concerns surrounding the biosafety implications of releasing genetically modified trees into the wild have held back regulatory approval of GM forest trees. This concern is exemplified in the Convention on Biological Diversity's stance: A precondition for further commercialization of GM forest trees is likely to be their complete sterility. Plantation trees remain phenotypically similar to their wild cousins in that most are the product of no more than three generations of artificial selection; therefore, the risk of transgene escape by pollination with compatible wild species is high. One of the most credible science-based concerns with GM trees is their potential for wide dispersal of seed and pollen. The fact that pine pollen travels long distances is well established, moving up to 3,000 kilometers from its source. Additionally, many tree species reproduce for a long time before being harvested. In combination these factors have led some to believe that GM trees are worthy of special environmental considerations over GM crops. Ensuring sterility for GM trees has proven elusive, but efforts are being made. While tree geneticist Steve Strauss predicted that complete containment might be possible by 2020, many questions remain. Proposed uses GM trees under experimental development have been modified with traits intended to provide benefit to industry, foresters or consumers. Due to high regulatory and research costs, the majority of genetically modified trees in silviculture consist of plantation trees, such as eucalyptus, poplar, and pine. Lignin alteration Several companies and organizations (including ArborGen, GLBRC, ...) in the pulp and paper industry are interested in utilizing GM technology to alter the lignin content of plantation trees (particularly eucalyptus and poplar trees). It is estimated that reducing lignin in plantation trees by genetic modification could reduce pulping costs by up to $15 per cubic metre. Lignin removal from wood fibres conventionally relies on costly and environmentally hazardous chemicals. By developing low-lignin GM trees it is hoped that pulping and bleaching processes will require fewer inputs; therefore, mills supplied by low-lignin GM trees may have a reduced impact on their surrounding ecosystems and communities.
However, it is argued that reductions in lignin may compromise the structural integrity of the plant, thereby making it more susceptible to wind, snow, pathogens and disease, which could necessitate pesticide use exceeding that on traditional plantations. This has proven correct, and an alternative approach was developed at the University of British Columbia. This approach was to introduce chemically labile linkages instead (by inserting a gene from the plant Angelica sinensis), which allows the lignin to break down much more easily. Due to this new approach, the lignin from the trees not only breaks apart easily when treated with a mild base at temperatures of 100 °C, but the trees also maintained their growth potential and strength. Frost tolerance Genetic modification can allow trees to cope with abiotic stresses such that their geographic range is broadened. Freeze-tolerant GM eucalyptus trees for use in southern US plantations are currently being tested in open air sites with such an objective in mind. ArborGen, a tree biotechnology company and joint venture of pulp and paper firms Rubicon (New Zealand), MeadWestvaco (US) and International Paper (US), is leading this research. Until now the cultivation of eucalyptus has only been possible on the southern tip of Florida; freeze-tolerance would substantially extend the cultivation range northwards. Reduced vigour Orchard trees require a rootstock with reduced vigour to allow them to remain small. Genetic modification could allow the elimination of the rootstock, by making the tree less vigorous, hence reducing its height when fully mature. Research is being done into which genes are responsible for the vigour in orchard trees (such as apples and pears). Accelerated growth In Brazil, field trials of fast growing GM eucalyptus are currently underway; they were set to conclude in 2015–2016 with commercialization to follow. FuturaGene, a biotechnology company owned by Suzano, a Brazilian pulp and paper company, has been leading this research. Stanley Hirsch, chief executive of FuturaGene, has stated: "Our trees grow faster and thicker. We are ahead of everyone. We have shown we can increase the yields and growth rates of trees more than anything grown by traditional breeding." The company is looking to reduce harvest cycles from 7 to 5.5 years with 20–30% more mass than conventional eucalyptus. There is concern that such objectives may further exacerbate the negative impacts of plantation forestry. Increased water and soil nutrient demand from faster growing species may lead to irrecoverable losses in site productivity and further impinge upon neighbouring communities and ecosystems. Researchers at the University of Manchester's Faculty of Life Sciences modified two genes in poplar trees, called PXY and CLE, which are responsible for the rate of cell division in tree trunks. As a result, the trees are growing twice as fast as normal, and also end up being taller, wider and with more leaves. Disease resistance Ecologically motivated research into genetic modification is underway. There are ongoing schemes that aim to foster disease resistance in trees such as the American chestnut (see Chestnut blight) and the English elm (see Dutch elm disease) for the purpose of their reintroduction to the wild. Specific diseases have reduced the populations of these emblematic species to the extent that they are mostly lost in the wild.
Genetic modification is being pursued concurrently with traditional breeding techniques in an attempt to endow these species with disease resistance. Current uses Poplars in China In 2002 China's State Forestry Administration approved GM poplar trees for commercial use. Subsequently, 1.4 million Bt (insecticide-producing) GM poplars were planted in China. They were planted both for their wood and as part of China's 'Green Wall' project, which aims to impede desertification. Reports indicate that the GM poplars have spread beyond the area of original planting and that contamination of native poplars with the Bt gene is occurring. There is concern with these developments, particularly because the pesticide-producing trait may impart a positive selective advantage on the poplar, making it highly invasive. Living Carbon in the USA Living Carbon, an American biotechnology company founded in 2019, has developed genetically engineered hybrid poplar trees aimed at enhancing carbon sequestration. These trees have been modified to improve photosynthetic efficiency, enabling them to capture more carbon dioxide (CO₂) and produce greater woody biomass than conventional trees. Living Carbon’s mission is to leverage technology to combat climate change while promoting biodiversity and restoring degraded ecosystems. Development and Deployment Living Carbon’s genetically modified trees were first planted in a bottomland forest in Georgia, USA, in February 2023. Early field trials indicated that these trees achieved a 53% increase in above-ground biomass compared to control groups, enabling them to absorb 27% more carbon. The company generates revenue by selling carbon credits derived from these forests to individuals and businesses seeking to offset greenhouse gas emissions. Benefits and Potential Supporters of Living Carbon’s approach highlight its potential to contribute to global climate solutions, particularly if deployed on a large scale. The modified trees are targeted for use in afforestation and reforestation projects on degraded land, where they can aid in carbon capture and ecosystem restoration without displacing native species. These projects also aim to enhance biodiversity while addressing environmental degradation. Controversies and Challenges The deployment of genetically modified trees has been met with skepticism. Critics, including some forestry and genetic experts, question whether the trees will meet carbon absorption expectations outside controlled laboratory settings. Concerns have also been raised about the potential ecological risks, such as the unintended spread of genetically modified traits to wild tree populations, which could disrupt native ecosystems. Maddie Hall, co-founder of Living Carbon, has addressed these concerns, emphasizing the urgency of climate action and the limitations of waiting for natural evolutionary processes to improve tree resilience. However, experts note that achieving success in lab or greenhouse trials does not guarantee similar outcomes in complex, natural environments.
Technology
Horticulture
null
47100486
https://en.wikipedia.org/wiki/Cherax%20pulcher
Cherax pulcher
Cherax pulcher is a species of crayfish from West Papua in Indonesia. It is popular as a freshwater aquarium species across Asia, Europe, and North America. Etymology The specific name is derived from the Latin word pulcher, which means "beautiful". The name was chosen due to the species' bright coloration. Discovery The species was first observed being sold for aquariums, where it was ascertained that it likely represented a new species. However, it proved difficult to track down, as dealers' claims as to its location proved difficult to corroborate. It was eventually traced to the Bird's Head Peninsula in New Guinea, where a number of specimens were collected with the help of the Maju Aquarium in Jakarta. Description Males of the species measure in length, while females measure long. The body is slightly oval in shape. The species has notably large eyes. The species is brightly colored, with a pink cephalothorax that fades into greenish-grey on the sides. The rostrum is a greenish-blue, and the pleon is dark blue or black, with pinkish-grey or cream coloration on the margins (in some individuals the pleon is greenish-grey with pink speckles). The tail fan is cream-colored or pink around the outside margin. The legs are blue, and the chelae are blue with varying intensities, with cream-colored or white margins. Taxonomy Cherax pulcher is part of the subgenus Astaconephrops. It is the nineteenth Cherax species to have been discovered in West Papua in Indonesia (Cherax are not known from other Indonesian regions). The most morphologically similar species, C. boesemani, is found in the Ajamaru Lake and the Ajamaru River, approximately away. Range and habitat So far the species has only been found in Hoa Creek near the village of Teminabuan in West Papua, Indonesia. The creek is clear, fast flowing in some places, with a sandy and rocky bottom. Human use The species is sold throughout Asia, Europe, and North America on the ornamental fish market, where it is commonly marketed under the names "Hoa Creek", "Blue Moon", or "Irian Jaya". Local people also harvest it as a source of food. Conservation status Due to continued harvesting of the species both for export and for consumption, it has been observed by local collectors that the species' numbers have begun to decrease.
Biology and health sciences
Crayfishes and lobsters
Animals
42518279
https://en.wikipedia.org/wiki/Kepler-186f
Kepler-186f
Kepler-186f (also known by its Kepler object of interest designation KOI-571.05) is an Earth-sized exoplanet orbiting within the habitable zone of the red dwarf star Kepler-186, the outermost of five such planets discovered around the star by NASA's Kepler space telescope. It is located about from Earth in the constellation of Cygnus. Kepler-186f orbits its host star at a distance of about , with an orbital period of roughly 130 days, and has a mass and radius around 1.44 and 1.17 times those of Earth, respectively. As one of the more promising candidates for habitability, it was the first planet with a radius similar to Earth's to be discovered in the habitable zone of another star. However, key questions about its habitability remain open, including whether it has an atmosphere, what that atmosphere is composed of, and whether liquid water can exist on its surface. Analysis of three years of data was required to find its signal. NASA's Kepler space telescope detected it using the transit method (in which the dimming effect that a planet causes as it crosses in front of its star is measured), along with four additional planets orbiting much closer to the star (all modestly larger than Earth). The results were presented initially at a conference on 19 March 2014 and some details were reported in the media at the time. The planet was announced on 17 April 2014, simultaneously with publication of a scientific paper in Science. Physical characteristics Mass, radius and temperature The only physical property directly derivable from the observations (besides the orbit) is the size of the planet relative to the central star, which follows from the amount of occultation of stellar light during a transit. This ratio was measured to be 0.021, giving a planetary radius of 1.17 ± 0.08 times that of Earth. The planet is about 11% larger in radius than Earth (between 4.5% smaller and 26.5% larger), giving a volume about 1.37 times that of Earth (between 0.87 and 2.03 times as large). A very wide range of possible masses can be calculated by combining the radius with densities derived from the possible types of matter from which planets can be made. For example, it could be a rocky terrestrial planet or a lower-density ocean planet with a thick atmosphere. A massive hydrogen/helium (H/He) atmosphere is thought to be unlikely in a planet with a radius below 1.5 . Planets with a radius of more than 1.5 times that of Earth tend to accumulate the thick atmospheres which make them less likely to be habitable. Red dwarfs emit a much stronger extreme ultraviolet (XUV) flux when young than later in life. The planet's primordial atmosphere would have been subjected to elevated photoevaporation during that period, which would probably have largely removed any H/He-rich envelope through hydrodynamic mass loss. Mass estimates range from 0.32 for a pure water/ice composition to 3.77 if made up entirely of iron (both implausible extremes). For a body with radius 1.11 , a composition similar to that of Earth (i.e., 1/3 iron, 2/3 silicate rock) yields a mass of 1.44 , taking into account the higher density due to the higher average pressure compared to Earth. That would make the force of gravity on the surface 17% higher than on Earth. The estimated equilibrium temperature for Kepler-186f, which is the surface temperature without an atmosphere, is around , somewhat colder than the equilibrium temperature of Mars.
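The quantities above are linked by simple arithmetic: the transit depth is the square of the planet-to-star radius ratio, and surface gravity scales as mass over radius squared. The following Python sketch is an illustrative back-of-the-envelope check using the rounded values quoted in this section; it is not the discovery team's analysis.

# Illustrative checks for the figures quoted above (rounded values only).

EARTH_RADII_PER_SOLAR_RADIUS = 109.2  # approximate conversion factor

radius_ratio = 0.021          # planet/star radius ratio from the transit
stellar_radius_solar = 0.52   # radius of Kepler-186 in solar radii

# The fractional dip in stellar brightness is the radius ratio squared.
transit_depth = radius_ratio ** 2
print(f"Transit depth: ~{transit_depth * 100:.3f}% dimming")

# Planetary radius in Earth radii follows from the stellar radius.
planet_radius = radius_ratio * stellar_radius_solar * EARTH_RADII_PER_SOLAR_RADIUS
print(f"Planet radius: {planet_radius:.2f} Earth radii")  # ~1.19, close to the quoted 1.17

# Surface gravity scales as M / R^2 relative to Earth; the text's
# Earth-like composition case uses 1.44 Earth masses at 1.11 Earth radii.
surface_gravity = 1.44 / 1.11 ** 2
print(f"Surface gravity: {surface_gravity:.2f} g")  # ~1.17, i.e. ~17% higher than Earth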
Host star The planet orbits Kepler-186, an M-type red dwarf star which has a total of five known planets. The star has a mass of 0.54 and a radius of 0.52 . It has a temperature of 3755 K and is about 4 billion years old, about 600 million years younger than the Sun, which is 4.6 billion years old and has a temperature of . The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14.62. This is too dim to be seen with the naked eye, which can generally only see objects brighter than about magnitude 6.5–7. Orbit Kepler-186f orbits a star with about 5% of the Sun's luminosity, with an orbital period of 129.9 days and an orbital radius of about 0.40 times that of Earth's (compared to for Mercury). The habitable zone for this system is estimated conservatively to extend over distances receiving from 88% to 25% of Earth's illumination (from ). Kepler-186f receives about 32%, placing it within the conservative zone but near the outer edge, similar to the position of Mars in the Solar System. Habitability Kepler-186f's location within the habitable zone does not necessarily mean it is habitable; this also depends on its atmospheric characteristics, which are unknown. However, Kepler-186f is too distant for its atmosphere to be analyzed by existing telescopes (e.g., NESSI) or next-generation instruments such as the James Webb Space Telescope. A simple climate model – in which the planet's inventory of volatiles is restricted to nitrogen, carbon dioxide and water, and clouds are not accounted for – suggests that the planet's surface temperature would be above if at least 0.5 to 5 bars of CO2 are present in its atmosphere, for assumed N2 partial pressures ranging from 10 bar to zero, respectively. The star hosts four other planets discovered so far, although Kepler-186 b, c, d, and e (in order of increasing orbital radius), being too close to their star, are considered too hot to have liquid water. The four innermost planets are probably tidally locked, but Kepler-186f is in a higher orbit, where the star's tidal effects are much weaker, so the time could have been insufficient for its spin to slow down significantly. Because of the very slow evolution of red dwarfs, the age of the Kepler-186 system was poorly constrained, although it is likely to be greater than a few billion years. Recent results have placed the age at around 4 billion years. The chance that it is tidally locked is approximately 50%. Since it is closer to its star than Earth is to the Sun, it will probably rotate much more slowly than Earth; its day could be weeks or months long (see Tidal effects on rotation rate, axial tilt and orbit). Kepler-186f's axial tilt (obliquity) is likely very small, in which case it would not have tilt-induced seasons like Earth's. Its orbit is probably close to circular, so it will also lack eccentricity-induced seasonal changes like those of Mars. However, the axial tilt could be larger (about 23 degrees) if another undetected non-transiting planet orbits between it and Kepler-186e; planetary formation simulations have shown that the presence of at least one additional planet in this region is likely. If such a planet exists, it cannot be much more massive than Earth, as it would then cause orbital instabilities. One review essay in 2015 concluded that Kepler-186f, along with the exoplanets Kepler-442b and Kepler-62f, was likely one of the best candidates for being a potentially habitable planet.
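The ~32% insolation figure quoted above follows directly from the inverse-square law, F = L / a², with luminosity in solar units and distance in astronomical units. A minimal sketch using the rounded values from this section (5% of the Sun's luminosity, orbital radius 0.40 AU):

# Stellar flux at the planet, in units of Earth's insolation, from the
# inverse-square law F = L / a^2. Rounded values from the text; illustrative only.

luminosity_solar = 0.05   # Kepler-186 emits ~5% of the Sun's luminosity
orbital_radius_au = 0.40  # ~0.40 times Earth's orbital radius

flux_relative_to_earth = luminosity_solar / orbital_radius_au ** 2
print(f"Insolation: {flux_relative_to_earth:.2f} of Earth's")  # ~0.31, i.e. ~32%

# The conservative habitable zone quoted above spans 0.25 to 0.88 of
# Earth's flux; Kepler-186f sits inside it, near the outer edge.
print(0.25 <= flux_relative_to_earth <= 0.88)  # True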
In June 2018, studies suggested that Kepler-186f may have seasons and a climate similar to those on Earth. Follow-up studies Target of SETI investigation As part of the SETI Institute's search for extraterrestrial intelligence, the Allen Telescope Array had listened for radio emissions from the Kepler-186 system for about a month as of 17 April 2014. No signals attributable to extraterrestrial technology were found in that interval; however, to be detectable, such transmissions, if radiated in all directions equally and thus not preferentially towards the Earth, would need to be at least 10 times as strong as those from Arecibo Observatory. Another search, undertaken at the crowdsourcing project SETI-Live, reported inconclusive but optimistic-looking signs in the radio noise from the Allen Array observations. The better-known SETI@home search does not cover any object in the Kepler field of view. Another follow-up survey using the Green Bank Telescope has not reviewed Kepler-186f. Given the interstellar distance of , the signals would have left the planet many years ago. Future technology and observations At approximately distant, Kepler-186f is too far and its star too faint for current telescopes or the next generation of planned telescopes to determine its mass or whether it has an atmosphere. However, the discovery of Kepler-186f demonstrates conclusively that there are other Earth-sized planets in habitable zones. The Kepler spacecraft focused on a single small region of the sky, but next-generation planet-hunting space telescopes, such as TESS and CHEOPS, will examine nearby stars throughout the sky. Nearby stars with planets can then be studied by the James Webb Space Telescope and future large ground-based telescopes to analyze atmospheres, determine masses and infer compositions. Additionally, the Square Kilometre Array would significantly improve radio observations over the Arecibo Observatory and Green Bank Telescope. Previous names As the Kepler telescope observational campaign proceeded, an initially identified system was entered in the Kepler Input Catalog (KIC), and then progressed as a candidate host of planets to a Kepler Object of Interest (KOI). Thus, Kepler-186 started as KIC 8120608 and was later identified as KOI-571. Kepler-186f was mentioned as KOI-571-05 or KOI-571.05, or using similar nomenclatures, in 2013 in various discussions and publications before its full confirmation. Comparison The nearest-to-Earth-size planet in a habitable zone previously known was Kepler-62f, with 1.4 Earth radii. Kepler-186f orbits an M-dwarf star, while Kepler-62f orbits a K-type star. A study of atmospheric evolution in Earth-size planets in habitable zones of G-stars (a class containing the Sun, but not Kepler-186) suggested that 0.8–1.15 R🜨 is the size range for planets small enough to lose their initial accreted hydrogen envelope but large enough to retain an outgassed secondary atmosphere such as Earth's. In popular culture Along with five other exoplanets, Kepler-186f was included in Civilization: Beyond Earth's exoplanet DLC as a playable map. A Dutch rock band named its 2017 album Kepler-186f after the exoplanet. Kepler-186f is the location of a future Earth colony in the short story "Stars" by Drew Hayden Taylor.
In season 2 of the 2020 Animaniacs reboot, Kepler-186f, alongside 51 Pegasi b and the fictional WB-1, is referenced in the song "Yakko's Big Idea". In The Big Bang Theory season 12 episode "The Conjugal Configuration", Kepler-186f is shown on a poster in Neil deGrasse Tyson's office.
Physical sciences
Notable exoplanets
Astronomy
39738728
https://en.wikipedia.org/wiki/Hexapoda
Hexapoda
The subphylum Hexapoda (from Greek for 'six legs') or hexapods comprises the largest clade of arthropods and includes most of the extant arthropod species. It includes the crown group class Insecta (true insects), as well as the much smaller clade Entognatha, which includes three classes of wingless arthropods that were once considered insects: Collembola (springtails), Protura (coneheads) and Diplura (two-pronged bristletails). The insects and springtails are very abundant and are some of the most important pollinators, basal consumers, scavengers/detritivores and micropredators in terrestrial environments. Hexapods are named for their most distinctive feature: a three-part body plan with a consolidated thorax and three pairs of legs. Most other arthropods have more than three pairs of legs. Most recent studies have recovered Hexapoda as a subgroup of Crustacea. Morphology Hexapods have bodies ranging in length from 0.5 mm to over 300 mm, which are divided into an anterior head, thorax, and posterior abdomen. The head is composed of a presegmental acron that usually bears eyes (absent in Protura and Diplura), followed by six segments, all closely fused together, with the following appendages: Segment I. None Segment II. Antennae (sensory), absent in Protura Segment III. None Segment IV. Mandibles (crushing jaws) Segment V. Maxillae (chewing jaws) Segment VI. Labium (lower lip) The mouth lies between the fourth and fifth segments and is covered by a projection from the sixth, called the labrum (upper lip). In true insects (class Insecta) the mouthparts are exposed or ectognathous, while in other groups they are enveloped or endognathous. Similar appendages are found on the heads of Myriapoda and Crustacea, although the crustaceans have secondary antennae. Collembola and Diplura have segmented antennae: each segment has its own set of muscles. The antennae of insects consist of just three segments: the scape, the pedicel and the flagellum. Muscles occur only in the first two segments. The third segment, the flagellum, has no muscles and is composed of a variable number of annuli. This type of antenna is therefore called an annulated antenna. Johnston's organ, which is found on the pedicel, is absent in the Entognatha. The thorax is composed of three segments, each of which bears a single pair of legs. As is typical of arthropods adapted to life on land, each leg has a single walking branch composed of five segments. The legs do not have the gill branches found in some other arthropods. In most insects the second and third thoracic segments also support wings. It has been suggested that these may be homologous to the gill branches of crustaceans, or they may have developed from extensions of the segments themselves. The abdomen follows an epimorphic developmental pattern, where all segments are already present at the end of embryonic development, in all the hexapod groups except for the Protura, which follow an anamorphic developmental pattern, where the hatched juveniles have an incomplete complement of segments and go through post-embryonic segment addition with each molt before reaching the final adult number of segments. All true insects have eleven segments (often reduced in number in many insect species), but in Protura there are twelve, and in Collembola only six (sometimes reduced to only four). The appendages on the abdomen are extremely reduced, restricted to the external genitalia and sometimes a pair of sensory cerci on the last segment.
Evolution and relationships The myriapods have traditionally been considered the closest relatives of the hexapods, based on morphological similarity. These were then considered subclasses of a subphylum called Uniramia or Atelocerata. In the first decade of the 21st century, however, this was called into question, and it appears the hexapods' closest relatives may be the crustaceans. The non-insect hexapods have variously been considered a single evolutionary line, typically treated as Class Entognatha, or as several lines with different relationships with the Class Insecta. In particular, the Diplura may be more closely related to the Insecta than to the Collembola (springtails). A 2002 molecular analysis suggests that the hexapods diverged from their sister group, the Anostraca (fairy shrimps), at around the start of the Silurian period, coinciding with the appearance of vascular plants on land. Since then, remipedes have been identified as the closest living relatives of hexapods. Several hypotheses about their internal relationships have been suggested over the years, with proturans as the sister group to the other hexapods, and collembolans and diplurans belonging together in Antennomusculata, as the latest suggestion: Entognatha (proturans, collembolans and diplurans) and Ectognatha (insects) Elliplura (proturans and collembolans) and Cercophora (diplurans and insects) Collembolans, Nonoculata (proturans and diplurans) and insects Proturans, Antennomusculata (collembolans and diplurans) and insects The following cladogram is given by Kjer et al. (2016): An incomplete possible insect fossil, Strudiella devonica, has been recovered from the Devonian period. This fossil may help to fill the arthropod gap from 385 million to 325 million years ago, although some researchers oppose this view and suggest that the fossil may instead represent a decomposed crustacean or other non-insect. In 2023, a hexapod-like arthropod fossil from the Ordovician marine fossil site Castle Bank was reported, although further study is needed.
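The four competing hypotheses listed above are easiest to compare when written in a common notation. The Python sketch below, purely as an illustration, encodes each one as a Newick string (a standard plain-text format for trees); the group labels are informal shorthand rather than formal taxon names, and the last two hypotheses are written as unresolved three-way splits, as in the list above.

# The four hypotheses for internal hexapod relationships, encoded as
# Newick strings. Labels are informal shorthand, not formal taxon names.
hypotheses = {
    "Entognatha + Ectognatha":
        "((Protura,Collembola,Diplura)Entognatha,Insecta);",
    "Elliplura + Cercophora":
        "((Protura,Collembola)Elliplura,(Diplura,Insecta)Cercophora);",
    "Nonoculata":
        "(Collembola,(Protura,Diplura)Nonoculata,Insecta);",
    "Antennomusculata":
        "(Protura,(Collembola,Diplura)Antennomusculata,Insecta);",
}

for name, newick in hypotheses.items():
    print(f"{name}: {newick}")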
Biology and health sciences
Insects and other hexapods
null
42520223
https://en.wikipedia.org/wiki/Sednoid
Sednoid
A sednoid is a trans-Neptunian object with a large semi-major axis and a high perihelion, similar to the orbit of the dwarf planet Sedna. The consensus among astronomers is that only three objects are known from this population: Sedna, , and 541132 Leleākūhonua (). All three have perihelia greater than . These objects lie outside an apparently nearly empty gap in the Solar System and have no significant interaction with the planets. They are usually grouped with the detached objects. Some astronomers consider the sednoids to be Inner Oort Cloud (IOC) objects, though the inner Oort cloud, or Hills cloud, was originally predicted to lie beyond 2,000 AU, beyond the aphelia of the three known sednoids. One attempt at a precise definition of sednoids is any body with a perihelion greater than and a semi-major axis greater than . However, this definition also applies to the objects , , and , which have perihelia beyond 50 AU and semi-major axes over 700 AU. Despite this, these objects are thought not to belong to the sednoids, but rather to the same dynamical class as 474640 Alicanto, and . With their high eccentricities (greater than 0.8), sednoids are distinguished from the high-perihelion objects with moderate eccentricities that are in a stable resonance with Neptune, namely , , ("Buffy"), and . Unexplained orbits The sednoids' orbits cannot be explained by perturbations from the giant planets, nor by interaction with the galactic tides. If they formed in their current locations, their orbits must originally have been circular; otherwise accretion (the coalescence of smaller bodies into larger ones) would not have been possible, because the large relative velocities between planetesimals would have been too disruptive. Their present elliptical orbits can be explained by several hypotheses: These objects could have had their orbits and perihelion distances "lifted" by the passage of a nearby star when the Sun was still embedded in its birth star cluster. They could have been captured from around passing stars, most likely in the Sun's birth cluster. Their orbits could have been disrupted by an as-yet-unknown planet-sized body beyond the Kuiper belt, such as the hypothesized Planet Nine. Their perihelion distances could have been "lifted" by a temporarily present rogue planet in the early Solar System. Known members The first three known sednoids, like all of the more extreme detached objects (objects with semi-major axes > 150 AU and perihelia > 30 AU; the orbit of Neptune), have a similar orientation (argument of perihelion) of ≈ 0° (). This is not due to an observational bias and is unexpected, because interaction with the giant planets should have randomized their arguments of perihelion (ω), with precession periods between 40 Myr and 650 Myr, and 1.5 Gyr for Sedna. This suggests that one or more undiscovered massive perturbers may exist in the outer Solar System. A super-Earth at 250 AU would cause these objects to librate around ω ≈ 0° for billions of years. There are multiple possible configurations, and a low-albedo super-Earth at that distance would have an apparent magnitude below the current all-sky-survey detection limits. This hypothetical super-Earth has been dubbed Planet Nine. Larger, more-distant perturbers would also be too faint to be detected.
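The orbital quantities used in these definitions are related by simple formulas: for perihelion q and semi-major axis a, the eccentricity is e = 1 − q/a and the aphelion is Q = a(1 + e) = 2a − q. The Python sketch below works this through with Leleākūhonua's approximate elements as reported in the following paragraph (perihelion ≈ 65 AU, semi-major axis ≈ 1094 AU); it is illustrative only.

# Relationships between the orbital elements used to define sednoids:
#   eccentricity  e = 1 - q/a        (q = perihelion, a = semi-major axis)
#   aphelion      Q = a*(1 + e) = 2a - q

def eccentricity(q_au: float, a_au: float) -> float:
    """Orbital eccentricity from perihelion and semi-major axis."""
    return 1.0 - q_au / a_au

def aphelion(q_au: float, a_au: float) -> float:
    """Aphelion distance (AU) from perihelion and semi-major axis."""
    return 2.0 * a_au - q_au

# Leleakuhonua's approximate published elements, as quoted below.
q, a = 65.0, 1094.0
print(f"e = {eccentricity(q, a):.3f}")   # ~0.941, well above the 0.8 threshold
print(f"Q = {aphelion(q, a):.0f} AU")    # ~2123 AU, i.e. 'aphelion over 2100 AU'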
To date, 27 known objects have a semi-major axis greater than 150 AU, a perihelion beyond Neptune, an argument of perihelion of , and an observation arc of more than 1 year. Several other objects are near the perihelion limit of 50 AU but are not considered sednoids. On 1 October 2018, Leleākūhonua, then known as , was announced, with a perihelion of 65 AU and a semi-major axis of 1094 AU. With an aphelion over 2100 AU, this puts it farther out than Sedna. In late 2015, V774104 was announced at the Division for Planetary Science conference as a further candidate sednoid, but its observation arc was too short to know whether its perihelion was even outside Neptune's influence. The talk about V774104 was probably meant to refer to Leleākūhonua (), even though V774104 is the internal designation for the non-sednoid . Sednoids might constitute a proper dynamical class, but they may have a heterogeneous origin; the spectral slope of is very different from that of Sedna. Malena Rice and Gregory Laughlin applied a targeted shift-stacking search algorithm to analyze data from TESS sectors 18 and 19, looking for candidate outer Solar System objects. Their search recovered known objects like Sedna and produced 17 new outer Solar System body candidates located at geocentric distances in the range 80–200 AU, which require follow-up observations with ground-based telescopes for confirmation. Early results from a survey with the William Herschel Telescope aimed at recovering these distant TNO candidates have failed to confirm two of them. Theoretical population Each of the proposed mechanisms for Sedna's extreme orbit would leave a distinct mark on the structure and dynamics of any wider population. If a trans-Neptunian planet were responsible, all such objects would share roughly the same perihelion (≈80 AU). If Sedna had been captured from another planetary system that rotated in the same direction as the Solar System, then all of its population would have orbits with relatively low inclinations and semi-major axes ranging from 100 to 500 AU. If it rotated in the opposite direction, then two populations would form, one with low and one with high inclinations. The perturbations from passing stars would produce a wide variety of perihelia and inclinations, each dependent on the number and angle of such encounters. Acquiring a larger sample of such objects would therefore help in determining which scenario is most likely. "I call Sedna a fossil record of the earliest Solar System", said Brown in 2006. "Eventually, when other fossil records are found, Sedna will help tell us how the Sun formed and the number of stars that were close to the Sun when it formed." A 2007–2008 survey by Brown, Rabinowitz and Schwamb attempted to locate another member of Sedna's hypothetical population. Although the survey was sensitive to movement out to 1,000 AU and discovered the likely dwarf planet Gonggong, it detected no new sednoids. Subsequent simulations incorporating the new data suggested that about 40 Sedna-sized objects probably exist in this region, with the brightest being about Eris's magnitude (−1.0). Following the discovery of Leleākūhonua, Sheppard et al. concluded that it implies a population of about 2 million Inner Oort Cloud objects larger than 40 km, with a total mass in the range of , about the mass of Pluto and several times the mass of the asteroid belt.
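The shift-stacking technique used by Rice and Laughlin, mentioned above, is conceptually simple: each image frame is shifted to undo a candidate object's assumed sky motion, and the frames are then summed, so a real object moving at that rate adds coherently while the noise averages out. The following is a minimal NumPy sketch of the idea only, with synthetic data and integer-pixel shifts; a real search operates on calibrated TESS imagery and scans many candidate motion rates.

import numpy as np

def shift_stack(frames: np.ndarray, rate_xy: tuple[float, float]) -> np.ndarray:
    """Shift each frame to undo a candidate motion rate (pixels per frame),
    then sum; a source moving at that rate adds coherently."""
    stacked = np.zeros_like(frames[0], dtype=float)
    for i, frame in enumerate(frames):
        dx = int(round(-rate_xy[0] * i))  # undo x motion accumulated by frame i
        dy = int(round(-rate_xy[1] * i))  # undo y motion accumulated by frame i
        stacked += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return stacked

# Synthetic demo: a source drifting 1 pixel/frame in x, injected at the
# per-frame noise level (sigma = 1) so it is invisible in any single frame.
rng = np.random.default_rng(0)
n_frames, size = 30, 64
frames = rng.normal(0.0, 1.0, (n_frames, size, size))
for i in range(n_frames):
    frames[i, 32, 10 + i] += 1.0  # faint moving source

stacked = shift_stack(frames, rate_xy=(1.0, 0.0))
y, x = np.unravel_index(np.argmax(stacked), stacked.shape)
print(f"Brightest stacked pixel near (y={y}, x={x})")  # expected near (32, 10)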
Physical sciences
Solar System
Astronomy
38315736
https://en.wikipedia.org/wiki/Macrotermes%20bellicosus
Macrotermes bellicosus
Macrotermes bellicosus is a species of Macrotermes. The queens are the largest amongst known termites, measuring about long when physogastric. The workers average in length and soldiers are slightly larger. Bellicosus means "combative" in Latin. The species is a member of a genus indigenous to Africa and South-East Asia. Behavior Caste polyethism There are two worker castes in M. bellicosus, the major worker and the minor worker. In both cases, the workers begin their lives by taking care of the queen, and later leave the nest to begin foraging. The point at which the worker leaves the nest to begin gathering food differs slightly between the two castes. Major workers will leave at any point between 13 and 25 days after moulting, and minor workers exit between 9 and 32 days. Activities of M. bellicosus that go on outside the nest, which mainly consist of exploration and food gathering, show a division of labor between the two worker castes. The exploration phase, where underground passages are built radiating outward from the nest, is mainly attributed to the minor workers, and major worker activity is low during this period. Once food is discovered, however, there is a shift in this division of labor, and the major workers become the main caste charged with gathering the food. Recruitment of new minor workers during this period is low, and those minor workers that have already been recruited will continue to construct passages near the food. Minor workers thus greatly outnumber major workers until food is discovered, after which the ratio shifts steadily towards increasing numbers of major workers. Two feeding groups have been identified in M. bellicosus, based on examination of the gut contents of workers at foraging sites, the fungus comb and the queen cell. These two groups differed in abdomen coloration: the majority of workers had a dark brown abdomen, correlated with feeding on the fungus comb, while a smaller number had a reddish-brown abdomen caused by feeding on plant litter. Food intake is divided between the major and minor workers: most of the termites with reddish-brown abdomens, the result of consuming plant litter, were major workers. Because the fungus comb is built with the feces of consumed plant litter, major workers are dominant in fungus comb construction and are also the dominant worker caste for food processing. Caste-specific pheromones The division of labor between minor and major workers also leads to a difference in the pheromone trails that each caste leaves. Trails left by minor workers, which can contain information about the presence of food based on the existence of certain pheromones, attract both castes. The food information in these trails is only detected by major workers, who will orient themselves toward the food, while minor workers will follow all trails regardless of food information. Through this system, major workers will usually be traveling towards food, while minor workers will be exploring new territory. These minor worker trails have been shown to be generally more attractive than trails left by a mixed population of major and minor workers, indicating that the major workers may leave a trail that is antagonistic to the minor worker trail.
This could be a mechanism for dividing major worker labor between various food sources by causing trails that are already being followed by major workers to be less attractive. Intraspecific colony recognition M. bellicosus individuals show different types of intraspecific colony recognition behavior depending on caste. Minor soldiers typically act aggressively, while major workers show no aggression but instead exhibit varying degrees of examination behavior. The variation in this behavior could not be correlated with mound size, age or spatial distance between colonies, indicating the absence of a dear enemy effect. These behaviors were consistent within colonies over long periods of time. Environmental influence on behavior Mound construction M. bellicosus constructs mounds differently depending on the surrounding habitat, as seen in observations made in Comoé National Park. In the forest, the mounds are constructed with thick walls and are dome shaped, whereas the mounds constructed in the savanna have thin walls and deviate from the simple dome construction to more complicated structures. Heating experiments demonstrated that this difference in structure is due to the different thermal properties of each mound as a response to its habitat: mounds in the cooler forest retain their temperature for longer periods of time, while mounds in the warmer savanna shed heat faster (a simple cooling model below illustrates the difference). M. bellicosus individuals burrow into the subsoil and collect clay in their mouths. The clay is moistened by their saliva. Mound building is usually most labor-intensive in the wet months. Predation risk Pressures from predation play a role in the foraging behavior of M. bellicosus, which varies between the two types of environments. This was measured by observing when the termites stop foraging in an area as the pressure from predators is varied. In the savanna, there was a gradual increase in the amount of unused food remaining in response to increasing predation, while in the forest food was immediately abandoned in response to any predation. Also, in the absence of predation, less food was left unused in the savanna, indicating that a higher value is placed on food in this particular habitat. These observations are in accordance with the higher availability of food in the forest in comparison to the savanna. Lifetime reproductive success Examining the reproductive outputs of the two types of M. bellicosus habitats showed that the colonies in the savanna reproduce more frequently than the forest colonies, and also produce higher numbers of offspring. Likewise, the growth rates of the colony mounds, which can be correlated with the overall growth rates of the colonies, are also higher in the savanna mounds. Despite the higher availability of food in the forest habitat, and the lower probability of survival in the savanna, the lifetime reproductive success of the colonies in the savanna was estimated to be much higher than that of the forest colonies. Interspecific competition Foraging activities of M. bellicosus and other detritivorous termites peak during the rainy season. It was observed that in the savanna, where even during the rainy season the availability of food is limited, the other termites exhibit complementary foraging in response to M. bellicosus, being more active in the absence of this dominant detritivore.
This behavior indicates that there may be interspecific competition between termite species for available resources.
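The mound thermal behaviour described above (thick-walled forest mounds holding heat longer, thin-walled savanna mounds shedding it faster) can be illustrated with Newton's law of cooling, T(t) = T_env + (T0 − T_env)·e^(−kt), where a thicker, better-insulating wall corresponds to a smaller cooling constant k. The Python sketch below uses invented, purely illustrative parameters; it is not a model fitted to the Comoé National Park experiments.

import math

def mound_temperature(t_hours: float, t0: float, t_env: float, k: float) -> float:
    """Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k * t).
    A smaller cooling constant k models a thicker, better-insulating wall."""
    return t_env + (t0 - t_env) * math.exp(-k * t_hours)

# Hypothetical parameters: both mounds warmed to 30 C in 20 C surroundings.
K_FOREST_THICK_WALL = 0.05   # per hour (invented for illustration)
K_SAVANNA_THIN_WALL = 0.20   # per hour (invented for illustration)

for t in (0, 6, 12, 24):
    forest = mound_temperature(t, 30.0, 20.0, K_FOREST_THICK_WALL)
    savanna = mound_temperature(t, 30.0, 20.0, K_SAVANNA_THIN_WALL)
    print(f"t={t:2d} h: forest {forest:.1f} C, savanna {savanna:.1f} C")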
Biology and health sciences
Cockroaches & Termites (Blattodea)
Animals
32704140
https://en.wikipedia.org/wiki/Old%20World%20quail
Old World quail
Old World quail is a collective name for several genera of mid-sized birds in the tribe Coturnicini of the pheasant family Phasianidae. Although all species commonly referred to as "Old World quail" are in the same tribe, they are paraphyletic with respect to the other members of the tribe, such as Alectoris, Tetraogallus, Ammoperdix, Margaroperdix, and Pternistis. New World quail are also members of the Galliformes, but belong to a different family, the Odontophoridae. Buttonquails are not closely related at all, but are named for their similar appearance. They are presently placed in the family Turnicidae of the Charadriiformes, the shorebirds. The collective noun for a group of quail is flock, bevy or covey. Taxonomy Old World quail may refer to the following species of Coturnicini: Genus Synoicus Brown quail, Synoicus ypsilophorus Snow Mountain quail, Synoicus monorthonyx Blue quail, Synoicus adansonii King quail, Synoicus chinensis Genus Coturnix Rain quail, Coturnix coromandelica Harlequin quail, Coturnix delegorguei Common quail, Coturnix coturnix †Canary Islands quail, Coturnix gomerae (fossil) Japanese quail, Coturnix japonica †New Zealand quail, Coturnix novaezelandiae (extinct) Stubble quail, Coturnix pectoralis Genus Perdicula Jungle bush quail, Perdicula asiatica Rock bush quail, Perdicula argoondah Painted bush quail, Perdicula erythrorhyncha Manipur bush quail, Perdicula manipurensis Genus Ophrysia Himalayan quail, Ophrysia superciliosa (critically endangered/extinct) Behaviour Old World quail are small, plump terrestrial birds. They are seed eaters, but will also take insects and similar small prey. They nest on the ground and are capable of short, rapid bursts of flight. Some species, such as the Japanese and common quail, are migratory and fly for long distances. Some quail are farmed in large numbers. The common and Japanese (or coturnix) quail are both raised for table meat or to produce eggs. They are also readily hunted, often artificially stocked on game farms or to supplement wild populations. Migrating common quail are known to eat some poisonous seeds with no apparent ill effects, but they store the poison in their body fat, poisoning people who subsequently eat these birds; this condition is known as "coturnism".
Biology and health sciences
Galliformes
Animals
51170771
https://en.wikipedia.org/wiki/Connecticut%20field%20pumpkin
Connecticut field pumpkin
Connecticut field pumpkins are a type of pumpkin (Cucurbita pepo) first attested in the 16th century. They are one of the oldest varieties of pumpkin in existence and are known as an heirloom plant. One of the most popular Halloween pumpkins, Connecticut field pumpkins are commonly used for autumn decorations and jack-o'-lanterns; a strain of Connecticut field pumpkins has been described as "the original commercial jack-o'-lantern pumpkin". Due to the variety's appearance and growth habit, it is considered to be well-suited for ornamental use. It also has culinary uses, particularly in canning, and was used for medicinal purposes by Native Americans prior to European contact. Background The Connecticut field pumpkin belongs to Cucurbita pepo, the most widely grown species in the gourd genus Cucurbita. The Cucurbita pepo group includes other pumpkins, winter squash, summer squash, acorn squash, and ornamental gourds. The fruit is sensitive to frost. The pumpkin plant is a vine with unisexual flowers and large leaves. The Connecticut field pumpkin is similar to the winter squash grown by Native Americans in the pre-Columbian era. The name "Connecticut field" references the area where the ancestral variety was found. The variety is colloquially known by various other names, such as Big Tom, Canner's Supreme, Eastern Field, Georgia Field, and Lake Shore. The Connecticut field pumpkin has been grown since before the American Revolutionary War. History Field pumpkins were seen as early as the middle of the 16th century by the French explorer Jacques Cartier. Thomas Dunlap produced a catalog in 1752 that advertised Connecticut field pumpkin seeds: a quart was offered for 37.5 US cents. In 1863 four distinct cultivars of this type of pumpkin were identified: Canada, Common Yellow, Connecticut field, and Long Yellow. The field pumpkin is typically traced back to the European colonization of the Americas. Because of its long use, it is considered to be an heirloom plant. Native Americans in the United States used the pumpkins for food and medicine. They dried and stored pumpkins for use in the winter, and they used them to make bread and other foods. The Cherokee ate the seeds to treat internal parasites, and other Native American tribes used pumpkin as a diuretic. American colonists soon recognized its value and learned from the Native Americans how best to grow it. Characteristics The pumpkins come in many shapes and sizes but are commonly round. The Connecticut field pumpkin is to in size; it is round and orange, with smooth, slightly ribbed skin. The Howden pumpkin, developed by John Howden, is a strain selected from Connecticut field pumpkins for improved production and uniformity of fruits; it is described as "the original commercial jack-o'-lantern pumpkin". Cultivation and harvest The Native Americans taught the colonists that they would get higher yields per acre if they planted potatoes or corn in the same field as the pumpkins. In 1906, the South Dakota State Horticultural Society recommended that six seeds be planted within hills of dirt approximately apart. After the plants grew into seedlings, the horticultural society recommended that the seedlings be thinned to just two plants per hill. The 1917 Cyclopedia of American Agriculture recommended that the variety be planted in sandy loam soil without other crops. It advised farmers to use manure as fertilizer and stated that of seeds were needed to plant .
A good crop was considered to be two pumpkins per vine. One dangerous pest that can destroy the crop is the striped cucumber beetle. The recommended time to harvest the pumpkins is when no green is visible on the fruit. Farmers are taught to leave a 3–4 inch stem on the pumpkin and to cut the fruit with shears so that the stem does not break. The Connecticut field pumpkin has a 110-day growing period from planting to harvest (a short worked example follows below). Uses The Connecticut field pumpkin is considered to be the standard general-purpose Halloween pumpkin. It is the variety most often selected for carving jack-o'-lanterns, and is widely considered to be the most popular Halloween pumpkin. Unlike many pumpkin varieties, the Connecticut field pumpkin grows upright. This gives the fruit flat rather than rounded bottoms, which some commentators believe improves their usability for carving and displaying jack-o'-lanterns. The Connecticut field pumpkin also has culinary uses; it is one of the most popular pumpkin varieties for canning, alongside the Dickinson, Kentucky Field, and Golden Delicious pumpkins. The variety can be eaten fresh or canned, but is considered less palatable than culinary pumpkin varieties. The inside of the pumpkin is stringy and thick, which makes it less desirable for eating. The variety is also used by ranchers as feed for livestock. The pulp is rich in carotenoids and the seeds are a source of fatty acids. The flowers and the pulp are both edible. Because of its many uses, the variety is a common choice for home gardeners. Economic importance In the United States, all fifty states produce pumpkins. In 2020 and 2021, Illinois led the nation in pumpkin production with devoted to pumpkin growing. In 2021 alone, Illinois produced 652 million pounds of pumpkins. States with 4,500 to devoted to pumpkin growing include: in the west, California; the midwestern states Indiana and Michigan; and the southern states Texas and Virginia. Prices range from US$35 to US$250 per 1,000 pounds. The growers mainly produce ornamental or jack-o'-lantern pumpkins. In addition to North America, pumpkins are grown in Africa, Europe, Asia and South America. Annual pumpkin production likely surpasses 20 million tonnes.
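The cultivation figures above lend themselves to a small worked example. The following Python sketch combines the quoted 110-day growing period with the 1906 guidance of six seeds per hill thinned to two plants; the planting date and number of hills are hypothetical inputs chosen only for illustration.

from datetime import date, timedelta

GROWING_PERIOD_DAYS = 110   # planting-to-harvest period quoted above
SEEDS_PER_HILL = 6          # 1906 South Dakota guidance
PLANTS_PER_HILL_AFTER_THINNING = 2

def expected_harvest(planting: date) -> date:
    """Earliest expected harvest date for a Connecticut field pumpkin."""
    return planting + timedelta(days=GROWING_PERIOD_DAYS)

# Hypothetical example: a late-May planting of 25 hills.
planting_date = date(2024, 5, 25)
hills = 25
print(f"Seeds needed:   {hills * SEEDS_PER_HILL}")                   # 150
print(f"Plants kept:    {hills * PLANTS_PER_HILL_AFTER_THINNING}")   # 50
print(f"Harvest around: {expected_harvest(planting_date)}")          # 2024-09-12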
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
32725359
https://en.wikipedia.org/wiki/North%20American%20donkeys
North American donkeys
North American donkeys constitute approximately 0.1% of the worldwide donkey population. Donkeys were first transported from Europe to the New World in the fifteenth century during the Second Voyage of Christopher Columbus, and subsequently spread south and west into the lands that would become México. They first reached what is now the United States in the late seventeenth century. Donkeys arrived in large numbers in the western United States during the gold rushes of the nineteenth century, highly regarded as pack animals and for working in mines and ore-grinding mills. From about 1785, some select larger donkeys were imported from Europe to the eastern part of the continent. There are no true-breeding North American donkey breeds. Breed societies in Canada and the United States register donkeys, according to their size, as miniature, standard or mammoth donkeys. These are reported as breeds to the Domestic Animal Diversity database, a division of the Food and Agriculture Organization of the United Nations, by the National Animal Germplasm Program (of the Agricultural Research Service of the US Department of Agriculture), as are the burros (Spanish for 'donkey') – a feral population of desert-dwelling donkeys in the southwestern U.S. (mainly Arizona) – and the spotted, a color breed. History The first asses came to the Americas aboard ships of the second voyage of Christopher Columbus, landing at Hispaniola in 1495. In the early days of the conquistadores, jackasses were highly valued as sires for mules, which were esteemed as riding animals by the Spanish and reserved for the nobility. Mules were bred for expeditions to mainland America, with males preferred as pack animals and females for riding. The first shipment of donkeys, with three jacks and twelve jennies, arrived in México via Cuba ten years after the conquest of the Aztecs in 1521. Mules were used in silver mines, and each Spanish outpost across the empire bred its own mules with its own jack. The first presence of donkeys in what is today the U.S. is sometimes reported to date from 1679, when a Jesuit priest named Eusebio Kino transported several from Sonora, northern Mexico, to a new mission at San Xavier del Bac, in what is now Arizona; however, Kino did not arrive in the Americas until about 1681, and he was not in Sonora until 1687. Roughly 200 years later, donkeys arrived en masse in the western U.S. during the gold rushes of the nineteenth century, as pack animals and for use in mines and ore-grinding mills. The widespread use of donkeys for work largely came to an end with the collapse of the mining boom, as well as the subsequent introduction of railroads across the nation, especially in the west. With little value, many animals were simply turned loose, becoming the herds of free-roaming burros that inhabit the deserts and highlands of Arizona today. From about 1785, some large donkeys were imported from Europe to the eastern United States, and were used for the production of mules. In 1888 the American Breeders Association of Jacks and Jennets started a stud-book for these animals under the name American Mammoth Jack. In 1923, this stud book merged with the Standard Jack and Jennet Registry of America, which had been set up in 1908; in 1988, its name was changed to the American Mammoth Jackstock Registry. Breeds that may have influenced the mammoth include the Maltese, the Baudet du Poitou, the Andalusian, the Majorcan and the Catalan.
In the twentieth century, donkeys came to be more frequently kept in the U.S. and elsewhere as farm animals, pets or livestock guardian animals. In 1929, Robert Green of New York imported seven donkeys belonging to the smaller, indigenous Sardinian breed to the U.S.; the first foal was born that same year. Although never considered miniature in their country of origin, these animals were soon known as "Miniature" or "Miniature Mediterranean" donkeys. Green was a lifelong advocate, and said of his animals: "Miniature donkeys possess the affectionate nature of a Newfoundland, the resignation of a cow, the durability of a mule, the courage of a tiger, and an intellectual capability only slightly inferior to man's." By 1935, there were 52 of them, and some were sold. Further Sardinian donkeys were imported, as well as similar, but quite distinct, Sicilian animals. A register of miniature donkeys was established in 1958 by Bea Langfeld, the first professional breeder of miniature donkeys in the United States; in 1987, it was merged into that of the American Donkey and Mule Society, which was formed in 1967. Characteristics Both the Canadian Donkey and Mule Association and the American Donkey and Mule Society register donkeys according to their size, as miniature, standard or mammoth donkeys. Adult miniature donkeys stand or less at the withers. Small standard donkeys stand from over and large standard donkeys stand from over to for jennies, or for jacks and geldings. Mammoth jennies are taller than , and mammoth jacks and geldings over . Any donkey with at least two spots behind the throatlatch and above the legs can be registered with the American Council of Spotted Asses under the trademarked name American Spotted Ass.
Biology and health sciences
Donkeys
Animals
43989914
https://en.wikipedia.org/wiki/Windows%2010
Windows 10
Windows 10 is a major release of Microsoft's Windows NT operating system. The successor to Windows 8.1, it was released to manufacturing on July 15, 2015, and later to retail on July 29, 2015. Windows 10 was made available for download via MSDN and TechNet, as a free upgrade for retail copies of Windows 8 and Windows 8.1 users via the Microsoft Store, and to Windows 7 users via Windows Update. Unlike previous Windows NT releases, Windows 10 receives new builds on an ongoing basis, which are available at no additional cost to users; devices in enterprise environments can alternatively use long-term support milestones that only receive critical updates, such as security patches. In contrast to the tablet-oriented approach of Windows 8, Windows 10 returned to a desktop-oriented interface in line with previous versions of Windows. Other features added include Xbox Live integration, the Cortana virtual assistant, virtual desktops and an improved Settings component. Windows 10 also replaced Internet Explorer with Microsoft Edge. As with previous versions, Windows 10 was developed primarily for x86 processors; in 2018, a version of Windows 10 for ARM processors was released. Windows 10 received generally positive reviews upon its original release, with praise given to the return of the desktop interface, improved bundled software compared to Windows 8.1, and other capabilities. However, media outlets were critical of behavioral changes in the system, such as mandatory update installation, privacy concerns over data collection, and adware-like tactics used to promote the operating system around its release. Microsoft initially aimed to have Windows 10 installed on over one billion devices within three years of its release; that goal was ultimately reached almost five years after release, on March 16, 2020. Windows 10 surpassed Windows 7 as the most popular version of Windows worldwide by January 2018, which remained the case in 2024. Windows 10 is the most-used traditional PC operating system, with a 46% share of users. It was succeeded by Windows 11, which was released on October 5, 2021. It is the last version of Microsoft Windows that supports 32-bit processors (IA-32 and ARMv7-based), the last non-IoT edition to officially lack a CPU whitelist and support BIOS firmware, and the last version to officially support systems with TPM 1.2 or without any TPM at all. Support for Windows 10 editions which are not in the Long-Term Servicing Channel (LTSC) is set to end on October 14, 2025. Development At the Microsoft Worldwide Partner Conference in 2011, Andrew Lees, the chief of Microsoft's mobile technologies, said that the company intended to have a single software ecosystem for PCs, smartphones, tablets, and other devices: "We won't have an ecosystem for PCs, and one for phones, and one for tablets – they'll all come together." In December 2013, technology writer Mary Jo Foley reported that Microsoft was working on an update to Windows 8 codenamed "Threshold", after a planet in its Halo franchise. Similarly to "Blue" (which became Windows 8.1), Foley described Threshold not as a single operating system, but as a "wave of operating systems" across multiple Microsoft platforms and services, quoting Microsoft sources, scheduled for the second quarter of 2015. She also stated that one of the goals for Threshold was to create a unified application platform and development toolkit for Windows, Windows Phone and Xbox One (which all use a similar kernel based on Windows NT).
At the Build Conference in April 2014, Microsoft's Terry Myerson unveiled an early build of what would become Windows 10 (build 9697) that added the ability to run Microsoft Store apps inside desktop windows, as well as a more traditional Start menu modeled after Windows 7's design, using only a portion of the screen and including an application listing in the left column, with the right column displaying Windows 8-style app tiles. Myerson said that these changes would occur in a future update, but did not elaborate. Microsoft also unveiled the concept of a "universal Windows app", allowing Windows Store apps created for Windows 8.1 to be ported to Windows Phone 8.1 and Xbox One while sharing a common codebase, with an interface designed for different device form factors, and allowing user data and licenses for an app to be shared between multiple platforms. Windows Phone 8.1 would share nearly 90% of the common Windows Runtime APIs with Windows 8.1 on PCs. A screenshot of a build identifying itself as "Windows Technical Preview" (numbered 9834) was leaked in September 2014, showing a new virtual desktop system, a notification center, and a new File Explorer icon. Announcement On September 30, 2014, Microsoft officially announced that Threshold would be unveiled during a media event as Windows 10. Myerson said that Windows 10 would be Microsoft's "most comprehensive platform ever", providing a single, unified platform for desktop and laptop computers, tablets, smartphones, and all-in-one devices. He emphasized that Windows 10 would take steps towards restoring user interface mechanics from Windows 7 to improve the experience for users on non-touch devices, noting criticism of Windows 8's touch-oriented interface by keyboard and mouse users. Despite these concessions, Myerson noted that the touch-optimized interface would evolve as well on Windows 10. Regarding Microsoft's decision to name the new operating system Windows 10 instead of Windows 9, Terry Myerson said that "based on the product that's coming, and just how different our approach will be overall, it wouldn't be right to call it Windows 9." He also joked that they could not call it "Windows One" (referring to several recent Microsoft products with a similar brand, such as OneDrive, OneNote, and the Xbox One) because Windows 1.0 already existed. At a San Francisco conference in October 2014, Tony Prophet, Microsoft's Vice President of Windows Marketing, said that Windows 9 "came and went", and that Windows 10 would not be "an incremental step from Windows 8.1", but "a material step. We're trying to create one platform, one eco-system that unites as many of the devices from the small embedded Internet of Things, through tablets, through phones, through PCs and, ultimately, into the Xbox." Further details surrounding Windows 10's consumer-oriented features were presented during another media event held on January 21, 2015, titled "Windows 10: The Next Chapter". The keynote featured the unveiling of Cortana integration within the operating system, new Xbox-oriented features, Windows 10 Mobile, an updated Office Mobile suite, Surface Hub (a large-screened Windows 10 device for enterprise collaboration based upon Perceptive Pixel technology), along with HoloLens augmented reality eyewear and an associated platform for building apps that can render holograms through HoloLens. Additional developer-oriented details surrounding the "Universal Windows Platform" concept were revealed and discussed during Microsoft's Build developers' conference.
Among them was the unveiling of "Islandwood", which provides a middleware toolchain for compiling Objective-C-based software (particularly iOS software) to run as universal apps on Windows 10 and Windows 10 Mobile. A port of Candy Crush Saga made using the toolkit, which shared much of its code with the iOS version, was demonstrated, alongside the announcement that the King-developed game would be bundled with Windows 10 at launch. At the 2015 Ignite conference, Microsoft employee Jerry Nixon stated that Windows 10 would be the "last version of Windows", a statement reflecting the company's intent to apply the software-as-a-service business model to Windows, with new versions and updates to be released over an indefinite period. In 2021, however, Microsoft announced that Windows 10 would be succeeded on compatible hardware by Windows 11, and that Windows 10 support will end on October 14, 2025, marking a departure from what had been dubbed "Windows as a service". Marketing On July 20, 2015, Microsoft began "Upgrade Your World", an advertising campaign centering on Windows 10, with the premiere of television commercials in Australia, Canada, France, Germany, Japan, the United Kingdom, and the United States. The commercials focused on the tagline "A more human way to do", emphasizing new features and technologies supported by Windows 10 that sought to provide a more "personal" experience to users. The campaign culminated with launch events in thirteen cities on July 29, 2015, which celebrated "the unprecedented role our biggest fans played in the development of Windows 10". Release On July 29, 2015, Microsoft officially announced that Windows 10 would be released for retail purchase as a free upgrade from earlier versions of Windows. In comparison to previous Windows releases, which had a longer turnover between the release to manufacturing (RTM) and general release to allow for testing by vendors (and in some cases, the development of "upgrade kits" to prepare systems for installation of the new version), an HP executive explained that because it knew Microsoft targeted the operating system for a 2015 release, the company was able to optimize its then-current and upcoming products for Windows 10 in advance of its release, negating the need for such a milestone. The general availability build of Windows 10, numbered 10240, was first released to Windows Insider channels for pre-launch testing on July 15, 2015, prior to its formal release. Although a Microsoft official said there would be no specific RTM build of Windows 10, 10240 was described as an RTM build by media outlets because it was released to all Windows Insider members at once (rather than to users on the "Fast ring" first), it no longer carried pre-release branding and desktop watermark text, and its build number had mathematical connections to the number 10 in reference to the operating system's naming. The Enterprise edition was released to volume licensing on August 1, 2015. Windows 10 is distributed digitally through the "Media Creation Tool", which is functionally identical to the Windows 8 online installer, and can also be used to generate an ISO image or USB install media. In-place upgrades are supported from most editions of Windows 7 with Service Pack 1 and Windows 8.1 with Update 1, while users with Windows 8 must first upgrade to Windows 8.1. Changing between architectures (e.g., upgrading from a 32-bit edition to a 64-bit edition) via in-place upgrades is not supported; a clean installation is required.
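Because in-place upgrades cannot cross architectures, an upgrade tool typically inspects the running system first. The Python sketch below illustrates such a pre-flight check; the build-number cutoffs used are assumptions based on commonly cited values (10240 for the original Windows 10 release mentioned above, and 22000 or higher for Windows 11), not an official Microsoft API contract.

import platform
import sys

def describe_windows() -> str:
    """Report the running Windows build and processor architecture,
    as an upgrade pre-flight check might."""
    if not sys.platform.startswith("win"):
        return "Not running on Windows."
    build = sys.getwindowsversion().build
    arch = platform.machine()  # e.g. 'AMD64', 'ARM64', 'x86'
    # Assumed cutoffs: build 10240 = original Windows 10 release,
    # build 22000+ = Windows 11.
    if build >= 22000:
        family = "Windows 11"
    elif build >= 10240:
        family = "Windows 10"
    else:
        family = "pre-Windows 10"
    return f"{family} (build {build}), architecture {arch}"

if __name__ == "__main__":
    # In-place upgrades must stay on the same architecture; a 32-bit
    # ('x86') install cannot be upgraded in place to a 64-bit edition.
    print(describe_windows())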
In-place upgrades may be rolled back to the device's previous version of Windows, provided that 30 days have not passed since installation and backup files were not removed using Disk Cleanup. Windows 10 was available in 190 countries and 111 languages upon its launch, and as part of efforts to "re-engage" with users in China, Microsoft also announced that it would partner with Qihoo and Tencent to help promote and distribute Windows 10 in China, and that Chinese PC maker Lenovo would provide assistance at its service centers and retail outlets for helping users upgrade to Windows 10. At retail, Windows 10 is priced similarly to editions of Windows 8.1, with U.S. prices set at $119 and $199 for Windows 10 Home and Pro respectively. A Windows 10 Pro Pack license allows upgrades from Windows 10 Home to Windows 10 Pro. Retail copies only ship on USB flash drive media; however, system builder copies still ship as DVD-ROM media. New devices shipping with Windows 10 were also released during the operating system's launch window. Windows RT devices cannot be upgraded to Windows 10. Free upgrade offer During its first year of availability, upgrade licenses for Windows 10 could be obtained at no charge for devices with a genuine license for an eligible edition of Windows 7 or 8.1. This offer did not apply to Enterprise editions, as customers under an active Software Assurance (SA) contract with upgrade rights are entitled to obtain Windows 10 Enterprise under their existing terms. All users running non-genuine copies of Windows, and those without an existing Windows 7 or 8/8.1 license, were ineligible for this promotion; although upgrades from a non-genuine version were possible, they resulted in a non-genuine copy of 10. On the general availability build of Windows 10 (the original release), to activate and generate the "digital entitlement" for Windows 10, the operating system must first have been installed as an in-place upgrade. During the free upgrade, a genuineticket.xml file is created in the background and the system's motherboard details are registered with a Microsoft Product Activation server. Once installed, the operating system can be reinstalled on that particular system via normal means without a product key, and the system's license will automatically be detected via online activation – in essence, the Microsoft Product Activation server will remember the system's motherboard and give it the green light for product re-activation. Because of installation issues with upgrade-only installs, the November Update (version 1511) included additional activation mechanisms. This build treated Windows 7 and Windows 8/8.1 product keys as Windows 10 product keys, meaning they could be entered during installation to activate the free license, without the need to upgrade first to "activate" the hardware with Microsoft's activation servers. For major original equipment manufacturers (OEMs), Windows 8/8.1 and Windows 10 OEM product keys are embedded in the firmware of the motherboard, and if the correct edition of Windows 10 is present on the installation media, they are entered automatically during installation. Since the release of the Fall Creators Update (version 1709), Microsoft has released multi-edition installation media, to alleviate the installation and product-activation issues users experienced because of accidentally installing the wrong edition of Windows 10.
The Windows Insider Preview version of Windows 10 automatically updated itself to the generally released version as part of the version progression, and continues to be updated to new beta builds, as it had throughout the testing process. Microsoft explicitly stated that Windows Insider was not a valid upgrade path for those running a version of Windows that is ineligible for the upgrade offer; however, even if it was not installed with a license carried over from an in-place upgrade to the Insider Preview from Windows 7 or 8.1, the Insider Preview remains activated as long as the user does not exit the Windows Insider program. The offer was promoted and delivered via the "Get Windows 10" application (also known as GWX), which was automatically installed via Windows Update ahead of Windows 10's release, and activated on systems deemed eligible for the upgrade offer. Via a notification area icon, users could access an application that advertised Windows 10 and the free upgrade offer, check device compatibility, and "reserve" an automatic download of the operating system upon its release. On July 28, a pre-download process began in which Windows 10 installation files were downloaded to some computers that had reserved it. Microsoft said that those who reserved Windows 10 would be able to install it through GWX in a phased rollout process. The operating system could alternatively be downloaded at any time using a separate "Media Creation Tool" setup program, which allows for the creation of DVD or USB installation media. In May 2016, Microsoft announced that the free upgrade offer would be extended to users of assistive technologies; however, Microsoft did not implement any means of certifying eligibility for this offer, which some outlets promoted as a loophole to fraudulently obtain a free Windows 10 upgrade. Microsoft said that the offer was not intended to be used in this manner. In November 2017, Microsoft announced that this program would end on December 31, 2017. However, another method was found that allowed Windows 7 and 8/8.1 users to upgrade to Windows 10 using existing licenses, even though the free upgrade offers officially ended in 2017. Some outlets continued to promote it as a free method of upgrading from the now-unsupported Windows 7. This free upgrade method was officially closed by Microsoft on September 20, 2023; however, as of 2024, some reports indicate that it still works under certain conditions.

Licensing
During upgrades, Windows 10 licenses are not tied directly to a product key. Instead, the license status of the system's current installation of Windows is migrated, and a "digital license" (known as a "digital entitlement" in version 1511 or earlier) is generated during the activation process, which is bound to the hardware information collected during the process. If Windows 10 is reinstalled cleanly and there have not been any significant hardware changes since installation (such as a motherboard change), the online activation process will automatically recognize the system's digital entitlement if no product key is entered during installation. However, unique product keys are still distributed within retail copies of Windows 10. As with previous non-volume-licensed variants of Windows, significant hardware changes will invalidate the digital entitlement and require Windows to be re-activated.
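For readers who want to check how a given machine is licensed, Windows ships with the Software Licensing Management Tool (slmgr.vbs), whose /xpr switch reports activation state. A minimal sketch in Python that shells out to it (the script and switch exist on stock Windows installations; the wrapper function itself is illustrative only):

    import subprocess

    def activation_status() -> str:
        """Query Windows activation status via the built-in slmgr.vbs script.

        `slmgr /xpr` prints whether the machine is permanently activated;
        running it through cscript with //nologo suppresses the banner so
        only the result remains.
        """
        result = subprocess.run(
            ["cscript", "//nologo", r"C:\Windows\System32\slmgr.vbs", "/xpr"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print(activation_status())  # e.g. "... The machine is permanently activated."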
Features
Windows 10 makes its user experience and functionality more consistent between different classes of device, and addresses many shortcomings of the user interface introduced in Windows 8. Windows 10 Mobile, the successor to Windows Phone 8.1, shared some user interface elements and apps with its PC counterpart. Windows 10 supports universal apps, an expansion of the Metro-style apps first introduced in Windows 8. Universal apps can be designed to run across multiple Microsoft product families with nearly identical code, including PCs, tablets, smartphones, embedded systems, Xbox One, Surface Hub, and Mixed Reality. The Windows user interface was revised to handle transitions between a mouse-oriented interface and a touchscreen-optimized interface based on available input devices, particularly on 2-in-1 PCs. Both interfaces include an updated Start menu, which combines elements of Windows 7's traditional Start menu with the tiles of Windows 8. Windows 10 also introduced the Microsoft Edge web browser, a virtual desktop system, a window and desktop management feature called Task View, support for fingerprint and face recognition login, new security features for enterprise environments, and DirectX 12. The Windows Runtime app ecosystem was revised into the Universal Windows Platform (UWP). These universal apps are made to run across multiple platforms and device classes, including smartphones, tablets, Xbox One consoles, and other devices compatible with Windows 10. Windows apps share code across platforms, have responsive designs that adapt to the needs of the device and available inputs, can synchronize data between Windows 10 devices (including notifications, credentials, and allowing cross-platform multiplayer for games), and are distributed through the Microsoft Store (rebranded from Windows Store in September 2017). Developers can allow "cross-buys", where purchased licenses for an app apply to all of the user's compatible devices, rather than only the device on which it was purchased (e.g., a user purchasing an app on PC is also entitled to use the smartphone version at no extra cost). The ARM version of Windows 10 allows running applications for x86 processors through 32-bit software emulation. On Windows 10, the Microsoft Store serves as a unified storefront for apps, video content, and eBooks. Windows 10 also allows web apps and desktop software (using either Win32 or .NET Framework) to be packaged for distribution on the Microsoft Store. Desktop software distributed through the Microsoft Store is packaged using the App-V system to allow sandboxing.

User interface and desktop
A new iteration of the Start menu is used on the Windows 10 desktop, with a list of places and other options on the left side, and tiles representing applications on the right. The menu can be resized, and expanded into a full-screen display, which is the default option in Tablet mode. A new virtual desktop system was added by a feature known as Task View, which displays all open windows and allows users to switch between them, or to switch between multiple workspaces. Universal apps, which previously could be used only in full-screen mode, can now be used in self-contained windows similarly to other programs. Program windows can now be snapped to quadrants of the screen by dragging them to a corner. When a window is snapped to one side of the screen, Task View appears and the user is prompted to choose a second window to fill the unused side of the screen (a feature called "Snap Assist"). The Windows system icons were also changed.
Charms have been removed; their functionality in universal apps is accessed from an App commands menu on their title bar. In place of the charms bar is Action Center, which displays notifications and settings toggles. It is accessed by clicking an icon in the notification area, or by swiping in from the right edge of the screen. Notifications can be synced between multiple devices. The Settings app (formerly PC Settings) was refreshed and now includes more options that were previously exclusive to the desktop Control Panel. Windows 10 is designed to adapt its user interface based on the type of device being used and the available input methods. It offers two separate user interface modes: a user interface optimized for mouse and keyboard, and a "Tablet mode" designed for touchscreens. Users can toggle between these two modes at any time, and Windows can prompt or automatically switch when certain events occur, such as disabling Tablet mode on a tablet if a keyboard or mouse is plugged in, or when a 2-in-1 PC is switched to its laptop state. In Tablet mode, programs default to a maximized view, and the taskbar contains a back button and hides buttons for opened or pinned programs by default; Task View is used instead to switch between programs. The full-screen Start menu is used in this mode, similarly to Windows 8, but scrolls vertically instead of horizontally.

System security
Windows 10 incorporates multi-factor authentication technology based upon standards developed by the FIDO Alliance. The operating system includes improved support for biometric authentication through the Windows Hello platform. Devices with supported cameras (requiring infrared illumination, such as Intel RealSense) allow users to log in with iris or face recognition, similarly to Kinect. Devices with supported readers allow users to log in through fingerprint recognition. Support was also added for palm-vein scanning through a partnership with Fujitsu in February 2018. Credentials are stored locally and protected using asymmetric encryption. In 2017, researchers demonstrated that Windows Hello could be bypassed on fully updated Windows 10 version 1703 with a color printout of a person's picture taken with an IR camera. In 2021, researchers were again able to bypass the Windows Hello functionality by using custom hardware disguised as a camera, which presented an IR photo of the owner's face. In addition to biometric authentication, Windows Hello supports authentication with a PIN. By default, Windows requires a PIN to consist of four digits, but it can be configured to permit more complex PINs. However, a PIN is not merely a simpler password: while passwords are transmitted to domain controllers, PINs are not. They are tied to one device, so if a PIN is compromised, only that device is affected. Backed by a Trusted Platform Module (TPM) chip, Windows uses PINs to create strong asymmetric key pairs, making the authentication token transmitted to the server harder to crack. In addition, whereas weak passwords may be broken via rainbow tables, the TPM makes the much simpler Windows PINs resilient to brute-force attacks. When Windows 10 was first introduced, multi-factor authentication was provided by two components: Windows Hello and Passport (not to be confused with the Passport platform of 1998). Later, Passport was merged into Windows Hello.
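The security property described above, that the secret never leaves the device, is the general pattern of asymmetric challenge-response authentication. The sketch below illustrates that pattern in Python using the third-party cryptography package; it is a conceptual analogue only, not Windows Hello's actual protocol, and the private key here lives in process memory rather than in a TPM:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Device side: the private key is generated locally and never transmitted.
    # (Windows Hello would keep the equivalent key inside the TPM.)
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()  # only this is registered with the server

    # Server side: issue a random challenge instead of asking for a password.
    challenge = os.urandom(32)

    # Device side: prove possession of the key by signing the challenge.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(challenge, pss, hashes.SHA256())

    # Server side: verify with the public key; raises InvalidSignature on failure.
    public_key.verify(signature, challenge, pss, hashes.SHA256())
    print("Challenge-response succeeded; no reusable secret crossed the wire.")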
The Enterprise edition of Windows 10 offers additional security features; administrators can set up policies for the automatic encryption of sensitive data, selectively block applications from accessing encrypted data, and enable Device Guard, a system that allows administrators to enforce a high-security environment by blocking the execution of software that is not digitally signed by a trusted vendor or Microsoft. Device Guard is designed to protect against zero-day exploits, and runs inside a hypervisor so that its operation remains separated from the operating system itself.

Command line
The console windows based on Windows Console (for any console app, not just PowerShell and Windows Command Prompt) can now be resized without any restrictions, can be made to cover the full screen by pressing Alt+Enter, and can use standard keyboard shortcuts, such as those for cut, copy, and paste. Other features, such as word wrap and transparency, were also added. These functions can be disabled to revert to the legacy console if needed. The Anniversary Update added the Windows Subsystem for Linux (WSL), which allows the installation of a user space environment from a supported Linux distribution that runs natively on Windows. The subsystem translates Linux system calls to those of the Windows NT kernel (full system call compatibility is claimed only as of WSL 2, included in a later Windows update). The environment can execute the Bash shell and 64-bit command-line programs (WSL 2 also supports 32-bit Linux programs and, with supporting software installed, graphical applications and GPU use for other purposes). Linux distributions for the Windows Subsystem for Linux are obtained through the Microsoft Store. The feature initially supported an Ubuntu-based environment; Microsoft announced in May 2017 that it would add Fedora and openSUSE environment options as well.
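Once a distribution is installed, Windows programs can invoke Linux tools through the wsl.exe launcher, which forwards its arguments to the Linux side. A small illustrative sketch in Python (wsl.exe and its argument passing are real; the helper function and choice of command are just an example):

    import subprocess

    def run_in_linux(command: list[str]) -> str:
        """Run a command inside the default WSL distribution and return its output.

        wsl.exe forwards everything after it to the Linux side, so Windows
        processes can call Linux command-line tools directly.
        """
        result = subprocess.run(["wsl"] + command,
                                capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # The kernel name reported from inside WSL, e.g. "Linux".
        print(run_in_linux(["uname", "-s"]).strip())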
Storage requirements
To reduce the storage footprint of the operating system, Windows 10 automatically compresses system files. This can reduce the storage footprint of Windows by approximately 1.5 GB for 32-bit systems and 2.6 GB for 64-bit systems. The level of compression used depends on a performance assessment performed during installation or by OEMs, which tests how much compression can be used without harming operating system performance. Furthermore, the Refresh and Reset functions use runtime system files instead, making a separate recovery partition redundant, allowing patches and updates to remain installed following the operation, and further reducing the amount of space required for Windows 10 by up to 12 GB. These functions replace the WIMBoot mode introduced on Windows 8.1 Update, which allowed OEMs to configure low-capacity devices with flash-based storage to use Windows system files out of the compressed WIM image typically used for installation and recovery. Windows 10 also includes a function in its Settings app that allows users to view a breakdown of how their device's storage capacity is being used by different types of files, and to determine whether certain types of files are saved to internal storage or an SD card by default.

Online services and functionality
Windows 10 introduces Microsoft Edge, a new default web browser. It initially featured a new standards-compliant rendering engine derived from Trident, and also included annotation tools and integration with other Microsoft platforms present within Windows 10. Internet Explorer 11 is maintained on Windows 10 for compatibility purposes, but is deprecated in favor of Edge and, since mid-June 2022, is no longer supported on editions that follow Microsoft's Modern Lifecycle Policy. The initial version of Edge was later succeeded by a new iteration derived from the Chromium project and the Blink layout engine, which replaced the previous EdgeHTML-based version of Edge (renamed "Microsoft Edge Legacy") and is bundled with the OS by default from version 20H2 onwards. Windows 10 incorporates a universal search box located alongside the Start and Task View buttons, which can be hidden or condensed into a single button. Previous versions featured Microsoft's intelligent personal assistant Cortana, first introduced with Windows Phone 8.1 in 2014, which supports both text and voice input. Many of its features are a direct carryover from Windows Phone, including integration with Bing, a Notebook feature for managing personal information, and the ability to search for files, play music, launch applications, set reminders, and send emails. Since the November 2019 update, Microsoft has begun to downplay Cortana as part of a repositioning of the product towards enterprise use, with the May 2020 update removing its Windows shell integration and consumer-oriented features. Microsoft Family Safety is replaced by Microsoft Family, a parental controls system that applies across Windows platforms and Microsoft online services. Users can create a designated family, and monitor and restrict the actions of users designated as children, such as access to websites, enforcing age ratings on Microsoft Store purchases, and other restrictions. The service can also send weekly e-mail reports to parents detailing a child's computer usage. Unlike previous versions of Windows, child accounts in a family must be associated with a Microsoft account, which allows these settings to apply across all Windows 10 devices that a particular child is using. Windows 10 also offers the Wi-Fi Sense feature originating from Windows Phone 8.1; users can optionally have their device automatically connect to suggested open hotspots, and share their home network's password with contacts (either via Skype, People, or Facebook) so they may automatically connect to the network on a Windows 10 device without needing to manually enter its password. Credentials are stored in an encrypted form on Microsoft servers and sent to the devices of the selected contacts. Passwords are not viewable by the guest user, and the guest user is not allowed to access other computers or devices on the network. Wi-Fi Sense is not usable on 802.1X-encrypted networks; appending "_optout" to the SSID also blocks the corresponding network from being used for this feature. Universal calling and messaging apps for Windows 10 are built in as of the November 2015 update: Messaging, Skype Video, and Phone. These offer built-in alternatives to the Skype download and sync with Windows 10 Mobile.
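The SSID opt-out convention mentioned above is purely string-based, as a short sketch makes clear (a hypothetical helper, following the stated "_optout" suffix rule):

    def wifi_sense_allowed(ssid: str) -> bool:
        """A network opts out of Wi-Fi Sense sharing by suffixing its SSID."""
        return not ssid.endswith("_optout")

    print(wifi_sense_allowed("HomeNetwork"))         # True
    print(wifi_sense_allowed("HomeNetwork_optout"))  # False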
Multimedia and gaming
Windows 10 provides greater integration with the Xbox ecosystem. Xbox SmartGlass is succeeded by the Xbox Console Companion (formerly the Xbox app), which allows users to browse their game library (including both PC and Xbox console games); Game DVR is also available using a keyboard shortcut, allowing users to save the last 30 seconds of gameplay as a video that can be shared to Xbox Live, OneDrive, or elsewhere. Windows 10 also allows users to control and play games from an Xbox One console over a local network. The Xbox Live SDK allows application developers to incorporate Xbox Live functionality into their apps, and future wireless Xbox One accessories, such as controllers, are supported on Windows with an adapter. Microsoft also intends to allow cross-purchases and save synchronization between Xbox One and Windows 10 versions of games; Microsoft Studios games such as ReCore and Quantum Break are intended to be exclusive to Windows 10 and Xbox One. Candy Crush Saga and Microsoft Solitaire Collection are also automatically installed upon installation of Windows 10. Windows 10 adds native game recording and screenshot capture ability using the newly introduced Game Bar. Users can also have the OS continuously record gameplay in the background, which then allows the user to save the last few moments of gameplay to the storage device. Windows 10 adds FLAC and HEVC codecs and support for the Matroska media container, allowing these formats to be opened in Windows Media Player and other applications. Windows Media Center is no longer bundled. Windows 10 includes DirectX 12, alongside WDDM 2.0. Unveiled in March 2014 at GDC, DirectX 12 aims to provide "console-level efficiency" with "closer to the metal" access to hardware resources, and reduced CPU and graphics driver overhead. Most of the performance improvements are achieved through low-level programming, which allows developers to use resources more efficiently and reduces single-threaded CPU bottlenecking caused by abstraction through higher-level APIs. DirectX 12 also features support for vendor-agnostic multi-GPU setups. WDDM 2.0 introduces a new virtual memory management and allocation system to reduce the workload on the kernel-mode driver.

Font support
Windows 10 adds three new default typefaces compared to Windows 8, but omits several others. The removed typefaces are available in supplemental packs and may be added manually over a non-metered internet connection.

Editions and pricing
Windows 10 is available in five main editions for personal computing devices; the Home and Pro editions are sold at retail in most countries, and as pre-loaded software on new computers. Home is aimed at home users, while Pro is aimed at power users and small businesses. Each edition of Windows 10 includes all of the capabilities and features of the edition below it, and adds additional features oriented towards its market segment; for example, Pro adds additional networking and security features such as BitLocker, Device Guard, Windows Update for Business, and the ability to join a domain. Enterprise and Education contain additional features aimed towards business environments, and are only available through volume licensing. As part of Microsoft's unification strategies, Windows products that are based on Windows 10's common platform but meant for specialized platforms are marketed as editions of the operating system, rather than as separate product lines. An updated version of Microsoft's Windows Phone operating system for smartphones, and also tablets, was branded as Windows 10 Mobile. Enterprise and Mobile editions were also produced for embedded systems, along with Windows 10 IoT Core, which is designed specifically for use in small-footprint, low-cost devices and Internet of Things (IoT) scenarios, and is similar in scope to Windows Embedded Compact.
On May 2, 2017, Microsoft unveiled Windows 10 S (referred to in leaks as Windows 10 Cloud), a feature-limited edition of Windows 10 designed primarily for devices in the education market (competing, in particular, with ChromeOS netbooks), such as the Surface Laptop that Microsoft also unveiled at this time. The OS restricts software installation to applications obtained from the Microsoft Store; the device may be upgraded to Windows 10 Pro for a fee to enable unrestricted software installation. As a time-limited promotion, Microsoft stated that this upgrade would be free on the Surface Laptop until March 31, 2018. Windows 10 S also features a faster initial setup and login process, and allows devices to be provisioned using a USB drive with the Intune for Education platform. In March 2018, Microsoft announced that Windows 10 S would be deprecated because of market confusion, and would be replaced by "S Mode", an OEM option wherein Windows defaults to only allowing applications to be installed from the Microsoft Store, but does not require payment to disable these restrictions.

Support lifecycle
Windows 10 was originally released following Microsoft's fixed lifecycle policy, receiving mainstream support for five years after its original release, followed by five years of extended support. However, starting in February 2018, this was switched to the modern lifecycle policy (excluding LTSC), with each version receiving 18 or 30 months of support after release (the latter only for H2 versions), depending on edition. Furthermore, the Home edition does not support the deferral of feature updates and will thus often receive a new version of Windows 10 prior to the end of the 18-month support period. Microsoft will continue to support at least one standard Windows 10 release until October 14, 2025. On April 27, 2023, Microsoft announced that version 22H2 would be the last version of Windows 10, meaning this version will be supported beyond the normal 18/30 months.

Hardware exceptions
Microsoft's support lifecycle policy for the operating system notes that updates "are cumulative, with each update built upon all of the updates that preceded it", that "a device needs to install the latest update to remain supported", and that a device's ability to receive future updates will depend on hardware compatibility, driver availability, and whether the device is within the OEM's "support period", a new aspect not accounted for in lifecycle policies for previous versions. This policy was first invoked in 2017 to block Intel Clover Trail devices from receiving the Creators Update, as Microsoft asserted that future updates "require additional hardware support to provide the best possible experience", and that Intel no longer provided support or drivers for the platform. Microsoft stated that these devices would no longer receive feature updates, but would still receive security updates through January 2023.

Extended Security Updates
Microsoft announced an Extended Security Update (ESU) service in December 2023 for Windows 10 devices, lasting until October 2028, on an annual pricing plan. Unlike previous Windows ESU services, Windows 10 ESU will be available to individual consumers as well, albeit only for a single year. Windows 365 customers will receive ESU at no charge; for everyone else, the annual price doubles each year. For businesses, per-device pricing will be $61 for the first year, $122 for the second, and $244 for the third.
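The business schedule above is a simple geometric doubling from the first-year price; a quick check of the quoted figures (the helper below is illustrative arithmetic, not an official pricing tool):

    def esu_price(first_year_price: float, year: int) -> float:
        """Per-device ESU price for a given year (1-based), doubling annually."""
        return first_year_price * 2 ** (year - 1)

    # Reproduces the quoted business pricing: $61, $122, $244.
    print([esu_price(61, y) for y in (1, 2, 3)])  # [61, 122, 244]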
Businesses using a cloud-based update management solution such as Microsoft Intune will get a 25% discount. For education users, per-device pricing is $1 for the first year, then $2 and $4 for each subsequent year. Consumer pricing is $30 per device, for one year only, with enrollment starting in 2025.

Support and update branches
The following table collects the current status of the aforementioned updating and support of the different branches of Windows 10:

Updates and support
Unlike previous versions of Windows, Windows Update does not allow the selective installation of updates, and all updates (including patches, feature updates, and driver software) are downloaded and installed automatically. Users can only choose whether their system will reboot automatically to install updates when the system is inactive, or be notified to schedule a reboot. If a wireless network is designated as "metered" (a function that automatically reduces the operating system's background network activity to conserve limits on Internet usage), most updates are not downloaded until the device is connected to a non-metered network. Version 1703 allows wired (Ethernet) networks to be designated as metered, but Windows may still download certain updates while connected to a metered network. In version 2004, after installing the August 2020 security update, optional driver and non-security updates pushed via Windows Update are no longer automatically downloaded and installed on devices; users can access them under Settings > Update & Security > Windows Update > View optional updates. Updates can cause compatibility or other problems; a Microsoft troubleshooter program allows bad updates to be uninstalled. Under the Windows end-user license agreement, users consent to the automatic installation of all updates, features, and drivers provided by the service, and implicitly consent "without any additional notice" to the possibility of features being modified or removed. The agreement also states, specifically for users of Windows 10 in Canada, that they may pause updates by disconnecting their device from the Internet. Windows Update can also use a peer-to-peer system for distributing updates; by default, users' bandwidth is used to distribute previously downloaded updates to other users, in combination with Microsoft servers. Users can instead choose to only use peer-to-peer updates within their local area network.

Preview releases
A public beta program for Windows 10, known as the Windows Insider Program, began with the first publicly available preview release on October 1, 2014. Insider preview builds are aimed at enthusiasts and enterprise users for the testing and evaluation of updates and new features. Users of the Windows Insider program receive occasional updates to newer preview builds of the operating system and continue to be able to evaluate preview releases after general availability (GA) in July 2015; this is in contrast to previous Windows beta programs, where public preview builds were released less frequently and only during the months preceding GA. Windows Insider builds continued to be released after the release to manufacturing (RTM) of Windows 10.

Versions
Feature updates
Windows 10 is often described by Microsoft as being a "service", as it receives regular "feature updates" that contain new features and other updates and fixes.
In April 2017, Microsoft stated that these updates would be released twice a year, every March and September; this cadence eventually continued until version 21H2. Mainstream builds of Windows 10, up to version 2004, were labeled "YYMM", with "YY" representing the two-digit year and "MM" representing the month of release; for example, version 1809 was released in September (the ninth month) of 2018. This changed with the 20H2 release, where "MM" instead represents the half of the year in which the update was released: H1 for the first half and H2 for the second half. Before version 1903, the pace at which feature updates were received by devices depended on which release channel was used. The default branch for all users of Windows 10 Home and Pro was "Semi-Annual Channel (Targeted)" (formerly "Current Branch", or "CB"), which received stable builds after they were publicly released by Microsoft. Each build of Windows 10 is supported for either 18 or 30 months after its original release (the latter only for H2 versions), depending on edition. In enterprise environments, Microsoft officially intended that this branch be used for "targeted" deployments of newly released stable versions, so that they could be evaluated and tested on a limited number of devices before a wider deployment. Once a stable build is certified by Microsoft and its partners as being suitable for broad deployment, the build is then released on the "Semi-Annual Channel" (formerly "Current Branch for Business", or "CBB"), which is supported by the Pro and Enterprise editions of Windows 10. Semi-Annual Channel receives stable builds on a four-month delay from their release on the Targeted channel. Administrators can also use the "Windows Update for Business" system, as well as existing tools such as WSUS and System Center Configuration Manager, to organize structured deployments of feature updates across their networks. The Windows Insider branches receive unstable builds as they are released; they are divided into two channels, "Dev" (which receives new builds immediately after their release) and "Beta" (whose releases are slightly delayed from their "Dev" release). Enterprise licensees may use the Windows 10 Enterprise LTSC (formerly LTSB) edition, where "LTSC" stands for "Long-Term Servicing Channel", which only receives quality-of-life updates (i.e., security patches) and has a full 5- or 10-year support lifecycle for each build. This edition is designed for "special-purpose devices" that perform a fixed function (such as automated teller machines and medical equipment). For this reason, it excludes Cortana, the Microsoft Store, and all bundled Universal Windows Platform apps (including Microsoft Edge; these builds therefore ship with Internet Explorer as their only browser). Microsoft director Stella Chernyak explained that "we have businesses [that] may have mission-critical environments where we respect the fact they want to test and stabilize the environment for a long time." Four LTSC builds have been released, correlating with the 1507, 1607, 1809, and 21H2 versions of Windows 10, respectively. In July 2017, Microsoft announced changes in the terminology for Windows branches as part of its effort to unify the update cadence with that of Office 365 ProPlus and Windows Server 2016.
The branch system now defines two paces of upgrade deployment in enterprise environments: "targeted" initial deployment of a new version on selected systems, immediately after its stable release, for final testing; and "broad" deployment afterwards. Hence, "Current Branch" became "Semi-Annual Channel (Targeted)", and "Current Branch for Business", for broad deployment, became "Semi-Annual Channel". In February 2019, Microsoft announced further changes in delivering updates, beginning with the release of version 1903: a single SAC would be released, SAC-T would be retired, and users would no longer be able to switch to different channels. Instead, updates can be deferred by 30 to 90 days, depending on how the device is configured to defer them. In April 2019, it was announced that, in addition, feature updates would no longer be automatically pushed to users. However, after the release of version 2004, the update is only pushed to devices running a version that is nearing its end of service, and it can be paused for up to 35 days. In November 2021, following the launch of version 21H2, Microsoft committed to delivering feature updates once a year, every October or November, and accordingly rebranded the "Semi-Annual Channel" as the "General Availability Channel". On April 27, 2023, Microsoft announced that version 22H2, released in November 2022, would be the last feature update for Windows 10. Feature updates prior to version 1909 were distributed solely as an in-place upgrade installation, requiring the download of a complete operating system package (approximately 3.5 GB in size for 64-bit systems). Unlike previous builds, version 1909 was designed primarily as an update rollup of version 1903, focused on minor feature additions and enhancements. For upgrades to 1909 from 1903, a new delivery method was used in which the changes were delivered as part of the monthly cumulative update, but were left in a dormant state until the 1909 "enablement" patch was installed. The full upgrade process was still used for those on builds prior to 1903.

Features in development
In May 2017, Microsoft unveiled the Fluent Design System (previously codenamed "Project Neon"), a revamp of Microsoft Design Language 2 that includes guidelines for the designs and interactions used within software designed for all Windows 10 devices and platforms. The new design language includes more prominent use of motion, depth, and translucency effects. Microsoft stated that the implementation of this design language would be performed over time, and it had already started to implement elements of it in the Creators Update and Fall Creators Update. On December 7, 2016, Microsoft announced that, as part of a partnership with Qualcomm, it planned to introduce support in 2017 for running Win32 software on the ARM architecture with a 32-bit x86 processor emulator. Terry Myerson stated that this move would enable the production of Qualcomm Snapdragon-based Windows devices with cellular connectivity and improved power efficiency over Intel-compatible devices, while still being capable of running the majority of existing Windows software (unlike the previous Windows RT, which was restricted to Windows Store apps). Microsoft initially targeted this project towards laptops.
Microsoft launched the Always Connected PCs branding in December 2017 to market Windows 10 devices with cellular connectivity, which included two ARM-based 2-in-1 laptops from Asus and HP featuring the Snapdragon 835 system-on-chip, and announced a partnership between AMD and Qualcomm to integrate Qualcomm's Snapdragon X16 gigabit LTE modem with AMD's Ryzen Mobile platform. In August 2019, Microsoft began testing changes to its handling of the user interface on convertible devices, downplaying the existing "Tablet Mode" option in favor of presenting the normal desktop with optimizations for touch when a keyboard is not present, such as increasing the space between taskbar buttons and displaying the virtual keyboard when text fields are selected. In April 2021, the ability to run Linux applications using a graphical user interface, such as Audacity, directly in Windows, was introduced as a preview. This feature would later be included as part of the updated Windows Subsystem for Linux 2 for Windows 11 only. In November 2022, Microsoft released Windows Subsystem for Linux 2 on the Microsoft Store, for both Windows 10 and 11, allowing Linux graphical user interface applications to be used natively through WSL.

System requirements
The basic hardware requirements to install Windows 10 were initially the same as those for Windows 8 and Windows 8.1, and only slightly higher than those for Windows 7 and Windows Vista. As of the May 2019 update, the minimum disk space requirement has been increased to 32 GB. In addition, on new installations, Windows permanently reserves up to 7 GB of disk space to ensure the proper installation of future feature updates. The 64-bit variants require a CPU that supports certain instructions. Devices with low storage capacity must provide a USB flash drive or SD card with sufficient storage for temporary files during upgrades. Some pre-built devices may be described as "certified" by Microsoft. Certified tablets must include Power, Volume up, and Volume down keys; Windows and Rotation lock keys are no longer required. As with Windows 8, all certified devices must ship with UEFI Secure Boot enabled by default. Unlike Windows 8, OEMs are no longer required to make Secure Boot settings user-configurable, meaning that devices may optionally be locked to run only Microsoft-signed operating systems. A supported infrared-illuminated camera is required for Windows Hello face authentication, and a supported fingerprint reader is required for Windows Hello fingerprint authentication. Device Guard requires a UEFI system with no third-party certificates loaded, and CPU virtualization extensions (including SLAT and IOMMU) enabled in firmware. Beginning with Intel Kaby Lake and AMD Bristol Ridge, Windows 10 is the only version of Windows that Microsoft officially supports on newer CPU microarchitectures. Terry Myerson stated that Microsoft did not want to make further investments in optimizing older versions of Windows and associated software for newer generations of processors. These policies were criticized by the media, which especially noted that Microsoft was refusing to support newer hardware (particularly Intel's Skylake CPUs, which were also originally targeted by the new policy with a premature end of support that was ultimately retracted) on Windows 8.1, a version of Windows that was still in mainstream support until January 2018. In addition, an enthusiast-created modification was released that disabled the check and allowed Windows 8.1 and earlier versions to continue to work on the platform.
Windows 10 versions 1703 and later do not support Intel Clover Trail systems-on-chips, per Microsoft's stated policy of only providing updates for devices during their OEM support period. Starting with Windows 10 version 2004, Microsoft requires new OEM devices to use 64-bit processors, and has therefore ceased the distribution of x86 (32-bit) variants of Windows 10 via OEM channels. The 32-bit variants of Windows 10 remain available via non-OEM channels, and Microsoft will continue to "[provide] feature and security updates on these devices". This was later followed by Windows 11 dropping support for 32-bit hardware altogether, making Windows 10 the final version of Windows with a 32-bit version available.

Physical memory limits
The maximum amount of RAM that Windows 10 can support varies depending on the product edition and the processor architecture. All 32-bit editions of Windows 10, including Home and Pro, support up to 4 GB. 64-bit editions of Windows 10 Education and Pro support up to 2 TB, 64-bit editions of Windows 10 Pro for Workstations and Enterprise support up to 6 TB, while the 64-bit edition of Windows 10 Home is limited to 128 GB.

Processor limits
Windows 10 supports up to two physical processors. A maximum of 32 cores is supported in 32-bit versions of Windows 10, whereas up to 256 cores are supported in the 64-bit versions.
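These per-edition caps are easy to express as a lookup table. A small illustrative sketch in Python (the figures come from the limits above; the function and edition keys are just an example):

    # Maximum supported RAM for 64-bit Windows 10 editions, per the limits above.
    RAM_LIMITS_GB = {
        "Home": 128,
        "Pro": 2 * 1024,                   # 2 TB
        "Education": 2 * 1024,             # 2 TB
        "Pro for Workstations": 6 * 1024,  # 6 TB
        "Enterprise": 6 * 1024,            # 6 TB
    }

    def usable_ram_gb(edition: str, installed_gb: int) -> int:
        """RAM the edition can address: the smaller of installed and the cap."""
        return min(installed_gb, RAM_LIMITS_GB[edition])

    print(usable_ram_gb("Home", 256))  # 128: Home caps out below the installed 256 GB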
Reception
Windows 10 received generally positive reviews, with most reviewers considering it superior to its predecessor, Windows 8. CNN Business praised every aspect of Windows 10. TechRadar felt that it could be "the new Windows 7", citing the operating system's more familiar user interface, improvements to bundled apps, performance improvements, a "rock solid" search system, and the Settings app being more full-featured than its equivalents on 8 and 8.1. The Edge browser was praised for its performance, although it was not in a feature-complete state at launch. While considering universal apps a "great idea in principle", the review raised concerns over Microsoft's focus on the universal app ecosystem:

It's by no means certain that developers are going to flock to Windows 10 from iOS and Android simply because they can convert their apps easily. It may well become a no-brainer for them, but at the moment a conscious decision is still required.

Engadget was similarly positive, noting that the upgrade process was painless and that Windows 10's user interface had balanced aspects of Windows 8 with those of previous versions with a more mature aesthetic. Cortana's always-on voice detection was considered to be its "true strength"; the review also cited its query capabilities and personalization features, but noted that it was not as pre-emptive as Google Now. Windows 10's stock applications were praised for being improved over their Windows 8 counterparts, and for supporting windowed modes. The Xbox app was also praised for its Xbox One streaming functionality, although its use over a wired network was recommended because of inconsistent quality over Wi-Fi. In conclusion, it was argued that "Windows 10 delivers the most refined desktop experience ever from Microsoft, and yet it's so much more than that. It's also a decent tablet OS, and it's ready for a world filled with hybrid devices. And, barring another baffling screwup, it looks like a significant step forward for mobile. Heck, it makes the Xbox One a more useful machine."

On the other hand, Ars Technica panned the new Tablet mode interface for removing the charms and app switching, for making the Start button harder to use by requiring users to reach for the button on the bottom-left rather than at the center of the screen when swiping with a thumb, and for making application switching less instantaneous through the use of Task View. Microsoft Edge was praised for being "tremendously promising" and "a much better browser than Internet Explorer ever was", but was criticized for its lack of functionality at launch. In conclusion, contrasting with Windows 8, which was seen as a "reliable" platform albeit one consisting of unfinished concepts, Windows 10 was considered "the best Windows yet" and was praised for having a better overall concept in its ability to be "comfortable and effective" across a wide array of form factors, though it was buggier at launch than previous versions of Windows. ExtremeTech felt that Windows 10 restricted the choices of users, citing its more opaque settings menus, its forcing users to give up bandwidth for the peer-to-peer distribution of updates, and its removal of user control over specific functions, such as updates, explaining that "it feels, once again, as if Microsoft has taken the seed of a good idea, like providing users with security updates automatically, and shoved the throttle to maximum." Windows 10 has also received criticism for deleting files without user permission after automatic updates. Critics characterized the release of Windows 10 as having been forced onto users of past versions of Windows. Critics have also noted that Windows 10 heavily emphasizes freemium services and contains various advertising facilities; some outlets have considered these to be a hidden "cost" of the free upgrade offer. Examples have included microtransactions in bundled games such as Microsoft Solitaire Collection, default settings that display promotions of "suggested" apps in the Start menu, "tips" on the lock screen that may contain advertising, ads displayed in File Explorer for Office 365 subscriptions in the Creators Update, and various advertising notifications displayed by default that promote Microsoft Edge when it is not set as the default web browser (including, in a September 2018 build, nag pop-ups interrupting the installation of competing browsers). Due to the high system requirements of its successor, Windows 11, some critics have argued that Windows 10 is better than its successor and have warned against switching to Windows 11, given its high system requirements and its very limited new features compared to Windows 10 at the time of release.

Market share and sales
Windows 10 usage increased until August 2016, then plateaued; eventually, in 2018, it became more popular than Windows 7 (though Windows 7 was still more used in some countries in Asia and Africa in 2019). The operating system now runs on over a billion devices, reaching the goal set by Microsoft two years after the initial deadline. Twenty-four hours after it was released, Microsoft announced that over 14 million devices were running Windows 10. On August 26, Microsoft said over 75 million devices were running Windows 10, in 192 countries, and on over 90,000 unique PC or tablet models. According to Terry Myerson, there were over 110 million devices running Windows 10 as of October 6, 2015. On January 4, 2016, Microsoft reported that Windows 10 had been activated on over 200 million devices since the operating system's launch in July 2015.
According to StatCounter, Windows 10 overtook Windows 8.1 in December 2015. Iceland was the first country where Windows 10 ranked first (not only on the desktop, but across all platforms), with several larger European countries following. For one week in late November 2016, Windows 10 overtook Windows 7 as the most used version in the United States, before losing the lead again. By February 2017, Windows 10 was losing market share to Windows 7. In mid-January 2018, Windows 10 had a slightly higher global market share than Windows 7; it was noticeably more popular on weekends, and its popularity varied widely by region: Windows 10 was then still behind in Africa, for example, and far ahead in some other regions, such as Oceania. Windows 10 is now the most used version in virtually all countries. Its market share peaked at 82.5% in December 2021, shortly after the introduction of its successor, and has been declining since Windows 11's release; Windows 11 is now the second most popular Windows version in many countries. Windows 10 is estimated to have a 64% share of Windows PCs, still twice the 32% share of its successor, Windows 11 (and 19 times Windows 7's 3.3% share). Windows 10 has an estimated 46% share of all traditional PCs (the rest being other Windows editions and other operating systems such as macOS and Linux), and an estimated 16% of all devices (including mobile, tablet, and console) run Windows 10.

Update system changes
Windows 10 Home is permanently set to download all updates automatically, including cumulative updates, security patches, and drivers, and users cannot individually select updates to install or not. Tom Warren of The Verge felt that, given that web browsers such as Google Chrome had already adopted such an automatic update system, such a requirement would help to keep all Windows 10 devices secure, and felt that "if you're used to family members calling you for technical support because they've failed to upgrade to the latest Windows service pack or some malware disabled Windows Update then those days will hopefully be over." Concerns were raised that because of these changes, users would be unable to skip the automatic installation of updates that are faulty or cause issues with certain system configurations, although build upgrades will also be subject to public beta testing via the Windows Insider program. There were also concerns that the forced installation of driver updates through Windows Update, where they were previously designated as "optional", could cause conflicts with drivers that were installed independently of Windows Update. Criticism was also directed towards Microsoft's decision to no longer provide specific details on the contents of cumulative updates for Windows 10. On February 9, 2016, Microsoft retracted this decision and began to provide release notes for cumulative updates on the Windows website. Some users reported that during the installation of the November upgrade, some applications (particularly utility programs such as CPU-Z and Speccy) were automatically uninstalled during the upgrade process, and some default programs were reset to Microsoft-specified defaults (such as the Photos app, and Microsoft Edge for PDF viewing), both without warning.
Further issues were discovered upon the launch of the Anniversary Update ("Redstone"), including a bug that caused some devices to freeze (addressed by cumulative update KB3176938, released on August 31, 2016), and fundamental changes to how Windows handles webcams, which caused many to stop working. In June 2017, a Redstone 3 Insider build (RS_EDGE_CASE on PC and rs_IoT on Mobile) was accidentally released to both Insider and non-Insider users on all Windows 10 devices; the update was retracted, with Microsoft apologizing and releasing a note on its Windows Insider Program blog describing how to prevent the build from being installed. According to Dona Sarkar, this was due to "an inadvertent deployment to the engineering system that controls which builds/which rings to push out to insiders." A Gartner analyst felt that Windows 10 Pro was becoming increasingly inappropriate for use in enterprise environments because of support policy changes by Microsoft, including the consumer-oriented upgrade lifecycle length, and the offering of extended support for individual builds only to Enterprise and Education editions of Windows 10. Critics have argued that Microsoft's update and testing practices were affecting the overall quality of Windows 10. In particular, it was pointed out that Microsoft's internal testing departments had been prominently affected by a major round of layoffs undertaken by the company in 2014. Microsoft relies primarily on user testing and bug reports via the Windows Insider program (which may not always be of sufficient quality to identify a bug), as well as correspondence with OEMs and other stakeholders. In the wake of the known folder redirection data loss bug in version 1809, it was pointed out that bug reports describing the issue had been present on the Feedback Hub app for several months prior to the public release. Following the incident, Microsoft updated Feedback Hub so that users may specify the severity of a particular bug report. When announcing the resumption of 1809's rollout, Microsoft stated that it planned to be more transparent in its handling of update quality in the future, through a series of blog posts detailing its testing process and the planned development of a "dashboard" indicating the rollout progress of future updates.

Distribution practices
Microsoft was criticized for the tactics it used to promote its free upgrade campaign for Windows 10, including adware-like behaviors, using dark patterns to coax users into installing the operating system, downloading installation files without user consent, and making it difficult for users to suppress the advertising and notifications if they did not wish to upgrade. The upgrade offer was marketed and initiated using the "Get Windows 10" (GWX) application, which was first downloaded and installed via Windows Update in March 2015. Third-party programs were created to assist users in applying measures to disable GWX. In September 2015, it was reported that Microsoft was triggering automatic downloads of Windows 10 installation files on all compatible Windows 7 or 8.1 systems configured to automatically download and install updates, regardless of whether or not they had specifically requested the upgrade. Microsoft officially confirmed the change, claiming it was "an industry practice that reduces the time for installation and ensures device readiness."
This move was criticized by users with data caps or devices with low storage capacity, as resources were consumed by the automatic download of up to 6 GB of data. Other critics argued that Microsoft should not have triggered any downloading of Windows 10 installation files without user consent. In October 2015, Windows 10 began to appear as an "Optional" update in the Windows Update interface, but pre-selected for installation on some systems. A Microsoft spokesperson said that this was a mistake, and that the download would no longer be pre-selected by default. However, on October 29, 2015, Microsoft announced that it planned to classify Windows 10 as a "Recommended" update in the Windows Update interface sometime in 2016, which would cause an automatic download of installation files and a one-time prompt with a choice to install to appear. In December 2015, it was reported that a new advertising dialog had begun to appear, containing only "Upgrade now" and "Upgrade tonight" buttons and no obvious method to decline installation besides the close button. In March 2016, some users also alleged that their Windows 7 and 8.1 devices had automatically begun upgrading to Windows 10 without their consent. In June 2016, the GWX dialog's behavior changed to make closing the window imply consent to a scheduled upgrade. In December 2016, Microsoft's chief marketing officer Chris Capossela admitted that the company had "gone too far" in using these tactics, stating that "we know we want people to be running Windows 10 from a security perspective, but finding the right balance where you're not stepping over the line of being too aggressive is something we tried and for a lot of the year I think we got it right." On January 21, 2016, Microsoft was sued in small claims court by a user whose computer had attempted to upgrade to Windows 10 without her consent shortly after the release of the operating system. The upgrade failed, and her computer was left in a broken state, which disrupted the ability to run her travel agency. The court ruled in favor of the user and awarded her $10,000 in damages, but Microsoft appealed. However, in May 2016, Microsoft dropped the appeal and chose to pay the damages. Shortly after the suit was reported on by the Seattle Times, Microsoft confirmed it was updating the GWX software once again to add more explicit options for opting out of a free Windows 10 upgrade; the final notification was a full-screen pop-up window notifying users of the impending end of the free upgrade offer, containing "Remind me later", "Do not notify me again", and "Notify me three more times" as options. In March 2019, Microsoft announced that it would display notifications informing users on Windows 7 devices of the upcoming end of extended support for the platform, and direct users to a website urging them to upgrade to Windows 10 or purchase new hardware. This dialog was similar to previous Windows 10 upgrade prompts, but did not explicitly mention Windows 10.

Privacy and data collection
Privacy advocates and other critics have expressed concern regarding Windows 10's privacy policies and its collection and use of customer data.
Under the default "Express" settings, Windows 10 is configured to send various information to Microsoft and other parties, including the collection of user contacts, calendar data, and "associated input data" to personalize "speech, typing, and inking input", typing and inking data to improve recognition, allowing apps to use a unique "advertising ID" for analytics and advertising personalization (functionality introduced by Windows 8.1), and allowing apps to request the user's location data and send this data to Microsoft and "trusted partners" to improve location detection (Windows 8 had similar settings, except that location data collection did not include "trusted partners"). Users can opt out of most of this data collection, but telemetry data for error reporting and usage is also sent to Microsoft, and this cannot be disabled on non-Enterprise editions of Windows 10. Microsoft's privacy policy states, however, that "Basic"-level telemetry data is anonymized and cannot be used to identify an individual user or device. The use of Cortana also requires the collection of data "such as Your PC location, data from your calendar, the apps you use, data from your emails and text messages, who you call, your contacts and how often you interact with them on Your PC" to personalize its functionality. Rock Paper Shotgun writer Alec Meer argued that Microsoft's intent for this data collection lacked transparency, stating that "there is no world in which 45 pages of policy documents and opt-out settings split across 13 different settings screens and an external website constitutes 'real transparency'." Joel Hruska of ExtremeTech wrote that "the company that brought us the 'Scroogled' campaign now hoovers up your data in ways that would make Google jealous." However, it was also pointed out that the requirement for such vast usage of customer data had become a norm, citing the increased reliance on cloud computing and other forms of external processing, as well as similar data collection requirements for services on mobile devices such as Google Now and Siri. In August 2015, Russian politician Nikolai Levichev called for Windows 10 to be banned from use within the Russian government, as it sends user data to servers in the United States. The Russian government had passed a federal law requiring all online services to store the data of Russian users on servers within the country by September 2016 or be blocked. Writing for ZDNet, Ed Bott said that the lack of complaints by businesses about privacy in Windows 10 indicated "how utterly normal those privacy terms are in 2015." In a Computerworld editorial, Preston Gralla said that "the kind of information Windows 10 gathers is no different from what other operating systems gather. But Microsoft is held to a different standard than other companies". The Microsoft Services agreement states that the company's online services may automatically "download software updates or configuration changes, including those that prevent you from accessing the Services, playing counterfeit games, or using unauthorized hardware peripheral devices." Critics interpreted this statement as implying that Microsoft would scan for and delete unlicensed software installed on devices running Windows 10.
However, others pointed out that this agreement was specifically for Microsoft online services such as Microsoft account, Office 365, Skype, and Xbox Live, and that the offending passage most likely referred to digital rights management on Xbox consoles and first-party games, and not plans to police pirated video games installed on Windows 10 PCs. Despite this, some torrent trackers announced plans to block Windows 10 users, also arguing that the operating system could send information to anti-piracy groups that are affiliated with Microsoft. Writing about these allegations, Ed Bott of ZDNet compared Microsoft's privacy policy to Apple's and Google's and concluded that he "[did not] see anything that looks remotely like Big Brother." Columnist Kim Komando argued that "Microsoft might in the future run scans and disable software or hardware it sees as a security threat", consistent with the Windows 10 update policy. In September 2019, Microsoft hid the option to create a local account during a fresh installation if a PC is connected to the internet. This move was criticized by users who did not want to use an online Microsoft account. Additionally, in Windows 10 Home, the first Microsoft account linked to the primary user's account can no longer be unlinked, but other users can unlink their own Microsoft accounts from their user accounts. In late July 2020, Windows Defender began to classify modifications of the hosts file that block Microsoft telemetry servers as being a severe security risk.
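The hosts file is a plain text mapping of hostnames to IP addresses that is consulted before DNS, so redirecting a hostname to an unroutable address such as 0.0.0.0 effectively blocks connections to it. A minimal sketch of the kind of entry involved follows; the hostnames are illustrative placeholders, not a verified list of Microsoft telemetry endpoints:

```
# Illustrative hosts-file entries of the kind flagged as a risk.
# Hostnames are placeholders, not verified telemetry endpoints.
0.0.0.0 telemetry.example.invalid
0.0.0.0 diagnostics.example.invalid
```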
Technology
Operating Systems
null
42551986
https://en.wikipedia.org/wiki/WISE%200855%E2%88%920714
WISE 0855−0714
WISE 0855−0714 (full designation WISE J085510.83−071442.5, or W0855 for short) is a sub-brown dwarf of spectral class Y4, located from the Sun in the constellation Hydra. It is the fourth-closest star or (sub-) brown dwarf system to the Sun and was discovered by Kevin Luhman in 2013 using data from the Wide-field Infrared Survey Explorer (WISE). It is the coldest brown dwarf found in interstellar space, having a temperature of about . It has an estimated mass of 3–10 Jupiter masses, which makes it a planetary-mass object below the 13 Jupiter mass limit for deuterium fusion. Characterization Observations WISE 0855−0714 was first imaged by the WISE telescope on 4 May 2010 during its primary mission of surveying the entire sky. It was later discovered in March 2013 by Kevin Luhman, who noticed the object's unusually high proper motion while searching for potential binary companions of the Sun in WISE images. In the interest of confirming the object's spectral properties and nearby distance to the Sun, Luhman made follow-up observations with the Spitzer Space Telescope and the Gemini North telescope in 2013–2014. The discovery of the object was announced in a NASA press release in April 2014. Since WISE 0855−0714 is an isolated object, its luminosity comes primarily from thermal radiation. WISE 0855−0714's temperature is low enough to roughly match room temperature, which means its luminosity is very low and it emits primarily infrared light as thermal radiation. Hence, it is best observed with infrared telescopes such as WISE and the James Webb Space Telescope (JWST). WISE 0855−0714 has been detected in spectral wavelengths as short as —in this near-infrared wavelength, the object appears extremely dim, with an apparent magnitude of 26.3. WISE 0855−0714's brightness decreases with decreasing wavelength, so the object is practically invisible in visible light. Distance and proper motion Based on direct observations, WISE 0855−0714 has a large parallax of , which corresponds to a distance of around (). This makes WISE 0855−0714 the fourth-closest star or (sub-) brown dwarf system to the Sun. WISE 0855−0714 also has an exceptionally high proper motion of , the third-highest after Barnard's Star () and Kapteyn's Star (). Spectrometry Its luminosity in different bands of the thermal infrared in combination with its absolute magnitude—because of its known distance—was used to place it in the context of different models; the best characterization of its brightness was in the W2 band of at an apparent magnitude of , though it was brighter into the deeper infrared. Infrared images taken with the Magellan Baade Telescope suggest evidence of sulfide clouds below water ice clouds. Near- and mid-infrared spectra in the L- and M-band were taken with the GNIRS instrument on the Gemini North Telescope. The M-band (4.5–5.1 μm) spectrum is dominated by water vapour (H2O) absorption. The L-band (3.4–4.14 μm) spectrum is dominated by methane absorption. Surprisingly, neither the M- nor the L-band spectrum shows any detection of phosphine (PH3), which appears in the atmosphere of Jupiter. The M-band spectrum shows evidence for water ice clouds, and in near-infrared photometry WISE 0855 is faint compared to models, suggesting an additional absorber, probably clouds made of ammonium dihydrogen phosphate (NH4)(H2PO4), which lie below the water ice clouds. 
An approved JWST proposal describes how the team is planning to use a near-infrared time-series to study the hydrological cycle in the atmosphere of WISE 0855 with NIRSpec. Observations with NIRSpec detected methane (CH4), water vapor (H2O), ammonia (NH3) and carbon monoxide (CO) in the atmosphere, but were not able to confirm any phosphine (PH3) or carbon dioxide (CO2). Water ice clouds were also not confirmed, and the spectrum is well matched by a cloudless model. Observations with MIRI showed a water vapor depletion and a water abundance that is variable with pressure. This is consistent with water condensing out in the upper atmosphere. The observations did not, however, detect any water ice clouds, which had been predicted in previous studies. This discrepancy is explained by the rainout of the water: water condenses into particles in the upper atmosphere, which quickly sink into the lower atmosphere. Clouds only form if upward mixing is present. A similar process is present for alkali metals in L- and T-dwarfs. A direct rainout would suggest weak mixing, but disequilibrium chemistry suggests vigorous mixing. Future variability studies might resolve whether upward mixing or settling is the dominant process. Cloud models, however, potentially detected deep ammonium dihydrogen phosphate (NH4)(H2PO4) clouds. The observations also detected 15NH3 for the first time in WISE 0855. The atmosphere has a ratio of 14NH3/15NH3 = , meaning it has about 99.7% 14N and about 0.3% 15N. Compared to solar values and the ratio of WISE 1828, the atmosphere of WISE 0855 is enriched in 15N. The nitrogen isotope ratio is closer to today's 15N-enriched interstellar medium. This could mean that WISE 0855 formed from a younger cloud, but more measurements of 15N in other brown dwarfs are needed to establish evolutionary trends. In November 2024, a team used archived and new NIRSpec data to detect deuterated methane (CH3D) and about one part per billion of PH3 in WISE 0855. This detection of deuterium showed that WISE 0855 has a mass below the deuterium-burning limit. The low amount of PH3, on the other hand, is in disagreement with predictions, showing that knowledge of phosphorus chemistry is incomplete. Variability Variability of WISE 0855 in the infrared was measured with Spitzer IRAC. A relatively small amplitude of 4–5% was measured. Water ice cloud models predicted a large amplitude. This small amplitude might suggest that the hemispheres of WISE 0855 have very small deviations in cloud coverage. The light curve is too irregular to produce a good fit; rotation periods between 9.7 and 14 hours were measured. Physical parameters Neither the mass nor the age of WISE 0855−0714 is known with certainty, but both can be constrained with its known present-day temperature. The age of WISE 0855−0714 depends on its mass; a lower mass would lead to a faster rate of cooling and thus a younger age, whereas a higher mass would lead to a slower rate of cooling and thus an older age. Assuming an age range of 1–10 billion years, evolutionary models for brown dwarfs predict that WISE 0855−0714 should have a mass between . This mass is in the range of a sub-brown dwarf or planetary-mass object. As of 2003, the International Astronomical Union considers an object with a mass above , capable of fusing deuterium, to be a brown dwarf. A lighter object, or one orbiting another object, is considered a planet. 
However, if the distinction is based on how the object formed, then it might be considered a failed star, a theory advanced for the object Cha 110913-773444. Combining its luminosity, distance, and mass, it is estimated to be the coldest known brown dwarf, with a modeled effective temperature of , depending on the model. Models matching the NIRSpec spectrum are well fitted with a temperature of .
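The isotope percentages quoted above follow from simple arithmetic on the number ratio: for R = 14NH3/15NH3, the 15N fraction is 1/(1 + R). A minimal Python sketch follows; the ratio below is a placeholder chosen to reproduce the quoted ~99.7%/0.3% split, not the published measurement, which is elided above:

```python
# Convert a nitrogen isotope number ratio R = 14N/15N into fractional
# abundances. R_EXAMPLE is a placeholder consistent with the ~0.3% 15N
# quoted in the text, not the published value.
R_EXAMPLE = 330.0

frac_15N = 1.0 / (1.0 + R_EXAMPLE)   # fraction of nitrogen that is 15N
frac_14N = 1.0 - frac_15N            # fraction that is 14N

print(f"14N: {frac_14N:.1%}, 15N: {frac_15N:.1%}")  # 14N: 99.7%, 15N: 0.3%
```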
Physical sciences
Notable stars
Astronomy
39778851
https://en.wikipedia.org/wiki/Outcome%20%28probability%29
Outcome (probability)
In probability theory, an outcome is a possible result of an experiment or trial. Each possible outcome of a particular experiment is unique, and different outcomes are mutually exclusive (only one outcome will occur on each trial of the experiment). All of the possible outcomes of an experiment form the elements of a sample space. For the experiment where we flip a coin twice, the four possible outcomes that make up our sample space are (H, T), (T, H), (T, T) and (H, H), where "H" represents a "heads", and "T" represents a "tails". Outcomes should not be confused with events, which are sets (or informally, "groups") of outcomes. For example, we could define an event to occur when "at least one 'heads'" is flipped in the experiment, that is, when the outcome contains at least one 'heads'. This event would contain all outcomes in the sample space except the element (T, T). Sets of outcomes: events Since individual outcomes may be of little practical interest, or because there may be prohibitively (even infinitely) many of them, outcomes are grouped into sets of outcomes that satisfy some condition, which are called "events." The collection of all such events is a sigma-algebra. An event containing exactly one outcome is called an elementary event. The event that contains all possible outcomes of an experiment is its sample space. A single outcome can be a part of many different events. Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite (most notably when the outcome must be some real number). So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events. Probability of an outcome Outcomes may occur with probabilities that are between zero and one (inclusive). In a discrete probability distribution whose sample space is finite, each outcome is assigned a particular probability. In contrast, in a continuous distribution, individual outcomes all have zero probability, and non-zero probabilities can only be assigned to ranges of outcomes. Some "mixed" distributions contain both stretches of continuous outcomes and some discrete outcomes; the discrete outcomes in such distributions can be called atoms and can have non-zero probabilities. Under the measure-theoretic definition of a probability space, the probability of an outcome need not even be defined. In particular, the set of events on which probability is defined may be some σ-algebra on the sample space and not necessarily the full power set. Equally likely outcomes In some sample spaces, it is reasonable to estimate or assume that all outcomes in the space are equally likely (that they occur with equal probability). For example, when tossing an ordinary coin, one typically assumes that the outcomes "head" and "tail" are equally likely to occur. An implicit assumption that all outcomes are equally likely underpins most randomization tools used in common games of chance (e.g. rolling dice, shuffling cards, spinning tops or wheels, drawing lots, etc.). Of course, players in such games can try to cheat by subtly introducing systematic deviations from equal likelihood (for example, with marked cards, loaded or shaved dice, and other methods). Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely. 
However, there are experiments that are not easily described by a set of equally likely outcomes; for example, if one were to toss a thumbtack many times and observe whether it landed with its point upward or downward, there is no symmetry to suggest that the two outcomes should be equally likely.
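A minimal sketch of these definitions in Python, using the two-coin-flip experiment described above (outcomes as tuples, events as sets of outcomes):

```python
from itertools import product

# Sample space of the two-coin-flip experiment: each outcome is a tuple,
# e.g. ('H', 'T') for heads then tails.
sample_space = set(product("HT", repeat=2))

# The event "at least one heads" is the set of outcomes containing 'H'.
at_least_one_heads = {outcome for outcome in sample_space if "H" in outcome}
assert at_least_one_heads == sample_space - {("T", "T")}

# With equally likely outcomes, P(event) = |event| / |sample space|.
p = len(at_least_one_heads) / len(sample_space)
print(p)  # 0.75
```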
Mathematics
Probability
null
39783039
https://en.wikipedia.org/wiki/Function%20of%20several%20real%20variables
Function of several real variables
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article. The domain of a function of variables is the subset of for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of . General definition A real-valued function of real variables is a function that takes as input real numbers, commonly represented by the variables , for producing another real number, the value of the function, commonly denoted . For simplicity, in this article a real-valued function of several real variables will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified. Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the values of the variables are taken in a subset of , the domain of the function, which is always supposed to contain an open subset of . In other words, a real-valued function of real variables is a function such that its domain is a subset of that contains a nonempty open set. An element of being an -tuple (usually delimited by parentheses), the general notation for denoting functions would be . The common usage, much older than the general definition of functions between sets, is to not use double parentheses and to simply write . It is also common to abbreviate the -tuple by using a notation similar to that for vectors, like boldface , underline , or overarrow . This article will use bold. A simple example of a function in two variables could be: which is the volume of a cone with base area and height measured perpendicularly from the base. The domain restricts all variables to be positive since lengths and areas must be positive. For an example of a function in two variables: where and are real non-zero constants. Using the three-dimensional Cartesian coordinate system, where the xy plane is the domain and the z axis is the codomain , one can visualize the image to be a two-dimensional plane, with a slope of in the positive x direction and a slope of in the positive y direction. The function is well-defined at all points in . The previous example can be extended easily to higher dimensions: for non-zero real constants , which describes a -dimensional hyperplane. The Euclidean norm: is also a function of n variables which is everywhere defined, while is defined only for . For a non-linear example function in two variables: which takes in all points in , a disk of radius "punctured" at the origin in the plane , and returns a point in . The domain of the function does not include the origin ; if it did, then would be ill-defined at that point. Using a 3D Cartesian coordinate system with the xy-plane as the domain , and the z axis the codomain , the image can be visualized as a curved surface. 
The function can be evaluated at the point in : However, the function could not be evaluated at, say, since these values of and do not satisfy the domain's rule. Image The image of a function is the set of all values of when the -tuple runs in the whole domain of . For a continuous (see below for a definition) real-valued function which has a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function. The preimage of a given real number is called a level set. It is the set of the solutions of the equation . Domain The domain of a function of several real variables is a subset of that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain of a function to a subset , one gets formally a different function, the restriction of to , which is denoted . In practice, it is often (but not always) not harmful to identify and , and to omit the restrictor . Conversely, it is sometimes possible to naturally enlarge the domain of a given function, for example by continuity or by analytic continuation. Moreover, many functions are defined in such a way that it is difficult to specify explicitly their domain. For example, given a function , it may be difficult to specify the domain of the function If is a multivariate polynomial (which has as a domain), it is even difficult to test whether the domain of is also . This is equivalent to testing whether a polynomial is always positive, and is the object of an active research area (see Positive polynomial). Algebraic structure The usual operations of arithmetic on the reals may be extended to real-valued functions of several real variables in the following way: For every real number , the constant function is everywhere defined. For every real number and every function , the function: has the same domain as (or is everywhere defined if ). If and are two functions of respective domains and such that contains a nonempty open subset of , then and are functions that have a domain containing . It follows that the functions of variables that are everywhere defined and the functions of variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals (-algebras). This is a prototypical example of a function space. One may similarly define which is a function only if the set of the points in the domain of such that contains an open subset of . This constraint implies that the above two algebras are not fields. Univariable functions associated with a multivariable function One can easily obtain a function in one real variable by giving a constant value to all but one of the variables. For example, if is a point of the interior of the domain of the function , we can fix the values of to respectively, to get a univariable function whose domain contains an interval centered at . This function may also be viewed as the restriction of the function to the line defined by the equations for . Other univariable functions may be defined by restricting to any line passing through . These are the functions where the are real numbers that are not all zero. In the next section, we will show that, if the multivariable function is continuous, so are all these univariable functions, but the converse is not necessarily true. Continuity and limit Until the second half of the 19th century, only continuous functions were considered by mathematicians. 
At that time, the notion of continuity was elaborated for the functions of one or several real variables a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of several real variables are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces. To define continuity, it is useful to consider the distance function of , which is an everywhere defined function of real variables: A function is continuous at a point which is interior to its domain, if, for every positive real number , there is a positive real number such that for all such that . In other words, may be chosen small enough that the image by of the ball of radius centered at is contained in the interval of length centered at . A function is continuous if it is continuous at every point of its domain. If a function is continuous at , then all the univariate functions that are obtained by fixing all the variables except one at the value , are continuous at . The converse is false; this means that all these univariate functions may be continuous for a function that is not continuous at . For example, consider the function such that , and is otherwise defined by The functions and are both constant and equal to zero, and are therefore continuous. The function is not continuous at , because, if and , we have , even if is very small. Although not continuous, this function has the further property that all the univariate functions obtained by restricting it to a line passing through are also continuous. In fact, we have for . (A standard function with these properties is written out below, after the discussion of function composition.) The limit at a point of a real-valued function of several real variables is defined as follows. Let be a point in the topological closure of the domain of the function . The function has a limit when tends toward , denoted , if the following condition is satisfied: For every positive real number , there is a positive real number such that for all in the domain such that If the limit exists, it is unique. If is in the interior of the domain, the limit exists if and only if the function is continuous at . In this case, we have When is in the boundary of the domain of , and if has a limit at , the latter formula allows one to "extend by continuity" the domain of to . Symmetry A symmetric function is a function that is unchanged when two variables and are interchanged: where and are each one of . For example: is symmetric in since interchanging any pair of leaves unchanged, but is not symmetric in all of , since interchanging with or or gives a different function. Function composition Suppose the functions or more compactly , are all defined on a domain . As the -tuple varies in , a subset of , the -tuple varies in another region, a subset of . To restate this: Then, a function of the functions defined on , is a function composition defined on , in other terms, the mapping Note the numbers and do not need to be equal. For example, the function defined everywhere on can be rewritten by introducing which is also everywhere defined in to obtain Function composition can be used to simplify functions, which is useful for carrying out multiple integrals and solving partial differential equations. 
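As referenced above, the displayed formulas for the continuity counterexample did not survive in the text; the following is a sketch of a standard function with exactly the stated properties, offered as an illustration that may differ from the one originally intended:

```latex
% A classic function that is continuous along every line through the
% origin yet discontinuous at the origin (illustrative stand-in):
f(x,y) =
  \begin{cases}
    \dfrac{x^2 y}{x^4 + y^2} & \text{if } (x,y) \neq (0,0),\\[4pt]
    0                        & \text{if } (x,y) = (0,0).
  \end{cases}
```

Along any line through the origin, f(x, λx) = λx/(x² + λ²) → 0 as x → 0, so each restriction to a line is continuous; yet f(x, x²) = 1/2 for every x ≠ 0, so f is not continuous at (0, 0).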
Calculus Elementary calculus is the calculus of real-valued functions of one real variable, and the principal ideas of differentiation and integration of such functions can be extended to functions of more than one real variable; this extension is multivariable calculus. Partial derivatives Partial derivatives can be defined with respect to each variable: Partial derivatives themselves are functions, each of which represents the rate of change of parallel to one of the axes at all points in the domain (if the derivatives exist and are continuous—see also below). A first derivative is positive if the function increases along the direction of the relevant axis, negative if it decreases, and zero if there is no increase or decrease. Evaluating a partial derivative at a particular point in the domain gives the rate of change of the function at that point in the direction parallel to a particular axis, a real number. For a real-valued function of a real variable , the ordinary derivative is geometrically the gradient of the tangent line to the curve at all points in the domain. Partial derivatives extend this idea to tangent hyperplanes to the graph of the function. The second order partial derivatives can be calculated for every pair of variables: Geometrically, they are related to the local curvature of the function's image at all points in the domain. At any point where the function is well-defined, the function could be increasing along some axes, and/or decreasing along other axes, and/or not increasing or decreasing at all along other axes. This leads to a variety of possible stationary points: global or local maxima, global or local minima, and saddle points—the multidimensional analogue of inflection points for real functions of one real variable. The Hessian matrix is a matrix of all the second order partial derivatives, which are used to investigate the stationary points of the function, important for mathematical optimization. In general, partial derivatives of higher order have the form: where are each integers between and such that , using the definitions of zeroth partial derivatives as identity operators: The number of possible partial derivatives increases with , although some mixed partial derivatives (those with respect to more than one variable) are superfluous, because of the symmetry of second order partial derivatives. This reduces the number of partial derivatives to calculate for some . Multivariable differentiability A function is differentiable in a neighborhood of a point if there is an -tuple of numbers dependent on in general, , so that: where as . This means that if is differentiable at a point , then is continuous at , although the converse is not true: continuity in the domain does not imply differentiability in the domain. If is differentiable at then the first order partial derivatives exist at and: for , which can be found from the definitions of the individual partial derivatives, so the partial derivatives of exist. Assuming an -dimensional analogue of a rectangular Cartesian coordinate system, these partial derivatives can be used to form a vectorial linear differential operator, called the gradient (also known as "nabla" or "del") in this coordinate system: used extensively in vector calculus, because it is useful for constructing other differential operators and compactly formulating theorems in vector calculus. Then substituting the gradient (evaluated at ) with a slight rearrangement gives: where denotes the dot product. 
This equation represents the best linear approximation of the function at all points within a neighborhood of . For infinitesimal changes in and as : which is defined as the total differential, or simply differential, of , at . This expression corresponds to the total infinitesimal change of , by adding all the infinitesimal changes of in all the directions. Also, can be construed as a covector with basis vectors as the infinitesimals in each direction and partial derivatives of as the components. Geometrically, is perpendicular to the level sets of , given by which for some constant describes an -dimensional hypersurface. The differential of a constant is zero: in which is an infinitesimal change in in the hypersurface , and since the dot product of and is zero, this means is perpendicular to . In arbitrary curvilinear coordinate systems in dimensions, the explicit expression for the gradient would not be so simple: there would be scale factors in terms of the metric tensor for that coordinate system. For the above case used throughout this article, the metric is just the Kronecker delta and the scale factors are all 1. Differentiability classes If all first order partial derivatives evaluated at a point in the domain: exist and are continuous for all in the domain, has differentiability class . In general, if all order partial derivatives evaluated at a point : exist and are continuous, where , and are as above, for all in the domain, then is differentiable to order throughout the domain and has differentiability class . If is of differentiability class , has continuous partial derivatives of all orders and is called smooth. If is an analytic function and equals its Taylor series about any point in the domain, the notation denotes this differentiability class. Multiple integration Definite integration can be extended to multiple integration over the several real variables with the notation: where each region is a subset of or all of the real line: and their Cartesian product gives the region to integrate over as a single set: an -dimensional hypervolume. When evaluated, a definite integral is a real number if the integral converges in the region of integration (the result of a definite integral may diverge to infinity for a given region; in such cases the integral remains ill-defined). The variables are treated as "dummy" or "bound" variables which are substituted for numbers in the process of integration. The integral of a real-valued function of a real variable with respect to has geometric interpretation as the area bounded by the curve and the -axis. Multiple integrals extend the dimensionality of this concept: assuming an -dimensional analogue of a rectangular Cartesian coordinate system, the above definite integral has the geometric interpretation as the -dimensional hypervolume bounded by and the axes, which may be positive, negative, or zero, depending on the function being integrated (if the integral is convergent). While bounded hypervolume is a useful insight, the more important idea of definite integrals is that they represent total quantities within space. This has significance in applied mathematics and physics: if is some scalar density field and are the position vector coordinates, i.e. some scalar quantity per unit n-dimensional hypervolume, then integrating over the region gives the total amount of quantity in . The more formal notion of hypervolume is the subject of measure theory. Above we used the Lebesgue measure; see Lebesgue integration for more on this topic. 
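A minimal numerical sketch of first-order partial derivatives and the gradient discussed above, using central finite differences in Python (the sample function and step size are arbitrary choices):

```python
import math

def partial(f, x, i, h=1e-6):
    """Central-difference estimate of the i-th partial derivative of f at x."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def gradient(f, x, h=1e-6):
    """Numerical gradient: the vector of all first-order partials."""
    return [partial(f, x, i, h) for i in range(len(x))]

# Sample function of three real variables (an arbitrary choice):
# f(x, y, z) = x^2 * y + sin(z).
f = lambda v: v[0] ** 2 * v[1] + math.sin(v[2])

print(gradient(f, [1.0, 2.0, 0.0]))  # approximately [4.0, 1.0, 1.0]
```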
Theorems With the definitions of multiple integration and partial derivatives, key theorems can be formulated, including the fundamental theorem of calculus in several real variables (namely Stokes' theorem), integration by parts in several real variables, the symmetry of higher partial derivatives and Taylor's theorem for multivariable functions. Evaluating a mixture of integrals and partial derivatives can be done by using the theorem of differentiation under the integral sign. Vector calculus One can collect a number of functions each of several real variables, say into an -tuple, or sometimes as a column vector or row vector, respectively: all treated on the same footing as an -component vector field, and use whichever form is convenient. All the above notations have a common compact notation . The calculus of such vector fields is vector calculus. For more on the treatment of row vectors and column vectors of multivariable functions, see matrix calculus. Implicit functions A real-valued implicit function of several real variables is not written in the form "". Instead, the mapping is from the space to the zero element in (just the ordinary zero 0): is an equation in all the variables. Implicit functions are a more general way to represent functions, since if: then we can always define: but the converse is not always possible, i.e. not all implicit functions have an explicit form. For example, using interval notation, let Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin with constant semi-major axes , along the positive x, y and z axes respectively. In the case , we have a sphere of radius centered at the origin. Other quadric surface examples which can be described similarly include the hyperboloid and paraboloid; more generally, so can any 2D surface in 3D Euclidean space. The above example can be solved for , or ; however, it is much tidier to write it in an implicit form. For a more sophisticated example: for non-zero real constants , this function is well-defined for all , but it cannot be solved explicitly for these variables and written as "", "", etc. The implicit function theorem of more than two real variables deals with the continuity and differentiability of the function, as follows. Let be a continuous function with continuous first order partial derivatives, and let ϕ evaluated at a point be zero: and let the first partial derivative of with respect to evaluated at be non-zero: Then, there is an interval containing , and a region containing , such that for every in there is exactly one value of in satisfying , and is a continuous function of so that . The total differentials of the functions are: Substituting into the latter differential and equating coefficients of the differentials gives the first order partial derivatives of with respect to in terms of the derivatives of the original function, each as a solution of the linear equation for . Complex-valued function of several real variables A complex-valued function of several real variables may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values. If is such a complex-valued function, it may be decomposed as where and are real-valued functions. In other words, the study of complex-valued functions reduces easily to the study of pairs of real-valued functions. This reduction works for the general properties. 
However, for an explicitly given function, such as: the computation of the real and the imaginary part may be difficult. Applications Multivariable functions of real variables arise inevitably in engineering and physics, because observable physical quantities are real numbers (with associated units and dimensions), and any one physical quantity will generally depend on a number of other quantities. Examples of real-valued functions of several real variables Examples in continuum mechanics include the local mass density of a mass distribution, a scalar field which depends on the spatial position coordinates (here Cartesian to exemplify), , and time : Similarly for electric charge density for electrically charged objects, and numerous other scalar potential fields. Another example is the velocity field, a vector field, which has components of velocity that are each multivariable functions of spatial coordinates and time similarly: Similarly for other physical vector fields such as electric fields and magnetic fields, and vector potential fields. Another important example is the equation of state in thermodynamics, an equation relating pressure , temperature , and volume of a fluid; in general, it has an implicit form: The simplest example is the ideal gas law: where is the number of moles, constant for a fixed amount of substance, and the gas constant. Much more complicated equations of state have been empirically derived, but they all have the above implicit form. Real-valued functions of several real variables appear pervasively in economics. In the underpinnings of consumer theory, utility is expressed as a function of the amounts of various goods consumed, each amount being an argument of the utility function. The result of maximizing utility is a set of demand functions, each expressing the amount demanded of a particular good as a function of the prices of the various goods and of income or wealth. In producer theory, a firm is usually assumed to maximize profit as a function of the quantities of various goods produced and of the quantities of various factors of production employed. The result of the optimization is a set of demand functions for the various factors of production and a set of supply functions for the various products; each of these functions has as its arguments the prices of the goods and of the factors of production. Examples of complex-valued functions of several real variables Some "physical quantities" may actually be complex-valued, such as complex impedance, complex permittivity, complex permeability, and complex refractive index. These are also functions of real variables, such as frequency or time, as well as temperature. In two-dimensional fluid mechanics, specifically in the theory of the potential flows used to describe fluid motion in 2D, the complex potential is a complex-valued function of the two spatial coordinates and , and other real variables associated with the system. The real part is the velocity potential and the imaginary part is the stream function. The spherical harmonics occur in physics and engineering as solutions to Laplace's equation, as well as eigenfunctions of the z-component angular momentum operator, which are complex-valued functions of real-valued spherical polar angles: In quantum mechanics, the wavefunction is necessarily complex-valued, but is a function of real spatial coordinates (or momentum components), as well as time : where each is related by a Fourier transform.
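A small numerical sketch in Python of the decomposition described above, splitting a complex-valued function of real variables into real-valued parts f = g + ih (the sample function is an arbitrary choice):

```python
import cmath

# An arbitrary complex-valued function of two real variables.
def f(x, y):
    return cmath.exp(1j * (x + y)) * (x - 1j * y)

# Its real and imaginary parts are real-valued functions g and h,
# so that f = g + i*h.
def g(x, y):
    return f(x, y).real

def h(x, y):
    return f(x, y).imag

print(g(1.0, 2.0), h(1.0, 2.0))
```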
Mathematics
Functions: General
null
41140351
https://en.wikipedia.org/wiki/Carolina%20Reaper
Carolina Reaper
The Carolina Reaper chili pepper is a cultivar of the Capsicum chinense plant. Developed by American breeder Ed Currie, the pepper is red and gnarled, with a bumpy texture and small pointed tail. It was the hottest chili pepper in the world according to Guinness World Records from 2013 to 2023, before it was surpassed by Pepper X, which was also developed by Currie. Development Currie, an American breeder, began working in about 2001 on what would become the Carolina Reaper. It took over 10 years to develop. Sorting through hundreds of hybrid combinations, Currie was finally successful at crossing a "really nastily hot" La Soufrière (Saint Vincent) habanero pepper from the Caribbean island of Saint Vincent and a Naga pepper/ghost pepper (locally known as bhut jolokia) from Assam. During November of that year, a reporter from NPR visited Currie to try the new pepper. According to Currie's website: "The reporter ate a small piece of the pepper, rolled around on the floor, hallucinated, and then shared his experiences with the national media." Currie officially named the pepper "Smokin' Ed's Carolina Reaper". The word "reaper" was chosen by Currie due to the shape of the pepper's "sickle-like" tail. Details The Carolina Reaper was certified as the world's hottest chili pepper by Guinness World Records on August 11, 2017. Testing was conducted by Winthrop University in South Carolina during the certification process, which showed an average heat level of 1,641,183 SHU for a given batch. Previously, the record for the hottest pepper had been held by the scorpion pepper, which measured 1,463,700 SHU in comparison. It was later claimed through media outlets such as the Associated Press that an individual Carolina Reaper had a heat level of 2.2 million SHU. Currie eventually bred an even hotter pepper, known as "Pepper X", which took the title of "World's Hottest Pepper" on August 23, 2023, after testing indicated an average rating of 2.69 million SHU. Pungency The Reaper has been described as having a fruity taste, with the initial bite being sweet and then immediately turning to "molten lava." The sensory heat or pungency detected when eating a Carolina Reaper derives from the density of capsaicinoids, particularly capsaicin, which relates directly to the intensity of chili pepper heat and Scoville Heat Units (SHU). Cultivation For growing, the pepper has been described as "a good all-rounder to try at home" by James Wong, an English ethnobotanist, who stated that they require growing temperatures of at least . He suggested growing the plants in pots to restrict growth and produce fruit sooner. When fully ripe, two peppers occupy the palm of the hand.
Biology and health sciences
Botanical fruits used as culinary vegetables
Plants
55619793
https://en.wikipedia.org/wiki/%CA%BBOumuamua
ʻOumuamua
Oumuamua is the first interstellar object detected passing through the Solar System. Formally designated 1I/2017 U1, it was discovered by Robert Weryk using the Pan-STARRS telescope at Haleakalā Observatory, Hawaii, on 19 October 2017, approximately 40 days after it passed its closest point to the Sun on 9 September. When it was first observed, it was about from Earth (about 85 times as far away as the Moon) and already heading away from the Sun. Oumuamua is a small object estimated to be between long, with its width and thickness both estimated between . It has a red color, like objects in the outer Solar System. Despite its close approach to the Sun, it showed no signs of having a coma, the usual nebula around comets formed when they pass near the Sun. Further, it exhibited nongravitational acceleration, potentially due to outgassing or a push from solar radiation pressure. It has a rotation rate similar to the Solar System's asteroids, but many valid models permit it to be more elongated than all but a few other natural bodies observed in the Solar System. This feature raised speculation about its origin. Its light curve, assuming little systematic error, presents its motion as "tumbling" rather than "spinning", and moving sufficiently fast relative to the Sun that it is likely of extrasolar origin. Extrapolated and without further deceleration, its path cannot be captured into a solar orbit, so it will eventually leave the Solar System and continue into interstellar space. Its planetary system of origin and its age are unknown. Oumuamua is remarkable for its extrasolar origin, high obliqueness, and observed acceleration without an apparent coma. By July 2019, most astronomers concluded that it was a natural object, but its precise characterization is contentious given the limited time window for observation. While an unconsolidated object (rubble pile) would require Oumuamua to be of a density similar to rocky asteroids, a small amount of internal strength similar to icy comets would allow it to have a relatively low density. Proposed explanations of its origin include the remnant of a disintegrated rogue comet, or a piece of an exoplanet rich in nitrogen ice, similar to Pluto. On 22 March 2023, astronomers proposed that the observed acceleration was "due to the release of entrapped molecular hydrogen that formed through energetic processing of an H2O-rich icy body", consistent with 'Oumuamua being an interstellar comet, "originating as a planetesimal relic broadly similar to solar system comets". In January 2022, researchers proposed that a spacecraft launched from Earth as part of Project Lyra could catch up to 'Oumuamua in 26 years for closer studies. Naming As the first known object of its type, Oumuamua presented a unique case for the International Astronomical Union, which assigns designations for astronomical objects. Originally classified as comet C/2017 U1, it was later reclassified as asteroid A/2017 U1 due to the absence of a coma. Once it was unambiguously identified as coming from outside the Solar System, a new designation was created: I, for Interstellar object. As the first object so identified, Oumuamua was designated 1I, with rules for the eligibility of objects for I-numbers and the names to be assigned to these interstellar objects yet to be codified. The object may be called 1I; 1I/2017 U1; 1I/Oumuamua; or 1I/2017 U1 (Oumuamua). The name comes (), and reflects the way the object is like a scout or messenger sent from the distant past to reach out to humanity. 
It roughly translates to 'first distant messenger'. The first character (not a diacritic) is a Hawaiian okina, not an apostrophe, and is pronounced as a glottal stop; the Pan-STARRS team chose the name in consultation with Ka'iu Kimura and Larry Kimura of the University of Hawaii at Hilo. Before the official name was decided, Rama was suggested, the name given to an alien spacecraft discovered under similar circumstances in the 1973 science fiction novel Rendezvous with Rama by Arthur C. Clarke. Observations Observations and conclusions concerning Oumuamua's trajectory were primarily obtained with data from the Pan-STARRS1 Telescope, part of the Spaceguard Survey, and the Canada–France–Hawaii Telescope (CFHT), and its composition and shape from the Very Large Telescope and the Gemini South telescope in Chile, and the Keck II telescope in Hawaii. These were collected by Karen J. Meech, Robert Weryk, and their colleagues and published in Nature on 20 November 2017. After the announcement, the space-based telescopes Hubble and Spitzer joined in the observations. Oumuamua is small and not very luminous. It was not seen in STEREO HI-1A observations near its perihelion on 9 September 2017, limiting its brightness to approximately 13.5 mag. By the end of October, it had already faded to about apparent magnitude 23, and in mid-December 2017, it was too faint and fast-moving to be studied by even the largest ground-based telescopes. Oumuamua was compared to the fictional alien spacecraft Rama due to its interstellar origin. Adding to the coincidence, both the real and the fictional objects are unusually elongated. Oumuamua has a reddish hue and unsteady brightness, which are typical of asteroids. The SETI Institute's radio telescope, the Allen Telescope Array, examined Oumuamua, but detected no unusual radio emissions. More detailed observations, using the Breakthrough Listen hardware and the Green Bank Telescope, were performed; the data were searched for narrowband signals and none were found. Given the close proximity to this interstellar object, limits were placed on putative transmitters with the extremely low effective isotropically radiated power of 0.08 watts. Trajectory Oumuamua appears to have come from roughly the direction of Vega in the constellation Lyra. Its incoming direction of motion is 6° from the solar apex (the direction of the Sun's movement relative to local stars), the most likely direction from which objects from outside the Solar System would come. On 26 October, two precovery observations from the Catalina Sky Survey were found dated 14 and 17 October. A two-week observation arc had verified a strongly hyperbolic trajectory. It has a hyperbolic excess velocity (velocity at infinity, ) of , its speed relative to the Sun when in interstellar space. By mid-November, astronomers were certain that it was an interstellar object. Based on observations spanning 80 days, Oumuamua's orbital eccentricity is 1.20, the highest ever observed until 2I/Borisov was discovered in August 2019. An eccentricity exceeding 1.0 means an object exceeds the Sun's escape velocity, is not bound to the Solar System, and may escape to interstellar space. While an eccentricity slightly above 1.0 can be obtained by encounters with planets, as happened with the previous record holder, C/1980 E1, Oumuamua's eccentricity is so high that it could not have been obtained through an encounter with any of the planets in the Solar System. 
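The hyperbolic-orbit arithmetic here can be sketched numerically: for a Sun-centred hyperbolic orbit with eccentricity e and perihelion distance q, the excess velocity is v∞ = sqrt(GM☉(e − 1)/q). A minimal Python sketch, using e = 1.20 from above and ʻOumuamua's approximate published perihelion distance of 0.26 au (a value not given in the text above), reproduces the roughly 26 km/s interstellar speed quoted later:

```python
import math

GM_SUN = 1.327e20   # standard gravitational parameter of the Sun, m^3 s^-2
AU = 1.496e11       # astronomical unit, m

e = 1.20            # orbital eccentricity (from the text)
q = 0.26 * AU       # perihelion distance; approximate published value

# For a hyperbolic orbit, a = q / (e - 1) and v_inf = sqrt(GM / a).
v_inf = math.sqrt(GM_SUN * (e - 1.0) / q)
print(f"v_inf = {v_inf / 1000:.1f} km/s")  # about 26 km/s
```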
Even undiscovered planets in the Solar System cannot account for Oumuamua's trajectory or boost its speed to the observed value. For these reasons, it can only be of interstellar origin. Oumuamua entered the Solar System from north of the plane of the ecliptic. The pull of the Sun's gravity caused it to speed up until it reached its maximum speed of as it passed south of the ecliptic on 6 September, where the Sun's gravity bent its orbit in a sharp turn northward at its closest approach (perihelion) on 9 September at a distance of from the Sun, i.e., about 17% closer than Mercury's closest approach to the Sun. It is now heading away from the Sun toward Pegasus, toward a vanishing point 66° from the direction of its approach. On the outward leg of its journey through the Solar System, Oumuamua passed beyond the orbit of Earth on 14 October with a closest approach distance of approximately from Earth. On 16 October it moved back north of the ecliptic plane and passed beyond the orbit of Mars on 1 November. It passed beyond Jupiter's orbit in May 2018, beyond Saturn's orbit in January 2019, and beyond Neptune's in 2022. As it leaves the Solar System, it will be at approximately right ascension 23h 51m and declination +24°42′, in Pegasus. It will continue to slow down until it reaches a speed of relative to the Sun, the same speed it had before its approach to the Solar System. Non-gravitational acceleration On 27 June 2018, astronomers reported a non-gravitational acceleration in Oumuamua's trajectory, potentially consistent with a push from solar radiation pressure. The resulting change in velocity during the period when it was near its closest approach to the Sun summed to about 17 meters per second. Initial speculation as to the cause of this acceleration pointed to comet-like outgassing, whereby volatile substances inside the object evaporate as the Sun heats its surface. Although no such tail of gases was observed following the object, researchers estimated that enough outgassing may have increased the object's speed without the gases being detectable. A critical reassessment of the outgassing hypothesis argued that, instead of the observed stability of Oumuamua's spin, outgassing would have caused its spin to rapidly change due to its elongated shape, resulting in the object tearing apart. Indications of origin Accounting for Vega's proper motion, it would have taken Oumuamua 600,000 years to reach the Solar System from Vega. But as a nearby star, Vega was not in the same part of the sky at that time. Astronomers calculate that 100 years ago the object was from the Sun and traveling at 26.33 km/s with respect to the Sun. This interstellar speed is very close to the mean motion of material in the Milky Way in the neighborhood of the Sun, also known as the local standard of rest (LSR), and especially close to the mean motion of a relatively close group of red dwarf stars. This velocity profile also indicates an extrasolar origin, but appears to rule out the closest dozen stars. In fact, the closeness of Oumuamua's velocity to the local standard of rest might mean that it has circulated the Milky Way several times and thus may have originated from an entirely different part of the galaxy. It is unknown how long the object has been traveling among the stars. The Solar System is likely the first planetary system Oumuamua has closely encountered since being ejected from its birth star system, potentially several billion years ago. 
It has been speculated that the object may have been ejected from a stellar system in one of the local kinematic associations of young stars (specifically, Carina or Columba) within a range of about 100 parsecs, 45 million years ago. The Carina and Columba associations are now very far in the sky from the Lyra constellation, the direction from which Oumuamua came when it entered the Solar System. Others have speculated that it was ejected from a white dwarf system and that its volatiles were lost when its parent star became a red giant. About 1.3 million years ago the object may have passed within a distance of to the nearby star TYC 4742-1027-1, but its velocity is too high to have originated from that star system, and it probably just passed through the system's Oort cloud at a relative speed of about . An August 2018 study using Gaia Data Release 2 updated the possible past close encounters and identified four stars—HIP 3757, HD 292249, Gaia DR2 2502921019565490176, and Gaia DR2 3666992950762141312—which Oumuamua passed relatively close to at moderately low velocities in the past few million years. This study also identifies future close encounters of Oumuamua on its outgoing trajectory from the Sun. In September 2018, astronomers described several possible home star systems from which Oumuamua may have originated. In April 2020, astronomers presented a new possible scenario for the object's origin. According to one hypothesis, Oumuamua could be a fragment from a tidally disrupted planet. If true, this would make Oumuamua a rare object, of a type much less abundant than most extrasolar "dusty-snowball" comets or asteroids. But this scenario leads to cigar-shaped objects, whereas Oumuamua's lightcurve favors a disc-like shape. In May 2020, it was proposed that the object was the first observed member of a class of small H2-ice-rich bodies that form at temperatures near 3 K in the cores of giant molecular clouds. The non-gravitational acceleration and high aspect ratio shape of Oumuamua might be explainable on this basis. However, it was later calculated that hydrogen icebergs cannot survive their journey through interstellar space. Classification Initially, Oumuamua was announced as comet C/2017 U1 (PANSTARRS) based on a strongly hyperbolic trajectory. In an attempt to confirm any cometary activity, very deep stacked images were taken at the Very Large Telescope later the same day, but the object showed no presence of a coma. Accordingly, the object was renamed A/2017 U1, becoming the first comet ever to be re-designated as an asteroid. Once it was identified as an interstellar object, it was designated 1I/2017 U1, the first member of a new class of objects. The lack of a coma limits the amount of surface ice to a few square meters, and any volatiles (if they exist) must lie below a crust at least thick. It also indicates that the object must have formed within the frost line of its parent stellar system or have been in the inner region of that stellar system long enough for all near-surface ice to sublimate, as may be the case with damocloids. It is difficult to say which scenario is more likely due to the chaotic nature of small body dynamics, although if it formed in a similar manner to Solar System objects, its spectrum indicates that the latter scenario is true. Any meteoric activity from Oumuamua would have been expected to occur on 18 October 2017 coming from the constellation Sextans, but no activity was detected by the Canadian Meteor Orbit Radar. 
On 27 June 2018, astronomers reported that Oumuamua was thought to be a mildly active comet, and not an asteroid, as previously thought. This was determined by measuring a non-gravitational boost to Oumuamua's acceleration, consistent with comet outgassing. However, studies submitted in October 2018 suggest that the object is neither an asteroid nor a comet, although the object could be a remnant of a disintegrated interstellar comet (or exocomet), as suggested by astronomer Zdenek Sekanina. Appearance, shape and composition Spectra from the Hale Telescope on 25 October showed a red color resembling comet nuclei or Trojans. Higher signal-to-noise spectra recorded by the William Herschel Telescope later that day showed that the object was featureless, and colored red like Kuiper belt objects. Spectra obtained with the Very Large Telescope the following night showed that behavior continued into near-infrared wavelengths. Its spectrum is similar to that of D-type asteroids. Oumuamua is not rotating around its principal axis, and its motion may be a form of tumbling. This accounts for the various rotation periods reported, such as 8.10 hours (±0.42 hours or ±0.02 hours) by Bannister et al. and Bolin et al. with a lightcurve amplitude of , whereas Meech et al. reported a rotation period of 7.3 hours and a lightcurve amplitude of 2.5 magnitudes. Most likely, Oumuamua was set tumbling by a collision in its system of origin, and remains tumbling since the time scale for dissipation of this motion is very long, at least a billion years. The large variations on the light curves indicate that Oumuamua may be anything from a highly elongated cigar-like object, comparable to or greater than the most elongated Solar System objects, to an extremely flat object, a pancake or oblate spheroid. However, the size and shape have not been directly observed as Oumuamua appears as nothing more than a point source of light even in the most powerful telescopes. Neither its albedo nor its triaxial ellipsoid shape is known. If cigar-shaped, the longest-to-shortest axis ratio could be 5:1 or greater. Assuming an albedo of 10% (slightly higher than typical for D-type asteroids) and a 6:1 ratio, Oumuamua has dimensions of approximately with an average diameter of about . According to astronomer David Jewitt, the object is physically unremarkable except for its highly elongated shape. Bannister et al. have suggested that it could also be a contact binary, although this may not be compatible with its rapid rotation. One speculation regarding its shape is that it is a result of a violent event (such as a collision or stellar explosion) that caused its ejection from its system of origin. JPL News reported that Oumuamua "is up to one-quarter mile (400 meters) long and highly-elongated — perhaps 10 times as long as it is wide". A 2019 paper finds the best models as either a cigar-shape, 1:8 aspect ratio, or disc-shape, 1:6 aspect ratio, with the disc more likely since its rotation does not require a specific orientation to see the range of brightnesses observed. Monte Carlo simulations based on the available orbit determination suggest that the equatorial obliquity of Oumuamua could be about 93 degrees, if it has a very prolate or cigar-like shape, or close to 16 degrees, if it is very oblate or disk-like. A 2021 paper proposed that, if 'Oumuamua is made of nitrogen ice, the extreme shape could be a result of recent evaporation, and that when the object entered the Solar System it likely had an unremarkable 2:1 aspect ratio. 
The authors calculated that in this scenario, a month after perihelion, ʻOumuamua had lost 92% of the mass it had upon entering the Solar System. However, light-curve observations suggest that the object may instead be composed of dense metal-rich rock that has been reddened by millions of years of exposure to cosmic rays. It is thought that its surface contains tholins, which are irradiated organic compounds that are more common in objects in the outer Solar System and can help determine the age of the surface. This possibility is inferred from spectroscopic characterization and its reddish color, and from the expected effects of interstellar radiation. Despite the lack of any cometary coma when it approached the Sun, it may still contain internal ice, hidden by "an insulating mantle produced by long-term cosmic ray exposure". In November 2019, some astronomers noted that ʻOumuamua may instead be a "cosmic dust bunny", due to its "very lightweight and 'fluffy' conglomerate of dust and ice grains". In August 2020, astronomers reported that ʻOumuamua is not likely to have been composed of frozen hydrogen, which had been proposed earlier; the compositional nature of the object remains unknown.

Radio measurements

In December 2017, astronomer Avi Loeb of Harvard University, an adviser to the Breakthrough Listen Project, cited ʻOumuamua's unusually elongated shape as one reason the Green Bank Telescope in West Virginia would listen for radio emissions from it, to see if there were any unexpected signs that it might be of artificial origin, although earlier limited observations by other radio telescopes, such as the SETI Institute's Allen Telescope Array, had produced no such results. On 13 December 2017, the Green Bank Telescope observed the object for six hours across four bands of radio frequency. No radio signals from ʻOumuamua were detected in this very limited scanning range, but more observations were planned.

Discussion

Nitrogen ice theory

Outgassing of nitrogen ice (N2), which is very difficult to detect directly, could explain the object's acceleration despite the apparent absence of outgassing. A nitrogen-ice body the size of ʻOumuamua could survive for 500 million years in the interstellar medium and would reflect two-thirds of the Sun's light. This explanation was further supported in March 2021, when scientists presented a theory based on nitrogen ice and concluded that ʻOumuamua may be a piece of an exoplanet similar to the dwarf planet Pluto, an "exo-Pluto", from beyond our Solar System. This theory has been criticized by Loeb. In November 2021, theoretical studies by Siraj and Loeb argued that ʻOumuamua was not a nitrogen iceberg.

Hydrogen ice theory

It has been proposed that ʻOumuamua contains a significant amount of hydrogen ice. This would point to it originating from the core of an interstellar molecular cloud, where conditions for the formation of this material might exist. The Sun's heat would cause the hydrogen to sublime, which would in turn propel the body. The hydrogen coma formed by this process would be difficult to detect with Earth-based telescopes, as the atmosphere blocks the relevant wavelengths. Regular water-ice comets undergo the same process, though to a much lesser extent and with a visible coma. This may explain the significant non-gravitational acceleration that ʻOumuamua underwent without showing signs of coma formation. Significant mass loss caused by the sublimation would also explain the unusual cigar-like shape, comparable to how a bar of soap becomes more elongated as it is used up.
However, it was later shown that hydrogen icebergs cannot form out of small grains and that, to avoid evaporating during the journey through interstellar space, such a body would have to have formed about 40 million years ago, in the close neighborhood of the Solar System.

Hydrogen-laden water ice theory

In 2023, it was proposed that the observed non-gravitational acceleration and spectrum of ʻOumuamua can best be explained by hydrogen outgassing from a water-ice matrix. The buildup of hydrogen in the water ice is expected to occur in interstellar comets through low-temperature radiolysis of the ice by cosmic-ray particles while ʻOumuamua, or a similar cometary body, was in interstellar space.

Hypothetical space missions

The Initiative for Interstellar Studies (i4is) launched Project Lyra to assess the feasibility of a mission to ʻOumuamua. Several options for sending a spacecraft to ʻOumuamua within a time-frame of 5 to 25 years were suggested. Different mission durations and their velocity requirements were explored with respect to the launch date, assuming direct impulsive transfer to the intercept trajectory. The Space Launch System (also being looked at for "interstellar precursor missions") would be an even more capable launcher; such an interstellar precursor could easily pass by ʻOumuamua on its way out of the Solar System. More advanced options using solar, laser electric, and laser sail propulsion, based on Breakthrough Starshot technology, have also been considered. The challenge is to get to the interstellar object in a reasonable amount of time (and so at a reasonable distance from Earth) and yet be able to gain useful scientific information. To do this, decelerating the spacecraft at ʻOumuamua would be "highly desirable, due to the minimal science return from a hyper-velocity encounter". If the investigative craft goes too fast, it would not be able to get into orbit or land on the object and would fly past it. The authors conclude that, although challenging, an encounter mission would be feasible using near-term technology. Seligman and Laughlin adopted a complementary approach to the Lyra study but also concluded that such missions, though challenging to mount, are both feasible and scientifically attractive.

Technosignature hypothesis

On 26 October 2018, Loeb and his postdoc, Shmuel Bialy, submitted a paper exploring the possibility of ʻOumuamua being an artificial thin solar sail accelerated by solar radiation pressure, in an effort to help explain the object's comet-like non-gravitational acceleration. Other scientists have stated that the available evidence is insufficient to consider such a premise, and that a tumbling solar sail would not be able to accelerate in the way observed. In response, Loeb wrote an article detailing six anomalous properties of ʻOumuamua that make it unusual, unlike any comets or asteroids seen before. A subsequent report on observations by the Spitzer Space Telescope set a tight limit on cometary outgassing of any carbon-based molecules and indicated that ʻOumuamua is at least ten times shinier than a typical comet. The solar sail technosignature hypothesis is considered unlikely by many experts, owing to simpler available explanations that align with the expected characteristics of interstellar asteroids and comets.

Other interstellar objects

2I/Borisov was discovered on 30 August 2019, and was soon confirmed to be an interstellar comet. Coming from the direction of Cassiopeia, the object reached perihelion (its closest point to the Sun) on 8 December 2019.
Other proposed interstellar objects include the meteors CNEOS 2014-01-08 and CNEOS 2017-03-09 that impacted Earth in 2014 and 2017, respectively, although these claims have been met with skepticism.
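The "strongly hyperbolic trajectory" that drove the C/, A/, and 1I/ designations above comes down to a single orbital element: an eccentricity greater than 1 means the object is not gravitationally bound to the Sun. As a minimal sketch of that two-body arithmetic (the eccentricity and semi-major axis below are approximate published values for ʻOumuamua, treated here as assumptions rather than figures from this article):

```python
import math

MU_SUN = 1.32712440018e20  # Sun's standard gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

e = 1.2          # approximate orbital eccentricity; e > 1 marks a hyperbolic orbit
a = -1.27 * AU   # approximate semi-major axis; negative by convention for hyperbolas

# Hyperbolic excess speed: the speed the object keeps "at infinity",
# i.e. its cruising speed through interstellar space.
v_inf = math.sqrt(MU_SUN / abs(a))

print(f"unbound: {e > 1}")
print(f"v_infinity = {v_inf / 1000:.1f} km/s")  # roughly 26 km/s for these inputs
```

Run as-is, this reproduces the often-quoted interstellar cruising speed of roughly 26 km/s for ʻOumuamua.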
Physical sciences
Other notable objects
Astronomy
32736981
https://en.wikipedia.org/wiki/Electronic-warfare%20aircraft
Electronic-warfare aircraft
An electronic-warfare aircraft is a military aircraft equipped for electronic warfare (EW), that is, degrading the effectiveness of enemy radar and radio systems by using radar jamming and deception methods. In 1943, British Avro Lancaster aircraft were equipped with chaff in order to blind enemy air defence radars. They were supplemented by specially-equipped aircraft flown by No. 100 Group RAF, which operated modified Halifaxes, Liberators and Fortresses carrying various jammers such as Carpet, Airborne Cigar, Mandrel, Jostle, and Piperack.

List of electronic-warfare aircraft

Examples of modern aircraft designed or modified for EW include:

Antonov An-12BK-PPS (Soviet Union)
Antonov An-26REP (Soviet Union)
Boeing EA-18G Growler (United States)
Denel TP1 Oryx EW (South Africa)
Douglas C-47TP EW (South Africa)
Douglas EA-3 Skywarrior (United States)
Douglas EB-66 Destroyer (United States)
Douglas EF-10B Skyknight (United States)
Embraer R-99 (Brazil)
General Dynamics/Grumman EF-111A Raven (United States)
IAI 202B Arava (Israel)
Ilyushin Il-22PP (Soviet Union) / (Russia)
Kawasaki EC-1 (Japan)
L3Harris EA-37B Compass Call (United States)
Kawasaki RC-2 (Japan)
Lockheed EC-130H Compass Call (United States)
Mil Mi-8PP (Soviet Union)
Northrop Grumman EA-6B Prowler (United States)
Tornado ECR (Germany / Italy)
Shaanxi Y-8EW (China)
Shaanxi Y-8-GX1 (China)
Shaanxi Y-9-GX11 (China)
Shaanxi Y-9DZ (China)
Shenyang J-15D (China)
Shenyang J-16D (China)
Sukhoi Su-24MP (Soviet Union)
Tupolev Tu-16RM-2 (Soviet Union)
Yakovlev Yak-28PP (Soviet Union)
Dassault Falcon 20 (Pakistan)
Saab Globaleye (Sweden)
Hava Soj (Turkey)
Technology
Military aviation
null
32742753
https://en.wikipedia.org/wiki/Ordinary%20differential%20equation
Ordinary differential equation
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with any other DE, its unknowns consist of one or more functions, and the equation involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs), which may involve more than one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs), where the progression is random.

Differential equations

A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form

$$a_0(x) y + a_1(x) y' + a_2(x) y'' + \cdots + a_n(x) y^{(n)} + b(x) = 0,$$

where $a_0(x), \ldots, a_n(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y', \ldots, y^{(n)}$ are the successive derivatives of the unknown function $y$ of the variable $x$.

Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example, the Riccati equation). Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.

Background

Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations. Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), and economics (stock trends, interest rates and the market equilibrium price changes). Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.

A simple example is Newton's second law of motion: the relationship between the displacement $x$ and the time $t$ of an object under the force $F$ is given by the differential equation

$$m \frac{\mathrm{d}^2 x(t)}{\mathrm{d} t^2} = F(x(t)),$$

which constrains the motion of a particle of constant mass $m$. In general, $F$ is a function of the position $x(t)$ of the particle at time $t$. The unknown function $x(t)$ appears on both sides of the differential equation, and is indicated in the notation $F(x(t))$.

Definitions

In what follows, $y$ is a dependent variable representing an unknown function $y = f(x)$ of the independent variable $x$.
The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, Leibniz's notation $\left(\tfrac{\mathrm{d}y}{\mathrm{d}x}, \tfrac{\mathrm{d}^2 y}{\mathrm{d}x^2}, \ldots, \tfrac{\mathrm{d}^n y}{\mathrm{d}x^n}\right)$ is more useful for differentiation and integration, whereas Lagrange's notation $\left(y', y'', \ldots, y^{(n)}\right)$ is more useful for representing higher-order derivatives compactly, and Newton's notation $\left(\dot{y}, \ddot{y}\right)$ is often used in physics for representing derivatives of low order with respect to time.

General definition

Given $F$, a function of $x$, $y$, and derivatives of $y$, an equation of the form

$$y^{(n)} = F\left(x, y, y', \ldots, y^{(n-1)}\right)$$

is called an explicit ordinary differential equation of order $n$. More generally, an implicit ordinary differential equation of order $n$ takes the form

$$F\left(x, y, y', y'', \ldots, y^{(n)}\right) = 0.$$

There are further classifications, such as into autonomous and linear equations.

System of ODEs

A number of coupled differential equations form a system of equations. If $\mathbf{y}$ is a vector whose elements are functions, $\mathbf{y}(x) = \left[y_1(x), y_2(x), \ldots, y_m(x)\right]$, and $\mathbf{F}$ is a vector-valued function of $\mathbf{y}$ and its derivatives, then

$$\mathbf{y}^{(n)} = \mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n-1)}\right)$$

is an explicit system of ordinary differential equations of order $n$ and dimension $m$. These are not necessarily linear. The implicit analogue is

$$\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}', \ldots, \mathbf{y}^{(n)}\right) = \boldsymbol{0},$$

where $\boldsymbol{0}$ is the zero vector. For a system of the form $\mathbf{F}\left(x, \mathbf{y}, \mathbf{y}'\right) = \boldsymbol{0}$, some sources also require that the Jacobian matrix $\frac{\partial \mathbf{F}(x, \mathbf{u}, \mathbf{v})}{\partial \mathbf{v}}$ be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as a system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders. The behavior of a system of ODEs can be visualized through the use of a phase portrait.

Solutions

Given a differential equation

$$F\left(x, y, y', \ldots, y^{(n)}\right) = 0,$$

a function $u: I \subset \mathbb{R} \to \mathbb{R}$, where $I$ is an interval, is called a solution or integral curve for $F$, if $u$ is $n$-times differentiable on $I$, and

$$F\left(x, u, u', \ldots, u^{(n)}\right) = 0, \qquad x \in I.$$

Given two solutions $u: J \subset \mathbb{R} \to \mathbb{R}$ and $v: I \subset \mathbb{R} \to \mathbb{R}$, $u$ is called an extension of $v$ if $I \subset J$ and

$$u(x) = v(x), \qquad x \in I.$$

A solution that has no extension is called a maximal solution. A solution defined on all of $\mathbb{R}$ is called a global solution. A general solution of an $n$th-order equation is a solution containing $n$ arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution. In the context of linear ODEs, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
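As a concrete illustration of this homogeneous/particular terminology (a standard textbook example rather than one taken from this article), consider $y'' + y = x$. The associated homogeneous equation $y'' + y = 0$ has general solution $y_h$, and $y_p = x$ is one particular solution of the full equation, so

$$y_h = C_1 \cos x + C_2 \sin x, \qquad y_p = x, \qquad y = y_h + y_p = C_1 \cos x + C_2 \sin x + x.$$

Imposing initial conditions such as $y(0) = 0$ and $y'(0) = 0$ then fixes the constants to $C_1 = 0$ and $C_2 = -1$, giving the particular solution $y = x - \sin x$.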
Solutions of finite duration

For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that, from its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not covered by the uniqueness theorem for solutions of Lipschitz differential equations. As an example, the equation

$$y' = -\operatorname{sgn}(y) \sqrt{|y|}, \qquad y(0) = 1,$$

admits the finite-duration solution

$$y(x) = \frac{1}{4} \left(1 - \frac{x}{2} + \left|1 - \frac{x}{2}\right|\right)^2,$$

which equals $(1 - x/2)^2$ for $x \le 2$ and is identically zero for $x \ge 2$.

Theories

Singular solutions

The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.

Reduction to quadratures

The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the $n$th degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that complex differential equations require complex numbers. Hence, analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and, if so, what are the characteristic properties.

Fuchsian theory

Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces under rational one-to-one transformations.

Lie's theory

From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact. Lie's group theory of differential equations has proved its value in two respects: it unifies the many ad hoc methods known for solving differential equations, and it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations for generating integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE. Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.

Sturm–Liouville theory

Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.

Existence and uniqueness of solutions

There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are:

Peano existence theorem: $F$ continuous; local existence only.
Picard–Lindelöf theorem: $F$ Lipschitz continuous; local existence and uniqueness.

In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met. Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.

Local existence and uniqueness theorem simplified

The theorem can be stated simply as follows. For the equation and initial value problem

$$y' = F(x, y), \qquad y(x_0) = y_0,$$

if $F$ and $\partial F / \partial y$ are continuous in a closed rectangle

$$R = [x_0 - a, x_0 + a] \times [y_0 - b, y_0 + b]$$

in the $x$–$y$ plane, where $a$ and $b$ are real (symbolically: $a, b \in \mathbb{R}$) and $\times$ denotes the Cartesian product, square brackets denote closed intervals, then there is an interval

$$I = [x_0 - h, x_0 + h] \subset [x_0 - a, x_0 + a]$$

for some $h \in \mathbb{R}$ where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on $F$ to be linear, this applies to non-linear equations that take the form $y' = F(x, y)$, and it can also be applied to systems of equations.

Global uniqueness and maximum domain of solution

When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely: for each initial condition $(x_0, y_0)$ there exists a unique maximum (possibly infinite) open interval

$$I_{\max} = (x_-, x_+), \qquad x_\pm \in \mathbb{R} \cup \{\pm\infty\}, \qquad x_0 \in I_{\max},$$

such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain $I_{\max}$. In the case that $x_\pm \neq \pm\infty$, there are exactly two possibilities:

explosion in finite time: $\limsup_{x \to x_\pm} \|y(x)\| \to \infty$;
leaves the domain of definition: $\lim_{x \to x_\pm} y(x) \in \partial \bar{\Omega}$;

where $\Omega$ is the open set in which $F$ is defined, and $\partial \bar{\Omega}$ is its boundary. Note that the maximum domain of the solution is always an interval (to have uniqueness), may be smaller than $\mathbb{R}$, and may depend on the specific choice of $(x_0, y_0)$.

Example: $y' = y^2$ with initial condition $y(x_0) = y_0$.
This means that $F(x, y) = y^2$, which is $C^1$ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem. Even in such a simple setting, the maximum domain of solution cannot be all of $\mathbb{R}$, since for $y_0 \neq 0$ the solution is

$$y(x) = \frac{y_0}{1 - y_0 (x - x_0)},$$

which has maximum domain

$$I_{\max} = \begin{cases} \mathbb{R}, & y_0 = 0, \\ \left(-\infty,\; x_0 + \tfrac{1}{y_0}\right), & y_0 > 0, \\ \left(x_0 + \tfrac{1}{y_0},\; +\infty\right), & y_0 < 0. \end{cases}$$

This shows clearly that the maximum interval may depend on the initial conditions. The domain of $y$ could be taken as being $\mathbb{R} \setminus \left\{x_0 + \tfrac{1}{y_0}\right\}$, but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it. The maximum domain is not $\mathbb{R}$ because

$$\lim_{x \to x_\pm} \|y(x)\| \to \infty,$$

which is explosion in finite time, one of the two possible cases according to the above theorem.

Reduction of order

Differential equations are usually easier to solve if the order of the equation can be reduced.

Reduction to a first-order system

Any explicit differential equation of order $n$,

$$y^{(n)} = F\left(x, y, y', y'', \ldots, y^{(n-1)}\right),$$

can be written as a system of $n$ first-order differential equations by defining a new family of unknown functions $y_i = y^{(i-1)}$ for $i = 1, 2, \ldots, n$. The $n$-dimensional system of first-order coupled differential equations is then

$$y_1' = y_2, \quad y_2' = y_3, \quad \ldots, \quad y_{n-1}' = y_n, \quad y_n' = F\left(x, y_1, \ldots, y_n\right),$$

or, more compactly in vector notation,

$$\mathbf{y}' = \mathbf{F}\left(x, \mathbf{y}\right),$$

where $\mathbf{y} = \left(y_1, \ldots, y_n\right)$ and $\mathbf{F}(x, \mathbf{y}) = \left(y_2, \ldots, y_n, F(x, y_1, \ldots, y_n)\right)$.

Summary of exact solutions

Some differential equations have solutions that can be written in an exact and closed form. Several important classes are separable equations, general first-order equations, general second-order equations, and linear equations up to the $n$th order. In the solution formulas for these classes, $P(x)$, $Q(x)$, $P(y)$, $Q(y)$, and $M(x, y)$, $N(x, y)$ are any integrable functions of $x$ and $y$; $b$ and $c$ are real given constants; and $C_1, C_2, \ldots$ are arbitrary constants (complex in general). The differential equations are written in equivalent and alternative forms that lead to the solution through integration. In the integral solutions, $\lambda$ and $\varepsilon$ are dummy variables of integration (the continuum analogues of indices in summation), and the notation $\int^x F(\lambda)\,\mathrm{d}\lambda$ just means to integrate $F(\lambda)$ with respect to $\lambda$, then after the integration substitute $\lambda = x$, without adding constants.

The guessing method

When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating that it is correct. To use this method, we simply guess a solution to the differential equation, and then plug the solution into the differential equation to validate whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over again and try another guess. For instance, we could guess that the solution to a DE has an exponential form, since this is a very common solution that physically behaves in a sinusoidal way. In the case of a first-order ODE that is non-homogeneous, we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is

$$y_{\text{general}} = y_{\text{homogeneous}} + y_{\text{particular}}.$$

Software for ODE solving

Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory).
GNU Octave, a high-level language, primarily intended for numerical computations.
Scilab, an open source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically.
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.
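To connect the reduction-to-first-order recipe above with one of the packages listed here, the following is a minimal sketch using SciPy's solve_ivp; the equation ($y'' + y = x$, also used as the terminology example earlier) and all numerical values are illustrative, not taken from this article:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rewrite the second-order ODE y'' + y = x as a first-order system:
# y1 = y, y2 = y', so y1' = y2 and y2' = x - y1.
def rhs(x, y):
    y1, y2 = y
    return [y2, x - y1]

# Integrate from x = 0 to x = 10 with initial conditions y(0) = 0, y'(0) = 0.
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], dense_output=True, rtol=1e-8)

# Compare against the exact solution y = x - sin(x) for these initial conditions.
xs = np.linspace(0.0, 10.0, 5)
numeric = sol.sol(xs)[0]
exact = xs - np.sin(xs)
print(np.column_stack((xs, numeric, exact)))
```

The same reduction is what most numerical ODE libraries expect as input: a first-order right-hand side $\mathbf{y}' = \mathbf{F}(x, \mathbf{y})$, regardless of the original equation's order.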
Mathematics
Calculus and analysis
null
32747596
https://en.wikipedia.org/wiki/Structural%20coloration
Structural coloration
Structural coloration in animals, and a few plants, is the production of colour by microscopically structured surfaces fine enough to interfere with visible light instead of pigments, although some structural coloration occurs in combination with pigments. For example, peacock tail feathers are pigmented brown, but their microscopic structure makes them also reflect blue, turquoise, and green light, and they are often iridescent. Structural coloration was first described by English scientists Robert Hooke and Isaac Newton, and its principle – wave interference – was explained by Thomas Young a century later. Young described iridescence as the result of interference between reflections from two or more surfaces of thin films, combined with refraction as light enters and leaves such films. The geometry then determines that at certain angles, the light reflected from both surfaces interferes constructively, while at other angles, the light interferes destructively. Different colours therefore appear at different angles.

In animals such as on the feathers of birds and the scales of butterflies, interference is created by a range of photonic mechanisms, including diffraction gratings, selective mirrors, photonic crystals, crystal fibres, matrices of nanochannels and proteins that can vary their configuration. Some cuts of meat also show structural coloration due to the exposure of the periodic arrangement of the muscular fibres. Many of these photonic mechanisms correspond to elaborate structures visible by electron microscopy. In the few plants that exploit structural coloration, brilliant colours are produced by structures within cells. The most brilliant blue coloration known in any living tissue is found in the marble berries of Pollia condensata, where a spiral structure of cellulose fibrils produces Bragg's law scattering of light. The bright gloss of buttercups is produced by thin-film reflection by the epidermis supplemented by yellow pigmentation, and strong diffuse scattering by a layer of starch cells immediately beneath. Structural coloration has potential for industrial, commercial and military applications, with biomimetic surfaces that could provide brilliant colours, adaptive camouflage, efficient optical switches and low-reflectance glass.

History

In his 1665 book Micrographia, Robert Hooke described the "fantastical" colours of the peacock's feathers. In his 1704 book Opticks, Isaac Newton described the mechanism of the colours other than the brown pigment of peacock tail feathers. Thomas Young (1773–1829) extended Newton's particle theory of light by showing that light could also behave as a wave; he showed in 1803 that light could diffract from sharp edges or slits, creating interference patterns. In his 1892 book Animal Coloration, Frank Evers Beddard (1858–1925) acknowledged the existence of structural colours. But Beddard then largely dismissed structural coloration, firstly as subservient to pigments: "in every case the [structural] colour needs for its display a background of dark pigment;" and then by asserting its rarity: "By far the commonest source of colour in invertebrate animals is the presence in the skin of definite pigments", though he does later admit that the Cape golden mole has "structural peculiarities" in its hair that "give rise to brilliant colours".

Principles

Structure not pigment

Structural coloration is caused by interference effects rather than by pigments.
Colours are produced when a material is scored with fine parallel lines, or formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the colour's wavelength. Structural coloration is responsible for the blues and greens of the feathers of many birds (the bee-eater, kingfisher and roller, for example), as well as many butterfly wings, beetle wing-cases (elytra) and (while rare among flowers) the gloss of buttercup petals. These are often iridescent, as in peacock feathers and nacreous shells such as those of pearl oysters (Pteriidae) and Nautilus. This is because the reflected colour depends on the viewing angle, which in turn governs the apparent spacing of the structures responsible. Structural colours can be combined with pigment colours: peacock feathers are pigmented brown with melanin, while buttercup petals have both carotenoid pigments for yellowness and thin films for reflectiveness.

Principle of iridescence

Iridescence, as explained by Thomas Young in 1803, is created when extremely thin films reflect part of the light falling on them from their top surfaces. The rest of the light goes through the films, and a further part of it is reflected from their bottom surfaces. The two sets of reflected waves travel back upwards in the same direction. But since the bottom-reflected waves travelled a little farther – controlled by the thickness and refractive index of the film, and the angle at which the light fell – the two sets of waves are out of phase. When the waves are one or more whole wavelengths apart – in other words, at certain specific angles – they add (interfere constructively), giving a strong reflection. At other angles and phase differences, they can subtract, giving weak reflections. The thin film therefore selectively reflects just one wavelength – a pure colour – at any given angle, but other wavelengths – different colours – at different angles. So, as a thin-film structure such as a butterfly's wing or bird's feather moves, it seems to change colour.
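Young's geometric argument can be written compactly. As a standard thin-film optics sketch (not taken from this article, and ignoring the half-wavelength phase shift that occurs when only one of the two reflections is from a denser medium), reflections from a film of refractive index $n$ and thickness $d$ reinforce when the extra path travelled inside the film is a whole number of wavelengths:

$$2 n d \cos\theta_t = m \lambda, \qquad m = 1, 2, 3, \ldots,$$

where $\theta_t$ is the angle of refraction inside the film and $\lambda$ is the wavelength in vacuum. Because the condition couples $\lambda$ to the viewing geometry through $\theta_t$, each angle picks out its own strongly reflected colour, which is exactly the angle-dependent colour change described above.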
Mechanisms

Fixed structures

A number of fixed structures can create structural colours, by mechanisms including diffraction gratings, selective mirrors, photonic crystals, crystal fibres and deformed matrices. Structures can be far more elaborate than a single thin film: films can be stacked up to give strong iridescence, to combine two colours, or to balance out the inevitable change of colour with angle to give a more diffuse, less iridescent effect. Each mechanism offers a specific solution to the problem of creating a bright colour or combination of colours visible from different directions.

A diffraction grating constructed of layers of chitin and air gives rise to the iridescent colours of various butterfly wing scales as well as to the tail feathers of birds such as the peacock. Hooke and Newton were correct in their claim that the peacock's colours are created by interference, but the structures responsible, being close to the wavelength of light in scale (see micrographs), were smaller than the striated structures they could see with their light microscopes. Another way to produce a diffraction grating is with tree-shaped arrays of chitin, as in the wing scales of some of the brilliantly coloured tropical Morpho butterflies (see drawing). Yet another variant exists in Parotia lawesii, Lawes's parotia, a bird of paradise. The barbules of the feathers of its brightly coloured breast patch are V-shaped, creating thin-film microstructures that strongly reflect two different colours, bright blue-green and orange-yellow. When the bird moves, the colour switches sharply between these two colours, rather than drifting iridescently. During courtship, the male bird systematically makes small movements to attract females, so the structures must have evolved through sexual selection.

Photonic crystals can be formed in different ways. In Parides sesostris, the emerald-patched cattleheart butterfly, photonic crystals are formed of arrays of nano-sized holes in the chitin of the wing scales. The holes have a diameter of about 150 nanometres and are about the same distance apart. The holes are arranged regularly in small patches; neighbouring patches contain arrays with differing orientations. The result is that these emerald-patched cattleheart scales reflect green light evenly at different angles instead of being iridescent. In Lamprocyphus augustus, a weevil from Brazil, the chitin exoskeleton is covered in iridescent green oval scales. These contain diamond-based crystal lattices oriented in all directions to give a brilliant green coloration that hardly varies with angle. The scales are effectively divided into pixels about a micrometre wide. Each such pixel is a single crystal and reflects light in a direction different from its neighbours.

Selective mirrors to create interference effects are formed of micron-sized bowl-shaped pits lined with multiple layers of chitin in the wing scales of Papilio palinurus, the emerald swallowtail butterfly. These act as highly selective mirrors for two wavelengths of light. Yellow light is reflected directly from the centres of the pits; blue light is reflected twice by the sides of the pits. The combination appears green, but can be seen as an array of yellow spots surrounded by blue circles under a microscope.

Crystal fibres, formed of hexagonal arrays of hollow nanofibres, create the bright iridescent colours of the bristles of Aphrodita, the sea mouse, a non-wormlike genus of marine annelids. The colours are aposematic, warning predators not to attack. The chitin walls of the hollow bristles form a hexagonal honeycomb-shaped photonic crystal; the hexagonal holes are 0.51 μm apart. The structure behaves optically as if it consisted of a stack of 88 diffraction gratings, making Aphrodita one of the most iridescent of marine organisms.

Deformed matrices, consisting of randomly oriented nanochannels in a spongelike keratin matrix, create the diffuse non-iridescent blue colour of Ara ararauna, the blue-and-yellow macaw. Since the reflections are not all arranged in the same direction, the colours, while still magnificent, do not vary much with angle, so they are not iridescent.

Spiral coils, formed of helicoidally stacked cellulose microfibrils, create Bragg reflection in the "marble berries" of the African herb Pollia condensata, resulting in the most intense blue coloration known in nature. The berry's surface has four layers of cells with thick walls, containing spirals of transparent cellulose spaced so as to allow constructive interference with blue light. Below these cells is a layer two or three cells thick containing dark brown tannins. Pollia produces a stronger colour than the wings of Morpho butterflies, and is one of the first instances of structural coloration known from any plant.
Each cell has its own thickness of stacked fibres, making it reflect a different colour from its neighbours, and producing a pixellated or pointillist effect with different blues speckled with brilliant green, purple, and red dots. The fibres in any one cell are either left-handed or right-handed, so each cell circularly polarizes the light it reflects in one direction or the other. Pollia is the first organism known to show such random polarization of light, which nevertheless does not have a visual function, as the seed-eating birds that visit this plant species are not able to perceive polarised light. Spiral microstructures are also found in scarab beetles, where they produce iridescent colours.

Thin film with diffuse reflector, based on the top two layers of a buttercup's petals. The brilliant yellow gloss derives from a combination, rare among plants, of yellow pigment and structural coloration. The very smooth upper epidermis acts as a reflective and iridescent thin film; for example, in Ranunculus acris, the layer is 2.7 micrometres thick. The unusual starch cells form a diffuse but strong reflector, enhancing the flower's brilliance. The curved petals form a paraboloidal dish which directs the sun's heat to the reproductive parts at the centre of the flower, keeping it several degrees Celsius above the ambient temperature.

Surface gratings, consisting of ordered surface features due to the exposure of ordered muscle cells on cuts of meat. The structural coloration on meat cuts appears only after the ordered pattern of muscle fibrils is exposed and light is diffracted by the proteins in the fibrils. The coloration or wavelength of the diffracted light depends on the angle of observation and can be enhanced by covering the meat with translucent foils. Roughening the surface or removing water content by drying causes the structure to collapse and the structural coloration to disappear.

Interference from multiple total internal reflections can occur in microscale structures, such as sessile water droplets and biphasic oil-in-water droplets, as well as polymer microstructured surfaces. In this structural coloration mechanism, light rays that travel by different paths of total internal reflection along an interface interfere to generate iridescent colour.

Variable structures

Some animals, including cephalopods such as squid, are able to vary their colours rapidly for both camouflage and signalling. The mechanisms include reversible proteins which can be switched between two configurations. The configuration of reflectin proteins in chromatophore cells in the skin of the Doryteuthis pealeii squid is controlled by electric charge. When charge is absent, the proteins stack together tightly, forming a thin, more reflective layer; when charge is present, the molecules stack more loosely, forming a thicker layer. Since chromatophores contain multiple reflectin layers, the switch changes the layer spacing and hence the colour of light that is reflected.

Blue-ringed octopuses spend much of their time hiding in crevices whilst displaying effective camouflage patterns with their dermal chromatophore cells. If they are provoked, they quickly change colour, becoming bright yellow with each of the 50–60 rings flashing bright iridescent blue within a third of a second. In the greater blue-ringed octopus (Hapalochlaena lunulata), the rings contain multi-layer iridophores. These are arranged to reflect blue-green light in a wide viewing direction.
The fast flashes of the blue rings are achieved using muscles under neural control. Under normal circumstances, each ring is hidden by contraction of muscles above the iridophores. When these relax and muscles outside the ring contract, the bright blue rings are exposed.

Examples

In technology

Gabriel Lippmann won the Nobel Prize in Physics in 1908 for his work on a structural coloration method of colour photography, the Lippmann plate. This used a photosensitive emulsion fine enough for the interference caused by light waves reflecting off the back of the glass plate to be recorded in the thickness of the emulsion layer, in a monochrome (black and white) photographic process. Shining white light through the plate effectively reconstructs the colours of the photographed scene.

In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales. The fibres are composed of 61 flat alternating layers, between 70 and 100 nanometres thick, of two plastics with different refractive indices, nylon and polyester, in a transparent nylon sheath with an oval cross-section. The materials are arranged so that the colour does not vary with angle. The fibres have been produced in red, green, blue, and violet.

Several countries and regions, including the U.S., European Union, and Brazil, use banknotes that include optically variable ink, which is structurally coloured, as a security feature. These pearlescent inks appear as different colours depending on the angle the banknote is viewed from. Because the ink is hard to obtain, and because a photocopier or scanner (which works from only one angle) cannot reproduce or even perceive the colour-shifting effect, the ink serves to make counterfeiting more difficult.

Structural coloration could be further exploited industrially and commercially, and research that could lead to such applications is under way. A direct parallel would be to create active or adaptive military camouflage fabrics that vary their colours and patterns to match their environments, just as chameleons and cephalopods do. The ability to vary reflectivity to different wavelengths of light could also lead to efficient optical switches that could function like transistors, enabling engineers to make fast optical computers and routers.

The surface of the compound eye of the housefly is densely packed with microscopic projections that have the effect of reducing reflection and hence increasing transmission of incident light. Similarly, the eyes of some moths have antireflective surfaces, again using arrays of pillars smaller than the wavelength of light. "Moth-eye" nanostructures could be used to create low-reflectance glass for windows, solar cells, display devices, and military stealth technologies. Antireflective biomimetic surfaces using the "moth-eye" principle can be manufactured by first creating a mask by lithography with gold nanoparticles, and then performing reactive-ion etching.
Biology and health sciences
Biological pigments
Biology
38360943
https://en.wikipedia.org/wiki/Milling%20%28machining%29
Milling (machining)
Milling is the process of machining using rotary cutters to remove material by advancing a cutter into a workpiece. This may be done by varying direction on one or several axes, cutter head speed, and pressure. Milling covers a wide variety of different operations and machines, on scales from small individual parts to large, heavy-duty gang milling operations. It is one of the most commonly used processes for machining custom parts to precise tolerances.

Milling can be done with a wide range of machine tools. The original class of machine tools for milling was the milling machine (often called a mill). After the advent of computer numerical control (CNC) in the 1960s, milling machines evolved into machining centers: milling machines augmented by automatic tool changers, tool magazines or carousels, CNC capability, coolant systems, and enclosures. Milling centers are generally classified as vertical machining centers (VMCs) or horizontal machining centers (HMCs). The integration of milling into turning environments, and vice versa, began with live tooling for lathes and the occasional use of mills for turning operations. This led to a new class of machine tools, multitasking machines (MTMs), which are purpose-built to facilitate milling and turning within the same work envelope.

Process

Milling is a cutting process that uses a milling cutter to remove material from the surface of a workpiece. The milling cutter is a rotary cutting tool, often with multiple cutting points. As opposed to drilling, where the tool is advanced along its rotation axis, the cutter in milling is usually moved perpendicular to its axis so that cutting occurs on the circumference of the cutter. As the milling cutter enters the workpiece, the cutting edges (flutes or teeth) of the tool repeatedly cut into and exit from the material, shaving off chips (swarf) from the workpiece with each pass. The cutting action is shear deformation; material is pushed off the workpiece in tiny clumps that hang together to a greater or lesser extent (depending on the material) to form chips. This makes metal cutting somewhat different (in its mechanics) from slicing softer materials with a blade.

The milling process removes material by performing many separate, small cuts. This is accomplished by using a cutter with many teeth, spinning the cutter at high speed, or advancing the material through the cutter slowly; most often it is some combination of these three approaches. The speeds and feeds used are varied to suit a combination of variables. The speed at which the piece advances through the cutter is called feed rate, or just feed; it is most often measured as distance per time (inches per minute [in/min or ipm] or millimeters per minute [mm/min]), although distance per revolution or per cutter tooth are also sometimes used (a worked example of how these quantities relate follows at the end of this section).

There are two major classes of milling process: In face milling, the cutting action occurs primarily at the end corners of the milling cutter. Face milling is used to cut flat surfaces (faces) into the workpiece, or to cut flat-bottomed cavities. In peripheral milling, the cutting action occurs primarily along the circumference of the cutter, so that the cross section of the milled surface ends up receiving the shape of the cutter. In this case the blades of the cutter can be seen as scooping out material from the workpiece. Peripheral milling is well suited to the cutting of deep slots, threads, and gear teeth.
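As the promised worked example of the speeds-and-feeds relationship (standard machining arithmetic with illustrative numbers, not values from this article): the table feed $f$ follows from the spindle speed $n$, the number of teeth $z$ on the cutter, and the feed per tooth $f_z$:

$$f = n \, z \, f_z, \qquad \text{e.g.} \quad f = 3000\ \mathrm{rev/min} \times 4\ \mathrm{teeth/rev} \times 0.05\ \mathrm{mm/tooth} = 600\ \mathrm{mm/min}.$$

This is why the same feed can be quoted per minute, per revolution ($z f_z$), or per tooth ($f_z$), as noted above.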
Milling cutters

Many different types of cutting tools are used in the milling process. Milling cutters such as end mills may have cutting surfaces across their entire end surface, so that they can be drilled into the workpiece (plunging). Milling cutters may also have extended cutting surfaces on their sides to allow for peripheral milling. Tools optimized for face milling tend to have only small cutters at their end corners. The cutting surfaces of a milling cutter are generally made of a hard and temperature-resistant material, so that they wear slowly. A low-cost cutter may have surfaces made of high speed steel. More expensive but slower-wearing materials include cemented carbide. Thin film coatings may be applied to decrease friction or further increase hardness. Such cutting tools are typically used in milling machines or machining centers to perform milling operations (and occasionally in other machine tools). They remove material by their movement within the machine (e.g., a ball nose mill) or directly from the cutter's shape (e.g., a form tool such as a hobbing cutter).

As material passes through the cutting area of a milling machine, the blades of the cutter remove chips (swarf) of material at regular intervals. Surfaces cut by the side of the cutter (as in peripheral milling) therefore always contain regular ridges. The distance between ridges and the height of the ridges depend on the feed rate, the number of cutting surfaces, and the cutter diameter. With a narrow cutter and rapid feed rate, these revolution ridges can constitute significant variations in the surface finish.

The face milling process can in principle produce very flat surfaces. However, in practice the result always shows visible trochoidal marks following the motion of points on the cutter's end face. These revolution marks give the characteristic finish of a face-milled surface. Revolution marks can have significant roughness depending on factors such as the flatness of the cutter's end face and the degree of perpendicularity between the cutter's rotation axis and the feed direction. Often a final pass with a slow feed rate is used to improve the surface finish after the bulk of the material has been removed. In a precise face milling operation, the revolution marks will be only microscopic scratches due to imperfections in the cutting edge.

Gang milling refers to the use of two or more milling cutters mounted on the same arbor (that is, ganged) in a horizontal-milling setup. All of the cutters may perform the same type of operation, or each cutter may perform a different type of operation. For example, if several workpieces need a slot, a flat surface, and an angular groove, a good method to cut these (within a non-CNC context) would be gang milling. All the completed workpieces would be the same, and milling time per piece would be minimized. Gang milling was especially important before the CNC era, because for duplicate part production it was a substantial efficiency improvement over milling one feature per operation and then changing machines (or changing the setup of the same machine) to cut the next. Today, CNC mills with automatic tool change and 4- or 5-axis control obviate gang-milling practice to a large extent.

Equipment

Milling is performed with a milling cutter in various forms, held in a collet or similar fixture, which, in turn, is held in the spindle of a milling machine.

Types and nomenclature

Mill orientation is the primary classification for milling machines. The two basic configurations are vertical and horizontal, referring to the orientation of the rotating spindle upon which the cutter is mounted.
However, there are alternative classifications according to method of control, size, purpose and power source.

Mill orientation

Vertical

In the vertical milling machine the spindle axis is vertically oriented. Milling cutters are held in the spindle and rotate on its axis. The spindle can generally be lowered (or the table can be raised, giving the same relative effect of bringing the cutter closer or deeper into the work), allowing plunge cuts and drilling. The depth to which the blades cut into the work can be controlled with a micrometer adjustment nut. There are two subcategories of vertical mills: the bed mill and the turret mill. A turret mill has a fixed spindle, and the table is moved both perpendicular and parallel to the spindle axis to accomplish cutting. Some turret mills have a quill which allows the milling cutter (or a drill) to be raised and lowered in a manner similar to a drill press. This provides two methods of cutting in the vertical (Z) direction: by raising or lowering the quill, and by moving the knee. In the bed mill, however, the table moves only perpendicular to the spindle's axis, while the spindle itself moves parallel to its own axis. Turret mills are generally considered the more versatile of the two designs.

A third type also exists, a lighter, more versatile machine, called a mill-drill. The mill-drill is a close relative of the vertical mill and quite popular in light industry and with hobbyists. A mill-drill is similar in basic configuration to a very heavy drill press, but equipped with an X-Y table and a much larger column. They also typically use more powerful motors than a comparably sized drill press; most are multi-speed belt-driven, with some models having a geared head or electronic speed control. They generally have quite heavy-duty spindle bearings to deal with the lateral loading on the spindle that is created by a milling operation. A mill-drill also typically raises and lowers the entire head, including motor, often on a dovetailed (sometimes round with rack and pinion) vertical column. A mill-drill also has a large quill that is generally locked during milling operations and released to facilitate drilling functions. Other differences that separate a mill-drill from a drill press may be a fine tuning adjustment for the Z-axis, a more precise depth stop, the capability to lock the X, Y or Z axis, and often a system of tilting the head or the entire vertical column and powerhead assembly to allow angled cutting and drilling. Aside from size, the principal difference between these lighter machines and larger vertical mills is that the X-Y table is at a fixed elevation; the Z-axis is controlled by moving the head or quill down toward the X-Y table. A mill-drill typically has an internal taper fitting in the quill to take a collet chuck, face mills, or a Jacobs chuck, similar to the vertical mill.

Horizontal

A horizontal mill has the same sort of x–y table, but the cutters are mounted on a horizontal spindle, or arbor, mounted across the table. Many horizontal mills also feature a built-in rotary table that allows milling at various angles; this feature is called a universal table. While endmills and the other types of tools available to a vertical mill may be used in a horizontal mill, their real advantage lies in arbor-mounted cutters, called side and face mills, which have a cross section rather like a circular saw, but are generally wider and smaller in diameter.
Because the cutters have good support from the arbor and have a larger cross-sectional area than an end mill, quite heavy cuts can be taken, enabling rapid material removal rates. These are used to mill grooves and slots. Plain mills are used to shape flat surfaces. Several cutters may be ganged together on the arbor to mill a complex shape of slots and planes. Special cutters can also cut grooves, bevels, radii, or indeed any section desired. These specialty cutters tend to be expensive. Simplex mills have one spindle, and duplex mills have two. It is also easier to cut gears on a horizontal mill. Some horizontal milling machines are equipped with a power-take-off provision on the table. This allows the table feed to be synchronized to a rotary fixture, enabling the milling of spiral features such as hypoid gears.

Universal

A universal milling machine is one with the facility to have either a horizontal or a vertical spindle, the latter sometimes being on a two-axis turret enabling the spindle to be pointed in any direction desired. The two options may be driven independently or from one motor through gearing. In either case, as the work is generally placed in the same place for either type of operation, the mechanism for the method not being used is moved out of the way. In smaller machines, "spares" may be lifted off, while larger machines offer a system to retract those parts not in use.

Comparative merits

The choice between vertical and horizontal spindle orientation in milling machine design usually hinges on the shape and size of a workpiece and the number of sides of the workpiece that require machining. Work in which the spindle's axial movement is normal to one plane, with an endmill as the cutter, lends itself to a vertical mill, where the operator can stand before the machine and have easy access to the cutting action by looking down upon it. Thus vertical mills are most favored for diesinking work (machining a mould into a block of metal). Heavier and longer workpieces lend themselves to placement on the table of a horizontal mill. Prior to numerical control, horizontal milling machines appeared first, having evolved by putting milling tables under lathe-like headstocks. Vertical mills appeared in subsequent decades, and accessories in the form of add-on heads to change horizontal mills to vertical mills (and later vice versa) have been commonly used. Even in the CNC era, a heavy workpiece needing machining on multiple sides lends itself to a horizontal machining center, while diesinking lends itself to a vertical one.

Alternative classifications

In addition to horizontal versus vertical, other distinctions are also important:

Variants

Bed mill: Any milling machine where the spindle is on a pendant that moves up and down to move the cutter into the work, while the table sits on a stout bed that rests on the floor. These are generally more rigid than a knee mill. Gantry mills can be included in this bed mill category.

Box mill or column mill: Very basic hobbyist bench-mounted milling machines that feature a head riding up and down on a column or box way.

C-frame mill: These are larger, industrial production mills. They feature a knee and a fixed spindle head that is only mobile vertically. They are typically much more powerful than a turret mill, featuring a separate hydraulic motor for integral hydraulic power feeds in all directions, and a twenty to fifty horsepower motor. Backlash eliminators are almost always standard equipment.
They use large NMTB 40 or 50 tooling. The tables on C-frame mills are usually 18" by 68" or larger, to allow multiple parts to be machined at the same time. Floor mill These have a row of rotary tables, and a horizontal pendant spindle mounted on a set of tracks that runs parallel to the table row. These mills have predominantly been converted to CNC, but some can still be found (if one can even find a used machine available) under manual control. The spindle carriage moves to each individual table, performs the machining operations, and moves to the next table while the previous table is being set up for the next operation. Unlike other mills, floor mills have movable floor units. A crane drops massive rotary tables, X-Y tables, etc., into position for machining, allowing large and complex custom milling operations. Gantry mill The milling head rides over two rails (often steel shafts) which lie at each side of the work surface. Due to its design, it usually has a very small footprint compared to the machine travel size. As a downside, gantry mills are usually not as rigid as, for example, C-frame mills. Horizontal boring mill Large, accurate bed horizontal mills that incorporate many features from various machine tools. They are predominantly used to create large manufacturing jigs, or to modify large, high-precision parts. They have a spindle stroke of several (usually between four and six) feet, and many are equipped with a tailstock to perform very long boring operations without losing accuracy as the bore increases in depth. A typical bed has X and Y travel, and is between three and four feet square with a rotary table, or a larger rectangle without a table. The pendant usually provides between four and eight feet of vertical movement. Some mills have a large (30" or more) integral facing head. Right-angle rotary tables and vertical milling attachments are available for further flexibility. Jig borer Vertical mills that are built to bore holes and to perform very light slot or face milling. They are typically bed mills with a long spindle throw. The beds are more accurate, and the handwheels are graduated down to .0001" for precise hole placement. Knee mill or knee-and-column mill refers to any milling machine whose X-Y table rides up and down the column on a vertically adjustable knee. This includes Bridgeports. Planer-style mill (Plano milling) Large mills built in the same configuration as planers, except with a milling spindle instead of a planing head. This term is growing dated, as planers themselves are largely a thing of the past. Ram-type mill This can refer to any mill that has a cutting head mounted on a sliding ram. The spindle can be oriented either vertically or horizontally. In practice, most mills with rams also involve swiveling ability, whether or not it is called "turret" mounting. The Bridgeport configuration can be classified as a vertical-head ram-type mill. Van Norman Machine Tool Company specialized in ram-type mills through most of the 20th century. Since the wide dissemination of CNC machines, ram-type mills are still made in the Bridgeport configuration (with either manual or CNC control), but the less common variations (such as those built by Van Norman, Index, and others) have died out, their work being done now by either Bridgeport-form mills or machining centers. Turret mill More commonly referred to as Bridgeport-type milling machines. The spindle can be aligned in many different positions for a very versatile, if somewhat less rigid, machine.
Alternative terminology A milling machine is often called a mill by machinists. The archaic term miller was commonly used in the 19th and early 20th centuries. Since the 1960s there has been an overlap of usage between the terms milling machine and machining center. NC/CNC machining centers evolved from milling machines, which is why the terminology evolved gradually with considerable overlap that still persists. The distinction, when one is made, is that a machining center is a mill with features that pre-CNC mills never had, especially an automatic tool changer (ATC) that includes a tool magazine (carousel), and sometimes an automatic pallet changer (APC). In typical usage, all machining centers are mills, but not all mills are machining centers; only mills with ATCs are machining centers. Computer numerical control Most CNC milling machines (also called machining centers) are computer-controlled vertical mills with the ability to move the spindle vertically along the Z-axis. This extra degree of freedom permits their use in diesinking, engraving applications, and 2.5D surfaces such as relief sculptures. When combined with the use of conical tools or a ball nose cutter, it also significantly improves milling precision without impacting speed, providing a cost-efficient alternative to most flat-surface hand-engraving work. CNC machines can exist in virtually any of the forms of manual machinery, such as horizontal mills. The most advanced CNC milling machines, multiaxis machines, add two more axes in addition to the three normal axes (XYZ). Horizontal milling machines also have a C or Q axis, allowing the horizontally mounted workpiece to be rotated, essentially allowing asymmetric and eccentric turning. The fifth axis (B axis) controls the tilt of the tool itself. When all of these axes are used in conjunction with each other, extremely complicated geometries, even organic geometries such as a human head, can be made with relative ease with these machines. But the skill to program such geometries is beyond that of most operators. Therefore, 5-axis milling machines are practically always programmed with CAM. The operating system of such machines is a closed-loop system and functions on feedback. These machines developed from the basic numerical control (NC) machines; a computerized form of NC machine is known as a CNC machine. A set of instructions (called a program) is used to guide the machine for the desired operations. There are over 100 different G-codes and M-codes. Some very commonly used codes are listed below; a short example program combining several of them appears at the end of this section:
G00 – rapid traverse
G01 – linear interpolation of tool
G02 – circular arc clockwise (CW)
G03 – circular arc counter-clockwise (CCW)
G20 – dimensions in inches
G21 – dimensions in mm
G28 – return to reference point
G40 – tool compensation cancel
G41 – tool compensation left
G42 – tool compensation right
G43 – tool length compensation
G54 – select coordinate system #1
M03 – spindle start (clockwise)
M04 – spindle start (counter-clockwise)
M05 – spindle stop
M06 – tool change
M08 – coolant on
M09 – coolant off
M30 – program end
Various other codes are also used. A CNC machine is operated by a single operator, called a programmer. This machine is capable of performing various operations automatically and economically. With the declining price of computers and open-source CNC software, the entry price of CNC machines has plummeted.
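The following is a minimal sketch of how such codes combine into a program, written here as a short Python script that assembles the G-code text. The coordinates, feed rate (F), spindle speed (S), and tool number (T) values are illustrative assumptions, and the exact word syntax varies between controllers.

# A minimal, controller-agnostic sketch: assemble a G-code program that
# makes a single 50 mm straight cut, using the common codes listed above.
# All numeric values (coordinates, F, S, T) are illustrative assumptions.

def simple_program() -> str:
    lines = [
        "G21",            # dimensions in mm
        "G28",            # return to reference point
        "T01 M06",        # change to tool 1 (hypothetical tool number)
        "M03 S1200",      # start spindle clockwise at 1200 rpm
        "M08",            # coolant on
        "G00 X0 Y0 Z5",   # rapid traverse to 5 mm above the work
        "G01 Z-1 F100",   # feed down to 1 mm depth at 100 mm/min
        "G01 X50 F300",   # linear interpolation: cut 50 mm along X
        "G00 Z5",         # rapid retract clear of the work
        "M09",            # coolant off
        "M05",            # spindle stop
        "M30",            # program end
    ]
    return "\n".join(lines)

print(simple_program())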
Tooling The accessories and cutting tools used on machine tools (including milling machines) are referred to in aggregate by the mass noun "tooling". There is a high degree of standardization of the tooling used with CNC milling machines, and a lesser degree with manual milling machines. To ease the organization of tooling in CNC production, many companies use a tool management solution. Milling cutters for specific applications are held in various tooling configurations. CNC milling machines nearly always use SK (or ISO), CAT, BT or HSK tooling. SK tooling is the most common in Europe, while CAT tooling, sometimes called V-flange tooling, is the oldest and probably most common type in the USA. CAT tooling was invented by Caterpillar Inc. of Peoria, Illinois, in order to standardize the tooling used on their machinery. CAT tooling comes in a range of sizes designated as CAT-30, CAT-40, CAT-50, etc. The number refers to the Association for Manufacturing Technology (formerly the National Machine Tool Builders Association (NMTB)) taper size of the tool. An improvement on CAT tooling is BT tooling, which looks similar and can easily be confused with CAT tooling. Like CAT tooling, BT tooling comes in a range of sizes and uses the same NMTB body taper. However, BT tooling is symmetrical about the spindle axis, which CAT tooling is not. This gives BT tooling greater stability and balance at high speeds. One other subtle difference between these two toolholders is the thread used to hold the pull stud: CAT tooling is all imperial thread and BT tooling is all metric thread. Note that this affects the pull stud only; it does not affect the tools they can hold. Both types of tooling are sold to accept both imperial- and metric-sized tools. HSK tooling, sometimes called "hollow shank tooling", is much more common in Europe, where it was invented, than in the United States. It is claimed that HSK tooling is even better than BT tooling at high speeds. The holding mechanism for HSK tooling is placed within the (hollow) body of the tool: as spindle speed increases, it expands, gripping the tool more tightly. There is no pull stud with this type of tooling. For manual milling machines, there is less standardization, because a greater number of formerly competing standards exist. Newer and larger manual machines usually use NMTB tooling. This tooling is somewhat similar to CAT tooling but requires a drawbar within the milling machine. Furthermore, there are a number of variations with NMTB tooling that make interchangeability troublesome. The older a machine, the more varied the standards that may apply (e.g., Morse, Jarno, Brown & Sharpe, Van Norman, and other less common builder-specific tapers). However, two standards that have seen especially wide usage are the Morse #2 and the R8, whose prevalence was driven by the popularity of the mills built by Bridgeport Machines of Bridgeport, Connecticut. These mills so dominated the market for such a long time that "Bridgeport" is virtually synonymous with "manual milling machine". Most of the machines that Bridgeport made between 1938 and 1965 used a Morse taper #2, and from about 1965 onward most used an R8 taper. Many cutting tools exist for milling machines, including milling cutters, slitting cutters, gear cutters, end mills, etc. Accessories Arbor support Stop block CNC pocket milling Pocket milling has been regarded as one of the most widely used operations in machining.
It is extensively used in the aerospace and shipyard industries. In pocket milling, the material inside an arbitrarily closed boundary on a flat surface of a workpiece is removed to a fixed depth. Generally, flat-bottom end mills are used for pocket milling. First, a roughing operation removes the bulk of the material, and then the pocket is finished with a finishing end mill. Most industrial milling operations can be handled by 2.5-axis CNC milling; this type of path control can machine up to 80% of all mechanical parts. Because pocket milling is so widely used, effective pocketing approaches can significantly reduce machining time and cost. NC pocket milling can be carried out mainly by two kinds of tool path: linear and non-linear. Linear tool path In this approach, the tool movement is unidirectional. Zig-zag and zig tool paths are examples of linear tool paths. Zig-zag In zig-zag milling, material is removed in both the forward and backward paths. In this case, cutting is done both with and against the rotation of the spindle. This reduces machining time but increases machine chatter and tool wear. (A sketch of a simple zig-zag tool-path generator appears at the end of this section.) Zig In zig milling, the tool moves in only one direction. The tool has to be lifted and retracted after each cut, which increases machining time. However, surface quality is better with zig milling. Non-linear tool path In this approach, tool movement is multi-directional. One example of a non-linear tool path is the contour-parallel tool path. Contour-parallel In this approach, the required pocket boundary is used to derive the tool path. In this case, the cutter is always in contact with the work material, so the idle time spent in positioning and retracting the tool is avoided. For large-scale material removal, the contour-parallel tool path is widely used because it can apply the up-cut or down-cut method consistently during the entire process. There are three different approaches that fall into the category of contour-parallel tool-path generation: Pair-wise intersection approach: In the pair-wise intersection approach, the boundary of the pocket is brought inwards in steps. The offset segments will intersect at concave corners; to obtain the required contour, these intersections are trimmed off. At convex corners, on the other hand, the offset segments are extended and thereby connected to make the contour. These operations (offsetting, trimming and extending) are repeated until the entire machining volume is covered with a sufficient number of profile layers. Voronoi diagram approach: In the Voronoi diagram approach, the pocket boundary is segmented and a Voronoi diagram is constructed for the entire pocket boundary. These Voronoi diagrams are used for generating the tool path for machining. This method is considered to be more efficient and robust; moreover, it avoids the topological problems associated with traditional offsetting algorithms. Curvilinear In this approach, the tool travels along a gradually evolving spiral path. The spiral starts at the center of the pocket to be machined and the tool gradually moves towards the pocket boundary. The direction of the tool path changes progressively, and local acceleration and deceleration of the tool are minimized, which reduces tool wear.
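As a concrete illustration of the linear (zig-zag) strategy, here is a minimal sketch in Python for the simplest possible case, an axis-aligned rectangular pocket. The function name, the 50% stepover default, and the example dimensions are assumptions chosen for illustration; real pocketing algorithms must also handle arbitrary boundaries, islands, multiple depth passes, and lead-in moves.

# A minimal sketch of zig-zag (linear) tool-path generation for a simple
# rectangular pocket, assuming a flat-bottom end mill and a constant
# radial stepover expressed as a fraction of the tool diameter.

from typing import List, Tuple

def zigzag_path(x0: float, y0: float, x1: float, y1: float,
                tool_diameter: float, stepover: float = 0.5) -> List[Tuple[float, float]]:
    """Return XY waypoints covering the rectangle, alternating direction
    on each pass so the tool cuts on both forward and backward strokes."""
    r = tool_diameter / 2.0
    step = stepover * tool_diameter          # distance between passes
    # Offset the boundary inward by the tool radius so the cutter
    # never crosses the pocket wall.
    xa, xb = x0 + r, x1 - r
    y = y0 + r
    points, forward = [], True
    while y <= y1 - r:
        if forward:
            points += [(xa, y), (xb, y)]
        else:
            points += [(xb, y), (xa, y)]
        forward = not forward
        y += step
    return points

# Example: 60 mm x 40 mm pocket, 10 mm end mill, 50% stepover,
# emitted as linear interpolation (G01) moves.
for x, y in zigzag_path(0, 0, 60, 40, tool_diameter=10):
    print(f"G01 X{x:.1f} Y{y:.1f}")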
History 1780-1810 Milling machines evolved from the practice of rotary filing—that is, running a circular cutter with file-like teeth in the headstock of a lathe. Rotary filing and, later, true milling were developed to reduce the time and effort spent hand-filing. The full story of milling machine development may never be known, because much early development took place in individual shops where few records were kept for posterity. However, the broad outlines are known, as summarized below. From a history-of-technology viewpoint, it is clear that the naming of this new type of machining with the term "milling" was an extension of that word's earlier senses of processing materials by abrading them in some way (cutting, grinding, crushing, etc.). Rotary filing long predated milling. A rotary file by Jacques de Vaucanson, circa 1760, is well known. In 1783, Samuel Rehe invented a true milling machine. In 1795, Eli Terry began using a milling machine at Plymouth, Connecticut, in the production of tall case clocks. With the use of his milling machine, Terry was the first to achieve interchangeable parts in the clock industry. Milling wooden parts worked well for interchangeability but gave poor yields: the machine's single blade would cause the loss of gear teeth when the cutter hit parallel grain in the wood. Terry later invented a spindle cutting machine to mass-produce parts, in 1807. Other Connecticut clockmakers, like James Harrison of Waterbury, Thomas Barnes of Litchfield, and Gideon Roberts of Bristol, also used milling machines to produce their clocks. 1810s–1830s It is clear that milling machines as a distinct class of machine tool (separate from lathes running rotary files) first appeared between 1814 and 1818. The centers of earliest development of true milling machines were two federal armories of the U.S. (Springfield and Harpers Ferry), together with the various private armories and inside contractors that shared turnover of skilled workmen with them. Between 1912 and 1916, Joseph W. Roe, a respected founding father of machine tool historians, credited Eli Whitney (one of the private arms makers mentioned above) with producing the first true milling machine. By 1918, he considered it "Probably the first milling machine ever built—certainly the oldest now in existence […]." However, subsequent scholars, including Robert S. Woodbury, have improved upon Roe's early version of the history and suggest that just as much credit—in fact, probably more—belongs to various other inventors, including Robert Johnson of Middletown, Connecticut; Captain John H. Hall of the Harpers Ferry armory; Simeon North of the Staddle Hill factory in Middletown; Roswell Lee of the Springfield armory; and Thomas Blanchard. (Several of the men mentioned above are sometimes described on the internet as "the inventor of the first milling machine" or "the inventor of interchangeable parts". Such claims are oversimplified, as these technologies evolved over time among many people.) Peter Baida, citing Edward A. Battison's article "Eli Whitney and the Milling Machine," which was published in the Smithsonian Journal of History in 1966, exemplifies the dispelling of the "Great Man" image of Whitney by historians of technology working in the 1950s and 1960s. He quotes Battison as concluding that "There is no evidence that Whitney developed or used a true milling machine." Baida says, "The so-called Whitney machine of 1818 seems actually to have been made after Whitney's death in 1825."
Baida cites Battison's suggestion that the first true milling machine was made not by Whitney, but by Robert Johnson of Middletown. The late 1810s were a pivotal time in the history of machine tools: the period of 1814 to 1818 is also the period during which several contemporary pioneers (Fox, Murray, and Roberts) were developing the planer, and as with the milling machine, the work being done in various shops was undocumented, partly because of proprietary secrecy and partly simply because no one was taking down records for posterity. James Nasmyth built a milling machine very advanced for its time between 1829 and 1831. It was tooled to mill the six sides of a hex nut that was mounted in a six-way indexing fixture. A milling machine built and used in the shop of Gay & Silver (aka Gay, Silver, & Co) in the 1830s was influential because it employed a better method of vertical positioning than earlier machines. For example, Whitney's machine (the one that Roe considered the very first) and others did not make provision for vertical travel of the knee. Evidently, the workflow assumption behind this was that the machine would be set up with shims, vise, etc. for a certain part design, and successive parts did not require vertical adjustment (or at most would need only shimming). This indicates that early thinking about milling machines treated them as production machines, not toolroom machines. In these early years, milling was often viewed as only a roughing operation to be followed by finishing with a hand file. The idea of reducing hand filing was more important than replacing it. 1840s–1860 Some of the key men in milling machine development during this era included Frederick W. Howe, Francis A. Pratt, Elisha K. Root, and others. (These same men during the same era were also busy developing the state of the art in turret lathes. Howe's experience at Gay & Silver in the 1840s acquainted him with early versions of both machine tools. His machine tool designs were later built at Robbins & Lawrence, the Providence Tool Company, and Brown & Sharpe.) The most successful milling machine design to emerge during this era was the Lincoln miller, which rather than being a specific make and model of machine tool is truly a family of tools built by various companies on a common configuration over several decades. It took its name from the first company to put one on the market, George S. Lincoln & Company (formerly the Phoenix Iron Works), whose first one was built in 1855 for the Colt armory. During this era there was a continued blind spot in milling machine design, as various designers failed to develop a truly simple and effective means of providing slide travel in all three of the archetypal milling axes (X, Y, and Z—or as they were known in the past, longitudinal, traverse, and vertical). Vertical positioning ideas were either absent or underdeveloped. The Lincoln miller's spindle could be raised and lowered, but the original idea behind its positioning was for it to be set up in position and then run, as opposed to being moved frequently while running. Like a turret lathe, it was a repetitive-production machine, with each skilled setup followed by extended periods of fairly low-skill operation. 1860s In 1861, Frederick W. Howe, while working for the Providence Tool Company, asked Joseph R. Brown of Brown & Sharpe for a solution to the problem of milling spirals, such as the flutes of twist drills. These were usually filed by hand at the time.
(Helical planing existed but was by no means common.) Brown designed a "universal milling machine" that, starting from its first sale in March 1862, was wildly successful. It solved the problem of 3-axis travel (i.e., the axes that we now call XYZ) much more elegantly than had been done in the past, and it allowed for the milling of spirals using an indexing head fed in coordination with the table feed. The term "universal" was applied to it because it was ready for any kind of work, including toolroom work, and was not as limited in application as previous designs. (Howe had designed a "universal miller" in 1852, but Brown's of 1861 is the one considered a groundbreaking success.) Brown also developed and patented (1864) the design of formed milling cutters in which successive sharpenings of the teeth do not disturb the geometry of the form. The advances of the 1860s opened the floodgates and ushered in modern milling practice. 1870s to World War I In these decades, Brown & Sharpe and the Cincinnati Milling Machine Company dominated the American milling machine field. However, hundreds of other firms also built milling machines at the time, and many were significant in various ways. Besides a wide variety of specialized production machines, the archetypal multipurpose milling machine of the late 19th and early 20th centuries was a heavy knee-and-column horizontal-spindle design with power table feeds, indexing head, and a stout overarm to support the arbor. The evolution of machine design was driven not only by inventive spirit but also by the constant evolution of milling cutters, which saw milestone after milestone from 1860 through World War I. World War I and interwar period Around the end of World War I, machine tool control advanced in various ways that laid the groundwork for later CNC technology. The jig borer popularized the ideas of coordinate dimensioning (dimensioning of all locations on the part from a single reference point), working routinely in "tenths" (ten-thousandths of an inch, 0.0001") as an everyday machine capability, and using the control to go straight from drawing to part, circumventing jig-making. In 1920 the new tracer design of J.C. Shaw was applied to Keller tracer milling machines for die-sinking via the three-dimensional copying of a template. This made die-sinking faster and easier, just as dies were coming into higher demand than ever before, and it was very helpful for large steel dies such as those used to stamp sheets in automobile manufacturing. Such machines translated the tracer movements to input for servos that worked the machine leadscrews or hydraulics. They also spurred the development of antibacklash leadscrew nuts. All of the above concepts were new in the 1920s but became routine in the NC/CNC era. By the 1930s, remarkably large and advanced milling machines existed, such as the Cincinnati Hydro-Tel, that presaged today's CNC mills in every respect except for CNC control itself. Bridgeport milling machine In 1936, Rudolph Bannow (1897–1962) conceived of a major improvement to the milling machine. His company commenced manufacturing a new knee-and-column vertical mill in 1938. This was the Bridgeport milling machine, often called a ram-type or turret-type mill because its head has sliding-ram and rotating-turret mounting. The machine became so popular that many other manufacturers created copies and variants. Furthermore, its name came to connote any such variant. The Bridgeport offered enduring advantages over previous models.
It was small enough, light enough, and affordable enough to be a practical acquisition for even the smallest machine shop businesses, yet it was also smartly designed, versatile, well-built, and rigid. Its various directions of sliding and pivoting movement allowed the head to approach the work from any angle. The Bridgeport's design became the dominant form for manual milling machines used by several generations of small- and medium-enterprise machinists. By the 1980s an estimated quarter-million Bridgeport milling machines had been built, and they (and their clones) are still being produced today. 1940s–1970s By 1940, automation via cams, such as in screw machines and automatic chuckers, had already been very well developed for decades. Beginning in the 1930s, ideas involving servomechanisms had been in the air, but it was especially during and immediately after World War II that they began to germinate (see also Numerical control > History). These were soon combined with the emerging technology of digital computers. This technological development milieu, spanning from the immediate pre–World War II period into the 1950s, was powered by the military capital expenditures that pursued contemporary advancements in the directing of gun and rocket artillery and in missile guidance—other applications in which humans wished to control the kinematics/dynamics of large machines quickly, precisely, and automatically. Sufficient R&D spending probably would not have happened within the machine tool industry alone, but for the latter applications the will and the means to spend were available. Once the development was underway, it was eagerly applied to machine tool control in one of the many post-WWII instances of technology transfer. In 1952, numerical control reached the developmental stage of laboratory reality. The first NC machine tool was a Cincinnati Hydrotel milling machine retrofitted with a scratch-built NC control unit. It was reported in Scientific American, just as another groundbreaking milling machine, the Brown & Sharpe universal, had been in 1862. During the 1950s, numerical control moved slowly from the laboratory into commercial service. For its first decade, it had rather limited impact outside of aerospace work. But during the 1960s and 1970s, NC evolved into CNC, data storage and input media evolved, computer processing power and memory capacity steadily increased, and NC and CNC machine tools gradually disseminated from an environment of huge corporations and mainly aerospace work to the level of medium-sized corporations and a wide variety of products. NC and CNC's drastic advancement of machine tool control deeply transformed the culture of manufacturing. The details (which are beyond the scope of this article) have evolved immensely with every passing decade. 1980s–present Computers and CNC machine tools continue to develop rapidly. The personal computer revolution had a great impact on this development. By the late 1980s small machine shops had desktop computers and CNC machine tools. Soon after, hobbyists, artists, and designers began obtaining CNC mills and lathes. Manufacturers have started producing economically priced CNC machines small enough to sit on a desktop, which can cut materials softer than stainless steel at high resolution. They can be used to make anything from jewelry to printed circuit boards to gun parts, even fine art.
Standards National and international standards are used to standardize the definitions, environmental requirements, and test methods used for milling. Selection of the standard to be used is an agreement between the supplier and the user and has some significance in the design of the mill. In the United States, ASME has developed the standards B5.45-1972 Milling Machines and B94.19-1997 Milling Cutters and End Mills. General tolerances include:
±0.005" (~0.1 mm) for local tolerances across most geometries
±0.010" (~0.25 mm) for plastics, with variation depending on the size of the part
0.030" (~0.75 mm) minimum wall thickness for metals
0.060" (~1.5 mm) minimum wall thickness for plastics
https://en.wikipedia.org/wiki/Evolution%20of%20bacteria
Evolution of bacteria
The evolution of bacteria has progressed over billions of years since the Precambrian, with their first major divergence from the archaeal/eukaryotic lineage roughly 3.2-3.5 billion years ago. This was discovered through gene sequencing of bacterial nucleoids to reconstruct their phylogeny. Furthermore, evidence of permineralized microfossils of early prokaryotes was also discovered in the Australian Apex Chert rocks, dating back roughly 3.5 billion years to the Precambrian. This suggests that an organism of the phylum Thermotogota (formerly Thermotogae) was the most recent common ancestor of modern bacteria. Further chemical and isotopic analysis of ancient rock reveals that by the Siderian period, roughly 2.45 billion years ago, oxygen had appeared. This indicates that oceanic, photosynthetic cyanobacteria evolved during this period, because they were the first microbes to produce oxygen as a byproduct of their metabolic process. Therefore, this phylum is thought to have been predominant roughly 2.3 billion years ago. However, some scientists argue they could have evolved as early as 2.7 billion years ago, well before the Great Oxygenation Event, which would have given oxygen levels time to increase in the atmosphere before the event altered the ecosystem. The rise in atmospheric oxygen led to the evolution of Pseudomonadota (formerly Proteobacteria). Today this phylum includes many nitrogen-fixing bacteria, pathogens, and free-living microorganisms. This phylum evolved approximately 1.5 billion years ago, during the Paleoproterozoic era. However, there are still many conflicting theories surrounding the origins of bacteria. Even though microfossils of ancient bacteria have been discovered, some scientists argue that the lack of identifiable morphology in these fossils means they cannot be used to draw conclusions about an accurate evolutionary timeline of bacteria. Nevertheless, more recent technological developments mean that more evidence has been discovered. Defining bacteria Bacteria are prokaryotic microorganisms that can have a bacillus (rod), spirillum (spiral), or coccus (spherical) shape and measure between 0.5 and 20 micrometers. They were among the first living cells to evolve and have spread to inhabit a variety of different habitats, including hydrothermal vents, glacial rocks, and other organisms. They share characteristics with eukaryotic cells, including the cytoplasm, cell membrane, and ribosomes. Some unique bacterial features include the cell wall (also found in plants and fungi), flagella (not common to all bacteria), and the nucleoid. Bacteria can metabolise in different ways, most commonly by heterotrophic or autotrophic (either photosynthetic or chemosynthetic) processes. Bacteria reproduce through binary fission, though they can still share genetic information between individuals by transduction, transformation, or conjugation. Process of bacterial evolution Bacteria evolve by a process similar to that of other organisms: natural selection, whereby beneficial adaptations are passed on to future generations until the trait becomes common within the entire population. However, since bacteria reproduce via binary fission—a form of asexual reproduction—the daughter cell and parent cell are genetically identical. This makes bacteria susceptible to environmental pressures, an issue that is overcome by sharing genetic information via transduction, transformation, or conjugation.
This allows new genetic and physical adaptations to develop, enabling bacteria to adapt to their environment and evolve. Furthermore, bacteria can reproduce in as little as 20 minutes, which allows for fast adaptation, meaning new strains of bacteria can evolve quickly. This has become an issue with regard to antibiotic-resistant bacteria. Thermotogales Thermotogota bacteria are typically thermophilic or hyperthermophilic, gram-negative-staining, anaerobic organisms that can live near hydrothermal vents, where temperatures can range between 55 and 95 °C. They are thought to be some of the earliest forms of life. Evidence of bacteria has been discovered in the Australian Apex Chert near ancient hydrothermal vents. These rocks date back 3.46 billion years and, because oxygen was not present in large quantities in Earth's early atmosphere, these fossils are thought to represent early thermophilic bacteria, which do not require oxygen to survive. Furthermore, living species such as Thermotoga neapolitana, which are thought to resemble their ancestral form, live around these vents, which some scientists have used as evidence to support this theory. More recent evidence suggests that Thermotogales evolved roughly between 3.2 and 3.5 billion years ago. This evidence was collected via gene sequencing of bacterial nucleoids to reconstruct their phylogeny. The first major divergence within the Thermotogales was between Thermotogaceae and Fervidobacteriaceae; however, when this occurred has yet to be determined. The family Thermotogaceae then diverged into the genus Thermotoga and the genus Pseudothermotoga. The genus Thermotoga represents the majority of existing hyperthermophiles and is unique in that its members are wrapped in an outer membrane referred to as a "toga". Some extant Thermotoga species include T. neapolitana. Thermotogale phylogeny The phylogeny is based on the work of the All-Species Living Tree Project. Cyanobacteria Cyanobacteria, or blue-green algae, are a phylum of gram-negative, photosynthetic bacteria that evolved between 2.3 and 2.7 billion years ago. These prokaryotes produce oxygen as a byproduct of their photosynthetic processes. They have made a distinctive impact on the pharmaceutical and agricultural industries due to their potential for making bioactive compounds with antibacterial, anti-fungal, antiviral, and anti-algal properties. Typically they form motile filaments referred to as hormogonia, which can form colonies and then bud and travel to colonise new areas. They have been located in environments including freshwater, oceans, soil and rock (both damp and dry), as well as arctic rock. These organisms evolved photosynthetic reaction centres and became the first oxygen-producing autotrophs to appear in the fossil record. They utilise sunlight to drive their metabolic processes, which removes carbon dioxide from the atmosphere and releases oxygen. Due to this trait, some scientists credit this phylum with causing the Great Oxygenation Event roughly 2.3 billion years ago. However, the closest known relatives of oxygen-producing Cyanobacteria did not produce oxygen. These relatives are Melainabacteria and Sericytochromatia, neither of which can photosynthesise. Through genetic sequencing, scientists discovered that these two groups did not have any remnants of the genes required for the functioning of photosynthetic reactions. This suggests that Cyanobacteria, Melainabacteria, and Sericytochromatia evolved from a non-photosynthetic common ancestor.
https://en.wikipedia.org/wiki/HWB%20color%20model
HWB color model
HWB (Hue, Whiteness, Blackness) is a cylindrical-coordinate representation of points in an RGB color model, similar to HSL and HSV. It was developed by HSV's creator Alvy Ray Smith in 1996 to address some of the issues with HSV. HWB was designed to be more intuitive for humans to use and slightly faster to compute. The first coordinate, H (Hue), is the same as the Hue coordinate in HSL and HSV. W and B stand for Whiteness and Blackness respectively and range from 0–100% (or 0–1). The mental model is that the user can pick a main hue and then "mix" it with white and/or black to produce the desired color. HWB was included in the CSS Color Module Level 4 in 2014. Conversion HWB is very closely related to HSV, and therefore the conversion formulas are fairly simple (a short sketch appears at the end of this article). Before conversion from HWB, if the sum of whiteness and blackness exceeds 100%, both components must be scaled back proportionally to make the sum 100%. Swatches The CSS Color Module Level 4 draft specification includes a number of HWB example color swatches.
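To make the conversion concrete, here is a minimal sketch of HWB-to-RGB conversion in Python, following the relationships described above: convert the hue at full saturation and value, then mix in the white and black components. The function name is my own; hue is taken in degrees, with w, b, and the returned channels as fractions in [0, 1].

# A minimal sketch of HWB -> RGB conversion. The pure hue (full saturation
# and value) is computed first, then scaled and offset by the whiteness
# and blackness components.

import colorsys

def hwb_to_rgb(h: float, w: float, b: float) -> tuple:
    # Normalize: if whiteness + blackness exceeds 100%, scale both back
    # proportionally so that they sum to 100%.
    if w + b > 1:
        w, b = w / (w + b), b / (w + b)
    # Pure hue: the fully saturated, full-value color for this hue angle.
    pr, pg, pb = colorsys.hsv_to_rgb(h / 360.0, 1.0, 1.0)
    # Mix: scale the pure color by (1 - w - b), then add the white component.
    return tuple(c * (1 - w - b) + w for c in (pr, pg, pb))

# Example: pure red hue mixed with 20% white and 40% black.
print(hwb_to_rgb(0, 0.2, 0.4))  # approximately (0.6, 0.2, 0.2)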
https://en.wikipedia.org/wiki/Pollen%20beetle
Pollen beetle
Pollen beetle is an informal term for any species of beetle associated intimately with suitably pollen-rich species of flowers. Typically such a beetle is adapted to pollen as a major part of its diet. Species in at least sixteen families of the order Coleoptera could be counted as pollen beetles, and "pollen beetle" is also a common name for some such species. Pollen feeding Many insects of small to modest size benefit from feeding on the pollen of suitable plants; pollen tends to be rich in protein, plentiful in season, and easily available. Many plants, such as some members of the Araceae and Zamiaceae, produce large amounts of nutritious pollen while producing little or no nectar. The pollen of such plants attracts and nourishes specialist pollen feeders that perform pollination, and many of those pollen feeders are beetles. In accessing the pollen of such plants, many pollen-feeding insects form mutually beneficial ecological relationships. Others, however, damage the flowers, reducing seed set and harming the crop rather than benefiting it. The nutritious pollen of plants adapted to such pollinators contrasts with the also copious pollen of specialist anemophilous (wind-pollinated) plants. Strict anemophiles commonly produce pollen that is low in nutritive value, since it is not adapted to attract or feed pollinators. Furthermore, as an adaptation to being borne effectively by the wind, anemophilous pollen must be very light. Beyond this loose usage, "pollen beetle" is also the particular common name for certain species of beetle, mainly a few members of the families Melyridae, Nitidulidae and Oedemeridae. However, there are many others that, though not commonly called "pollen beetles", do feed avidly on pollen, including species in the following families:
Alleculidae
Buprestidae
Cantharidae
Cerambycidae
Cleridae
Curculionidae
Dermestidae
Languriidae
Meloidae
Mordellidae
Phalacridae
Scarabaeidae (tribe Hopliini in particular)
Stenotrachelidae (synonym Cephaloidae)
Pollen beetles in the Melyridae Among the Melyridae the most prominent example of a species commonly referred to as a pollen beetle is Astylus atromaculatus. It feeds particularly on the large amounts of pollen produced by members of the grass family Poaceae, and related wind-pollinated families such as the Cyperaceae. It does, however, also visit many other flowers, such as members of the Asteraceae. It is not clear whether it causes more harm to crops by eating pollen and young tissues than it does good by promoting pollination. Another species of "pollen beetle" in the family Melyridae is Dicranolaius bellulus. It is more of a predator than, say, Astylus species, and is accordingly valued as an agent in pest management programs, but pollen is a major part of its diet. Pollen beetles in the Nitidulidae Several members of the Nitidulidae are similarly referred to as pollen beetles. In particular, practically all the species in the genus Meligethes are called pollen beetles. Of these the most notorious is probably the one now accepted as being properly called Meligethes aeneus/Brassicogethes aeneus. It is called the "brassica pollen beetle", "rape pollen beetle" and similar names, because it first became prominent as a pest of such crops in the first quarter of the 20th century. Pollen beetles in the Oedemeridae Many species in the family Oedemeridae are similarly called "pollen beetles". For example, the southern pollen beetle in the United States is Oxycopis notoxoides.
https://en.wikipedia.org/wiki/Power-to-X
Power-to-X
Power-to-X (also P2X and P2Y) refers to electricity conversion, energy storage, and reconversion pathways that use surplus renewable energy. Power-to-X conversion technologies allow for the decoupling of power from the electricity sector for use in other sectors (such as transport or chemicals), possibly using power that has been provided by additional investments in generation. The term is widely used in Germany and may have originated there. The X in the terminology can refer to one of the following: power-to-ammonia, power-to-chemicals, power-to-fuel, power-to-gas (power-to-hydrogen, power-to-methane), power-to-liquid (synthetic fuel), power-to-food, and power-to-heat. Electric vehicle charging, space heating and cooling, and water heating can be shifted in time to match generation, forms of demand response that can be called power-to-mobility and power-to-heat. Collectively, power-to-X schemes which use surplus power fall under the heading of flexibility measures and are particularly useful in energy systems with high shares of renewable generation and/or with strong decarbonization targets. A large number of pathways and technologies are encompassed by the term. In 2016 the German government funded a €30 million first-phase research project into power-to-X options. Power-to-fuel Surplus electric power can be converted to gas fuel energy for storage and reconversion. Direct-current electrolysis of water (efficiency 80–85% at best) can be used to produce hydrogen, which can, in turn, be converted to methane (CH4) via methanation. Another possibility is converting the hydrogen, along with CO2, to methanol. Both these fuels can be stored and used to produce electricity again, hours to months later. Storage and reconversion of power-to-fuel Hydrogen and methane can be used as downstream fuels, fed into the natural gas grid, or used to make synthetic fuel. Alternatively they can be used as a chemical feedstock, as can ammonia (NH3). Reconversion technologies include gas turbines, combined cycle plants, reciprocating engines, and fuel cells. Power-to-power refers to the round-trip reconversion efficiency. For hydrogen storage, the round-trip efficiency remains limited at 35–50% (see the worked example at the end of this article). Electrolysis is expensive, and power-to-gas processes need substantial full-load hours to be economic. However, while the round-trip conversion efficiency of power-to-power is lower than with batteries and electrolysis can be expensive, storage of the fuels themselves is quite inexpensive. This means that large amounts of energy can be stored for long periods of time with power-to-power, which is ideal for seasonal storage. This could be particularly useful for systems with high variable renewable energy penetration, since many areas have significant seasonal variability of solar, wind, and run-of-the-river hydroelectric generation. Batteries Despite also being based fundamentally on electrolytic chemical reactions, battery storage is not normally considered a power-to-fuel concept. Power-to-heat The purpose of power-to-heat systems is to utilize excess electricity generated by renewable energy sources which would otherwise be wasted. Depending on the context, the energy can either be stored as heat or delivered as heat to meet a need. Heating systems In contrast to simple electric heating systems such as night storage heaters, which cover the complete heating requirement, power-to-heat systems are hybrid systems that additionally include traditional heating systems using chemical fuels like wood or natural gas.
When there is excess electricity, heat is produced from electric energy; otherwise, the traditional heating system is used. In order to increase flexibility, power-to-heat systems are often coupled with heat accumulators. The heat is supplied for the most part through local and district heating networks. Power-to-heat systems are also able to supply buildings or industrial systems with heat. Power-to-heat involves contributing to the heat sector, either by resistance heating or via a heat pump. Resistance heaters have unity efficiency, and the corresponding coefficient of performance (COP) of heat pumps is 2–5. Back-up immersion heating of both domestic hot water and district heating offers a cheap way of using surplus renewable energy and will often displace carbon-intensive fossil fuels for the task. Large-scale heat pumps in district heating systems with thermal energy storage are an especially attractive option for power-to-heat: they offer exceptionally high efficiency for balancing excess wind and solar power, and they can be profitable investments. Heat storage systems Other forms of power-to-X Power-to-mobility refers to the charging of battery electric vehicles (BEVs). Given the expected uptake of EVs, dedicated dispatch will be required. As vehicles are idle for most of the time, shifting the charging time can offer considerable flexibility: the charging window is a relatively long 8–12 hours, whereas the charging duration is around 90 minutes. The EV batteries can also be discharged to the grid to make them work as electricity storage devices, but this causes additional wear to the battery. Impact According to the German concept of sector coupling, interconnecting all the energy-using sectors will require the digitalisation and automation of numerous processes to synchronise supply and demand. A 2023 study examined the role that power-to-X could play in a highly renewable future energy system for Japan. The P2X technologies considered include water electrolysis, methanation, Fischer–Tropsch synthesis, and Haber–Bosch synthesis, and the study used linear programming to determine least-cost system structure and operation. Results indicate that these various P2X technologies can effectively shift electricity loads and reduce curtailment by 80% or more.
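As a back-of-the-envelope illustration of the efficiency figures quoted above, the following Python sketch works through the power-to-hydrogen-to-power round trip and compares the heat delivered per unit of surplus electricity by resistance heating versus a heat pump. The specific values chosen from within the quoted ranges (80% electrolysis efficiency, 50% reconversion efficiency, COPs of 2 and 5) are assumptions for illustration only.

# A back-of-the-envelope sketch of the efficiency figures quoted above.
# All specific values within the quoted ranges are illustrative assumptions.

ELECTROLYSIS_EFF = 0.80   # electrolysis: 80-85% at best (lower end assumed)
RECONVERSION_EFF = 0.50   # e.g. fuel cell or combined cycle (assumed)

surplus_kwh = 100.0       # surplus renewable electricity to absorb

# Power-to-power: electricity -> hydrogen -> electricity.
h2_kwh = surplus_kwh * ELECTROLYSIS_EFF
power_back_kwh = h2_kwh * RECONVERSION_EFF
print(f"Round-trip electricity recovered: {power_back_kwh:.0f} kWh "
      f"({power_back_kwh / surplus_kwh:.0%})")  # 40%, inside the 35-50% range

# Power-to-heat: resistance heating (unity efficiency) vs heat pump (COP 2-5).
for cop in (1.0, 2.0, 5.0):
    label = "resistance heater" if cop == 1.0 else f"heat pump, COP {cop:g}"
    print(f"{label}: {surplus_kwh * cop:.0f} kWh of heat delivered")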
https://en.wikipedia.org/wiki/Xenacoelomorpha
Xenacoelomorpha
Xenacoelomorpha () is a small phylum of bilaterian invertebrate animals, consisting of two sister groups: xenoturbellids and acoelomorphs. The phylum was named in February 2011, proposed on the basis of morphological synapomorphies (physical features shared by the animals in the clade), and then confirmed by phylogenomic analyses of molecular data (similarities in the DNA of the animals within the clade). Phylogenetics Prior to molecular studies, xenacoelomorphs were considered to be flatworms based on their superficial similarities. Like flatworms, they do not have a coelom and are dorsoventrally flattened. With the advent of phylogenetics, Xenoturbella and Acoelomorpha were found to be sister groups and only distantly related to flatworms. Initially this phylum was considered to be a member of the deuterostomes, but recent transcriptome analyses have led to the conclusion that Xenacoelomorpha is the sister group to the Nephrozoa, which includes both the protostomes and the deuterostomes; this makes the phylum the basalmost bilaterian clade, meaning its members are neither deuterostomes nor protostomes. Their larvae show similarities with cnidarian planula larvae and poriferan parenchymella larvae, but it is not clear whether the similarities are ancestral or derived. However, some studies point out that their basal placement may be an artifact of high mutation rates leading to long branch attraction (LBA). These analyses suggest that the xenacoelomorphs are instead the sister group of Ambulacraria, forming the clade Xenambulacraria, and that despite their simple body plans they actually derive from a more complex ancestor. Sampling a larger number of species within this group would allow better conclusions and analyses to be made within the phylum and in groups closely related to it. Internal phylogeny For multiple decades, the genus Xenoturbella contained only one species, X. bocki. In 2016, however, a team reported the discovery of four new species from the Gulf of California and sequenced each new species' mitogenome; upon analysis they found that the two species living in shallow water (X. bocki and X. hollandorum) formed a "shallow" clade and that the three deep-water species formed a "deep" clade. The following year another team discovered a sixth species, X. japonica, found off the coast of Japan. Their phylogenetic analysis confirmed the first team's hypothesis and placed X. japonica within the shallow clade. The other two groups, Nemertodermatida and Acoela, have less clear relationships, as species-level phylogenies have not been conducted. Nemertodermatida has only two families and six total genera. Ascopariidae contains two of these genera, while Nemertodermatidae has the other four. A 2016 study analyzed three of the four nemertodermatid genera and found that Sterreria and Meara are closer to each other than to Nemertoderma, while Nemertinoides was left unplaced. Acoela phylogeny is even less certain, as it is by far the most diverse part of the phylum and is very understudied. A 2011 study attempted to solve this problem and recovered numerous traditional families as polyphyletic. They also recovered a tentative clade of various species from Actinoposthiidae and Isodiametridae. Several small basal families were not included in their study, and their position is still uncertain.
Characteristics The phylum consists of small, flat, worm-like creatures found in marine and sometimes brackish-water environments, on the sediments. There are species that are variously free-living, parasitic, and symbiotic. They can be found at great depths and near hydrothermal vents. The phylum is hermaphroditic (all individuals have both male and female sex organs) and reproduces sexually with direct development, meaning they skip a potentially vulnerable larval stage. Xenoturbellids have external fertilization, while acoelomorphs have internal fertilization. All xenacoelomorphs are bilateral, meaning they have a central front-to-back body axis with mirror-image right and left sides. They are triploblasts (meaning they have the three germ layers: ectoderm, endoderm, and mesoderm). Their body plan is acoelomate: they lack a coelom, that is, a true body cavity. An excretory system is also absent, yet all genes related to the excretory system are present except for Osr, which is essential for the development of such a system. In acoelomorphs, which have undergone rapid evolution and chromosomal rearrangements, about 60% of the genes shared between protostomes and deuterostomes are missing. Determining how many of these genes are present or absent in Xenoturbella will require whole-genome sequencing. While other animals that are diploblastic (having only two germ layers: ectoderm and endoderm) also lack a coelom, those technically do not have an acoelomate body plan, because they lack the mesoderm germ layer. In acoels, the mouth opens directly into a large endodermal syncytium, while in nemertodermatids and xenoturbellids there is a sack-like gut lined by unciliated cells. A defining feature is a digestive system lacking nerve cells. Because an enteric nervous system, also called the stomatogastric nervous system, is found in many cnidarians as well, its absence in xenacoelomorphs is most likely a derived trait. Their nervous systems are basiepidermal (located right under the epidermis) and they have no brain. The xenoturbellids' nervous system consists of a simple nerve net, with no special concentration of neurons. In acoelomorphs the nervous system is arranged in a series of longitudinal bundles, united in the anterior region by a ring commissure of variable complexity. The sensory organs include a statocyst (for balance). Some groups have two unicellular ocelli (simple eyes). The epidermis of all species within the phylum is ciliated. The cilia are composed of a set of nine pairs of peripheral microtubules and one or two central microtubules (patterns 9+1 and 9+2, respectively). The peripheral pairs 4–7 terminate before the tip, creating a structure called a "shelf".
https://en.wikipedia.org/wiki/Deforestation%20and%20climate%20change
Deforestation and climate change
Deforestation is a primary contributor to climate change, and climate change affects the health of forests. Land use change, especially in the form of deforestation, is the second largest source of carbon dioxide emissions from human activities, after the burning of fossil fuels. Greenhouse gases are emitted from deforestation during the burning of forest biomass and the decomposition of remaining plant material and soil carbon. Global models and national greenhouse gas inventories give similar results for deforestation emissions. Deforestation is responsible for about 11% of global greenhouse gas emissions. Carbon emissions from tropical deforestation are accelerating. When forests grow they are a carbon sink and therefore have the potential to mitigate the effects of climate change. Some of the effects of climate change, such as more wildfires, invasive species, and more extreme weather events, can lead to more forest loss. The relationship between deforestation and climate change is one of a positive (amplifying) climate feedback: the more trees are removed, the larger the effects of climate change become, which in turn results in the loss of more trees. Forests cover 31% of the land area on Earth. Every year, 75,700 square kilometers (18.7 million acres) of forest is lost. There was a 12% increase in the loss of primary tropical forests from 2019 to 2020. Deforestation has many causes and drivers. Examples include agricultural clearcutting, livestock grazing, logging for timber, and wildfires. Causes of deforestation Causes not linked to climate change Causes due to climate change Effects of deforestation on climate change aspects Irreversible deforestation would result in a permanent rise in the global surface temperature. Moreover, research suggests that standing tropical forests help cool the average global temperature. Deforestation of tropical forests may risk triggering tipping points in the climate system and forest ecosystem collapse, which would in turn have further effects on climate change. Several studies since the early 1990s have shown that large-scale deforestation north of 50°N leads to overall net global cooling, while tropical deforestation produces substantial warming. Carbon-centric metrics are therefore inadequate, because biophysical mechanisms other than CO2 impacts are important, especially the much higher albedo of bare high-latitude ground vis-à-vis intact forest. Deforestation, particularly in large swaths of the Amazon, where nearly 20% of the rainforest has been clear-cut, has climatic effects as well as effects on water sources and the soil. Moreover, the type of land use after deforestation also produces varied results: when deforested land is converted to pasture for livestock grazing, it has a greater effect on the ecosystem than forest-to-cropland conversion. Another effect of deforestation in the Amazon rainforest is a greater amount of carbon dioxide emissions. The Amazon rainforest absorbs one-fourth of the carbon dioxide emissions on Earth; however, due to deforestation, the amount absorbed today is about 30% less than it was in the 1990s. Modeling studies have concluded that there are two crucial thresholds that can lead to devastating effects in the Amazon rainforest: a rise in temperature and deforestation reaching a level of 40%. Forest fires Statistics have shown that there is a direct correlation between forest fires and deforestation.
Statistics regarding the Brazilian Amazon during the early 2000s have shown that fires and the air pollution that accompanies them mirror the patterns of deforestation, and that "high deforestation rates led to frequent fires". The Amazon rainforest has recently experienced fires occurring inside the forest, whereas wildfires tend to occur on its outer edges. Wetlands have faced an increase in forest fires as well. Due to the change in temperature, the climate around forests has become warmer and drier, conditions that allow forest fires to occur. Under unmitigated climate change, by the end of the century, 21% of the Amazon would be vulnerable to post-fire grass invasion. In 3% of the Amazon, fire return intervals are already shorter than the time required for grass exclusion by canopy recovery, implying a high risk of irreversible shifts to a fire-maintained, degraded grassy forest state. The south-eastern region of the Amazon is currently at highest risk of irreversible degradation. According to a study in tropical peatland forest of Borneo, deforestation also contributes to the increase in fire risk. Carbon sequestration through forestry Concerns with forestry projects Changes in rainfall As a consequence of reduced evapotranspiration, precipitation is also reduced. This implies a hotter and drier climate and a longer dry season. This change in climate has drastic ecological and global impacts, including increases in the severity and frequency of fires and disruption of the pollination process that will likely spread beyond the area of deforestation. According to a study published in 2023, tropical deforestation has led to a significant decrease in the amount of observed precipitation. By the year 2100, researchers anticipate that deforestation in the Congo will diminish regional precipitation levels by up to 8–10%. Decreasing albedo Deforestation changes the landscape and reflectivity of Earth's surface, i.e., decreasing albedo. This results in an increase in the absorption of light energy from the sun in the form of heat, enhancing global warming. Policies and programs to reduce deforestation Reducing emissions from deforestation and forest degradation in developing countries The Bali Action Plan The Bali Action Plan was developed in December 2007 in Bali, Indonesia. It is a direct result of the Kyoto Protocol of December 1997. One of the key elements of the Bali Action Plan involves a concerted effort by the member countries of the Kyoto Protocol to enact and create policy approaches that incentivize emissions reduction caused by deforestation and forest degradation in the developing world. It emphasized the importance of sustainable forest management and conservation practices in mitigating climate change. This was coupled with increased attention to carbon stocks as a way to provide additional resource flows to developing countries. Trillion Tree Campaign The Billion Tree Campaign was launched in 2006 by the United Nations Environment Programme (UNEP) as a response to the challenges of climate change, as well as to a wider array of sustainability challenges, from water supply to biodiversity loss. Its initial target was the planting of one billion trees in 2007. Only one year later, in 2008, the campaign's objective was raised to 7 billion trees, a target to be met by the climate change conference held in Copenhagen, Denmark, in December 2009. Three months before the conference, the mark of 7 billion planted trees had been surpassed.
In December 2011, after more than 12 billion trees had been planted, UNEP formally handed management of the program over to the not-for-profit Plant-for-the-Planet initiative, based in Munich, Germany.

The Amazon Fund (Brazil)
Physical sciences
Climate change
Earth science
39809523
https://en.wikipedia.org/wiki/Objective-C
Objective-C
Objective-C is a high-level, general-purpose, object-oriented programming language that adds Smalltalk-style message passing (messaging) to the C programming language. Originally developed by Brad Cox and Tom Love in the early 1980s, it was selected by NeXT for its NeXTSTEP operating system. Due to Apple macOS's direct lineage from NeXTSTEP, Objective-C was the standard language used, supported, and promoted by Apple for developing macOS and iOS applications (via their respective application programming interfaces (APIs), Cocoa and Cocoa Touch) from 1997, when Apple purchased NeXT, until the introduction of the Swift language in 2014. Objective-C programs developed for non-Apple operating systems, or that are not dependent on Apple's APIs, may also be compiled for any platform supported by the GNU Compiler Collection (GCC) or LLVM/Clang. Objective-C source code 'messaging/implementation' program files usually have .m filename extensions, while Objective-C 'header/interface' files have .h extensions, the same as C header files. Objective-C++ files are denoted with a .mm filename extension.

History

Objective-C was created mainly by Brad Cox and Tom Love in the early 1980s at their company Productivity Products International (PPI). Leading up to the creation of their company, both had been introduced to Smalltalk while at ITT Corporation's Programming Technology Center in 1981. The earliest work on Objective-C traces back to around that time. Cox was intrigued by problems of true reusability in software design and programming. He realized that a language like Smalltalk would be invaluable in building development environments for system developers at ITT. However, he and Tom Love also recognized that backward compatibility with C was critically important in ITT's telecom engineering milieu. Cox began writing a pre-processor for C to add some of the abilities of Smalltalk. He soon had a working implementation of an object-oriented extension to the C language, which he named Object-Oriented Pre-Compiler (OOPC). Love was hired by Schlumberger Research in 1982 and had the opportunity to acquire the first commercial copy of Smalltalk-80, which further influenced the development of their brainchild. To demonstrate that real progress could be made, Cox showed that making interchangeable software components really needed only a few practical changes to existing tools. Specifically, they needed to support objects in a flexible manner, come supplied with a usable set of libraries, and allow for the code (and any resources needed by the code) to be bundled into one cross-platform format. Love and Cox eventually formed PPI to commercialize their product, which coupled an Objective-C compiler with class libraries. In 1986, Cox published the main description of Objective-C in its original form in the book Object-Oriented Programming, An Evolutionary Approach. Although he was careful to explain that there is more to the problem of reusability than just what Objective-C provides, the language was often compared feature for feature with other languages.

Popularization through NeXT

In 1988, NeXT licensed Objective-C from StepStone (the new name of PPI, the owner of the Objective-C trademark) and extended the GCC compiler to support Objective-C. NeXT developed the Application Kit (AppKit) and Foundation Kit libraries on which the NeXTSTEP user interface and Interface Builder were based. While the NeXT workstations failed to make a great impact in the marketplace, the tools were widely lauded in the industry.
NeXT dropped hardware production and focused on software tools, selling NeXTSTEP (and OPENSTEP) as a platform for custom programming. To circumvent the terms of the GPL, NeXT had originally intended to ship the Objective-C frontend separately, allowing the user to link it with GCC to produce the compiler executable. Though initially accepted by Richard M. Stallman, this plan was rejected after Stallman consulted with GNU's lawyers, and NeXT agreed to make Objective-C part of GCC. The work to extend the GNU Compiler Collection (GCC) was led by Steve Naroff, who joined NeXT from StepStone. The compiler changes were made available under GNU General Public License (GPL) terms, but the runtime libraries were not, rendering the open source contribution unusable to the general public. This led other parties to develop such runtime libraries under open source licenses. Later, Steve Naroff was also a principal contributor to the work at Apple to build the Objective-C frontend to Clang. The GNU project started work on its free software implementation of Cocoa, named GNUstep, based on the OpenStep standard. Dennis Glatting wrote the first GNU Objective-C runtime in 1992. The current GNU Objective-C runtime, in use since 1993, is the one developed by Kresten Krab Thorup while he was a university student in Denmark. Thorup also worked at NeXT from 1993 to 1996.

Apple development and Swift

After acquiring NeXT in 1996, Apple Computer used OpenStep in its then-new operating system, Mac OS X. This included Objective-C, NeXT's Objective-C-based developer tool, Project Builder, and its interface design tool, Interface Builder. Both were later merged into one application, Xcode. Most of Apple's current Cocoa API is based on OpenStep interface objects and is the most significant Objective-C environment being used for active development. At WWDC 2014, Apple introduced a new language, Swift, which was characterized as "Objective-C without the C".

Syntax

Objective-C is a thin layer atop C and is a "strict superset" of C, meaning that it is possible to compile any C program with an Objective-C compiler and to freely include C language code within an Objective-C class. Objective-C derives its object syntax from Smalltalk. All of the syntax for non-object-oriented operations (including primitive variables, pre-processing, expressions, function declarations, and function calls) is identical to that of C, while the syntax for object-oriented features is an implementation of Smalltalk-style messaging.

Messages

The Objective-C model of object-oriented programming is based on message passing to object instances. In Objective-C, one does not call a method; one sends a message. This is unlike the Simula-style programming model used by C++. The difference between these two concepts is in how the code referenced by the method or message name is executed. In a Simula-style language, the method name is—in most cases—bound to a section of code in the target class by the compiler. In Smalltalk and Objective-C, the target of a message is resolved at runtime, with the receiving object itself interpreting the message. A method is identified by a selector or SEL—a unique identifier for each message name, often just a NUL-terminated string representing its name—and resolved to a C method pointer implementing it: an IMP. A consequence of this is that the message-passing system has no type checking. The object to which the message is directed—the receiver—is not guaranteed to respond to a message, and if it does not, it raises an exception.
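Because the receiver is resolved only at runtime, a sender can check for a selector before messaging. The following is a minimal sketch of that defensive pattern; respondsToSelector: and performSelector:withObject: are standard NSObject methods, while the notifyTarget function and the updateWithValue: selector are hypothetical names used only for illustration:

#import <Foundation/Foundation.h>

// Sends a hypothetical updateWithValue: message only if the receiver
// actually implements it; otherwise logs instead of raising an exception.
void notifyTarget(id target, id value) {
    SEL action = @selector(updateWithValue:); // hypothetical selector
    if ([target respondsToSelector:action]) {
        [target performSelector:action withObject:value];
    } else {
        NSLog(@"Target does not respond to %@", NSStringFromSelector(action));
    }
}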
Sending the message method to the object pointed to by the pointer obj would require the following code in C++:

obj->method(argument);

In Objective-C, this is written as follows:

[obj method:argument];

The "method" call is translated by the compiler to the objc_msgSend family of runtime functions. Different implementations handle modern additions like super. In GNU families, this function is named objc_msg_sendv, but it has been deprecated in favor of a modern lookup system under objc_msg_lookup.

Both styles of programming have multiple strengths and weaknesses. Object-oriented programming in the Simula (C++) style allows multiple inheritance and faster execution by using compile-time binding whenever possible, but it does not support dynamic binding by default. It also forces all methods to have a corresponding implementation unless they are abstract. The Smalltalk-style programming as used in Objective-C allows messages to go unimplemented, with the method resolved to its implementation at runtime. For example, a message may be sent to a collection of objects, to which only some will be expected to respond, without fear of producing runtime errors. Message passing also does not require that an object be defined at compile time. An implementation is still required for the method to be called in the derived object. (See the dynamic typing section below for more advantages of dynamic (late) binding.)

Interfaces and implementations

Objective-C requires that the interface and implementation of a class be in separately declared code blocks. By convention, developers place the interface in a header file and the implementation in a code file. The header files, normally suffixed .h, are similar to C header files, while the implementation (method) files, normally suffixed .m, can be very similar to C code files.

Interface

This is analogous to class declarations as used in other object-oriented languages, such as C++ or Python. The interface of a class is usually defined in a header file. A common convention is to name the header file after the name of the class; e.g. Thing.h would contain the interface for the class Thing. An interface declaration takes the form:

@interface classname : superclassname {
  // instance variables
}
+ classMethod1;
+ (return_type)classMethod2;
+ (return_type)classMethod3:(param1_type)param1_varName;
- (return_type)instanceMethod1With1Parameter:(param1_type)param1_varName;
- (return_type)instanceMethod2With2Parameters:(param1_type)param1_varName param2_callName:(param2_type)param2_varName;
@end

In the above, plus signs denote class methods, or methods that can be called on the class itself (not on an instance), and minus signs denote instance methods, which can only be called on a particular instance of the class. Class methods also have no access to instance variables. The code above is roughly equivalent to the following C++ interface:

class classname : public superclassname {
protected:
  // instance variables

public:
  // Class (static) functions
  static void *classMethod1();
  static return_type classMethod2();
  static return_type classMethod3(param1_type param1_varName);

  // Instance (member) functions
  return_type instanceMethod1With1Parameter(param1_type param1_varName);
  return_type instanceMethod2With2Parameters(param1_type param1_varName, param2_type param2_varName = default);
};

Note that instanceMethod2With2Parameters demonstrates the interleaving of selector segments with argument expressions, for which there is no direct equivalent in C/C++.
Return types can be any standard C type, a pointer to a generic Objective-C object, a pointer to a specific type of object such as NSArray *, NSImage *, or NSString *, or a pointer to the class to which the method belongs (instancetype). The default return type is the generic Objective-C type id. Method arguments begin with a name labeling the argument that is part of the method name, followed by a colon, followed by the expected argument type in parentheses and the argument name. The label can be omitted.

- (void)setRangeStart:(int)start end:(int)end;
- (void)importDocumentWithName:(NSString *)name
      withSpecifiedPreferences:(Preferences *)prefs
                    beforePage:(int)insertPage;

A derivative of the interface definition is the category, which allows one to add methods to existing classes.

Implementation

The interface only declares the class interface and not the methods themselves: the actual code is written in the implementation file. Implementation (method) files normally have the file extension .m, which originally signified "messages".

@implementation classname
+ (return_type)classMethod {
  // implementation
}
- (return_type)instanceMethod {
  // implementation
}
@end

Methods are written using their interface declarations. Comparing Objective-C and C:

- (int)method:(int)i {
  return [self square_root:i];
}

int function(int i) {
  return square_root(i);
}

The syntax allows pseudo-naming of arguments.

- (void)changeColorToRed:(float)red green:(float)green blue:(float)blue {
  //... Implementation ...
}

// Called like so:
[myColor changeColorToRed:5.0 green:2.0 blue:6.0];

Internal representations of a method vary between different implementations of Objective-C. If myColor is of the class Color, the instance method changeColorToRed:green:blue: might be internally labeled _i_Color_changeColorToRed_green_blue. The i is to refer to an instance method, with the class and then method names appended and colons changed to underscores. As the order of parameters is part of the method name, it cannot be changed to suit coding style or expression as with true named parameters. However, internal names of the function are rarely used directly. Generally, messages are converted to function calls defined in the Objective-C runtime library. It is not necessarily known at link time which method will be called, because the class of the receiver (the object being sent the message) need not be known until runtime.

Instantiation

Once an Objective-C class is written, it can be instantiated. This is done by first allocating an uninitialized instance of the class (an object) and then by initializing it. An object is not fully functional until both steps have been completed. These steps should be accomplished with one line of code so that there is never an allocated object that hasn't undergone initialization (and because it is unwise to keep the intermediate result, since -init can return a different object than that on which it is called). Instantiation with the default, no-parameter initializer:

MyObject *foo = [[MyObject alloc] init];

Instantiation with a custom initializer:

MyObject *foo = [[MyObject alloc] initWithString:myString];

In the case where no custom initialization is being performed, the "new" method can often be used in place of the alloc-init messages:

MyObject *foo = [MyObject new];

Also, some classes implement class method initializers. Like +new, they combine +alloc and -init, but unlike +new, they return an autoreleased instance.
Some class method initializers take parameters:

MyObject *foo = [MyObject object];
MyObject *bar = [MyObject objectWithString:@"Wikipedia :)"];

The alloc message allocates enough memory to hold all the instance variables for an object, sets all the instance variables to zero values, and turns the memory into an instance of the class; at no point during the initialization is the memory an instance of the superclass. The init message performs the set-up of the instance upon creation. The init method is often written as follows:

- (id)init {
    self = [super init];
    if (self) {
        // perform initialization of object here
    }
    return self;
}

In the above example, notice the id return type. This type stands for a pointer to any object in Objective-C (see the dynamic typing section below). The initializer pattern is used to assure that the object is properly initialized by its superclass before the init method performs its initialization. It performs the following actions:

Line 2: Sends the superclass instance an init message and assigns the result to self (a pointer to the current object).
Line 3: Checks if the returned object pointer is valid before performing any initialization.
Line 6: Returns the value of self to the caller.

A non-valid object pointer has the value nil; conditional statements like if treat nil like a null pointer, so the initialization code will not be executed if [super init] returned nil. If there is an error in initialization, the init method should perform any necessary cleanup, including sending a release message to self, and return nil to indicate that initialization failed. Any checking for such errors must only be performed after having called the superclass initialization, to ensure that destroying the object will be done correctly. If a class has more than one initialization method, only one of them (the designated initializer) needs to follow this pattern; others should call the designated initializer instead of the superclass initializer.

Protocols

In other programming languages, these are called interfaces. Objective-C was extended at NeXT to introduce the concept of multiple inheritance of specification, but not implementation, through the introduction of protocols. This is a pattern achievable either as an abstract multiply inherited base class in C++, or as an interface (as in Java and C#). Objective-C makes use of ad hoc protocols, called informal protocols, and compiler-enforced protocols, called formal protocols.

An informal protocol is a list of methods that a class can opt to implement. It is specified in the documentation, since it has no presence in the language. Informal protocols are implemented as a category (see below) on NSObject and often include optional methods which, if implemented, can change the behavior of a class. For example, a text field class might have a delegate that implements an informal protocol with an optional method for performing auto-completion of user-typed text. The text field discovers whether the delegate implements that method (via reflective programming (reflection)) and, if so, calls the delegate's method to support the auto-complete feature.

A formal protocol is similar to an interface in Java, C#, and Ada 2005. It is a list of methods that any class can declare itself to implement. Objective-C versions before 2.0 required that a class implement all methods in a protocol it declares itself as adopting; the compiler will emit an error if the class does not implement every method from its declared protocols.
Objective-C 2.0 added support for marking certain methods in a protocol optional, and the compiler will not enforce implementation of optional methods. A class must be declared to implement a protocol to be said to conform to it. This is detectable at runtime. Formal protocols cannot provide any implementations; they simply assure callers that classes that conform to the protocol will provide implementations. In the NeXT/Apple library, protocols are frequently used by the Distributed Objects system to represent the abilities of an object executing on a remote system. The syntax

@protocol NSLocking
- (void)lock;
- (void)unlock;
@end

denotes that there is the abstract idea of locking. By stating in the class definition that the protocol is implemented,

@interface NSLock : NSObject <NSLocking>
// ...
@end

instances of NSLock claim that they will provide an implementation for the two instance methods.

Dynamic typing

Objective-C, like Smalltalk, can use dynamic typing: an object can be sent a message that is not specified in its interface. This can allow for increased flexibility, as it allows an object to "capture" a message and send it to a different object that can respond to it appropriately, or likewise send the message on to another object. This behavior is known as message forwarding or delegation (see below). Alternatively, an error handler can be used in case the message cannot be forwarded. If an object does not forward a message, respond to it, or handle an error, then the system will generate a runtime exception. If messages are sent to nil (the null object pointer), they will be silently ignored or raise a generic exception, depending on compiler options.

Static typing information may also optionally be added to variables. This information is then checked at compile time. In the following four statements, increasingly specific type information is provided. The statements are equivalent at runtime, but the extra information allows the compiler to warn the programmer if the passed argument does not match the type specified.

- (void)setMyValue:(id)foo;

In the above statement, foo may be of any class.

- (void)setMyValue:(id<NSCopying>)foo;

In the above statement, foo may be an instance of any class that conforms to the NSCopying protocol.

- (void)setMyValue:(NSNumber *)foo;

In the above statement, foo must be an instance of the NSNumber class.

- (void)setMyValue:(NSNumber<NSCopying> *)foo;

In the above statement, foo must be an instance of the NSNumber class, and it must conform to the NSCopying protocol.

In Objective-C, all objects are represented as pointers, and static initialization is not allowed. The simplest object is the type that id (a pointer to struct objc_object) points to, which only has an isa pointer describing its class. Other types from C, such as primitive values and structs, are unchanged because they are not part of the object system. This decision differs from the C++ object model, where structs and classes are united.

Forwarding

Objective-C permits the sending of a message to an object that may not respond. Rather than responding or simply dropping the message, an object can forward the message to an object that can respond. Forwarding can be used to simplify the implementation of certain design patterns, such as the observer pattern or the proxy pattern.
The Objective-C runtime specifies a pair of methods in Object:

forwarding methods:
- (retval_t)forward:(SEL)sel args:(arglist_t)args; // with GCC
- (id)forward:(SEL)sel args:(marg_list)args; // with NeXT/Apple systems

action methods:
- (retval_t)performv:(SEL)sel args:(arglist_t)args; // with GCC
- (id)performv:(SEL)sel args:(marg_list)args; // with NeXT/Apple systems

An object wishing to implement forwarding needs only to override the forwarding method with a new method to define the forwarding behavior. The action method need not be overridden, as this method merely performs an action based on the selector and arguments. Notice the SEL type, which is the type of messages in Objective-C.

Note: in OpenStep, Cocoa, and GNUstep, the commonly used frameworks of Objective-C, one does not use the Object class. The forwardInvocation: method of the NSObject class is used to do forwarding.

Example

Here is an example of a program that demonstrates the basics of forwarding.

Forwarder.h

#import <objc/Object.h>

@interface Forwarder : Object {
    id recipient; // The object we want to forward the message to.
}

// Accessor methods.
- (id)recipient;
- (id)setRecipient:(id)_recipient;
@end

Forwarder.m

#import "Forwarder.h"

@implementation Forwarder
- (retval_t)forward:(SEL)sel args:(arglist_t)args {
    /*
     * Check whether the recipient actually responds to the message.
     * This may or may not be desirable, for example, if a recipient
     * in turn does not respond to the message, it might do forwarding
     * itself.
     */
    if ([recipient respondsToSelector:sel]) {
        return [recipient performv:sel args:args];
    } else {
        return [self error:"Recipient does not respond"];
    }
}

- (id)setRecipient:(id)_recipient {
    [recipient autorelease];
    recipient = [_recipient retain];
    return self;
}

- (id)recipient {
    return recipient;
}
@end

Recipient.h

#import <objc/Object.h>

// A simple Recipient object.
@interface Recipient : Object
- (id)hello;
@end

Recipient.m

#import "Recipient.h"

@implementation Recipient
- (id)hello {
    printf("Recipient says hello!\n");
    return self;
}
@end

main.m

#import "Forwarder.h"
#import "Recipient.h"

int main(void) {
    Forwarder *forwarder = [Forwarder new];
    Recipient *recipient = [Recipient new];

    [forwarder setRecipient:recipient]; // Set the recipient.
    /*
     * Observe that forwarder does not respond to a hello message! It will
     * be forwarded. All unrecognized methods will be forwarded to
     * the recipient (if the recipient responds to them, as written
     * in the Forwarder).
     */
    [forwarder hello];

    [recipient release];
    [forwarder release];
    return 0;
}
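The NSObject-based mechanism mentioned in the note above can be sketched as follows. This is a minimal illustration rather than a canonical implementation; the ModernForwarder class name is invented for this example, while methodSignatureForSelector:, forwardInvocation:, and NSInvocation are the standard Cocoa/GNUstep API:

#import <Foundation/Foundation.h>

@interface ModernForwarder : NSObject
@property (nonatomic, strong) id recipient; // target of forwarded messages
@end

@implementation ModernForwarder
// The runtime asks for a method signature before it builds an NSInvocation;
// fall back to the recipient's signature for selectors we do not implement.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
    NSMethodSignature *signature = [super methodSignatureForSelector:sel];
    if (signature == nil) {
        signature = [self.recipient methodSignatureForSelector:sel];
    }
    return signature;
}

// Redirect any unrecognized message to the recipient, mirroring the
// behavior of the GCC-runtime Forwarder example above.
- (void)forwardInvocation:(NSInvocation *)invocation {
    if ([self.recipient respondsToSelector:[invocation selector]]) {
        [invocation invokeWithTarget:self.recipient];
    } else {
        [super forwardInvocation:invocation]; // raises the usual exception
    }
}
@end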
Technology
Programming languages
null
42598658
https://en.wikipedia.org/wiki/Problem%20of%20time
Problem of time
In theoretical physics, the problem of time is a conceptual conflict between quantum mechanics and general relativity. Quantum mechanics regards the flow of time as universal and absolute, whereas general relativity regards the flow of time as malleable and relative. This problem raises the question of what time really is in a physical sense and whether it is truly a real, distinct phenomenon. It also involves the related question of why time seems to flow in a single direction, despite the fact that no known physical laws at the microscopic level seem to require a single direction.

Time in quantum mechanics

In classical mechanics, a special status is assigned to time in the sense that it is treated as a classical background parameter, external to the system itself. This special role is seen in the standard Copenhagen interpretation of quantum mechanics: all measurements of observables are made at certain instants of time, and probabilities are only assigned to such measurements. Furthermore, the Hilbert space used in quantum theory relies on a complete set of observables which commute at a specific time.

Time in general relativity

In general relativity time is no longer a unique background parameter, but a general coordinate. The field equations of general relativity are not parameterized by time but formulated in terms of spacetime. Many of the issues related to the problem of time exist within general relativity. At the cosmic scale, general relativity shows a closed universe with no external time. These two very different roles of time are incompatible.

Impact on quantum gravity

Quantum gravity describes theories that attempt to reconcile or unify quantum mechanics and general relativity, the current theory of gravity. The problem of time is central to these theoretical attempts. It remains unclear how time is related to quantum probability, whether time is fundamental or a consequence of processes, and whether time is approximate, among other issues. Different theories try different answers to these questions, but no clear solution has emerged.

The Frozen Formalism Problem

The most commonly discussed aspect of the problem of time is the Frozen Formalism Problem. The non-relativistic Schrödinger equation of quantum mechanics includes time evolution:

$i\hbar\,\frac{\partial}{\partial t}\,\psi = \hat{H}\,\psi$

where $\hat{H}$ is an energy operator (the Hamiltonian) characterizing the system and the wave function $\psi$ over space evolves in time $t$. In general relativity the energy operator becomes a constraint in the Wheeler–DeWitt equation:

$\hat{H}(x)\,\Psi = 0$

where the operator $\hat{H}(x)$ varies throughout space, but the wavefunction $\Psi$ here, called the wavefunction of the universe, is constant. Consequently this cosmic universal wavefunction is frozen and does not evolve. Somehow, at a smaller scale, the laws of physics, including a concept of time, apply within the universe while the cosmic level is static.

Proposed solutions to the problem of time

Work started by Don Page and William Wootters suggests that the universe appears to evolve for observers on the inside because of energy entanglement between an evolving system and a clock system, both within the universe. In this way the overall system can remain timeless while its parts experience time via entanglement. The issue remains an open question closely related to attempted theories of quantum gravity. In other words, time is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks, or of any objects usable as clocks) into the same history.
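The Page–Wootters construction can be sketched in a toy model. The notation below is a generic textbook-style presentation rather than something taken from this article: the universe is split into a clock $C$ and a system $S$, the total Hamiltonian annihilates the global state (as in the Wheeler–DeWitt equation), and yet the state of $S$ conditioned on a clock reading obeys an ordinary Schrödinger equation:

$H = H_C \otimes \mathbb{1}_S + \mathbb{1}_C \otimes H_S, \qquad H\,|\Psi\rangle = 0$

Taking $H_C$ to be the momentum conjugate to the clock variable, so that $\langle t|\,H_C = -i\hbar\,\frac{d}{dt}\,\langle t|$, and defining the conditional state $|\psi_S(t)\rangle = \langle t|_C\,|\Psi\rangle$, the constraint gives

$0 = \langle t|_C\,(H_C \otimes \mathbb{1}_S + \mathbb{1}_C \otimes H_S)\,|\Psi\rangle = -i\hbar\,\frac{d}{dt}\,|\psi_S(t)\rangle + H_S\,|\psi_S(t)\rangle$

that is, $i\hbar\,\frac{d}{dt}\,|\psi_S(t)\rangle = H_S\,|\psi_S(t)\rangle$: the global state is static, while the subsystem conditioned on the clock evolves in the clock's time.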
In 2013, at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, Ekaterina Moreva, together with Giorgio Brida, Marco Gramegna, Vittorio Giovannetti, Lorenzo Maccone, and Marco Genovese, performed the first experimental test of Page and Wootters' ideas. They confirmed for photons that time is an emergent phenomenon for internal observers but absent for external observers of the universe, just as the Wheeler–DeWitt equation predicts.

The consistent discretizations approach developed by Jorge Pullin and Rodolfo Gambini has no constraints. These are lattice approximation techniques for quantum gravity. In the canonical approach, if one discretizes the constraints and equations of motion, the resulting discrete equations are inconsistent: they cannot be solved simultaneously. To address this problem, one uses a technique based on discretizing the action of the theory and working with the discrete equations of motion. These are automatically guaranteed to be consistent. Most of the hard conceptual questions of quantum gravity are related to the presence of constraints in the theory. Consistent discretized theories are free of these conceptual problems and can be straightforwardly quantized, providing a solution to the problem of time. The situation is a bit more subtle than this: although the discretized theories are without constraints and have "general evolution", that evolution is only in terms of a discrete parameter that is not physically accessible. The way out is addressed in a manner similar to the Page–Wootters approach: the idea is to pick one of the physical variables to be a clock and to ask relational questions. These ideas, where the clock is also quantum mechanical, have led to a new interpretation of quantum mechanics, the Montevideo interpretation of quantum mechanics. This new interpretation solves the problems with using environmental decoherence as a solution to the measurement problem in quantum mechanics by invoking fundamental limitations in the measurement process due to the quantum mechanical nature of clocks. These limitations are very natural in the context of generally covariant theories such as quantum gravity, where the clock must be taken as one of the degrees of freedom of the system itself. Pullin and Gambini have also put forward this fundamental decoherence as a way to resolve the black hole information paradox. In certain circumstances, a matter field is used to de-parametrize the theory and introduce a physical Hamiltonian. This generates physical time evolution, rather than a constraint.

In reduced phase-space quantization, the constraints are solved first and the reduced phase space is then quantized. This approach was considered for some time to be impossible, as it seems to require first finding the general solution to Einstein's equations. However, with the use of ideas involved in Dittrich's approximation scheme (built on ideas of Carlo Rovelli), a way to explicitly implement, at least in principle, a reduced phase-space quantization was made viable.

Avshalom Elitzur and Shahar Dolev argue that quantum-mechanical experiments such as the "quantum liar" provide evidence of inconsistent histories, and that spacetime itself may therefore be subject to change affecting entire histories. Elitzur and Dolev also believe that an objective passage of time and relativity can be reconciled, and that this would resolve many of the issues with the block universe and the conflict between relativity and quantum mechanics.
One solution to the problem of time, proposed by Lee Smolin, is that there exists a "thick present" of events, in which two events in the present can be causally related to each other; this contrasts with the block-universe view of time, in which all of time exists eternally. Marina Cortês and Lee Smolin argue that certain classes of discrete dynamical systems demonstrate time asymmetry and irreversibility, which is consistent with an objective passage of time.

Weyl time in scale-invariant quantum gravity

Motivated by the Immirzi ambiguity in loop quantum gravity and the near-conformal invariance of the standard model of elementary particles, Charles Wang and co-workers have argued that the problem of time may be related to an underlying scale invariance of gravity–matter systems. Scale invariance has also been proposed to resolve the hierarchy problem of fundamental couplings. As a global continuous symmetry, scale invariance generates a conserved Weyl current according to Noether's theorem. In scale-invariant cosmological models, this Weyl current naturally gives rise to a harmonic time. In the context of loop quantum gravity, Charles Wang et al. suggest that scale invariance may lead to the existence of a quantized time.

Thermal time hypothesis

The thermal time hypothesis is a possible solution to the problem of time in classical and quantum theory, put forward by Carlo Rovelli and Alain Connes. Physical time flow is modeled not as a fundamental property of the theory but as a macroscopic feature of thermodynamical origin.
Physical sciences
Quantum mechanics
Physics
44046994
https://en.wikipedia.org/wiki/Pregnancy%20in%20fish
Pregnancy in fish
Pregnancy has been traditionally defined as the period of time eggs are incubated in the body after the egg-sperm union. Although the term often refers to placental mammals, it has also been used in the titles of many international, peer-reviewed, scientific articles on fish. Consistent with this definition, there are several modes of reproduction in fish, providing different amounts of parental care. In ovoviviparity, there is internal fertilization and the young are born live, but there is no placental connection or significant trophic (feeding) interaction; the mother's body maintains gas exchange, but the unborn young are nourished by egg yolk. There are two types of viviparity in fish. In histotrophic viviparity, the zygotes develop in the female's oviducts, but she provides no direct nutrition; the embryos survive by eating her eggs or their unborn siblings. In hemotrophic viviparity, the zygotes are retained within the female and are provided with nutrients by her, often through some form of placenta. In seahorses and pipefish, it is the male that becomes pregnant.

Types of reproduction and pregnancy

Pregnancy has been traditionally defined as the period during which developing embryos are incubated in the body after egg-sperm union. Despite strong similarities between viviparity in fish and in mammals, researchers have historically been reluctant to use the term "pregnancy" for non-mammals because of the highly developed form of viviparity in eutherians. Recent research into the physiological, morphological and genetic changes associated with fish reproduction provides evidence that incubation in some species is a highly specialized form of reproduction similar to other forms of viviparity. Although the term "pregnancy" often refers to eutherian animals, it has also been used in the titles of many international, peer-reviewed, scientific articles on fish.

Five modes of reproduction can be differentiated in fish, based on the relations between the zygote(s) and the parents:

Ovuliparity: Fertilization of eggs is external; zygotes develop externally.

Oviparity: Fertilization of eggs is internal; zygotes develop externally as eggs with a large vitellus.

Ovoviviparity: Fertilization is internal; zygotes are retained in the female (or male) but without major trophic (feeding) interactions between zygote and parents (there may be minor interactions, such as maintenance of water and oxygen levels). The embryos depend upon their yolk for survival.

There are two types of viviparity among fish:

Histotrophic ("tissue eating") viviparity: The zygotes develop in the female's oviducts, but she provides no direct nutrition. The embryos survive by eating her eggs or their unborn siblings.

Hemotrophic ("blood eating") viviparity: The zygotes are retained within the female and are provided with nutrients by her, often through some form of placenta.

Ovoviviparous fish

Examples of ovoviviparous fish are many of the squaliform sharks, which include sand sharks, mackerel sharks, nurse sharks, requiem sharks, dog sharks and hammerheads, among others, and the lobe-finned coelacanth. Some species of rockfish (Sebastes) and sculpins (Comephoridae) produce rather weak larvae with no egg membrane and are also, by definition, ovoviviparous. Ovoviviparity occurs in most live-bearing bony fishes (Poeciliidae).
Viviparous fish

Viviparous fish include the families Goodeidae, Anablepidae, Jenynsiidae, Poeciliidae, Embiotocidae and some sharks (some species of the requiem sharks, Carcharhinidae, and the hammerheads, Sphyrnidae, among others). The halfbeaks, Hemiramphidae, are found in both marine and fresh waters; the marine species produce eggs with extended filaments that attach to floating or stationary debris, while the freshwater species are viviparous with internal fertilization. The Bythitidae are also viviparous, although one species, Dinematichthys ilucoeteoides, is reported to be ovoviviparous.

Aquarists commonly refer to ovoviviparous and viviparous fish as "livebearers". Examples include guppies, mollies, moonfish, platys, four-eyed fish and swordtails. All of these varieties exhibit signs of their pregnancy before the live fry are born. As examples, the female swordtail and guppy will both give birth to anywhere from 20 to 100 live young after a gestation period of four to six weeks, and mollies will produce a brood of 20 to 60 live young after a gestation of six to 10 weeks.

Nutrition during pregnancy

Other terms relating to pregnancy in fish describe differences in the mode and extent of support the female gives the developing offspring. "Lecithotrophy" (yolk feeding) occurs when the mother provisions the oocyte with all the resources it needs prior to fertilization, so the egg is independent of the mother. Many members of the fish family Poeciliidae are considered to be lecithotrophic; however, research is increasingly showing that others are matrotrophic. "Aplacental viviparity" occurs when the female retains the embryos during the entire time of development but without any transfer of nutrients to the young; the yolk sac is the only source of nutrients for the developing embryo. There are at least two exceptions to this: some sharks gain nourishment by eating unfertilized eggs produced by the mother (oophagy, or egg eating) or by eating their unborn siblings (intra-uterine cannibalism). "Matrotrophy" (mother feeding) occurs when the embryo exhausts its yolk supply early in gestation and the mother provides additional nutrition. Post-fertilization transfer of nutrients has been reported in several species within the genera Gambusia and Poecilia, specifically G. affinis, G. clarkhubbsi, G. holbrooki, G. gaigei, G. geiseri, G. nobilis, P. formosa, P. latipinna, and P. mexicana.

Viviparous fish have developed several ways of providing their offspring with nutrition. "Embryotrophic" or "histotrophic" nutrition occurs through the production of a nutritive fluid, uterine milk, by the uterine lining, which is absorbed directly by the developing embryo. "Hemotrophic" nutrition occurs through the passing of nutritive substances between blood vessels of the mother and embryo that are in close proximity, i.e. a placenta-like organ similar to that found in mammals.

Comparison between species

There is considerable variation between species in the length of pregnancy. At least one group of fish has been named after its pregnancy characteristics: the surfperch, genus Embiotoca, is a saltwater fish with a gestation period of three to six months. This lengthy pregnancy gives the family its scientific name, from the Greek "embios" meaning "persistent" and "tokos" meaning "birth". The table below shows the gestation period and number of young born for some selected fish.

a Unlike any other shark, the yolk-sac placenta is globular or spherical.
b At first, the embryos are sustained by a yolk sac, but later a placenta develops.
c A bonnethead female produced a pup by parthenogenesis in 2001.
d The spadenose shark has the most advanced form of placental viviparity known in fish, as measured by the complexity of the placental connection and the difference in weight between the egg and the newborn young.
e The eelpout suckles its young embryos while still within their mother's body, making it the only fish species to suckle its offspring.
f Only one pregnant female is known to have been caught; she was carrying six unborn young.
g 1 per uterine horn

Poeciliopsis

Members of the genus Poeciliopsis (amongst others) show variable reproductive life history adaptations. P. monacha can be considered lecithotrophic because the female does not really provide any resources for her offspring after fertilization. P. lucida shows an intermediate level of matrotrophy, meaning that to a certain extent the offspring's metabolism can actually affect the mother's metabolism, allowing for increased nutrient exchange. P. prolifica is considered to be highly matrotrophic, and almost all of the nutrients and materials needed for foetal development are supplied to the oocyte after it has been fertilized. This level of matrotrophy allows Poeciliopsis to carry several broods at different stages of development, a phenomenon known as superfetation. P. elongata, P. turneri and P. presidionis form another clade, which could be considered an outgroup to the P. monacha, P. lucida and P. prolifica clade. These three species are very highly matrotrophic – so much so that in 1947, C. L. Turner described the follicular cells of P. turneri as "pseudo-placenta, pseudo-chorion, and pseudo-allantois".

Guppy

Guppies are highly prolific livebearers, giving birth to between five and 30 fry, though under extreme circumstances a female may give birth to only one or two, or to over 100. The gestation period of a guppy is typically 21–30 days, but can vary considerably. The area where a pregnant guppy's abdomen meets the tail is sometimes called the "gravid patch" or "gravid spot". When the guppy is pregnant, this patch shows a slight discoloration that slowly darkens as the pregnancy progresses: it first has a yellowish tinge, then turns brown and becomes deep orange. The patch is where the fertilized eggs are stored and grow. The darkening is actually the eyes of the developing baby guppies, and the orange tinge is their jelly-like eggs.

Elasmobranchs

The majority of elasmobranchs are viviparous and show a wide range of strategies to provide their offspring with nourishment and respiratory requirements. Some sharks simply retain their young in the dilated posterior segment of the oviduct. In its simplest form, the uterus does not provide any additional nutrients to the embryos. However, other elasmobranchs develop secretory uterine villi that produce histotroph, a nutrient which supplements the yolk stores of the oocyte. Uterine secretions are perhaps most advanced in the stingrays. Following depletion of the yolk, the uterine lining hypertrophies into secretory appendages termed "trophonemata". The process by which the uterine secretions (also known as uterine milk or histotroph) are produced resembles that of breast milk in mammals; furthermore, the milk is rich in protein and lipids. As the embryo grows, vascularisation of the trophonemata enlarges to form sinusoids that project out to the surface to form a functional respiratory membrane.
In lamnoid sharks, following yolk use, the embryos develop teeth and eat eggs and siblings within the uterus. There is usually one fetus per uterus, and it grows to enormous proportions of up to 1.3 m in length. In placental sharks, the yolk sac is not withdrawn to become incorporated into the abdominal wall. Rather, it lengthens to form an umbilical cord, and the yolk sac becomes modified into a functional epitheliochorial placenta.

Male pregnancy

The males of seahorses, pipefishes, and weedy and leafy sea dragons (Syngnathidae) are unusual in that the male, rather than the female, incubates the eggs before releasing live fry into the surrounding water. To achieve this, male seahorses protect eggs in a specialized brood pouch, male sea dragons attach their eggs to a specific area on their bodies, and male pipefish of different species may do either. When a female's eggs reach maturity, she squirts them from a chamber in her trunk via her ovipositor into his brood pouch or egg pouch, sometimes called a "marsupium". During a mammalian pregnancy, the placenta allows the female to nourish her progeny in the womb and remove their waste products. If male pipefish and seahorses provided only a simple pouch for fish eggs to develop and hatch in, this might not fully qualify as bona fide pregnancy. However, current research suggests that in syngnathid species with well-developed brood pouches, males do provide nutrients, osmoregulation and oxygenation to the embryos they carry.

Seahorse

When mating, the female seahorse deposits up to 1,500 (on average 100 to 1,000) eggs in the male's pouch, located on the ventral abdomen at the base of the tail. Male juveniles develop pouches when they are 5–7 months old. The male carries the eggs for 9 to 45 days until the seahorses emerge fully developed, but very small. The number born may be as few as five for smaller species, or as many as 2,500 for larger species. A male seahorse's body has large amounts of prolactin, the same hormone that governs milk production in pregnant mammals; although the male seahorse does not supply milk, his pouch provides oxygen as well as a controlled environment. When the fry are ready to be born, the male expels them with muscular contractions, sometimes while attaching himself to seaweed with his tail. Birth typically occurs during the night, and a female returning for the routine morning greeting finds her mate ready for the next batch of eggs.

Pipefish

Pipefish brood their offspring either on a distinct region of the body or in a brood pouch. Brood pouches vary significantly among different species of pipefish, but all contain a small opening through which the female's eggs can be deposited. The location of the brood pouch can be along the entire underside of the pipefish or just at the base of the tail, as in seahorses. Pipefish in the genus Syngnathus have a brood pouch with a ventral seam that can completely cover all of their eggs when sealed. In males without these pouches, eggs adhere to a strip of soft skin on the ventral surface of their bodies that does not have any exterior covering – a type of "skin brooding". At least two species of pipefish, Syngnathus fuscus and Syngnathus floridae, provide nutrients for their offspring.
Biology and health sciences
Animal reproduction
Biology
48894713
https://en.wikipedia.org/wiki/Hydrogen%20chalcogenide
Hydrogen chalcogenide
Hydrogen chalcogenides (also chalcogen hydrides or hydrogen chalcides) are binary compounds of hydrogen with chalcogen atoms (elements of group 16: oxygen, sulfur, selenium, tellurium, polonium, and livermorium). Water, the first chemical compound in this series, contains one oxygen atom and two hydrogen atoms, and is the most common compound on the Earth's surface.

Dihydrogen chalcogenides

The most important series, including water, has the chemical formula H2X, with X representing any chalcogen. These compounds are therefore triatomic. They take on a bent structure and as such are polar molecules. Water is an essential compound to life on Earth today, covering 70.9% of the planet's surface. The other hydrogen chalcogenides are usually extremely toxic and have strong, unpleasant scents, usually resembling rotting eggs or vegetables. Hydrogen sulfide is a common product of decomposition in oxygen-poor environments and as such is one chemical responsible for the smell of flatulence. It is also a volcanic gas. Despite its toxicity, the human body intentionally produces it in small quantities for use as a signaling molecule.

Water can dissolve the other hydrogen chalcogenides (at least those up to hydrogen telluride), forming acidic solutions known as hydrochalcogenic acids. Although these are weaker acids than the hydrohalic acids, they follow a similar trend of acid strength increasing with heavier chalcogens, and they also form in a similar way (turning the water into a hydronium ion H3O+ and the solute into an XH− ion). It is unknown whether polonium hydride forms an acidic solution in water like its lighter homologues, or whether it behaves more like a metal hydride (see also hydrogen astatide).

Many of the anomalous properties of water compared to the rest of the hydrogen chalcogenides may be attributed to significant hydrogen bonding between hydrogen and oxygen atoms. Some of these properties are the high melting and boiling points (it is a liquid at room temperature), as well as the high dielectric constant and observable ionic dissociation. Hydrogen bonding in water also results in large values of the heat and entropy of vaporisation, surface tension, and viscosity.

The other hydrogen chalcogenides are highly toxic, malodorous gases. Hydrogen sulfide occurs commonly in nature, and its properties compared with water reveal a lack of any significant hydrogen bonding. Since hydrogen and oxygen are both gases at STP, hydrogen can simply be burned in the presence of oxygen to form water in a highly exothermic reaction; such a test can be used in beginner chemistry to identify the gases produced by a reaction, as hydrogen will burn with a pop.

Water, hydrogen sulfide, and hydrogen selenide may be made by heating their constituent elements together above 350 °C, but hydrogen telluride and polonium hydride are not attainable by this method due to their thermal instability; hydrogen telluride decomposes in moisture, in light, and at temperatures above 0 °C. Polonium hydride is unstable, and due to the intense radioactivity of polonium (resulting in self-radiolysis upon formation), only trace quantities may be obtained by treating dilute hydrochloric acid with polonium-plated magnesium foil.
Its properties are somewhat distinct from those of the rest of the hydrogen chalcogenides, since polonium is a metal while the other chalcogens are not; hence this compound is intermediate between a normal hydrogen chalcogenide or hydrogen halide, such as hydrogen chloride, and a metal hydride, like stannane. Like water, the first of the group, polonium hydride is also a liquid at room temperature. Unlike water, however, the strong intermolecular attractions that cause the higher boiling point are van der Waals interactions, an effect of the large electron clouds of polonium.

Dihydrogen dichalcogenides

Dihydrogen dichalcogenides have the chemical formula H2X2 and are generally less stable than the monochalcogenides, commonly decomposing into the monochalcogenide and the chalcogen involved. The most important of these is hydrogen peroxide, H2O2, a pale blue, nearly colourless liquid that has a lower volatility than water and a higher density and viscosity. It is important chemically because it can be either oxidised or reduced in solutions of any pH, can readily form peroxometal complexes and peroxoacid complexes, and undergoes many proton acid/base reactions. In its less concentrated form, hydrogen peroxide has some major household uses, such as disinfecting or bleaching hair; much more concentrated solutions are far more dangerous.

An alternative structural isomer of the dichalcogenides, in which both hydrogen atoms are bonded to the same chalcogen atom, which is in turn bonded to the other chalcogen atom, has been examined computationally. These H2X+–X− structures are ylides. This isomeric form of hydrogen peroxide, oxywater, has not been synthesized experimentally. The analogous isomer of hydrogen disulfide, thiosulfoxide, has been detected by mass spectrometry experiments. It is possible for two different chalcogen atoms to share a dichalcogenide, as in hydrogen thioperoxide (H2SO); better-known compounds of similar description include sulfuric acid (H2SO4).

Higher dihydrogen chalcogenides

All straight-chain hydrogen chalcogenides follow the formula H2Xn. Hydrogen polyoxides higher than H2O2 are not stable. Trioxidane, with three oxygen atoms, is a transient unstable intermediate in several reactions. The next two in the oxygen series, tetraoxidane and pentaoxidane, have also been synthesized and found to be highly reactive. An alternative structural isomer of trioxidane, in which the two hydrogen atoms are attached to the central oxygen of the three-oxygen chain rather than one on each end, has been examined computationally.

Beyond hydrogen sulfide and hydrogen disulfide, many higher polysulfanes H2Sn (n = 3–8) are known as stable compounds. They feature unbranched sulfur chains, reflecting sulfur's tendency for catenation. Starting with the disulfide, all known polysulfanes are liquids at room temperature; the disulfide is colourless, while the other polysulfanes are yellow, and the colour becomes richer as n increases, as do the density, viscosity, and boiling point. However, they can easily be oxidised, and all are thermally unstable, disproportionating readily to sulfur and hydrogen sulfide, a reaction for which alkali acts as a catalyst:

H2Sn → H2S + (n − 1) S

They also react with sulfite and cyanide to produce thiosulfate and thiocyanate, respectively. An alternative structural isomer of the trisulfide, in which the two hydrogen atoms are attached to the central sulfur of the three-sulfur chain rather than one on each end, has been examined computationally.
Thiosulfurous acid, a branched isomer of the tetrasulfide in which the fourth sulfur is bonded to the central sulfur of a linear dihydrogen trisulfide structure, has also been examined computationally. Thiosulfuric acid, in which two sulfur atoms branch off of the central sulfur of a linear dihydrogen trisulfide structure, has been studied computationally as well. Higher polonium hydrides may exist.

Other hydrogen-chalcogen compounds

Some monohydrogen chalcogenide compounds exist, and others have been studied theoretically. As radical compounds, they are quite unstable. The two simplest are hydroxyl (HO) and hydroperoxyl (HO2). The compound hydrogen ozonide (HO3) is also known, along with some alkali metal ozonide salts (various MO3). The sulfur analogue of hydroxyl is sulfanyl (HS), and that of hydroperoxyl is HS2.

One or both of the protium atoms in water can be substituted with the isotope deuterium, yielding respectively semiheavy water and heavy water, the latter being one of the most famous deuterium compounds. Due to the large difference in mass between deuterium and regular protium, heavy water exhibits many anomalous properties. The radioisotope tritium can likewise form tritiated water. Another notable deuterium chalcogenide is deuterium disulfide. Deuterium telluride (D2Te) has slightly higher thermal stability than protium telluride and has been used experimentally in chemical deposition methods for telluride-based thin films.

Hydrogen shares many properties with the halogens; substituting the hydrogen with halogens results in chalcogen halide compounds such as oxygen difluoride and dichlorine monoxide, alongside ones that may be impossible with hydrogen, such as chlorine dioxide.

Ions

One of the most well-known hydrogen chalcogenide ions is the hydroxide ion, along with the related hydroxy functional group. The former is present in alkali metal, alkaline earth, and rare-earth hydroxides, formed by reacting the respective metal with water. The hydroxy group appears commonly in organic chemistry, for example in alcohols. The related bisulfide and sulfhydryl groups appear in hydrosulfide salts and thiols, respectively. The hydronium (H3O+) ion is present in aqueous acidic solutions, including solutions of the hydrochalcogenic acids themselves, as well as in pure water, alongside hydroxide.
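As a worked illustration of the acidity trend noted above, each hydrogen chalcogenide dissociates in water according to the same equilibrium, with the acid dissociation constant growing down the group (the pKa values quoted here are approximate literature values supplied for illustration, not figures taken from this article):

$\mathrm{H_2X + H_2O \rightleftharpoons H_3O^+ + HX^-}, \qquad K_a = \frac{[\mathrm{H_3O^+}]\,[\mathrm{HX^-}]}{[\mathrm{H_2X}]}$

Approximate first pKa values run from about 7.0 for H2S to about 3.9 for H2Se and about 2.6 for H2Te: the acids grow stronger as the H–X bond becomes longer and weaker, the same trend seen in the hydrohalic acids.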
Physical sciences
Hydrogen compounds
Chemistry
50402274
https://en.wikipedia.org/wiki/TRAPPIST-1
TRAPPIST-1
|- ! style="background-color: #FFFFC0; text-align: center;" colspan="2"| Characteristics |- style="vertical-align:top" ! style="text-align:left" | Evolutionary stage | Main sequence |- style="vertical-align:top" ! style="text-align:left" | Spectral type | M8V |- style="vertical-align:top" ! style="text-align:left" | Apparent magnitude (R) | |- style="vertical-align:top" ! style="text-align:left" | Apparent magnitude (I) | |- style="vertical-align:top" ! style="text-align:left" | Apparent magnitude (J) | |- style="vertical-align:top" ! style="text-align:left" | Apparent magnitude (H) | |- style="vertical-align:top" ! style="text-align:left" | Apparent magnitude (K) | |- style="vertical-align:top" ! style="text-align:left" | V−R | 2.332 |- style="vertical-align:top" ! style="text-align:left" | R−I | 2.442 |- style="vertical-align:top" ! style="text-align:left" | J−H | 0.636 |- style="vertical-align:top" ! style="text-align:left" | J−K | 1.058 |- style="vertical-align:top" TRAPPIST-1 is a cool red dwarf star with seven known exoplanets. It lies in the constellation Aquarius about light-years away from Earth, and has a surface temperature of about . Its radius is slightly larger than Jupiter and it has a mass of about 9% of the Sun. It is estimated to be 7.6 billion years old, making it older than the Solar System. The discovery of the star was first published in 2000. Observations in 2016 from the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) at La Silla Observatory in Chile and other telescopes led to the discovery of two terrestrial planets in orbit around TRAPPIST-1. In 2017, further analysis of the original observations identified five more terrestrial planets. It takes the seven planets between about 1.5 and 19 days to orbit around the star in circular orbits. They are likely tidally locked to TRAPPIST-1, such that one side of each planet always faces the star, leading to permanent day on one side and permanent night on the other. Their masses are comparable to that of Earth and they all lie in the same plane; from Earth they seem to move past the disk of the star. Up to four of the planets—designated d, e, f and g—orbit at distances where temperatures are suitable for the existence of liquid water, and are thus potentially hospitable to life. There is no evidence of an atmosphere on any of the planets, and observations of TRAPPIST-1b have ruled out the existence of an atmosphere. It is unclear whether radiation emissions from TRAPPIST-1 would allow for such atmospheres. The planets have low densities; they may consist of large amounts of volatile materials. Due to the possibility of several of the planets being habitable, the system has drawn interest from researchers and has appeared in popular culture. Discovery The star now known as TRAPPIST-1 was discovered in 1999 by astronomer John Gizis and colleagues during a survey of close-by ultra-cool dwarf stars. It appeared in sample C of the surveyed stars, which was obtained in June 1999. Publication of the discovery took place in 2000. The name is a reference to the TRAnsiting Planets and PlanetesImals Small Telescope (TRAPPIST) project that discovered the first two exoplanets around the star. Its planetary system was discovered by a team led by Michaël Gillon, a Belgian astronomer at the University of Liege, in 2016 during observations made at the La Silla Observatory, Chile, using the TRAPPIST telescope. The discovery was based on anomalies in the light curves measured by the telescope in 2015. 
These were initially interpreted as indicating the existence of three planets. In 2016, separate discoveries revealed that the third planet was in fact multiple planets. The telescopes and observatories involved were the Spitzer Space Telescope and the ground-based TRAPPIST, TRAPPIST-North in Oukaïmeden Observatory, Morocco, the South African Astronomical Observatory, and the Liverpool Telescope and William Herschel Telescope in Spain. The observations of TRAPPIST-1 are considered among the most important research findings of the Spitzer Space Telescope. Complementing the findings were observations by the Himalayan Chandra Telescope, the United Kingdom Infrared Telescope, and the Very Large Telescope. Since then, research has confirmed the existence of at least seven planets in the system, the orbits of which have been calculated using measurements from the Spitzer and Kepler telescopes. Some news reports incorrectly attributed the discovery of the TRAPPIST-1 planets to NASA; in fact the TRAPPIST project that led to their discovery received funding from both NASA and the European Research Council of the European Union (EU). Description TRAPPIST-1 is in the constellation Aquarius, five degrees south of the celestial equator. It is a relatively close star located about 40 light-years from Earth, with a large proper motion and no companion stars. It is a red dwarf of spectral class M, meaning it is relatively small and cold. With a radius 12% of that of the Sun, TRAPPIST-1 is only slightly larger than the planet Jupiter (though much more massive). Its mass is approximately 9% of that of the Sun, just sufficient to allow nuclear fusion to take place. TRAPPIST-1's density is unusually low for a red dwarf. Its low effective temperature of about 2,550 K makes it the coldest known star found to host planets. TRAPPIST-1 is cold enough for condensates to form in its photosphere; these have been detected through the polarisation they induce in its radiation during transits of its planets. Elements heavier than helium form compounds in its atmosphere, which appear as absorption lines in TRAPPIST-1's spectrum. There is no evidence that it has a stellar cycle. Its luminosity, emitted mostly as infrared radiation, is about 0.055% that of the Sun. Low-precision measurements from the XMM-Newton satellite and other facilities show that the star emits faint radiation at short wavelengths such as X-rays and ultraviolet radiation. There are no detectable radio wave emissions. Rotation period and age Measurements of TRAPPIST-1's rotation have yielded a period of 3.3 days; earlier measurements of 1.4 days appear to have been caused by changes in the distribution of its starspots. Its rotational axis may be slightly offset from that of its planets. Using a combination of techniques, the age of TRAPPIST-1 has been estimated at about 7.6 billion years, making it older than the Solar System, which is about 4.6 billion years old. It is expected to shine for ten trillion years—about 700 times longer than the present age of the Universe—whereas the Sun will run out of hydrogen and leave the main sequence in a few billion years. Activity Photospheric features have been detected on TRAPPIST-1. The Kepler and Spitzer Space Telescopes have observed possible bright spots, which may be faculae, although some of these may be too large to qualify as such. Bright spots are correlated to the occurrence of some stellar flares.
Kepler K2 observations have shown that TRAPPIST-1 produces frequent flares (42 flares in 80 days), including large, complex flares that could alter nearby planetary atmospheres irreversibly and significantly, raising doubts about whether they could host life as we know it on Earth. The star has a strong magnetic field with a mean intensity of about 600 gauss, which may be an underestimate. The magnetic field drives high chromospheric activity, and may be capable of trapping coronal mass ejections. According to Garraffo et al. (2017), TRAPPIST-1 loses mass to the stellar wind at a rate about 1.5 times that of the Sun. Dong et al. (2018) simulated the observed properties of TRAPPIST-1 using an assumed stellar mass-loss rate. Simulations to estimate mass loss are complicated because, as of 2019, most of the parameters that govern TRAPPIST-1's stellar wind are not known from direct observation. Planetary system TRAPPIST-1 is orbited by seven planets, designated TRAPPIST-1b, 1c, 1d, 1e, 1f, 1g and 1h in alphabetic order going out from the star. These planets have orbital periods ranging from 1.5 to 19 days, at distances of 0.011–0.059 astronomical units (1.7–8.9 million km). All the planets are much closer to their star than Mercury is to the Sun, making the TRAPPIST-1 system very compact. Kral et al. (2018) did not detect any comets around TRAPPIST-1, and Marino et al. (2020) found no evidence of a Kuiper belt, although it is uncertain whether a Solar System-like belt around TRAPPIST-1 would be observable from Earth. Observations with the Atacama Large Millimeter Array found no evidence of a circumstellar dust disk. The inclinations of planetary orbits relative to the system's ecliptic are less than 0.1 degrees, making TRAPPIST-1 the flattest planetary system in the NASA Exoplanet Archive. The orbits are highly circular, with minimal eccentricities, and are well aligned with the spin axis of TRAPPIST-1. The planets orbit in the same plane and, from the perspective of the Solar System, transit TRAPPIST-1 during their orbit and frequently pass in front of each other. Size and composition The radii of the planets are estimated to range between 77.5 and 112.9% of Earth's radius. The planet/star mass ratio of the TRAPPIST-1 system resembles the moon/planet mass ratios of the Solar System's gas giants. The TRAPPIST-1 planets are expected to have compositions that resemble each other as well as that of Earth. The estimated densities of the planets are lower than Earth's, which may imply that they have large amounts of volatile chemicals. Alternatively, their cores may be smaller than Earth's; the planets may be rocky worlds with less iron than Earth, may include large amounts of elements other than iron, or may hold their iron in an oxidised form rather than as a core. Their densities are too low for a pure magnesium silicate composition, requiring the presence of lower-density compounds such as water. Planets b, d, f, g and h are expected to contain large quantities of volatile chemicals. The planets may have deep atmospheres and oceans, and contain vast amounts of ice. Subsurface oceans, buried under icy shells, would form in the colder planets. Several compositions are possible considering the large uncertainties in their densities. The photospheric features of the star may introduce inaccuracies in measurements of the properties of TRAPPIST-1's planets, including their densities being underestimated by 8 percent, and incorrect estimates of their water content.
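As a rough consistency check on the orbital figures quoted above, Kepler's third law (P² = a³/M in units of years, astronomical units and solar masses) links the quoted distances to the quoted periods. The following is a minimal sketch, using the 0.09 solar-mass figure and the 0.0115 AU and 0.059 AU distances from the text:

```python
# Consistency check: Kepler's third law, P^2 = a^3 / M
# (P in years, a in AU, M in solar masses).
M_STAR = 0.09  # stellar mass in solar masses, from the text

def orbital_period_days(a_au: float, m_star: float = M_STAR) -> float:
    """Return the orbital period in days for a circular orbit."""
    period_years = (a_au ** 3 / m_star) ** 0.5
    return period_years * 365.25

for a in (0.0115, 0.059):  # inner (TRAPPIST-1b) and outermost quoted distances
    print(f"a = {a} AU -> P = {orbital_period_days(a):.1f} days")
# Prints roughly 1.5 and 17.5 days, in line with the quoted
# range of about 1.5 to 19 days.
```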
Resonance and tides The planets are in orbital resonances. The durations of their orbits have ratios of 8:5, 5:3, 3:2, 3:2, 4:3 and 3:2 between neighbouring planet pairs, and each set of three is in a Laplace resonance. Simulations have shown such resonances can remain stable over billions of years but that their stability is strongly dependent on initial conditions. Many configurations become unstable after less than a million years. The resonances enhance the exchange of angular momentum between the planets, resulting in measurable variations—earlier or later—in their transit times in front of TRAPPIST-1. These variations yield information on the planetary system, such as the masses of the planets, when other techniques are not available. The resonances and the proximity to the host star have led to comparisons between the TRAPPIST-1 system and the Galilean moons of Jupiter. Kepler-223 is another exoplanet system with a TRAPPIST-1-like long resonance. The closeness of the planets to TRAPPIST-1 results in tidal interactions stronger than those on Earth. All the planets have reached an equilibrium with slow planetary rotations and tidal locking, which can lead to the synchronisation of a planet's rotation to its revolution around its star. However, the mutual interactions of the planets could prevent them from reaching full synchronisation, which would have important implications for the planets' climates. These interactions could force periodic or episodic full rotations of the planets' surfaces with respect to the star on timescales of several Earth years. Vinson, Tamayo and Hansen (2019) found the planets TRAPPIST-1d, e and f likely have chaotic rotations due to mutual interactions, preventing them from becoming synchronised to their star. Lack of synchronisation potentially makes the planets more habitable. Other processes that can prevent synchronous rotation are torques induced by stable triaxial deformation of the planets, which would allow them to enter 3:2 resonances. The planets are likely to undergo substantial tidal heating due to deformations arising from their orbital eccentricities and gravitational interactions with one another. Such heating would facilitate volcanism and degassing especially on the innermost planets, with degassing facilitating the establishment of atmospheres. According to Luger et al. (2017), tidal heating of the four innermost planets is expected to be greater than Earth's inner heat flux. For the outer planets Quick et al. (2020) noted that their tidal heating could be comparable to that in the Solar System bodies Europa, Enceladus and Triton, and may be sufficient to drive detectable cryovolcanic activity. Tidal heating could influence temperatures of the night sides and cold areas where volatiles may be trapped, and gases are expected to accumulate; it would also influence the properties of any subsurface oceans where cryovolcanism, volcanism and hydrothermal venting could occur. It may further be sufficient to melt the mantles of the four innermost planets, in whole or in part, potentially forming subsurface magma oceans. This heat source is likely dominant over radioactive decay, both of which have substantial uncertainties and are considerably less than the stellar radiation received. Intense tides could fracture the planets' crusts even if they are not sufficiently strong to trigger the onset of plate tectonics. Tides can also occur in the planetary atmospheres. 
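The near-integer period ratios can be checked directly against the orbital periods quoted in the planet list below (about 1.51, 2.42, 4.05, 6.10, 9.21, 12.4 and 18.9 days). A small sketch comparing neighbouring ratios with the resonance fractions named above:

```python
from fractions import Fraction

# Orbital periods in days, from the planet descriptions below (b..h).
periods = [1.51, 2.42, 4.05, 6.10, 9.21, 12.4, 18.9]
# Resonance ratios quoted above for neighbouring pairs.
quoted = [Fraction(8, 5), Fraction(5, 3), Fraction(3, 2),
          Fraction(3, 2), Fraction(4, 3), Fraction(3, 2)]

for (p_in, p_out), q in zip(zip(periods, periods[1:]), quoted):
    ratio = p_out / p_in
    print(f"{p_out}/{p_in} = {ratio:.3f}  vs  {q} = {float(q):.3f}")
# Each measured ratio sits within about 2% of the quoted fraction.
```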
Skies and impact of stellar light Because most of TRAPPIST-1's radiation is in the infrared region, there may be very little visible light on the planets' surfaces; Amaury Triaud, one of the system's co-discoverers, said the skies would never be brighter than Earth's sky at sunset and only a little brighter than a night with a full moon. Ignoring atmospheric effects, illumination would be orange-red. For TRAPPIST-1e, the central star would appear four times as wide in the sky as the Sun does in Earth's. All of the planets would be visible from each other and would, in many cases, appear larger than Earth's Moon in the sky of Earth, and each would be recognizable as a planet rather than a star. They would undergo noticeable retrograde motions in the sky. Observers on TRAPPIST-1e, f and g, however, could never experience a total stellar eclipse. Assuming the existence of atmospheres, the star's long-wavelength radiation would be absorbed to a greater degree by water and carbon dioxide than sunlight on Earth; it would also be scattered less by the atmosphere and less reflected by ice, although the development of highly reflective hydrohalite ice may negate this effect. The same amount of radiation results in a warmer planet compared to Sun-like irradiation; more radiation would be absorbed by the planets' upper atmosphere than by the lower layers, making the atmosphere more stable and less prone to convection. Habitable zone For a dim star like TRAPPIST-1, the habitable zone is located closer to the star than for the Sun. Three or four planets might be located in the habitable zone: e, f and g; or d, e, f and g. This is the largest known number of planets within the habitable zone of any known star or star system. The presence of liquid water on any of the planets depends on several other factors, such as albedo (reflectivity), the presence of an atmosphere and any greenhouse effect. Surface conditions are difficult to constrain without better knowledge of the planets' atmospheres. A synchronously rotating planet might not entirely freeze over even if it receives relatively little radiation from its star, because the day-side could be sufficiently heated to halt the progress of glaciation. Other factors for the occurrence of liquid water include the presence of oceans and vegetation; the reflective properties of the land surface; the configuration of continents and oceans; the presence of clouds; and sea ice dynamics. The effects of volcanic activity may extend the system's habitable zone to TRAPPIST-1h. Even if the outer planets are too cold to be habitable, they may have ice-covered subsurface oceans that may harbour life. Intense extreme ultraviolet (XUV) and X-ray radiation can split water into its component parts of hydrogen and oxygen, and heat the upper atmosphere until these gases escape from the planet. This was thought to have been particularly important early in the star's history, when radiation was more intense and could have heated every planet's water to its boiling point. A similar process is believed to have removed water from Venus. In the case of TRAPPIST-1, different studies with different assumptions on the kinetics, energetics and XUV emissions have come to different conclusions on whether any TRAPPIST-1 planet may retain substantial amounts of water.
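A first-order illustration of why the habitable zone sits so close to TRAPPIST-1: for equal stellar flux, orbital distance scales as the square root of luminosity. This sketch uses the 0.055% luminosity figure from the text and neglects spectral effects, albedo and greenhouse warming:

```python
import math

L_STAR = 0.00055  # luminosity relative to the Sun, from the text

# Distance at which a planet receives Earth-like stellar flux:
# flux ~ L / d^2, so d_earthlike = sqrt(L) in astronomical units.
d_earthlike = math.sqrt(L_STAR)
print(f"Earth-equivalent flux distance: {d_earthlike:.4f} AU")
# ~0.023 AU, which falls inside the 0.011-0.059 AU span of the
# seven planets, consistent with several of them lying in or
# near the habitable zone.
```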
Because the planets are most likely synchronised to their host star, any water present could become trapped on the planets' night sides and would be unavailable to support life unless heat transport by the atmosphere or tidal heating are intense enough to melt ice. Moons No moons with a size comparable to Earth's have been detected in the TRAPPIST-1 system, and they are unlikely in such a densely packed planetary system. This is because moons would likely be either destroyed by their planet's gravity after entering its Roche limit, or stripped from the planet by leaving its Hill radius; a rough Hill-radius estimate is sketched below. Although the TRAPPIST-1 planets appear in an analysis of potential exomoon hosts, they do not appear in the list of habitable-zone exoplanets that could host a moon for at least one Hubble time, a timeframe slightly longer than the current age of the Universe. Despite these factors, it is possible the planets could host moons. Magnetic effects The TRAPPIST-1 planets are expected to be within the Alfvén surface of their host star, the area around the star within which any planet would directly magnetically interact with the corona of the star, possibly destabilising any atmosphere the planet has. Stellar energetic particles would not create a substantial radiation hazard for organisms on TRAPPIST-1 planets if their atmospheres reached sufficiently high pressures. Estimates of radiation fluxes have considerable uncertainties due to the lack of knowledge about the structure of TRAPPIST-1's magnetic field. Induction heating from the star's time-varying electrical and magnetic fields may occur on its planets, but it would make no substantial contribution to their energy balance and is vastly exceeded by tidal heating. Formation history The TRAPPIST-1 planets most likely formed further from the star and migrated inwards, although it is possible they formed in their current locations. According to the most popular theory on the formation of the TRAPPIST-1 planets (Ormel et al. (2017)), the planets formed when a streaming instability at the water-ice line gave rise to precursor bodies, which accumulated additional fragments and migrated inwards, eventually giving rise to planets. The migration may initially have been fast and later slowed, and tidal effects may have further influenced the formation processes. The distribution of the fragments would have controlled the final mass of the planets, which would consist of approximately 10% water, consistent with observational inference. Resonant chains of planets like those of TRAPPIST-1 usually become unstable when the gas disk that gave rise to them dissipates, but in this case, the planets remained in resonance. The resonance may either have been present from the system's formation and preserved as the planets simultaneously moved inwards, or have formed later when inward-migrating planets accumulated at the outer edge of the gas disk and interacted with each other. Inward-migrating planets would contain substantial amounts of water—too much for it to entirely escape—whereas planets that formed in their current location would most likely lose all water. According to Flock et al. (2019), the orbital distance of the innermost planet TRAPPIST-1b is consistent with the expected radius of an inward-moving planet around a star that was one order of magnitude brighter in the past, and with the cavity in the protoplanetary disc created by TRAPPIST-1's magnetic field. Alternatively, TRAPPIST-1h may have formed in or close to its current location.
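Returning to the moon question above, a minimal Hill-radius estimate shows how little gravitational room each planet has. The sketch assumes an Earth-mass planet (about 3 × 10⁻⁶ solar masses, an illustrative value rather than a measured one) at the innermost quoted distance:

```python
# Hill radius r_H ~ a * (m / (3 M))**(1/3): the region within which
# a planet, rather than its star, dominates a moon's motion.
AU_KM = 1.496e8

a = 0.0115           # semi-major axis in AU, from the text
m_planet = 3.0e-6    # assumed Earth-like planet mass, in solar masses
m_star = 0.09        # stellar mass in solar masses, from the text

r_hill_au = a * (m_planet / (3 * m_star)) ** (1 / 3)
print(f"Hill radius: {r_hill_au * AU_KM:,.0f} km")
# Roughly 4e4 km, about a tenth of the Moon's orbital radius around
# Earth, illustrating why large moons are unlikely in such a compact
# system.
```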
The presence of other bodies and planetesimals early in the system's history would have destabilised the TRAPPIST-1 planets' resonance if the bodies were massive enough. Raymond et al. (2021) concluded the TRAPPIST-1 planets assembled in one to two million years, after which time little additional mass was accreted. This would limit any late delivery of water to the planets and also implies the planets cleared the neighbourhood of any additional material. The lack of giant impact events (the rapid formation of the planets would have quickly exhausted pre-planetary material) would have helped the planets preserve their volatile materials once the planet formation process was complete. Due to a combination of high insolation, the greenhouse effect of water vapour atmospheres and remnant heat from the process of planet assembly, the TRAPPIST-1 planets would likely have initially had molten surfaces. Eventually the surfaces would cool until the magma oceans solidified, which in the case of TRAPPIST-1b may have taken between a few million and a few billion years. The outer planets would then have become cold enough for water vapour to condense. List of planets TRAPPIST-1b TRAPPIST-1b has a semi-major axis of 0.0115 astronomical units and an orbital period of 1.51 Earth days. It is tidally locked to its star. The planet is outside the habitable zone; its expected irradiation is more than four times that of Earth, and the James Webb Space Telescope (JWST) has measured its day-side brightness temperature. TRAPPIST-1b has a slightly larger measured radius and mass than Earth, but estimates of its density imply it does not exclusively consist of rock. Owing to its high black-body temperature, TRAPPIST-1b may have had a runaway greenhouse effect similar to that of Venus; JWST observations indicate that it has either no atmosphere at all or one nearly devoid of CO2. Based on several climate models, the planet would have been desiccated by TRAPPIST-1's stellar wind and radiation; it could be quickly losing hydrogen and therefore any hydrogen-dominated atmosphere. Water, if any exists, could persist only in specific settings on the planet, whose surface may be hot enough to make TRAPPIST-1b a candidate magma ocean planet. According to JWST observations, the planet has an albedo of about zero. TRAPPIST-1c TRAPPIST-1c orbits its star every 2.42 Earth days. It is close enough to TRAPPIST-1 to be tidally locked. JWST observations have ruled out the existence of Venus-like atmospheres, or CO2-rich atmospheres without a temperature inversion. Airlessness is possible, but only if the surface is subject to rapid volcanic overprinting, which is expected given the amount of tidal heating; water vapour-rich or oxygen-rich atmospheres also remain possible. These data imply that, relative to Earth or Venus, TRAPPIST-1c has a lower carbon content. TRAPPIST-1c is outside the habitable zone, as it receives about twice as much stellar irradiation as Earth and thus either is undergoing or has undergone a runaway greenhouse effect. Based on several climate models, the planet would have been desiccated by TRAPPIST-1's stellar wind and radiation. TRAPPIST-1c could harbour water only in specific settings on its surface. Observations in 2017 showed no escaping hydrogen, but observations by the Hubble Space Telescope (HST) in 2020 indicated that hydrogen may be escaping.
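Before moving down the planet list, a zero-albedo equilibrium-temperature sketch for TRAPPIST-1b ties together figures quoted above. The stellar radius (12% of the Sun's) and the 0.0115 AU orbit come from the text; the effective temperature of roughly 2,550 K is the approximate value quoted in the description, used here as an assumption:

```python
import math

T_STAR = 2550.0               # assumed effective temperature in kelvin
R_STAR = 0.12 * 6.957e8       # stellar radius in metres (12% of the Sun)
A_ORBIT = 0.0115 * 1.496e11   # semi-major axis of TRAPPIST-1b in metres
ALBEDO = 0.0                  # JWST observations suggest near-zero albedo

# Equilibrium temperature of a planet in radiative balance:
# T_eq = T_star * sqrt(R_star / (2 a)) * (1 - albedo)**0.25
t_eq = T_STAR * math.sqrt(R_STAR / (2 * A_ORBIT)) * (1 - ALBEDO) ** 0.25
print(f"T_eq of TRAPPIST-1b: {t_eq:.0f} K")
# ~400 K under these assumptions: hot enough to place the planet
# outside the habitable zone, as the text notes.
```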
TRAPPIST-1d TRAPPIST-1d has an orbital period of 4.05 Earth days. It is more massive but less dense than Mars. Based on fluid dynamical arguments, TRAPPIST-1d is expected to have weak temperature gradients on its surface if it is tidally locked, and may have significantly different stratospheric dynamics than that of Earth. Several climate models suggest that the planet may or may not have been desiccated by TRAPPIST-1's stellar wind and radiation; density estimates, if confirmed, indicate it is not dense enough to consist solely of rock. The current state of TRAPPIST-1d depends on its rotation and climatic factors like cloud feedback; it is close to the inner edge of the habitable zone, but whether it has liquid water or instead a runaway greenhouse effect (which would render it uninhabitable) depends on detailed atmospheric conditions. Water could persist in specific settings on the planet. TRAPPIST-1e TRAPPIST-1e orbits its star every 6.10 Earth days. It has a density similar to that of Earth. Based on several climate models, the planet is the most likely of the system to have retained its water, and the most likely to have liquid water for many climate states. A dedicated climate model project called TRAPPIST-1 Habitable Atmosphere Intercomparison (THAI) has been launched to study its potential climate states. Based on observations of its Lyman-alpha radiation emissions, TRAPPIST-1e may be losing hydrogen. TRAPPIST-1e is in a comparable position within the habitable zone to that of Proxima Centauri b, which also has an Earth-like density. TRAPPIST-1e could have retained masses of water equivalent to several of Earth's oceans. Moderate quantities of carbon dioxide could warm TRAPPIST-1e to temperatures suitable for the presence of liquid water. TRAPPIST-1f TRAPPIST-1f orbits its star every 9.21 Earth days. It is likely too distant from its host star to sustain liquid water, being instead an entirely glaciated snowball planet that might host a subsurface ocean. Moderate quantities of CO2 could warm TRAPPIST-1f to temperatures suitable for the presence of liquid water. Lakes or ponds with liquid water might form in places where tidal heating is concentrated. TRAPPIST-1f may have retained masses of water equivalent to several of Earth's oceans, which could comprise up to half of the planet's mass; it could thus be an ocean planet. TRAPPIST-1g TRAPPIST-1g orbits its star every 12.4 Earth days. It is likely too distant from its host star to sustain liquid water, being instead a snowball planet that might host a subsurface ocean. Moderate quantities of CO2 or internal heat from radioactive decay and tidal heating may warm its surface to above the melting point of water. TRAPPIST-1g may have retained masses of water equivalent to several of Earth's oceans; density estimates of the planet, if confirmed, indicate it is not dense enough to consist solely of rock. Up to half of its mass may be water. TRAPPIST-1h TRAPPIST-1h is the system's least massive known planet and orbits its star every 18.9 Earth days. It is likely too distant from its host star to sustain liquid water and may be a snowball planet, or have a methane/nitrogen atmosphere resembling that of Titan. It might host a subsurface ocean.
Large quantities of CO2, hydrogen or methane, or internal heat from radioactive decay and tidal heating, would be needed to warm TRAPPIST-1h to the point where liquid water could exist. TRAPPIST-1h could have retained masses of water equivalent to several of Earth's oceans. Data table Potential planetary atmospheres The existence of an atmosphere around TRAPPIST-1b has been ruled out by James Webb Space Telescope observations; there is no comparable evidence for the other planets in the system, but atmospheres are not ruled out and could be detected in the future. The outer planets are more likely to have atmospheres than the inner planets. Several studies have simulated how different atmospheric scenarios would look to observers, and the chemical processes underpinning these atmospheric compositions. The visibility of an exoplanet and of its atmosphere scales with the inverse square of the radius of its host star. Detection of individual components of the atmospheres—in particular CO2, ozone and water—would also be possible, although different components would require different conditions and different numbers of transits. Contamination of the atmospheric signals by patterns in the stellar photosphere is a further impediment to detection. The existence of atmospheres around TRAPPIST-1's planets depends on the balance between the amount of atmosphere initially present, its rate of evaporation, and the rate at which it is built back up by meteorite impacts, incoming material from a protoplanetary disk, and outgassing and volcanic activity. Impact events may be particularly important in the outer planets because they can both add and remove volatiles; addition is likely dominant in the outermost planets where impact velocities are slower. The formation conditions of the planets would give them large initial quantities of volatile materials, including oceans over 100 times larger than those of Earth. If the planets are tidally locked to TRAPPIST-1, surfaces that permanently face away from the star can cool sufficiently for any atmosphere to freeze out on the night side. This frozen-out atmosphere could be recycled through glacier-like flows to the day side with assistance from tidal or geothermal heating from below, or could be stirred by impact events. These processes could allow an atmosphere to persist. In a carbon dioxide (CO2) atmosphere, carbon-dioxide ice is denser than water ice, under which it tends to be buried. CO2–water compounds named clathrates can form. Further complications are a potential runaway feedback loop between melting ice and evaporation, and the greenhouse effect. Numerical modelling and observations constrain the properties of hypothetical atmospheres around TRAPPIST-1 planets: Theoretical calculations and observations have ruled out the possibility that the TRAPPIST-1 planets have hydrogen-rich or helium-rich atmospheres. Hydrogen-rich exospheres may be detectable but have not been reliably detected, except perhaps for TRAPPIST-1b and 1c by Bourrier et al. (2017). Water-dominated atmospheres, though suggested by some density estimates, are improbable for the planets because they are expected to be unstable under the conditions around TRAPPIST-1, especially early in the star's life. The spectral properties of the planets imply they do not have a cloud-free, water-rich atmosphere. Oxygen-dominated atmospheres can form when radiation splits water into hydrogen and oxygen, and the hydrogen escapes due to its lighter mass.
The existence of such an atmosphere and its mass depend on the initial water mass, on whether the oxygen is dragged out of the atmosphere by escaping hydrogen, and on the state of the planet's surface; a partially molten surface could absorb sufficient quantities of oxygen to remove an atmosphere. Atmospheres formed by ammonia and/or methane near TRAPPIST-1 would be destroyed by the star's radiation at a rate sufficient to quickly remove them. The rate at which ammonia or methane is produced, possibly by organisms, would have to be considerably larger than that on Earth to sustain such an atmosphere. It is possible the development of organic hazes from ammonia or methane photolysis could shield the remaining molecules from degradation caused by radiation. Ducrot et al. (2020) interpreted observational data as implying methane-dominated atmospheres are unlikely around TRAPPIST-1 planets. Nitrogen-dominated atmospheres are particularly unstable with respect to atmospheric escape, especially on the innermost planets, although the presence of CO2 may slow evaporation. Unless the TRAPPIST-1 planets initially contained far more nitrogen than Earth, they are unlikely to have retained such atmospheres. CO2-dominated atmospheres escape slowly because CO2 effectively radiates away energy and thus does not readily reach escape velocity; on a synchronously rotating planet, however, CO2 can freeze out on the night side, especially if there are no other gases in the atmosphere. The decomposition of CO2 caused by radiation could yield substantial amounts of oxygen, carbon monoxide (CO), and ozone. Theoretical modelling by Krissansen-Totton and Fortney (2022) suggests the inner planets most likely have oxygen-and-CO2-rich atmospheres, if any. If the planets have an atmosphere, the amount of precipitation, its form and its location would be determined by the presence and position of mountains and oceans, and by the rotation period. Planets in the habitable zone are expected to have an atmospheric circulation regime resembling Earth's tropical regions, with largely uniform temperatures. Whether greenhouse gases can accumulate on the outer TRAPPIST-1 planets in sufficient quantities to warm them to the melting point of water is controversial; on a synchronously rotating planet, CO2 could freeze and precipitate on the night side, and ammonia and methane would be destroyed by XUV radiation from TRAPPIST-1. Carbon dioxide freeze-out can occur only on the outermost planets unless special conditions are met, and other volatiles do not freeze out. Stability The emission of extreme ultraviolet (XUV) radiation by a star has an important influence on the stability of its planets' atmospheres, their composition and the habitability of their surfaces. It can cause the ongoing removal of atmospheres from planets. XUV radiation-induced atmospheric escape has been observed on gas giants. M dwarfs emit large amounts of XUV radiation; TRAPPIST-1 and the Sun emit about the same amount of XUV radiation, and because TRAPPIST-1's planets are much closer to the star than the Sun's, they receive much more intense irradiation. TRAPPIST-1 has also been emitting radiation for much longer than the Sun. The process of atmospheric escape has been modelled mainly in the context of hydrogen-rich atmospheres, and little quantitative research has been done on atmospheres of other compositions such as water and CO2.
TRAPPIST-1 has moderate to high stellar activity, and this may be another difficulty for the persistence of atmospheres and water on the planets: dwarfs of the spectral class M have intense flares; TRAPPIST-1 averages one flare every two days and about four to six superflares per year. Such flares would have only small impacts on atmospheric temperatures but would substantially affect the stability and chemistry of atmospheres. According to Samara, Patsourakos and Georgoulis (2021), the TRAPPIST-1 planets are unlikely to be able to retain atmospheres against coronal mass ejections. The stellar wind from TRAPPIST-1 may have a pressure 1,000 times larger than that of the Sun at Earth's orbit, which could destabilise the atmospheres of the star's planets out to planet f. The pressure would push the wind deep into the atmospheres, facilitating loss of water and evaporation of the atmospheres. Stellar wind-driven escape in the Solar System is largely independent of planetary properties such as mass, scaling instead with the stellar wind mass flux impacting the planet. Stellar wind from TRAPPIST-1 could remove the atmospheres of its planets on a timescale of 100 million to 10 billion years. Ohmic heating of the atmospheres of TRAPPIST-1e, f, and g amounts to five to fifteen times the heating from XUV radiation; if the heat is effectively absorbed, it could destabilise the atmospheres. The star's history also influences the atmospheres of its planets. Immediately after its formation, TRAPPIST-1 would have been in a pre-main-sequence state, which may have lasted between hundreds of millions of years and two billion years. While in this state, it would have been considerably brighter than it is today, and the star's intense irradiation would have impacted the atmospheres of surrounding planets, vaporising all common volatiles such as ammonia, CO2, sulfur dioxide and water. Thus, all of the system's planets would have been heated to a runaway greenhouse for at least part of their existence. The XUV radiation would have been even higher during the pre-main-sequence stage. Possible life Life may be possible in the TRAPPIST-1 system, and some of the star's planets are considered promising targets for its detection. On the basis of atmospheric stability, TRAPPIST-1e is theoretically the planet most likely to harbour life, although the probability that it does is considerably less than that of Earth. An array of factors is at play: Due to multiple interactions, TRAPPIST-1 planets are expected to have intense tides. If oceans are present, the tides could: lead to alternate flooding and drying of coastal landscapes, triggering chemical reactions conducive to the development of life; favour the evolution of biological rhythms such as the day-night cycle that otherwise would not develop in a synchronously rotating planet; mix oceans, thus supplying and redistributing nutrients; and stimulate periodic expansions of marine organisms similar to red tides on Earth. TRAPPIST-1 may not produce sufficient quantities of radiation for photosynthesis to support an Earth-like biosphere. Mullan and Bais (2018) speculated that radiation from flares may increase the photosynthetic potential of TRAPPIST-1, but according to Lingam and Loeb (2019), the potential would still be small. Due to the proximity of the TRAPPIST-1 planets, it is possible rock-encased microorganisms ripped from one planet may arrive at another planet while still viable inside the rock, allowing life to spread between the planets if it originates on one.
Too much UV radiation from a star can sterilise the surface of a planet, but too little may not allow the formation of chemical compounds that give rise to life. Inadequate production of hydroxyl radicals by low stellar UV emission may allow gases such as carbon monoxide that are toxic to higher life to accumulate in the planets' atmospheres. The possibilities range from UV fluxes from TRAPPIST-1 being unlikely to be much larger than those of early Earth—even in the event that TRAPPIST-1's emissions of UV radiation are high—to being sufficient to sterilise the planets if they do not have protective atmospheres. It is unclear which effect would predominate around TRAPPIST-1, although observations with the Kepler Space Telescope and the Evryscope telescopes indicate the UV flux may be insufficient both for the formation of life and for its sterilisation. Intense flaring activity of the host star, which could alter nearby planets' atmospheres irreversibly and significantly, has raised doubts about the habitability of the system. Although initial water reservoirs could have been lost during the early life of the system due to the stellar activity, a potential subsequent water delivery event, like the late heavy bombardment in the Solar System, could replenish planetary water reservoirs. The outer planets in the TRAPPIST-1 system could host subsurface oceans similar to those of Enceladus and Europa in the Solar System. Chemolithotrophy, the growth of organisms based on non-organic reduced compounds, could sustain life in such oceans. Very deep oceans may be inimical to the development of life. Some planets of the TRAPPIST-1 system may have enough water to completely submerge their surfaces. If so, this would have important effects on the possibility of life developing on the planets, and on their climates, as weathering would decrease, starving the oceans of nutrients like phosphorus as well as potentially leading to the accumulation of carbon dioxide in their atmospheres. TRAPPIST-1 is well suited to the search for technosignatures that would indicate the existence of past or present technology in the TRAPPIST-1 system. Searches in 2017 found only signals coming from Earth; others in 2024 found nothing, although their sensitivity was low. In less than two millennia, Earth will be transiting in front of the Sun from the viewpoint of TRAPPIST-1, making the detection of life on Earth from TRAPPIST-1 possible. Reception and scientific importance Public reaction and cultural impact The discovery of the TRAPPIST-1 planets drew widespread attention in major world newspapers, social media, streaming television and websites. At the time, the discovery of TRAPPIST-1 led to the largest single-day web traffic to the NASA website. NASA started a public campaign on Twitter to find names for the planets, which drew responses of varying seriousness, although the names of the planets will be decided by the International Astronomical Union. The dynamics of the TRAPPIST-1 planetary system have been represented as music, such as Tim Pyle's Trappist Transits, in Isolation's single Trappist-1 (A Space Anthem) and Leah Asher's piano work TRAPPIST-1. The alleged discovery of an SOS signal from TRAPPIST-1 was an April Fools prank by researchers at the High Energy Stereoscopic System in Namibia. In 2018, Aldo Spadon created a giclée (digital artwork) named "TRAPPIST-1 Planetary System as seen from Space". A website was dedicated to the TRAPPIST-1 system.
Exoplanets are often featured in science-fiction works; books, comics and video games have featured the TRAPPIST-1 system, the earliest being The Terminator, a short story by Swiss author Laurence Suhner published in the academic journal that announced the system's discovery. At least one conference was organised to recognise works of fiction featuring TRAPPIST-1. The planets have been used as the basis of science education competitions and school projects, and their surfaces have been portrayed in artistic imagery. Websites offering TRAPPIST-1-like planets as settings of virtual reality simulations exist, such as the "Exoplanet Travel Bureau" and the "Exoplanets Excursion", both by NASA. Scientific accuracy has been a point of discussion for such cultural depictions of TRAPPIST-1 planets. Scientific importance TRAPPIST-1 has drawn intense scientific interest. Its planets are the most easily studied exoplanets within their star's habitable zone, owing to their relative closeness, the small size of their host star, and the fact that, from Earth's perspective, they frequently pass in front of their host star. Future observations with space-based observatories and ground-based facilities may allow further insights into their properties such as density, atmospheres and biosignatures. TRAPPIST-1 planets are considered an important observation target for the James Webb Space Telescope and other telescopes under construction; JWST began investigating the TRAPPIST-1 planets in 2023. Together with the discovery of Proxima Centauri b, the discovery of the TRAPPIST-1 planets and the fact that three of the planets are within the habitable zone have led to an increase in studies on planetary habitability. The planets are considered prototypical for research on the habitability of M dwarfs. The star has been the subject of detailed studies of its various aspects, including the possible effects of vegetation on its planets; the possibility of detecting oceans on its planets using starlight reflected off their surfaces; possible efforts to terraform its planets; and difficulties any inhabitants of the planets would encounter in discovering certain laws of physics (general relativity, Kepler's laws and the law of gravitation) and in interstellar travel. The role EU funding played in the discovery of TRAPPIST-1 has been cited as an example of the importance of EU projects, and the involvement of a Moroccan observatory and a Saudi Arabian university as an indication of the Islamic and Arab world's role in science. The original discoverers were affiliated with universities spanning Africa, Europe and North America, and the discovery of TRAPPIST-1 is considered to be an example of the importance of co-operation between observatories. It is also one of the major astronomical discoveries from Chilean observatories. Exploration TRAPPIST-1 is too distant from Earth to be reached by humans with current or expected technology. Spacecraft mission designs using present-day rockets and gravity assists would need hundreds of millennia to reach TRAPPIST-1; even a theoretical interstellar probe travelling at near the speed of light would need decades to reach the star. The speculative Breakthrough Starshot proposal for sending small, laser-accelerated, uncrewed probes would require around two centuries to reach TRAPPIST-1.
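The two-century figure for Breakthrough Starshot follows from simple arithmetic, assuming the roughly 40 light-year distance and the proposal's often-quoted cruise speed of about 20% of light speed (the speed is an assumption here, not a figure from the text):

```python
DISTANCE_LY = 40.0   # approximate distance to TRAPPIST-1 in light-years
CRUISE_SPEED = 0.20  # assumed probe speed as a fraction of light speed

travel_time_years = DISTANCE_LY / CRUISE_SPEED
print(f"Travel time: {travel_time_years:.0f} years")  # ~200 years
```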
Physical sciences
Notable stars
Astronomy
38403118
https://en.wikipedia.org/wiki/Congener%20%28chemistry%29
Congener (chemistry)
In chemistry, congeners are chemical substances "related to each other by origin, structure, or function". Common origin and structure Any significant quantity of a polyhalogenated compound is by default a blend of multiple molecule types, because each molecule forms independently and chlorine and bromine do not strongly select which site(s) they bond to. Polychlorinated biphenyls (PCBs) are a family of 209 congeners. Polybrominated biphenyls and polychlorinated diphenyl ethers are also families of 209 congeners. Similarly, polychlorinated dibenzodioxins, polychlorinated dibenzofurans, polychlorinated terphenyls, polychlorinated naphthalenes, polychloro phenoxy phenols, and polybrominated diphenyl ethers (PBDEs) (pentabromodiphenyl ether, octabromodiphenyl ether, decabromodiphenyl ether), etc. are also groups of congeners. Common origin Congeners of alcohol are substances other than ethanol (desirable or undesirable) that are also produced during fermentation. Congeners of oleic acid can modify cell membrane behavior, protecting against tumors or having effects on blood pressure. Common structure Congeners can refer to similar compounds in which an element is substituted by another of similar valence, yielding molecules with similar structures. Examples: potassium chloride and sodium chloride may be considered congeners, as may potassium chloride and potassium fluoride; likewise hydrogen peroxide (HOOH), hydrogen thioperoxide (HSOH), and hydrogen disulfide (HSSH). Structural analogs are often isoelectronic. Other Congeners can also refer to the various oxidation states of a given element in a compound. For example, titanium(II) chloride (titanium dichloride), titanium(III) chloride (titanium trichloride), and titanium(IV) chloride (titanium tetrachloride) may be considered congeners. Congeners can also refer to other elements in the same group of the periodic table. For example, congeners of the Group 11 element copper are silver and gold, sometimes found together in the same ores (porphyry copper deposits) due to their chemical similarity.
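The 209-congener count for PCBs can be reproduced by brute force: enumerate every chlorination pattern of biphenyl's ten substitutable positions and identify patterns related by the molecule's symmetries. The symmetry group assumed below (flipping either ring about the inter-ring bond and swapping the two rings) is a modelling choice for this sketch:

```python
from itertools import product

# Positions 0-4 are ring-A carbons (2,3,4,5,6); 5-9 are ring-B carbons.
GENERATORS = [
    (4, 3, 2, 1, 0, 5, 6, 7, 8, 9),  # flip ring A: 2<->6, 3<->5
    (0, 1, 2, 3, 4, 9, 8, 7, 6, 5),  # flip ring B
    (5, 6, 7, 8, 9, 0, 1, 2, 3, 4),  # swap the two rings
]

def closure(generators):
    """All permutations reachable as words in the generators."""
    group = {tuple(range(10))}
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for h in generators:
            gh = tuple(g[h[i]] for i in range(10))
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

GROUP = closure(GENERATORS)  # 8 symmetries under these assumptions

def canonical(pattern):
    """Smallest image of a chlorination pattern under the symmetries."""
    return min(tuple(pattern[i] for i in g) for g in GROUP)

distinct = {canonical(p) for p in product((0, 1), repeat=10)}
print(len(distinct) - 1)  # exclude unsubstituted biphenyl -> 209
```

Counting orbits this way (equivalently, via Burnside's lemma) gives 210 patterns including plain biphenyl, hence 209 chlorinated congeners, matching the figure quoted above.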
Physical sciences
Substance
Chemistry
38407148
https://en.wikipedia.org/wiki/Quantum%20contextuality
Quantum contextuality
Quantum contextuality is a feature of the phenomenology of quantum mechanics whereby measurements of quantum observables cannot simply be thought of as revealing pre-existing values. Any attempt to do so in a realistic hidden-variable theory leads to values that are dependent upon the choice of the other (compatible) observables which are simultaneously measured (the measurement context). More formally, the measurement result (assumed pre-existing) of a quantum observable is dependent upon which other commuting observables are within the same measurement set. Contextuality was first demonstrated to be a feature of quantum phenomenology by the Bell–Kochen–Specker theorem. The study of contextuality has developed into a major topic of interest in quantum foundations, as the phenomenon crystallises certain non-classical and counter-intuitive aspects of quantum theory. A number of powerful mathematical frameworks have been developed to study and better understand contextuality, from the perspectives of sheaf theory, graph theory, hypergraphs, algebraic topology, and probabilistic couplings. Nonlocality, in the sense of Bell's theorem, may be viewed as a special case of the more general phenomenon of contextuality, in which measurement contexts contain measurements that are distributed over spacelike separated regions. This follows from Fine's theorem. Quantum contextuality has been identified as a source of quantum computational speedups and quantum advantage in quantum computing. Contemporary research has increasingly focused on exploring its utility as a computational resource. Kochen and Specker The need for contextuality was discussed informally in 1935 by Grete Hermann, but it was more than 30 years later when Simon B. Kochen and Ernst Specker, and separately John Bell, constructed proofs that any realistic hidden-variable theory able to explain the phenomenology of quantum mechanics is contextual for systems of Hilbert space dimension three and greater. The Kochen–Specker theorem proves that realistic noncontextual hidden-variable theories cannot reproduce the empirical predictions of quantum mechanics. Such a theory would suppose the following. All quantum-mechanical observables may be simultaneously assigned definite values (this is the realism postulate, which is false in standard quantum mechanics, since there are observables that are indefinite in every given quantum state). These global value assignments may deterministically depend on some "hidden" classical variable, which in turn may vary stochastically for some classical reason (as in statistical mechanics). The measured assignments of observables may therefore finally stochastically change. This stochasticity is, however, epistemic and not ontic, as in the standard formulation of quantum mechanics. Value assignments pre-exist and are independent of the choice of any other observables that, in standard quantum mechanics, commute with the measured observable and are measured alongside it. Some functional constraints on the assignments of values for compatible observables are assumed (e.g., they are additive and multiplicative; there are, however, several versions of this functional requirement). In addition, Kochen and Specker constructed an explicitly noncontextual hidden-variable model for the two-dimensional qubit case in their paper on the subject, thereby completing the characterisation of the dimensionality of quantum systems that can demonstrate contextual behaviour.
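A compact way to see the impossibility of noncontextual value assignments is the Peres–Mermin magic square, a later and simpler proof than the original Kochen–Specker construction (the example below is illustrative and is not the construction described above). Each row and column of a 3 × 3 grid of commuting two-qubit observables multiplies to +I, except the third column, which multiplies to −I; a brute-force search confirms that no assignment of ±1 values can satisfy all six product constraints:

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The Peres-Mermin square: observables commute within each row/column.
square = [
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(X, Z), np.kron(Z, X), np.kron(Y, Y)],
]

# Verify the operator identities: each row multiplies to +I, and each
# column multiplies to +I except the third, which multiplies to -I.
I4 = np.eye(4)
for row in square:
    assert np.allclose(row[0] @ row[1] @ row[2], I4)
for c in range(3):
    col = square[0][c] @ square[1][c] @ square[2][c]
    assert np.allclose(col, -I4 if c == 2 else I4)

# Try every noncontextual assignment of +/-1 to the nine observables.
def satisfies(v):
    rows_ok = all(v[r][0] * v[r][1] * v[r][2] == 1 for r in range(3))
    cols_ok = all(v[0][c] * v[1][c] * v[2][c] == (-1 if c == 2 else 1)
                  for c in range(3))
    return rows_ok and cols_ok

found = any(satisfies([vals[0:3], vals[3:6], vals[6:9]])
            for vals in product((1, -1), repeat=9))
print("noncontextual assignment exists:", found)  # False
```

The parity mismatch (the nine row products multiply to +1, the nine column products to −1) is exactly the obstruction the search exposes.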
Bell's proof invoked a weaker version of Gleason's theorem, reinterpreting the theorem to show that quantum contextuality exists only in Hilbert space dimension greater than two. Frameworks for contextuality Sheaf-theoretic framework The sheaf-theoretic, or Abramsky–Brandenburger, approach to contextuality initiated by Samson Abramsky and Adam Brandenburger is theory-independent and can be applied beyond quantum theory to any situation in which empirical data arises in contexts. As well as being used to study forms of contextuality arising in quantum theory and other physical theories, it has also been used to study formally equivalent phenomena in logic, relational databases, natural language processing, and constraint satisfaction. In essence, contextuality arises when empirical data is locally consistent but globally inconsistent. This framework gives rise in a natural way to a qualitative hierarchy of contextuality: (Probabilistic) contextuality may be witnessed in measurement statistics, e.g. by the violation of an inequality. A representative example is the KCBS proof of contextuality. Logical contextuality may be witnessed in the "possibilistic" information about which outcome events are possible and which are not possible. A representative example is Hardy's proof of nonlocality. Strong contextuality is a maximal form of contextuality. Whereas (probabilistic) contextuality arises when measurement statistics cannot be reproduced by a mixture of global value assignments, strong contextuality arises when no global value assignment is even compatible with the possible outcome events. A representative example is the original Kochen–Specker proof of contextuality. Each level in this hierarchy strictly includes the next. An important intermediate level that lies strictly between the logical and strong contextuality classes is all-versus-nothing contextuality, a representative example of which is the Greenberger–Horne–Zeilinger proof of nonlocality. Graph and hypergraph frameworks Adán Cabello, Simone Severini, and Andreas Winter introduced a general graph-theoretic framework for studying contextuality of different physical theories. Within this framework experimental scenarios are described by graphs, and certain invariants of these graphs were shown to have particular physical significance. One way in which contextuality may be witnessed in measurement statistics is through the violation of noncontextuality inequalities (also known as generalized Bell inequalities). With respect to certain appropriately normalised inequalities, the independence number, Lovász number, and fractional packing number of the graph of an experimental scenario provide tight upper bounds on the degree to which classical theories, quantum theory, and generalised probabilistic theories, respectively, may exhibit contextuality in an experiment of that kind. A more refined framework based on hypergraphs rather than graphs is also used. Contextuality-by-default (CbD) framework In the CbD approach, developed by Ehtibar Dzhafarov, Janne Kujala, and colleagues, (non)contextuality is treated as a property of any system of random variables, defined as a set in which each random variable R_c^q is labeled by its content q (the property it measures) and its context c (the set of recorded circumstances under which it is recorded, including but not limited to which other random variables it is recorded together with); R_c^q stands for "q is measured in c".
The variables within a context are jointly distributed, but variables from different contexts are stochastically unrelated, defined on different sample spaces. A (probabilistic) coupling of the system R is defined as a system S in which all variables are jointly distributed and, in any context c, S_c^q and R_c^q are identically distributed. The system is considered noncontextual if it has a coupling S in which the probabilities Pr[S_c^q = S_{c'}^q] are maximal possible for all contexts c, c' and contents q measured in both. If such a coupling does not exist, the system is contextual. For the important class of cyclic systems of dichotomous (±1) random variables of rank n, it has been shown that such a system is noncontextual if and only if s_odd(⟨R_1R_2⟩, ⟨R_2R_3⟩, ..., ⟨R_nR_1⟩) ≤ Δ + n − 2, where the arguments are the product expectations within the n contexts, Δ is the sum over contents of the absolute differences between the two expectations of the same content in its two contexts, and s_odd(x_1, ..., x_n) is the maximum of Σ λ_i x_i with the maximum taken over all sign coefficients λ_i = ±1 whose product is −1. If R_c^q and R_{c'}^q, measuring the same content in different contexts, are always identically distributed, the system is called consistently connected (satisfying the "no-disturbance" or "no-signaling" principle). Except for certain logical issues, in this case CbD specializes to traditional treatments of contextuality in quantum physics. In particular, for consistently connected cyclic systems the noncontextuality criterion above reduces to s_odd ≤ n − 2, which includes the Bell/CHSH inequality (n = 4), the KCBS inequality (n = 5), and other famous inequalities. That nonlocality is a special case of contextuality follows in CbD from the fact that being jointly distributed for random variables is equivalent to being measurable functions of one and the same random variable (this generalizes Arthur Fine's analysis of Bell's theorem). CbD essentially coincides with the probabilistic part of Abramsky's sheaf-theoretic approach if the system is strongly consistently connected, which means that the joint distributions of the variables measuring any given set of contents coincide across all contexts in which those contents are jointly measured. However, unlike most approaches to contextuality, CbD allows for inconsistent connectedness, with R_c^q and R_{c'}^q differently distributed. This makes CbD applicable to physics experiments in which the no-disturbance condition is violated, as well as to human behavior, where this condition is violated as a rule. In particular, Víctor Cervantes, Ehtibar Dzhafarov, and colleagues have demonstrated that random variables describing certain paradigms of simple decision making form contextual systems, whereas many other decision-making systems are noncontextual once their inconsistent connectedness is properly taken into account. Operational framework An extended notion of contextuality due to Robert Spekkens applies to preparations and transformations as well as to measurements, within a general framework of operational physical theories. With respect to measurements, it removes the assumption of determinism of value assignments that is present in standard definitions of contextuality. This breaks the interpretation of nonlocality as a special case of contextuality, and does not treat irreducible randomness as nonclassical. Nevertheless, it recovers the usual notion of contextuality when outcome determinism is imposed. Spekkens' contextuality can be motivated using Leibniz's law of the identity of indiscernibles. The law applied to physical systems in this framework mirrors the extended definition of noncontextuality. This was further explored by Simmons et al., who demonstrated that other notions of contextuality could also be motivated by Leibnizian principles, and could be thought of as tools enabling ontological conclusions from operational statistics.
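Returning to the cyclic-system criterion above, the s_odd test is easy to implement. The sketch below applies it to a consistently connected rank-4 (CHSH-type) system, using the standard quantum correlations that saturate the Tsirelson bound; the specific correlation values are textbook assumptions, not figures from the text:

```python
from itertools import product
from math import sqrt

def s_odd(xs):
    """Max of sum(l_i * x_i) over sign vectors with an odd number of -1s."""
    return max(sum(l * x for l, x in zip(ls, xs))
               for ls in product((1, -1), repeat=len(xs))
               if ls.count(-1) % 2 == 1)

# CHSH-type cyclic system of rank n = 4, consistently connected:
# product expectations <A1B1>, <B1A2>, <A2B2>, <B2A1> at the Tsirelson bound.
correlations = [1 / sqrt(2), 1 / sqrt(2), -1 / sqrt(2), 1 / sqrt(2)]
n = len(correlations)

value = s_odd(correlations)
print(f"s_odd = {value:.3f}, bound n - 2 = {n - 2}")
print("contextual:", value > n - 2)  # True: 2*sqrt(2) > 2
```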
Extracontextuality and extravalence Given a pure quantum state |ψ⟩, Born's rule tells that the probability to obtain another state |φ⟩ in a measurement is |⟨φ|ψ⟩|². However, such a number does not define a full probability distribution, i.e. values over a set of mutually exclusive events, summing up to 1. In order to obtain such a set one needs to specify a context, that is a complete set of commuting operators (CSCO), or equivalently a set of N orthogonal projectors |φ_i⟩⟨φ_i| that sum to the identity, where N is the dimension of the Hilbert space. Then one has Σ_i |⟨φ_i|ψ⟩|² = 1, as expected. In that sense, one can tell that a state vector alone is predictively incomplete, as long as a context has not been specified. The actual physical state, now defined by |ψ⟩ within a specified context, has been called a modality by Auffèves and Grangier. Since it is clear that |ψ⟩ alone does not define a modality, what is its status? One sees easily that |ψ⟩ is associated with an equivalence class of modalities, belonging to different contexts, but connected between themselves with certainty, even if the different CSCO observables do not commute. This equivalence class is called an extravalence class, and the associated transfer of certainty between contexts is called extracontextuality. As a simple example, the usual singlet state for two spins 1/2 can be found in the (non-commuting) CSCOs associated with the measurement of the total spin (with S = 0), or with a Bell measurement, and actually it appears in infinitely many different CSCOs, but obviously not in all possible ones. The concepts of extravalence and extracontextuality are very useful for spelling out the role of contextuality in quantum mechanics, which is not noncontextual (as classical physics would be), but not fully contextual either, since modalities belonging to incompatible (non-commuting) contexts may be connected with certainty. Starting now from extracontextuality as a postulate, the fact that certainty can be transferred between contexts, and is then associated with a given projector, is the very basis of the hypotheses of Gleason's theorem, and thus of Born's rule. Also, associating a state vector with an extravalence class clarifies its status as a mathematical tool to calculate probabilities connecting modalities, which correspond to the actual observed physical events or results. This point of view is quite useful, and it can be used everywhere in quantum mechanics. Other frameworks and extensions A form of contextuality that may be present in the dynamics of a quantum system was introduced by Shane Mansfield and Elham Kashefi, and has been shown to relate to computational quantum advantages. As a notion of contextuality that applies to transformations it is inequivalent to that of Spekkens. Examples explored to date rely on additional memory constraints which have a more computational than foundational motivation. Contextuality may be traded off against Landauer erasure to obtain equivalent advantages. Fine's theorem The Kochen–Specker theorem proves that quantum mechanics is incompatible with realistic noncontextual hidden-variable models. On the other hand, Bell's theorem proves that quantum mechanics is incompatible with factorisable hidden-variable models in an experiment in which measurements are performed at distinct spacelike separated locations. Arthur Fine showed that in the experimental scenario in which the famous CHSH inequalities and proof of nonlocality apply, a factorisable hidden-variable model exists if and only if a noncontextual hidden-variable model exists.
This equivalence was proven to hold more generally in any experimental scenario by Samson Abramsky and Adam Brandenburger. It is for this reason that we may consider nonlocality to be a special case of contextuality. Measures of contextuality Contextual fraction A number of methods exist for quantifying contextuality. One approach is by measuring the degree to which some particular noncontextuality inequality is violated, e.g. the KCBS inequality, the Yu–Oh inequality, or some Bell inequality. A more general measure of contextuality is the contextual fraction. Given a set of measurement statistics e, consisting of a probability distribution over joint outcomes for each measurement context, we may consider factoring e into a noncontextual part e^NC and some remainder e′, so that e = λe^NC + (1 − λ)e′. The maximum value of λ over all such decompositions is the noncontextual fraction of e, denoted NCF(e), while the remainder CF(e) = 1 − NCF(e) is the contextual fraction of e. The idea is that we look for a noncontextual explanation for the highest possible fraction of the data, and what is left over is the irreducibly contextual part. Indeed, for any such decomposition that maximises λ the leftover e′ is known to be strongly contextual. This measure of contextuality takes values in the interval [0,1], where 0 corresponds to noncontextuality and 1 corresponds to strong contextuality. The contextual fraction may be computed using linear programming (a brute-force version of this linear program is sketched at the end of this article). It has also been proved that CF(e) is an upper bound on the extent to which e violates any normalised noncontextuality inequality. Here normalisation means that violations are expressed as fractions of the algebraic maximum violation of the inequality. Moreover, the dual linear program to that which maximises λ computes a noncontextual inequality for which this violation is attained. In this sense the contextual fraction is a more neutral measure of contextuality, since it optimises over all possible noncontextual inequalities rather than checking the statistics against one inequality in particular. Measures of (non)contextuality within the Contextuality-by-Default (CbD) framework Several measures of the degree of contextuality in contextual systems were proposed within the CbD framework, but only one of them, denoted CNT2, has been shown to naturally extend into a measure of noncontextuality in noncontextual systems, NCNT2. This is important, because at least in the non-physical applications of CbD contextuality and noncontextuality are of equal interest. Both CNT2 and NCNT2 are defined as the L1-distance between a probability vector representing a system and the surface of the noncontextuality polytope representing all possible noncontextual systems with the same single-variable marginals. For cyclic systems of dichotomous random variables, explicit expressions are known: one for CNT2 when the system is contextual, and one for NCNT2 when it is noncontextual, the latter involving the L1-distance from the probability vector to the surface of the box circumscribing the noncontextuality polytope. More generally, NCNT2 and CNT2 are computed by means of linear programming. The same is true for other CbD-based measures of contextuality. One of them, denoted CNT3, uses the notion of a quasi-coupling, which differs from a coupling in that the probabilities in the joint distribution of its values are replaced with arbitrary reals (allowed to be negative but summing to 1).
The class of quasi-couplings maximizing the probabilities of equal values for variables measuring the same content is always nonempty, and the minimal total variation of the signed measure in this class is a natural measure of contextuality. Contextuality as a resource for quantum computing Recently, quantum contextuality has been investigated as a source of quantum advantage and computational speedups in quantum computing. Magic state distillation Magic state distillation is a scheme for quantum computing in which quantum circuits constructed only of Clifford operators, which by themselves are fault-tolerant but efficiently classically simulable, are injected with certain "magic" states that promote the computational power to universal fault-tolerant quantum computing. In 2014, Mark Howard et al. showed that contextuality characterizes magic states for qudits of odd prime dimension and for qubits with real wavefunctions. Extensions to the qubit case have been investigated by Juani Bermejo-Vega et al. This line of research builds on earlier work by Ernesto Galvão, which showed that Wigner function negativity is necessary for a state to be "magic"; it later emerged that Wigner negativity and contextuality are in a sense equivalent notions of nonclassicality. Measurement-based quantum computing Measurement-based quantum computation (MBQC) is a model for quantum computing in which a classical control computer interacts with a quantum system by specifying measurements to be performed and receiving measurement outcomes in return. The measurement statistics for the quantum system may or may not exhibit contextuality. A variety of results have shown that the presence of contextuality enhances the computational power of an MBQC. In particular, researchers have considered an artificial situation in which the power of the classical control computer is restricted to only being able to compute linear Boolean functions, i.e. to solve problems in the Parity L complexity class ⊕L. For interactions with multi-qubit quantum systems a natural assumption is that each step of the interaction consists of a binary choice of measurement which in turn returns a binary outcome. An MBQC of this restricted kind is known as an l2-MBQC. Anders and Browne In 2009, Janet Anders and Dan Browne showed that two specific examples of nonlocality and contextuality were sufficient to compute a non-linear function. This in turn could be used to boost computational power to that of a universal classical computer, i.e. to solve problems in the complexity class P. This is sometimes referred to as measurement-based classical computation. The specific examples made use of the Greenberger–Horne–Zeilinger nonlocality proof and the supra-quantum Popescu–Rohrlich box. Raussendorf In 2013, Robert Raussendorf showed more generally that access to strongly contextual measurement statistics is necessary and sufficient for an l2-MBQC to compute a non-linear function. He also showed that to compute non-linear Boolean functions with sufficiently high probability requires contextuality. Abramsky, Barbosa and Mansfield A further generalization and refinement of these results due to Samson Abramsky, Rui Soares Barbosa and Shane Mansfield appeared in 2017, proving a precise quantifiable relationship between the probability of successfully computing any given non-linear function and the degree of contextuality present in the l2-MBQC as measured by the contextual fraction.
Specifically, 1 − p_s ≥ ν(f)(1 − CF(e)), where p_s, CF(e), and ν(f) are the probability of success, the contextual fraction of the measurement statistics e, and a measure of the non-linearity of the function f to be computed, respectively. Further examples The above inequality was also shown to relate quantum advantage in non-local games to the degree of contextuality required by the strategy and an appropriate measure of the difficulty of the game. Similarly the inequality arises in a transformation-based model of quantum computation analogous to l2-MBQC, where it relates the degree of sequential contextuality present in the dynamics of the quantum system to the probability of success and the degree of non-linearity of the target function. Preparation contextuality has been shown to enable quantum advantages in cryptographic random-access codes and in state-discrimination tasks. In classical simulations of quantum systems, contextuality has been shown to incur memory costs.
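As noted earlier, the contextual fraction is computable by linear programming. The following self-contained sketch (our own illustration; the scenario encoding, function names, and the use of scipy are ours, not the cited authors') sets up that program for the standard two-party, two-setting, two-outcome scenario, using the Popescu–Rohrlich box as test data:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Alice measures a0 or a1, Bob measures b0 or b1; outcomes are 0 or 1.
measurements = ["a0", "a1", "b0", "b1"]
contexts = [("a0", "b0"), ("a0", "b1"), ("a1", "b0"), ("a1", "b1")]

# Empirical model e: the (superquantum) Popescu-Rohrlich box, which is
# strongly contextual, so CF(e) should come out as exactly 1.
def e(ctx, outcome):
    want_disagree = (ctx == ("a1", "b1"))
    disagree = outcome[0] != outcome[1]
    return 0.5 if disagree == want_disagree else 0.0

# One LP variable per global deterministic assignment g: measurements -> {0,1}.
assignments = list(product((0, 1), repeat=len(measurements)))

A_ub, b_ub = [], []
for ctx, outcome in product(contexts, product((0, 1), repeat=2)):
    # The total weight of assignments that restrict to this context-outcome
    # pair must not exceed its empirical probability.
    row = [1.0 if (g[measurements.index(ctx[0])],
                   g[measurements.index(ctx[1])]) == outcome else 0.0
           for g in assignments]
    A_ub.append(row)
    b_ub.append(e(ctx, outcome))

# Maximize lambda = the total weight of the noncontextual part.
res = linprog(c=[-1.0] * len(assignments), A_ub=np.array(A_ub),
              b_ub=np.array(b_ub), bounds=[(0, None)] * len(assignments))
ncf = max(0.0, -res.fun)
print("NCF(e) =", round(ncf, 6), " CF(e) =", round(1 - ncf, 6))  # 0.0 and 1.0
```

Replacing the PR-box data with quantum statistics at the Tsirelson bound would be expected to yield CF(e) = √2 − 1 ≈ 0.414.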
Physical sciences
Quantum mechanics
Physics
44065971
https://en.wikipedia.org/wiki/Blockchain
Blockchain
A blockchain is a distributed ledger with growing lists of records (blocks) that are securely linked together via cryptographic hashes. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree, where data nodes are represented by leaves). Since each block contains information about the previous block, they effectively form a chain (compare linked list data structure), with each additional block linking to the ones before it. Consequently, blockchain transactions are resistant to alteration because, once recorded, the data in any given block cannot be changed retroactively without altering all subsequent blocks and obtaining network consensus to accept these changes. This protects blockchains against nefarious activities such as creating assets "out of thin air", double-spending, counterfeiting, fraud, and theft. Blockchains are typically managed by a peer-to-peer (P2P) computer network for use as a public distributed ledger, where nodes collectively adhere to a consensus algorithm protocol to add and validate new transaction blocks. Although blockchain records are not unalterable, since blockchain forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance. A blockchain was created by a person (or group of people) using the name (or pseudonym) Satoshi Nakamoto in 2008 to serve as the public distributed ledger for bitcoin cryptocurrency transactions, based on previous work by Stuart Haber, W. Scott Stornetta, and Dave Bayer. The implementation of the blockchain within bitcoin made it the first digital currency to solve the double-spending problem without the need for a trusted authority or central server. The bitcoin design has inspired other applications and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain may be considered a type of payment rail. Private blockchains have been proposed for business use. Computerworld called the marketing of such privatized blockchains without a proper security model "snake oil"; however, others have argued that permissioned blockchains, if carefully designed, may be more decentralized and therefore more secure in practice than permissionless ones. History Cryptographer David Chaum first proposed a blockchain-like protocol in his 1982 dissertation "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups". Further work on a cryptographically secured chain of blocks was described in 1991 by Stuart Haber and W. Scott Stornetta. They wanted to implement a system wherein document timestamps could not be tampered with. In 1992, Haber, Stornetta, and Dave Bayer incorporated Merkle trees into the design, which improved its efficiency by allowing several document certificates to be collected into one block. Under their company Surety, their document certificate hashes have been published in The New York Times every week since 1995. The first decentralized blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto in 2008. Nakamoto improved the design in an important way using a Hashcash-like method to timestamp blocks without requiring them to be signed by a trusted party and introducing a difficulty parameter to stabilize the rate at which blocks are added to the chain. 
The design was implemented the following year by Nakamoto as a core component of the cryptocurrency bitcoin, where it serves as the public ledger for all transactions on the network. In August 2014, the bitcoin blockchain file size, containing records of all transactions that have occurred on the network, reached 20 GB (gigabytes). In January 2015, the size had grown to almost 30 GB, and from January 2016 to January 2017, the bitcoin blockchain grew from 50 GB to 100 GB in size. The ledger size had exceeded 200 GB by early 2020. The words block and chain were used separately in Satoshi Nakamoto's original paper, but were eventually popularized as a single word, blockchain, by 2016. According to Accenture, an application of the diffusion of innovations theory suggests that blockchains attained a 13.5% adoption rate within financial services in 2016, therefore reaching the early adopters' phase. Industry trade groups joined to create the Global Blockchain Forum in 2016, an initiative of the Chamber of Digital Commerce. In May 2018, Gartner found that only 1% of CIOs indicated any kind of blockchain adoption within their organisations, and only 8% of CIOs were in the short-term "planning or [looking at] active experimentation with blockchain". For 2019, Gartner reported that 5% of CIOs believed blockchain technology was a 'game-changer' for their business. Structure and design A blockchain is a decentralized, distributed, and often public, digital ledger consisting of records called blocks that are used to record transactions across many computers so that any involved block cannot be altered retroactively without the alteration of all subsequent blocks. This allows the participants to verify and audit transactions independently and relatively inexpensively. A blockchain database is managed autonomously using a peer-to-peer network and a distributed timestamping server. Entries are authenticated by mass collaboration powered by collective self-interest. Such a design facilitates robust workflow where participants' uncertainty regarding data security is marginal. The use of a blockchain removes the characteristic of infinite reproducibility from a digital asset. It confirms that each unit of value was transferred only once, solving the long-standing problem of double-spending. A blockchain has been described as a value-exchange protocol. A blockchain can maintain title rights because, when properly set up to detail the exchange agreement, it provides a record that compels offer and acceptance. Logically, a blockchain can be seen as consisting of several layers: infrastructure (hardware); networking (node discovery, information propagation and verification); consensus (proof of work, proof of stake); data (blocks, transactions); and application (smart contracts/decentralized applications, if applicable). Blocks Blocks hold batches of valid transactions that are hashed and encoded into a Merkle tree. Each block includes the cryptographic hash of the prior block in the blockchain, linking the two. The linked blocks form a chain. This iterative process confirms the integrity of the previous block, all the way back to the initial block, which is known as the genesis block (Block 0). To assure the integrity of a block and the data contained in it, the block is usually digitally signed. Sometimes separate blocks can be produced concurrently, creating a temporary fork.
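To make this description of blocks concrete, here is a toy Python sketch of the two ingredients just described, a Merkle root over a block's transactions and the hash link to the previous block (illustrative only: real systems such as bitcoin use fixed binary header formats and double SHA-256 rather than this JSON encoding):

```python
import hashlib, json, time

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    # Hash the transactions pairwise up to a single root; on a level with an
    # odd number of nodes the last hash is duplicated (modelled on bitcoin).
    level = [sha256(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

def block_hash(block):
    # Deterministic encoding of the block's fields, then one SHA-256 pass.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev_hash": prev_hash,
            "merkle_root": merkle_root(transactions),
            "timestamp": time.time()}

# A genesis block (all-zero previous hash) followed by two successors.
chain = [make_block("0" * 64, ["genesis"])]
chain.append(make_block(block_hash(chain[0]), ["alice->bob: 5"]))
chain.append(make_block(block_hash(chain[1]), ["bob->carol: 2", "carol->dave: 1"]))

def valid(chain):
    # Every block must commit to the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(valid(chain))                  # True
chain[1]["merkle_root"] = "0" * 64   # retroactive tampering with block 1
print(valid(chain))                  # False: the chain breaks from block 2 on
```

Because each block commits to the hash of its predecessor, altering any historical block invalidates every block built on top of it, which is exactly the tamper-resistance described above.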
In addition to a secure hash-based history, any blockchain has a specified algorithm for scoring different versions of the history so that one with a higher score can be selected over others. Blocks not selected for inclusion in the chain are called orphan blocks. Peers supporting the database have different versions of the history from time to time. They keep only the highest-scoring version of the database known to them. Whenever a peer receives a higher-scoring version (usually the old version with a single new block added) they extend or overwrite their own database and retransmit the improvement to their peers. There is never an absolute guarantee that any particular entry will remain in the best version of history forever. Blockchains are typically built to add the score of new blocks onto old blocks and are given incentives to extend with new blocks rather than overwrite old blocks. Therefore, the probability of an entry becoming superseded decreases exponentially as more blocks are built on top of it, eventually becoming very low. For example, bitcoin uses a proof-of-work system, where the chain with the most cumulative proof-of-work is considered the valid one by the network. There are a number of methods that can be used to demonstrate a sufficient level of computation. Within a blockchain the computation is carried out redundantly rather than in the traditional segregated and parallel manner. Block time The block time is the average time it takes for the network to generate one extra block in the blockchain. By the time of block completion, the included data becomes verifiable. In cryptocurrency, this is practically when the transaction takes place, so a shorter block time means faster transactions. The block time for Ethereum is set to between 14 and 15 seconds, while for bitcoin it is on average 10 minutes. Hard forks Decentralization By storing data across its peer-to-peer network, the blockchain eliminates some risks that come with data being held centrally. The decentralized blockchain may use ad hoc message passing and distributed networking. In a so-called "51% attack" a central entity gains control of more than half of a network and can then manipulate that specific blockchain record at will, allowing double-spending. Blockchain security methods include the use of public-key cryptography. A public key (a long, random-looking string of numbers) is an address on the blockchain. Value tokens sent across the network are recorded as belonging to that address. A private key is like a password that gives its owner access to their digital assets or the means to otherwise interact with the various capabilities that blockchains now support. Data stored on the blockchain is generally considered incorruptible. Every node in a decentralized system has a copy of the blockchain. Data quality is maintained by massive database replication and computational trust. No centralized "official" copy exists and no user is "trusted" more than any other. Transactions are broadcast to the network using the software. Messages are delivered on a best-effort basis. Early blockchains rely on energy-intensive mining nodes to validate transactions, add them to the block they are building, and then broadcast the completed block to other nodes. Blockchains use various time-stamping schemes, such as proof-of-work, to serialize changes. Later consensus methods include proof of stake. 
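One way to demonstrate a sufficient level of computation, and the one bitcoin uses, is a hash puzzle: find a nonce that makes the block hash fall below a target. A toy Python sketch follows (our own illustration; bitcoin's actual scheme double-hashes an 80-byte binary header and retargets the difficulty every 2016 blocks):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    # Search for a nonce whose SHA-256 hash has `difficulty_bits` leading
    # zero bits, i.e. falls below the target 2**(256 - difficulty_bits).
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each extra difficulty bit doubles the expected work (about 2**20 attempts
# here), while any node can verify the returned nonce with a single hash.
print(mine(b"toy block header", difficulty_bits=20))
```

The asymmetry between finding a valid nonce (expensive) and checking it (one hash) is what lets peers cheaply score competing versions of the history by cumulative work.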
The growth of a decentralized blockchain is accompanied by the risk of centralization because the computer resources required to process larger amounts of data become more expensive. Finality Finality is the level of confidence that the well-formed block recently appended to the blockchain will not be revoked in the future (is "finalized") and thus can be trusted. Most distributed blockchain protocols, whether proof of work or proof of stake, cannot guarantee the finality of a freshly committed block, and instead rely on "probabilistic finality": as the block goes deeper into a blockchain, it is less likely to be altered or reverted by a newly found consensus (a worked sketch of this calculation is given below). Byzantine fault tolerance-based proof-of-stake protocols purport to provide so-called "absolute finality": a randomly chosen validator proposes a block, the rest of the validators vote on it, and, if a supermajority decision approves it, the block is irreversibly committed into the blockchain. A modification of this method, an "economic finality", is used in practical protocols, like the Casper protocol used in Ethereum: validators that sign two different blocks at the same position in the blockchain are subject to "slashing", where their leveraged stake is forfeited. Openness Open blockchains are more user-friendly than some traditional ownership records, which, while open to the public, still require physical access to view. Because all early blockchains were permissionless, controversy has arisen over the blockchain definition. An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision. Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion." Permissionless (public) blockchain An advantage to an open, permissionless, or public, blockchain network is that guarding against bad actors is not required and no access control is needed. This means that applications can be added to the network without the approval or trust of others, using the blockchain as a transport layer. Bitcoin and other cryptocurrencies currently secure their blockchain by requiring new entries to include proof of work. To prolong the blockchain, bitcoin uses Hashcash puzzles. While Hashcash was designed in 1997 by Adam Back, the original idea was first proposed by Cynthia Dwork, Moni Naor, and Eli Ponyatovski in their 1992 paper "Pricing via Processing or Combatting Junk Mail". In 2016, venture capital investment for blockchain-related projects was weakening in the USA but increasing in China. Bitcoin and many other cryptocurrencies use open (public) blockchains. Among cryptocurrencies, bitcoin has the highest market capitalization.
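The probabilistic-finality calculation referenced above comes from the bitcoin whitepaper (section 11), which estimates the probability that an attacker controlling a fraction q of the total hash power ever overtakes the honest chain from z blocks behind. A Python sketch of that calculation (our transcription; the variable names are ours):

```python
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    # Nakamoto's estimate: Poisson-weighted probability that an attacker
    # with hash-power share q < 0.5 ever catches up from z blocks behind.
    p = 1.0 - q
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

for z in (0, 1, 2, 6):
    print(z, attacker_success(0.10, z))
# The probability decays exponentially with the depth z, which is the sense
# in which proof-of-work finality is probabilistic rather than absolute.
```

For q = 0.1 the attacker's success probability drops below one in a thousand within six confirmations, matching the table in the whitepaper.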
Permissioned (private) blockchain Permissioned blockchains use an access control layer to govern who has access to the network. It has been argued that permissioned blockchains can guarantee a certain level of decentralization, if carefully designed, as opposed to permissionless blockchains, which are often centralized in practice. Disadvantages of permissioned blockchain Nikolai Hampton argued in Computerworld that "There is also no need for a '51 percent' attack on a private blockchain, as the private blockchain (most likely) already controls 100 percent of all block creation resources. If you could attack or damage the blockchain creation tools on a private corporate server, you could effectively control 100 percent of their network and alter transactions however you wished." This has a set of particularly profound adverse implications during a financial crisis or debt crisis like the financial crisis of 2007–08, where politically powerful actors may make decisions that favor some groups at the expense of others, and "the bitcoin blockchain is protected by the massive group mining effort. It's unlikely that any private blockchain will try to protect records using gigawatts of computing power — it's time-consuming and expensive." He also said, "Within a private blockchain there is also no 'race'; there's no incentive to use more power or discover blocks faster than competitors. This means that many in-house blockchain solutions will be nothing more than cumbersome databases." Blockchain analysis The analysis of public blockchains has become increasingly important with the popularity of bitcoin, Ethereum, litecoin and other cryptocurrencies. A public blockchain allows anyone with the necessary know-how to observe and analyse its chain data. The process of understanding and tracing the flow of crypto has been an issue for many cryptocurrencies, crypto exchanges and banks. The reason for this is that blockchain-enabled cryptocurrencies have been accused of enabling illicit dark-market trading of drugs and weapons, money laundering, and other crimes. A common belief has been that cryptocurrency is private and untraceable, thus leading many actors to use it for illegal purposes. This is changing now that specialised tech companies provide blockchain tracking services, making crypto exchanges, law enforcement and banks more aware of what is happening with crypto funds and fiat-crypto exchanges. The development, some argue, has led criminals to prioritise the use of new cryptos such as Monero. Standardisation In April 2016, Standards Australia submitted a proposal to the International Organization for Standardization to consider developing standards to support blockchain technology. This proposal resulted in the creation of ISO Technical Committee 307, Blockchain and Distributed Ledger Technologies. The technical committee has working groups relating to blockchain terminology, reference architecture, security and privacy, identity, smart contracts, governance and interoperability for blockchain and DLT, as well as standards specific to industry sectors and generic government requirements. More than 50 countries are participating in the standardization process together with external liaisons such as the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the European Commission, the International Federation of Surveyors, the International Telecommunication Union (ITU) and the United Nations Economic Commission for Europe (UNECE).
Many other national standards bodies and open standards bodies are also working on blockchain standards. These include the National Institute of Standards and Technology (NIST), the European Committee for Electrotechnical Standardization (CENELEC), the Institute of Electrical and Electronics Engineers (IEEE), the Organization for the Advancement of Structured Information Standards (OASIS), and some individual participants in the Internet Engineering Task Force (IETF). Centralized blockchain Although most blockchain implementations are decentralized and distributed, Oracle launched a centralized blockchain table feature in its Oracle 21c database. The blockchain table in Oracle 21c is a centralized ledger that provides immutability. Compared to consensus-based distributed blockchains, centralized blockchains can normally provide higher transaction throughput and lower latency. Types Currently, there are at least four types of blockchain networks — public blockchains, private blockchains, consortium blockchains and hybrid blockchains. Public blockchains A public blockchain has absolutely no access restrictions. Anyone with an Internet connection can send transactions to it as well as become a validator (i.e., participate in the execution of a consensus protocol). Usually, such networks offer economic incentives for those who secure them and utilize some type of a proof-of-stake or proof-of-work algorithm. Some of the largest, most known public blockchains are the bitcoin blockchain and the Ethereum blockchain. Private blockchains A private blockchain is permissioned. One cannot join it unless invited by the network administrators. Participant and validator access is restricted. To distinguish between open blockchains and other peer-to-peer decentralized database applications that are not open ad-hoc compute clusters, the terminology distributed ledger technology (DLT) is normally used for private blockchains. Hybrid blockchains A hybrid blockchain has a combination of centralized and decentralized features. The exact workings of the chain can vary based on which portions of centralization and decentralization are used. Sidechains A sidechain is a designation for a blockchain ledger that runs in parallel to a primary blockchain. Entries from the primary blockchain (where said entries typically represent digital assets) can be linked to and from the sidechain; this allows the sidechain to otherwise operate independently of the primary blockchain (e.g., by using an alternate means of record keeping, alternate consensus algorithm, etc.). Consortium blockchain A consortium blockchain is a type of blockchain that combines elements of both public and private blockchains. In a consortium blockchain, a group of organizations come together to create and operate the blockchain, rather than a single entity. The consortium members jointly manage the blockchain network and are responsible for validating transactions. Consortium blockchains are permissioned, meaning that only certain individuals or organizations are allowed to participate in the network. This allows for greater control over who can access the blockchain and helps to ensure that sensitive information is kept confidential. Consortium blockchains are commonly used in industries where multiple organizations need to collaborate on a common goal, such as supply chain management or financial services.
One advantage of consortium blockchains is that they can be more efficient and scalable than public blockchains, as the number of nodes required to validate transactions is typically smaller. Additionally, consortium blockchains can provide greater security and reliability than private blockchains, as the consortium members work together to maintain the network. Some examples of consortium blockchains include Quorum and Hyperledger. Uses Blockchain technology can be integrated into multiple areas. The primary use of blockchains is as a distributed ledger for cryptocurrencies such as bitcoin; there were also a few other operational products that had matured from proof of concept by late 2016. As of 2016, some businesses have been testing the technology and conducting low-level implementation to gauge blockchain's effects on organizational efficiency in their back office. Blockchain is seen as a pivotal technological advancement of the 21st century, with the ability to impact organizations at strategic, operational, and market levels. In 2019, it was estimated that around $2.9 billion was invested in blockchain technology, which represents an 89% increase from the year prior. Additionally, the International Data Corp estimated that corporate investment into blockchain technology would reach $12.4 billion by 2022. Furthermore, according to PricewaterhouseCoopers (PwC), the second-largest professional services network in the world, blockchain technology has the potential to generate an annual business value of more than $3 trillion by 2030. PwC's estimate is further supported by a 2018 study it conducted, in which PwC surveyed 600 business executives and determined that 84% had at least some exposure to utilizing blockchain technology, indicating significant demand and interest in blockchain technology. In 2019, the BBC World Service radio and podcast series Fifty Things That Made the Modern Economy identified blockchain as a technology that would have far-reaching consequences for economics and society. The economist and Financial Times journalist and broadcaster Tim Harford discussed why the underlying technology might have much wider applications and the challenges that needed to be overcome. His first broadcast was on 29 June 2019. The number of blockchain wallets quadrupled to 40 million between 2016 and 2020. A paper published in 2022 discussed the potential use of blockchain technology in sustainable management. Cryptocurrencies Most cryptocurrencies use blockchain technology to record transactions. For example, the bitcoin network and Ethereum network are both based on blockchain. The criminal enterprise Silk Road, which operated on Tor, utilized cryptocurrency for payments, some of which the US federal government seized through research on the blockchain and forfeiture. Governments have mixed policies on the legality of their citizens or banks owning cryptocurrencies. China implements blockchain technology in several industries including a national digital currency which launched in 2020. To strengthen their respective currencies, Western governments including the European Union and the United States have initiated similar projects. Smart contracts Blockchain-based smart contracts are contracts that can be partially or fully executed or enforced without human interaction. One of the main objectives of a smart contract is automated escrow.
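As a sketch of that escrow idea, the following toy Python state machine (purely illustrative, not real on-chain contract code; the names and the block-height deadline are invented) captures the logic that, on a real chain, every validating node would execute identically:

```python
class TimedEscrow:
    # Toy escrow "contract": funds deposited by a buyer are released to the
    # seller once the buyer approves, or refunded after a deadline block.
    def __init__(self, buyer, seller, amount, deadline_height):
        self.buyer, self.seller = buyer, seller
        self.amount, self.deadline = amount, deadline_height
        self.state = "FUNDED"

    def approve(self, caller):
        if caller == self.buyer and self.state == "FUNDED":
            self.state = "RELEASED"
            return ("pay", self.seller, self.amount)
        raise PermissionError("only the buyer may release escrowed funds")

    def refund(self, caller, current_height):
        # After the deadline, anyone may trigger the refund path.
        if self.state == "FUNDED" and current_height > self.deadline:
            self.state = "REFUNDED"
            return ("pay", self.buyer, self.amount)
        raise ValueError("deadline not reached or escrow already settled")

escrow = TimedEscrow("alice", "bob", amount=5, deadline_height=100)
print(escrow.approve("alice"))   # ('pay', 'bob', 5)
```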
A key feature of smart contracts is that they do not need a trusted third party (such as a trustee) to act as an intermediary between contracting entities — the blockchain network executes the contract on its own. This may reduce friction between entities when transferring value and could subsequently open the door to a higher level of transaction automation. An IMF staff discussion from 2018 reported that smart contracts based on blockchain technology might reduce moral hazards and optimize the use of contracts in general, but "no viable smart contract systems have yet emerged." Due to the lack of widespread use, their legal status was unclear. Financial services According to Reason, many banks have expressed interest in implementing distributed ledgers for use in banking and are cooperating with companies creating private blockchains; according to a September 2016 IBM study, it is occurring faster than expected. It has been estimated by the World Economic Forum that by 2025, 10% of the world's GDP will be stored on blockchain-related technology. Banks are interested in this technology not least because it has the potential to speed up back office settlement systems. Moreover, as the blockchain industry has reached early maturity, institutional appreciation has grown that it is, practically speaking, the infrastructure of a whole new financial industry, with all the implications which that entails. This technology will transform financial transactions due to its ability to enhance data storage, process simultaneous transactions, lessen transaction costs, and improve capital market transparency for debt and equity capital administration. Banks such as UBS are opening new research labs dedicated to blockchain technology in order to explore how blockchain can be used in financial services to increase efficiency and reduce costs. Berenberg, a German bank, believes that blockchain is an "overhyped technology" that has had a large number of "proofs of concept", but still has major challenges, and very few success stories. The blockchain has also given rise to initial coin offerings (ICOs) as well as a new category of digital asset called security token offerings (STOs), also sometimes referred to as digital security offerings (DSOs). STOs/DSOs may be conducted privately or on a public, regulated stock exchange and are used to tokenize traditional assets such as company shares as well as more innovative ones like intellectual property, real estate, art, or individual products. A number of companies are active in this space providing services for compliant tokenization, private STOs, and public STOs. Games Blockchain technology, such as cryptocurrencies and non-fungible tokens (NFTs), has been used in video games for monetization. Many live-service games offer in-game customization options, such as character skins or other in-game items, which the players can earn and trade with other players using in-game currency. Some games also allow for trading of virtual items using real-world currency, but this may be illegal in some countries where video games are seen as akin to gambling, and has led to gray market issues such as skin gambling; publishers have thus typically shied away from allowing players to earn real-world funds from games. Blockchain games typically allow players to trade these in-game items for cryptocurrency, which can then be exchanged for money.
The first known game to use blockchain technologies was CryptoKitties, launched in November 2017, where the player would purchase NFTs with Ethereum cryptocurrency, each NFT consisting of a virtual pet that the player could breed with others to create offspring with combined traits as new NFTs. The game made headlines in December 2017 when one virtual pet sold for more than US$100,000. CryptoKitties also illustrated scalability problems for games on Ethereum when it created significant congestion on the Ethereum network in early 2018, with approximately 30% of all Ethereum transactions being for the game. By the early 2020s, there had not been a breakout success in video games using blockchain, as these games tend to focus on using blockchain for speculation instead of more traditional forms of gameplay, which offers limited appeal to most players. Such games also represent a high risk to investors as their revenues can be difficult to predict. However, limited successes of some games, such as Axie Infinity during the COVID-19 pandemic, and corporate plans towards metaverse content, refueled interest in the area of GameFi, a term describing the intersection of video games and financing typically backed by blockchain currency, in the second half of 2021. Several major publishers, including Ubisoft, Electronic Arts, and Take-Two Interactive, have stated that blockchain and NFT-based games are under serious consideration for their companies in the future. In October 2021, Valve Corporation banned blockchain games, including those using cryptocurrency and NFTs, from being hosted on its Steam digital storefront service, which is widely used for personal computer gaming, claiming that this was an extension of their policy banning games that offered in-game items with real-world value. Valve's prior history with gambling, specifically skin gambling, was speculated to be a factor in the decision to ban blockchain games. Journalists and players responded positively to Valve's decision, as blockchain and NFT games have a reputation for scams and fraud among most PC gamers, and Epic Games, which runs the Epic Games Store in competition with Steam, said that it would be open to accepting blockchain games in the wake of Valve's refusal. Supply chain There have been several different efforts to employ blockchains in supply chain management. Precious commodities mining — Blockchain technology has been used for tracking the origins of gemstones and other precious commodities. In 2016, The Wall Street Journal reported that the blockchain technology company Everledger was partnering with IBM's blockchain-based tracking service to trace the origin of diamonds to ensure that they were ethically mined. As of 2019, the Diamond Trading Company (DTC) has been involved in building a diamond trading supply chain product called Tracer. Food supply — As of 2018, Walmart and IBM were running a trial to use a blockchain-backed system for supply chain monitoring for lettuce and spinach; all nodes of the blockchain were administered by Walmart and located on the IBM cloud. Fashion industry — There is an opaque relationship between brands, distributors, and customers in the fashion industry, which prevents the sustainable and stable development of the fashion industry. Blockchain could make this information transparent, assisting sustainable development of the industry.
Motor vehicles — Mercedes-Benz and partner Icertis developed a blockchain prototype used to facilitate consistent documentation of contracts along the supply chain so that the ethical standards and contractual obligations required of its direct suppliers can be passed on to second-tier suppliers and beyond. In another project, the company uses blockchain technology to track the emissions of climate-relevant gases and the amount of secondary material along the supply chain for its battery cell manufacturers. Domain names There are several different efforts to offer domain name services via the blockchain. These domain names can be controlled by the use of a private key, which purports to allow for uncensorable websites. This would also bypass a registrar's ability to suppress domains used for fraud, abuse, or illegal content. Namecoin is a cryptocurrency that supports the ".bit" top-level domain (TLD). Namecoin was forked from bitcoin in 2011. The .bit TLD is not sanctioned by ICANN, instead requiring an alternative DNS root. As of 2015, .bit was used by 28 websites, out of 120,000 registered names. Namecoin was dropped by OpenNIC in 2019, due to malware and other potential legal issues. Other blockchain alternatives to ICANN include The Handshake Network, EmerDNS, and Unstoppable Domains. Specific TLDs include ".eth", ".luxe", and ".kred", which are associated with the Ethereum blockchain through the Ethereum Name Service (ENS). The .kred TLD also acts as an alternative to conventional cryptocurrency wallet addresses as a convenience for transferring cryptocurrency. Other uses Blockchain technology can be used to create a permanent, public, transparent ledger system for compiling data on sales, tracking digital use and payments to content creators, such as wireless users or musicians. The Gartner 2019 CIO Survey reported 2% of higher education respondents had launched blockchain projects and another 18% were planning academic projects in the next 24 months. In 2017, IBM partnered with ASCAP and PRS for Music to adopt blockchain technology in music distribution. Imogen Heap's Mycelia service has also been proposed as a blockchain-based alternative "that gives artists more control over how their songs and associated data circulate among fans and other musicians." New distribution methods are available for the insurance industry such as peer-to-peer insurance, parametric insurance and microinsurance following the adoption of blockchain. The sharing economy and IoT are also set to benefit from blockchains because they involve many collaborating peers. The use of blockchain in libraries is being studied with a grant from the U.S. Institute of Museum and Library Services. Other blockchain designs include Hyperledger, a collaborative effort from the Linux Foundation to support blockchain-based distributed ledgers, with projects under this initiative including Hyperledger Burrow (by Monax) and Hyperledger Fabric (spearheaded by IBM). Another is Quorum, a permissioned private blockchain by JPMorgan Chase with private storage, used for contract applications. Oracle introduced a blockchain table feature in its Oracle 21c database. Blockchain is also being used in peer-to-peer energy trading. Lightweight blockchains, or simplified blockchains, are more suitable for internet of things (IoT) applications than conventional blockchains.
One experiment suggested that a lightweight blockchain-based network could accommodate up to 1.34 million authentication processes every second, which could be sufficient for resource-constrained IoT networks. Blockchain could be used in detecting counterfeits by associating unique identifiers to products, documents and shipments, and storing records associated with transactions that cannot be forged or altered. It is however argued that blockchain technology needs to be supplemented with technologies that provide a strong binding between physical objects and blockchain systems, as well as provisions for content creator verification à la KYC standards. The EUIPO established an Anti-Counterfeiting Blockathon Forum, with the objective of "defining, piloting and implementing" an anti-counterfeiting infrastructure at the European level. The Dutch standardisation organisation NEN uses blockchain together with QR codes to authenticate certificates. Beijing and Shanghai are among the cities designated by China to trial blockchain applications as of January 30, 2022. In Chinese legal proceedings, blockchain technology was first accepted as a method for authenticating internet evidence by the Hangzhou Internet Court in 2019 and has since been accepted by other Chinese courts. Blockchain interoperability With the increasing number of blockchain systems appearing, even only those that support cryptocurrencies, blockchain interoperability is becoming a topic of major importance. The objective is to support transferring assets from one blockchain system to another blockchain system. Wegner stated that "interoperability is the ability of two or more software components to cooperate despite differences in language, interface, and execution platform". The objective of blockchain interoperability is therefore to support such cooperation among blockchain systems, despite those kinds of differences. There are already several blockchain interoperability solutions available. They can be classified into three categories: cryptocurrency interoperability approaches, blockchain engines, and blockchain connectors. Several individual IETF participants produced the draft of a blockchain interoperability architecture. Energy consumption concerns Some cryptocurrencies use blockchain mining — the peer-to-peer computations by which transactions are validated and verified. This requires a large amount of energy. In June 2018, the Bank for International Settlements criticized the use of public proof-of-work blockchains for their high energy consumption. Early concern over the high energy consumption was a factor in later blockchains such as Cardano (2017), Solana (2020) and Polkadot (2020) adopting the less energy-intensive proof-of-stake model. Researchers have estimated that bitcoin consumes 100,000 times as much energy as proof-of-stake networks. In 2021, a study by Cambridge University determined that bitcoin (at 121 terawatt-hours per year) used more electricity than Argentina (at 121 TWh) and the Netherlands (109 TWh). According to Digiconomist, one bitcoin transaction required 708 kilowatt-hours of electrical energy, the amount an average U.S. household consumed in 24 days. In February 2021, U.S. Treasury secretary Janet Yellen called bitcoin "an extremely inefficient way to conduct transactions", saying "the amount of energy consumed in processing those transactions is staggering".
In March 2021, Bill Gates stated that "Bitcoin uses more electricity per transaction than any other method known to mankind", adding "It's not a great climate thing." Nicholas Weaver, of the International Computer Science Institute at the University of California, Berkeley, examined blockchain's online security and the energy efficiency of proof-of-work public blockchains, and in both cases found them grossly inadequate. The 31–45 TWh of electricity used for bitcoin in 2018 produced 17–23 million tonnes of CO2. By 2022, the University of Cambridge and Digiconomist estimated that the two largest proof-of-work blockchains, bitcoin and Ethereum, together used twice as much electricity in one year as the whole of Sweden, leading to the release of up to 120 million tonnes of CO2 each year. Some cryptocurrency developers are considering moving from the proof-of-work model to the proof-of-stake model. In September 2022, Ethereum converted from proof-of-work to proof-of-stake. Academic research In October 2014, the MIT Bitcoin Club, with funding from MIT alumni, provided undergraduate students at the Massachusetts Institute of Technology access to $100 of bitcoin. The adoption rates, as studied by Catalini and Tucker (2016), revealed that when people who typically adopt technologies early are given delayed access, they tend to reject the technology. Many universities have founded departments focusing on crypto and blockchain, including MIT, in 2017. In the same year, Edinburgh became "one of the first big European universities to launch a blockchain course", according to the Financial Times. Adoption decision Motivations for adopting blockchain technology (an aspect of innovation adoption) have been investigated by researchers. For example, Janssen et al. provided a framework for analysis, and Koens & Poll pointed out that adoption could be heavily driven by non-technical factors. Based on behavioral models, Li has discussed the differences between adoption at the individual and organizational levels. Collaboration Scholars in business and management have started studying the role of blockchains to support collaboration. It has been argued that blockchains can foster both cooperation (i.e., prevention of opportunistic behavior) and coordination (i.e., communication and information sharing). Thanks to reliability, transparency, traceability of records, and information immutability, blockchains facilitate collaboration in a way that differs both from the traditional use of contracts and from relational norms. Contrary to contracts, blockchains do not directly rely on the legal system to enforce agreements. In addition, contrary to the use of relational norms, blockchains do not require trust or direct connections between collaborators. Blockchain and internal audit The need for internal audits to provide effective oversight of organizational efficiency will require a change in the way that information is accessed in new formats. Blockchain adoption requires a framework to identify the risk of exposure associated with transactions using blockchain. The Institute of Internal Auditors has identified the need for internal auditors to address this transformational technology. New methods are required to develop audit plans that identify threats and risks. The Internal Audit Foundation study, Blockchain and Internal Audit, assesses these factors. The American Institute of Certified Public Accountants has outlined new roles for auditors as a result of blockchain.
Journals In September 2015, the first peer-reviewed academic journal dedicated to cryptocurrency and blockchain technology research, Ledger, was announced. The inaugural issue was published in December 2016. The journal covers aspects of mathematics, computer science, engineering, law, economics and philosophy that relate to cryptocurrencies. The journal encourages authors to digitally sign a file hash of submitted papers, which are then timestamped into the bitcoin blockchain. Authors are also asked to include a personal bitcoin address on the first page of their papers for non-repudiation purposes.
Technology
Computer security
null
47197916
https://en.wikipedia.org/wiki/Fruit%20dove
Fruit dove
The fruit doves, also known as fruit pigeons, are a genus (Ptilinopus) of birds in the pigeon and dove family (Columbidae). These colourful, frugivorous doves are found in forests and woodlands in Southeast Asia and Oceania. It is a large genus with over 50 species, some threatened or already extinct. Taxonomy The genus Ptilinopus was introduced in 1825 by the English naturalist William Swainson with the rose-crowned fruit dove (Ptilinopus regina) as the type species. The genus name combines the Ancient Greek ptilon, meaning "down feather", with pous, meaning "foot". The many species of this genus can be further grouped by geography and by certain shared characteristics. The fruit doves of the Sunda Islands and northern Australia, such as the pink-headed fruit dove and banded fruit dove, have comparatively longer tails than other species, and are notable for their solid colouration on the head, neck and breast, with a black band across the belly. Another grouping can be made of certain fruit doves endemic to New Guinea, the Moluccas, and the Bismarck Archipelago, including the carunculated fruit dove, knob-billed fruit dove, and others; these are notable for their grey colouration on the head or shoulder and/or enlarged cere (part of the bill). This group is uncharacteristically not sexually dimorphic, meaning males and females look alike. The orange dove, golden dove, and whistling dove, all endemic to Fiji and sometimes placed in their own genus Chrysoena, have in common their small size, compact shape, yellow or orange colouration in the males, and hair-like body feathers. They also are known for their rather un-pigeon-like vocalizations, which sound like snapping, barking, or whistling, respectively. Finally, the Pacific Islands provide homes to a number of species that share generally green colouration with crimson caps or crowns, ventriloquial cooing or hooting, and a distinct texture of the breast feathers. Recent evidence suggests Ptilinopus as presently defined is paraphyletic, as Alectroenas and Drepanoptila are embedded within it.
Species The genus contains 57 species: Banded fruit dove, Ptilinopus cinctus (Temminck, 1809) Black-banded fruit dove, Ptilinopus alligator Collett, 1898 – split from banded fruit dove Red-naped fruit dove, Ptilinopus dohertyi Rothschild, 1896 Pink-headed fruit dove, Ptilinopus porphyreus (Temminck, 1822) Flame-breasted fruit dove, Ptilinopus marchei Oustalet, 1880 Cream-breasted fruit dove, Ptilinopus merrilli (McGregor, 1916) Yellow-breasted fruit dove, Ptilinopus occipitalis Gray, GR, 1844 Red-eared fruit dove, Ptilinopus fischeri Brüggemann, 1876 Jambu fruit dove, Ptilinopus jambu (Gmelin, JF, 1789) Banggai fruit dove, Ptilinopus subgularis Meyer, AB & Wiglesworth, 1896 Oberholser's fruit dove, Ptilinopus gularis (Quoy & Gaimard, 1832) – split from Banggai fruit dove Sula fruit dove, Ptilinopus mangoliensis Rothschild, 1898 – split from Banggai fruit dove Black-chinned fruit dove, Ptilinopus leclancheri (Bonaparte, 1855) Scarlet-breasted fruit dove, Ptilinopus bernsteinii Schlegel, 1863 Wompoo fruit dove, Ptilinopus magnificus (Temminck, 1821) Pink-spotted fruit dove, Ptilinopus perlatus (Temminck, 1835) Ornate fruit dove, Ptilinopus ornatus Schlegel, 1871 Tanna fruit dove, Ptilinopus tannensis (Latham, 1790) Orange-fronted fruit dove, Ptilinopus aurantiifrons Gray, GR, 1858 Wallace's fruit dove, Ptilinopus wallacii Gray, GR, 1858 Superb fruit dove, Ptilinopus superbus (Temminck, 1809) Many-colored fruit dove, Ptilinopus perousii Peale, 1849 Crimson-crowned fruit dove, Ptilinopus porphyraceus (Temminck, 1821) Purple-capped fruit dove, Ptilinopus ponapensis Finsch, 1878 – split from crimson-crowned fruit dove Kosrae fruit dove, Ptilinopus hernsheimi Finsch, 1880 – split from crimson-crowned fruit dove Palau fruit dove, Ptilinopus pelewensis Hartlaub & Finsch, 1868 Lilac-crowned fruit dove, Ptilinopus rarotongensis Hartlaub & Finsch, 1871 Mariana fruit dove, Ptilinopus roseicapilla (Lesson, RP, 1831) Rose-crowned fruit dove, Ptilinopus regina Swainson, 1825 Silver-capped fruit dove, Ptilinopus richardsii Ramsay, EP, 1882 Geelvink fruit dove, Ptilinopus speciosus Schlegel, 1871 – split from yellow-bibbed fruit dove Grey-green fruit dove, Ptilinopus purpuratus (Gmelin, JF, 1789) Raiatea fruit dove, Ptilinopus chrysogaster Gray, GR, 1854 – split from grey-green fruit dove Makatea fruit dove, Ptilinopus chalcurus Gray, GR, 1860 Atoll fruit dove, Ptilinopus coralensis Peale, 1849 Red-bellied fruit dove, Ptilinopus greyi Bonaparte, 1857 Rapa fruit dove, Ptilinopus huttoni Finsch, 1874 White-capped fruit dove, Ptilinopus dupetithouarsii (Néboux, 1840) † Red-moustached fruit dove, Ptilinopus mercierii (des Murs & Prévost, 1849) - extinct (mid-20th century) Henderson fruit dove, Ptilinopus insularis North, 1908 Coroneted fruit dove, Ptilinopus coronulatus Gray, GR, 1858 Beautiful fruit dove, Ptilinopus pulchellus (Temminck, 1835) Blue-capped fruit dove, Ptilinopus monacha (Temminck, 1824) White-bibbed fruit dove, Ptilinopus rivoli (Prévost, 1843) Yellow-bibbed fruit dove, Ptilinopus solomonensis Gray, GR, 1870 Claret-breasted fruit dove, Ptilinopus viridis (Linnaeus, 1766) White-headed fruit dove, Ptilinopus eugeniae (Gould, 1856) Lompobattang fruit-dove, Ptilinopus fischeri meridionalis Meyer, AB & Wiglesworth, 1893 Orange-bellied fruit dove, Ptilinopus iozonus Gray, GR, 1858 Knob-billed fruit dove, Ptilinopus insolitus Schlegel, 1863 Grey-headed fruit dove, Ptilinopus hyogastrus (Temminck, 1824) Carunculated fruit dove, Ptilinopus granulifrons Hartert, EJO, 1898 Black-naped fruit dove, 
Ptilinopus melanospilus (Salvadori, 1875) Dwarf fruit dove, Ptilinopus nainus (Temminck, 1835) Negros fruit dove, Ptilinopus arcanus Ripley & Rabor, 1955 – possibly extinct (late 20th century?) Orange dove, Ptilinopus victor (Gould, 1872) Golden dove, Ptilinopus luteovirens (Hombron & Jacquinot, 1841) Whistling dove, Ptilinopus layardi Elliot, DG, 1878 Description These small- to medium-sized doves generally have short, fan-shaped tails, and are remarkable for their colourful and often glossy plumage, as evidenced in the aptly named orange fruit dove, flame-breasted fruit dove, and pink-headed fruit dove. Males and females of many fruit dove species look very different. For example, the female many-colored fruit dove shares the male's crimson crown and deep pink undertail feathers, but is otherwise green, whereas the male has crimson on the upper back and areas of yellow, olive, cinnamon, and grey. Distribution and habitat This is a large genus, most diverse in and around the island of New Guinea, in the Philippines, and in the biogeographical region of Wallacea. Some species have ranges as far west as the Sunda Islands, others north to Taiwan, south to Australia, and east into Polynesia. Behaviour and ecology Fruit doves, as their name implies, eat fruit; figs (Ficus) are especially important. They live in various kinds of forest or woodland. Some species are restricted to primary forest, such as lowland rainforest, montane forest, or monsoon forest, while others prefer secondary forest or disturbed areas. Some species specialize in particular habitats, from lowland coastal forest to the cloud forest or moss forest of high altitudes. Some species of fruit doves are only found in habitats dominated by particular plants, such as mangrove, eucalyptus, or pandanus. Only a few species can commonly be seen around human habitation; these include the knob-billed fruit dove, Makatea fruit dove, and black-naped fruit dove, which are known to visit gardens and such. Much is still to be learned about fruit doves. Many species are shy and difficult to observe in their natural habitat. For example, there are several species in the Philippines, and for most of them, little or nothing is known of their breeding or nesting behavior.
Biology and health sciences
Columbimorphae
Animals
41202764
https://en.wikipedia.org/wiki/Slack%20%28software%29
Slack (software)
Slack is a cloud-based team communication platform developed by Slack Technologies, which has been owned by Salesforce since 2020. Slack uses a freemium model and is primarily offered as a business-to-business service; its userbase consists predominantly of team-based businesses, and its functionality focuses on business administration and communication.
History
Slack originated as an internal communication tool used within Stewart Butterfield's company Tiny Speck during its work on the development of Glitch, an online video game. These communication tools were initially built around the Internet Relay Chat (IRC) protocol and included scripts designed to automate and organize file exchanges among the development team. By October 2012, Stewart Butterfield had realized that Glitch was not going to generate the necessary profits, so he decided to change the direction of his company and repurpose the communication tools they had developed into a new product. In 2012, Butterfield decided to name the upcoming product "Slack", which he said was derived from the phrase "Searchable Log of All Conversation and Knowledge", replacing the previous codename "linefeed". In August 2013, Slack was launched to the public. Reflecting its origins, it maintained compatibility with IRC and was also compatible with the XMPP messaging protocol; in May 2018, however, the company shut down these gateways due to limitations inherent in those protocols. On July 26, 2018, Slack acquired HipChat and Stride from Atlassian, with plans to shut down both services in February 2019 and migrate their users to Slack. In June 2019, Slack went public through a direct public offering, reaching a market value of $19.5 billion. In July 2020, Slack acquired Rimeto, a startup specializing in employee directories and profiles, with plans to rename the service and operate it as a standalone app. Almost a year later, in June 2021, its functionality was launched as an optional feature within Slack named "Slack Atlas". On December 1, 2020, Slack and Salesforce announced an agreement for Salesforce to acquire the company for approximately $27.7 billion, one of the most significant tech acquisitions at that time. The acquisition was completed on July 21, 2021. On December 5, 2022, Salesforce announced that Butterfield was leaving Slack and would be succeeded by Lidiane Jones, an executive vice president at Salesforce. On November 13, 2023, Salesforce executive Denise Dresser was appointed to replace Jones, who left to succeed Whitney Wolfe Herd as CEO of Bumble. In August 2024, Box and Slack announced an expanded partnership that introduced secure AI capabilities to enterprise content management.
Outages and security breaches
In March 2015, Slack announced that it had been hacked over four days in February 2015 and that some data associated with user accounts had been compromised, including email addresses, usernames, hashed passwords, phone numbers, and Skype IDs. In response to the attack, Slack added two-factor authentication to its service. On January 4, 2021, Slack suffered a significant outage that lasted several hours. From 10 AM ET until 3 PM ET, users could not log in, send or receive messages, place or answer calls, or use Slack connections. After 3 PM, most core features became operational, except for push notifications, email, and third-party integrations, including Google Calendar and Outlook Calendar.
In 2022, Slack suffered widely reported outages on February 22, March 9, and July 26. On December 31, 2022, Slack announced that its private GitHub repositories had been compromised during the preceding weeks through stolen security tokens.
Features
Slack offers many IRC-style features, including persistent chat rooms known as channels, which are organized by topic, as well as private groups and direct messaging. All content, including files, conversations, and people, is searchable within Slack. Users can react to any message with emoji. Message history on Slack is limited to messages from the last 90 days on the free plan. Slack allows communities, groups, or teams to join a "workspace" via a specific URL or an invitation sent by a team admin or owner. A workspace can contain both public and private channels, with public channels being accessible to all members of the workspace; channels can be converted between public and private. Direct messages enable users to engage in private one-on-one or group conversations with up to nine participants, and a group direct message can be converted into a private channel. Slack integrates with many third-party services and supports community-built integrations, including Google Drive, Trello, Dropbox, Box, Heroku, IBM Bluemix, Crashlytics, GitHub, Runscope, Zendesk, and Zapier. In July 2015, Slack launched an integration with Google Calendar. Later, in December of the same year, a searchable directory was introduced, consisting of over 150 integrations that users can install. In addition to these integrations, Slack offers users the ability to incorporate and customize chatbots known as "Slackbots". These chatbots can be configured to send notifications or reminders, or to provide tailored responses to specific phrases, among other functions. Furthermore, Slack provides an application programming interface that enables users to develop applications and automate various processes, such as sending notifications based on input, generating alerts for specific conditions, and creating internal support tickets; a minimal sketch of such an API call appears at the end of this article. In March 2018, Slack partnered with Workday, a financial and human capital management firm. This integration allows employees to access co-worker information, organizational charts, benefits, feedback, and time-off requests within Slack. In March 2022, Slack introduced a voice conference feature known as "Huddles". Within a Huddle, users can mute or unmute themselves, share their screens, draw on a shared screen, and invite others to the call. In June 2022, Huddles gained video calling. Huddles are restricted to two participants on the free tier, while paid plans allow up to 50 participants. In March 2023, Salesforce announced a partnership with OpenAI to launch a ChatGPT integration for Slack, which can be used for summarizing conversations, answering questions, and drafting replies. Slack provides mobile apps for iOS and Android in addition to its web browser client and desktop clients for macOS, Windows, and Linux (beta). Slack is also available for the Apple Watch, allowing users to send direct messages, see mentions, and make simple replies.
Business model
Slack is a freemium product whose main paid features are the ability to search more than 90 days of archived messages and to add unlimited apps and integrations. Slack also claims support for an unlimited number of users.
When freeCodeCamp attempted to switch its community of over 8,000 users to Slack in 2015, however, it experienced many technical issues and was advised by Slack support to limit channels to "no more than 1,000 users (ideally more like 500)". That specific limit no longer applied as of January 2017.
Growth
In August 2013, 8,000 customers signed up for the service within 24 hours of its launch. In February 2015, the company reported that approximately 10,000 new daily active users were signing up each week and that it had more than 135,000 paying customers spread across 60,000 teams. By April 2015, those numbers had grown to 200,000 paid subscribers and a total of 750,000 daily active users. Late in 2015, Slack passed one million daily active users. Slack later reported over 8 million daily users, 3 million of whom had paid accounts. At the time of its S-1 filing for IPO, dated April 26, 2019, Slack reported more than 10 million daily active users from more than 600,000 organizations located in more than 150 countries. In 2019, it was estimated that employees at large firms were sending over 200 Slack messages per week on average. From 2013 to 2019, the amount of time spent on work email declined, which was attributed to the proliferation of Slack and its competitors Workplace (launched by Facebook in 2016), Microsoft Teams (launched in 2017), and Google Hangouts Chat and Meet (launched in 2018).
Reception
In March 2015, the Financial Times wrote that Slack was the first business technology to have crossed from business into personal use since Microsoft Office and the BlackBerry. In 2017, New York magazine criticized the platform as "another utility we both rely on and resent". In 2017, Slack was recognized as the best startup of the year at the Crunchies Awards, organized by TechCrunch. The digital rights group Electronic Frontier Foundation (EFF) has cautioned that "Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace." The EFF commended the company for following several best practices in standing up for users concerning government data requests, such as requiring a warrant for content stored on its servers, awarding it four out of five stars in its 2017 "Who has your back" report; it also criticized Slack for "a broad set of exceptions" to its promise to notify users of such requests, and for other privacy shortcomings. Slack has been criticized by users for storing user data exclusively on cloud servers under Slack's control. This has been a particular issue for large teams, which have experienced problems with connectivity within the app, access to archived messages, and the number of users allowed in a given "workspace". Slack has also been criticized for a retroactive 2018 change to its privacy policy allowing workspace administrators to access all public and private chat messages without the consent of the parties using the app; under the new policy, Slack workspace members are no longer notified when data is downloaded from their workspace. When used in free and open-source projects, Slack has further been criticized for the inability to search older messages and discussions.
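To make the notification use case described under Features concrete, the following is a minimal sketch that posts a message through chat.postMessage, a method of Slack's public Web API; the function name, the token handling via an environment variable, and the channel ID shown are assumptions of this example rather than anything prescribed by Slack.

```python
import os

import requests


def post_message(channel: str, text: str) -> dict:
    """Post a message to a Slack channel via the Web API.

    Assumes a bot token with the chat:write scope is exported as
    SLACK_BOT_TOKEN, and that `channel` is a channel ID such as
    "C0123456789" (both are assumptions of this sketch).
    """
    response = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"},
        json={"channel": channel, "text": text},
        timeout=10,
    )
    response.raise_for_status()
    payload = response.json()
    if not payload.get("ok"):  # Slack reports API-level errors in the body
        raise RuntimeError(payload.get("error", "unknown Slack API error"))
    return payload


# A build server could, for instance, call
# post_message("C0123456789", "Deploy finished") to implement the
# alerting workflow described in the Features section.
```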
Technology
Social network and blogging
null
55694751
https://en.wikipedia.org/wiki/Tapanuli%20orangutan
Tapanuli orangutan
The Tapanuli orangutan (Pongo tapanuliensis) is a species of orangutan restricted to South Tapanuli on the island of Sumatra in Indonesia. It is one of three known species of orangutan, alongside the Sumatran orangutan (P. abelii), found farther northwest on the island, and the Bornean orangutan (P. pygmaeus). It was described as a distinct species in 2017. Roughly 800 individuals remain, and the species is listed as critically endangered.
Taxonomy
Discovery and naming
An isolated population of orangutans in the Batang Toru area of South Tapanuli was reported in 1939. The population was rediscovered by an expedition to the area in 1997, but it was not recognized as a distinct species at the time. Pongo tapanuliensis was identified as a distinct species following a detailed phylogenetic study in 2017. The study analyzed genetic samples of 37 wild orangutans from populations across Sumatra and Borneo and conducted a morphological analysis of the skeletons of 34 adult males. The holotype of the species is the complete skeleton of an adult male from Batang Toru that died after being wounded by locals in November 2013; it is stored in the Zoological Museum of Bogor. The skull and teeth of the Batang Toru male differ significantly from those of the other two orangutan species. Comparisons of the genomes of all 37 orangutans using principal component analysis and population genetic models also indicated that the Batang Toru population is a separate species. The specific name, tapanuliensis, as well as the common name, Tapanuli orangutan, refer to Tapanuli, the hilly region in North Sumatra where the species lives.
Phylogeny
Genetic comparisons show that Tapanuli orangutans diverged from Sumatran orangutans about 3.4 million years ago and became more isolated after the Lake Toba eruption about 75,000 years ago. The two populations continued sporadic contact, which ceased between 10,000 and 20,000 years ago. Tapanuli orangutans diverged from Bornean orangutans about 674,000 years ago. Orangutans were able to travel from Sumatra to Borneo because the islands were connected by land bridges, as parts of Sundaland, during recent glacial periods when sea levels were much lower. The present range of Tapanuli orangutans is thought to be close to the area where ancestral orangutans first entered what is now Indonesia from mainland Asia.
Description
Tapanuli orangutans resemble Sumatran orangutans more than Bornean orangutans in body build and fur color, but they have frizzier hair, smaller heads, and flatter, wider faces. Dominant male Tapanuli orangutans have prominent moustaches and large, flat cheek pads, known as flanges, covered in downy hair. The Tapanuli orangutan differs from the other two living orangutan species in several specific features: the upper canines are larger; the face is shallower; the pharyngotympanic tube is shorter; the mandibular joint is shorter; the maxillary incisor row is narrower; the distance across the palate at the first molars is narrower; the horizontal length of the mandibular symphysis is smaller; the inferior torus is smaller; and the ascending ramus of the mandible is narrower. As with the other two orangutan species, males are larger and heavier than females. Compared with Pongo abelii, the Tapanuli orangutan has a deeper suborbital fossa, a triangular pyriform aperture, and a more angled facial profile.
Behavior
The loud, long-distance call or "long call" of male Tapanuli orangutans has a higher maximum frequency than that of Sumatran orangutans, and lasts much longer and has more pulses than that of Bornean orangutans. Their diet is also distinctive, containing unusual items like caterpillars and conifer cones. Tapanuli orangutans are thought to be exclusively arboreal, as scientists have not seen them descend to the ground in over 3,000 hours of observation; this is probably due to the presence of Sumatran tigers in the area. Their other main predators are Sunda clouded leopards, Sumatran dholes, and crocodiles. Tapanuli orangutans reproduce slowly, which makes it difficult for the population to grow.
Habitat and distribution
Tapanuli orangutans live in tropical and subtropical moist broadleaf forests south of Lake Toba in Sumatra, and the entire species is confined to a single area of upland forest there. Tapanuli orangutans are separated from the island's other orangutan species, the Sumatran orangutan, by only a short distance.
Conservation
With fewer than 800 individuals restricted to such a small area, the Tapanuli orangutan is the rarest great ape. It is listed as critically endangered by the International Union for Conservation of Nature (IUCN) because of hunting, conflict with humans, the illegal wildlife trade, rampant habitat destruction for small-scale agriculture and mining, and a proposed hydroelectric dam, the Batang Toru hydropower project, in the area with the highest density of orangutans, which could impact up to 10% of the species' already dwindling habitat and degrade important wildlife corridors. Conservationists predict an 83% decline in three generations (75 years) if the necessary conservation measures and practices are not implemented. Inbreeding depression is likely because of the small population size and fragmented range; this is supported by the genomes of two Tapanuli orangutan individuals, which show signs of inbreeding. In August 2019, the Swiss environmental group PanEco, a partner in the Sumatran Orangutan Conservation Programme, dropped its previous opposition to the dam, several months after firing several researchers who opposed the new strategy.
Biology and health sciences
Apes
Animals
42644968
https://en.wikipedia.org/wiki/Void%20%28astronomy%29
Void (astronomy)
Cosmic voids (also known as dark space) are vast spaces between filaments (the largest-scale structures in the universe), which contain very few or no galaxies. In spite of their size, most galaxies are not located in voids. This is because most galaxies are gravitationally bound together, creating huge cosmic structures known as galaxy filaments. The cosmological evolution of the void regions differs drastically from the evolution of the universe as a whole: there is a long stage when the curvature term dominates, which prevents the formation of galaxy clusters and massive galaxies. Hence, although even the emptiest regions of voids contain more than ~15% of the average matter density of the universe, the voids look almost empty to an observer. Voids typically have a diameter of 10 to 100 megaparsecs (30 to 300 million light-years); particularly large voids, defined by the absence of rich superclusters, are sometimes called supervoids. They were first discovered in 1978 in a pioneering study by Stephen Gregory and Laird A. Thompson at the Kitt Peak National Observatory. Voids are believed to have been formed by baryon acoustic oscillations in the Big Bang, collapses of mass followed by implosions of the compressed baryonic matter. Starting from initially small anisotropies from quantum fluctuations in the early universe, the anisotropies grew larger in scale over time. Regions of higher density collapsed more rapidly under gravity, eventually resulting in the large-scale, foam-like structure or "cosmic web" of voids and galaxy filaments seen today. Voids located in high-density environments are smaller than voids situated in low-density spaces of the universe. Voids appear to correlate with the observed temperature of the cosmic microwave background (CMB) because of the Sachs–Wolfe effect. Colder regions correlate with voids, and hotter regions correlate with filaments, because of gravitational redshifting. As the Sachs–Wolfe effect is only significant if the universe is dominated by radiation or dark energy, the existence of voids is significant in providing physical evidence for dark energy.
Large-scale structure
The structure of the Universe can be broken down into components that can help describe the characteristics of individual regions of the cosmos. These are the main structural components of the cosmic web:
Voids – vast, largely spherical regions with very low cosmic mean densities, up to 100 megaparsecs (Mpc) in diameter.
Walls – the regions that contain the typical cosmic mean density of matter abundance. Walls can be further broken down into two smaller structural features:
Clusters – highly concentrated zones where walls meet and intersect, adding to the effective size of the local wall.
Filaments – the branching arms of walls that can stretch for tens of megaparsecs.
Voids have a mean density less than a tenth of the average density of the universe. This serves as a working definition even though there is no single agreed-upon definition of what constitutes a void. The matter density value used for describing the cosmic mean density is usually based on a ratio of the number of galaxies per unit volume rather than the total mass of the matter contained in a unit volume.
Discovery
Study of cosmic voids within the discipline of astrophysics began in the mid-1970s, when redshift surveys led two separate teams of astrophysicists in 1978 to identify superclusters and voids in the distribution of galaxies and Abell clusters.
The new redshift surveys revolutionized the field of astronomy by adding depth to the two-dimensional maps of cosmological structure, which were often densely packed and overlapping, and allowed the first three-dimensional mapping of the universe. In these redshift surveys, depth was calculated from the individual redshifts of the galaxies, caused by the expansion of the universe according to Hubble's law.
Timeline
A summarized timeline of important events in the field of cosmic voids, from its beginning to recent times, is as follows:
1961 – Large-scale structural features such as "second-order clusters", a specific type of supercluster, were brought to the astronomical community's attention.
1978 – The first two papers on the topic of voids in the large-scale structure were published, referencing voids found in the foreground of the Coma/A1367 clusters.
1981 – Discovery of a large void in the Boötes region of the sky that was nearly 50 h⁻¹ Mpc in diameter (later recalculated to be about 34 h⁻¹ Mpc). Here h is the dimensionless Hubble parameter, approximately 0.7.
1983 – Computer simulations sophisticated enough to provide relatively reliable results on the growth and evolution of large-scale structure emerged and yielded insight into key features of the large-scale galaxy distribution.
1985 – Details of the supercluster and void structure of the Perseus–Pisces region were surveyed.
1989 – The Center for Astrophysics Redshift Survey revealed that large voids, sharp filaments, and the walls that surround them dominate the large-scale structure of the universe.
1991 – The Las Campanas Redshift Survey confirmed the abundance of voids in the large-scale structure of the universe (Kirshner et al. 1991).
1995 – Comparisons of optically selected galaxy surveys indicated that the same voids are found regardless of the sample selection.
2001 – The completed two-degree-Field Galaxy Redshift Survey added a significant number of voids to the database of all known cosmic voids.
2009 – The Sloan Digital Sky Survey (SDSS) data, combined with previous large-scale surveys, provided the most complete view of the detailed structure of cosmic voids.
Methods for finding
A number of ways exist for finding voids in the results of large-scale surveys of the universe. Of the many different algorithms, virtually all fall into one of three general categories. The first class consists of void finders that try to find empty regions of space based on local galaxy density. The second class consists of those that try to find voids via the geometrical structures in the dark matter distribution as suggested by the galaxies. The third class consists of finders that identify structures dynamically, using gravitationally unstable points in the distribution of dark matter. The three most popular methods in the study of cosmic voids are listed below.
VoidFinder algorithm
This first-class method uses each galaxy in a catalog as its target and then uses the Nearest Neighbor Approximation to calculate the cosmic density in the region contained in a spherical radius determined by the distance to the third-closest galaxy. El Ad & Piran introduced this method in 1997 to allow a quick and effective method for standardizing the cataloging of voids. Once the spherical cells are mined from all of the structure data, each cell is expanded until the underdensity returns to average expected wall density values.
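A minimal sketch of this first step might look as follows, assuming galaxy positions are held in an (N, 3) NumPy array of comoving coordinates; the function name, the mock catalog, and the 20%-of-mean threshold are illustrative choices, not parameters of the published pipeline, and the subsequent sphere-growing stage is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree


def third_neighbor_density(galaxies: np.ndarray) -> np.ndarray:
    """Local density per galaxy from the distance to its third-closest
    neighbor: n = 3 / ((4/3) * pi * r3**3)."""
    tree = cKDTree(galaxies)
    # k=4 because, in a self-query, the nearest "neighbor" of each point
    # is the point itself at distance zero.
    distances, _ = tree.query(galaxies, k=4)
    r3 = distances[:, 3]  # distance to the third-closest other galaxy
    return 3.0 / ((4.0 / 3.0) * np.pi * r3**3)


# Mock catalog: 10,000 positions in a 500 Mpc/h box (illustrative only).
galaxies = np.random.uniform(0.0, 500.0, size=(10_000, 3))
density = third_neighbor_density(galaxies)
# Galaxies far below the mean density seed the spheres that are then
# grown until the density climbs back toward wall values.
void_candidates = density < 0.2 * density.mean()
```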
One of the helpful features of void regions is that their boundaries are very distinct and defined, with a cosmic mean density that starts at 10% in the body and quickly rises to 20% at the edge and then to 100% in the walls directly outside the edges. The remaining walls and overlapping void regions are then gridded into, respectively, distinct and intertwining zones of filaments, clusters, and near-empty voids. Any overlap of more than 10% with already known voids is treated as a subregion within those known voids. All voids admitted to the catalog had a minimum radius of 10 Mpc in order to ensure that identified voids were not accidentally cataloged due to sampling errors.
Zone bordering on voidness (ZOBOV) algorithm
This second-class algorithm uses a Voronoi tessellation technique and mock border particles in order to categorize regions based on a high-density contrasting border with a very low amount of bias. Neyrinck introduced this algorithm in 2008 with the purpose of introducing a method that did not contain free parameters or presumed shape tessellations. Therefore, this technique can create more accurately shaped and sized void regions. Although this algorithm has some advantages in shape and size, it has often been criticized for sometimes providing loosely defined results. Since it has no free parameters, it mostly finds small and trivial voids, although the algorithm places a statistical significance on each void it finds. A physical significance parameter can be applied in order to reduce the number of trivial voids by requiring a minimum ratio of density to average density of at least 1:5. Subvoids are also identified through this process, which raises more philosophical questions on what qualifies as a void. Void finders such as VIDE are based on ZOBOV. (A minimal sketch of the Voronoi density step appears at the end of this section.)
Dynamical void analysis (DIVA) algorithm
This third-class method is drastically different from the previous two algorithms listed. The most striking aspect is that it requires a different definition of what it means to be a void. Instead of the general notion that a void is a region of space with a low cosmic mean density (a hole in the distribution of galaxies), it defines voids as regions from which matter is escaping, corresponding to the dark energy equation of state, w. Void centers are then considered to be the maximal source of the displacement field, denoted Sψ. The purpose of this change in definition was presented by Lavaux and Wandelt in 2009 as a way to yield cosmic voids on which exact analytical calculations can be made of their dynamical and geometrical properties. This allows DIVA to thoroughly explore the ellipticity of voids and how they evolve in the large-scale structure, subsequently leading to the classification of three distinct morphological classes: True voids, Pancake voids, and Filament voids. Another notable quality is that even though DIVA also contains a selection-function bias, just as first-class methods do, DIVA is devised such that this bias can be precisely calibrated, leading to much more reliable results. Multiple shortfalls of this Lagrangian-Eulerian hybrid approach exist. One example is that the resulting voids from this method are intrinsically different from those found by other methods, which makes an all-inclusive comparison between the results of differing algorithms very difficult.
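As a companion to the forward reference in the ZOBOV paragraph above, here is a minimal sketch of that algorithm's opening step, the Voronoi density estimate. It omits the mock border particles and the later joining of zones, and the function and variable names are illustrative assumptions of this sketch rather than the published implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi


def voronoi_densities(points: np.ndarray) -> np.ndarray:
    """Density of each tracer as the inverse volume of its Voronoi cell."""
    vor = Voronoi(points)
    densities = np.zeros(len(points))
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if not region or -1 in region:
            # Unbounded cell on the survey edge; ZOBOV avoids these by
            # surrounding the tracers with mock border particles instead.
            continue
        # Voronoi cells are convex, so the hull volume is the cell volume.
        densities[i] = 1.0 / ConvexHull(vor.vertices[region]).volume
    return densities


# Local minima of this density field become the cores of the "zones"
# that ZOBOV later merges into void candidates.
```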
Significance
Voids have contributed significantly to the modern understanding of the cosmos, with applications ranging from shedding light on the current understanding of dark energy to refining and constraining cosmological evolution models. The Milky Way Galaxy sits in a cosmic void named the KBC Void. Some popular applications are discussed in detail below.
Dark energy
The simultaneous existence of the largest-known voids and galaxy clusters requires about 70% dark energy in the universe today, consistent with the latest data from the cosmic microwave background. Voids act as bubbles in the universe that are sensitive to background cosmological changes, so the evolution of a void's shape is in part the result of the expansion of the universe. Since the acceleration of this expansion is believed to be caused by dark energy, studying the changes of a void's shape over time can be used to constrain the standard ΛCDM model, or to further refine the Quintessence + Cold Dark Matter (QCDM) model and provide a more accurate dark energy equation of state. Additionally, the abundance of voids is a promising way to constrain the dark energy equation of state.
Neutrinos
Neutrinos, due to their very small mass and extremely weak interaction with other matter, free-stream in and out of voids that are smaller than the neutrino mean free path. This has an effect on the size and depth distribution of voids, and is expected to make it possible, with future astronomical surveys (e.g. the Euclid satellite), to measure the sum of the masses of all neutrino species by comparing the statistical properties of void samples to theoretical predictions.
Galactic formation and evolution models
Cosmic voids contain a mix of galaxies and matter that is slightly different from that in other regions of the universe. This unique mix supports the biased galaxy formation picture predicted by Gaussian adiabatic cold dark matter models, and provides an opportunity to refine the morphology-density correlation, which shows discrepancies with respect to these voids. Observations such as the morphology-density correlation can help uncover new facets of how galaxies form and evolve on the large scale. On a more local scale, galaxies that reside in voids have differing morphological and spectral properties from those located in the walls. One finding is that voids contain a significantly higher fraction of starburst galaxies of young, hot stars when compared with samples of galaxies in walls. Voids also offer opportunities to study the strength of intergalactic magnetic fields. For example, a 2015 study concluded, based on the deflection of blazar gamma-ray emissions that travel through voids, that intergalactic space contains a magnetic field of extremely small but nonzero strength. The specific large-scale magnetic structure of the universe suggests primordial "magnetogenesis", which in turn could have played a role in the formation of magnetic fields within galaxies, and could also change estimates of the timeline of recombination in the early universe.
Anomalies in anisotropies
Cold spots in the cosmic microwave background, such as the WMAP cold spot found by the Wilkinson Microwave Anisotropy Probe, could possibly be explained by an extremely large cosmic void with a radius of ~120 Mpc, provided that the late integrated Sachs–Wolfe effect is accounted for in the possible solution.
Anomalies in CMB maps are now potentially being explained through the existence of large voids located down the line of sight in which the cold spots lie.
Expansion
Although dark energy is currently the most popular explanation for the acceleration in the expansion of the universe, another theory elaborates on the possibility of our galaxy being part of a very large, not-so-underdense cosmic void. According to this theory, such an environment could naively account for the observed acceleration without the need for dark energy. As more data on this topic have been released, the chances of this being a realistic solution in place of the current ΛCDM interpretation have been largely diminished, though the idea has not been altogether abandoned.
Gravitational theories
The abundance of voids, particularly when combined with the abundance of clusters of galaxies, is a promising method for precision tests of deviations from general relativity on large scales and in low-density regions. The insides of voids often seem to adhere to cosmological parameters that differ from those of the known universe. It is because of this unique feature that cosmic voids make useful laboratories to study the effects that gravitational clustering and growth rates have on local galaxies and structure when the cosmological parameters have different values from the outside universe. Because larger voids predominantly remain in a linear regime, with most structures within them exhibiting spherical symmetry in the underdense environment (that is, the underdensity leads to near-negligible particle-particle gravitational interactions that would otherwise occur in a region of normal galactic density), models for voids can be tested with very high accuracy. The cosmological parameters that differ in these voids are Ω_m, Ω_Λ, and H_0.
Physical sciences
Basics_3
null
41228216
https://en.wikipedia.org/wiki/Field-effect%20transistor
Field-effect transistor
The field-effect transistor (FET) is a type of transistor that uses an electric field to control the current through a semiconductor. It comes in two types: the junction FET (JFET) and the metal-oxide-semiconductor FET (MOSFET). FETs have three terminals: source, gate, and drain. FETs control the current by the application of a voltage to the gate, which in turn alters the conductivity between the drain and source. FETs are also known as unipolar transistors, since they involve single-carrier-type operation. That is, FETs use either electrons (n-channel) or holes (p-channel) as charge carriers in their operation, but not both. Many different types of field-effect transistors exist. Field-effect transistors generally display very high input impedance at low frequencies. The most widely used field-effect transistor is the MOSFET (metal–oxide–semiconductor field-effect transistor).
History
The concept of a field-effect transistor (FET) was first patented by the Austro-Hungarian-born physicist Julius Edgar Lilienfeld in 1925 and by Oskar Heil in 1934, but they were unable to build a working practical semiconducting device based on the concept. The transistor effect was later observed and explained by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Labs in 1947, shortly after the 17-year patent expired. Shockley initially attempted to build a working FET by trying to modulate the conductivity of a semiconductor, but was unsuccessful, mainly due to problems with surface states, dangling bonds, and the germanium and copper compound materials. The effort to understand the mysterious reasons behind this failure led Bardeen and Brattain instead to invent the point-contact transistor in 1947, which was followed by Shockley's bipolar junction transistor in 1948. The first FET device to be successfully built was the junction field-effect transistor (JFET). A JFET was first patented by Heinrich Welker in 1945. The static induction transistor (SIT), a type of JFET with a short channel, was invented by Japanese engineers Jun-ichi Nishizawa and Y. Watanabe in 1950. Following Shockley's theoretical treatment of the JFET in 1952, a working practical JFET was built by George C. Dacey and Ian M. Ross in 1953. However, the JFET still had the issues affecting junction transistors in general. Junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The insulated-gate field-effect transistor (IGFET) was theorized as a potential alternative to junction transistors, but researchers were unable to build working IGFETs, largely due to the troublesome surface state barrier that prevented the external electric field from penetrating into the material. By the mid-1950s, researchers had largely given up on the FET concept and instead focused on bipolar junction transistor (BJT) technology. The foundations of MOSFET technology were laid down by the work of William Shockley, John Bardeen and Walter Brattain. Shockley independently envisioned the FET concept in 1945, but he was unable to build a working device. The following year, Bardeen explained this failure in terms of surface states.
Bardeen applied the theory of surface states on semiconductors (previous work on surface states had been done by Shockley in 1939 and Igor Tamm in 1932) and realized that the external field was blocked at the surface because of extra electrons drawn to the semiconductor surface. Electrons become trapped in those localized states, forming an inversion layer. Bardeen's hypothesis marked the birth of surface physics. Bardeen then decided to make use of an inversion layer instead of the very thin layer of semiconductor which Shockley had envisioned in his FET designs. Based on his theory, in 1948 Bardeen patented the progenitor of the MOSFET, an insulated-gate FET (IGFET) with an inversion layer. The inversion layer confines the flow of minority carriers, increasing modulation and conductivity, although its electron transport depends on the quality of the gate insulator (for example, an oxide) deposited above the inversion layer. Bardeen's patent, as well as the concept of an inversion layer, forms the basis of CMOS technology today. In 1976, Shockley described Bardeen's surface state hypothesis as "one of the most significant research ideas in the semiconductor program". After Bardeen's surface state theory, the trio tried to overcome the effect of surface states. In late 1947, Robert Gibney and Brattain suggested the use of an electrolyte placed between metal and semiconductor to overcome the effects of surface states. Their FET device worked, but amplification was poor. Bardeen went further and suggested focusing instead on the conductivity of the inversion layer. Further experiments led them to replace the electrolyte with a solid oxide layer in the hope of getting better results. Their goal was to penetrate the oxide layer and get to the inversion layer. However, Bardeen suggested they switch from silicon to germanium, and in the process their oxide was inadvertently washed off. They stumbled upon a completely different transistor, the point-contact transistor. Lillian Hoddeson argues that "had Brattain and Bardeen been working with silicon instead of germanium they would have stumbled across a successful field effect transistor". By the end of the first half of the 1950s, following theoretical and experimental work by Bardeen, Brattain, Kingston, Morrison and others, it had become clearer that there were two types of surface states. Fast surface states were found to be associated with the bulk and with the semiconductor/oxide interface. Slow surface states were found to be associated with the oxide layer, because of the adsorption of atoms, molecules and ions by the oxide from the ambient; the latter were found to be much more numerous and to have much longer relaxation times. At the time, Philo Farnsworth and others came up with various methods of producing atomically clean semiconductor surfaces. In 1955, Carl Frosch and Lincoln Derrick accidentally covered the surface of a silicon wafer with a layer of silicon dioxide. They showed that the oxide layer blocked certain dopants from entering the silicon wafer while allowing others through, thus discovering the passivating effect of oxidation on the semiconductor surface. Their further work demonstrated how to etch small openings in the oxide layer to diffuse dopants into selected areas of the silicon wafer. In 1957, they published a research paper summarizing their work and patented their technique. The technique they developed is known as oxide diffusion masking, which would later be used in the fabrication of MOSFET devices.
At Bell Labs, the importance of Frosch's technique was immediately realized. Results of this work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of the article in December 1956 to all his senior staff, including Jean Hoerni. In 1955, Ian Munro Ross filed a patent for a FeFET or MFSFET. Its structure was like that of a modern inversion-channel MOSFET, but ferroelectric material was used as a dielectric/insulator instead of oxide. He envisioned it as a form of memory, years before the floating-gate MOSFET. In February 1957, John Wallmark filed a patent for a FET in which germanium monoxide was used as a gate dielectric, but he did not pursue the idea. In another patent filed the same year, he described a double-gate FET. In March 1957, in his laboratory notebook, Ernesto Labate, a research scientist at Bell Labs, conceived of a device similar to the later-proposed MOSFET, although Labate's device did not explicitly use silicon dioxide as an insulator.
Metal-oxide-semiconductor FET (MOSFET)
In 1955, Carl Frosch and Lincoln Derrick accidentally grew a layer of silicon dioxide over a silicon wafer, on which they observed surface passivation effects. By 1957, Frosch and Derrick, using masking and predeposition, were able to manufacture silicon dioxide transistors and showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides and fabricated a high-quality Si/SiO2 stack in 1960. Following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. The MOSFET is also capable of handling higher power than the JFET. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. The MOSFET thus became the most common type of transistor in computers, electronics, and communications technology (such as smartphones). The US Patent and Trademark Office calls it a "groundbreaking invention that transformed life and culture around the world". In 1948, Bardeen and Brattain had patented the progenitor of the MOSFET, an insulated-gate FET (IGFET) with an inversion layer; their patent, and the concept of an inversion layer, form the basis of CMOS technology today. CMOS (complementary MOS), a semiconductor device fabrication process for MOSFETs, was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. The first report of a floating-gate MOSFET was made by Dawon Kahng and Simon Sze in 1967. The concept of a double-gate thin-film transistor (TFT) was proposed by H. R. Farrah (Bendix Corporation) and R. F. Steinberg in 1967. A double-gate MOSFET was first demonstrated in 1984 by Electrotechnical Laboratory researchers Toshihiro Sekigawa and Yutaka Hayashi.
FinFET (fin field-effect transistor), a type of 3D non-planar multi-gate MOSFET, originated from the research of Digh Hisamoto and his team at Hitachi Central Research Laboratory in 1989.
Basic information
FETs can be majority-charge-carrier devices, in which the current is carried predominantly by majority carriers, or minority-charge-carrier devices, in which the current is mainly due to a flow of minority carriers. The device consists of an active channel through which charge carriers, electrons or holes, flow from the source to the drain. Source and drain terminal conductors are connected to the semiconductor through ohmic contacts. The conductivity of the channel is a function of the potential applied across the gate and source terminals. The FET's three terminals are:
Source (S), through which the carriers enter the channel. Conventionally, current entering the channel at S is designated I_S.
Drain (D), through which the carriers leave the channel. Conventionally, current leaving the channel at D is designated I_D. Drain-to-source voltage is V_DS.
Gate (G), the terminal that modulates the channel conductivity. By applying voltage to G, one can control I_D.
More about terminals
All FETs have source, drain, and gate terminals that correspond roughly to the emitter, collector, and base of BJTs. Most FETs have a fourth terminal called the body, base, bulk, or substrate. This fourth terminal serves to bias the transistor into operation; it is rare to make non-trivial use of the body terminal in circuit designs, but its presence is important when setting up the physical layout of an integrated circuit. The size of the gate, its length L, is the distance between source and drain. The width is the extension of the transistor in the direction perpendicular to its length; typically the width is much larger than the length of the gate. A gate length of 1 μm limits the upper frequency to about 5 GHz, and 0.2 μm to about 30 GHz. The names of the terminals refer to their functions. The gate terminal may be thought of as controlling the opening and closing of a physical gate. This gate permits electrons to flow through or blocks their passage by creating or eliminating a channel between the source and drain. Electron flow from the source terminal towards the drain terminal is influenced by an applied voltage. The body simply refers to the bulk of the semiconductor in which the gate, source and drain lie. Usually the body terminal is connected to the highest or lowest voltage within the circuit, depending on the type of the FET. The body terminal and the source terminal are sometimes connected together, since the source is often connected to the highest or lowest voltage within the circuit, although there are several uses of FETs which do not have such a configuration, such as transmission gates and cascode circuits. Unlike BJTs, the vast majority of FETs are electrically symmetrical. The source and drain terminals can thus be interchanged in practical circuits with no change in operating characteristics or function. This can be confusing when FETs appear to be connected "backwards" in schematic diagrams and circuits, because the physical orientation of the FET was decided for other reasons, such as printed circuit layout considerations.
Effect of gate voltage on current
The FET controls the flow of electrons (or electron holes) from the source to drain by affecting the size and shape of a "conductive channel" created and influenced by voltage (or lack of voltage) applied across the gate and source terminals. (For simplicity, this discussion assumes that the body and source are connected.) This conductive channel is the "stream" through which electrons flow from source to drain.
n-channel FET
In an n-channel "depletion-mode" device, a negative gate-to-source voltage causes a depletion region to expand in width and encroach on the channel from the sides, narrowing the channel. If the depletion region expands to completely close the channel, the resistance of the channel from source to drain becomes large, and the FET is effectively turned off like a switch. This is called "pinch-off", and the voltage at which it occurs is called the "pinch-off voltage". Conversely, a positive gate-to-source voltage increases the channel size and allows electrons to flow easily. In an n-channel "enhancement-mode" device, a conductive channel does not exist naturally within the transistor, and a positive gate-to-source voltage is necessary to create one. The positive voltage attracts free-floating electrons within the body towards the gate, forming a conductive channel. But first, enough electrons must be attracted near the gate to counter the dopant ions added to the body of the FET; this forms a region with no mobile carriers called a depletion region, and the voltage at which this occurs is referred to as the threshold voltage of the FET. Further gate-to-source voltage increase will attract even more electrons towards the gate which are able to create an active channel from source to drain; this process is called inversion.
p-channel FET
In a p-channel "depletion-mode" device, a positive voltage from gate to body widens the depletion layer by forcing electrons to the gate-insulator/semiconductor interface, leaving exposed a carrier-free region of immobile, positively charged acceptor ions. Conversely, in a p-channel "enhancement-mode" device, a conductive region does not exist, and a negative voltage must be used to generate a conduction channel.
Effect of drain-to-source voltage on channel
For either enhancement- or depletion-mode devices, at drain-to-source voltages much less than gate-to-source voltages, changing the gate voltage will alter the channel resistance, and drain current will be proportional to drain voltage (referenced to source voltage). In this mode the FET operates like a variable resistor, and the FET is said to be operating in a linear mode or ohmic mode. If drain-to-source voltage is increased, this creates a significant asymmetrical change in the shape of the channel due to a gradient of voltage potential from source to drain. The shape of the inversion region becomes "pinched-off" near the drain end of the channel. If drain-to-source voltage is increased further, the pinch-off point of the channel begins to move away from the drain towards the source. The FET is said to be in saturation mode, although some authors refer to it as active mode for a better analogy with bipolar transistor operating regions. The saturation mode, or the region between ohmic and saturation, is used when amplification is needed.
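The ohmic and saturation behaviour just described is often summarized by the textbook long-channel "square-law" model. The sketch below is such a summary, not a description of any specific device; the threshold voltage, process transconductance, and W/L values are illustrative assumptions.

```python
def drain_current(v_gs: float, v_ds: float,
                  v_th: float = 0.7,        # threshold voltage, V (assumed)
                  k_n: float = 200e-6,      # process transconductance, A/V^2 (assumed)
                  w_over_l: float = 10.0) -> float:  # width-to-length ratio (assumed)
    """Square-law drain current (A) for an n-channel enhancement MOSFET."""
    v_ov = v_gs - v_th                 # overdrive voltage
    if v_ov <= 0.0:
        return 0.0                     # cutoff: no inversion channel
    if v_ds < v_ov:                    # ohmic/linear region: resistor-like
        return k_n * w_over_l * (v_ov * v_ds - 0.5 * v_ds**2)
    return 0.5 * k_n * w_over_l * v_ov**2  # saturation: nearly constant current


print(drain_current(2.0, 0.1))  # small V_DS: behaves like a resistor
print(drain_current(2.0, 5.0))  # past pinch-off: constant-current behavior
```

In the first call the current scales almost linearly with v_ds; in the second it is set almost entirely by v_gs, matching the constant-current behaviour described in the following paragraph.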
The in-between region is sometimes considered to be part of the ohmic or linear region, even where drain current is not approximately linear with drain voltage. Even though the conductive channel formed by gate-to-source voltage no longer connects source to drain during saturation mode, carriers are not blocked from flowing. Considering again an n-channel enhancement-mode device, a depletion region exists in the p-type body, surrounding the conductive channel and the drain and source regions. The electrons which comprise the channel are free to move out of the channel through the depletion region if attracted to the drain by the drain-to-source voltage. The depletion region is free of carriers and has a resistance similar to silicon. Any increase of the drain-to-source voltage will increase the distance from the drain to the pinch-off point, increasing the resistance of the depletion region in proportion to the drain-to-source voltage applied. This proportional change causes the drain-to-source current to remain relatively fixed, independent of changes to the drain-to-source voltage, quite unlike its ohmic behavior in the linear mode of operation. Thus, in saturation mode, the FET behaves as a constant-current source rather than as a resistor, and can effectively be used as a voltage amplifier. In this case, the gate-to-source voltage determines the level of constant current through the channel.
Composition
FETs can be constructed from various semiconductors, of which silicon is by far the most common. Most FETs are made by using conventional bulk semiconductor processing techniques, using a single-crystal semiconductor wafer as the active region, or channel. Among the more unusual body materials are amorphous silicon, polycrystalline silicon or other amorphous semiconductors in thin-film transistors or organic field-effect transistors (OFETs) that are based on organic semiconductors; often, OFET gate insulators and electrodes are made of organic materials as well. Such FETs are manufactured using a variety of materials such as silicon carbide (SiC), gallium arsenide (GaAs), gallium nitride (GaN), and indium gallium arsenide (InGaAs). In June 2011, IBM announced that it had successfully used graphene-based FETs in an integrated circuit. These transistors are capable of about 2.23 GHz cutoff frequency, much higher than standard silicon FETs.
Types
The channel of a FET is doped to produce either an n-type semiconductor or a p-type semiconductor. The drain and source may be doped to the type opposite to the channel, in the case of enhancement-mode FETs, or doped to the same type as the channel, as in depletion-mode FETs. Field-effect transistors are also distinguished by the method of insulation between channel and gate. Types of FETs include:
The MOSFET (metal–oxide–semiconductor field-effect transistor) utilizes an insulator (typically SiO2) between the gate and the body. This is by far the most common type of FET.
The DGMOSFET (dual-gate MOSFET) or DGMOS, a MOSFET with two insulated gates.
The IGBT (insulated-gate bipolar transistor) is a device for power control. It has a structure akin to a MOSFET coupled with a bipolar-like main conduction channel. These are commonly used for the 200–3000 V drain-to-source voltage range of operation. Power MOSFETs are still the device of choice for drain-to-source voltages of 1 to 200 V.
The JLNT (junctionless nanowire transistor) is a type of FET whose channel consists of one or more nanowires and does not contain any junction.
The MNOS (metal–nitride–oxide–semiconductor transistor) utilizes a nitride-oxide layer insulator between the gate and the body.
The ISFET (ion-sensitive field-effect transistor) can be used to measure ion concentrations in a solution; when the ion concentration (such as H+, see pH electrode) changes, the current through the transistor will change accordingly.
The BioFET (biologically sensitive field-effect transistor) is a class of sensors/biosensors based on ISFET technology which are utilized to detect charged molecules; when a charged molecule is present, changes in the electrostatic field at the BioFET surface result in a measurable change in current through the transistor. These include enzyme-modified FETs (EnFETs), immunologically modified FETs (ImmunoFETs), gene-modified FETs (GenFETs), DNAFETs, cell-based BioFETs (CPFETs), beetle/chip FETs (BeetleFETs), and FETs based on ion channels/protein binding.
The DNAFET (DNA field-effect transistor) is a specialized FET that acts as a biosensor, using a gate made of single-strand DNA molecules to detect matching DNA strands.
The finFET, including the GAAFET or gate-all-around FET, used on high-density processor chips.
The JFET (junction field-effect transistor) uses a reverse-biased p–n junction to separate the gate from the body.
The static induction transistor (SIT) is a type of JFET with a short channel.
The DEPFET is a FET formed in a fully depleted substrate that acts as a sensor, amplifier and memory node at the same time. It can be used as an image (photon) sensor.
The FREDFET (fast-reverse or fast-recovery epitaxial diode FET) is a specialized FET designed to provide a very fast recovery (turn-off) of the body diode, making it convenient for driving inductive loads such as electric motors, especially medium-powered brushless DC motors.
The HIGFET (heterostructure insulated-gate field-effect transistor) is now used mainly in research.
The MODFET (modulation-doped field-effect transistor) is a high-electron-mobility transistor using a quantum well structure formed by graded doping of the active region.
The TFET (tunnel field-effect transistor) is based on band-to-band tunneling.
The TQFET (topological quantum field-effect transistor) switches a 2D material from a dissipationless topological insulator ('on' state) to a conventional insulator ('off' state) using an applied electric field.
The HEMT (high-electron-mobility transistor), also called an HFET (heterostructure FET), can be made using bandgap engineering in a ternary semiconductor such as AlGaAs. The fully depleted wide-band-gap material forms the isolation between gate and body.
The MESFET (metal–semiconductor field-effect transistor) substitutes a Schottky barrier for the p–n junction of the JFET and is used in GaAs and other III-V semiconductor materials.
The NOMFET is a nanoparticle organic memory field-effect transistor.
The GNRFET (graphene nanoribbon field-effect transistor) uses a graphene nanoribbon for its channel.
The VeSFET (vertical-slit field-effect transistor) is a square-shaped junctionless FET with a narrow slit connecting the source and drain at opposite corners. Two gates occupy the other corners and control the current through the slit.
The CNTFET (carbon nanotube field-effect transistor).
The OFET (organic field-effect transistor) uses an organic semiconductor in its channel.
The QFET (quantum field-effect transistor) takes advantage of quantum tunneling to greatly increase the speed of transistor operation by eliminating the traditional transistor's area of electron conduction.
The SB-FET (Schottky-barrier field-effect transistor) is a field-effect transistor with metallic source and drain contact electrodes, which create Schottky barriers at both the source-channel and drain-channel interfaces.
The GFET is a highly sensitive graphene-based field-effect transistor used in biosensors and chemical sensors. Due to the two-dimensional structure of graphene, along with its physical properties, GFETs offer increased sensitivity and reduced instances of 'false positives' in sensing applications.
The FeFET uses a ferroelectric material between the gate and the channel, allowing the transistor to retain its state in the absence of bias; such devices may have application as non-volatile memory.
The VTFET, or vertical-transport field-effect transistor, IBM's 2021 modification of the finFET to allow higher density and lower power.
Advantages
Field-effect transistors have high gate-to-drain current resistance, of the order of 100 MΩ or more, providing a high degree of isolation between control and flow. Because base current noise increases with shaping time, a FET typically produces less noise than a bipolar junction transistor (BJT) and is found in noise-sensitive electronics such as tuners and low-noise amplifiers for VHF and satellite receivers. It exhibits no offset voltage at zero drain current and makes an excellent signal chopper. It typically has better thermal stability than a BJT. Because FETs are controlled by gate charge, once the gate is closed or open, there is no additional power draw, as there would be with a bipolar junction transistor or with non-latching relays in some states. This allows extremely low-power switching, which in turn allows greater miniaturization of circuits because heat dissipation needs are reduced compared to other types of switches.
Disadvantages
A field-effect transistor has a relatively low gain–bandwidth product compared to a bipolar junction transistor. MOSFETs are very susceptible to overload voltages, thus requiring special handling during installation. The fragile insulating layer of the MOSFET between the gate and the channel makes it vulnerable to electrostatic discharge or changes to threshold voltage during handling. This is not usually a problem after the device has been installed in a properly designed circuit. FETs often have a very low "on" resistance and a high "off" resistance. However, the intermediate resistances are significant, and so FETs can dissipate large amounts of power while switching. Thus, efficiency can put a premium on switching quickly, but this can cause transients that excite stray inductances and generate significant voltages that can couple to the gate and cause unintentional switching. FET circuits can therefore require very careful layout and can involve trade-offs between switching speed and power dissipation. There is also a trade-off between voltage rating and "on" resistance, so high-voltage FETs have a relatively high "on" resistance and hence conduction losses.
Failure modes
Field-effect transistors are relatively robust, especially when operated within the temperature and electrical limitations defined by the manufacturer (proper derating). However, modern FET devices often incorporate a body diode.
If the characteristics of the body diode are not taken into consideration, the FET can experience slow body diode behavior, where a parasitic transistor turns on and allows high current to be drawn from drain to source when the FET is off. Uses The most commonly used FET is the MOSFET. The CMOS (complementary metal oxide semiconductor) process technology is the basis for modern digital integrated circuits. This process technology uses an arrangement where the (usually "enhancement-mode") p-channel MOSFET and n-channel MOSFET are connected in series such that when one is on, the other is off. In FETs, electrons can flow in either direction through the channel when operated in the linear mode. The naming convention of drain terminal and source terminal is somewhat arbitrary, as the devices are typically (but not always) built symmetrically from source to drain. This makes FETs suitable for switching analog signals between paths (multiplexing); with this concept, one can construct a solid-state mixing board, for example. The FET is commonly used as an amplifier; for example, due to its large input resistance and low output resistance, it is effective as a buffer in the common-drain (source follower) configuration. IGBTs are used in switching internal combustion engine ignition coils, where fast switching and voltage-blocking capabilities are important. Source-gated transistor Source-gated transistors are more robust to manufacturing and environmental issues in large-area electronics such as display screens, but are slower in operation than FETs.
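As a minimal illustrative sketch of the complementary arrangement just described (a logic-level model in Python, not a circuit simulation; the function and variable names are invented for illustration): in a CMOS inverter, exactly one of the two series-connected devices conducts for any valid input level, so the output is always driven while almost no static current flows.

```python
# Logic-level sketch of a CMOS inverter: the p-channel and n-channel
# MOSFETs are complementary, so exactly one conducts for each input level.
def cmos_inverter(vin_high: bool) -> bool:
    nmos_on = vin_high         # n-channel conducts when the gate is high
    pmos_on = not vin_high     # p-channel conducts when the gate is low
    assert nmos_on != pmos_on  # never both on: (almost) no static current
    # If the PMOS conducts, the output is pulled up to the supply (logic 1);
    # if the NMOS conducts, it is pulled down to ground (logic 0).
    return pmos_on

for vin in (False, True):
    vout = cmos_inverter(vin)
    print(f"Vin={'high' if vin else 'low'} -> Vout={'high' if vout else 'low'}")
```

Running the loop prints the inverter truth table; the assertion encodes the key property that the two devices are never on simultaneously, which is why static CMOS draws essentially no power except while switching.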
Technology
Semiconductors
null
41228673
https://en.wikipedia.org/wiki/Internal%20combustion%20engine
Internal combustion engine
An internal combustion engine (ICE or IC engine) is a heat engine in which the combustion of a fuel occurs with an oxidizer (usually air) in a combustion chamber that is an integral part of the working fluid flow circuit. In an internal combustion engine, the expansion of the high-temperature and high-pressure gases produced by combustion applies direct force to some component of the engine. The force is typically applied to pistons (piston engine), turbine blades (gas turbine), a rotor (Wankel engine), or a nozzle (jet engine). This force moves the component over a distance. This process transforms chemical energy into kinetic energy which is used to propel, move or power whatever the engine is attached to. The first commercially successful internal combustion engines were invented in the mid-19th century. The first modern internal combustion engine, the Otto engine, was designed in 1876 by the German engineer Nicolaus Otto. The term internal combustion engine usually refers to an engine in which combustion is intermittent, such as the more familiar two-stroke and four-stroke piston engines, along with variants such as the six-stroke piston engine and the Wankel rotary engine. A second class of internal combustion engines uses continuous combustion: gas turbines, jet engines and most rocket engines, each of which is an internal combustion engine operating on the same principle as previously described. In contrast, in external combustion engines, such as steam or Stirling engines, energy is delivered to a working fluid not consisting of, mixed with, or contaminated by combustion products. Working fluids for external combustion engines include air, hot water, pressurized water or even boiler-heated liquid sodium. While there are many stationary applications, most ICEs are used in mobile applications and are the primary power supply for vehicles such as cars, aircraft and boats. ICEs are typically powered by hydrocarbon-based fuels like natural gas, gasoline, diesel fuel, or ethanol. Renewable fuels like biodiesel are used in compression ignition (CI) engines, and bioethanol or ETBE (ethyl tert-butyl ether) produced from bioethanol in spark ignition (SI) engines. As early as 1900 the inventor of the diesel engine, Rudolf Diesel, was using peanut oil to run his engines. Renewable fuels are commonly blended with fossil fuels. Hydrogen, which is rarely used, can be obtained from either fossil fuels or renewable energy. History Various scientists and engineers contributed to the development of internal combustion engines. In 1791, John Barber developed the gas turbine. In 1794, Thomas Mead patented a gas engine. Also in 1794, Robert Street patented an internal combustion engine, which was also the first to use liquid fuel, and built an engine around that time. In 1798, John Stevens built the first American internal combustion engine. In 1807, French engineers Nicéphore Niépce (who went on to invent photography) and Claude Niépce ran a prototype internal combustion engine, using controlled dust explosions, the Pyréolophore, which was granted a patent by Napoleon Bonaparte. This engine powered a boat on the Saône river in France. In the same year, Swiss engineer François Isaac de Rivaz invented a hydrogen-based internal combustion engine and powered the engine by electric spark. In 1808, de Rivaz fitted his invention to a primitive working vehicle, "the world's first internal combustion powered automobile". In 1823, Samuel Brown patented the first internal combustion engine to be applied industrially. 
In 1854, in the UK, the Italian inventors Eugenio Barsanti and Felice Matteucci obtained the certification "Obtaining Motive Power by the Explosion of Gases". In 1857 the Great Seal Patent Office conceded them patent No. 1655 for the invention of an "Improved Apparatus for Obtaining Motive Power from Gases". Barsanti and Matteucci obtained other patents for the same invention in France, Belgium and Piedmont between 1857 and 1859. In 1860, Belgian engineer Jean Joseph Etienne Lenoir produced a gas-fired internal combustion engine. In 1864, Nicolaus Otto patented the first atmospheric gas engine. In 1872, American George Brayton invented the first commercial liquid-fueled internal combustion engine. In 1876, Nicolaus Otto, working with Gottlieb Daimler and Wilhelm Maybach, patented the compressed-charge, four-cycle engine. In 1879, Karl Benz patented a reliable two-stroke gasoline engine. Later, in 1886, Benz began the first commercial production of motor vehicles with an internal combustion engine, in which a three-wheeled, four-cycle engine and chassis formed a single unit. In 1892, Rudolf Diesel developed the first compressed-charge, compression-ignition engine. In 1926, Robert Goddard launched the first liquid-fueled rocket. In 1939, the Heinkel He 178 became the world's first jet aircraft. Etymology At one time, the word engine (via Old French, from Latin ingenium, "ability") meant any piece of machinery, a sense that persists in expressions such as siege engine. A "motor" (from Latin motor, "mover") is any machine that produces mechanical power. Traditionally, electric motors are not referred to as "engines"; however, combustion engines are often referred to as "motors". (An electric engine refers to a locomotive operated by electricity.) In boating, an internal combustion engine that is installed in the hull is referred to as an engine, but the engines that sit on the transom are referred to as motors. Applications Reciprocating piston engines are by far the most common power source for land and water vehicles, including automobiles, motorcycles, ships and, to a lesser extent, locomotives (some are electrical but most use diesel engines). Rotary engines of the Wankel design are used in some automobiles, aircraft and motorcycles. These are collectively known as internal-combustion-engine vehicles (ICEV). Where high power-to-weight ratios are required, internal combustion engines appear in the form of combustion turbines, or sometimes Wankel engines. Powered aircraft typically use an ICE which may be a reciprocating engine. Airplanes can instead use jet engines, and helicopters can instead employ turboshafts; both of these are types of turbines. In addition to providing propulsion, aircraft may employ a separate ICE as an auxiliary power unit. Wankel engines are fitted to many unmanned aerial vehicles. ICEs drive large electric generators that power electrical grids. They are found in the form of combustion turbines with a typical electrical output in the range of some 100 MW. Combined cycle power plants use the high-temperature exhaust to boil water and superheat the resulting steam to run a steam turbine. Thus, the efficiency is higher because more energy is extracted from the fuel than could be extracted by the combustion engine alone. Combined cycle power plants achieve efficiencies in the range of 50–60%. On a smaller scale, stationary engines like gas engines or diesel generators are used for backup or for providing electrical power to areas not connected to an electric grid. 
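As a rough, hedged illustration of why combining cycles lifts efficiency into the 50–60% range: the steam bottoming cycle recovers work from heat the gas turbine would otherwise reject, so the overall efficiency is the gas-turbine efficiency plus the steam-cycle efficiency applied to the remaining heat. The stage efficiencies below are illustrative assumptions, not figures for any particular plant.

```python
# Rough combined-cycle estimate (illustrative assumptions, not plant data).
# The steam cycle only sees the heat rejected by the gas turbine, so the
# efficiencies combine as eta = eta_gas + (1 - eta_gas) * eta_steam.
eta_gas = 0.40    # assumed gas-turbine (topping cycle) efficiency
eta_steam = 0.30  # assumed steam (bottoming cycle) efficiency

eta_combined = eta_gas + (1 - eta_gas) * eta_steam
print(f"Combined-cycle efficiency: {eta_combined:.0%}")  # prints 58%
```

Note that the two efficiencies do not simply add; the bottoming cycle can only act on the fraction of fuel energy the topping cycle rejects, which is why combined plants land in the 50–60% range rather than 70%.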
Small engines (usually 2-stroke single-cylinder gasoline/petrol engines) are a common power source for lawnmowers, string trimmers, chainsaws, leaf blowers, pressure washers, radio-controlled cars, snowmobiles, jet skis, outboard motors, mopeds, and motorcycles. Classification There are several possible ways to classify internal combustion engines. Reciprocating engines can be classified by number of strokes, as two-stroke engines (Clerk cycle or Day cycle), four-stroke engines (Otto cycle) and six-stroke engines; by type of ignition, as compression-ignition engines or spark-ignition engines (commonly found as gasoline engines); and by mechanical/thermodynamic cycle, such as the Atkinson cycle and the Miller cycle (these cycles are infrequently used but are commonly found in hybrid vehicles, along with other vehicles manufactured for fuel efficiency). Rotary engines include the Wankel engine and pistonless rotary engines. Continuous-combustion engines include the rocket engine and the gas turbine engine: the turbojet, which produces thrust through a propelling nozzle; the turbofan, through a ducted fan; the turboprop, through an unducted propeller, usually with variable pitch; the turboshaft, a gas turbine optimized for producing mechanical torque instead of thrust; the ramjet, similar to a turbojet but using vehicle speed to compress (ram) the air instead of a compressor; and the scramjet, a variant of the ramjet that uses supersonic combustion. Reciprocating engines Structure The base of a reciprocating internal combustion engine is the engine block, which is typically made of cast iron (due to its good wear resistance and low cost) or aluminum. In the latter case, the cylinder liners are made of cast iron or steel, or a coating such as nikasil or alusil is used. The engine block contains the cylinders. In engines with more than one cylinder, they are usually arranged either in one row (straight engine) or two rows (boxer engine or V engine); three or four rows are occasionally used (W engine) in contemporary engines, and other engine configurations are possible and have been used. Single-cylinder engines (or thumpers) are common for motorcycles and other small engines found in light machinery. Water-cooled engines have passages that contain cooling fluid cast into the engine block around the cylinders (the water jacket); in some heavy-duty engines, the cylinders are instead removable sleeves that can be replaced when worn. Some small engines are air-cooled, and instead of having a water jacket the cylinder block has fins protruding away from it to cool the engine by directly transferring heat to the air. The cylinder walls are usually finished by honing to obtain a cross hatch, which is able to retain more oil. Too rough a surface would quickly harm the engine through excessive wear on the piston. The pistons are short cylindrical parts which seal one end of the cylinder from the high pressure of the compressed air and combustion products, and slide continuously within it while the engine is in operation. In smaller engines the pistons are made of aluminum, while in larger applications they are typically made of cast iron. In performance applications, pistons can also be titanium or forged steel for greater strength. The top surface of the piston is called its crown and is typically flat or concave. Some two-stroke engines use pistons with a deflector head. Pistons are open at the bottom and hollow except for an integral reinforcement structure (the piston web). 
When an engine is working, the gas pressure in the combustion chamber exerts a force on the piston crown which is transferred through its web to a gudgeon pin. Each piston has rings fitted around its circumference that mostly prevent the gases from leaking into the crankcase or the oil from entering the combustion chamber. A ventilation system drives the small amount of gas that escapes past the pistons during normal operation (the blow-by gases) out of the crankcase so that it does not accumulate, contaminating the oil and creating corrosion. In two-stroke gasoline engines the crankcase is part of the air–fuel path, and due to the continuous flow through it they do not need a separate crankcase ventilation system. The cylinder head is attached to the engine block by numerous bolts or studs. It has several functions. The cylinder head seals the cylinders on the side opposite to the pistons; it contains short ducts (the ports) for intake and exhaust and the associated intake valves that open to let the cylinder be filled with fresh air and exhaust valves that open to allow the combustion gases to escape. The valves are often poppet valves, but they can also be rotary valves or sleeve valves. However, 2-stroke crankcase-scavenged engines connect the gas ports directly to the cylinder wall without poppet valves; the piston controls their opening and occlusion instead. The cylinder head also holds the spark plug in the case of spark-ignition engines and the injector for engines that use direct injection. All CI (compression ignition) engines use fuel injection, usually direct injection, but some engines instead use indirect injection. SI (spark ignition) engines can use a carburetor or fuel injection as port injection or direct injection. Most SI engines have a single spark plug per cylinder, but some have two. A head gasket prevents the gas from leaking between the cylinder head and the engine block. The opening and closing of the valves is controlled by one or several camshafts and springs (or, in some engines, a desmodromic mechanism that uses no springs). The camshaft may press directly on the stem of the valve or may act upon a rocker arm, again either directly or through a pushrod. The crankcase is sealed at the bottom with a sump that collects the falling oil during normal operation to be cycled again. The cavity created between the cylinder block and the sump houses a crankshaft that converts the reciprocating motion of the pistons to rotational motion. The crankshaft is held in place relative to the engine block by main bearings, which allow it to rotate. Bulkheads in the crankcase form a half of every main bearing; the other half is a detachable cap. In some cases a single main bearing deck is used rather than several smaller caps. A connecting rod is connected to offset sections of the crankshaft (the crankpins) at one end and to the piston at the other end through the gudgeon pin, and thus transfers the force and translates the reciprocating motion of the pistons to the circular motion of the crankshaft. The end of the connecting rod attached to the gudgeon pin is called its small end, and the other end, where it is connected to the crankshaft, the big end. The big end has a detachable half to allow assembly around the crankshaft; it is held to the connecting rod by removable bolts. The cylinder head has an intake manifold and an exhaust manifold attached to the corresponding ports. 
The intake manifold connects to the air filter directly, or to a carburetor when one is present, which is then connected to the air filter. It distributes the air incoming from these devices to the individual cylinders. The exhaust manifold is the first component in the exhaust system. It collects the exhaust gases from the cylinders and drives them to the following component in the path. The exhaust system of an ICE may also include a catalytic converter and muffler. The final section in the path of the exhaust gases is the tailpipe. Four-stroke engines The top dead center (TDC) of a piston is the position where it is nearest to the valves; bottom dead center (BDC) is the opposite position, where it is furthest from them. A stroke is the movement of a piston from TDC to BDC or vice versa, together with the associated process. While an engine is in operation, the crankshaft rotates continuously at a nearly constant speed. In a 4-stroke ICE, each piston experiences 2 strokes per crankshaft revolution in the following order. Starting the description at TDC, these are: Intake, induction or suction: The intake valves are open as a result of the cam lobe pressing down on the valve stem. The piston moves downward, increasing the volume of the combustion chamber and allowing air to enter in the case of a CI engine, or an air-fuel mix in the case of SI engines that do not use direct injection. The air or air-fuel mixture is called the charge in any case. Compression: In this stroke, both valves are closed and the piston moves upward, reducing the combustion chamber volume, which reaches its minimum when the piston is at TDC. The piston performs work on the charge as it is being compressed; as a result, its pressure, temperature and density increase; an approximation to this behavior is provided by the ideal gas law. Just before the piston reaches TDC, ignition begins. In the case of a SI engine, the spark plug receives a high-voltage pulse that generates the spark which gives it its name and ignites the charge. In the case of a CI engine, the fuel injector quickly injects fuel into the combustion chamber as a spray; the fuel ignites due to the high temperature. Power or working stroke: The pressure of the combustion gases pushes the piston downward, generating more kinetic energy than is required to compress the charge. Complementary to the compression stroke, the combustion gases expand and, as a result, their temperature, pressure and density decrease. When the piston is near to BDC the exhaust valve opens. In the blowdown, the combustion gases expand irreversibly due to the leftover pressure in excess of back pressure (the gauge pressure on the exhaust port). Exhaust: The exhaust valve remains open while the piston moves upward expelling the combustion gases. For naturally aspirated engines a small part of the combustion gases may remain in the cylinder during normal operation because the piston does not close the combustion chamber completely; these gases mix with the next charge. At the end of this stroke, the exhaust valve closes, the intake valve opens, and the sequence repeats in the next cycle. The intake valve may open before the exhaust valve closes to allow better scavenging. Two-stroke engines The defining characteristic of this kind of engine is that each piston completes a cycle every crankshaft revolution. The 4 processes of intake, compression, power and exhaust take place in only 2 strokes, so it is not possible to dedicate a stroke exclusively to each of them. 
Starting at TDC the cycle consists of: Power: While the piston is descending, the combustion gases perform work on it, as in a 4-stroke engine. The same thermodynamics of expansion apply. Scavenging: Around 75° of crankshaft rotation before BDC, the exhaust valve or port opens and blowdown occurs. Shortly thereafter the intake valve or transfer port opens. The incoming charge displaces the remaining combustion gases to the exhaust system, and a part of the charge may enter the exhaust system as well. The piston reaches BDC and reverses direction. After the piston has traveled a short distance upwards into the cylinder the exhaust valve or port closes; shortly afterwards the intake valve or transfer port closes as well. Compression: With both intake and exhaust closed, the piston continues moving upwards, compressing the charge and performing work on it. As in the case of a 4-stroke engine, ignition starts just before the piston reaches TDC, and the same considerations on the thermodynamics of compressing the charge apply. While a 4-stroke engine uses the piston as a positive displacement pump to accomplish scavenging, taking 2 of the 4 strokes, a 2-stroke engine uses the last part of the power stroke and the first part of the compression stroke for combined intake and exhaust. The work required to displace the charge and exhaust gases comes from either the crankcase or a separate blower. For scavenging, the expulsion of burned gas and entry of fresh mix, two main approaches are described: loop scavenging and uniflow scavenging. SAE news reported in the 2010s that loop scavenging is better under any circumstance than uniflow scavenging. Crankcase scavenged Some SI engines are crankcase scavenged and do not use poppet valves. Instead, the crankcase and the part of the cylinder below the piston are used as a pump. The intake port is connected to the crankcase through a reed valve or a rotary disk valve driven by the engine. For each cylinder, a transfer port connects at one end to the crankcase and at the other end to the cylinder wall. The exhaust port is connected directly to the cylinder wall. The transfer and exhaust ports are opened and closed by the piston. The reed valve opens when the crankcase pressure is slightly below intake pressure, to let the crankcase be filled with a new charge; this happens when the piston is moving upwards. When the piston is moving downwards, the pressure in the crankcase increases and the reed valve closes promptly; the charge in the crankcase is then compressed. When the piston is moving downwards it also uncovers the exhaust port and the transfer port, and the higher pressure of the charge in the crankcase makes it enter the cylinder through the transfer port, blowing out the exhaust gases. Lubrication is accomplished by adding two-stroke oil to the fuel in small ratios. Petroil refers to the mix of gasoline with the aforesaid oil. This kind of 2-stroke engine has a lower efficiency than comparable 4-stroke engines and releases more polluting exhaust gases, for the following reasons: They use a total-loss oiling system: all the lubricating oil is eventually burned along with the fuel. There are conflicting requirements for scavenging: on one side, enough fresh charge needs to be introduced in each cycle to displace almost all the combustion gases, but introducing too much of it means that a part of it gets in the exhaust. 
They must use the transfer port(s) as a carefully designed and placed nozzle so that a gas current is created in a way that it sweeps the whole cylinder before reaching the exhaust port, so as to expel the combustion gases but minimize the amount of charge exhausted. 4-stroke engines have the benefit of forcibly expelling almost all of the combustion gases, because during exhaust the combustion chamber is reduced to its minimum volume. In crankcase-scavenged 2-stroke engines, exhaust and intake are performed mostly simultaneously and with the combustion chamber at its maximum volume. The main advantage of 2-stroke engines of this type is mechanical simplicity and a higher power-to-weight ratio than their 4-stroke counterparts. Despite having twice as many power strokes per cycle, less than twice the power of a comparable 4-stroke engine is attainable in practice. In the US, 2-stroke engines were banned for road vehicles due to pollution. Off-road-only motorcycles are still often 2-stroke but are rarely road legal. However, many thousands of 2-stroke lawn maintenance engines are in use. Blower scavenged Using a separate blower avoids many of the shortcomings of crankcase scavenging, at the expense of increased complexity, which means a higher cost and an increase in maintenance requirements. An engine of this type uses ports or valves for intake and valves for exhaust, except for opposed-piston engines, which may also use ports for exhaust. The blower is usually of the Roots type but other types have been used too. This design is commonplace in CI engines, and has been occasionally used in SI engines. CI engines that use a blower typically use uniflow scavenging. In this design the cylinder wall contains several intake ports placed uniformly spaced along the circumference just above the position that the piston crown reaches when at BDC. One or several exhaust valves, like those of 4-stroke engines, are used. The final part of the intake manifold is an air sleeve that feeds the intake ports. The intake ports are placed at a horizontal angle to the cylinder wall (i.e., they are in the plane of the piston crown) to give a swirl to the incoming charge and improve combustion. The largest reciprocating internal combustion engines are low-speed CI engines of this type; they are used for marine propulsion (see marine diesel engine) or electric power generation and achieve the highest thermal efficiencies among internal combustion engines of any kind. Some diesel–electric locomotive engines operate on the 2-stroke cycle. The most powerful of them have a brake power of around 4.5 MW or 6,000 HP. The EMD SD90MAC class of locomotives is an example; the comparable class GE AC6000CW, whose prime mover has almost the same brake power, uses a 4-stroke engine. An example of this type of engine is the Wärtsilä-Sulzer RTA96-C turbocharged 2-stroke diesel, used in large container ships. It is the most efficient and powerful reciprocating internal combustion engine in the world, with a thermal efficiency over 50%. For comparison, the most efficient small four-stroke engines are around 43% thermally efficient (SAE 900648); size is an advantage for efficiency due to the increase in the ratio of volume to surface area. See the external links for an in-cylinder combustion video in a 2-stroke, optically accessible motorcycle engine. Historical design Dugald Clerk developed the first two-cycle engine in 1879. It used a separate cylinder which functioned as a pump in order to transfer the fuel mixture to the cylinder. 
In 1899 John Day simplified Clerk's design into the type of 2-cycle engine that is very widely used today. Day cycle engines are crankcase scavenged and port timed. The crankcase and the part of the cylinder below the exhaust port are used as a pump. The operation of the Day cycle engine begins when the crankshaft is turned so that the piston moves from BDC upward (toward the head), creating a vacuum in the crankcase/cylinder area. The carburetor then feeds the fuel mixture into the crankcase through a reed valve or a rotary disk valve (driven by the engine). Ducts cast into the engine run from the crankcase to the port in the cylinder to provide for intake, and another runs from the exhaust port to the exhaust pipe. The height of the port in relationship to the length of the cylinder is called the "port timing". On the first upstroke of the engine there would be no fuel inducted into the cylinder, as the crankcase was empty. On the downstroke, the piston now compresses the fuel mix, which has lubricated the piston in the cylinder and the bearings due to the fuel mix having oil added to it. As the piston moves downward it first uncovers the exhaust port, but on the first stroke there is no burnt fuel to exhaust. As the piston moves downward further, it uncovers the intake port, which has a duct that runs to the crankcase. Since the fuel mix in the crankcase is under pressure, the mix moves through the duct and into the cylinder. Because nothing in the cylinder prevents the fuel from moving directly out of the exhaust port before the piston rises far enough to close the port, early engines used a high-domed piston to slow down the flow of fuel. Later, the fuel was "resonated" back into the cylinder using an expansion chamber design. When the piston rose close to TDC, a spark ignited the fuel. As the piston is driven downward with power, it first uncovers the exhaust port, where the burned fuel is expelled under high pressure, and then the intake port, where the process has been completed and will keep repeating. Later engines used a type of porting devised by the Deutz company to improve performance. It was called the Schnürle reverse-flow system. DKW licensed this design for all their motorcycles. Their DKW RT 125 was one of the first motor vehicles to achieve over 100 mpg as a result. Ignition Internal combustion engines require ignition of the mixture, either by spark ignition (SI) or compression ignition (CI). Before the invention of reliable electrical methods, hot tube and flame methods were used. Experimental engines with laser ignition have been built. Spark ignition process The spark-ignition engine was a refinement of the early engines which used hot tube ignition. When Bosch developed the magneto, it became the primary system for producing electricity to energize a spark plug. Many small engines still use magneto ignition. Small engines are started by hand cranking using a recoil starter or hand crank. Before Charles F. Kettering of Delco developed the automotive starter, all gasoline-engined automobiles used a hand crank. Larger engines typically power their starting motors and ignition systems using the electrical energy stored in a lead–acid battery. The battery's charged state is maintained by an automotive alternator or (previously) a generator, which uses engine power to create electrical energy. The battery supplies electrical power for starting when the engine has a starting motor system, and supplies electrical power when the engine is off. 
The battery also supplies electrical power during rare run conditions where the alternator cannot maintain more than 13.8 volts (for a common 12 V automotive electrical system). As alternator voltage falls below 13.8 volts, the lead–acid storage battery increasingly picks up electrical load. During virtually all running conditions, including normal idle conditions, the alternator supplies primary electrical power. Some systems disable alternator field (rotor) power during wide-open-throttle conditions. Disabling the field reduces alternator pulley mechanical loading to nearly zero, maximizing crankshaft power. In this case, the battery supplies all primary electrical power. Gasoline engines take in a mixture of air and gasoline and compress it through the movement of the piston from bottom dead center to top dead center; the fuel is at maximum compression when the piston is at top dead center. This reduction in volume, taking into account the volume of the combustion chamber, is described by the compression ratio. Early engines had compression ratios of 6 to 1. As compression ratios were increased, the efficiency of the engine increased as well. With early induction and ignition systems the compression ratios had to be kept low. With advances in fuel technology and combustion management, high-performance engines can run reliably at a 12:1 ratio. With low-octane fuel, a problem would occur as the compression ratio increased: the fuel would ignite due to the resulting rise in temperature. Charles Kettering developed a lead additive which allowed higher compression ratios; it was progressively abandoned for automotive use from the 1970s onward, partly due to lead poisoning concerns. The fuel mixture is ignited at different points in the piston's travel within the cylinder. At low rpm, the spark is timed to occur close to the piston achieving top dead center. In order to produce more power, as rpm rises the spark is advanced to occur sooner during the piston's movement, while the fuel is still being compressed. The necessary high voltage, typically 10,000 volts, is supplied by an induction coil or transformer. The induction coil is a fly-back system, using interruption of electrical primary system current through some type of synchronized interrupter. The interrupter can be either contact points or a power transistor. The problem with this type of ignition is that as RPM increases, the availability of electrical energy decreases. This is especially a problem, since the amount of energy needed to ignite a denser fuel mixture is higher. The result was often a high-RPM misfire. Capacitor discharge ignition was developed to address this; it produces a rising voltage that is sent to the spark plug. CD system voltages can reach 60,000 volts. CD ignitions use step-up transformers; the step-up transformer uses energy stored in a capacitance to generate the electric spark. With either system, a mechanical or electrical control system delivers a carefully timed high-voltage pulse to the proper cylinder. This spark, via the spark plug, ignites the air-fuel mixture in the engine's cylinders. While gasoline internal combustion engines are much easier to start in cold weather than diesel engines, they can still have cold-weather starting problems under extreme conditions. For years, the solution was to park the car in heated areas. In some parts of the world, the oil was actually drained and heated overnight and returned to the engine for cold starts. 
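A hedged, back-of-the-envelope illustration of the compression-ratio discussion above: treating the charge as an ideal gas compressed adiabatically, the temperature at top dead center scales as T2 = T1 · r^(γ−1). The numbers below are illustrative assumptions; real charge temperatures are lower because of heat loss to the cylinder walls and fuel vaporization.

```python
# Idealized adiabatic estimate of charge temperature after compression,
# showing why higher compression ratios push low-octane fuel toward
# auto-ignition. Assumes air as an ideal gas with no heat loss.
gamma = 1.4   # heat-capacity ratio of air (assumed)
t1 = 300.0    # assumed intake charge temperature, kelvin

for r in (6.0, 12.0):              # the compression ratios cited above
    t2 = t1 * r ** (gamma - 1)     # T2 = T1 * r^(gamma - 1)
    print(f"r = {r:.0f}:1 -> roughly {t2:.0f} K at TDC")
# r = 6:1 gives ~614 K; r = 12:1 gives ~811 K, much closer to the
# temperatures at which a gasoline charge can self-ignite.
```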
In the early 1950s, the gasoline Gasifier unit was developed, where, on cold-weather starts, raw gasoline was diverted to the unit, where part of the fuel was burned, causing the other part to become a hot vapor sent directly to the intake valve manifold. This unit was quite popular until electric engine block heaters became standard on gasoline engines sold in cold climates. Compression ignition process For ignition, diesel, PPC and HCCI engines rely solely on the high temperature and pressure created by the engine in its compression process. The compression level that occurs is usually twice that of a gasoline engine or more. Diesel engines take in air only, and shortly before peak compression, spray a small quantity of diesel fuel into the cylinder via a fuel injector, which allows the fuel to ignite instantly. HCCI-type engines take in both air and fuel, but continue to rely on an unaided auto-combustion process, due to their higher pressures and temperatures. This is also why diesel and HCCI engines are more susceptible to cold-starting issues, although they run just as well in cold weather once started. Light-duty diesel engines with indirect injection in automobiles and light trucks employ glowplugs (or other pre-heating: see Cummins ISB#6BT) that pre-heat the combustion chamber just before starting to reduce no-start conditions in cold weather. Most diesels also have a battery and charging system; nevertheless, this system is secondary and is added by manufacturers as a luxury for the ease of starting, turning fuel on and off (which can also be done via a switch or mechanical apparatus), and for running auxiliary electrical components and accessories. Most new engines rely on electrical and electronic engine control units (ECU) that also adjust the combustion process to increase efficiency and reduce emissions. Lubrication Surfaces in contact and in relative motion to other surfaces require lubrication to reduce wear and noise, to increase efficiency by reducing the power wasted in overcoming friction, or to make the mechanism work at all. The lubricant can also reduce excess heat and provide additional cooling to components. At the very least, an engine requires lubrication between the pistons and cylinders and in the small-end bearings, big end bearings, main bearings and valve gear; the valve gear elements, not all of which may be present, include the tappets, rocker arms, pushrods and timing chain or gears (toothed belts do not require lubrication). In 2-stroke crankcase-scavenged engines, the interior of the crankcase, and therefore the crankshaft, connecting rods and bottoms of the pistons, are sprayed by the two-stroke oil in the air-fuel-oil mixture, which is then burned along with the fuel. The valve train may be contained in a compartment flooded with lubricant so that no oil pump is required. In a splash lubrication system no oil pump is used. Instead, the crankshaft dips into the oil in the sump and, due to its high speed, splashes oil onto the crankshaft, connecting rods and bottoms of the pistons. The connecting rod big-end caps may have an attached scoop to enhance this effect. The valve train may also be sealed in a flooded compartment, or open to the crankshaft in a way that it receives splashed oil and allows it to drain back to the sump. Splash lubrication is common for small 4-stroke engines. In a forced (also called pressurized) lubrication system, lubrication is accomplished in a closed loop which carries motor oil to the surfaces serviced by the system and then returns the oil to a reservoir. 
The auxiliary equipment of an engine is typically not serviced by this loop; for instance, an alternator may use ball bearings sealed with their own lubricant. The reservoir for the oil is usually the sump, and when this is the case, it is called a wet sump system. When there is a separate oil reservoir the crankcase still catches the oil, but it is continuously drained by a dedicated pump; this is called a dry sump system. On its bottom, the sump contains an oil intake covered by a mesh filter, which is connected to an oil pump and then to an oil filter outside the crankcase. From there the oil is diverted to the crankshaft main bearings and valve train. The crankcase contains at least one oil gallery (a conduit inside a crankcase wall) to which oil is introduced from the oil filter. The main bearings contain a groove through all or half of their circumference; the oil enters these grooves from channels connected to the oil gallery. The crankshaft has drillings that take oil from these grooves and deliver it to the big end bearings. All big end bearings are lubricated this way. A single main bearing may provide oil for 0, 1 or 2 big end bearings. A similar system may be used to lubricate the piston, its gudgeon pin and the small end of its connecting rod; in this system, the connecting rod big end has a groove around the crankshaft and a drilling connected to the groove which distributes oil from there to the bottom of the piston and from there to the cylinder. Other systems are also used to lubricate the cylinder and piston. The connecting rod may have a nozzle to throw an oil jet at the cylinder and the bottom of the piston. That nozzle moves relative to the cylinder it lubricates, but always points towards it or the corresponding piston. Typically, forced lubrication systems have a lubricant flow higher than what is required to lubricate satisfactorily, in order to assist with cooling. Specifically, the lubricant system helps to move heat from the hot engine parts to the cooling liquid (in water-cooled engines) or fins (in air-cooled engines), which then transfer it to the environment. The lubricant must be designed to be chemically stable and maintain suitable viscosities within the temperature range it encounters in the engine. Cylinder configuration Common cylinder configurations include the straight or inline configuration, the more compact V configuration, and the wider but smoother flat or boxer configuration. Aircraft engines can also adopt a radial configuration, which allows more effective cooling. More unusual configurations such as the H, U, X, and W have also been used. Multiple-cylinder engines have their valve train and crankshaft configured so that pistons are at different parts of their cycle. It is desirable to have the pistons' cycles uniformly spaced (this is called even firing), especially in forced-induction engines; this reduces torque pulsations and makes inline engines with more than 3 cylinders statically balanced in their primary forces. However, some engine configurations require odd firing to achieve better balance than what is possible with even firing. For instance, a 4-stroke I2 engine has better balance when the angle between the crankpins is 180°, because the pistons move in opposite directions and inertial forces partially cancel, but this gives an odd firing pattern where one cylinder fires 180° of crankshaft rotation after the other, then no cylinder fires for 540°. With an even firing pattern, the pistons would move in unison and the associated forces would add. 
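A small illustrative sketch of the even-firing arithmetic above (an assumption-level model, not a balance analysis): in a four-stroke engine the full cycle spans 720° of crankshaft rotation, so evenly spaced firing means one power stroke every 720°/n for n cylinders.

```python
# Even-firing intervals: a 4-stroke cycle spans 720 degrees of crankshaft
# rotation, a 2-stroke cycle 360 degrees, and even firing divides that
# span equally among the cylinders.
def firing_interval_deg(cylinders: int, strokes: int = 4) -> float:
    cycle_deg = 360 * strokes / 2   # 720 for 4-stroke, 360 for 2-stroke
    return cycle_deg / cylinders

for n in (2, 3, 4, 6, 8):
    print(f"{n} cylinders: one power stroke every {firing_interval_deg(n):.0f} deg")
# The odd-firing 180-degree I2 described above instead fires at 180 deg,
# then waits 540 deg before the next power stroke.
```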
Multiple-crankshaft configurations do not necessarily need a cylinder head at all, because they can instead have a piston at each end of the cylinder, called an opposed-piston design. Because fuel inlets and outlets are positioned at opposed ends of the cylinder, one can achieve uniflow scavenging, which, as in the four-stroke engine, is efficient over a wide range of engine speeds. Thermal efficiency is improved because of the lack of cylinder heads. This design was used in the Junkers Jumo 205 diesel aircraft engine, using two crankshafts at either end of a single bank of cylinders, and most remarkably in the Napier Deltic diesel engines. These used three crankshafts to serve three banks of double-ended cylinders arranged in an equilateral triangle with the crankshafts at the corners. It was also used in single-bank locomotive engines, and is still used in marine propulsion engines and marine auxiliary generators. Diesel cycle Most truck and automotive diesel engines use a cycle reminiscent of a four-stroke cycle, but with the temperature increase caused by compression producing ignition, rather than a separate ignition system. This variation is called the diesel cycle. In the diesel cycle, diesel fuel is injected directly into the cylinder so that combustion occurs at constant pressure as the piston moves. Otto cycle The Otto cycle is the most common cycle for most cars' internal combustion engines that use gasoline as a fuel. It consists of the same major steps as described for the four-stroke engine: intake, compression, ignition, expansion and exhaust. Five-stroke engine In 1879, Nicolaus Otto manufactured and sold a double-expansion engine (the double- and triple-expansion principles had ample usage in steam engines), with two small cylinders at both sides of a low-pressure larger cylinder, where a second expansion of exhaust-stroke gas took place; the owner returned it, alleging poor performance. In 1906, the concept was incorporated in a car built by EHV (Eisenhuth Horseless Vehicle Company), and in the 21st century Ilmor designed and successfully tested a 5-stroke double-expansion internal combustion engine, with high power output and low SFC (specific fuel consumption). Six-stroke engine The six-stroke engine was invented in 1883. Four kinds of six-stroke engines use a regular piston in a regular cylinder (Griffin six-stroke, Bajulaz six-stroke, Velozeta six-stroke and Crower six-stroke), firing every three crankshaft revolutions. These systems capture the waste heat of the four-stroke Otto cycle with an injection of air or water. The Beare Head and "piston charger" engines operate as opposed-piston engines, with two pistons in a single cylinder, firing every two revolutions rather than every four like a four-stroke engine. Other cycles The first internal combustion engines did not compress the mixture. The first part of the piston downstroke drew in a fuel-air mixture, then the inlet valve closed and, in the remainder of the downstroke, the fuel-air mixture fired. The exhaust valve opened for the piston upstroke. These attempts at imitating the principle of a steam engine were very inefficient. There are a number of variations of these cycles, most notably the Atkinson and Miller cycles. Split-cycle engines separate the four strokes of intake, compression, combustion and exhaust into two separate but paired cylinders. The first cylinder is used for intake and compression. 
The compressed air is then transferred through a crossover passage from the compression cylinder into the second cylinder, where combustion and exhaust occur. A split-cycle engine is really an air compressor on one side with a combustion chamber on the other. Previous split-cycle engines have had two major problems: poor breathing (volumetric efficiency) and low thermal efficiency. However, new designs are being introduced that seek to address these problems. The Scuderi Engine addresses the breathing problem by reducing the clearance between the piston and the cylinder head through various turbocharging techniques. The Scuderi design requires the use of outwardly opening valves that enable the piston to move very close to the cylinder head without the interference of the valves. Scuderi addresses the low thermal efficiency via firing after top dead center (ATDC). Firing ATDC can be accomplished by using high-pressure air in the transfer passage to create sonic flow and high turbulence in the power cylinder. Combustion turbines Jet engine Jet engines use a number of rows of fan blades to compress air, which then enters a combustor where it is mixed with fuel (typically JP fuel) and ignited. The burning of the fuel raises the temperature of the air, which is then exhausted out of the engine, creating thrust. A modern turbofan engine can operate at as high as 48% efficiency. There are six sections to a turbofan engine: the fan, compressor, combustor, turbine, mixer and nozzle. Gas turbines A gas turbine compresses air and uses it to turn a turbine. It is essentially a jet engine which directs its output to a shaft. There are three stages to a turbine: 1) air is drawn through a compressor where the temperature rises due to compression, 2) fuel is added in the combustor, and 3) hot air is exhausted through turbine blades which rotate a shaft connected to the compressor. A gas turbine is a rotary machine similar in principle to a steam turbine, and it consists of three main components: a compressor, a combustion chamber, and a turbine. The temperature of the air, after being compressed in the compressor, is increased by burning fuel in it. The heated air and the products of combustion expand in a turbine, producing work output. Part of that work drives the compressor; the rest is available as useful work output. Gas turbines are among the most efficient internal combustion engines. The General Electric 7HA and 9HA turbine combined-cycle electrical plants are rated at over 61% efficiency. Brayton cycle A gas turbine is a rotary machine somewhat similar in principle to a steam turbine. It consists of three main components: compressor, combustion chamber, and turbine. The air is compressed by the compressor, where a temperature rise occurs. The temperature of the compressed air is further increased by combustion of injected fuel in the combustion chamber, which expands the air. This energy rotates the turbine, which powers the compressor via a mechanical coupling. The hot gases are then exhausted to provide thrust. Gas turbine cycle engines employ a continuous combustion system where compression, combustion, and expansion occur simultaneously at different places in the engine, giving continuous power. Notably, the combustion takes place at constant pressure, rather than at constant volume as in the Otto cycle. Wankel engines The Wankel engine (rotary engine) does not have piston strokes. 
It operates with the same separation of phases as the four-stroke engine, with the phases taking place in separate locations in the engine. In thermodynamic terms it follows the Otto engine cycle, so it may be thought of as a "four-phase" engine. While it is true that three power strokes typically occur per rotor revolution, due to the 3:1 revolution ratio of the rotor to the eccentric shaft, only one power stroke per shaft revolution actually occurs. The drive (eccentric) shaft rotates once during every power stroke, instead of twice as a crankshaft does in the Otto cycle, giving it a greater power-to-weight ratio than piston engines. This type of engine was most notably used in the Mazda RX-8, the earlier RX-7, and other vehicle models. The engine is also used in unmanned aerial vehicles, where the small size and weight and the high power-to-weight ratio are advantageous. Forced induction Forced induction is the process of delivering compressed air to the intake of an internal combustion engine. A forced-induction engine uses a gas compressor to increase the pressure, temperature and density of the air. An engine without forced induction is considered a naturally aspirated engine. Forced induction is used in the automotive and aviation industries to increase engine power and efficiency. It particularly helps aviation engines, as they need to operate at high altitude. Forced induction is achieved by a supercharger, where the compressor is directly powered from the engine shaft, or, in the turbocharger, from a turbine powered by the engine exhaust. Fuels and oxidizers All internal combustion engines depend on combustion of a chemical fuel, typically with oxygen from the air (though it is possible to inject nitrous oxide to do more of the same thing and gain a power boost). The combustion process typically results in the production of a great quantity of thermal energy, as well as the production of steam and carbon dioxide and other chemicals at very high temperature; the temperature reached is determined by the chemical makeup of the fuel and oxidizers (see stoichiometry), as well as by the compression and other factors. Fuels The most common modern fuels are made up of hydrocarbons and are derived mostly from fossil fuels (petroleum). Fossil fuels include diesel fuel, gasoline and petroleum gas, and the rarer use of propane. Except for the fuel delivery components, most internal combustion engines that are designed for gasoline use can run on natural gas or liquefied petroleum gases without major modifications. Large diesels can run with air mixed with gases and a pilot diesel fuel ignition injection. Liquid and gaseous biofuels, such as ethanol and biodiesel (a form of diesel fuel that is produced from crops that yield triglycerides such as soybean oil), can also be used. Engines with appropriate modifications can also run on hydrogen gas, wood gas, or charcoal gas, as well as on so-called producer gas made from other convenient biomass. Experiments have also been conducted using powdered solid fuels, such as the magnesium injection cycle. Presently, fuels used include petroleum fuels: petroleum spirit (North American term: gasoline, British term: petrol), diesel fuel, autogas (liquefied petroleum gas), propane, compressed natural gas, jet fuel (aviation fuel) and residual fuel; coal-derived fuels, since both gasoline and diesel fuel can be made from carbon (coal) using the Fischer–Tropsch process; and biofuels and vegetable oils, such as peanut oil and other vegetable oils. 
Other fuels include woodgas, from an onboard wood gasifier using solid wood as a fuel; biofuels such as biobutanol (which replaces gasoline), biodiesel (which replaces petrodiesel), dimethyl ether (which replaces petrodiesel), bioethanol and biomethanol (wood alcohol) and other biofuels (see flexible-fuel vehicle), and biogas; and hydrogen (mainly in spacecraft rocket engines). Even fluidized metal powders and explosives have seen some use. Engines that use gases for fuel are called gas engines and those that use liquid hydrocarbons are called oil engines; however, gasoline engines are also often colloquially referred to as "gas engines" ("petrol engines" outside North America). The main limitations on a fuel are that it must be easily transportable through the fuel system to the combustion chamber, and that it must release sufficient energy in the form of heat upon combustion to make practical use of the engine. Diesel engines are generally heavier, noisier, and more powerful at lower speeds than gasoline engines. They are also more fuel-efficient in most circumstances and are used in heavy road vehicles, some automobiles (increasingly so for their increased fuel efficiency over gasoline engines), ships, railway locomotives, and light aircraft. Gasoline engines are used in most other road vehicles, including most cars, motorcycles, and mopeds. In Europe, sophisticated diesel-engined cars have taken over about 45% of the market since the 1990s. There are also engines that run on hydrogen, methanol, ethanol, liquefied petroleum gas (LPG), biodiesel, paraffin and tractor vaporizing oil (TVO). Hydrogen Hydrogen could eventually replace conventional fossil fuels in traditional internal combustion engines. Alternatively, fuel cell technology may come to deliver on its promise, and the use of internal combustion engines could even be phased out. Although there are multiple ways of producing free hydrogen, those methods require converting combustible molecules into hydrogen or consuming electric energy. Unless that electricity is produced from a renewable source (and is not required for other purposes), hydrogen does not solve any energy crisis. In many situations the disadvantage of hydrogen, relative to carbon fuels, is its storage. Liquid hydrogen has extremely low density (14 times lower than water) and requires extensive insulation, whilst gaseous hydrogen requires heavy tankage. Even when liquefied, hydrogen has a higher specific energy than gasoline, but its volumetric energy storage is still roughly five times lower. However, the energy density of hydrogen is considerably higher than that of electric batteries, making it a serious contender as an energy carrier to replace fossil fuels. The "Hydrogen on Demand" process (see direct borohydride fuel cell) creates hydrogen as needed, but has other issues, such as the high price of the sodium borohydride that is the raw material. Oxidizers Since air is plentiful at the surface of the earth, the oxidizer is typically atmospheric oxygen, which has the advantage of not being stored within the vehicle. This increases the power-to-weight and power-to-volume ratios. Other materials are used for special purposes, often to increase power output or to allow operation under water or in space. Compressed air has been commonly used in torpedoes. Compressed oxygen, as well as some compressed air, was used in the Japanese Type 93 torpedo. Some submarines carry pure oxygen. Rockets very often use liquid oxygen. Nitromethane is added to some racing and model fuels to increase power and control combustion. 
Nitrous oxide has been used (with extra gasoline) in tactical aircraft, and in specially equipped cars, to allow short bursts of added power from engines that otherwise run on gasoline and air. It is also used in the Burt Rutan rocket spacecraft. Hydrogen peroxide power was under development for German World War II submarines. It may have been used in some non-nuclear submarines, and was used on some rocket engines (notably the Black Arrow and the Messerschmitt Me 163 rocket fighter). Other chemicals such as chlorine or fluorine have been used experimentally, but have not been found practical. Cooling Cooling is required to remove excessive heat, since high temperature can cause engine failure, usually from wear (due to high-temperature-induced failure of lubrication), cracking or warping. The two most common forms of engine cooling are air cooling and water cooling. Most modern automotive engines are both water- and air-cooled, as the water/liquid coolant is carried to air-cooled fins and/or fans, whereas larger engines may be solely water-cooled, as they are stationary and have a constant supply of water through water mains or fresh water, while most power tool engines and other small engines are air-cooled. Some engines (air- or water-cooled) also have an oil cooler. In some engines, especially for turbine engine blade cooling and liquid rocket engine cooling, fuel is used as a coolant, as it is simultaneously preheated before being injected into the combustion chamber. Starting Internal combustion engines must have their cycles started. In reciprocating engines this is accomplished by turning the crankshaft (or, in a Wankel engine, the rotor's eccentric shaft), which induces the cycles of intake, compression, combustion, and exhaust. The first engines were started with a turn of their flywheels, while the first vehicle (the Daimler Reitwagen) was started with a hand crank. All ICE-engined automobiles were started with hand cranks until Charles Kettering developed the electric starter for automobiles. This method is now the most widely used, even among non-automobiles. As diesel engines have become larger and their mechanisms heavier, air starters have come into use. This is due to the lack of torque in electric starters. Air starters work by pumping compressed air into the cylinders of an engine to start it turning. Two-wheeled vehicles may have their engines started in one of four ways: by pedaling, as on a bicycle; by pushing the vehicle and then engaging the clutch, known as "run-and-bump starting"; by kicking downward on a single pedal, known as "kick starting"; or by an electric starter, as in cars. There are also starters in which a spring is compressed by a crank motion and then used to start an engine. Some small engines use a pull-rope mechanism called "recoil starting", as the rope rewinds itself after it has been pulled out to start the engine. This method is commonly used in push lawn mowers and other settings where only a small amount of torque is needed to turn an engine over. Turbine engines are frequently started by an electric motor or by compressed air. 
Measures of engine performance
Engine types vary greatly in a number of different ways:
energy efficiency
fuel/propellant consumption (brake specific fuel consumption for shaft engines, thrust specific fuel consumption for jet engines)
power-to-weight ratio
thrust-to-weight ratio
torque curves (for shaft engines), thrust lapse (jet engines)
compression ratio for piston engines, overall pressure ratio for jet engines and gas turbines
Energy efficiency
Once ignited and burnt, the combustion products—hot gases—have more available thermal energy than the original compressed fuel-air mixture (which had higher chemical energy). This available energy is manifested as a higher temperature and pressure that can be converted into kinetic energy by the engine. In a reciprocating engine, the high-pressure gases inside the cylinders drive the engine's pistons. Once the available energy has been removed, the remaining hot gases are vented (often by opening a valve or exposing the exhaust outlet) and this allows the piston to return to its previous position (top dead center, or TDC). The piston can then proceed to the next phase of its cycle, which varies between engines. Any thermal energy that is not translated into work is normally considered a waste product and is removed from the engine by either an air or a liquid cooling system. Internal combustion engines are considered heat engines (since the release of chemical energy in combustion has the same effect as heat transfer into the engine) and as such their theoretical efficiency can be approximated by idealized thermodynamic cycles. The thermal efficiency of a theoretical cycle cannot exceed that of the Carnot cycle, whose efficiency is determined by the difference between the lower and upper operating temperatures of the engine (η = 1 − T_lower/T_upper, with both temperatures in kelvin). The upper operating temperature of an engine is limited by two main factors: the thermal operating limits of the materials, and the auto-ignition resistance of the fuel. All metals and alloys have a thermal operating limit, and there is significant research into ceramic materials that can be made with greater thermal stability and desirable structural properties. Higher thermal stability allows for a greater temperature difference between the lower (ambient) and upper operating temperatures, hence greater thermodynamic efficiency. Also, as the cylinder temperature rises, the fuel becomes more prone to auto-ignition. This occurs when the cylinder temperature nears the flash point of the charge, at which point ignition can occur spontaneously before the spark plug fires, causing excessive cylinder pressures. Auto-ignition can be mitigated by using fuels with high auto-ignition resistance (octane rating); however, it still puts an upper bound on the allowable peak cylinder temperature. The thermodynamic limits assume that the engine is operating under ideal conditions: a frictionless world, ideal gases, perfect insulators, and operation for infinite time. Real-world applications introduce complexities that reduce efficiency. For example, a real engine runs best at a specific load, termed its power band. The engine in a car cruising on a highway is usually operating significantly below its ideal load, because it is designed for the higher loads required for rapid acceleration. In addition, factors such as wind resistance reduce overall system efficiency. Vehicle fuel economy is measured in miles per gallon or in litres per 100 kilometres. The volume of hydrocarbon assumes a standard energy content.
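Since miles per gallon and litres per 100 kilometres are reciprocal measures of the same quantity, converting between them is a single division. A minimal sketch, assuming US gallons (the constant differs for imperial gallons; the function name and sample value are invented for illustration):

MILE_IN_KM = 1.609344          # kilometres per mile
US_GALLON_IN_L = 3.785411784   # litres per US gallon

def mpg_to_l_per_100km(mpg):
    # Distance-per-volume and volume-per-distance are reciprocals,
    # so the conversion is a constant (about 235.2) divided by mpg.
    return 100.0 * US_GALLON_IN_L / (mpg * MILE_IN_KM)

print(round(mpg_to_l_per_100km(30.0), 1))  # 30 mpg is about 7.8 L/100 km

Because the measures are reciprocal, a car twice as efficient shows double the mpg figure but half the L/100 km figure.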
Even when aided with turbochargers and stock efficiency aids, most engines retain an average efficiency of about 18–20%. However, the latest technologies in Formula One engines have pushed thermal efficiency past 50%. There are many inventions aimed at increasing the efficiency of IC engines. In general, practical engines are always compromised by trade-offs between different properties such as efficiency, weight, power, heat, response, exhaust emissions, or noise. Sometimes economy also plays a role, not only in the cost of manufacturing the engine itself, but also in manufacturing and distributing the fuel. Increasing the engine's efficiency brings better fuel economy, but only if the fuel cost per energy content is the same.
Measures of fuel efficiency and propellant efficiency
For stationary and shaft engines, including propeller engines, fuel consumption is measured by calculating the brake specific fuel consumption, which is the mass flow rate of fuel consumed divided by the power produced. For internal combustion engines in the form of jet engines, the power output varies drastically with airspeed, and a less variable measure is used: thrust specific fuel consumption (TSFC), the mass of propellant needed to generate a unit of impulse, measured either in pounds per pound-force-hour or in grams per kilonewton-second. For rockets, TSFC can be used, but typically other equivalent measures are traditionally used, such as specific impulse and effective exhaust velocity.
Air and noise pollution
Air pollution
Internal combustion engines such as reciprocating internal combustion engines produce air pollution emissions, due to incomplete combustion of carbonaceous fuel. The main products of the process are carbon dioxide (CO2), water and some soot—also called particulate matter (PM). The effects of inhaling particulate matter have been studied in humans and animals and include asthma, lung cancer, cardiovascular issues, and premature death. There are, however, some additional products of the combustion process, including nitrogen and sulfur oxides and some uncombusted hydrocarbons, depending on the operating conditions and the fuel-air ratio. Carbon dioxide emissions from internal combustion engines (particularly ones using fossil fuels such as gasoline and diesel) contribute to human-induced climate change. Increasing the engine's fuel efficiency can reduce, but not eliminate, the amount of emissions, as carbon-based fuel combustion inevitably produces CO2. Since removing CO2 from engine exhaust is impractical, there is increasing interest in alternatives; examples include sustainable fuels such as biofuels and synfuels, and electric motors powered by batteries. Not all of the fuel is completely consumed by the combustion process. A small amount of fuel is present after combustion, and some of it reacts to form oxygenates, such as formaldehyde or acetaldehyde, or hydrocarbons not originally present in the input fuel mixture. Incomplete combustion usually results from insufficient oxygen to achieve the perfect stoichiometric ratio. The flame is "quenched" by the relatively cool cylinder walls, leaving behind unreacted fuel that is expelled with the exhaust. Quenching is commonly observed in diesel (compression-ignition) engines running on natural gas at lower speeds. Quenching reduces efficiency and increases knocking, sometimes causing the engine to stall. Incomplete combustion also leads to the production of carbon monoxide (CO).
Further chemicals released are benzene and 1,3-butadiene, which are also hazardous air pollutants. Increasing the amount of air in the engine reduces emissions of incomplete combustion products, but also promotes reaction between oxygen and nitrogen in the air to produce nitrogen oxides (NOx). NOx is hazardous to both plant and animal health, and leads to the production of ozone (O3). Ozone is not emitted directly; rather, it is a secondary air pollutant, produced in the atmosphere by the reaction of NOx and volatile organic compounds in the presence of sunlight. Ground-level ozone is harmful to human health and the environment. Though the same chemical substance, ground-level ozone should not be confused with stratospheric ozone, or the ozone layer, which protects the earth from harmful ultraviolet rays. Carbon fuels containing sulfur produce sulfur monoxide (SO) and sulfur dioxide (SO2), contributing to acid rain. In the United States, nitrogen oxides, PM, carbon monoxide, sulfur dioxide, and ozone are regulated as criteria air pollutants under the Clean Air Act to levels where human health and welfare are protected. Other pollutants, such as benzene and 1,3-butadiene, are regulated as hazardous air pollutants whose emissions must be lowered as much as possible depending on technological and practical considerations. NOx, carbon monoxide and other pollutants are frequently controlled via exhaust gas recirculation, which returns some of the exhaust back into the engine intake. Catalytic converters are used to convert exhaust chemicals to CO2 (a greenhouse gas), H2O (water vapour, also a greenhouse gas) and N2 (nitrogen).
Non-road engines
The emission standards used by many countries have special requirements for non-road engines, which are used by equipment and vehicles that are not operated on the public roadways. The standards are separate from those for road vehicles.
Noise pollution
Significant contributions to noise pollution are made by internal combustion engines. Automobile and truck traffic operating on highways and street systems produce noise, as do aircraft flights due to jet noise, particularly supersonic-capable aircraft. Rocket engines create the most intense noise.
Idling
Internal combustion engines continue to consume fuel and emit pollutants while idling. Idling is reduced by stop-start systems.
Carbon dioxide formation
The mass of carbon dioxide released when one litre of diesel fuel (or gasoline) is combusted can be estimated as follows. As a good approximation, the chemical formula of diesel is CnH2n; in reality, diesel is a mixture of different molecules. As carbon has a molar mass of 12 g/mol and hydrogen (atomic) has a molar mass of about 1 g/mol, the fraction by weight of carbon in diesel is roughly 12/14. The reaction of diesel combustion is given by:
2 CnH2n + 3n O2 → 2n CO2 + 2n H2O
Carbon dioxide has a molar mass of 44 g/mol, as it consists of two atoms of oxygen (16 g/mol) and one atom of carbon (12 g/mol), so 12 g of carbon yields 44 g of carbon dioxide. Diesel has a density of 0.838 kg per litre. Putting everything together, the mass of carbon dioxide produced by burning one litre of diesel can be calculated as:
0.838 kg/L × (12/14) × (44/12) ≈ 2.6 kg of CO2 per litre of diesel
The figure obtained with this estimation is close to the values found in the literature.
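The estimate above reduces to three numbers multiplied together. This minimal sketch repeats the calculation in code, reusing the CnH2n approximation; the gasoline line anticipates the density and carbon-to-hydrogen ratio quoted in the next paragraph, and the function name is invented for the example:

# Estimate kilograms of CO2 released per litre of fuel burned.
def co2_kg_per_litre(density_kg_per_l, carbon_mass_fraction):
    # Every 12 g of carbon burned yields 44 g of CO2.
    return density_kg_per_l * carbon_mass_fraction * 44.0 / 12.0

print(round(co2_kg_per_litre(0.838, 12.0 / 14.0), 2))  # diesel (CnH2n): ~2.63
print(round(co2_kg_per_litre(0.750, 72.0 / 86.0), 2))  # gasoline (~C6H14): ~2.30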
For gasoline, with a density of 0.75 kg/L and a ratio of carbon to hydrogen atoms of about 6 to 14, the estimated carbon dioxide emission from burning 1 litre of gasoline is: 0.75 kg/L × (72/86) × (44/12) ≈ 2.3 kg of CO2 per litre of gasoline.
Parasitic loss
The term parasitic loss is often applied to devices that take energy from the engine in order to enhance the engine's ability to create more energy or convert energy to motion. In the internal combustion engine, almost every mechanical component, including the drivetrain, causes parasitic loss and could thus be characterized as a parasitic load.
Examples
Bearings, oil pumps, piston rings, valve springs, flywheels, transmissions, driveshafts, and differentials all act as parasitic loads that rob the system of power. These parasitic loads can be divided into two categories: those inherent to the working of the engine, and the drivetrain losses incurred in the systems that transfer power from the engine to the road (such as the transmission, driveshaft, differentials and axles). For example, the former category (engine parasitic loads) includes the oil pump used to lubricate the engine, which is a necessary parasite that consumes power from the engine (its host). Another example of an engine parasitic load is a supercharger, which derives its power from the engine and creates more power for the engine. The power that the supercharger consumes is parasitic loss, and is usually expressed in kilowatts or horsepower. While the power the supercharger consumes is small in comparison to what it helps the engine generate, it is still measurable or calculable. One of the desirable features of a turbocharger over a supercharger is the lower parasitic loss of the former. Drivetrain parasitic losses include both steady-state and dynamic loads. Steady-state loads occur at constant speeds and may originate in discrete components such as the torque converter, the transmission oil pump, and/or clutch drag, and in seal/bearing drag, churning of lubricant and gear windage/friction found throughout the system. Dynamic loads occur under acceleration and are caused by the inertia of rotating components and/or increased friction.
Measurement
While rules of thumb such as a 15% power loss from drivetrain parasitic loads have been commonly repeated, the actual loss of energy due to parasitic loads varies between systems. It can be influenced by powertrain design, lubricant type and temperature, and many other factors. In automobiles, drivetrain loss can be quantified by measuring the difference between the power measured by an engine dynamometer and by a chassis dynamometer. However, this method is primarily useful for measuring steady-state loads and may not accurately reflect losses due to dynamic loads. More advanced methods can be used in a laboratory setting, such as in-cylinder pressure measurements, flow rate and temperature measurements at certain points, and testing of individual parts or sub-assemblies to determine friction and pumping losses. For example, in a dynamometer test by Hot Rod magazine, a Ford Mustang equipped with a modified 357ci small-block Ford V8 engine and an automatic transmission had a measured drivetrain power loss averaging 33%. In the same test, a Buick equipped with a modified 455ci V8 engine and a 4-speed manual transmission was measured to have an average drivetrain power loss of 21%. Laboratory testing of a heavy-duty diesel engine determined that 1.3% of the fuel energy input was lost to parasitic loads of engine accessories such as water and oil pumps.
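The engine-versus-chassis dynamometer comparison just described amounts to one subtraction and one division. A minimal sketch, with invented numbers purely for illustration:

def drivetrain_loss_percent(engine_dyno_kw, chassis_dyno_kw):
    # Steady-state drivetrain loss: power lost between the crank
    # (engine dynamometer) and the wheels (chassis dynamometer).
    return (engine_dyno_kw - chassis_dyno_kw) / engine_dyno_kw * 100.0

# Illustrative values only: 300 kW at the crank, 240 kW at the wheels
print(round(drivetrain_loss_percent(300.0, 240.0), 1))  # 20.0 (%)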
Reduction
Automotive engineers and tuners commonly make design choices that reduce parasitic loads in order to improve efficiency and power output. These may involve the choice of major engine components or systems, such as using a dry-sump lubrication system instead of a wet-sump system. Alternatively, parasitic loads can be reduced through substitution of minor components available as aftermarket modifications, such as exchanging a directly engine-driven fan for one equipped with a fan clutch, or for an electric fan. Another modification to reduce parasitic loss, usually seen in track-only cars, is the replacement of an engine-driven water pump with an electric water pump. The reduction in parasitic loss from these changes may be due to reduced friction or to many other variables that make the design more efficient.
Technology
Tools and machinery
null
48934192
https://en.wikipedia.org/wiki/Angle%20of%20incidence%20%28optics%29
Angle of incidence (optics)
The angle of incidence, in geometric optics, is the angle between a ray incident on a surface and the line perpendicular (at a 90° angle) to the surface at the point of incidence, called the normal. The ray can be formed by any type of wave: optical, acoustic, microwave, X-ray and so on. In the figure below, the line representing a ray makes an angle θ with the normal (dotted line). The angle of incidence at which light is first totally internally reflected is known as the critical angle. The angle of reflection and the angle of refraction are other angles related to beams, measured from the same normal. In computer graphics and geography, the angle of incidence is also known as the illumination angle of a surface with a light source, such as the Earth's surface and the Sun. It can also be equivalently described as the angle between the tangent plane of the surface and another plane at right angles to the light rays. This means that the illumination angle of a certain point on Earth's surface is 0° if the Sun is precisely overhead, and 90° at sunset or sunrise. Determining the angle of reflection with respect to a planar surface is trivial, but the computation for almost any other surface is significantly more difficult.
Grazing angle or glancing angle
When dealing with a beam that is nearly parallel to a surface, it is sometimes more useful to refer to the angle between the beam and the surface tangent, rather than that between the beam and the surface normal. The 90-degree complement to the angle of incidence is called the grazing angle or glancing angle. Incidence at small grazing angles is called "grazing incidence". Grazing incidence diffraction is used in X-ray spectroscopy and atom optics, where significant reflection can be achieved only at small values of the grazing angle. Ridged mirrors are designed to reflect atoms coming at a small grazing angle; this angle is usually measured in milliradians. In optics, Lloyd's mirror is a classic arrangement that makes use of light reflected at grazing incidence.
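In computational settings, the angle of incidence is usually recovered from the dot product of the ray direction and the surface normal. The sketch below is illustrative only (the function name and the sample vectors are invented for the example):

import math

def angle_of_incidence_deg(ray, normal):
    # Angle between an incident ray and the surface normal, in degrees;
    # 0 means the ray strikes the surface perpendicularly.
    dot = sum(r * n for r, n in zip(ray, normal))
    ray_len = math.sqrt(sum(r * r for r in ray))
    normal_len = math.sqrt(sum(n * n for n in normal))
    # abs() makes the result independent of which way the normal points
    cos_theta = abs(dot) / (ray_len * normal_len)
    return math.degrees(math.acos(min(1.0, cos_theta)))

theta = angle_of_incidence_deg((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))
print(round(theta, 1))         # 45.0 degrees from the normal
print(round(90.0 - theta, 1))  # the corresponding grazing (glancing) angle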
Physical sciences
Optics
Physics
37017694
https://en.wikipedia.org/wiki/Lymantria%20dispar%20dispar
Lymantria dispar dispar
Lymantria dispar dispar, commonly known as the gypsy moth, European gypsy moth, LDD moth, or (in North America) North American gypsy moth or spongy moth, is a species of moth in the family Erebidae. It has a native range that extends over Europe and parts of Africa, and is an invasive species in North America. Its larvae are polyphagous, consuming the leaves of over 500 species of trees, shrubs and plants. In its invasive range it is classified as a pest, notably one of the most destructive pests of hardwood trees in the Eastern United States. It is listed as one of the 100 most destructive invasive species worldwide.
Taxonomy
Carl Linnaeus first described the species as Phalaena [Bombyx] dispar in 1758. Its classification has changed throughout the years, resulting in confusion surrounding the species' taxonomy, and different references consequently describe this one species in different ways. The species has been placed at various times in the families Lymantriidae, Noctuidae and Erebidae. Lymantria dispar dispar is designated the nominate subspecies of Lymantria dispar. The species L. dispar has been split into several subspecies, including L. d. subsp. asiatica and L. d. subsp. japonica. The family is Erebidae, subfamily Lymantriinae. Lymantriid larvae are commonly called tussock moths because of the tufts of hair on the larvae.
Etymology
The name Lymantria dispar is composed of two Latin-derived words. Lymantria means "destroyer". The word dispar is derived from the Latin for "unequal", reflecting the differing characteristics between the sexes.
L. d. dispar versus L. dispar usage
For the purposes of taxonomy, the common names European gypsy moth and North American gypsy moth represent the same subspecies, Lymantria dispar dispar, as opposed to subspecies of L. dispar from Asia. Confusion over the species and subspecies is widespread. For regulatory purposes, the U.S. Department of Agriculture has defined Asian Gypsy Moth as "any biotype of Lymantria dispar (sensu lato) possessing female flight capability", despite Lymantria dispar asiatica not being the only classified subspecies capable of flight; even females of L. d. dispar can fly, if only very weakly. Traditionally, all L. dispar have been referred to as gypsy moth, including when referring to Japanese, Indian, and Asiatic subspecies.
Common name
In the species' native area, the English common name is (and remains) gypsy moth. In North America in July 2021, the Entomological Society of America announced it was delisting the common name gypsy moth as part of an effort to replace racist or otherwise offensive names, as Gypsy is considered an offensive slur by some in the North American Romani community. This led the Society to select a replacement name through a consultative process. In January 2022, it proposed the name spongy moth, in reference to the spongy mass of the egg casing. This new common name was formally adopted in North America in March 2022.
Range
Lymantria dispar dispar is indigenous to Europe. In southeastern England, it was widespread in the 19th century, but became extinct except for occasional migrants soon after 1900. It then recolonised naturally from the 1990s; colonies were found in London from 1995 and Buckinghamshire from 2005, and more widely since.
Introduction to North America
Lymantria dispar dispar was first brought to North America in 1869 and rapidly became an invasive species.
Étienne Léopold Trouvelot imported the moths with the intent of interbreeding them with silk moths to establish a new silkworm industry in the West. The moths were accidentally released from his residence in Medford, Massachusetts. There are conflicting reports on the resulting actions. One states that despite his issuing oral and written warnings of possible consequences, no officials were willing to assist in searching out and destroying the moths. The other notes that Trouvelot was, in fact, well aware of the risk and that there is no direct evidence that he contacted any government officials about the moths' release. Though the Asiatic and the Eurasian moths were not classified as different subspecies until the 1980s, the differences were already notable. As noted in The Gypsy Moth (1896) by Forbush and Fernald, the moth was considered a nuisance just ten years after its release. The first major outbreak occurred in 1889, and Forbush and Fernald recount the extent of devastation: all the trees being defoliated and caterpillars covering houses and sidewalks and raining down upon residents. At first it was uncertain what species was responsible for the outbreak, but after the caterpillar was identified by entomologist Maria Elizabeth Fernald, an eradication program began in 1890. The moths would eventually reach the Pacific Northwest, invading sporadically but so far not becoming established there, due to successful eradication campaigns.
Spread
The small larvae of the moth take to the air and are carried by the wind: they spin silken threads and hang from them, waiting for the wind to blow, and their light bodies bear long hairs that increase their surface area and suit them for being carried aloft. The natural spread is slow, but transportation of the moth has led to isolated populations, with accidental transport of the eggs being noted. According to the United States Department of Agriculture, without intervention this pest spreads only a short distance per year. A study published in 2012 suggests that storms can accelerate the spread, hypothesizing that strong easterly winds carried larvae across Lake Michigan to Wisconsin. Firewood transport is a common way for the eggs to spread, since the moths will lay their eggs on dead wood. Attempts have been made to limit the movement of firewood to reduce the moth's spread.
Life cycle
Eggs
Egg masses are typically laid on branches and trunks of trees, but may be found in any sheltered location, including rocks, foliage and vehicles. Females are flightless, so they lay eggs on a surface near where they emerged from their pupa. The eggs are covered in a coating of hairs. The covering provides protection from predators and parasites, and may be important for insulating the eggs from cold and for sealing in moisture. The larva inside the egg becomes fully developed about one month after being laid and then enters diapause to overwinter. The egg is the overwintering stage, lasting for eight or nine months; development ceases in preparation for the winter. After an acclimation stage, during which the larva inside the egg reduces its water content, eggs can withstand freezing temperatures. The larva inside the egg resumes activity in the spring and reabsorbs water. The larva will then chew through the chorion of the egg and the protective hair of the egg cluster in the spring. Egg clusters are usually oval in shape.
The egg masses are buff yellow-brown, likened to a manila folder, but may bleach out over the winter months. As the female lays them, she covers them with hair-like setae from her abdomen. Egg clusters contain from 100 to 1000 eggs. Due to the appearance of the eggs, several common names have emerged; the German Schwammspinner (literally "sponge spinner") and the French la spongieuse refer to the spongy texture of the egg cluster.
Larvae
Larvae (caterpillars) emerge from egg masses in the spring. Most larvae hatch within a week, but hatching can take as long as a month. The new larvae remain on or near the egg cluster if they hatch in rainy weather or if the temperature is low. The larvae will disperse even if there is enough foliage for growth, hanging from silk threads and waiting for the wind to send them aloft. The larvae are tiny when they first hatch and grow substantially before pupation. The larva will first feed on the leaf hairs and then move on to the leaf epidermis. Feeding occurs in the daytime, primarily in the morning and late afternoon. As the larvae grow, feeding becomes a nocturnal activity. When not eating, the larva will remain on the underside of the leaf and make a mat of silk for attachment. To grow, the larva must molt. Larvae are characterized by the term instar, which refers to the number of times a larva has molted; a first-instar larva has not yet molted, a second instar has molted once, a third instar twice, and so on. Males typically pass through five instars and females through six. When the larvae reach the fourth instar, they become nocturnal feeders and will return to their resting places at dawn, hiding under flaps of bark, in crevices, or under branches, anywhere that provides protection. Newly hatched larvae are black with long, hair-like setae. Older larvae have five pairs of raised blue spots and six pairs of raised brick-red spots along their backs, and a sprinkling of setae. As the larval stage comes to an end, they cease feeding and surround themselves in a silken net.
Distinguishing the larvae from other species
Larvae can be distinguished from other species of caterpillar by their spots: five pairs of blue spots close to the head and six pairs of red spots towards the tail. This distinction prevents confusion with the spiny elm caterpillar and other caterpillars. Eastern tent caterpillars can be distinguished by their silky tents in trees, the presence of a back stripe, and their preference for cherry trees.
Pupae
The larvae reach maturity between mid-June and early July, then enter the pupal stage, during which larvae change into adult moths. Pupation lasts 14 to 17 days. The adult moth emerges, fully developed, by splitting the pupal skin. When the population is sparse, pupation can take place under flaps of bark, in crevices, under branches, on the ground, and in other places where larvae rested. During periods when population numbers are dense, pupation is not restricted to these locations, but can take place in sheltered and open locations, even exposed on the trunks of trees or on the foliage of nonhost trees. Usually, the caterpillars create flimsy cocoons made of silk strands holding the leaf together, while others do not cover their pupae in cocoons, but rather hang from a twig or tree bark, like butterfly pupae do.
Adults
The males have feathery antennae versus the thin antennae of females. Size differences are also noted, the forewing of the female being longer than that of the male; the females are bigger than the males.
Another important difference between the sexes is that females possess fully formed wings but do not fly. Female flight is common in Eurasia, but those moths may be of a different species. Moths usually emerge from pupae in July, but the timing can vary with population density and climate. The brown male gypsy moth emerges first, usually one or two days before the females do. The males fly in rapid zigzag patterns, but are capable of direct flight. Like most moths, the males are typically nocturnal, but can sometimes be seen flying during the day as well. The males fly up and down tree trunks, or other vertical objects, in search of females. When the heavy, black-and-white, egg-laden females emerge, they emit a pheromone that attracts the males. The female has a small gland near the tip of the abdomen that releases the pheromone with a pumping motion, termed "calling". The pheromone can attract males from long distances; a male tracks the scent through an erratic flight pattern. The sex pheromone of female gypsy moths is (7R,8S)-7,8-epoxy-2-methyloctadecane. A previous failed identification of this pheromone, 10-acetoxy-cis-7-hexadec-1-ol, was given the trivial name of gyptol. It was found in female moth extracts, but was later shown to be inactive. The structure determination of this pheromone was an eventful chapter in the history of pheromone chemistry. Courtship is not elaborate: the female must raise her wing to allow the male to couple with her. The moths remain in copula for up to an hour, but the transfer of the spermatophore is usually accomplished within 10 minutes. A male moth can inseminate more than one female. Multiple mating in females is possible but uncommon, as the female stops releasing the attracting pheromone after mating. After mating, the females begin depositing the eggs. Adult moths live about one week. They do not possess an active digestive system and cannot feed, but they can take in moisture. The reproductive window for females lasts about two days, with the pheromone for attracting males being diminished by the third day. Due to the pheromone's potency, most females will mate. Females lay their eggs on trees, shrubs, rocks, vehicles, and plants of many types. Each female typically lays about 500 eggs. The eggs are covered with a peachy fuzz that can cause serious rashes if touched by bare skin or fur.
Behavior
Many environmental factors, such as resource availability, predator density and sexual competition, are known to affect behavior from the larval stage to the adult stage. A population will exist for many years at low densities. When the population enters the release phase, it rapidly expands to the outbreak phase, in which the population size rises by several orders of magnitude and falls back within only a few generations.
Feeding
The gypsy moth has one of the largest defoliation impacts on deciduous trees in the Northern Hemisphere. Since its introduction into the United States in 1868 or 1869, it has spread both west and south, now occupying most of the hardwood forests in the eastern United States and Canada. Over three hundred species of trees and shrubs are hosts. Larvae will climb up any object in their path in search of food.
Hosts
Larvae prefer oak trees, but may feed on many species of trees and shrubs, both hardwood and conifer. In the eastern US, the gypsy moth prefers the leaves of oaks, aspen, apple, sweetgum, speckled alder, basswood, gray birch, paper birch, poplar, willow, and hawthorns, among other species.
Older larvae feed on several species of softwood that younger larvae avoid, including cottonwood, hemlock, Atlantic white cypress, and the pine and spruce species native to the east. The gypsy moth avoids ash trees, tulip-tree, American sycamore, butternut, black walnut, catalpa, flowering dogwood, balsam fir, arborvitae, American holly, and mountain laurel and rhododendron shrubs, but will feed on these in late instars when densities are extremely high.
Predation
Many species have been identified as preying on L. dispar. Some species, such as the white-footed mouse or Anastatus disparis, have a significant impact on the population dynamics of the moth. On the other hand, avian predation and invertebrate predation show only small effects on population dynamics.
Small mammals
The white-footed mouse, Peromyscus leucopus, is considered important for regulating sparse moth populations. Rodents consume larvae and pupae that seek resting sites near or on the ground. The white-footed mouse is the most common and widely distributed small mammal in the northeastern United States. The northern short-tailed shrew is common east of the Rocky Mountains and will consume the larvae and pupae.
Birds
Insectivorous birds prey upon the larvae of the gypsy moth, but the egg clusters are protected by their hair coverings. The effects of bird predation have not been fully studied in North America, but they have been well documented in Japan and Eurasia. When outbreaks of gypsy moths occur, bird predation has no significant effect on the population. Birds that consume gypsy moth larvae, pupae and adults include the blue jay, red-eyed vireo, eastern towhee, northern oriole, catbird and the European robin. The black-capped chickadee feeds on the moth throughout its entire life cycle, including the eggs. Many bird species feed on gypsy moth larvae, but the larvae are not a major food source for any common bird species. Although a few European studies cite avian predation as a large influence in keeping the gypsy moth population in control, few studies exist to prove this.
Invertebrate predation
Calosoma sycophanta is a beetle that preys upon gypsy moth larvae and pupae. Larvae and adults of the species tear open their prey and feed upon them. In low-density populations, there is a positive correlation between larval mortality and predation rates on pupae.
Parasites
Gypsy moth parasitoids have been widely studied, but they do not seem to have major effects on the population. Four species of parasitic flies prey on gypsy moth larvae. Parasetigena silvestris and Exorista larvarum lay an egg on the gypsy moth larva; if that egg hatches before the gypsy moth larva molts, the fly larva will penetrate the host. Compsilura concinnata pierces the gypsy moth larva and deposits its own larva inside. Blepharipa pratensis lays its eggs on leaves; the gypsy moth larva consumes the egg, and the fly larva hatches inside its gut. Eight species of parasitic wasps attack the gypsy moth. Ooencyrtus kuvanae and Anastatus disparis attack the eggs. The effectiveness of O. kuvanae is limited by its ovipositor, which can only penetrate the surface layer of the egg cluster. A. disparis has limited success because it can only attack unembryonated eggs and the female wasps do not have wings. Even so, A. disparis is the only species that is known to occasionally affect the population dynamics of the gypsy moth. Apanteles melanoscelus and Phobocampe disparis parasitize the early larval stages.
Brachymeria intermedia and Monodontomerus aureus parasitize the gypsy moth pupae. A parasite native to North America, Itoplectis conquisitor, attacks and kills gypsy moth pupae, but development of the larva is rare within the host and the number of attacks themselves is also low. Glyptapanteles portheriae and G. liparidis are both wasps that lay eggs on the gypsy moth larva. Study of G. liparidis showed almost 90% success when the host is parasitized during the premolt to the third instar. Most invertebrate predation occurred when larvae were in the litter. A population in Austria suffered high parasitism by the tachinid Parasetigena silvestris, which may have contributed to preventing a further increase of L. dispar in this locality.
Pathogens
Gypsy moth populations in different locations show vulnerability to different viral species. Nuclear polyhedrosis virus caused significant mortality in some cases. The most important pathogen is the Lymantria dispar multicapsid nuclear polyhedrosis virus (LdMNPV), sometimes referred to as NPV or Borralinivirus reprimens. Viral particles consumed by a larva as it eats through the egg chorion will kill it during the first instar. The bodies disintegrate, spreading the virus on the foliage, which is then consumed by other larvae. Outbreaks of the virus result in high larval mortality, and the odor of the decaying larvae permeates the area. The pathogen is used as an insecticide under the name Gypchek. Streptococcus faecalis is another pathogen notable for the mortality it causes; larvae killed by it have a shriveled appearance.
Fungi
Entomophaga maimaiga is a Japanese fungus that helps control the population of gypsy moths. It was first introduced to North America around 1910, though it was not an effective control until the 1980s.
Other adaptations
Population density of larvae plays a major role in their behavior. At low density, larvae remain inactive during the day, but in high-density populations they become hyperactive. Inbreeding produces deleterious genetic effects, and dispersal serves to reduce them.
Weather
Temperature is important to the gypsy moth, and sufficiently low temperatures are fatal. Moderate cold can be withstood during the winter, but an extended period of extreme cold will kill the larvae inside the eggs, and severe cold is lethal even for a short time. Egg deposits that are low down or on the ground can be insulated, including by snow, and survive otherwise lethal temperatures. Warmer temperatures increase growth and development. Rainfall can drown larvae before they are established; low populations are correlated with heavy rainfall during the larval stage. Wind is also critical to the dispersal of the larvae. Wind speeds of several miles per hour are enough to break the silk threads and disperse the larvae.
Pheromones
Disparlure has been synthesized artificially and has been used to confuse mating patterns or to lead male moths into traps. However, this technique is less successful against concentrated populations, hence its main use may be to delay infestation until other ecological solutions are found.
Impact
The gypsy moth habitat overlaps with that of the northern tiger swallowtail, Papilio canadensis. Experiments indicate that known gypsy moth pathogens and gypsy moth bodily fluid negatively affect the survival of swallowtail larvae. Gypsy moth bodily fluid is lethal, and swallowtail caterpillars were prone to higher rates of parasitism when placed in the field near gypsy moth infestations.
Lymantria dispar dispar causes widespread defoliation and costs the economy millions of dollars in damages. Large areas of American forest were defoliated each year between 1970 and 2010, with 1981 the worst year on record. Forest defoliation by the gypsy moths each year affects the populations and reproductive success of forest-dwelling birds. Nests placed in defoliated sites suffered a higher predation rate than those in non-defoliated sites. Gypsy moths thus have a direct impact on avian behavior in American forests.
Rash
The caterpillar has been reported to produce a poison ivy-like rash in some people who come into contact with the hairs of the larval (caterpillar) stage. The contact can be direct, or can occur when the small hairs are carried by the wind onto the skin or clothing of a person. Such rashes were documented in the early 1980s, during a major infestation in the Northeastern United States. In coastal Maine and Cape Cod, Massachusetts, a caterpillar-triggered rash is much more likely to be due to exposure to the brown-tail moth (Euproctis chrysorrhoea).
Biology and health sciences
Lepidoptera
null
57475427
https://en.wikipedia.org/wiki/Nipah%20virus%20infection
Nipah virus infection
Nipah virus infection is an infection caused by the Nipah virus. Symptoms from infection vary from none to fever, cough, headache, shortness of breath, and confusion. This may worsen into a coma over a day or two, and 50% to 75% of those infected die. Complications can include inflammation of the brain and seizures following recovery. The Nipah virus (NiV) is a type of RNA virus in the genus Henipavirus, which normally circulates among fruit bats of the genus Pteropus. Spread typically requires direct contact with an infected source; the virus can spread both between people and from other animals to people. Diagnosis is based on symptoms and confirmed by laboratory testing. Management is restricted to supportive care; there is neither a vaccine nor a specific treatment. Preventive measures include avoiding exposure to bats and to infected animals such as pigs, and not drinking raw date palm sap. As of May 2018, about 700 human cases of Nipah virus were estimated to have occurred, and 50 to 75 percent of those infected died. The disease was first identified in 1998 by a team of researchers at the Faculty of Medicine, University of Malaya, during an outbreak in Malaysia. The majority of the patients in Malaysia diagnosed with the disease were referred to and treated at the University of Malaya Medical Centre. The virus was isolated and identified in 1999. The disease is named after Sungai Nipah, a village in Malaysia. Pigs may also be infected; millions of them were culled by Malaysian authorities in 1999, successfully stopping the spread of the disease.
Signs and symptoms
Human infections range from asymptomatic infection to acute respiratory infection, seizures and fatal encephalitis. The illness typically presents initially as 3–14 days of fever and headache, often accompanied by a cough, sore throat, difficulty breathing, and other signs of respiratory illness; early symptoms also include myalgia and vomiting. These can be followed by dizziness, drowsiness, altered consciousness, and neurological signs that indicate acute encephalitis. Some people can also experience atypical pneumonia and severe respiratory problems, including acute respiratory distress. Encephalitis and seizures occur in severe cases, progressing to coma within 24 to 48 hours. The incubation period is from 4 to 14 days, but an incubation period as long as 45 days has been reported. Death occurs in 40–75% of cases, and some long-term side effects of infection include persistent convulsions and personality changes. Most survivors make a full recovery, although some are left with residual neurological conditions after acute encephalitis. Some cases of relapse have been reported.
Transmission
The initial case in human outbreaks of Nipah virus has always been zoonotic, from exposure to contaminated secretions or tissues of infected bats or pigs. Subsequent human-to-human transmission of Nipah virus occurs via close contact with NiV-infected persons or exposure to NiV-infected body fluids (e.g., blood, urine, nasal secretions). Most experts do not classify Nipah virus as airborne, though there is consensus that transmission can—and does—occur from short-range exposure to NiV-infected respiratory droplets in close-contact settings. Indirect transmission of Nipah virus via contaminated fomites is likely responsible for many cases in which there was no known direct contact with a NiV-infected person or animal.
Risk factors
The risk of exposure is high for hospital workers and caretakers of those infected with the virus. In Malaysia and Singapore, Nipah virus infected people who had close contact with infected pigs. In Bangladesh and India, the disease has been linked to consumption of raw date palm sap (toddy), eating of fruits partially consumed by bats, and using water from wells inhabited by bats.
Diagnosis
During the acute and convalescent stages of the disease, viral RNA can be detected using reverse transcriptase polymerase chain reaction (RT-PCR) on throat swabs, cerebrospinal fluid, urine and blood. After recovery, IgG and IgM antibody detection can confirm a prior Nipah virus infection. Immunohistochemistry on tissues collected during autopsy also confirms the disease.
Prevention
Prevention through sanitary practices is the best protection. The likelihood of infection through animal transmission can be reduced by avoiding exposure to sick pigs, and to bats where the disease is endemic. Bats harbor a significantly higher proportion of zoonotic viruses than all other mammalian orders, and are known not to be affected by the many viruses they carry, apparently because they have evolved specialised immune systems to deal with the stress of flying. Infection via bats can be caused by drinking raw palm sap (palm toddy) contaminated by bat excreta, eating fruits partially consumed by bats, and using water from wells inhabited by bats. Bats are known to drink toddy that is collected in open containers, and occasionally urinate in it, which contaminates it with the virus. Standard infection control practices can protect against human-to-human hospital-acquired infections. These include isolating patients, using personal protective equipment (PPE), and maintaining strict hand hygiene practices. Individuals identified through contact tracing are tested and monitored until confirmed negative. Healthcare facilities must enforce rigorous infection prevention protocols when caring for suspected or confirmed cases. In January 2024 a candidate vaccine, ChAdOx1 NipahB, commenced Phase I clinical trials after completing laboratory and animal testing.
Treatment
There is no specific treatment for Nipah virus infection; the mainstay of treatment is supportive care. While tentative evidence supports the use of ribavirin, it has not yet been studied in people with the disease. Specific antibodies have also been studied in an animal model, with potential benefit. Acyclovir, favipiravir, and remdesivir have been assessed as potential antivirals against Nipah virus. m102.4 is a nonpatented monoclonal antibody developed by Christopher C. Broder, a professor of immunology and microbiology at the Uniformed Services University of the Health Sciences in Maryland. It proved highly effective in animal models. Fifty doses were sent to Kerala in 2018, but it is not clear whether they were used in humans.
Prognosis
Nipah virus infection is fatal in 40 to 70 percent of cases, though some outbreaks have had a 100% case fatality rate.
Sequelae
Those who survive the initial infection often struggle with debilitating long-term neurological sequelae, including memory loss, impaired cognition, seizures, convulsions, and personality changes. Moreover, Nipah virus is known to be able to persist and lie dormant in survivors and to re-activate many months or years after the initial infection. Deaths from re-activation of latent Nipah virus have been reported.
Outbreaks
Nipah virus outbreaks have been reported in Malaysia, Singapore, Bangladesh and India.
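The mortality percentages attached to the individual outbreaks in the list below are simple case fatality rates: deaths divided by reported cases. A minimal sketch, using four of the Bangladeshi outbreaks from the list that follows (labels shortened for the example):

def case_fatality_rate_percent(deaths, cases):
    # Case fatality rate: deaths as a share of reported cases.
    return deaths / cases * 100.0

# (deaths, cases) as reported in the outbreak list below
for name, deaths, cases in [("Meherpur 2001", 9, 13), ("Naogaon 2003", 8, 12),
                            ("Faridpur 2004", 27, 36), ("Tangail 2005", 11, 12)]:
    print(name, round(case_fatality_rate_percent(deaths, cases)), "%")
# Meherpur 69 %, Naogaon 67 %, Faridpur 75 %, Tangail 92 %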
The area is known as the Nipah Belt. The highest mortality due to Nipah virus infection was found in Bangladesh, where outbreaks are typically seen in winter. Nipah virus was first seen in 1998 in peninsular Malaysia in pigs and pig farmers. By mid-1999, more than 265 human cases of encephalitis, including 105 deaths, had been reported in Malaysia, and 11 cases of either encephalitis or respiratory illness with one fatality were reported in Singapore. In 2001, Nipah virus was reported from Meherpur District, Bangladesh and Siliguri, India. Outbreaks appeared again in 2003, 2004 and 2005 in Naogaon District, Manikganj District, Rajbari District, Faridpur District and Tangail District. In Bangladesh there were further outbreaks in subsequent years.
September 1998 – May 1999: in the states of Perak, Negeri Sembilan and Selangor in Malaysia. A total of 265 cases of acute encephalitis with 105 deaths caused by the virus were reported in the three states throughout the outbreak. The Malaysian health authorities at first thought Japanese encephalitis (JE) was the cause of infection, which hampered the deployment of effective measures to prevent the spread of Nipah virus.
2001: 31 January – 23 February, Siliguri, India: 66 cases with a 74% mortality rate. 75% of patients were either hospital staff or had visited one of the other patients in hospital, indicating person-to-person transmission.
2001: April–May, Meherpur District, Bangladesh: 13 cases with nine fatalities (69% mortality).
2003: January, Naogaon District, Bangladesh: 12 cases with eight fatalities (67% mortality).
2004: January–February, Manikganj and Rajbari districts, Bangladesh: 42 cases with 14 fatalities (33% mortality).
2004: 19 February – 16 April, Faridpur District, Bangladesh: 36 cases with 27 fatalities (75% mortality). 92% of cases involved close contact with at least one other person infected with Nipah virus. Two cases involved a single short exposure to an ill patient, including a rickshaw driver who transported a patient to hospital. In addition, at least six cases involved acute respiratory distress syndrome, which had not previously been reported for Nipah virus illness in humans.
2005: January, Tangail District, Bangladesh: 12 cases with 11 fatalities (92% mortality). The virus was probably contracted from drinking date palm juice contaminated by fruit bat droppings or saliva.
2007: February–May, Nadia District, India: up to 50 suspected cases with 3–5 fatalities. The outbreak site borders the Bangladeshi district of Kushtia, where eight cases of Nipah virus encephalitis with five fatalities occurred during March and April 2007. This was preceded by an outbreak in Thakurgaon during January and February affecting seven people with three deaths. All three outbreaks showed evidence of person-to-person transmission.
2008: February–March, Manikganj and Rajbari districts, Bangladesh: Nine cases with eight fatalities.
2010: January, Bhanga subdistrict, Faridpur, Bangladesh: Eight cases with seven fatalities. During March, a physician of Faridpur Medical College Hospital caring for confirmed Nipah cases died.
2011: February: An outbreak of Nipah virus occurred at Hatibandha, Lalmonirhat, Bangladesh. The deaths of 21 schoolchildren due to Nipah virus infection were recorded on 4 February 2011. IEDCR confirmed the infection was due to this virus. Local schools were closed for one week to prevent the spread of the virus. People were also requested to avoid consumption of uncooked fruits and fruit products.
Such foods, contaminated with urine or saliva from infected fruit bats, were the most likely source of this outbreak.
2018: May: Deaths of twenty-one people in Perambra near Calicut, Kerala, India were confirmed to be due to the virus. Treatment using antivirals such as ribavirin was initiated.
2019: June: A 23-year-old student was admitted to hospital with Nipah virus infection at Kochi in Kerala. Health Minister of Kerala K. K. Shailaja said that 86 people who had had recent interactions with the patient were under observation, including two nurses who had treated the patient and had fever and sore throat. The situation was monitored, and precautionary steps were taken by the central and state governments to control the spread of the virus. The Health Department of Kerala kept 338 people under observation, 17 of them in isolation. After undergoing treatment for 54 days at a private hospital, the 23-year-old student was discharged. On 23 July, the Kerala government declared Ernakulam district to be Nipah-free.
2021: September: A 12-year-old boy, a native of Chathamangalam village, was admitted to a hospital at Kozhikode in Kerala on 1 September. He died from the virus four days after admission. Two healthcare workers who came into contact with the victim were already showing symptoms of Nipah infection by Monday.
2023: Since 4 January 2023 and as of 13 February 2023, 11 cases (10 confirmed and one probable), including eight deaths (a case fatality rate of 73%), had been reported in Bangladesh. WHO assesses the ongoing risk as high at the national level.
2023: September: Kozhikode district, Kerala, India: As of 14 September 2023, five cases, including two deaths, were confirmed in Kozhikode district in Kerala. The government prepared a contact list of over 700 people linked to the two deaths, of whom two family members and a healthcare worker tested positive for the virus.
2024: July: One person died and 60 were identified as at risk of infection in Malappuram district, Kerala, India.
Research
Ribavirin, the m102.4 monoclonal antibody, and favipiravir were being studied as treatments as of 2019.
Medication
Ribavirin has been studied in a small number of people; it was unclear whether it was useful, although a few people had returned to normal life after treatment. In vitro and animal studies have shown conflicting results on the efficacy of ribavirin against NiV and Hendra virus: some studies showed effective inhibition of viral replication in cell lines, whereas some animal-model studies showed that ribavirin treatment only delayed, but did not prevent, death after NiV or Hendra virus infection. In 2013, the anti-malarial drug chloroquine was shown to block the critical functions needed for maturation of Nipah virus, although no clinical benefit was observed.
Immunization
Passive immunization using a human monoclonal antibody, m102.4, that targets the ephrin-B2 and ephrin-B3 receptor-binding domain of the henipavirus Nipah G glycoprotein was evaluated in the ferret model as post-exposure prophylaxis. m102.4 has been used in people on a compassionate-use basis in Australia, and was in pre-clinical development in 2013.
Society and culture
The fictional MEV-1 virus featured in the 2011 film Contagion was based on a combination of Nipah and measles virus. A Malayalam movie, Virus, was released in 2019, based on the 2018 outbreak of Nipah virus in Kerala, India. In series 4, episode 1 of The Good Karma Hospital, a patient is admitted with Nipah.
Episode 7 of the Korean drama Doctor John also features a Nipah virus outbreak.
Biology and health sciences
Viral diseases
Health
42657422
https://en.wikipedia.org/wiki/Post-transition%20metal
Post-transition metal
The metallic elements in the periodic table located between the transition metals to their left and the chemically weak nonmetallic metalloids to their right have received many names in the literature, such as post-transition metals, poor metals, other metals, p-block metals, basic metals, and chemically weak metals. The most common name, post-transition metals, is generally used in this article. Physically, these metals are soft (or brittle), have poor mechanical strength, and usually have melting points lower than those of the transition metals. Being close to the metal-nonmetal border, their crystalline structures tend to show covalent or directional bonding effects, having generally greater complexity or fewer nearest neighbours than other metallic elements. Chemically, they are characterised—to varying degrees—by covalent bonding tendencies, acid-base amphoterism and the formation of anionic species such as aluminates, stannates, and bismuthates (in the case of aluminium, tin, and bismuth, respectively). They can also form Zintl phases (half-metallic compounds formed between highly electropositive metals and moderately electronegative metals or metalloids). Applicable elements The post-transition metals are located on the periodic table between the transition metals to their left and the chemically weak nonmetallic metalloids or nonmetals to their right. Generally included in this category are: the group 13–16 metals in periods 4–6 namely gallium, indium and thallium, tin and lead, bismuth, and polonium; and aluminium, a group 13 metal in period 3. They can be seen at the bottom right in the accompanying plot of electronegativity values and melting points. The boundaries of the category are not necessarily sharp as there is some overlapping of properties with adjacent categories (as occurs with classification schemes generally). Some elements otherwise counted as transition metals are sometimes instead counted as post-transition metals namely the group 10 metal platinum; the group 11 coinage metals copper, silver and gold; and, more often, the group 12 metals zinc, cadmium and mercury. Similarly, some elements otherwise counted as metalloids or nonmetals are sometimes instead counted as post-transition metals namely germanium, arsenic, selenium, antimony, tellurium, and polonium (of which germanium, arsenic, antimony, and tellurium are usually considered to be metalloids). Astatine, which is usually classified as a nonmetal or a metalloid, has been predicted to have a metallic crystalline structure. If so, it would be a post-transition metal. Elements 112–118 (copernicium, nihonium, flerovium, moscovium, livermorium, tennessine, and oganesson) may be post-transition metals; insufficient quantities of them have been synthesized to allow sufficient investigation of their actual physical and chemical properties. Rationale The diminished metallic nature of the post-transition metals is largely attributable to the increase in nuclear charge going across the periodic table, from left to right. The increase in nuclear charge is partially offset by an increasing number of electrons but as these are spatially distributed each extra electron does not fully screen each successive increase in nuclear charge, and the latter therefore dominates. With some irregularities, atomic radii contract, ionisation energies increase, fewer electrons become available for metallic bonding, and "ions [become] smaller and more polarizing and more prone to covalency." 
This phenomenon is more evident in the period 4–6 post-transition metals, due to inefficient screening of their nuclear charges by their d10 and (in the case of the period 6 metals) f14 electron configurations; the screening power of electrons decreases in the sequence s > p > d > f. The reductions in atomic size due to the interjection of the d- and f-blocks are referred to as, respectively, the 'scandide' or 'd-block contraction', and the 'lanthanide contraction'. Relativistic effects also "increase the binding energy", and hence the ionisation energy, of the electrons in "the 6s shell in gold and mercury, and the 6p shell in subsequent elements of period 6."
Descriptive chemistry
Group 10
Platinum is a moderately hard metal (MH 3.5) of low mechanical strength, with a close-packed face-centred cubic structure (BCN 12). Compared to other metals in this category, it has an unusually high melting point (2042 K v 1338 for gold). Platinum is more ductile than gold, silver or copper, thus being the most ductile of pure metals, but it is less malleable than gold. Like gold, platinum is a chalcophile element in terms of its occurrence in the Earth's crust, preferring to form covalent bonds with sulfur. It behaves like a transition metal in its preferred oxidation states of +2 and +4. There is very little evidence of the existence of simple metal ions in aqueous media; most platinum compounds are (covalent) coordination complexes. The oxide (PtO2) is amphoteric, with acidic properties predominating; it can be fused with alkali hydroxides (MOH; M = Na, K) or calcium oxide (CaO) to give anionic platinates, such as red Na2PtO3 and green K2PtO3. The hydrated oxide can be dissolved in hydrochloric acid to give hexachloroplatinic(IV) acid, H2PtCl6. Like gold, which can form compounds containing the −1 auride ion, platinum can form compounds containing platinide ions, such as the Zintl phases BaPt, Ba3Pt2 and Ba2Pt, being the first (unambiguous) transition metal to do so. Darmstadtium should be similar to its lighter homologue platinum. It is expected to have a body-centred cubic structure. It should be a very dense metal, with a density of 26–27 g/cm3 surpassing that of all stable elements. Darmstadtium chemistry is expected to be dominated by the +2 and +4 oxidation states, similar to platinum. Darmstadtium(IV) oxide (DsO2) should be amphoteric, and darmstadtium(II) oxide (DsO) basic, exactly analogous to platinum. There should also be a +6 oxidation state, similar to platinum. Darmstadtium should be a very noble metal: the standard reduction potential for the Ds2+/Ds couple is expected to be +1.7 V, more than the +1.52 V for the Au3+/Au couple.
Group 11
The group 11 metals are typically categorised as transition metals given that they can form ions with incomplete d-shells. Physically, they have the relatively low melting points and high electronegativity values associated with post-transition metals. "The filled d subshell and free s electron of Cu, Ag, and Au contribute to their high electrical and thermal conductivity. Transition metals to the left of group 11 experience interactions between s electrons and the partially filled d subshell that lower electron mobility." Chemically, the group 11 metals in their +1 valence states show similarities to other post-transition metals; they are occasionally classified as such. Copper is a soft metal (MH 2.5–3.0) with low mechanical strength. It has a close-packed face-centred cubic structure (BCN 12).
Copper behaves like a transition metal in its preferred oxidation state of +2. Stable compounds in which copper is in its less preferred oxidation state of +1 (Cu2O, CuCl, CuBr, CuI and CuCN, for example) have significant covalent character. The oxide (CuO) is amphoteric, with basic properties predominating; it can be fused with alkali oxides (M2O; M = Na, K) to give anionic oxycuprates (M2CuO2). Copper forms Zintl phases such as Li7CuSi2 and M3Cu3Sb4 (M = Y, La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Ho, or Er). Silver is a soft metal (MH 2.5–3) with low mechanical strength. It has a close-packed face-centred cubic structure (BCN 12). The chemistry of silver is dominated by its +1 valence state, in which it shows generally similar physical and chemical properties to compounds of thallium, a main group metal, in the same oxidation state. It tends to bond covalently in most of its compounds. The oxide (Ag2O) is amphoteric, with basic properties predominating. Silver forms a series of oxoargentates (M3AgO2, M = Na, K, Rb). It is a constituent of Zintl phases such as Li2AgM (M = Al, Ga, In, Tl, Si, Ge, Sn or Pb) and Yb3Ag2. Gold is a soft metal (MH 2.5–3) that is easily deformed. It has a close-packed face-centred cubic structure (BCN 12). The chemistry of gold is dominated by its +3 valence state; all such compounds of gold feature covalent bonding, as do its stable +1 compounds. Gold oxide (Au2O3) is amphoteric, with acidic properties predominating; it forms anionic hydroxoaurates, where M = Na, K, ½Ba, Tl, and aurates such as NaAuO2. Gold is a constituent of Zintl phases such as M2AuBi (M = Li or Na); Li2AuM (M = In, Tl, Ge, Pb, Sn) and Ca5Au4. Roentgenium is expected to be similar to its lighter homologue gold in many ways. It is expected to have a body-centred cubic structure. It should be a very dense metal, with its density of 22–24 g/cm3 being around that of osmium and iridium, the densest stable elements. Roentgenium chemistry is expected to be dominated by the +3 valence state, similarly to gold, and it should likewise behave as a transition metal. Roentgenium oxide (Rg2O3) should be amphoteric; stable compounds in the −1, +1, and +5 valence states should also exist, exactly analogous to gold. Roentgenium is similarly expected to be a very noble metal: the standard reduction potential for the Rg3+/Rg couple is expected to be +1.9 V, more than the +1.52 V for the Au3+/Au couple. The cation is expected to be the softest among the metal cations. Due to relativistic stabilisation of the 7s subshell, roentgenium is expected to have a full s-subshell and a partially filled d-subshell, instead of the free s-electron and full d-subshell of copper, silver, and gold. Group 12 On the group 12 metals (zinc, cadmium and mercury), Smith observed that, "Textbook writers have always found difficulty in dealing with these elements." There is an abrupt and significant reduction in physical metallic character from group 11 to group 12. Their chemistry is that of main group elements. A 2003 survey of chemistry books showed that they were treated as either transition metals or main group elements on about a 50/50 basis. The IUPAC Red Book notes that although the group 3−12 elements are commonly referred to as the transition elements, the group 12 elements are not always included. The group 12 elements do not satisfy the IUPAC Gold Book definition of a transition metal. Zinc is a soft metal (MH 2.5) with poor mechanical properties.
It has a crystalline structure (BCN 6+6) that is slightly distorted from the ideal. Many zinc compounds are markedly covalent in character. The oxide and hydroxide of zinc in its preferred oxidation state of +2, namely ZnO and Zn(OH)2, are amphoteric; it forms anionic zincates in strongly basic solutions. Zinc forms Zintl phases such as LiZn, NaZn13 and BaZn13. Highly purified zinc, at room temperature, is ductile. It reacts with moist air to form a thin layer of carbonate that prevents further corrosion. Cadmium is a soft, ductile metal (MH 2.0) that undergoes substantial deformation, under load, at room temperature. Like zinc, it has a crystalline structure (BCN 6+6) that is slightly distorted from the ideal. The halides of cadmium, with the exception of the fluoride, exhibit a substantially covalent nature. The oxide and hydroxide of cadmium in its preferred oxidation state of +2, namely CdO and Cd(OH)2, are weakly amphoteric; it forms cadmates in strongly basic solutions. Cadmium forms Zintl phases such as LiCd, RbCd13 and CsCd13. When heated in air to a few hundred degrees, cadmium represents a toxicity hazard due to the release of cadmium vapour; when heated to its boiling point in air (just above 1000 K; 725 °C; 1340 °F; cf. steel ~2700 K; 2425 °C; 4400 °F), the cadmium vapour oxidizes, 'with a reddish-yellow flame, dispersing as an aerosol of potentially lethal CdO particles.' Cadmium is otherwise stable in air and in water, at ambient conditions, protected by a layer of cadmium oxide. Mercury is a liquid at room temperature. It has the weakest metallic bonding of all, as indicated by its bonding energy (61 kJ/mol) and melting point (−39 °C) which, together, are the lowest of all the metallic elements. Solid mercury (MH 1.5) has a distorted crystalline structure, with mixed metallic-covalent bonding, and a BCN of 6. "All of the [Group 12] metals, but especially mercury, tend to form covalent rather than ionic compounds." The oxide of mercury in its preferred oxidation state (HgO; +2) is weakly amphoteric, as is the congener sulfide HgS. It forms anionic thiomercurates (such as Na2HgS2 and BaHgS3) in strongly basic solutions. It forms or is a part of Zintl phases such as NaHg and K8In10Hg. Mercury is a relatively inert metal, showing little oxide formation at room temperature. Copernicium is expected to be a liquid at room temperature, although experiments have so far not succeeded in determining its boiling point with sufficient precision to prove this. Like its lighter congener mercury, many of its singular properties stem from its closed-shell d10s2 electron configuration as well as strong relativistic effects. Its cohesive energy is even less than that of mercury and is likely only higher than that of flerovium. Solid copernicium is expected to crystallise in a body-centred cubic structure and have a density of about 14.7 g/cm3, decreasing to 14.0 g/cm3 on melting, which is similar to that of mercury (13.534 g/cm3). Copernicium chemistry is expected to be dominated by the +2 oxidation state, in which it would behave like a post-transition metal similar to mercury, although the relativistic stabilisation of the 7s orbitals means that this oxidation state involves giving up 6d rather than 7s electrons. A concurrent relativistic destabilisation of the 6d orbitals should allow higher oxidation states such as +3 and +4 with electronegative ligands, such as the halogens. A very high standard reduction potential of +2.1 V is expected for the Cn2+/Cn couple.
In fact, bulk copernicium may even be an insulator with a band gap of 6.4±0.2 eV, which would make it similar to the noble gases such as radon, though copernicium has previously been predicted to be a semiconductor or a noble metal instead. Copernicium oxide (CnO) is expected to be predominantly basic. Group 13 Aluminium is sometimes, but not always, counted as a post-transition metal. It has a well shielded [Ne] noble gas core rather than the less well shielded [Ar]3d10, [Kr]4d10 or [Xe]4f145d10 core of the post-transition metals. The small radius of the aluminium ion combined with its high charge make it a strongly polarizing species, prone to covalency. Aluminium in pure form is a soft metal (MH 3.0) with low mechanical strength. It has a close-packed structure (BCN 12) showing some evidence of partially directional bonding. It has a low melting point and a high thermal conductivity. Its strength is halved at 200 °C, and for many of its alloys is minimal at 300 °C. The latter three properties of aluminium limit its use to situations where fire protection is not required, or necessitate the provision of increased fire protection. It bonds covalently in most of its compounds; has an amphoteric oxide; and can form anionic aluminates. Aluminium forms Zintl phases such as LiAl, Ca3Al2Sb6, and SrAl2. A thin protective layer of oxide confers a reasonable degree of corrosion resistance. It is susceptible to attack in low (<4) and high (>8.5) pH conditions, a phenomenon that is generally more pronounced in the case of commercial purity aluminium and aluminium alloys. Given many of these properties and its proximity to the dividing line between metals and nonmetals, aluminium is occasionally classified as a metalloid. Despite its shortcomings, it has a good strength-to-weight ratio and excellent ductility; its mechanical strength can be improved considerably with the use of alloying additives; its very high thermal conductivity can be put to good use in heat sinks and heat exchangers; and it has a high electrical conductivity. At lower temperatures, aluminium increases its deformation strength (as do most materials) whilst maintaining ductility (as do face-centred cubic metals generally). Chemically, bulk aluminium is a strongly electropositive metal, with a high negative electrode potential. Gallium is a soft, brittle metal (MH 1.5) that melts at only a few degrees above room temperature. It has an unusual crystalline structure featuring mixed metallic-covalent bonding and low symmetry (BCN 7, i.e. 1+2+2+2). It bonds covalently in most of its compounds; has an amphoteric oxide; and can form anionic gallates. Gallium forms Zintl phases such as Li2Ga7, K3Ga13 and YbGa2. It is slowly oxidized in moist air at ambient conditions; a protective film of oxide prevents further corrosion. Indium is a soft, highly ductile metal (MH 1.0) with a low tensile strength. It has a partially distorted crystalline structure (BCN 4+8) associated with incompletely ionised atoms. The tendency of indium '...to form covalent compounds is one of the more important properties influencing its electrochemical behavior'. The oxide and hydroxide of indium in its preferred oxidation state of +3, namely In2O3 and In(OH)3, are weakly amphoteric; it forms anionic indates in strongly basic solutions. Indium forms Zintl phases such as LiIn, Na2In and Rb2In3. Indium does not oxidize in air at ambient conditions. Thallium is a soft, reactive metal (MH 1.0), so much so that it has no structural uses.
It has a close-packed crystalline structure (BCN 6+6) but an abnormally large interatomic distance that has been attributed to partial ionisation of the thallium atoms. Although compounds in the +1 (mostly ionic) oxidation state are the more numerous, thallium has an appreciable chemistry in the +3 (largely covalent) oxidation state, as seen in its chalcogenides and trihalides. It and aluminium are the only group 13 elements that react with air at room temperature, thallium slowly forming the amphoteric oxide Tl2O3. It forms anionic thallates such as Tl3TlO3, Na3Tl(OH)6, NaTlO2, and KTlO2, and is present as the Tl− thallide anion in the compound CsTl. Thallium forms Zintl phases, such as Na2Tl, Na2K21Tl19, CsTl and Sr5Tl3H. Nihonium is expected to have a hexagonal close-packed crystalline structure, albeit this is extrapolated from the structures of the lighter group 13 elements; its density is expected to be around 16 g/cm3. A standard electrode potential of +0.6 V is predicted for the Nh+/Nh couple. The relativistic stabilisation of the 7s electrons is very high and hence nihonium should predominantly form the +1 oxidation state; nevertheless, as for copernicium, the +3 oxidation state should be reachable. Because of the shell closure at flerovium caused by spin-orbit coupling, nihonium is also one 7p electron short of a closed shell and would hence form a −1 oxidation state; in both the +1 and −1 oxidation states, nihonium should show more similarities to astatine than to thallium. The Nh+ ion is expected to also have some similarities to the Ag+ ion, particularly in its propensity for complexation. Nihonium oxide (Nh2O) is expected to be amphoteric. Group 14 Germanium is a hard (MH 6), very brittle semi-metallic element. It was originally thought to be a poorly conducting metal but has the electronic band structure of a semiconductor. Germanium is usually considered to be a metalloid rather than a metal. Like carbon (as diamond) and silicon, it has a covalent tetrahedral crystalline structure (BCN 4). Compounds in its preferred oxidation state of +4 are covalent. Germanium forms an amphoteric oxide, GeO2, and anionic germanates, such as Mg2GeO4. It forms Zintl phases such as LiGe, K8Ge44 and La4Ge3. Tin is a soft, exceptionally weak metal (MH 1.5); a 1-cm thick rod will bend easily under mild finger pressure. It has an irregularly coordinated crystalline structure (BCN 4+2) associated with incompletely ionised atoms. All of the group 14 elements form compounds in which they are in the +4, predominantly covalent, oxidation state; even in the +2 oxidation state tin generally forms covalent bonds. The oxide and hydroxide of tin in its preferred oxidation state of +2, namely SnO and Sn(OH)2, are amphoteric; it forms stannites in strongly basic solutions. Below 13 °C (55.4 °F) tin changes its structure and becomes 'grey tin', which has the same structure as diamond, silicon and germanium (BCN 4). This transformation causes ordinary tin to crumble and disintegrate since, as well as being brittle, grey tin occupies more volume due to having a less efficient crystalline packing structure. Tin forms Zintl phases such as Na4Sn, BaSn, K8Sn25 and Ca31Sn20. It has good corrosion resistance in air on account of forming a thin protective oxide layer. Pure tin has no structural uses. It is used in lead-free solders, and as a hardening agent in alloys of other metals, such as copper, lead, titanium and zinc. Lead is a soft metal (MH 1.5, but hardens close to melting) which, in many cases, is unable to support its own weight.
It has a close-packed structure (BCN 12) but an abnormally large inter-atomic distance that has been attributed to partial ionisation of the lead atoms. It forms a semi-covalent dioxide PbO2; a covalently bonded sulfide PbS; covalently bonded halides; and a range of covalently bonded organolead compounds such as the lead(II) mercaptan, lead tetra-acetate, and the once-common anti-knock additive tetra-ethyl lead. The oxide of lead in its preferred oxidation state (PbO; +2) is amphoteric; it forms anionic plumbates in strongly basic solutions. Lead forms a number of Zintl phases. It has reasonable to good corrosion resistance; in moist air it forms a mixed gray coating of oxide, carbonate and sulfate that hinders further oxidation. Flerovium is expected to be a liquid metal due to spin-orbit coupling "tearing" apart the 7p subshell, so that its 7s2(7p1/2)2 valence configuration forms a quasi-closed shell similar to those of mercury and copernicium. Solid flerovium should have a face-centred cubic structure and be a rather dense metal, with a density of around 14 g/cm3. Flerovium is expected to have a standard electrode potential of +0.9 V for the Fl2+/Fl couple. Flerovium oxide (FlO) is expected to be amphoteric, forming anionic flerovates in basic solutions. Group 15 Arsenic is a moderately hard (MH 3.5) and brittle semi-metallic element. It is commonly regarded as a metalloid, or by some other authors as either a metal or a non-metal. It exhibits poor electrical conductivity which, like a metal, decreases with temperature. It has a relatively open and partially covalent crystalline structure (BCN 3+3). Arsenic forms covalent bonds with most other elements. The oxide in its preferred oxidation state (As2O3, +3) is amphoteric, as is the corresponding oxoacid in aqueous solution (H3AsO3) and congener sulfide (As2S3). Arsenic forms a series of anionic arsenites and arsenates such as Na3AsO3 and PbHAsO4, and Zintl phases such as Na3As, Ca2As and SrAs3. Antimony is a soft (MH 3.0) and brittle semi-metallic element. It is commonly regarded as a metalloid, or by some other authors as either a metal or a non-metal. It exhibits poor electrical conductivity which, like a metal, decreases with temperature. It has a relatively open and partially covalent crystalline structure (BCN 3+3). Antimony forms covalent bonds with most other elements. The oxide in its preferred oxidation state (Sb2O3, +3) is amphoteric. Antimony forms a series of anionic antimonites and antimonates such as NaSbO2 and AlSbO4, and Zintl phases such as K5Sb4, Sr2Sb3 and BaSb3. Bismuth is a soft metal (MH 2.5) that is too brittle for any structural use. It has an open-packed crystalline structure (BCN 3+3) with bonding that is intermediate between metallic and covalent. For a metal, it has exceptionally low electrical and thermal conductivity. Most of the ordinary compounds of bismuth are covalent in nature. The oxide, Bi2O3, is predominantly basic but will act as a weak acid in warm, very concentrated KOH. It can also be fused with potassium hydroxide in air, resulting in a brown mass of potassium bismuthate. The solution chemistry of bismuth is characterised by the formation of oxyanions; it forms anionic bismuthates in strongly basic solutions. Bismuth forms Zintl phases such as NaBi, Rb7In4Bi6 and Ba11Cd8Bi14. Bailar et al. refer to bismuth as being 'the least "metallic" metal in its physical properties', given its brittle nature and, possibly, 'the lowest electrical conductivity of all metals.'
Moscovium is expected to be a quite reactive metal. A standard reduction potential of −1.5 V for the Mc+/Mc couple is expected. This increased reactivity is consistent with the quasi-closed shell of flerovium and the beginning of a new series of elements with the filling of the loosely bound 7p3/2 subshell, and is rather different from the relative nobility of bismuth. Like thallium, moscovium should have a common +1 oxidation state and a less common +3 oxidation state, although their relative stabilities may change depending on the complexing ligands or the degree of hydrolysis. Moscovium(I) oxide (Mc2O) should be quite basic, like that of thallium, while moscovium(III) oxide (Mc2O3) should be amphoteric, like that of bismuth. Group 16 Selenium is a soft (MH 2.0) and brittle semi-metallic element. It is commonly regarded as a nonmetal, but is sometimes considered a metalloid or even a heavy metal. Selenium has a hexagonal polyatomic (CN 2) crystalline structure. It is a semiconductor with a band gap of 1.7 eV, and a photoconductor, meaning its electrical conductivity increases a million-fold when illuminated. Selenium forms covalent bonds with most other elements, although it can form ionic selenides with highly electropositive metals. The common oxide of selenium (SeO2) is strongly acidic. Selenium forms a series of anionic selenites and selenates such as Na2SeO3, Na2Se2O5, and Na2SeO4, as well as Zintl phases such as Cs4Se16. Tellurium is a soft (MH 2.25) and brittle semi-metallic element. It is commonly regarded as a metalloid, or by some authors either as a metal or a non-metal. Tellurium has a polyatomic (CN 2) hexagonal crystalline structure. It is a semiconductor with a band gap of 0.32 to 0.38 eV. Tellurium forms covalent bonds with most other elements, although it has an extensive organometallic chemistry and many tellurides can be regarded as metallic alloys. The common oxide of tellurium (TeO2) is amphoteric. Tellurium forms a series of anionic tellurites and tellurates such as Na2TeO3, Na6TeO6, and Rb6Te2O9 (the last containing tetrahedral and trigonal bipyramidal anions), as well as Zintl phases such as NaTe3. Polonium is a radioactive, soft metal with a hardness similar to lead. It has a simple cubic crystalline structure characterised (as determined by electron density calculations) by partially directional bonding, and a BCN of 6. Such a structure ordinarily results in very low ductility and fracture resistance; however, polonium has been predicted to be a ductile metal. It forms a covalent hydride; its halides are covalent, volatile compounds, resembling those of tellurium. The oxide of polonium in its preferred oxidation state (PoO2; +4) is predominantly basic, but amphoteric if dissolved in concentrated aqueous alkali, or fused with potassium hydroxide in air. The yellow polonate(IV) ion is known in aqueous solutions of low Cl‒ concentration and high pH. Polonides such as Na2Po, BePo, ZnPo, CdPo and HgPo feature Po2− anions; except for HgPo these are some of the more stable of the polonium compounds. Livermorium is expected to be less reactive than moscovium. The standard reduction potential of the Lv2+/Lv couple is expected to be around +0.1 V. It should be most stable in the +2 oxidation state; the 7p3/2 electrons are expected to be so weakly bound that the first two ionisation potentials of livermorium should lie between those of the reactive alkaline earth metals magnesium and calcium.
The +4 oxidation state should only be reachable with the most electronegative ligands. Livermorium(II) oxide (LvO) should be basic and livermorium(IV) oxide (LvO2) should be amphoteric, analogous to polonium. Group 17 Astatine is a radioactive element that has never been seen; a visible quantity would immediately be vaporised due to its intense radioactivity. It may be possible to prevent this with sufficient cooling. Astatine is commonly regarded as a nonmetal, less commonly as a metalloid, and occasionally as a metal. Unlike its lighter congener iodine, evidence for diatomic astatine is sparse and inconclusive. In 2013, on the basis of relativistic modelling, astatine was predicted to be a monatomic metal with a face-centred cubic crystalline structure. As such, astatine could be expected to have a metallic appearance; show metallic conductivity; and have excellent ductility, even at cryogenic temperatures. It could also be expected to show significant nonmetallic character, as is normally the case for metals in, or in the vicinity of, the p-block. Astatine oxyanions, such as AtO−, are known, oxyanion formation being a tendency of nonmetals. The hydroxide of astatine At(OH) is presumed to be amphoteric. Astatine forms covalent compounds with nonmetals, including hydrogen astatide HAt and carbon tetraastatide CAt4. At− anions have been reported to form astatides with silver, thallium, palladium and lead. Pruszyński et al. note that astatide ions should form strong complexes with soft metal cations such as Hg2+, Pd2+, Ag+ and Tl3+; they list the astatide formed with mercury as Hg(OH)At. Tennessine, despite being in the halogen column of the periodic table, is expected to go even further towards metallicity than astatine due to its smaller electron affinity. The −1 state should not be important for tennessine and its major oxidation states should be +1 and +3, with +3 more stable: Ts3+ is expected to behave similarly to Au3+ in halide media. As such, tennessine oxide (Ts2O3) is expected to be amphoteric, similar to gold oxide and astatine(III) oxide. Group 18 Oganesson is expected to be a very poor "noble gas" and may even be metallised owing to its large atomic radius and the weak binding of the easily removed 7p3/2 electrons: certainly it is expected to be a quite reactive element that is solid at room temperature and has some similarities to tin, as one effect of the spin–orbit splitting of the 7p subshell is a "partial role reversal" of groups 14 and 18. Due to the immense polarisability of oganesson, it is expected that not only oganesson(II) fluoride but also oganesson(IV) fluoride should be predominantly ionic, involving the formation of Og2+ and Og4+ cations. Oganesson(II) oxide (OgO) and oganesson(IV) oxide (OgO2) are both expected to be amphoteric, similar to the oxides of tin. Aliases and related groupings B-subgroup metals Superficially, the B-subgroup metals are the metals in Groups IB to VIIB of the periodic table, corresponding to groups 11 to 17 using current IUPAC nomenclature. Practically, the group 11 metals (copper, silver and gold) are ordinarily regarded as transition metals (or sometimes as coinage metals, or noble metals) whereas the group 12 metals (zinc, cadmium, and mercury) may or may not be treated as B-subgroup metals, depending on whether the transition metals are taken to end at group 11 or group 12. The 'B' nomenclature (as in Groups IB, IIB, and so on) was superseded in 1988 but is still occasionally encountered in more recent literature.
The B-subgroup metals show nonmetallic properties; this is particularly apparent in moving from group 12 to group 16. Although the group 11 metals have normal close-packed metallic structures, they show an overlap in chemical properties. In their +1 compounds (the stable state for silver; less so for copper) they are typical B-subgroup metals. In their +2 and +3 states their chemistry is typical of transition metal compounds. Pseudo metals and hybrid metals The B-subgroup metals can be subdivided into pseudo metals and hybrid metals. The pseudo metals (groups 12 and 13, including boron) are said to behave more like true metals (groups 1 to 11) than non-metals. The hybrid metals As, Sb, Bi, Te, Po, At — which other authors would call metalloids — partake about equally of the properties of both. The pseudo metals can be considered related to the hybrid metals through the group 14 carbon column. Base metals Mingos writes that while the p-block metals are typical, they are not strongly reducing and, as such, are base metals requiring oxidizing acids to dissolve them. Borderline metals Parish writes that, 'as anticipated', the borderline metals of groups 13 and 14 have non-standard structures. Gallium, indium, thallium, germanium, and tin are specifically mentioned in this context. The group 12 metals are also noted as having slightly distorted structures; this has been interpreted as evidence of weak directional (i.e. covalent) bonding. Chemically weak metals Rayner-Canham and Overton use the term chemically weak metals to refer to the metals close to the metal-nonmetal borderline. These metals behave chemically more like the metalloids, particularly with respect to anionic species formation. The nine chemically weak metals identified by them are beryllium, magnesium, aluminium, gallium, tin, lead, antimony, bismuth, and polonium. Frontier metals Vernon uses the term "frontier metal" to refer to the class of chemically weak metals adjacent to the dividing line between metals and nonmetals. He notes that several of them "are further distinguished by a series of…knight's move relationships, formed between one element and the element one period down and two groups to its right." For example, copper(I) chemistry resembles indium(I) chemistry: "both ions are found mostly in solid-state compounds such as CuCl and InCl; the fluorides are unknown for both ions while the iodides are the most stable." The name frontier metal is adapted from Russell and Lee, who wrote that, "…bismuth and group 16 element polonium are generally considered to be metals, although they occupy 'frontier territory' on the periodic table, adjacent to the nonmetals." Fusible metals Cardarelli, writing in 2008, categorizes zinc, cadmium, mercury, gallium, indium, thallium, tin, lead, antimony and bismuth as fusible metals. Nearly 100 years earlier, Louis (1911) noted that fusible metals were alloys containing tin, cadmium, lead, and bismuth in various proportions, "the tin ranging from 10 to 20%." Heavy metals (of low melting point) Van Wert grouped the periodic table metals into: a. the light metals; b. the heavy brittle metals of high melting point; c. the heavy ductile metals of high melting point; d. the heavy metals of low melting point (Zn, Cd, Hg; Ga, In, Tl; Ge, Sn; As, Sb, Bi; and Po); and e. the strong, electropositive metals. Britton, Abbatiello and Robins speak of 'the soft, low melting point, heavy metals in columns IIB, IIIA, IVA, and VA of the periodic table, namely Zn, Cd, Hg; Al, Ga, In, Tl; [Si], Ge, Sn, Pb; and Bi'.
The Sargent-Welch Chart of the Elements groups the metals into: light metals; the lanthanide series; the actinide series; heavy metals (brittle); heavy metals (ductile); and heavy metals (low melting point): Zn, Cd, Hg, [Cn]; Al, Ga, In, Tl; Ge, Sn, Pb, [Fl]; Sb, Bi; and Po. Less typical metals Habashi groups the elements into eight major categories: [1] typical metals (alkali metals, alkaline earth metals, and aluminium); [2] lanthanides (Ce–Lu); [3] actinides (Th–Lr); [4] transition metals (Sc, Y, La, Ac, groups 4–10); [5] less typical metals (groups 11–12, Ga, In, Tl, Sn and Pb); [6] metalloids (B, Si, Ge, As, Se, Sb, Te, Bi and Po); [7] covalent nonmetals (H, C, N, O, P, S and the halogens); and [8] monatomic nonmetals (that is, the noble gases). Metametals The metametals are zinc, cadmium, mercury, indium, thallium, tin and lead. They are ductile elements but, compared to their metallic periodic table neighbours to the left, have lower melting points, relatively low electrical and thermal conductivities, and show distortions from close-packed forms. Sometimes beryllium and gallium are included as metametals despite having low ductility. Ordinary metals Abrikosov distinguishes between ordinary metals and transition metals, in which the inner shells are not filled. The ordinary metals have lower melting points and cohesive energies than those of the transition metals. Gray identifies as ordinary metals: aluminium, gallium, indium, thallium, nihonium, tin, lead, flerovium, bismuth, moscovium, and livermorium. He adds that, 'in reality most of the metals that people think of as ordinary are in fact transition metals...'. Other metals As noted, the metals falling between the transition metals and the metalloids on the periodic table are sometimes called other metals (see also, for example, Taylor et al.). 'Other' in this sense has the related meanings of 'existing besides, or distinct from, that already mentioned' (that is, the alkali and alkaline earth metals, the lanthanides and actinides, and the transition metals); 'auxiliary'; 'ancillary, secondary'. According to Gray there should be a better name for these elements than 'other metals'. p-block metals The p-block metals are the metals in groups 13‒16 of the periodic table. Usually, this includes aluminium, gallium, indium and thallium; tin and lead; and bismuth. Germanium, antimony and polonium are sometimes also included, although the first two are commonly recognised as metalloids. The p-block metals tend to have structures that display low coordination numbers and directional bonding. Pronounced covalency is found in their compounds; the majority of their oxides are amphoteric. Aluminium is an undisputed p-block element by group membership and its [Ne] 3s2 3p1 electron configuration but, unlike the p-block metals of periods 4 onwards, it does not literally come after any transition metals. The epithet "post-transition" in reference to aluminium is thus a misnomer; moreover, unlike all other p-block metals, aluminium normally has no d electrons. Peculiar metals Slater divides the metals 'fairly definitely, though not perfectly sharply' into the ordinary metals and the peculiar metals, the latter of which verge on the nonmetals.
The peculiar metals occur towards the ends of the rows of the periodic table and include 'approximately:' gallium, indium, and thallium; carbon, silicon '(both of which have some metallic properties, though we have previously treated them as nonmetals),' germanium and tin; arsenic, antimony, and bismuth; and selenium '(which is partly metallic)' and tellurium. The ordinary metals have centro-symmetrical crystalline structures whereas the peculiar metals have structures involving directional bonding. More recently, Joshua observed that the peculiar metals have mixed metallic-covalent bonding. Poor metals Farrell and Van Sicien use the term poor metal, for simplicity, 'to denote one with a significant covalent, or directional character.' Hill and Holman observe that, 'The term poor metals is not widely used, but it is a useful description for several metals including tin, lead and bismuth. These metals fall in a triangular block of the periodic table to the right of the transition metals. They are usually low in the activity (electrochemical) series and they have some resemblances to non-metals.' Reid et al. write that 'poor metals' is, '[A]n older term for metallic elements in Groups 13‒15 of the periodic table that are softer and have lower melting points than the metals traditionally used for tools.' Post-transition metals An early usage of this name is recorded by Deming, in 1940, in his well-known book Fundamental Chemistry. He treated the transition metals as finishing at group 10 (nickel, palladium and platinum). He referred to the ensuing elements in periods 4 to 6 of the periodic table (copper to germanium; silver to antimony; gold to polonium)—in view of their underlying d10 electronic configurations—as post-transition metals. Semimetals In modern use, the term 'semimetal' sometimes refers, loosely or explicitly, to metals with incomplete metallic character in crystalline structure, electrical conductivity or electronic structure. Examples include gallium, ytterbium, bismuth, mercury and neptunium. Metalloids, which are in-between elements that are neither metals nor nonmetals, are also sometimes instead called semimetals. The elements commonly recognised as metalloids are boron, silicon, germanium, arsenic, antimony and tellurium. In old chemistry, before the publication in 1789 of Lavoisier's 'revolutionary' Elementary Treatise on Chemistry, a semimetal was a metallic element with 'very imperfect ductility and malleability' such as zinc, mercury or bismuth. Soft metals Scott and Kanda refer to the metals in groups 11 to 15, plus platinum in group 10, as soft metals, excluding the very active metals of groups 1−3. They note that many important non-ferrous alloys are made from metals in this class, including sterling silver, brass (copper and zinc), and bronzes (copper with tin, manganese and nickel). Transition metals Historically, the transition metal series "includes those elements of the Periodic Table which 'bridge the gap' between the very electropositive alkali and alkaline earth metals and the electronegative non-metals of the groups: nitrogen-phosphorus, oxygen-sulfur, and the halogens." Cheronis, Parsons and Ronneberg wrote that, "The transition metals of low melting point form a block in the Periodic Table: those of Groups II 'b' [zinc, cadmium, mercury], III 'b' [aluminium, gallium, indium, thallium], and germanium, tin and lead in Group IV. These metals all have melting points below 425 °C."
Physical sciences
Periodic table
Chemistry
59065686
https://en.wikipedia.org/wiki/Electrical%20busbar%20system
Electrical busbar system
Electrical busbar systems (sometimes simply referred to as busbar systems) are a modular approach to electrical wiring: instead of running standard cable wiring to every single electrical device, the devices are mounted onto adapters which are fitted directly to a current-carrying busbar. This modular approach is used in distribution boards, automation panels and other kinds of installation in an electrical enclosure. Busbar systems, together with the electrical enclosures that house them, are subject to safety standards for design and installation such as IEC 61439-1; requirements vary between countries and regions. Content & types of busbar systems A busbar system usually contains a number of busbar holders, busbars, adapters for mounting devices, clamps (with or without protective covering) to feed or distribute current from the busbar system, and busbar-mountable electrical devices. Electrical busbar systems are differentiated by the centre-to-centre distance between busbars, and vary according to the maximum current-carrying capacity of the system as defined by the relevant IEC standards. Commonly known busbar system types are: 40 mm busbar systems (current-carrying capacity up to 300–400 A); 60 mm busbar systems (up to 800–2500 A); 100 mm busbar systems (up to 1250 A); and 185 mm busbar systems (up to 2500 A). Advantages and disadvantages over traditional electrical wiring Advantages: electrically safe installation up to IP 60 inside the cabinet; drastically reduced space requirements inside the cabinet; easy troubleshooting in case of switchgear failure; pre-tested short-circuit rating; mounting of 2-, 3-, 4- or 5-pole switchgear in a single construction; and time savings during construction of the cabinet. Disadvantages: commercially not viable if the number of switchgear units is low; specialists are needed to construct the busbar system from a wiring diagram; there is a lack of adapters for mounting some electrical devices on the busbar; and special types of busbar are needed to construct a system that can carry more than 800 amperes.
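The spacing-to-capacity pairings above lend themselves to a simple selection lookup. The sketch below is illustrative only: the capacities are the indicative upper figures quoted above, and any real selection must follow the manufacturer's data and verification to IEC 61439-1.

```python
# Illustrative only: pick the smallest busbar system whose indicative
# capacity covers a required load current (capacities from the list above).
BUSBAR_CAPACITY_A = {
    "40 mm": 400,
    "100 mm": 1250,
    "60 mm": 2500,
    "185 mm": 2500,
}

def pick_busbar(load_amps: float) -> str:
    candidates = [(cap, name) for name, cap in BUSBAR_CAPACITY_A.items()
                  if cap >= load_amps]
    if not candidates:
        raise ValueError("load exceeds all listed busbar systems")
    return min(candidates)[1]  # smallest adequate capacity

print(pick_busbar(320))   # 40 mm
print(pick_busbar(1100))  # 100 mm
```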
Technology
Electricity transmission and distribution
null
52835581
https://en.wikipedia.org/wiki/Alcohols%20%28medicine%29
Alcohols (medicine)
Alcohols, in various forms, are used medically as an antiseptic, disinfectant, and antidote. Alcohols applied to the skin are used to disinfect skin before a needle stick and before surgery. They may also be used as a hand sanitizer; to clean other areas; and in mouthwashes. Taken by mouth or injected into a vein, ethanol is used to treat methanol or ethylene glycol toxicity when fomepizole is not available. Side effects of alcohols applied to the skin include skin irritation. Care should be taken with electrocautery, as ethanol is flammable. Types of alcohol used include ethanol, denatured ethanol, 1-propanol, and isopropyl alcohol. Alcohols are effective against a range of microorganisms, though they do not inactivate spores. Concentrations of 60% to 90% work best. Uses 95% ABV ethanol is known as spiritus fortis in a medical context. Alcohol septal ablation Ablysinol (a brand of 99% ethanol medical alcohol) sold for $1,300 to $10,000 per 10-pack in 2020, after FDA action granted it marketing exclusivity in the US through 2025 for the treatment of hypertrophic obstructive cardiomyopathy by alcohol septal ablation, in what has been described as a "misuse" of the Orphan Drug Act. Antiseptics Alcohols Ethanol is listed under Antiseptics, and alcohol-based hand rub under Disinfectants, on the World Health Organization's List of Essential Medicines. Applied to the skin, alcohols are used to disinfect skin before a needle stick and before surgery. They may be used both to disinfect the skin of the person and the hands of the healthcare providers. They can also be used to clean other areas, and in mouthwashes. Both ethanol and isopropyl alcohol are common ingredients in topical antiseptics, including hand sanitizer. Risks Research indicates that alcohols used in pre-surgical antiseptic preparations for ear procedures may have some ototoxic properties; in this regard, these alcohol preparations may be considered potentially ototoxic medications. Antidote When taken by mouth or injected into a vein, ethanol is used to treat methanol or ethylene glycol toxicity when fomepizole is not available. Mechanism Ethanol, when used as an antidote, competes with other alcohols for the alcohol dehydrogenase enzyme, lessening their metabolism into toxic aldehyde and carboxylic acid derivatives and reducing the more serious toxic effect of the glycols, which is to crystallize in the kidneys. Euthanasia Alcohol can intensify the sedation caused by hypnotics/sedatives such as barbiturates, benzodiazepines, sedative antihistamines, opioids, and nonbenzodiazepines/Z-drugs (such as zolpidem and zopiclone). In the Netherlands, pentobarbital is part of the standard protocol for physician-assisted suicide for self-administration by the patient. It is given in liquid form, in a solution of sugar syrup and alcohol, containing 9 grams of pentobarbital. This is preceded by an antiemetic to prevent vomiting. Medicinal solvent Ethanol, often in high concentrations, is used to dissolve many water-insoluble medications and related compounds. Liquid preparations of pain medications, cough and cold medicines, and mouth washes, for example, may contain up to 25% ethanol and may need to be avoided in individuals with adverse reactions to ethanol such as alcohol-induced respiratory reactions. Ethanol is present mainly as an antimicrobial preservative in over 700 liquid preparations of medicine including acetaminophen, iron supplements, ranitidine, furosemide, mannitol, phenobarbital, trimethoprim/sulfamethoxazole and over-the-counter cough medicine.
Some medicinal solutions of ethanol are also known as tinctures. Mouthwash Usually mouthwashes are antiseptic solutions intended to reduce the microbial load in the mouth, although other mouthwashes might be given for other reasons, such as for their analgesic, anti-inflammatory or anti-fungal action. Also, alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol, which help to penetrate plaque. A 2020 systematic review and meta-analysis concluded that there is no definitive link between alcohol-based mouthwash use and the risk of oral cancer. This should not be confused with the fact that alcohol consumption at any quantity is a risk factor for cancers of the mouth, esophagus, pharynx and larynx. Sclerosant Absolute ethanol is used as a sclerosant in sclerotherapy. Sclerotherapy has been used "in the treatment of simple pleural effusions, vascular malformations, lymphoceles and seromas." Sedative Ethchlorvynol, developed in the 1950s, was used to treat insomnia, but prescriptions for the drug had fallen significantly by 1990, as other hypnotics that were considered safer (i.e., less dangerous in overdose) became much more common. Also, unlike ethanol, ethchlorvynol is not compatible with intravenous injection—serious injury (including the loss of limbs due to vascular injury) or death can occur when it is used in this manner. It is no longer prescribed in the United States due to unavailability, but it is still available in some countries and would still be considered legal to possess and use with a valid prescription. Society and culture Unproven methods against COVID-19 Unproven COVID-19 prevention from alcohol consumption Drinking alcohol will not prevent or cure COVID-19, contrary to some claims. Unproven COVID-19 hand sanitizer Vodka was alleged to be an effective homemade hand sanitizer, or an ingredient in one. The company whose brand was alleged to be protective responded to the rumours by citing the US Centers for Disease Control and Prevention statement that hand sanitizers needed to be at least 60% alcohol to be effective, and stating that their product was only 40% alcohol. History Ancient world Since antiquity, prior to the development of modern agents, alcohol was used as a general anesthetic. In the history of wound care, beer and wine are recognized as substances used for healing wounds. Late Middle Ages Alcohol has been used as an antiseptic as early as 1363, with evidence to support its use becoming available in the late 1800s. At times and places of poor public sanitation (such as medieval Europe), the consumption of alcoholic drinks was a way of avoiding water-borne diseases such as cholera. Modern period Early doctors recognized that ethanol was a risky anesthetic because it could lead to death from alcohol poisoning or vomit inhalation (pulmonary aspiration). This led to the use of alternatives in antiquity, such as opium and cannabis, and later diethyl ether, starting in the 1840s. As safer options became available, ethanol was eventually phased out as a general anesthetic. Methylpentynol, discovered in 1913, was prescribed for the treatment of insomnia, but its use was quickly phased out in response to newer drugs with far more favorable safety profiles. The drug has been replaced by benzodiazepines and is no longer sold anywhere.
Biology and health sciences
Anti-infectives
Health
51313762
https://en.wikipedia.org/wiki/Proxima%20Centauri%20b
Proxima Centauri b
Proxima Centauri b is an exoplanet orbiting within the habitable zone of the red dwarf star Proxima Centauri in the constellation Centaurus. It can also be referred to as Proxima b, or Alpha Centauri Cb. The host star is the closest star to the Sun, at a distance of about 4.2 light-years from Earth, and is part of the larger triple star system Alpha Centauri. Proxima b and Proxima d, along with the currently disputed Proxima c, are the closest known exoplanets to the Solar System. Proxima Centauri b orbits its parent star at a distance of about 0.05 astronomical units (AU), with an orbital period of approximately 11.2 Earth days. Its other properties are only poorly understood, but it is a potentially Earth-like planet, with a minimum mass close to that of Earth and a possibly slightly larger radius. The planet orbits within the habitable zone of its parent star, but it is not known whether it has an atmosphere, which would impact the habitability probabilities. Proxima Centauri is a flare star with intense emission of electromagnetic radiation that could strip an atmosphere off the planet. Announced on 24 August 2016 by the European Southern Observatory (ESO), Proxima Centauri b was confirmed via several years of Doppler spectroscopy measurements of its parent star. The detection of Proxima Centauri b was a major discovery in planetology, and has drawn interest to the Alpha Centauri star system as a whole. As of 2023, Proxima Centauri b is believed to be the exoplanet best known to the general public. The exoplanet's proximity to Earth offers an opportunity for robotic space exploration. Discovery Proxima Centauri had already become a target for exoplanet searches before the discovery of Proxima Centauri b, but initial studies in 2008 and 2009 ruled out the existence of larger-than-Earth exoplanets in the habitable zone. Planets are very common around red dwarf stars, with on average 1–2 planets per star, and about 20–40% of all red dwarfs have one in the habitable zone. Additionally, red dwarfs are by far the most common type of star. Before 2016, observations with instruments at the European Southern Observatory in Chile had identified anomalies in Proxima Centauri which could not be satisfactorily explained by flares or chromospheric activity of the star. This suggested that Proxima Centauri may be orbited by a planet. In January 2016, a team of astronomers launched the Pale Red Dot project to confirm this hypothetical planet's existence. On 24 August 2016, the team, led by Anglada-Escudé, proposed that a terrestrial exoplanet in the habitable zone of Proxima Centauri could explain these anomalies, and announced Proxima Centauri b's discovery. In 2022, another planet named Proxima Centauri d, which orbits even closer to the star, was confirmed. Another planet candidate named Proxima Centauri c was reported in 2020, but its existence has since been disputed due to potential artifacts in the data, while the claimed existence of a dust belt around Proxima Centauri remains unconfirmed. Physical properties Distance, orbital parameters and age Proxima Centauri b is the closest exoplanet to Earth, at a distance of about 4.2 light-years (1.3 parsecs). It orbits Proxima Centauri every 11.2 Earth days at a distance of about 0.05 AU, over 20 times closer to Proxima Centauri than Earth is to the Sun. It is unclear whether its orbit is eccentric, but Proxima Centauri b is unlikely to have any obliquity.
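The quoted orbital distance follows from the 11.2-day period and the host star's mass via Kepler's third law. A minimal sketch (the stellar mass of 0.122 solar masses is taken from the host-star description further below):

```python
# Kepler's third law in solar units: a^3 = M * P^2,
# with a in AU, M in solar masses and P in years.
M_star = 0.122            # Proxima Centauri, solar masses (from the text below)
P_years = 11.2 / 365.25   # orbital period of Proxima Centauri b

a_au = (M_star * P_years**2) ** (1 / 3)
print(f"{a_au:.3f} AU")   # ~0.049 AU, about 1/20 of the Earth-Sun distance
```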
The age of the planet is unknown; Proxima Centauri itself may have been captured by Alpha Centauri, and is thus not necessarily of the same age as the latter pair of stars, which are about 5 billion years old. Proxima Centauri b is unlikely to have stable orbits for moons. Mass, radius and composition The estimated minimum mass of Proxima Centauri b is close to one Earth mass; other estimates, including the most recent as of 2022, are similar, but all are minimum values because the inclination of the planet's orbit is not yet known. This makes it similar to Earth, but the radius of the planet is poorly known and hard to determine—estimates based on possible compositions give a range of 0.94 to 1.4 Earth radii, and its mass may border on the cutoff between Earth-type and Neptune-type planets, if that value is lower than previously estimated. Depending on the composition, Proxima Centauri b could range from being a Mercury-like planet with a large core—which would require particular conditions early in the planet's history—to a very water-rich planet. Observations of the Fe–Si–Mg ratios of Proxima Centauri may allow a determination of the composition of the planet, since they are expected to roughly match the ratios of any planetary bodies in the Proxima Centauri system; various observations have found Solar System-like ratios of these elements. Little is known about Proxima Centauri b—mainly its distance from the star and its orbital period—but a number of simulations of its physical properties have been done. A number of simulations and models have been created that assume Earth-like compositions and include predictions of the galactic environment, internal heat generation from radioactive decay and magnetic induction heating, planetary rotation, the effects of stellar radiation, the amount of volatile species the planet consists of, and the changes of these parameters over time. Proxima Centauri b likely developed under different conditions from Earth, with less water, stronger impacts and an overall faster development, assuming that it formed at its current distance from the star. Proxima Centauri b probably did not form at its current distance to Proxima Centauri, as the amount of material in the protoplanetary disk would be insufficient. Instead, the planet, or protoplanetary fragments, likely formed at larger distances and then migrated to the current orbit of Proxima Centauri b. Depending on the nature of the precursor material, it may be rich in volatiles. A number of different formation scenarios are possible, many of which depend on the existence of other planets around Proxima Centauri and which would result in different compositions. Tidal locking Proxima Centauri b is likely to be tidally locked to its host star, which for a 1:1 spin–orbit resonance would mean that the same side of the planet always faces Proxima Centauri. It is unclear whether habitable conditions can arise under such circumstances, as a 1:1 tidal lock would lead to an extreme climate with only part of the planet habitable. However, the planet may not be tidally locked. If the eccentricity of Proxima Centauri b were higher than about 0.06–0.1, it would tend to enter a Mercury-like 3:2 resonance or higher-order resonances such as 2:1. Additional planets around Proxima Centauri and interactions with Alpha Centauri could excite higher eccentricities. If the planet is not symmetrical (triaxial), capture into a non-tidally locked orbit would be possible even with low eccentricity.
A non-locked orbit, however, would result in tidal heating of the planet's mantle, increasing volcanic activity and potentially shutting down a magnetic field-generating dynamo. The exact dynamics are strongly dependent on the internal structure of the planet and its evolution in response to tidal heating. Host star Proxima b's parent star Proxima Centauri is a red dwarf, radiating only 0.005% of the amount of visible light that the Sun does and an average of about 0.17% of the Sun's energy. Despite this low radiation, due to its close orbit Proxima Centauri b still receives about 70% of the amount of infrared energy that the Earth receives from the Sun. That said, Proxima Centauri is also a flare star, with its luminosity at times varying by a factor of 100 over a timespan of hours. Proxima Centauri has 12.2% of the Sun's mass and 15.4% of the radius of the Sun. It has a spectral type of M5.5V, making it an M-type main-sequence star that is fusing hydrogen at its core to generate energy. The magnetic field of Proxima Centauri is considerably stronger than that of the Sun, and varies in a seven-year-long cycle. It is the closest star to the Sun (hence the name "Proxima"), at a distance of about 4.2 light-years. Proxima Centauri is part of a multiple star system, whose other members are Alpha Centauri A and Alpha Centauri B, which form a binary star subsystem. The dynamics of the multiple star system could have caused Proxima Centauri b to move closer to its host star over its history. The detection of a planet around Alpha Centauri in 2012 was considered questionable. Despite its proximity to Earth, Proxima Centauri is too faint to be visible to the naked eye, except during superflares. Surface conditions Climate Proxima Centauri b is located within the classical habitable zone of its star and receives about 65% of Earth's irradiation; its equilibrium temperature is accordingly estimated to be somewhat lower than Earth's (a rough estimate is sketched below). Various factors, such as the orbital properties of Proxima Centauri b, the spectrum of radiation emitted by Proxima Centauri and the behaviour of clouds and hazes, influence the climate of an atmosphere-bearing Proxima Centauri b. There are two likely scenarios for an atmosphere of Proxima Centauri b: in one case, the planet's water could have condensed and the hydrogen would have been lost to space, which would have left only oxygen and/or carbon dioxide in the atmosphere after the planet's early history. However, it is also possible that Proxima Centauri b had a primordial hydrogen atmosphere or formed farther away from its star, which would have reduced the escape of water. Thus, Proxima Centauri b may have kept its water beyond its early history. If an atmosphere exists, it is likely to contain oxygen-bearing gases such as oxygen and carbon dioxide. Together with the star's magnetic activity, these would give rise to auroras that could be observed from Earth if the planet has a magnetic field. Climate models, including general circulation models used for Earth's climate, have been used to simulate the properties of Proxima Centauri b's atmosphere.
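The equilibrium temperature can be estimated from the irradiation figure quoted above. The sketch below is a zeroth-order estimate, assuming a Bond albedo of 0.3 (an Earth-like value chosen purely for illustration) and no greenhouse warming:

```python
# T_eq = 278.5 K * (S/S_earth)^(1/4) * (1 - A)^(1/4), where 278.5 K is
# Earth's zero-albedo equilibrium temperature at 1 AU.
S_rel = 0.65    # irradiation relative to Earth (from the text above)
albedo = 0.3    # assumed, Earth-like; the true value is unknown

T_eq = 278.5 * S_rel**0.25 * (1 - albedo)**0.25
print(f"{T_eq:.0f} K")  # ~229 K; the same formula gives 255 K for Earth
```

Any actual surface temperature would differ, since an atmosphere adds greenhouse warming and a tidally locked planet would have a strong day-night contrast.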
Depending on properties such as whether it is tidally locked and the amounts of water and carbon dioxide present, a number of scenarios are possible: a planet partially or wholly covered with ice; planet-wide or small oceans, or only dry land, or combinations of these; scenarios with one or two "eyeballs" or lobster-shaped areas of liquid water (meaning near the equator, with two nearly identical areas on each hemisphere, sprouting from the equator like lobster claws); or a subsurface ocean with a thin (less than a kilometre) ice cover that may be slushy in some places. Additional factors are:
The nature of convection.
The distribution of continents, which can sustain a carbonate-silicate cycle and thus stabilize the atmospheric carbon dioxide concentrations.
The effects of galactic cosmic rays, which are expected to be more significant around Proxima Centauri b than around Earth.
Ocean heat transport, which broadens the space for habitable climates.
Ocean salinity variations, which alter the properties of an ocean.
Ozone chemistry, which can increase the stratospheric temperature and influence surface temperatures.
The rotational period of the planet, which determines Rossby wave dynamics.
Internal heat flow, which can melt the bases of ice sheets.
Sea ice dynamics, which could cause a global ocean to freeze over.
Stability of an atmosphere The stability of an atmosphere is a major issue for the habitability of Proxima Centauri b: strong irradiation by UV radiation and X-rays from Proxima Centauri constitutes a challenge to habitability. Proxima Centauri b receives about 10–60 times as much of this radiation, especially X-rays, as Earth. It might have received even more in the past, adding up to 7–16 times as much cumulative XUV radiation as Earth. UV radiation and X-rays can effectively evaporate an atmosphere, since hydrogen readily absorbs the radiation and does not readily lose it again, warming until the speed of hydrogen atoms and molecules is sufficient for them to escape from the planet's gravitational field (a rough escape-speed comparison is sketched below). They can remove water by splitting it into hydrogen and oxygen and heating the hydrogen in the planet's exosphere until it escapes. The hydrogen can drag other elements such as oxygen and nitrogen away. Nitrogen and carbon dioxide can escape on their own from an atmosphere, but this process is unlikely to substantially reduce the nitrogen and carbon dioxide content of an Earth-like planet. Stellar winds and coronal mass ejections are an even bigger threat to an atmosphere. The amount of stellar wind impacting Proxima Centauri b may amount to 4–80 times that impacting Earth, with a pressure about ten thousand times larger than that of the Sun's stellar wind. The more intense UV and X-ray radiation could lift the planet's atmosphere to outside of the magnetic field, increasing the loss triggered by stellar wind and mass ejections. A planet like Proxima Centauri b might develop an internal structure that precludes the existence of strong planetary magnetic fields. At Proxima Centauri b's distance from the star, the stellar wind is likely to be denser than around Earth by a factor of 10–1,000, depending on the strength and stage (Proxima Centauri has a seven-year-long magnetic cycle) of Proxima Centauri's magnetic field. It is unknown whether the planet has a magnetic field, and the upper atmosphere may have its own magnetic field.
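The thermal-escape reasoning above can be illustrated by comparing a hydrogen atom's typical thermal speed with the planet's escape velocity. A minimal sketch, assuming an approximately Earth-like mass and radius and an illustrative exosphere temperature of 1000 K:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.674e-27      # mass of a hydrogen atom, kg

# Assumed, roughly Earth-like values for the planet (illustrative only).
M = 1.1 * 5.972e24   # kg
R = 1.1 * 6.371e6    # m
T_exo = 1000.0       # K, assumed exosphere temperature

v_escape = math.sqrt(2 * G * M / R)
v_thermal = math.sqrt(2 * k_B * T_exo / m_H)  # most probable H-atom speed

print(f"escape  ~{v_escape / 1e3:.1f} km/s")   # ~11 km/s
print(f"thermal ~{v_thermal / 1e3:.1f} km/s")  # ~4 km/s
```

Even though the typical thermal speed sits well below the escape speed, the fast tail of the Maxwell-Boltzmann distribution lets hydrogen leak away (Jeans escape), and stronger XUV heating raises the exosphere temperature and accelerates the loss.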
Depending on the intensity of Proxima Centauri b's magnetic field, the stellar wind can penetrate deep into the atmosphere of the planet and strip parts of it off, with substantial variability over daily and annual timescales. If the planet is tidally locked to the star, the atmosphere can collapse on the night side. This is particularly a risk for a carbon dioxide-dominated atmosphere, although glaciers of frozen carbon dioxide could recycle the gas. Unlike that of Sun-like stars, Proxima Centauri's habitable zone would have been farther away early in the system's existence, when the star was in its pre-main-sequence stage. In the case of Proxima Centauri, assuming that the planet formed in its current orbit, it could have spent up to 180 million years too close to its star for water to condense. Proxima Centauri b may therefore have suffered a runaway greenhouse effect, in which the planet's water would have evaporated into steam, which would then have been split into hydrogen and oxygen by UV radiation. The hydrogen, and with it any water, would have subsequently been lost, similar to what is believed to have happened on Venus. While the characteristics of impact events on Proxima Centauri b are currently entirely conjectural, they could destabilize the atmosphere and boil off any oceans. An ice-covered Proxima Centauri b with a subsurface ocean is expected to have cryovolcanic activity at rates comparable to volcanism on Jupiter's moon Io. The cryovolcanism would generate a thin exosphere comparable to that of Jupiter's other moon Europa. Even if Proxima Centauri b lost its original atmosphere, volcanic activity could rebuild it after some time. A second atmosphere would likely contain carbon dioxide, which would make it more stable than an Earth-like atmosphere, particularly in the presence of an ocean, which, depending on its size as well as the atmospheric mass and composition, may contribute to preventing atmospheric collapse. Additionally, impacts of exocomets, if any are present, could resupply water to Proxima Centauri b. Delivery of water to Proxima Centauri b A number of mechanisms can deliver water to a developing planet; how much water Proxima Centauri b received is unknown. Modelling by Ribas et al. 2016 indicates that Proxima Centauri b would have lost no more than one Earth ocean's equivalent of water, but later research suggested that the amount of water lost could be considerably larger, and Airapetian et al. 2017 concluded that an atmosphere would be lost within ten million years. The estimates are strongly dependent on the initial mass of the atmosphere, however, and are thus highly uncertain. Possibility of life In the context of exoplanet research, "habitability" is usually defined as the possibility that liquid water exists on the surface of a planet. As normally understood in the context of exoplanet life, liquid water on the surface and an atmosphere are prerequisites for habitability; any life limited to the subsurface of a planet, such as in a subsurface ocean like the one thought to exist within Europa in the Solar System, would be difficult to detect from afar, although it may constitute a model for life in a cold, ocean-covered Proxima Centauri b. Setbacks to habitability The habitability of red dwarfs is a controversial subject, with a number of considerations; both the activity of Proxima Centauri and tidal locking would hinder the establishment of the conditions needed for life on the planet.
Unlike XUV radiation, UV radiation at Proxima Centauri b is redder (colder) than at Earth and thus may interact less with organic compounds and may produce less ozone. Conversely, stellar activity could deplete an ozone layer sufficiently to increase UV radiation to dangerous levels. Depending on its eccentricity, the planet may spend part of its orbit outside the habitable zone. Oxygen and/or carbon monoxide may build up in the atmosphere of Proxima Centauri b to toxic quantities. High oxygen concentrations may, however, aid in the evolution of complex organisms. If oceans are present, the tides could lead to the flooding and drying of coastal landscapes, triggering chemical reactions conducive to the development of life; favour the evolution of biological rhythms such as circadian cycles, which would otherwise not develop on a tidally locked planet lacking a day-night cycle; mix oceans, supplying and redistributing nutrients; and stimulate periodic expansions of marine organisms, such as red tides on Earth. On the other hand, red dwarfs like Proxima Centauri have lifespans much longer than the Sun's, exceeding the estimated age of the Universe, and thus give life plenty of time to develop. The radiation emitted by Proxima Centauri is ill-suited for oxygen-generating photosynthesis but sufficient for anoxygenic photosynthesis, although it is unclear how life depending on anoxygenic photosynthesis could be detected. One study in 2017 estimated that the productivity of a Proxima Centauri b ecosystem based on photosynthesis may be about 20% of Earth's. Observation and exploration Proxima Centauri b has not yet been directly imaged, as its separation from Proxima Centauri is too small. It is unlikely to transit Proxima Centauri from Earth's perspective, and all surveys have failed to find evidence for any transits of Proxima Centauri b. The star is monitored for the possible emission of technology-related radio signals by the Breakthrough Listen project, which in April–May 2019 detected the BLC1 signal; later investigations, however, indicated that it is probably of human origin. Future large ground-based telescopes and space-based observatories such as the James Webb Space Telescope and the Nancy Grace Roman Space Telescope could directly observe Proxima Centauri b, given its proximity to Earth, but disentangling the planet's light from that of its star would be difficult. Possible traits observable from Earth are the reflection of starlight from an ocean, the radiative patterns of atmospheric gases and hazes, and the pattern of atmospheric heat transport. Efforts have been made to determine what Proxima Centauri b would look like from Earth if it had particular properties, such as atmospheres of a particular composition. Even the fastest spacecraft built by humans would take a long time to travel interstellar distances; Voyager 2, for example, would take about 75,000 years to reach Proxima Centauri (a rough travel-time calculation follows this passage). Among the technologies proposed to reach Proxima Centauri b within a human lifespan are light sails that could reach speeds of 20% of the speed of light; unsolved problems include how to decelerate a probe once it arrives in the Proxima Centauri system and how to survive collisions of a high-speed probe with interstellar particles. Among the projects aiming at travel to Proxima Centauri b is the Breakthrough Starshot project, which aims to develop instruments and power systems that could reach Proxima Centauri in the 21st century.
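A quick back-of-the-envelope check of the travel-time claims above. The 4.25 light-year distance is well established; the Voyager-class speed of about 15 km/s used here is an illustrative assumption, which is why the result differs somewhat from the article's 75,000-year figure.

```python
# Travel times to Proxima Centauri at different cruise speeds.
LIGHT_YEAR_KM = 9.4607e12
C_KM_S = 299_792.458
SECONDS_PER_YEAR = 3.156e7

distance_km = 4.25 * LIGHT_YEAR_KM

def years_at(speed_km_s: float) -> float:
    return distance_km / speed_km_s / SECONDS_PER_YEAR

print(f"{years_at(0.20 * C_KM_S):,.0f} years at 20% of c")      # ~21 years
print(f"{years_at(15.0):,.0f} years at ~15 km/s (Voyager-like)") # ~85,000 years
```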
View from Proxima Centauri b From the surface of Proxima Centauri b, the binary pair Alpha Centauri A and Alpha Centauri B would appear considerably brighter than Venus does from Earth, with apparent magnitudes of −6.8 and −5.2, respectively. The Sun would appear as a bright star with an apparent magnitude of 0.40 in the constellation of Cassiopeia; its brightness would be similar to that of Achernar or Procyon as seen from Earth.
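Apparent magnitudes compare brightness on a logarithmic scale in which each 5-magnitude step is a factor of 100. The sketch below converts the quoted magnitudes into brightness ratios; the values used for Venus (about −4.9 at its brightest) and Achernar (about 0.45) are standard figures, not taken from the source.

```python
# Convert apparent-magnitude differences into brightness ratios.
def brightness_ratio(m1: float, m2: float) -> float:
    """How many times brighter an object of magnitude m1 is than one of m2."""
    return 10 ** (0.4 * (m2 - m1))

print(brightness_ratio(-6.8, -4.9))  # Alpha Cen A vs. Venus: ~5.8x brighter
print(brightness_ratio(-5.2, -4.9))  # Alpha Cen B vs. Venus: ~1.3x brighter
print(brightness_ratio(0.40, 0.45))  # Sun vs. Achernar: ~1.05x, nearly equal
```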
Physical sciences
Notable exoplanets
Astronomy
57495962
https://en.wikipedia.org/wiki/Connected%20relation
Connected relation
In mathematics, a relation $R$ on a set $X$ is called connected or complete or total if it relates (or "compares") all distinct pairs of elements of the set in one direction or the other, while it is called strongly connected if it relates all pairs of elements. As described in the terminology section below, the terminology for these properties is not uniform. This notion of "total" should not be confused with that of a total relation in the sense that for all $x \in X$ there is a $y \in X$ so that $x \mathrel{R} y$ (see serial relation). Connectedness features prominently in the definition of total orders: a total (or linear) order is a partial order in which any two elements are comparable; that is, the order relation is connected. Similarly, a strict partial order that is connected is a strict total order. A relation is a total order if and only if it is both a partial order and strongly connected. A relation is a strict total order if, and only if, it is a strict partial order and just connected. A strict total order can never be strongly connected (except on an empty domain). Formal definition A relation $R$ on a set $X$ is called connected when for all $x, y \in X$ with $x \neq y$, $x \mathrel{R} y$ or $y \mathrel{R} x$; or, equivalently, when for all $x, y \in X$, $x \mathrel{R} y$ or $y \mathrel{R} x$ or $x = y$. A relation with the property that for all $x, y \in X$, $x \mathrel{R} y$ or $y \mathrel{R} x$, is called strongly connected. Terminology The main use of the notion of connected relation is in the context of orders, where it is used to define total, or linear, orders. In this context, the property is often not specifically named. Rather, total orders are defined as partial orders in which any two elements are comparable. Thus, "total" is used more generally for relations that are connected or strongly connected. However, this notion of "total relation" must be distinguished from the property of being serial, which is also called total. Similarly, connected relations are sometimes called complete, although this, too, can lead to confusion: the universal relation is also called complete, and "complete" has several other meanings in order theory. Connected relations are also said to satisfy trichotomy (although the more common definition of trichotomy is stronger, in that exactly one of the three options $x \mathrel{R} y$, $y \mathrel{R} x$, $x = y$ must hold). When the relations considered are not orders, being connected and being strongly connected are importantly different properties. Sources which define both then use pairs of terms such as weakly connected and connected, complete and strongly complete, total and complete, semiconnex and connex, or connex and strictly connex, respectively, as alternative names for the notions of connected and strongly connected as defined above. Characterizations Let $R$ be a homogeneous relation. The following are equivalent: $R$ is strongly connected; $U \subseteq R \cup R^\top$; $\overline{R} \subseteq R^\top$; $\overline{R}$ is asymmetric; where $U$ is the universal relation and $R^\top$ is the converse relation of $R$. The following are equivalent: $R$ is connected; $\overline{I} \subseteq R \cup R^\top$; $\overline{R} \subseteq R^\top \cup I$; $\overline{R}$ is antisymmetric; where $\overline{I}$ is the complementary relation of the identity relation $I$ and $R^\top$ is the converse relation of $R$. Introducing progressions, Russell invoked the axiom of connection. Properties The edge relation of a tournament graph $G$ is always a connected relation on the set of $G$'s vertices. If a strongly connected relation is symmetric, it is the universal relation. A relation is strongly connected if, and only if, it is connected and reflexive. A connected relation on a set $X$ cannot be antitransitive, provided $X$ has at least 4 elements. On a 3-element set $\{a, b, c\}$, for example, the relation $\{(a, b), (b, c), (c, a)\}$ has both properties. If $R$ is a connected relation on $X$, then all, or all but one, elements of $X$ are in the range of $R$. Similarly, all, or all but one, elements of $X$ are in the domain of $R$.
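A small sketch in Python (illustrative, not from the source) makes the two definitions concrete for finite relations, and also demonstrates the claim that a strict total order is connected but never strongly connected:

```python
# Check connectedness properties of a finite relation, following the
# definitions above: connected relates all *distinct* pairs; strongly
# connected relates *all* pairs (including each element with itself).
from itertools import product

def is_connected(X: set, R: set) -> bool:
    return all(x == y or (x, y) in R or (y, x) in R
               for x, y in product(X, repeat=2))

def is_strongly_connected(X: set, R: set) -> bool:
    return all((x, y) in R or (y, x) in R
               for x, y in product(X, repeat=2))

X = {1, 2, 3}
less = {(x, y) for x, y in product(X, repeat=2) if x < y}
less_equal = {(x, y) for x, y in product(X, repeat=2) if x <= y}

print(is_connected(X, less))                  # True: strict total order
print(is_strongly_connected(X, less))         # False: never relates (x, x)
print(is_strongly_connected(X, less_equal))   # True: total order
```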
Mathematics
Order theory
null
50506253
https://en.wikipedia.org/wiki/Locomotion%20in%20space
Locomotion in space
Locomotion in space includes all actions or methods used to move one's body through the outer space environment in microgravity conditions. Locomotion in these conditions is different from locomotion in a gravitational field. There are many factors that contribute to these differences, and they are crucial when researching long-term survival of humans in space. Challenges of locomotion in reduced gravity Humans have evolved in a 1-G environment and are accustomed to Earth's conditions; the microgravity environment of space can have huge effects on the human body and its locomotion. Environmental conditions The environmental conditions in space are harsh and require extensive equipment for survival and completion of daily activities. There are many environmental factors to consider both inside and outside of a spacecraft that astronauts work in. These factors include but are not limited to movement during weightlessness, general equipment necessary to travel to the desired destination in space, and gear such as space suits that hinder mobility. When doing extravehicular activities (EVA), it is important to be protected from the vacuum of space. Exposure to this harsh environment can cause death in a short amount of time. The main environmental factors of concern in space include but are not limited to the following: lack of oxygen; extreme pressure and temperature differences; and higher radiation levels. Effects on the human body There are many detrimental effects of extended exposure to reduced gravity that are similar to aging and disease. Some long-duration effects of reduced gravity can be simulated on Earth using bed rest. These effects include: muscle atrophy; deconditioning (e.g. arterial constriction, bone density loss); symptoms similar to aging or disease; head-ward fluid shifts (headaches, sinus/nasal congestion, facial swelling); decreased muscle volume; decreased bone strength and increased risk of fracture; increased fatigue and loss of general strength; decreased locomotor control; motion sickness; vision problems; excessive flatulence; other physical effects; and psychological effects. The muscle volume can decrease up to 20% over a six-month mission, and bone density can decrease at a rate of approximately 1.4% at the hip in a month's time. A study done by Fitts and Trappe examined the effects of prolonged space flight (defined as approximately 180 days) on human skeletal muscle using muscle biopsies. Prolonged weightlessness was shown to cause significant loss in the mass, force, and power production in the soleus and gastrocnemius muscles. Many countermeasures to these effects exist, but thus far they are not sufficient to compensate for the detrimental effects of space travel, and astronauts need extensive rehabilitation upon their return to Earth. Technology used to compensate for the negative effects In order to compensate for the negative effects of prolonged exposure to microgravity, scientists have developed many countermeasure technologies with varying degrees of success. Electrical stimulation Transcutaneous electrical muscle stimulation (EMS) is the use of electric current to stimulate muscle activity. This method is theoretically utilized to prevent muscle atrophy and weakness. The efficacy of this approach was tested in a 30-day bed rest study done by Duvoisin in 1989. Though the patients showed decreased rates of muscle atrophy in the stimulated limb, there was no evidence to support that this method would necessarily prevent these effects.
More recently, in 2003, Yoshida et al. did a study related to hind limb suspension in rats. This study concluded that hind limb suspension combined with EMS did have some success in the prevention of muscle function deterioration induced by disuse. There have been several scientific studies that mention the application of this technique as a countermeasure in long-term spaceflight. Loading suits Loading suits are garments used to help maintain loading on the bones during time in space; they are not to be confused with space suits, which aid astronauts in surviving the harsh climate outside of a vehicle such as the International Space Station (ISS). Pingvin suit The Pingvin suit is designed to add musculoskeletal loads to specific muscle groups during space flight in order to prevent atrophy of the muscles in the back. This lightweight suit has a series of elastic bands to create these vertical bodily loads. It loads the upper and lower body separately. The upper body can be loaded up to 88 lb. (40 kg). Users have found this suit to be hot and uncomfortable, despite its low weight. Gravity Loading Countermeasure Skinsuit (GLCS) The GLCS is a garment designed to help mitigate the effects of musculoskeletal deconditioning. It is partly inspired by the Pingvin suit, a Russian loading garment used since the 1970s. Employing elastic materials to place loads on the body, the GLCS attempts to mimic the gravitational loads experienced while standing. A pilot study was conducted in parabolic flight in order to assess the viability of the initial design in 2009. This skinsuit creates a loading gradient across the body that gradually increases the loading to body weight at the feet. Further iterations of the initial design have been developed, and the current version of the suit is being tested on the ISS as part of a research project sponsored by the ESA. Other loading suits DYNASUIT concept The DYNASUIT is a conceptual design that involves a suit that can be divided into many subsystems. Each subsystem controls a different aspect of the suit. For example, there is a bio-parameter subsystem that would measure physiological responses like muscle signals (EMG), heart rate, electrocardiogram, ventilation rate, body temperature, blood pressure, and oxygen saturation. There is also a central control unit, the equivalent of the suit's brain, as well as an artificial muscle subsystem that proposes to use either electro-active polymers (EAP) or pneumatics to apply forces on the body. There is also a proposed user interface to help the astronaut interact with the suit. This potential design is still in the development phase and has not been prototyped at this point. Pharmacologic therapy In general, the way a person's body absorbs medicine in reduced gravity conditions is significantly different from normal absorption on Earth. In addition, there are various pharmacological or drug therapies that are used to counter certain side effects of prolonged space flight. For example, dextroamphetamine has been used by NASA to help with space motion sickness and orthostatic intolerance. The use of the bisphosphonate alendronate has been proposed to aid in the prevention of bone loss, but no conclusive evidence has been found to show that it helps in this regard. See recommended reading for more information on space pharmacology. Artificial gravity Artificial gravity (AG) is the increase or decrease of apparent gravitational force on an object or person by artificial means.
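As context for the centrifuge discussion that follows, a minimal sketch of the underlying physics: the centripetal acceleration of a rotating arm is a = omega^2 * r, so the rotation rate needed for a given load scales with the inverse square root of the radius. The radii below are illustrative assumptions, not values from the text.

```python
# Rotation rate needed for a centrifuge to produce a target acceleration
# at radius r, using a = omega^2 * r.
import math

def rpm_for_gravity(radius_m: float, g_fraction: float = 1.0) -> float:
    a = g_fraction * 9.81                 # target acceleration, m/s^2
    omega = math.sqrt(a / radius_m)       # angular velocity, rad/s
    return omega * 60 / (2 * math.pi)     # revolutions per minute

print(f"{rpm_for_gravity(2.0):.1f} rpm for 1 g at 2 m")     # ~21.1 rpm
print(f"{rpm_for_gravity(100.0):.1f} rpm for 1 g at 100 m") # ~3.0 rpm
# The high rpm required at short-arm radii is what produces the
# vestibular and Coriolis effects mentioned below.
```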
Different types of forces, including linear acceleration and centripetal force, can be used to generate this artificial gravitational force. The use of artificial gravity to counteract simulated microgravity (e.g. bed rest) on Earth has shown conflicting results for the maintenance of bone, muscle, and cardiovascular systems. Short arm centrifuges can be used to generate loading conditions greater than gravity that could help prevent the skeletal muscle and bone loss associated with prolonged spaceflight and bed rest. A pilot study done by Caiozzo and Haddad in 2008 compared two groups of subjects: one that had been on bed rest for 21 days (in order to simulate the effects of prolonged space travel), and another that had been on bed rest while also being exposed to artificial gravity for one hour a day. They used a short arm centrifuge to artificially induce the gravitational force. After taking muscle biopsy samples, they determined that the group that had been exposed to artificial gravity did not show as severe a deficit in terms of muscle fiber cross-sectional area. Even though this technology has potential to aid in counteracting the detrimental effects of prolonged spaceflight, there are difficulties in applying artificial gravity systems in space. Rotating the whole spacecraft is expensive and introduces another layer of complexity to the design. A smaller centrifuge can be used to provide intermittent exposure, but the available exercise activities in a small centrifuge are limited due to the high rotation speed required to generate adequate artificial gravitational forces. The subject can experience "unpleasant vestibular and Coriolis effects" while in the centrifuge. Several studies have suggested that artificial gravity might be an adequate countermeasure for prolonged space flight, especially if combined with other countermeasures. A conceptual design entitled ViGAR (Virtual Gravity Artificial Reality) was proposed in 2005 by Kobrick et al.; it details a device that combines artificial gravity, exercise, and virtual reality to counter the negative effects of prolonged spaceflight. It includes a bicycle on a centrifuge as well as an integrated virtual reality system. Exercise methods Treadmill Vibration Isolation and Stabilization (TVIS) The TVIS is a modified treadmill. It includes a vibration isolation system, which prevents the forces from the exercise from being transferred into the International Space Station (ISS). This device is used in a very similar manner to a regular treadmill. In order to hold the user to the surface of the treadmill, it includes a system of straps called the series bungee system (SBS), which uses latex tubes or straps called "subject load devices" (SLDs) attached to a harness. These straps place resistive forces and loads in a range of 40 lb. to 220 lb. on the crew member's body as they walk or run on the treadmill. Cycle Ergometer with Vibration Isolation (CEVIS) The CEVIS provides both aerobic and cardiovascular training using recumbent cycling activities. The workload placed on the subject can be tuned very accurately. The astronauts can create target goals of speed, workload, and heart rate. It is a modified version of the Inertial Vibration Isolation and Stabilization (IVIS) Cycle Ergometer. It has a control panel that displays the target workload as well as the actual workload, in addition to the cycling speed, heart rate, deviation from target speed and heart rate, and elapsed exercise time.
The workload range is between 25 and 350 watts. The pedal speeds range from 30 to 120 rpm. There is a vibration isolation system that prevents the motions and forces generated by the exercising crew member from being transferred to the International Space Station (ISS). It is currently used on the International Space Station as part of the astronauts' weekly exercise schedule, and it is certified for 15 years of on-orbit service. Interim Resistance Exercise Device (iRED) The iRED provides resistive exercise to the user, which helps prevent muscle atrophy and minimize bone loss. It focuses on maintaining the strength, power, and endurance of the crew member. It has over 18 different exercises for both the upper and lower body and provides up to a 300 lb. resistive force. Examples of possible exercises include but are not limited to: squats, straight-leg deadlifts, bent-leg deadlifts, heel raises, bent-over rows, upright rows, bicep curls, and shoulder presses. It was used daily as a part of the crew members' exercise regimen but was retired in October 2011. Now, the Advanced Resistive Exercise Device (ARED) is used. Other exercise methods for use in space Flywheel exercise device (FWED); Multi-purpose Integrated Countermeasures Stimulator (M-ICS); Resistive Vibration Exercise; Integrated Countermeasure and Rehabilitation Exerciser (ICARE); Short Arm Human Centrifuge; Lower Body Negative Pressure Exercise (LBNP). Effectiveness and assessment of these methods The TVIS and iRED are largely ineffective when it comes to maintaining muscle volume and bone density, as neither can generate forces similar to those experienced on Earth. The harnesses and bungee cords used in many of these devices cause substantial discomfort and in the future need to be redesigned for ease of long-duration use. The CEVIS, at its maximal setting, is the only permanent device on the ISS that can achieve resistive loads comparable to those on Earth. The FWED (flown on the ISS in 2009), adapted for experimental bed rest in 1-g, achieved resistive forces exceeding body weight and mitigated bone and muscle atrophy. The European Space Agency employs many different devices to assess the effectiveness of different countermeasure technologies: Muscle Atrophy Research and Exercise System (MARES); Portable Pulmonary Function System (PPFS); Earlobe Arterialised Blood Collector (EAB C); Long-Term Medical Survey System (LTMS); an ISS-compatible X-ray imaging system; and biofeedback and virtual-reality systems such as the Enhanced Virtual-Reality System (eVRS).
Physical sciences
Classical mechanics
Physics
54192700
https://en.wikipedia.org/wiki/Norfolk%20four-course%20system
Norfolk four-course system
The Norfolk four-course system is a method of agriculture that involves crop rotation. Unlike earlier methods such as the three-field system, the Norfolk system is marked by the absence of a fallow year. Instead, four different crops are grown in each year of a four-year cycle: wheat, turnips, barley, and clover or ryegrass. This system was developed in the early 16th century in the region of Waasland (in present-day northern Belgium) and was popularized in the 18th century by British agriculturist Charles Townshend. The sequence of four crops (wheat, turnips, barley, and clover) includes a fodder crop (turnips) and a grazing crop (clover), allowing livestock to be bred year-round; staggered across four fields, the rotation keeps every crop in production somewhere each year, as the sketch below illustrates. The Norfolk four-course system was a key development in the British Agricultural Revolution.
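A tiny illustrative sketch (not from the source) of how the four-year cycle staggers across four fields so that each crop is always being grown somewhere, with no field left fallow:

```python
# Stagger the Norfolk four-course rotation across four fields: each field
# cycles wheat -> turnips -> barley -> clover, offset by one year, so every
# year all four crops are in production and no field lies fallow.
ROTATION = ["wheat", "turnips", "barley", "clover"]

def crop(field: int, year: int) -> str:
    """Crop grown on a given field (0-3) in a given year."""
    return ROTATION[(year + field) % 4]

for year in range(4):
    print(f"year {year}: " +
          ", ".join(f"field {f}: {crop(f, year)}" for f in range(4)))
```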
Technology
Soil and soil management
null
54195817
https://en.wikipedia.org/wiki/Korean%20units%20of%20measurement
Korean units of measurement
Korean units of measurement, called cheokgwan-beop or cheokgeun-beop in Korean, are the traditional system of measurement used by the people of the Korean peninsula. The system is largely based on the Chinese system, with influence from the Japanese standards imposed following Japan's annexation of the Korean Empire in 1910. Both North and South Korea currently employ the metric system. Since 2007, South Korea has criminalized the use of Korean units in commercial contexts, but informal use continues, especially of the pyeong as a measure of residential and commercial floorspace. North Korea continues to use the traditional units, although their standards are now derived from metric conversions. History Ancient Korea Customary Korean units are a local adaptation of the traditional Chinese system, which was adopted at a very early date. They were imposed and adjusted at various times by royal statutes. The details of the system have varied over time and location in Korea's history. Standardization, to the extent it occurred, was accomplished by officially sanctioned rulers, ropes, odometers, triangulation devices, weights, cups, and basins. Although most of the measures operate on a decimal system, the standard form was to read out the units of each place (as, e.g., 3 cheok, 1 chon, 4 bun, 1 ri) rather than list them as a single number of the largest unit (as 3.141 cheok). Joseon Taejo of Joseon established a Market Bureau, or Bureau of Weights and Measures, at the foundation of the Joseon dynasty in 1392. The Joseon Kingdom later attempted a standardization of length based on square brass rulers, which were used by magistrates and the secret police to fight commercial fraud. Under the Joseon, different classes of society were permitted different numbers of kan in their homes, but with the kan in its traditional sense (like the Japanese ken) of a bay between two pillars, rather than as a set unit of length. The 1496 Great Codex of State Administration (Kyŏngguk Taejŏn) included a section on approved measures and their verification. It employed a li of 360 paces or 2,160 feet but did not explicitly mention that conversion until its 1746 supplement. The 1897 Law on Weights and Measures uniting Korea's various local systems was the first legislation enacted upon the Joseon's establishment of the Korean Empire. Japanese Korea During the Japanese occupation from 1910 to 1945, Japan imposed its standards on Korea. Two of the most common "traditional" units in Korea, the pyeong of floorspace and the jeweler's don, were among those given their modern value by the Japanese. South Korea South Korea signed the Metre Convention in 1959 and notionally adopted the metric system under Park Chung Hee on 10 May 1961, with a strict law banning the use of the Korean pound, li, gwan, and don effective as of 1 January 1964 and, after metric conversion of the land registries, the pyeong. The metrication was not applied to imported or exported goods and remained so generally spotty as to be considered a failure, with the government abandoning its attempts to enforce the statute by 1970. The traditional units feature in many Korean sayings and much of its literature and poetry, including the national anthem, which mentions Korea's "three thousand lis of rivers and mountains". Further attempts to fully metricate occurred in 1983, 2000, and 2001, with publicity campaigns praising the metric system and condemning traditional units through TV and radio ads, brochures, signs, and contests.
A common theme was the origin of the present values of the units under the Japanese occupation; Yun Byeong-su of the Korea Association of Standards & Testing Organizations noted that "even Japan has forsaken the don for the units of grams and ounces but here we are standing around like idiots still blathering on about don." Nonetheless, strong opposition from the construction and jewelry industries and negative media coverage forced Korean politicians to avoid the topic and regulators to settle for dual use of conventional and metric measures. A 2006 study found 88% of real estate companies and 71% of jewelers in 7 major markets were still using the pyeong and don, after which the government decided simply to criminalize further commercial use of traditional units. (Another important factor was the European Union regulation mandating the use of metric units in all imported goods by 2010.) The sale of rulers marking Korean feet was ended, and a Measure Act effective 1 July 2007 empowered the Korean Agency for Technology and Standards of the Ministry of Commerce to begin immediately levying fines of up to ₩500,000 for commercial use of the pyeong and don, with less common units enjoying a longer grace period. The ban also included the use of American units, such as describing the display size of televisions and computer monitors in terms of inches. The "serving size" used as an informal and variable measure of meat cuts was standardized to 100g. Knowledge of the fine remained low for years, and it was later increased to a maximum of ₩1,000,000 (about $800) for users of illegal units and ₩3,000,000 for sellers of measuring devices marked with unapproved units. South Korea's measurement standards are now maintained by the Korea Research Institute of Standards and Science. Despite this strong official discouragement, some use continues in retail, manufacturing, and farming. Even among those who have adopted metric units, informal use of awkward metric fractions equivalent to round amounts of the former units is common, especially with regard to the very common pyeong of floorspace. (Such treatment was avoided in the gold market by laws requiring pricing and denomination in even amounts of grams.) Another dodge has been to treat the traditional units as a nondescript "unit", such as marketing an air conditioner appropriate for a 20py home as a "20-Type". North Korea Despite the importance of the Soviet Union in establishing North Korea, Kim Il Sung continued official use of Korean units until the DPRK's notional metrication under National Standard 4077–75 on 14 April 1975. It joined the Metre Convention in 1982 or 1989, although it was removed from the International Bureau of Weights and Measures and related organizations in 2012 for its years of failure to pay the necessary fees. North Korea has long used the metric system in its state-run media and international publications, but continues to use traditional units alongside the metric system in sectors approved by the government. North Korea's standards are administered by the Central Institute of Metrology under the State Administration of Quality Management, in accordance with the Law on Metrology ratified by the Supreme People's Assembly on 3 February 1993. North Korea uses the pyeong in various regulations, such as the 50py per person allowed for private farming in 1987, despite guides who disparage the unit to foreign tourists as a historical relic of the South.
The metric system is thought not to have spread to domestic factories or stores prior to Kim Jong Un's metrication initiative, announced in May 2013. The change was part of Kim's policy of stressing the importance of science and technology and its "universal trends". His announcement in the state-run quarterly Cultural Language Study said that increasing use of the metric system would "strengthen international exchange and cooperation... in the fields of industry, science, and technology and even in the area of general social life". Length The base unit of Korean length is the foot, with other units changing over time based on its dimensions and multiples. Different ancient Korean kingdoms had different exact measurements; that of Sejong the Great (r. 1418–1450) was 31.22cm. Under the early Joseon Kingdom, the value of the foot varied by trade, with different lengths used for the carpentry foot and the fabric foot. The biggest difference between the traditional Korean and Chinese units of length is that the Korean equivalent of the bu uses a different character and its pre-Tang composition of six Korean feet rather than five. (The bu was usually treated as a synonym of this unit within Korea but sometimes distinguished as a length of 4 feet.) In 369 CE, during the reign of King Geunchogo of Baekje, his realm seems to have used a foot of about 28.85cm. The Korean li previously bore values around 434.16m (3rd century), 531.18m (6th–7th), 559.8m (7th–10th), 552.96m (10th–14th), and 450m (19th); it was also reckoned based on travel time and therefore varied in length between the plains and the mountains. In 1905, it was standardized as one tenth of the Japanese ri, or about 392.7m. Area The base unit of Korean area is the pyeong, equivalent to a square kan or 36 square Korean feet. It comprised about 3.158m2 during Korea's Three Kingdoms Era; the present value of about 3.306m2 derives from the units established by the Japanese. Despite being notionally illegal, the pyeong remains particularly common when discussing residential and commercial floorspace and tiles. A separate pyeong of 0.09m2 was used for selling glass. The floorspace pyeong is still prevalent enough that it continues to be glossed in the government's promotional material for foreign investors. Farms and large estates were formerly generally measured in majigi, which is notionally based not on multiples of the pyeong but on the amount of land suitable for the planting of one mal of rice or grain seed. In practice, it was standardized to the pyeong system but varied in size from province to province based on the average richness of the soil. Weight The base unit of Korean weight is the gwan. At the time of Korea's metrication, however, the geun was in more common use. Although it was usually taken as equivalent to 600g, as with red pepper and meats, a separate pound of 400g was used for fruits and another of 375 or 200g was used for vegetables. The nyang also sees some use among Korea's vendors of traditional Chinese medicine. The "bag" (kama) was a variable unit usually figured as 54kg of unhusked rice or 60kg of polished rice, although 90kg "bags" were also used. Volume The base unit of Korean volume or capacity is the doe. In 3rd-century Gaya, the mal was reckoned at about 2L, the size of the present-day doe. In the early 17th century, the Joseon picul was reckoned as 15 or 20 mal, but similarly only comprised 89.464 or 119.285L owing to the smaller size of the mal at that time. The United Nations also reported a "small mal" half the size of the standard mal.
In contexts involving volume, two cubic forms of "pyeong" were also formerly used. The pyeong of gravel was a cubic kan (about 6.01m3); the pyeong of firewood was one third as much (about 2.0035m3). Palais reports alternate seoks of 15 and 20 mal each.
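A small conversion sketch tying together the modern values discussed above. The geun of 600g for meat is from the text; the modern pyeong (400/121 m2, about 3.306 m2) and the jeweler's don (3.75 g) are the standard Japanese-derived figures and should be treated as assumptions here.

```python
# Convert between common Korean units and metric, using the modern
# Japanese-derived values (assumptions noted in the lead-in above).
PYEONG_M2 = 400 / 121        # floorspace pyeong, ~3.3058 m^2
GEUN_G = 600.0               # commercial pound for meat, per the text
DON_G = 3.75                 # jeweler's don

def pyeong_to_m2(p: float) -> float:
    return p * PYEONG_M2

def m2_to_pyeong(m2: float) -> float:
    return m2 / PYEONG_M2

print(f"a 20-pyeong flat is {pyeong_to_m2(20):.1f} m^2")   # ~66.1 m^2
print(f"one geun of beef is {GEUN_G / 1000:.1f} kg")       # 0.6 kg
print(f"a 7.5 g gold ring is {7.5 / DON_G:.0f} don")       # 2 don
```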
Physical sciences
Measurement systems
Basics and measurement
52855912
https://en.wikipedia.org/wiki/Sodium%20calcium%20edetate
Sodium calcium edetate
Sodium calcium edetate (sodium calcium EDTA), also known as edetate calcium disodium among other names, is a medication primarily used to treat lead poisoning, including both short-term and long-term lead poisoning. Sodium calcium edetate came into medical use in the United States in 1953. Chelation agent Sodium calcium edetate is in the chelating agent family of medications. It is a salt of edetate with two sodium atoms and one calcium atom. It works by binding to a number of heavy metals, which renders them almost inert and allows them to leave the body in the urine. Edetate disodium (Endrate) is a different formulation which does not have the same effects. Medical use Sodium calcium edetate's primary use is to treat lead poisoning, for which it is an alternative to succimer. It is given by slow injection into a vein or into a muscle. For lead encephalopathy, sodium calcium edetate is typically used together with dimercaprol. It may also be used to treat plutonium poisoning. It does not appear to be useful for poisoning by tetraethyl lead. Side effects Common side effects include pain at the site of injection. Other side effects may include kidney problems, diarrhea, fever, muscle pains, and low blood pressure. The benefits when it is needed in pregnancy are likely greater than the risks. History Sodium calcium edetate came into medical use in the United States in 1953. It is on the World Health Organization's List of Essential Medicines.
Physical sciences
Organic salts
Chemistry
47262026
https://en.wikipedia.org/wiki/Ultra-high-definition%20television
Ultra-high-definition television
Ultra-high-definition television (also known as Ultra HD television, Ultra HD, UHDTV, UHD and Super Hi-Vision) today includes 4K UHD and 8K UHD, which are two digital video formats with an aspect ratio of 16:9. These were first proposed by NHK Science & Technology Research Laboratories and later defined and approved by the International Telecommunication Union (ITU). The Consumer Electronics Association announced on October 17, 2012, that "Ultra High Definition", or "Ultra HD", would be used for displays that have an aspect ratio of 16:9 or wider and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3840 × 2160 pixels. In 2015, the Ultra HD Forum was created to bring together the end-to-end video production ecosystem to ensure interoperability and produce industry guidelines so that adoption of ultra-high-definition television could accelerate. The forum's list of commercial services around the world offering 4K resolution grew from just 30 in Q3 2015 to 55. The "UHD Alliance", an industry consortium of content creators, distributors, and hardware manufacturers, announced during a Consumer Electronics Show (CES) 2016 press conference its "Ultra HD Premium" specification, which defines the resolution, bit depth, color gamut, and high dynamic range (HDR) performance required for Ultra HD (UHDTV) content and displays to carry their Ultra HD Premium logo. Alternative terms Ultra-high-definition television is also known as Ultra HD, UHD, UHDTV, and 4K. In Japan, 8K UHDTV will be known as Super Hi-Vision, since Hi-Vision was the term used in Japan for HDTV. In the consumer electronics market, companies had previously only used the term 4K at the 2012 CES, but that had changed to "Ultra HD" during CES 2013. "Ultra HD" was selected by the Consumer Electronics Association after extensive consumer research, as the term has also been established with the introduction of "Ultra HD Blu-ray". Technical details Resolution Two resolutions are defined as UHDTV: UHDTV-1 is 3840 pixels wide by 2160 pixels tall (8.3 megapixels), which is four times as many pixels as the 1920 × 1080 (2.07 megapixels) of current 1080p HDTV (full HDTV). It is also known as 2160p and 4K UHD. Although roughly similar in resolution to 4K digital cinema formats, it should not be confused with other 4K resolutions such as DCI 4K/Cinema 4K. The total number of pixels of the RGB stripe type is 8.3 megapixels. UHDTV-2 is 7680 pixels wide by 4320 pixels tall (33.18 megapixels), also referred to as 4320p and 8K UHD; it has sixteen times as many pixels as current 1080p HDTV, which brings it closer to the detail level of 15/70 mm IMAX (a sketch of these pixel-count multiples follows this passage). NHK advertises the 8K UHDTV format with 22.2 surround sound as Super Hi-Vision, which can be broadcast with H.264 codecs. Color space, dynamic range, frame rate and resolution/aliasing The human visual system has a limited ability to discern improvements in resolution when picture elements are already small enough or distant enough from the viewer. At some home viewing distances and current TV sizes, HD resolution is near the limits of resolution for the eye, and increasing resolution to 4K has little perceptual impact if consumers are beyond the critical distance (Lechner distance) needed to appreciate the differences in pixel count between 4K and HD. One exception is that even if resolution surpasses the resolving ability of the human eye, there is still an improvement in the way the image appears, due to higher resolutions reducing spatial aliasing.
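The pixel-count multiples quoted above are easy to verify. A minimal sketch follows; the raw-bandwidth column assumes uncompressed 8-bit RGB at 60 fps, an illustrative assumption rather than a broadcast parameter from the source.

```python
# Verify the pixel-count multiples quoted for UHD formats, and estimate
# raw (uncompressed) bandwidth assuming 8-bit RGB at 60 fps.
FORMATS = {
    "1080p HDTV": (1920, 1080),
    "4K UHDTV-1": (3840, 2160),
    "8K UHDTV-2": (7680, 4320),
}

hd_pixels = 1920 * 1080
for name, (w, h) in FORMATS.items():
    pixels = w * h
    raw_gbps = pixels * 24 * 60 / 1e9   # 24 bits per pixel, 60 frames/s
    print(f"{name}: {pixels / 1e6:.2f} MP, "
          f"{pixels // hd_pixels}x HD, ~{raw_gbps:.1f} Gbit/s raw")
# 1080p: 2.07 MP, 1x HD, ~3.0 Gbit/s
# 4K:    8.29 MP, 4x HD, ~11.9 Gbit/s
# 8K:   33.18 MP, 16x HD, ~47.8 Gbit/s
```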
UHDTV provides other image enhancements in addition to pixel density. Specifically, dynamic range and color are greatly enhanced; the resulting differences in saturation and contrast are readily resolved by viewers and greatly improve the experience of 4K TV compared to HDTV. UHDTV allows the future use of the new Rec. 2020 (UHDTV) color space, which can reproduce colors that cannot be shown with the current Rec. 709 (HDTV) color space. In terms of the CIE 1931 color space, the Rec. 2020 color space covers 75.8%, compared to coverage of just 53.6% by the DCI-P3 digital cinema reference projector color space and 52.1% by the Adobe RGB color space, while the Rec. 709 color space covers only 35.9%. UHDTV's increases in dynamic range allow not only brighter highlights but also increased detail in the greyscale. UHDTV also allows for frame rates of up to 120 frames per second (fps). UHDTV potentially allows Rec. 2020, higher dynamic range, and higher frame rates to work on HD services without increasing resolution to 4K, providing improved quality without as high an increase in bandwidth demand. History 2001–2005 The first displays capable of displaying 4K content appeared in 2001, as the IBM T220/T221 LCD monitors. NHK researchers built a UHDTV prototype which they demonstrated in 2003. They used an array of 16 HDTV recorders with a total capacity of almost 3.5TB that could capture up to 18 minutes of test footage. The camera itself was built with four CCDs, each with a resolution well below the 7680 × 4320 target. Using two CCDs for green and one each for red and blue, they then used a spatial pixel offset method to bring the output to 7680 × 4320. Subsequently, an improved and more compact system was built using CMOS image sensor technology, and the CMOS image sensor system was demonstrated at Expo 2005 in Aichi, Japan; the NAB 2006 and NAB 2007 conferences in Las Vegas; IBC 2006 and IBC 2008 in Amsterdam, Netherlands; and CES 2009. A review of the NAB 2006 demo was published in a broadcast engineering e-newsletter. Projections by individuals at NHK and elsewhere for when UHDTV would be available in domestic homes varied between 2015 and 2020, with Japan expected to get it in the 2016 time frame. 2006–2010 On November 2, 2006, NHK demonstrated a live relay of a UHDTV program over a 260 kilometer distance on a fiber-optic network. Using dense wavelength division multiplexing (DWDM), a speed of 24Gbit/s was achieved with a total of 16 different wavelength signals. On December 31, 2006, NHK demonstrated a live relay of their annual Kōhaku Uta Gassen over IP from Tokyo to a screen in Osaka. Using a codec developed by NHK, the video was compressed from 24Gbit/s to 180–600Mbit/s and the audio was compressed from 28Mbit/s to 7–28Mbit/s. Uncompressed, a 20-minute broadcast would require roughly 4 TB of storage. The SMPTE first released Standard 2036 for UHDTV in 2007. UHDTV was defined as having two levels, called UHDTV1 (3840 × 2160) and UHDTV2 (7680 × 4320). In May 2007, NHK did an indoor demonstration at the NHK Open House in which a UHDTV signal (7680 × 4320 at 60fps) was compressed to a 250Mbit/s MPEG-2 stream. The signal was input to a 300MHz wide-band modulator and broadcast using 500MHz QPSK modulation. This "on the air" transmission had a very limited range (less than 2 meters), but it shows the feasibility of a satellite transmission from the 36,000km orbit. In 2008, Aptina Imaging announced the introduction of a new CMOS image sensor specifically designed for the NHK UHDTV project.
During IBC 2008, Japan's NHK, Italy's RAI, BSkyB, Sony, Samsung, Panasonic Corporation, Sharp Corporation, and Toshiba (with various partners) demonstrated the first ever public live transmission of UHDTV, from London to the conference site in Amsterdam. On June 9, 2010, Panasonic announced that its professional plasma display lineup would include a 152-inch plasma display with 4K resolution. At the time of announcement, it was the largest 4K display and the largest television. On September 29, 2010, NHK partnered up and recorded The Charlatans live in the UK in the UHDTV format, before broadcasting it over the internet to Japan. 2011 On May 19, 2011, Sharp, in collaboration with NHK, demonstrated a direct-view LCD display capable of 7680 × 4320 pixels at 10 bits per channel. It was the first direct-view Super Hi-Vision-compatible display released. Before 2011, UHDTV allowed for frame rates of 24, 25, 50, and 60fps. In an ITU-R meeting during 2011, an additional frame rate of 120fps was added to UHDTV. 2012 On February 23, 2012, NHK announced that with Shizuoka University they had developed an 8K sensor that can shoot video at 120fps. In April 2012, Panasonic, in collaboration with NHK, announced a display (7680 × 4320 at 60fps) which has 33.2 million 0.417mm square pixels. In April 2012, the four major South Korean terrestrial broadcasters (KBS, MBC, SBS, and EBS) announced that in the future they would begin test broadcasts of UHDTV on channel 66 in Seoul. At the time of the announcement, the UHDTV technical details had not yet been decided. LG Electronics and Samsung are also involved in UHDTV test broadcasts. In May 2012, NHK showed the world's first ultra-high-definition shoulder-mount camera. By reducing the size and weight of the camera, the portability had been improved, making it more maneuverable than previous prototypes, so it could be used in a wide variety of shooting situations. The single-chip sensor uses a Bayer color-filter array, where only one color component is acquired per pixel. Researchers at NHK also developed a high-quality up-converter, which estimates the other two color components to convert the output into full-resolution video. Also in May 2012, NHK showed the ultra-high-definition imaging system it had developed in conjunction with Shizuoka University, which outputs 33.2-megapixel video at 120fps with a color depth of 12bits per component. As ultra-high-definition broadcasts at full resolution are designed for large, wall-sized displays, there is a possibility that fast-moving subjects may not be clear when shot at 60fps, so the option of 120fps has been standardized for these situations. To handle the sensor output of approximately 4 billion pixels per second, with a data rate as high as 51.2Gbit/s, a faster analog-to-digital converter was developed to process the data from the pixels, and then a high-speed output circuit distributes the resulting digital signals into 96 parallel channels. This CMOS sensor is smaller and uses less power when compared to conventional ultra-high-definition sensors, and it is also the world's first to support the full specifications of the ultra-high-definition standard. During the 2012 Summer Olympics in Great Britain, the format was publicly showcased by the world's largest broadcaster, the BBC, which set up 15-meter-wide screens in London, Glasgow, and Bradford to allow viewers to see the Games in ultra-high definition.
On May 31, 2012, Sony released the VPL-VW1000ES 4K 3D Projector, the world's first consumer-prosumer projector using the 4K UHDTV system with shutter-glasses stereoscopic 3D technology, priced at US$24,999.99. On August 22, 2012, LG announced the world's first 3D UHDTV using the 4K system. On August 23, 2012, UHDTV was officially approved as a standard by the International Telecommunication Union (ITU), standardizing both 4K and 8K resolutions for the format in ITU-R Recommendation BT.2020. On September 15, 2012, David Wood, Deputy Director of the EBU Technology and Development Department (who chairs the ITU working group that created Rec. 2020), told The Hollywood Reporter that South Korea planned to begin test broadcasts of 4K UHDTV the next year. Wood also said that many broadcasters have the opinion that going from HDTV to 8K UHDTV is too much of a leap and that it would be better to start with 4K UHDTV. In the same article, Masakazu Iwaki, NHK Research senior manager, said that the NHK plan to go directly to 8K UHDTV is for economic reasons, since doing so would avoid an additional transition from 4K UHDTV to 8K UHDTV. On October 18, 2012, the Consumer Electronics Association (CEA) announced that it had been unanimously agreed by the CEA's Board of Industry Leaders that the term "Ultra High-Definition", or "Ultra HD", would be used for displays that have a resolution of at least 8 megapixels, with a vertical resolution of at least 2,160 pixels and a horizontal resolution of at least 3,840 pixels. The Ultra HD label also requires the display to have an aspect ratio of 16:9 or wider and to have at least one digital input that can carry and present a native 3840 × 2160 video signal without having to rely on a video scaler. Sony announced they would market their 4K products as 4K Ultra High-Definition (4K UHD). On October 23, 2012, Ortus Technology Co., Ltd announced the development of the world's smallest 4K LCD panel, with a diagonal size of 9.6 inches and a pixel density of 458px/in. The LCD panel is designed for medical equipment and professional video equipment. On October 25, 2012, LG Electronics began selling the first flat panel Ultra HD display in the United States with a resolution of 3840 × 2160. The LG 84LM9600 is an 84-inch flat panel LED-backlit LCD display with a price of US$19,999, though the retail store was selling it for US$16,999. On November 29, 2012, Sony announced the 4K Ultra HD Video Player, a hard disk server preloaded with ten 4K movies and several 4K video clips that they planned to include with the Sony XBR-84X900. The preloaded 4K movies are The Amazing Spider-Man, Total Recall (2012), The Karate Kid (2010), Salt, Battle: Los Angeles, The Other Guys, Bad Teacher, That's My Boy, Taxi Driver, and The Bridge on the River Kwai. Additional 4K movies and 4K video clips would be offered for the 4K Ultra HD Video Player in the future. On November 30, 2012, Red Digital Cinema Camera Company announced that they were taking pre-orders for the US$1,450 REDRAY 4K Cinema Player, which can output 4K resolution to a single 4K display or to four 1080p displays arranged in any configuration via four HDMI1.4 connections. Video output can be DCI 4K (4096 × 2160), 4K Ultra HD, 1080p, and 720p at frame rates of up to 60fps, with a color depth of up to 12bpc with 4:2:2 chroma subsampling. Audio output can be up to 7.1 channels. Content is distributed online using the ODEMAX video service. External storage can be connected using eSATA, Ethernet, USB, or a Secure Digital memory card.
2013 On January 6, 2013, NHK announced that Super Hi-Vision satellite broadcasts could begin in Japan in 2016. On January 7, 2013, Eutelsat announced the first dedicated 4K Ultra HD channel. Ateme uplinks the H.264/MPEG-4 AVC channel to the Eutelsat 10A satellite. The 4K Ultra HD channel has a frame rate of 50fps and is encoded at 40Mbit/s. The channel started transmission on January 8, 2013. On the same day, Qualcomm CEO Paul Jacobs announced that mobile devices capable of playing and recording 4K Ultra HD video would be released in 2013 using the Snapdragon 800 chip. On January 8, 2013, Broadcom announced the BCM7445, an Ultra HD decoding chip capable of decoding High Efficiency Video Coding (HEVC) at up to 3840 × 2160 at 60fps. The BCM7445 is a 28nm ARM architecture chip capable of 21,000 Dhrystone MIPS, with volume production estimated for the middle of 2014. On the same day, THX announced the "THX 4K Certification" program for Ultra HD displays. The certification involves up to 600 tests, and the goal of the program is to ensure that "content viewed on a THX Certified Ultra HD display meets the most exacting video standards achievable in a consumer television today". On January 14, 2013, Blu-ray Disc Association president Andy Parsons stated that a task force created three months earlier was studying an extension to the Blu-ray Disc specification that would add support for 4K Ultra HD video. On January 25, 2013, the BBC announced that the BBC Natural History Unit would produce Survival, the first wildlife TV series recorded in 4K resolution. This was announced after the BBC had experimented with 8K during the London Olympics. On January 27, 2013, Asahi Shimbun reported that 4K Ultra HD satellite broadcasts would start in Japan with the 2014 FIFA World Cup. Japan's Ministry of Internal Affairs and Communications decided on this move to stimulate demand for 4K Ultra HD TVs. On February 21, 2013, Sony announced that the PlayStation 4 would support 4K resolution output for photos and videos but would not render games at that resolution. On March 26, 2013, the Advanced Television Systems Committee (ATSC) announced a call for proposals for the ATSC 3.0 physical layer, which specifies support for 3840 × 2160 resolution at 60fps. On April 11, 2013, Bulb TV, created by Canadian entrepreneur Evan Kosiner, became the first broadcaster to provide a 4K linear channel and VOD content to cable and satellite companies in North America. The channel is licensed by the Canadian Radio-Television and Telecommunications Commission to provide educational content. On April 19, 2013, SES announced the first Ultra HD transmission using the HEVC standard. The transmission had a resolution of 3840 × 2160 and a bit rate of 20Mbit/s. On May 9, 2013, NHK and Mitsubishi Electric announced that they had jointly developed the first HEVC encoder for 8K Ultra HD TV, which is also called Super Hi-Vision (SHV). The HEVC encoder supports the Main 10 profile at Level 6.1, allowing it to encode 10bpc video with a resolution of 7680 × 4320 at 60fps. The HEVC encoder has 17 3G-SDI inputs and uses 17 boards for parallel processing, with each board encoding a horizontal strip of the picture, to allow for real-time video encoding. The HEVC encoder is compliant with draft 4 of the HEVC standard and has a maximum bit rate of 340Mbit/s. The HEVC encoder was shown at the NHK Science & Technology Research Laboratories Open House 2013, which took place from May 30 to June 2. At the NHK Open House 2013, the HEVC encoder used a bit rate of 85Mbit/s, which gives a compression ratio of roughly 350:1 relative to the uncompressed source.
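The quoted compression ratio can be sanity-checked from the raw signal parameters. The sketch below assumes 4:2:0 chroma subsampling, which is typical for broadcast encoding; the source does not state the subsampling actually used.

```python
# Sanity-check the ~350:1 compression ratio of NHK's 8K HEVC encoder.
# Assumes 4:2:0 chroma subsampling (1.5 samples per pixel on average).
width, height, fps, bits = 7680, 4320, 60, 10
samples_per_pixel = 1.5                  # 4:2:0
raw_bps = width * height * fps * bits * samples_per_pixel
encoded_bps = 85e6                       # 85 Mbit/s, from the text

print(f"raw:   {raw_bps / 1e9:.1f} Gbit/s")      # ~29.9 Gbit/s
print(f"ratio: {raw_bps / encoded_bps:.0f}:1")   # ~351:1
```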
On May 21, 2013, Microsoft announced the Xbox One, which supports 4K resolution (3840 × 2160) video output and 7.1 surround sound. Yusuf Mehdi, corporate vice president of marketing and strategy for Microsoft, has stated that there is no hardware restriction that would prevent Xbox One games from running at 4K resolution. On May 30, 2013, Eye IO announced that their encoding technology was licensed by Sony Pictures Entertainment to deliver 4K Ultra HD video. Eye IO encodes their video assets at 3840 × 2160 and includes support for the xvYCC color space. In mid-2013, a Chinese television manufacturer produced the first 50-inch UHD television set costing less than $1,000. On June 11, 2013, Comcast announced that they had demonstrated the first public U.S.-based delivery of 4K Ultra HD video at the 2013 NCTA show. The demonstration included segments from Oblivion, Defiance, and nature content sent over a DOCSIS 3.0 network. On June 13, 2013, ESPN announced that they would end the broadcast of the ESPN 3D channel by the end of that year and would "...experiment with things like UHDTV." On June 26, 2013, Sharp announced the LC-70UD1U, a 4K Ultra HD TV. The LC-70UD1U is the world's first TV with THX 4K certification. On July 2, 2013, Jimmy Kimmel Live! recorded in 4K Ultra HD a performance by musical guest Karmin, and the video clip was used as demonstration material at Sony stores. On July 3, 2013, Sony announced the release of their 4K Ultra HD Media Player, with a price of US$7.99 for rentals and US$29.99 for purchases. The 4K Ultra HD Media Player only worked with Sony's 4K Ultra HD TVs. On July 15, 2013, the CTA published CTA-861-F, a standard that applies to interfaces such as DVI, HDMI, and LVDS. The CTA-861-F standard adds support for several Ultra HD video formats and additional color spaces. On September 2, 2013, Acer announced the first smartphone, dubbed Liquid S2, capable of recording 4K video. On September 4, 2013, the HDMI Forum released the HDMI 2.0 specification, which supports 4K resolution at 60fps. On the same day, Panasonic announced the Panasonic TC-L65WT600, the first 4K TV to support 4K resolution at 60fps. The Panasonic TC-L65WT600 has a 65-inch screen, support for DisplayPort1.2a, support for HDMI2.0, an expected ship date of October, and a suggested retail price of US$5,999. On September 12–17, 2013, at the 2013 IBC Conference in Amsterdam, Nagra introduced an Ultra HD user interface called Project Ultra based on HTML5, which works with OpenTV 5. On October 4, 2013, DigitalEurope announced the requirements for their UHD logo in Europe. The DigitalEurope UHD logo requires that the display support a resolution of at least 3840 × 2160, a 16:9 aspect ratio, the Rec. 709 (HDTV) color space, 8bpc color depth, a frame rate of 24, 25, 30, 50, or 60fps, and at least 2-channel audio. On October 29, 2013, Elemental Technologies announced support for real-time 4K Ultra HD HEVC video processing. Elemental provided live video streaming of the 2013 Osaka Marathon on October 27, 2013, in a workflow designed by K-Opticom, a telecommunications operator in Japan. Live coverage of the race in 4K Ultra HD was available to viewers at the International Exhibition Center in Osaka. This transmission of 4K Ultra HD HEVC video in real time was an industry first. On November 28, 2013, Dmitri Chernyshenko, chief of the Organizing Committee of the XXII Olympic Winter Games and XI Paralympic Winter Games 2014 in Sochi, stated that the 2014 Olympic Winter Games would be shot in 8K Super Hi-Vision.
On December 25, 2013, YouTube added a "2160p 4K" option to its video player. Previously, a visitor had to select the "original" setting in the video quality menu to watch a video in 4K resolution; with the new setting, YouTube users can much more easily identify and play 4K videos. On December 30, 2013, Samsung announced the availability of its Ultra HDTV for custom orders, the world's largest Ultra HDTV at the time. 2014 On January 22, 2014, the European Southern Observatory became the first scientific organization to deliver Ultra HD footage at regular intervals. On May 6, 2014, France announced DVB-T2 tests in Paris for Ultra HD HEVC broadcasts, with the objective of replacing the current DVB-T MPEG-4 HD national broadcasts by 2020. On May 26, 2014, satellite operator Eutelsat announced the launch of Europe's first Ultra HD demo channel in HEVC, broadcasting at 50fps. The channel is available on the Hot Bird satellites and can be watched by viewers with 4K TVs equipped with DVB-S2 demodulators and HEVC decoders. In June 2014, the FIFA World Cup of that year (held in Brazil) became the first to be shot entirely in 4K Ultra HD, by Sony. The European Broadcasting Union (EBU) broadcast matches of the FIFA World Cup to audiences in North America, Latin America, Europe and Asia in Ultra HD via SES' NSS-7 and SES-6 satellites. Indian satellite TV provider Tata Sky unveiled its plan to launch a 4K UHD service in early 2015 and showcased a live FIFA World Cup quarter-final match in 4K UHD on Sony Entertainment Television's Sony SIX channel. On June 24, 2014, the CEA updated the guidelines for Ultra High-Definition and released guidelines for Connected Ultra High-Definition, adding support for internet video delivered with HEVC. The CEA also began developing a UHD logo for voluntary use by companies that make products meeting CEA guidelines, and clarified that "Ultra High-Definition", "Ultra HD", or "UHD" can be used with other modifiers, giving the example "Ultra High-Definition TV 4K". On July 15, 2014, researchers from the University of Essex captured and delivered the university's graduation ceremonies in 4K UHDTV over the internet in real time using H.264. The 4K video stream was published at 8Mbit/s and 14Mbit/s for all 11 ceremonies, with viewers tuning in from countries including Cyprus, Bulgaria, Germany, Australia and the UK. On September 4, 2014, Canon Inc. announced that a firmware upgrade would add Rec. 2020 color space support to their EOS C500 and EOS C500 PL camera models and their DP-V3010 4K display. On September 4, 2014, Microsoft announced a firmware update for the Microsoft Lumia 1020, 930, Icon, and 1520 phones that adds 4K video recording. The update was released by the individual phone carriers over the following weeks and months. On September 5, 2014, the Blu-ray Disc Association announced that the 4K Blu-ray Disc specification would support 4K video at 60fps, High Efficiency Video Coding, the Rec. 2020 color space, high dynamic range, and 10bpc color depth. 4K Blu-ray Discs would have a data rate of at least 50Mbit/s and might include support for 66GB and 100GB discs. 4K Blu-ray Disc licensing began in 2015, with 4K Blu-ray Disc players released late that year. On September 5, 2014, DigitalEurope released an Ultra HD logo for companies that meet its technical requirements. On September 11, 2014, satellite operator SES announced the first Ultra HD conditional access-protected broadcast using DVB standards at the IBC show in Amsterdam.
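To put the 4K Blu-ray figures above in perspective, the minimum 50Mbit/s stream and the two proposed disc capacities imply playback times in the feature-film range. A quick check (illustrative arithmetic only, ignoring audio tracks and filesystem overhead):

```python
# Playback time of the announced disc capacities at the minimum 4K Blu-ray
# video rate of 50 Mbit/s. Audio and container overhead are ignored here.
for capacity_gb in (66, 100):
    bits = capacity_gb * 1e9 * 8          # disc capacity in bits
    seconds = bits / 50e6                 # bits / (bits per second)
    print(f"{capacity_gb} GB at 50 Mbit/s: ~{seconds / 60:.0f} minutes")
# 66 GB -> ~176 minutes; 100 GB -> ~267 minutes
```

Even the smaller 66GB disc comfortably holds a typical feature film at the minimum rate, with the 100GB option leaving room for higher bit rates or extras.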
The demonstration used a Samsung Ultra HD TV, with a standard Kudelski SmarDTV CI Plus conditional access module, to decrypt a full 3840 × 2160 pixel CAS-protected Ultra HD signal in HEVC, broadcast via an SES Astra satellite at 19.2°E. On November 19, 2014, rock band Linkin Park's concert at Berlin's O2 World Arena was broadcast live in Ultra HD via an Astra 19.2°E satellite. The broadcast was encoded in the UHD 4K standard with the HEVC codec (50fps and a 10bpc color depth) and was a joint enterprise of satellite owner SES, SES Platform Services (later MX1, now part of SES Video) and Samsung. 2015 Indian satellite pay TV provider Tata Sky launched its UHD service and UHD set-top box on 9 January 2015. The service delivers 4K at 50fps; the UHD box is priced at ₹5,900 for existing SD/HD customers and ₹6,400 for new customers. The 2015 Cricket World Cup was telecast live in 4K for free to owners of Tata Sky's UHD 4K STB. In May 2015, France Télévisions broadcast matches from Roland Garros live in Ultra HD via the EUTELSAT 5 West A satellite in the HEVC standard. The channel "France TV Sport Ultra HD" was available via the Fransat platform for viewers in France. In May 2015, satellite operator SES announced that Europe's first free-to-air Ultra HD channel (from Germany's pearl.tv shopping channel) would launch in September 2015, broadcast in native Ultra HD via the Astra 19.2°E satellite position. In June 2015, SES launched its first Ultra HD demonstration channel for cable operators and content distributors in North America to prepare their systems and test their networks for Ultra HD delivery. The channel is broadcast from the SES-3 satellite at 103°W. In June 2015, SPI International previewed its "4K FunBox UHD" Ultra HD channel on the HOT BIRD 4K1 channel, in advance of its commercial launch on Eutelsat's HOT BIRD satellites in the autumn. In July 2015, German HD satellite broadcaster HD+ and TV equipment manufacturer TechniSat announced an Ultra HD TV set with integrated decryption for reception of existing HD+ channels (available in the autumn) and a new Ultra HD demonstration channel due to begin broadcasting in September. On 2 August 2015, the FA Community Shield in England was broadcast in Ultra HD by BT Sport, becoming the first live football game shown in Ultra HD on the world's first commercial Ultra HD channel. The match was shown on Europe's first Ultra HD channel, BT Sport Ultra HD, where selected live English Premier League and European Champions League matches were broadcast. Fashion One 4K launched on September 2, 2015, becoming the first global Ultra HD TV channel. Reaching nearly 370 million households across the world, the fashion, lifestyle and entertainment network broadcasts via satellite from Measat at 91.5°E (for Asia Pacific, the Middle East and Australia) and from the SES satellites Astra 19.2°E (for Europe), SES-3 at 103°W (for North America) and NSS-806 at 47.5°W (for South America). In September 2015, Eutelsat presented new consumer research, conducted by TNS and GfK, on Ultra HD and screen sales in key TV markets. The study looked at consumer exposure to Ultra HD, perceived benefits, and willingness to invest in equipment and content. GfK predicted a 200% increase in Ultra HD screen sales from June to December 2015, with sales expected to reach five million by the end of the year. GfK also forecast that by 2020 Ultra HD screens would represent more than 70% of total sales across Europe and almost 60% in the Middle East and North Africa.
On 2 September 2015, Sony unveiled the Xperia Z5 Premium, the first smartphone with a 4K display. On 9 September 2015, Apple Inc. announced that their new smartphone, the iPhone 6S, could record video in 4K. On 6 October 2015, Microsoft unveiled the Surface Book laptop, with a display of "over 6 million pixels", and their new phones, the Microsoft Lumia 950 and 950 XL, which, in addition to the 4K video recording their predecessors offered, feature displays of "over 5 million pixels". On 8 December 2015, the ceremony of the opening of the Holy Door in Vatican City, which marked the beginning of the Jubilee Year of Mercy in the Roman Catholic Church, was the first worldwide Ultra HD broadcast via satellite. The event was produced by the Vatican Television Center with the support of Eutelsat, Sony, Globecast and DBW Communication. The team carried out advanced experimentation with 4K/high-dynamic-range live images, in particular using Hybrid Log Gamma (HLG) signal technology developed by the BBC's R&D division and Japan's public broadcaster NHK. 2016 On January 11, 2016, during a CES 2016 press conference, the "UHD Alliance", an industry consortium of content creators, distributors, and hardware manufacturers, announced its "Ultra HD Premium" specification, which defines the resolution, bit depth, color gamut, and high dynamic range (HDR) performance required for Ultra HD (UHDTV) content and displays to carry the Ultra HD Premium logo. On April 2, 2016, the Ultra-high-definition television demo channel UHD1 broadcast the Le Corsaire ballet live in Ultra HD from the Vienna State Opera. The programme was produced by Astra satellite owner SES in collaboration with the European culture channel ARTE and transmitted free-to-air, available to anyone with reception of the Astra 19.2°E satellites and an Ultra HD screen equipped with an HEVC decoder. As of April 2016, The NPD Group reported that 6 million 4K UHD televisions had been sold. In May 2016, Modern Times Group, owner of the Viasat DTH platform, announced the launch of Viasat Ultra HD, the first UHD channel for the Nordic region. The channel features selected live sport events produced especially in Ultra HD and was due to launch in the autumn via the SES-5 satellite at 5°E. Viasat also announced an Ultra HD set-top box from Samsung and a TV module that enables existing UHD TVs to display the channel. Satellite operator SES said that the launch of Viasat Ultra HD brought the number of UHD channels (including test channels and regional versions) carried on SES satellites to 24, or 46% of all UHD channels broadcast via satellite worldwide. In August 2016, Sky announced that 4K broadcasts would begin via their new Sky Q 2TB box. The opening match of the 2016–17 Premier League between Hull City and Leicester City on Sky Sports was the first 4K transmission. 2017 On 29 September 2017, BSAT-4a, a satellite dedicated to UHDTV programming and claimed to be "the world's first 8K satellite", was launched from the Guiana Space Centre aboard an Ariane 5 rocket. BSAT-4a would be used for the 2020 Summer Olympics in Japan. Additionally, in September 2017, Kaleidescape, a manufacturer of home-theater movie players and servers, made 4K UHD movies available through its movie store and compatible with its movie players. In December 2017, Qualcomm announced that their Snapdragon 845 chipset, with its Spectra 280 image signal processor, would be the first phone SoC able to record video meeting the UHD Premium specification.
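HLG, used in the 2015 Vatican broadcast above, maps linear scene light to a signal that remains watchable on standard-gamma displays: below a knee point it behaves like a conventional gamma curve, above it the response becomes logarithmic to preserve highlights. A minimal sketch of the HLG opto-electrical transfer function as specified in ARIB STD-B67 and Rec. ITU-R BT.2100 (constants taken from the specification; signal normalization details simplified):

```python
import math

# HLG OETF (ARIB STD-B67 / Rec. ITU-R BT.2100): linear scene light E in [0, 1]
# maps to a non-linear signal E'. Square-root below the knee, log above it.
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e: float) -> float:
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

print(hlg_oetf(1 / 12))  # 0.5  -- the knee point
print(hlg_oetf(1.0))     # ~1.0 -- peak scene light maps to full signal
```

The square-root segment is what makes an HLG signal look acceptable on a legacy SDR display, which is why broadcasters favored it over PQ-based HDR for live transmission.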
2018 In April 2018, RTL started broadcasting its own UHD channel in Germany. First available at Astra 19.2°E, the channel shows UHD productions, Formula 1, football, and Deutschland sucht den Superstar. Satellite operator SES broadcast an 8K television signal via its satellite system for the first time in May 2018. The 8K demonstration content, with a resolution of 7680 × 4320 pixels, a frame rate of 60fps, and 10bpc color depth, was encoded in HEVC and transmitted at a rate of 80Mbit/s via the Astra 3B satellite during SES's Industry Days conference in Luxembourg. In June 2018, fuboTV broadcast the 2018 FIFA World Cup live in 4K and HDR10, becoming the first OTT streaming service to do so. Quarter-final, semi-final and final matches were available on many popular streaming devices, including Apple TV, Chromecast Ultra, Fire TV, Roku and Android TV. Content was streamed at 60 frames per second using HLS and DASH, with HEVC-encoded video delivered in fragmented MP4 containers. On December 1, 2018, NHK launched BS8K, a broadcast channel transmitting at 8K resolution. 2019 On February 25, 2019, at the Mobile World Congress, Sony announced the Xperia 1, the first smartphone featuring an ultrawide 21:9 aspect ratio 4K HDR OLED display (with a resolution of 3840 × 1644); it was released on May 30, 2019. In May 2019, for the first time in Europe, 8K demonstration content was received via satellite without the need for a separate external receiver or decoder. At the 2019 SES Industry Days conference in Betzdorf, Luxembourg, broadcast-quality 8K content (with a resolution of 7680 × 4320 pixels at 50fps) was encoded using a Spin Digital HEVC encoder (at a bit rate of 70Mbit/s), uplinked to a single 33MHz transponder on SES' Astra 28.2°E satellites, and the downlink was received and displayed on a Samsung Q950RB production model TV. List of 4K television channels Global Fashion 4K Festival 4K High 4K TV Europe 4K Heritage 4K UltraHD FunBox 4K Universe Astra Promo beIN Sports 4K (Spain) Canal+ Canal+ 4K Ultra HD (Poland) Digi 4K (Romania) Digiturk UHD Discovery Eurosport 4K Fashion One 4K Fashion TV 4K Festival 4K Insight UHD M3.hu UHD (online only) M6 4K Movistar Fórmula 1 UHD Movistar Partidazo UHD NASA TV NPO 1 Pearl TV ProsiebenSat.1 UHD QVC Deutschland QVC Zwei Rai 4K RMC Sport 1 RTL UHD RTVS UltraHD SES Ultra HD Demo Channel SFR Sport 4K Sky Sport 4K (Italy) Sky UHD1 (UK) Sky UHD2 (UK) Sky Sport UHD (Germany) Sky Sport Bundesliga UHD (Germany) Sky Sports Main Event UK Sky Sports F1 UHD UK Sportklub 4K Sport TV 4K UHD Travelxp Tricolor Ultra HD TF1 4K TNT Sports Ultimate TRT 4K TVE La 1 UHD (Spain) TVP 4K (Poland) UHD-1 V Sport Ultra HD Virgin TV Ultra HD Wow!
4K Africa BTV (Botswana) EBS 4K (Ethiopia) Nahoo sports+ UHD (Ethiopia) Nahoo sports+2 UHD (Ethiopia) ETV sports UHD (Ethiopia) Kana TV 4K (Ethiopia) on Time sports HD (Egypt) Americas NASA TV UHD Sportsnet 4K and Sportsnet One 4K (Canada) TSN 4K and TSN 2 4K (Canada) Hispasat TV 4K (Latin America) Fashion One 4K Fox Sports 4K and Fox Sports 1 4K (USA) DirecTV 4K and DirecTV Cinema 4K (USA) ESPN (USA) 4KUNIVERSE Insight UHD The Country Network SporTV 4K (Brazil) UHD-1 Asia CCTV 4K () CCTV 16 Olympic 4K () Guangdong Radio and Television 4K Variety Channel () Guangzhou Television Ultra HD () SiTV Joy UHD () Wasu-Discovery UHD () Beijing TV Winter Olympic & Documentaries UHD () First Media 4K (Indonesia) IndiHome 4K (Indonesia) Cable 4K (South Korea) KBS1 UHD (South Korea) KBS2 UHD (South Korea) MBC UHD (South Korea) SBS UHD (South Korea) KNN UHD (South Korea) KBC UHD (South Korea) TBC UHD (South Korea) TJB UHD (South Korea) UBC UHD (South Korea) G1 UHD (South Korea) Asia UHD (South Korea) Insight UHD Life U (South Korea) SBS F!L UHD (South Korea) IRIB UHD (Iran) Sky UHD UHD Dream TV UMAX (South Korea) UXN 4K-Sat Tata Play 4K (India) Now Sports 4K (Hong Kong) TVB Jade UHD () Bol Network (Pakistan) Hum News (Pakistan) Kan11 4K (Israel) NHK BS 4K (Japan) BS Nittele 4K (Japan) BS Asahi 4K (Japan) BS TV Tokyo 4K (Japan) BS-TBS 4K (Japan) BS Fuji 4K (Japan) SHOP CHANNEL 4K (Japan) 4K QVC (Japan) THE CINEMA 4K (Japan) J Sports (Japan) Star Channel 4K (Japan) Sukachan 4K (Japan) Japanese movie + Jidaigeki 4K (Japan) Wowow 4K (Japan) Astro Super Sport UHD (Malaysia) True 4K (Thailand) VTVcab 4K (Vietnam) SCTV 4K (Vietnam) Oceania Foxtel Movies Ultra HD (Australia) Fox Sports Ultra HD (Australia) List of 8K television channels NHK BS8K (Japan) CCTV-8K (China) Field trials of UHDTV over DTT networks Field trials of UHDTV have been carried out over existing digital terrestrial television (DTT) transmitter networks in several countries. Status of standardization of UHDTV Standards that deal with UHDTV include the following. Standardization in ITU-R Standards approved in ITU-R: Rec. ITU-R BT.1201-1 (2004); Rec. ITU-R BT.1769 (2006); Rec. ITU-R BT.2020 (2012, revised 2014); Rec. ITU-R BT.2035-0 (07/13), a reference viewing environment for evaluation of HDTV program material or completed programmes; Rec. ITU-R BS.2051-0 (02/14), advanced sound system for programme production; and Rec. ITU-R BT.2100 (2016). Other documents prepared or being prepared by ITU-R: Report ITU-R BT.2246-3 (2014), the present state of ultra-high definition television; and Draft New Report ITU-R BT.[UHDTV-DTT TRIALS] (Sub-Working Group 6A-1), a collection of field trials of UHDTV over DTT networks. Standardization in ITU-T and MPEG Standards developed in ITU-T's VCEG and ISO/IEC JTC 1's MPEG that support Ultra HD include H.265/MPEG-H HEVC High Efficiency Video Coding (2013, revised 2014) and H.264/MPEG-4 AVC Advanced Video Coding (support for Ultra HD added circa 2013). Standardization in SMPTE SMPTE 2036-1 (2009), SMPTE 2036-2 (2008), and SMPTE 2036-3 (2010). Standardization for Europe DVB approved the standard TS 101 154 V2.1.1, published (07/2014) in DVB Blue Book A157, Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream, which was published by ETSI in the following months. Standardization for Japan and South Korea Standards for UHDTV in South Korea have been developed by its Telecommunications Technology Association.
On August 30, 2013, the scenarios for 4K UHDTV service were described in the report "TTAR 07.0011: A Study on the UHDTV Service Scenarios and its Considerations". On May 22, 2014, the technical report "TTAR-07.0013: Terrestrial 4K UHDTV Broadcasting Service" was published. On October 13, 2014, an interim standard, "TTAI.KO-07.0123: Transmission and Reception for Terrestrial UHDTV Broadcasting Service", was published based on HEVC encoding, with MPEG-2 TS and DVB-T2 serving as the underlying standards. On June 24, 2016, a standard, "TTAK.KO-07.0127: Transmission and Reception for Terrestrial UHDTV Broadcasting Service", was published based on HEVC encoding, with MMTP/ROUTE IP and ATSC 3.0 serving as the underlying standards.
Technology
Broadcasting
null
52871617
https://en.wikipedia.org/wiki/Bomdong
Bomdong
Bomdong (), also known as spring cabbage, is a hardy cabbage with tough, sweet leaves. The leaves of bomdong, unlike those of regular napa cabbages, fall to the sides, giving the plant a flat shape. This cabbage is primarily used in making kimchi and salads. 70% of the bomdong grown in Korea comes from South Jeolla province, near Haenam and Jindo. Growth Bomdong is picked between January and March. During growth, the plants spread out like a flower. Culinary use In Korea, bomdong is made into geotjeori (fresh kimchi).
Biology and health sciences
Leafy vegetables
Plants
59107356
https://en.wikipedia.org/wiki/Unbiquadium
Unbiquadium
Unbiquadium, also known as element 124 or eka-uranium, is a hypothetical chemical element; it has placeholder symbol Ubq and atomic number 124. Unbiquadium and Ubq are the temporary IUPAC name and symbol, respectively, until the element is discovered, confirmed, and a permanent name is decided upon. In the periodic table, unbiquadium is expected to be a g-block superactinide and the sixth element in the 8th period. Unbiquadium has attracted attention, as it may lie within the island of stability, leading to longer half-lives, especially for 308Ubq which is predicted to have a magic number of neutrons (184). Despite several searches, unbiquadium has not been synthesized, nor have any naturally occurring isotopes been found to exist. It is believed that the synthesis of unbiquadium will be far more challenging than that of lighter undiscovered elements, and nuclear instability may pose further difficulties in identifying unbiquadium, unless the island of stability has a stronger stabilizing effect than predicted in this region. As a member of the superactinide series, unbiquadium is expected to bear some resemblance to its possible lighter congener uranium. The valence electrons of unbiquadium are expected to participate in chemical reactions fairly easily, though relativistic effects may significantly influence some of its properties; for example, the electron configuration has been calculated to differ considerably from the one predicted by the Aufbau principle. Introduction History Synthesis attempts Because complete nuclear shells (or, equivalently, a magic number of protons or neutrons) may confer additional stability on the nuclei of superheavy elements, moving closer to the center of the island of stability, it was thought that the synthesis of element 124 or nearby elements would populate longer-lived nuclei within the island. Scientists at GANIL (Grand Accélérateur National d'Ions Lourds) attempted to measure the direct and delayed fission of compound nuclei of elements with Z = 114, 120, and 124 in order to probe shell effects in this region and to pinpoint the next spherical proton shell. In 2006, with full results published in 2008, the team provided results from a reaction involving the bombardment of a natural germanium target with uranium ions: 238U + natGe → 308–314124* → fission The team reported that they had been able to identify compound nuclei fissioning with half-lives > 10−18 s. This result suggests a strong stabilizing effect at Z = 124 and points to the next proton shell at Z > 120, not at Z = 114 as previously thought. A compound nucleus is a loose combination of nucleons that have not arranged themselves into nuclear shells yet. It has no internal structure and is held together only by the collision forces between the target and projectile nuclei. It is estimated that it requires around 10−14 s for the nucleons to arrange themselves into nuclear shells, at which point the compound nucleus becomes a nuclide, and this number is used by IUPAC as the minimum half-life a claimed isotope must have to potentially be recognised as being discovered. Thus, the GANIL experiments do not count as a discovery of element 124.
The fission of the compound nucleus 312124 was also studied in 2006 at the tandem ALPI heavy-ion accelerator at the Laboratori Nazionali di Legnaro (Legnaro National Laboratories) in Italy: 232Th + 80Se → 312124* → fission Similarly to previous experiments conducted at the JINR (Joint Institute for Nuclear Research), fission fragments clustered around doubly magic nuclei such as 132Sn (Z = 50, N = 82), revealing a tendency for superheavy nuclei to expel such doubly magic nuclei in fission. The average number of neutrons per fission from the 312124 compound nucleus (relative to lighter systems) was also found to increase, confirming that the trend of heavier nuclei emitting more neutrons during fission continues into the superheavy mass region. Possible natural occurrence A study in 1976 by a group of American researchers from several universities proposed that primordial superheavy elements, mainly livermorium, unbiquadium, unbihexium, and unbiseptium, could be a cause of unexplained radiation damage (particularly radiohalos) in minerals. Unbiquadium was then suggested to exist in nature with its possible congener uranium in detectable quantities, at a relative abundance of 10−11. Such unbiquadium nuclei were thought to undergo alpha decay with very long half-lives down to flerovium, which would then exist in natural lead at a similar concentration (10−11) and undergo spontaneous fission. This prompted many researchers to search for them in nature from 1976 to 1983. A group led by Tom Cahill, a professor at the University of California at Davis, claimed in 1976 that they had detected alpha particles and X-rays with the right energies to cause the damage observed, supporting the presence of these elements. Others claimed that none had been detected, and questioned the proposed characteristics of primordial superheavy nuclei. In particular, they cited that the magic number N = 228 necessary for enhanced stability would create a neutron-excessive nucleus in unbiquadium that would not be beta-stable. This activity was also proposed to be caused by nuclear transmutations in natural cerium, casting further doubt upon this claimed observation of superheavy elements. The possible extent of primordial superheavy elements on Earth today is uncertain. Even if they are confirmed to have caused the radiation damage long ago, they might now have decayed to mere traces, or even be completely gone. It is also uncertain if such superheavy nuclei may be produced naturally at all, as spontaneous fission is expected to terminate the r-process responsible for heavy element formation between mass number 270 and 290, well before elements such as unbiquadium may be formed. Naming Using the 1979 IUPAC recommendations, the element should be temporarily called unbiquadium (symbol Ubq) until it is discovered, the discovery is confirmed, and a permanent name chosen. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations are mostly ignored among scientists who work theoretically or experimentally on superheavy elements, who call it "element 124", with the symbol E124, (124), or 124. Some researchers have also referred to unbiquadium as eka-uranium, a name derived from the system Dmitri Mendeleev used to predict unknown elements, though such an extrapolation might not work for g-block elements with no known congeners and eka-uranium would instead refer to element 144 or 146 when the term is meant to denote the element directly below uranium.
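The plausibility of detecting a primordial superheavy nuclide at a 10−11 relative abundance, as proposed above, comes down to exponential decay over the Earth's roughly 4.5 billion-year history. The sketch below (illustrative numbers only; the half-lives are arbitrary test values, not predictions from the source) shows how steeply the surviving fraction N/N0 = 2^(−t/T½) depends on the assumed half-life.

```python
# Surviving fraction of a primordial nuclide after time t: N/N0 = 2**(-t/T).
t_earth = 4.5e9  # approximate age of the Earth in years

# Assumed test half-lives in years, chosen only to span the relevant range.
for half_life in (1e8, 5e8, 1e9, 1e10):
    fraction = 2 ** (-t_earth / half_life)
    print(f"T1/2 = {half_life:.0e} yr -> surviving fraction {fraction:.1e}")
# 1e8 yr leaves ~2.8e-14 of the original amount; 1e9 yr leaves ~4.4e-2.
```

A half-life of 10^8 years leaves essentially nothing after 4.5 billion years, which is why the 1976 proposal implicitly required half-lives approaching a billion years or more.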
Prospects for future synthesis Every element from mendelevium onward was produced in fusion-evaporation reactions, culminating in the discovery of the heaviest known element oganesson in 2002 and more recently tennessine in 2010. These reactions approached the limit of current technology; for example, the synthesis of tennessine required 22 milligrams of 249Bk and an intense 48Ca beam for six months. The intensity of beams in superheavy element research cannot exceed 1012 projectiles per second without damaging the target and detector, and producing larger quantities of increasingly rare and unstable actinide targets is impractical. Consequently, future experiments must be done at facilities such as the superheavy element factory (SHE-factory) at the Joint Institute for Nuclear Research (JINR) or RIKEN, which will allow experiments to run for longer stretches of time with increased detection capabilities and enable otherwise inaccessible reactions. Even so, it is expected to be a great challenge to continue past elements 120 or 121 given short predicted half-lives and low predicted cross sections. The production of new superheavy elements will require projectiles heavier than 48Ca, which was successfully used in the discovery of elements 114–118, though this necessitates more symmetric reactions which are less favorable. Hence, it is likely that the reactions between 58Fe and a 249Cf or 251Cf target are most promising. Studies on the fission of various superheavy compound nuclei have found that the dynamics of 48Ca- and 58Fe-induced reactions are similar, suggesting that 58Fe projectiles may be viable in producing superheavy nuclei up to Z = 124 or possibly 125. It is also possible that a reaction with 251Cf will produce the compound nucleus 309Ubq* with 185 neutrons, immediately above the N = 184 shell closure. For this reason, the compound nucleus is predicted to have relatively high survival probability and low neutron separation energy, leading to the 1n–3n channels and isotopes 306–308Ubq with a relatively high cross section. These dynamics are highly speculative, as the cross section may be far lower should trends in the production of elements 112–118 continue or the fission barriers be lower than expected, regardless of shell effects, leading to decreased stability against spontaneous fission (which is of growing importance). Nonetheless, the prospect of reaching the N = 184 shell on the proton-rich side of the chart of nuclides by increasing proton number has long been considered; already in 1970, Soviet nuclear physicist Georgy Flyorov suggested bombarding a plutonium target with zinc projectiles to produce isotopes of element 124 at the N = 184 shell. Predicted properties Nuclear stability and isotopes Unbiquadium is of interest to researchers because of its possible location near the center of an island of stability, a theoretical region comprising longer-lived superheavy nuclei. Such an island of stability was first proposed by University of California professor Glenn Seaborg, specifically predicting a region of stability centered at element 126 (unbihexium) and encompassing nearby elements, including unbiquadium, with half-lives possibly as long as 109 years. In known elements, the stability of nuclei decreases greatly with the increase in atomic number after uranium, the heaviest primordial element, so that all observed isotopes with an atomic number above 101 decay radioactively with a half-life under a day. 
Nevertheless, there is a slight increase in nuclear stability in nuclides around atomic numbers 110–114, which suggests the presence of an island of stability. This is attributed to the possible closure of nuclear shells in the superheavy mass region, with stabilizing effects that may lead to half-lives on the order of years or longer for some as-yet undiscovered isotopes of these elements. While still unproven, the existence of superheavy elements as heavy as oganesson provides evidence of such stabilizing effects, as elements with an atomic number greater than approximately 104 are extremely unstable in models neglecting magic numbers. In this region of the periodic table, N = 184 and N = 228 have been proposed as closed neutron shells, and various atomic numbers have been proposed as closed proton shells, including Z = 124. The island of stability is characterized by longer half-lives of nuclei located near these magic numbers, though the extent of stabilizing effects is uncertain due to predictions of weakening of the proton shell closures and possible loss of double magicity. More recent research predicts the island of stability to instead be centered at beta-stable copernicium isotopes 291Cn and 293Cn, which would place unbiquadium well above the island and result in short half-lives regardless of shell effects. A 2016 study on the decay properties of unbiquadium isotopes 284–339Ubq predicts that 284–304Ubq lie outside the proton drip line and thus may be proton emitters, 305–323Ubq may undergo alpha decay, with some chains terminating as far as flerovium, and heavier isotopes will decay by spontaneous fission. These results, as well as those from a quantum-tunneling model, predict no half-lives over a millisecond for isotopes lighter than 319Ubq, as well as especially short half-lives for 309–314Ubq in the sub-microsecond range due to destabilizing effects immediately above the shell at N = 184. This renders the identification of many unbiquadium isotopes nearly impossible with current technology, as detectors cannot distinguish rapid successive signals from alpha decays in a time period shorter than microseconds. Increasingly short spontaneous fission half-lives of superheavy nuclei and the possible domination of fission over alpha decay will also probably determine the stability of unbiquadium isotopes. While some fission half-lives constituting a "sea of instability" may be on the order of 10−18 s as a consequence of very low fission barriers, especially in even–even nuclei due to pairing effects, stabilizing effects at N = 184 and N = 228 may allow the existence of relatively long-lived isotopes. For N = 184, fission half-lives may increase, though alpha half-lives are still expected to be on the order of microseconds or less, despite the shell closure at 308Ubq. It is also possible that the island of stability may shift to the N = 198 region, where total half-lives may be on the order of seconds, in contrast to neighboring isotopes that would undergo fission in less than a microsecond. In the neutron-rich region around N = 228, alpha half-lives are also predicted to increase with increasing neutron number, meaning that the stability of such nuclei would primarily depend on the location of the beta-stability line and resistance to fission. One early calculation by P. Möller, a physicist at Los Alamos National Laboratory, estimates the total half-life of 352Ubq (with N = 228) to be around 67 seconds, and possibly the longest in the N = 228 region.
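Before turning to chemistry, the synthesis figures quoted in the prospects discussion above (a beam ceiling of roughly 10^12 projectiles per second and scarce actinide targets) can be turned into an order-of-magnitude production-rate estimate via rate = σ × beam intensity × target areal density. Every concrete number in the sketch below, including the target thickness and the cross sections, is an assumption chosen purely for illustration; predicted cross sections for element 124 are not given in the source.

```python
# Order-of-magnitude production rate: rate = sigma * beam_intensity * N_target.
# All numerical values below are illustrative assumptions, not sourced data.
AVOGADRO = 6.022e23

target_mg_per_cm2 = 0.5                                 # assumed 251Cf target
n_target = target_mg_per_cm2 * 1e-3 / 251 * AVOGADRO    # ~1.2e18 atoms/cm^2
beam = 1e12                                             # projectiles/s ceiling
seconds_per_year = 3.156e7

for sigma_pb in (1.0, 0.001):                           # 1 pb vs. 1 fb
    sigma_cm2 = sigma_pb * 1e-36                        # 1 pb = 1e-36 cm^2
    atoms_per_year = sigma_cm2 * beam * n_target * seconds_per_year
    print(f"sigma = {sigma_pb} pb -> ~{atoms_per_year:.2g} atoms/year")
# ~38 atoms/year at 1 pb, but only ~0.04 at 1 fb.
```

Even an optimistic picobarn cross section yields only tens of atoms per year of continuous irradiation; at femtobarn scales, decades could pass between events, which is the quantitative core of why synthesis past elements 120–121 is expected to be so difficult.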
Chemical Unbiquadium is the fourth member of the superactinide series and should be similar to uranium: both elements have six valence electrons over a noble gas core. In the superactinide series, the Aufbau principle is expected to break down due to relativistic effects, and an overlap of the 5g, 6f, 7d, and 8p orbitals is expected. The ground state electron configuration of unbiquadium is thus predicted to be [Og] 6f3 8s2 8p1 or 6f2 8s2 8p2, in contrast to [Og] 5g4 8s2 derived from Aufbau. This predicted overlap of orbitals and uncertainty in order of filling, especially for f and g orbitals, renders predictions of chemical and atomic properties of these elements very difficult. One predicted oxidation state of unbiquadium is +6, which would exist in the halides UbqX6 (X = a halogen), analogous to the known +6 oxidation state in uranium. Like the other early superactinides, the binding energies of unbiquadium's valence electrons are predicted to be small enough that all six should easily participate in chemical reactions. The predicted electron configuration of the Ubq5+ ion is [Og] 6f1.
Physical sciences
Periods
Chemistry