Dataset columns:
id: int64 (values 39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (values 3 to 71.8k)
subcategories: list (length 0 to 30)
966,870
https://en.wikipedia.org/wiki/Baird%27s%20tapir
The Baird's tapir (Tapirus bairdii), also known as the Central American tapir, is a species of tapir native to Mexico, Central America, and northwestern South America. It is the largest of the three species of tapir native to the Americas, as well as the largest native land mammal in both Central and South America. Names The Baird's tapir is named after the American naturalist Spencer Fullerton Baird, who traveled to Mexico in 1843 and observed the animals. However, the species was first documented by another American naturalist, W. T. White. Like the other American tapirs (the mountain tapir and the South American tapir), the Baird's tapir is commonly called danta by people in all areas. In the regions around Oaxaca and Veracruz, it is referred to as the . Panamanians, and Colombians call it , and in Belize, where the Baird's tapir is the national animal, it is known as the mountain cow. In Mexico, it is called in Tzeltal; in Lacandon, it is called , meaning "jungle horse" and in Tojolab'al it is called , meaning "big animal". In Panama, the Kunas people call the Baird's tapir moli in their colloquial language (Tule kaya), oloalikinyalilele, oloswikinyaliler, or oloalikinyappi in their political language (Sakla kaya), and ekwirmakka or ekwilamakkatola in their spiritual language (Suar mimmi kaya). Habitat The Baird's tapir is found in many diverse vegetation types. They can withstand elevations from sea level to up to . The animal can be found in wet areas like mangrove forests, marshes, swamp areas, and wet tropical rainforests. It also resides in drier areas like riparian woodlands, deciduous forests, and mountainous cloud forests. It prefers secondary growth forests, when available, due to increase in understory plants for foraging and protection. Food and water availability as well as protection are key factors in habitat selection. Description The Baird's tapir has a distinctive cream-colored marking on its face, throat, and tips of its ears, with a dark spot on each cheek, behind and below the eye. The rest of its bristly hair is dark brown or grayish brown. The animal is very muscly, and about the size of a small donkey. A long thin mane is present but not always conspicuous. It has two small oval shaped eyes flush with the side of the head. Its ears are large, oval-shaped and not very mobile. Baird's tapirs average in length, but can range between , not counting a stubby, vestigial tail of , and in height. Body mass in adults can range from . Like the other species of tapirs, they have small, stubby tails. Their snout and upper lips project forward to create a fleshy and flexible proboscis. This proboscis is their strongest sense organ that aids in finding food and detecting physical stimuli. Their legs are short and slender; well adapted to rapid movement through underbrush. They have four toes on each front foot, and three toes on each back foot. Lifecycle The gestation period is about 400 days, after which one offspring is born to an average mass of . Multiple births are extremely rare, but in September 2020, a Baird's tapir in Boston's Franklin Park Zoo birthed twins. The babies, as with all species of tapir, have reddish-brown hair with white spots and stripes. This pattern creates a camouflage which affords them excellent protection in the shady understory of the forest. This pattern eventually fades into the adult coloration. 
For the first week of their lives, infant Baird's tapirs are hidden in secluded locations while their mothers forage for food and return periodically to nurse them. Later, the young follow their mothers on feeding expeditions. At three weeks of age, the young are able to swim. Weaning occurs after one year, and sexual maturity is usually reached 6 to 12 months later. Baird's tapirs can live for over 30 years. Behavior The Baird's tapir may be active at all hours, but is primarily nocturnal. It forages for leaves and fallen fruit, using well-worn tapir paths which zigzag through the thick undergrowth of the forest. The animal usually stays close to water and enjoys swimming and wading – on especially hot days, individuals will rest in a watering hole for hours with only their heads above water. When in danger, these animals will seek water. It generally leads a solitary life, though small feeding groups are not uncommon, and individuals, especially those of different ages (young with their mothers, juveniles with adults), are often observed together. The animals communicate with one another through shrill whistles and squeaks. When Baird's tapirs mate, they form long-term monogamous pairs, which are known to defend territory. Though they can breed at any point in the year, breeding is most common just before the rainy season. Both parents take part in raising the young, and the family moves and sleeps together as a unit. The mother guides her young with a nudging movement of her proboscis. Ecological relationships The Baird's tapir has a symbiotic relationship with cleaner birds that remove ticks from its fur: the yellow-headed caracara (Milvago chimachima) and the black vulture (Coragyps atratus) have both been observed removing and eating ticks from tapirs. Baird's tapirs often lie down for cleaning, and also present tick-infested areas to the cleaner birds by lifting their limbs and rolling from one side to the other. These animals also have a marginal but noted effect as seed dispersers. Guanacaste (Enterolobium cyclocarpum), Sapodilla (Manilkara zapota), and Encina (Quercus oleoides) seeds have all been found to sometimes pass through the tapir digestive system. The intense chewing of these hard seeds serves to scarify them before germination and can improve a seed's likelihood of success. Diet The Baird's tapir is herbivorous, foraging from the forest floor to over the ground. Leaves from a wide assortment of plants make up the greater part of its diet, but it also eats twigs, flowers, shrubs, grasses, and fruit. Fruit tends to be favored when in season, depending on availability. The mix of plant species in the diet also varies with season. Physical defenses or biting insects on a plant do not deter tapirs from eating it. They spend most of their waking hours foraging, moving in a zigzag fashion. They prefer plants of medium to tall height; the only plants avoided entirely are small, widely dispersed seedlings and large canopy-level trees. In general, an individual will move on to another plant before all of the leaves on the one it is currently eating are consumed. They commonly feed in large tree-fall gaps or secondary forests because of the high density of understory plants, which are, for the most part, highly digestible and contain few protective toxins. 
Once in a while they will ascend on their rear feet to arrive at leaves past their ordinary reach, or knock down slim or dead plants to get fruit or leaves. The absorption of nutrients in light of the huge volume and extreme diversity of recognizable plant parts in their excrement is by all accounts poor. Potential danger to humans Attacks on humans are rare and normally in self-defense. In 2006, Carlos Manuel Rodríguez Echandi, the former Costa Rican Minister of Environment and Energy, was attacked and injured by a Baird's tapir after he followed it off the trail. Due to their size, adults can be potentially dangerous to humans, and should not be approached if spotted in the wild. The animal is most likely to follow or chase a human for a bit, though they have been known to charge and bite humans on rare occasions. Threats According to the IUCN, the Baird's tapir is endangered. There are many contributing factors in the decline of the species, including loss of habitat from deforestation, forest fires, and large scale industrial projects. In certain areas, poaching, disease transmission from domesticated animals, pollution of native water bodies, and the developing effects of climate change all threaten this species. Though the animal is only hunted by a few humans, any loss of life is a serious blow to the tapir population, especially because their reproductive rate is so slow. Conservation In Mexico, Belize, Guatemala, Costa Rica, and Panama, hunting of the Baird's tapirs is illegal, but the laws protecting them are often unenforced. The issues of illegal logging in conserved areas also threaten these animals. Therefore, many conservationists are urging for the protection of existing habitat by improving maintenance and protection in existing habitat, through strengthening partnership with indigenous territories. Goals also include re-establishing corridors of connection between existing habitat including the Mesoamerican Biological Corridor, and improving education of locals to uphold and protect biodiversity. Captive breeding programs are helpful with many large terrestrial species, but there is a study showing a small population of Baird's tapirs in North American and Central American zoos had inbreeding and divergence from the wild population. Conservationists are urging for thoughtful approaches to breeding programs that focus on maintaining genetic diversity. Predators Due to its size, an adult Baird's tapir has very few natural predators, with only large adult American crocodiles ( or more) and adult jaguars capable of preying on Baird's tapirs. Even in these cases, the outcomes are unpredictable and often in the Baird's tapir's favor, as is evident on multiple Baird's tapirs documented in Corcovado National Park with large claw marks covering their hides. However, juveniles may be preyed on by smaller crocodiles and by pumas. In a remote video-monitor, a spectacled bear was captured attacking an adult tapir perhaps nearly twice its own body mass. References External links ARKive – images and movies of the Baird's Tapir (Tapirus bairdii) Tapir Specialist Group – Baird's Tapir Tapirs Ungulates of Central America Mammals of Colombia Mammals of Mexico Fauna of Southern Mexico EDGE species Mammals described in 1865
Baird's tapir
[ "Biology" ]
2,168
[ "EDGE species", "Biodiversity" ]
966,970
https://en.wikipedia.org/wiki/NIST%20stone%20test%20wall
The NIST stone test wall is an experiment by the United States National Institute of Standards and Technology to determine how different types of construction stone weather. It includes 2352 samples of stone from 47 US states and 16 countries. The wall measures approximately 12 m long, 4 m high, 0.6 m thick at the bottom, and 0.3 m thick at the top. It includes varieties of andesite, argillite, basalt, bluestone, breccia, conglomerate, coquina, coral, dacite, diabase, diorite, dolomite, gabbro, gneiss, granite, granodiorite, greenstone, labradorite, limestone, marble, melaphyre, pitchstone, pumice, pyrophyllite, quartz, quartzite, sandstone, schist, serpentinite, shellstone, soapstone, syenite, travertine, and tuff. The wall was built by one stonemason, Vincent Di Benedeto, in 1948. He used two types of stone-setting mortar on the front: a 1:3 lime mortar made with a high-calcium hydrate, and a 1:0.4:3 portland cement, whiting, and sand mortar. The wall was moved from its original location in Washington, D.C., to Gaithersburg, Maryland, in May 1977. References NIST's stone test wall site Stonemasonry Building stone National Institute of Standards and Technology Product testing
NIST stone test wall
[ "Engineering" ]
311
[ "Construction", "Stonemasonry" ]
967,164
https://en.wikipedia.org/wiki/Edman%20degradation
Edman degradation, developed by Pehr Edman, is a method of sequencing amino acids in a peptide. In this method, the amino-terminal residue is labeled and cleaved from the peptide without disrupting the peptide bonds between other amino acid residues. Mechanism Phenyl isothiocyanate is reacted with an uncharged N-terminal amino group, under mildly alkaline conditions, to form a cyclical phenylthiocarbamoyl derivative. Then, under acidic conditions, this derivative of the terminal amino acid is cleaved as a thiazolinone derivative. The thiazolinone amino acid is then selectively extracted into an organic solvent and treated with acid to form the more stable phenylthiohydantoin (PTH) amino acid derivative, which can be identified by chromatography or electrophoresis. This procedure can then be repeated to identify the next amino acid. A major drawback of this technique is that peptides sequenced in this manner cannot have more than 50 to 60 residues (and in practice, under 30). The peptide length is limited because the cyclical derivatization does not always go to completion. The derivatization problem can be resolved by cleaving large peptides into smaller peptides before proceeding with the reaction. With modern machines capable of over 99% efficiency per amino acid, the method can accurately sequence up to 30 amino acids. An advantage of the Edman degradation is that it uses only 10–100 picomoles of peptide for the sequencing process. The Edman degradation reaction was automated in 1967 by Edman and Beggs to speed up the process, and 100 automated devices were in use worldwide by 1973. Limitations Because the Edman degradation proceeds from the N-terminus of the protein, it will not work if the N-terminus has been chemically modified (e.g. by acetylation or formation of pyroglutamic acid). Sequencing will stop if a non-α-amino acid is encountered (e.g. isoaspartic acid), since the favored five-membered ring intermediate cannot be formed. Edman degradation is generally not useful for determining the positions of disulfide bridges. It also requires peptide amounts of 1 picomole or above for discernible results. Coupled analysis Following 2D SDS-PAGE, the proteins can be transferred to a polyvinylidene difluoride (PVDF) blotting membrane for further analysis. Edman degradations can be performed directly from a PVDF membrane. N-terminal sequencing of as few as five to ten amino acid residues may be sufficient to identify a protein of interest (POI). See also Bergmann degradation Dansyl chloride References Molecular biology Organic reactions Protein structure Proteomics Name reactions Chemical tests Degradation reactions
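The read-length limit discussed above comes down to compounding the per-cycle (repetitive) yield: the fraction of peptide chains still giving a clean signal after n cycles is roughly the per-cycle efficiency raised to the power n. A minimal sketch of that arithmetic in Python follows; the 99% figure is quoted above, while the 95% comparison value is an illustrative assumption.

def in_register_fraction(cycles, efficiency):
    """Approximate fraction of chains still yielding a clean PTH signal after a given number of Edman cycles."""
    return efficiency ** cycles

# At ~99% per-cycle efficiency, ~30 cycles still leaves roughly three quarters of chains in register;
# at lower efficiencies the signal collapses much sooner, which is why reads beyond 50-60 residues are impractical.
for eff in (0.99, 0.95):
    for n in (10, 30, 50, 60):
        print(f"efficiency {eff:.0%}, cycles {n:2d}: {in_register_fraction(n, eff):.1%} in register")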
Edman degradation
[ "Chemistry", "Biology" ]
583
[ "Chemical tests", "Organic reactions", "Name reactions", "Structural biology", "Molecular biology", "Biochemistry", "Degradation reactions", "Protein structure" ]
967,212
https://en.wikipedia.org/wiki/Biological%20patents%20in%20the%20United%20States
As with all utility patents in the United States, a biological patent provides the patent holder with the right to exclude others from making, using, selling, or importing the claimed invention or discovery in biology for a limited period of time - for patents filed after 1998, 20 years from the filing date. Until recently, natural biological substances themselves could be patented (apart from any associated process or usage) in the United States if they were sufficiently "isolated" from their naturally occurring states. Prominent historical examples of such patents on isolated products of nature include adrenaline, insulin, vitamin B12, and gene patents. However, the US Supreme Court ruled in 2013 that mere isolation by itself is not sufficient for something to be deemed inventive subject matter. History The United States has been patenting chemical compositions based upon human products for over 100 years. The first patent for a human product was granted on March 20, 1906, for a purified form of adrenaline. It was challenged and upheld in Parke-Davis v. Mulford. Judge Hand argued that natural substances when they are purified are more useful than the original natural substances. The 1970s marked the first time when scientists patented methods on their biotechnological inventions with recombinant DNA. It was not until 1980 that patents for whole-scale living organisms were permitted. In 1980, the U.S. Supreme Court, in Diamond v. Chakrabarty, upheld the first patent on a newly created living organism, a bacterium for digesting crude oil in oil spills. The patent examiner for the United States Patent and Trademark Office had rejected the patent of a living organism, but Chakrabarty appealed. As a rule, raw natural material is generally rejected for patent approval by the USPTO. The Court ruled that as long as the organism is truly "man-made", such as through genetic engineering, then it is patentable. Because the DNA of Chakrabarty's organism was modified, it was patentable. Since that 1980 court case, there have been many patents of genetically modified organisms. This includes bacteria (as just mentioned), viruses, seeds, plants, cells, and even non-human animals. Isolated and manipulated cells - even human cells - can also be patented. In 1998, the U.S. Patent and Trademark Office (PTO) issued a broad patent claiming primate (including human) embryonic stem cells, entitled "Primate Embryonic Stem Cells" (). On 13 March 2001, a second patent () was issued with the same title but focused on human embryonic stem cells. In another example, a genetically modified mouse, dubbed the Oncomouse, that is useful for studying cancer, was patented by Harvard University as . Companies and organizations, like the University of California, have patented entire genomes. Food patents An early example of a food patent is the patent granted to RiceTec for basmati rice in 1997. In 1999, a patent was filed for a peanut butter and jelly sandwich that was without crust. Agriculture giant Monsanto filed for a patent on certain pig genes in 2004. Gene patents A gene patent is a patent on a specific isolated gene sequence, its chemical composition, the processes for obtaining or using it, or a combination of such claims. With respect to subject matter, gene patents may be considered a subset of the broader category of biological patents. 
Gene patents may claim the isolated natural sequences of genes, the use of a natural sequence for purposes such as diagnostic testing, or a natural sequence that has been altered by adding a promoter or other changes to make it more useful. In the United States, patents on genes have only been granted on isolated gene sequences with known functions, and these patents cannot be applied to the naturally occurring genes in humans or any other naturally occurring organism. Examples The "Chakrabarty patent", owned by General Electric, was filed in 1972 and issued in 1981 after the Supreme Court decision discussed above. While not commercially important, this patent and the Supreme Court case "opened the floodgates for protection of biotechnology-related inventions and helped spark the growth of an industry". In 1978, the University of California filed a patent application for the cDNA encoding human growth hormone, which issued in 1982 as U.S. Patent No. 4,363,877 and listed Howard M. Goodman, John Shine, and Peter H. Seeburg as inventors. The University of California licensed its patent to Lilly, leading to extended litigation among the University of California, Lilly, and Genentech; Lilly and Genentech had each introduced recombinant human growth hormone drugs, which were among the first biotech drugs brought to market. The "Cohen/Boyer patents" were invented by Stanley Cohen of Stanford University and Herbert Boyer of the University of California, San Francisco. The patents cover inventions for splicing genes to make recombinant proteins that are foundational to the biotechnology industry. Stanford managed the patents and licensed them nonexclusively and broadly, earning over $200 million for the universities. The "Axel patents" were invented by Richard Axel, Michael H. Wigler, and Saul J. Silverstein of Columbia University. These patents covered cotransformation, a form of transformation and another foundational method of biotechnology; Columbia licensed these patents nonexclusively and broadly and earned about $790 million. Key methods of manipulating DNA to create monoclonal antibodies are covered by a thicket of patents, including the "Winter patent", invented by Gregory P. Winter of the Medical Research Council, which covers methods of making chimeric, humanized antibodies and has been licensed to about fifty companies. Abgenix owned a patent on methods of making transgenic mice lacking endogenous heavy chains. The "Boss patent" was owned by Celltech and covered methods of making recombinant antibodies and antibody fragments, together with vectors and host cells useful in these processes. Genentech owned the "Old Cabilly" patent, which covered altered and native immunoglobulins prepared in recombinant cell culture, as well as the "New Cabilly" patent, which covers artificial synthesis of antibody molecules. Medarex owned a patent that covered high-affinity human antibodies from transgenic mice. These patents have been broadly licensed and have been the subject of litigation among patent holders and companies that have brought monoclonal antibody drugs to market. 
A patent application for the isolated BRCA1 gene and cancer-promoting mutations, as well as methods to diagnose the likelihood of getting breast cancer, was filed by the University of Utah, National Institute of Environmental Health Sciences (NIEHS) and Myriad Genetics in 1994; over the next year, Myriad, in collaboration with investigators from Endo Recherche, Inc., HSC Research & Development Limited Partnership, and University of Pennsylvania, isolated and sequenced the BRCA2 gene, and the first BRCA2 patent was filed in the U.S. by Myriad and other institutions in 1995. Myriad is the exclusive licensee of these patents and has enforced them in the US against clinical diagnostic labs. This means that legally all testing must be done through Myriad's lab or by a lab which it had licensed. This business model led from Myriad being a startup in 1994 to being a publicly traded company with 1200 employees and about $500M in annual revenue in 2012; it also led to controversy and the Association for Molecular Pathology v. Myriad Genetics lawsuit mentioned below. The patents expire, starting in 2014. Myriad Genetics case Association for Molecular Pathology v. Myriad Genetics was a 2013 case challenging the validity of gene patents in the United States, specifically challenging certain claims in issued patents owned or controlled by Myriad Genetics that covered isolated DNA sequences, methods to diagnose propensity to cancer by looking for mutated DNA sequences, and methods to identify drugs using isolated DNA sequences. The case was originally heard in the United States District Court for the Southern District of New York, which ruled that all the challenged claims were not patentable subject matter. Myriad then appealed to the United States Court of Appeals for the Federal Circuit. The Circuit court overturned the previous decision in part, ruling that isolated DNA which does not exist alone in nature can be patented and that the drug screening claims were valid, and confirmed in part, finding the diagnostic claims unpatentable. The plaintiffs appealed to the Supreme Court, which granted cert and remanded the case back to the Federal Circuit. The Federal Circuit did not change its opinion, so on September 25, 2012, the American Civil Liberties Union and the Public Patent Foundation filed a petition for certiorari with the Supreme Court with respect to the second Federal Circuit Decision. On November 30, 2012, the Supreme Court agreed to hear the plaintiffs' appeal of the Federal Circuit's ruling. In June 2013, in Association for Molecular Pathology v. Myriad Genetics (No. 12-398), the court unanimously ruled that, "A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated," invalidating Myriad's patents on the BRCA1 and BRCA2 genes. However, the Court also held synthesized DNA sequences, not occurring in nature, can still be eligible for patent protection. Controversy Controversy over biological patents occurs on many levels, driven by, for example, concern over the expense of patented medicines or diagnostics tests (against Myriad Genetics with respect to their breast cancer diagnostic test), concerns over genetically modified food which comes from patented genetically modified seeds as well as farmer's rights to harvest and plant seeds from the crops, for example legal actions by Monsanto using its patents. 
The patenting of organisms or extracts from indigenous plants or animals that are already known to local populations has been called biopiracy. Critics say that such patents deny local populations the right to use those inventions, for instance, to grow food. In the United States, biological material derived from humans can be patented if it has been sufficiently transformed. In litigation that was famous at the time, a cancer patient, John Moore, sued the University of California. Cancer cells had been removed from Moore as part of his medical treatment; these cells were studied and manipulated by researchers. The resulting cells were "immortalized" and were patented by the university as and have become widely used research tools. The subject of the litigation was the financial gain that the university and researchers achieved by additionally charging money to companies by licensing the cell line. Michael Heller and Rebecca Eisenberg are academic law professors who believe that biological patents are creating a "tragedy of the anticommons," "in which people underuse scarce resources because too many owners can block each other". Others claim that patents have not created this "anticommons" effect on research, based on surveys of scientists. Professional societies of pathologists have criticized patents on disease genes and exclusive licenses to perform DNA diagnostic tests. In the 2009 Myriad case, doctors and pathologists complained that the patent on BRCA1 and BRCA2 genes prevented patients from receiving second opinions on their test results. Pathologists complained that the patent prevented them from carrying out their medical practice of doing diagnostic tests on patient samples and interpreting the results. Another example is a series of lawsuits filed by the Alzheimer's Institute of America (AIA) starting in 2003 with the last ending in 2013, concerning a gene patent it controlled on the Swedish mutation and transgenic mice carrying it; the mutation that is important in Alzheimers. The mice are widely used in Alzheimer's research, both by academic scientists doing basic research and by companies that use the mice to test products in development. Two of these suits are directed to companies that were started based on inventions made at universities (Comentis and Avid), and in each of those cases, the university was sued along with the company. While none of the suits target universities that are conducting basic research using the mice, one of the suits is against Jackson Labs, a nonprofit company that provides transgenic mice to academic and commercial researchers and is an important repository of such mice. Ultimately all the suits failed; the suit against Jackson Labs failed after the NIH granted it protection as a government contractor. See also American Type Culture Collection (ATCC) Bioethics Biopiracy Budapest Treaty Commercialization of indigenous knowledge Diamond v. Chakrabarty was a United States Supreme Court case dealing with whether genetically modified micro-organisms can be patented. Genetically modified food Human Genome Project Intellectual property John Moore (patent) Pharmaceutical patent Plant breeders' rights Stem cell controversy Traditional knowledge References External links Biotechnology on the WIPO web site Amicus Brief by Dr. James Watson Genomics Law Report. Newsletter published by law firm of Robinson Bradshaw & Hinson. 
The Human Genome Project information pages Human Genome Project pages on Genetics and Patenting Bioethics and Patent Law: The Cases of Moore and the Hagahai People by Anja von der Ropp and Tony Taubman, WIPO Magazine, September 2006. "An Examination of the Issues Surrounding Biotechnology Patenting and its Effect Upon Entrepreneurial Companies", United States Congressional Research Service, August 31, 2000 "Stem Cell Research and Patents: An Introduction to the Issues", United States Congressional Research Service, September 10, 2001 United States patent law Biological patent law Biotechnology in the United States Genetic engineering in the United States Environmental law in the United States
Biological patents in the United States
[ "Biology" ]
2,704
[ "Biotechnology law", "Biological patent law", "Biotechnology in the United States", "Biotechnology by country" ]
967,315
https://en.wikipedia.org/wiki/Temporal%20paradox
A temporal paradox, time paradox, or time travel paradox, is a paradox, an apparent contradiction, or logical contradiction associated with the idea of time travel or other foreknowledge of the future. While the notion of time travel to the future complies with the current understanding of physics via relativistic time dilation, temporal paradoxes arise from circumstances involving hypothetical time travel to the past – and are often used to demonstrate its impossibility. Types Temporal paradoxes fall into three broad groups: bootstrap paradoxes, consistency paradoxes, and Newcomb's paradox. Bootstrap paradoxes violate causality by allowing future events to influence the past and cause themselves, or "bootstrapping", which derives from the idiom "." Consistency paradoxes, on the other hand, are those where future events influence the past to cause an apparent contradiction, exemplified by the grandfather paradox, where a person travels to the past to prevent the conception of one of their ancestors, thus eliminating all the ancestor's descendants. Newcomb's paradox stems from the apparent contradictions that stem from the assumptions of both free will and foreknowledge of future events. All of these are sometimes referred to individually as "causal loops." The term "time loop" is sometimes referred to as a causal loop, but although they appear similar, causal loops are unchanging and self-originating, whereas time loops are constantly resetting. Bootstrap paradox A bootstrap paradox, also known as an information loop, an information paradox, an ontological paradox, or a "predestination paradox" is a paradox of time travel that occurs when any event, such as an action, information, an object, or a person, ultimately causes itself, as a consequence of either retrocausality or time travel. Backward time travel would allow information, people, or objects whose histories seem to "come from nowhere". Such causally looped events then exist in spacetime, but their origin cannot be determined. The notion of objects or information that are "self-existing" in this way is often viewed as paradoxical. A notable example occurs in the 1958 science fiction short story "—All You Zombies—", by Robert A. Heinlein, wherein the main character, an intersex individual, becomes both their own mother and father; it was adapted with great fidelity in the 2014 film Predestination. Allen Everett gives the movie Somewhere in Time as an example involving an object with no origin: an old woman gives a watch to a playwright who later travels back in time and meets the same woman when she was young, and gives her the same watch that she will later give to him. An example of information which "came from nowhere" is in the movie Star Trek IV: The Voyage Home, in which a 23rd-century engineer travels back in time, and gives the formula for transparent aluminum to the 20th-century engineer who supposedly invented it. Predestination paradox Smeenk uses the term "predestination paradox" to refer specifically to situations in which a time traveler goes back in time to try to prevent some event in the past. The "predestination paradox" is a concept in time travel and temporal mechanics, often explored in science fiction. It occurs when a future event is the cause of a past event, which in turn becomes the cause of the future event, forming a self-sustaining loop in time. This paradox challenges conventional understandings of cause and effect, as the events involved are both the origin and the result of each other. 
A notable example is found in the TV series Doctor Who, where a character saves her father in the past, fulfilling a memory he had shared with her as a child about a strange woman having saved his life. The predestination paradox raises philosophical questions about free will, determinism, and the nature of time itself. It is commonly used as a narrative device in fiction to highlight the interconnectedness of events and the inevitability of certain outcomes. Consistency paradox The consistency paradox or grandfather paradox occurs when the past is changed in any way that directly negates the conditions required for the time travel to occur in the first place, thus creating a contradiction. A common example given is traveling to the past and intervening with the conception of one's ancestors (such as causing the death of the parent beforehand), thus affecting the conception of oneself. If the time traveler were not born, then it would not be possible for the traveler to undertake such an act in the first place. Therefore, the ancestor lives to conceive the time-traveler's next-generation ancestor, and eventually the time traveler. There is thus no predicted outcome to this. Consistency paradoxes occur whenever changing the past is possible. A possible resolution is that a time traveller can do anything that did happen, but cannot do anything that did not happen. Doing something that did not happen results in a contradiction. This is referred to as the Novikov self-consistency principle. Variants The grandfather paradox encompasses any change to the past, and it is presented in many variations, including killing one's past self. Both the "retro-suicide paradox" and the "grandfather paradox" appeared in letters written into Amazing Stories in the 1920s. Another variant of the grandfather paradox is the "Hitler paradox" or "Hitler's murder paradox", in which the protagonist travels back in time to murder Adolf Hitler before he can instigate World War II and the Holocaust. Rather than necessarily physically preventing time travel, the action removes any reason for the travel, along with any knowledge that the reason ever existed. Physicist John Garrison et al. give a variation of the paradox of an electronic circuit that sends a signal through a time machine to shut itself off, and receives the signal before it sends it. Newcomb's paradox Newcomb's paradox is a thought experiment showing an apparent contradiction between the expected utility principle and the strategic dominance principle. The thought experiment is often extended to explore causality and free will by allowing for "perfect predictors": if perfect predictors of the future exist, for example if time travel exists as a mechanism for making perfect predictions, then perfect predictions appear to contradict free will because decisions apparently made with free will are already known to the perfect predictor. Predestination does not necessarily involve a supernatural power, and could be the result of other "infallible foreknowledge" mechanisms. Problems arising from infallibility and influencing the future are explored in Newcomb's paradox. Proposed resolutions Logical impossibility Even without knowing whether time travel to the past is physically possible, it is possible to show using modal logic that changing the past results in a logical contradiction. If it is necessarily true that the past happened in a certain way, then it is false and impossible for the past to have occurred in any other way. 
A time traveler would not be able to change the past from the way it is, but would only act in a way that is already consistent with what necessarily happened. Consideration of the grandfather paradox has led some to the idea that time travel is by its very nature paradoxical and therefore logically impossible. For example, the philosopher Bradley Dowden made this sort of argument in the textbook Logical Reasoning, arguing that the possibility of creating a contradiction rules out time travel to the past entirely. However, some philosophers and scientists believe that time travel into the past need not be logically impossible provided that there is no possibility of changing the past, as suggested, for example, by the Novikov self-consistency principle. Dowden revised his view after being convinced of this in an exchange with the philosopher Norman Swartz. Illusory time Consideration of the possibility of backward time travel in a hypothetical universe described by a Gödel metric led famed logician Kurt Gödel to assert that time might itself be a sort of illusion. He suggests something along the lines of the block time view, in which time is just another dimension like space, with all events at all times being fixed within this four-dimensional "block". Physical impossibility Sergey Krasnikov writes that these bootstrap paradoxes – information or an object looping through time – are the same; the primary apparent paradox is a physical system evolving into a state in a way that is not governed by its laws. He does not find these paradoxical and attributes problems regarding the validity of time travel to other factors in the interpretation of general relativity. Self-sufficient loops A 1992 paper by physicists Andrei Lossev and Igor Novikov labeled such items without origin as Jinn, with the singular term Jinnee. This terminology was inspired by the Jinn of the Quran, which are described as leaving no trace when they disappear. Lossev and Novikov allowed the term "Jinn" to cover both objects and information with the reflexive origin; they called the former "Jinn of the first kind", and the latter "Jinn of the second kind". They point out that an object making circular passage through time must be identical whenever it is brought back to the past, otherwise it would create an inconsistency; the second law of thermodynamics seems to require that the object tends to a lower energy state throughout its history, and such objects that are identical in repeating points in their history seem to contradict this, but Lossev and Novikov argued that since the second law only requires entropy to increase in closed systems, a Jinnee could interact with its environment in such a way as to regain "lost" entropy. They emphasize that there is no "strict difference" between Jinn of the first and second kind. Krasnikov equivocates between "Jinn", "self-sufficient loops", and "self-existing objects", calling them "lions" or "looping or intruding objects", and asserts that they are no less physical than conventional objects, "which, after all, also could appear only from either infinity or a singularity." Novikov self-consistency principle The self-consistency principle developed by Igor Dmitriyevich Novikov expresses one view as to how backward time travel would be possible without the generation of paradoxes. 
According to this hypothesis, even though general relativity permits some exact solutions that allow for time travel that contain closed timelike curves that lead back to the same point in spacetime, physics in or near closed timelike curves (time machines) can only be consistent with the universal laws of physics, and thus only self-consistent events can occur. Anything a time traveler does in the past must have been part of history all along, and the time traveler can never do anything to prevent the trip back in time from happening, since this would represent an inconsistency. The authors concluded that time travel need not lead to unresolvable paradoxes, regardless of what type of object was sent to the past. Physicist Joseph Polchinski considered a potentially paradoxical situation involving a billiard ball that is fired into a wormhole at just the right angle such that it will be sent back in time and collides with its earlier self, knocking it off course, which would stop it from entering the wormhole in the first place. Kip Thorne referred to this problem as "Polchinski's paradox". Thorne and two of his students at Caltech, Fernando Echeverria and Gunnar Klinkhammer, went on to find a solution that avoided any inconsistencies, and found that there was more than one self-consistent solution, with slightly different angles for the glancing blow in each case. Later analysis by Thorne and Robert Forward showed that for certain initial trajectories of the billiard ball, there could be an infinite number of self-consistent solutions. It is plausible that there exist self-consistent extensions for every possible initial trajectory, although this has not been proven. The lack of constraints on initial conditions only applies to spacetime outside of the chronology-violating region of spacetime; the constraints on the chronology-violating region might prove to be paradoxical, but this is not yet known. Novikov's views are not widely accepted. Visser views causal loops and Novikov's self-consistency principle as an ad hoc solution, and supposes that there are far more damaging implications of time travel. Krasnikov similarly finds no inherent fault in causal loops but finds other problems with time travel in general relativity. Another conjecture, the cosmic censorship hypothesis, suggests that every closed timelike curve passes through an event horizon, which prevents such causal loops from being observed. Parallel universes The interacting-multiple-universes approach is a variation of the many-worlds interpretation of quantum mechanics that involves time travelers arriving in a different universe than the one from which they came; it has been argued that, since travelers arrive in a different universe's history and not their history, this is not "genuine" time travel. Stephen Hawking has argued for the chronology protection conjecture, that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history so that time travelers remain within their world rather than traveling to a different one. David Deutsch has proposed that quantum computation with a negative delay—backward time travel—produces only self-consistent solutions, and the chronology-violating region imposes constraints that are not apparent through classical reasoning. 
However Deutsch's self-consistency condition has been demonstrated as capable of being fulfilled to arbitrary precision by any system subject to the laws of classical statistical mechanics, even if it is not built up by quantum systems. Allen Everett has also argued that even if Deutsch's approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time, with different particles emerging in different worlds. See also Quantum mechanics of time travel Fermi paradox Cosmic censorship hypothesis Retrocausality Wormhole Causality Causal structure Chronology protection conjecture Münchhausen trilemma Time loop Time travel in fiction Time travel References Causality Physical paradoxes Thought experiments in physics Time travel
Temporal paradox
[ "Physics" ]
2,873
[ "Spacetime", "Physical quantities", "Time", "Time travel" ]
967,440
https://en.wikipedia.org/wiki/Riesz%E2%80%93Thorin%20theorem
In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin. This theorem bounds the norms of linear maps acting between spaces. Its usefulness stems from the fact that some of these spaces have rather simpler structure than others. Usually that refers to which is a Hilbert space, or to and . Therefore one may prove theorems about the more complicated cases by proving them in two simple cases and then using the Riesz–Thorin theorem to pass from the simple cases to the complicated cases. The Marcinkiewicz theorem is similar but applies also to a class of non-linear maps. Motivation First we need the following definition: Definition. Let be two numbers such that . Then for define by: . By splitting up the function in as the product and applying Hölder's inequality to its power, we obtain the following result, foundational in the study of -spaces: This result, whose name derives from the convexity of the map on , implies that . On the other hand, if we take the layer-cake decomposition , then we see that and , whence we obtain the following result: In particular, the above result implies that is included in , the sumset of and in the space of all measurable functions. Therefore, we have the following chain of inclusions: In practice, we often encounter operators defined on the sumset . For example, the Riemann–Lebesgue lemma shows that the Fourier transform maps boundedly into , and Plancherel's theorem shows that the Fourier transform maps boundedly into itself, hence the Fourier transform extends to by setting for all and . It is therefore natural to investigate the behavior of such operators on the intermediate subspaces . To this end, we go back to our example and note that the Fourier transform on the sumset was obtained by taking the sum of two instantiations of the same operator, namely These really are the same operator, in the sense that they agree on the subspace . Since the intersection contains simple functions, it is dense in both and . Densely defined continuous operators admit unique extensions, and so we are justified in considering and to be the same. Therefore, the problem of studying operators on the sumset essentially reduces to the study of operators that map two natural domain spaces, and , boundedly to two target spaces: and , respectively. Since such operators map the sumset space to , it is natural to expect that these operators map the intermediate space to the corresponding intermediate space . Statement of the theorem There are several ways to state the Riesz–Thorin interpolation theorem; to be consistent with the notations in the previous section, we shall use the sumset formulation. In other words, if is simultaneously of type and of type , then is of type for all . In this manner, the interpolation theorem lends itself to a pictorial description. Indeed, the Riesz diagram of is the collection of all points in the unit square such that is of type . The interpolation theorem states that the Riesz diagram of is a convex set: given two points in the Riesz diagram, the line segment that connects them will also be in the diagram. The interpolation theorem was originally stated and proved by Marcel Riesz in 1927. The 1927 paper establishes the theorem only for the lower triangle of the Riesz diagram, viz., with the restriction that and . 
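The displayed formulas in the passage above did not survive extraction. For reference, in the usual sumset formulation the statement being described reads as follows; this is a reconstruction of the standard statement of the Riesz–Thorin theorem, not text recovered from this article. For exponents $1 \le p_0, p_1, q_0, q_1 \le \infty$ and $0 < \theta < 1$, define
$$\frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}, \qquad \frac{1}{q_\theta} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1}.$$
If a linear operator $T$ satisfies $\|Tf\|_{q_0} \le M_0 \|f\|_{p_0}$ and $\|Tf\|_{q_1} \le M_1 \|f\|_{p_1}$, that is, $T$ is of type $(p_0, q_0)$ and of type $(p_1, q_1)$, then $T$ is of type $(p_\theta, q_\theta)$ with
$$\|Tf\|_{q_\theta} \le M_0^{1-\theta} M_1^{\theta} \, \|f\|_{p_\theta}.$$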
Olof Thorin extended the interpolation theorem to the entire square, removing the lower-triangle restriction. The proof of Thorin was originally published in 1938 and was subsequently expanded upon in his 1948 thesis. Proof We will first prove the result for simple functions and eventually show how the argument can be extended by density to all measurable functions. Simple functions By symmetry, let us assume (the case trivially follows from ()). Let be a simple function, that is for some finite , and , . Similarly, let denote a simple function , namely for some finite , and , . Note that, since we are assuming and to be -finite metric spaces, and for all . Then, by proper normalization, we can assume and , with and with , as defined by the theorem statement. Next, we define the two complex functions Note that, for , and . We then extend and to depend on a complex parameter as follows: so that and . Here, we are implicitly excluding the case , which yields : In that case, one can simply take , independently of , and the following argument will only require minor adaptations. Let us now introduce the function where are constants independent of . We readily see that is an entire function, bounded on the strip . Then, in order to prove (), we only need to show that for all and as constructed above. Indeed, if () holds true, by Hadamard three-lines theorem, for all and . This means, by fixing , that where the supremum is taken with respect to all simple functions with . The left-hand side can be rewritten by means of the following lemma. In our case, the lemma above implies for all simple function with . Equivalently, for a generic simple function, Proof of () Let us now prove that our claim () is indeed certain. The sequence consists of disjoint subsets in and, thus, each belongs to (at most) one of them, say . Then, for , which implies that . With a parallel argument, each belongs to (at most) one of the sets supporting , say , and We can now bound : By applying Hölder’s inequality with conjugate exponents and , we have We can repeat the same process for to obtain , and, finally, Extension to all measurable functions in Lpθ So far, we have proven that when is a simple function. As already mentioned, the inequality holds true for all by the density of simple functions in . Formally, let and let be a sequence of simple functions such that , for all , and pointwise. Let and define , , and . Note that, since we are assuming , and, equivalently, and . Let us see what happens in the limit for . Since , and , by the dominated convergence theorem one readily has Similarly, , and imply and, by the linearity of as an operator of types and (we have not proven yet that it is of type for a generic ) It is now easy to prove that and in measure: For any , Chebyshev’s inequality yields and similarly for . Then, and a.e. for some subsequence and, in turn, a.e. Then, by Fatou’s lemma and recalling that () holds true for simple functions, Interpolation of analytic families of operators The proof outline presented in the above section readily generalizes to the case in which the operator is allowed to vary analytically. 
In fact, an analogous proof can be carried out to establish a bound on the entire function from which we obtain the following theorem of Elias Stein, published in his 1956 thesis: The theory of real Hardy spaces and the space of bounded mean oscillations permits us to wield the Stein interpolation theorem argument in dealing with operators on the Hardy space and the space of bounded mean oscillations; this is a result of Charles Fefferman and Elias Stein. Applications Hausdorff–Young inequality It has been shown in the first section that the Fourier transform maps boundedly into and into itself. A similar argument shows that the Fourier series operator, which transforms periodic functions into functions whose values are the Fourier coefficients maps boundedly into and into . The Riesz–Thorin interpolation theorem now implies the following: where and . This is the Hausdorff–Young inequality. The Hausdorff–Young inequality can also be established for the Fourier transform on locally compact Abelian groups. The norm estimate of 1 is not optimal. See the main article for references. Convolution operators Let be a fixed integrable function and let be the operator of convolution with , i.e., for each function we have . It follows from Fubini's theorem that is bounded from to and it is trivial that it is bounded from to (both bounds are by ). Therefore the Riesz–Thorin theorem gives We take this inequality and switch the role of the operator and the operand, or in other words, we think of as the operator of convolution with , and get that is bounded from to Lp. Further, since is in we get, in view of Hölder's inequality, that is bounded from to , where again . So interpolating we get where the connection between p, r and s is The Hilbert transform The Hilbert transform of is given by where p.v. indicates the Cauchy principal value of the integral. The Hilbert transform is a Fourier multiplier operator with a particularly simple multiplier: It follows from the Plancherel theorem that the Hilbert transform maps boundedly into itself. Nevertheless, the Hilbert transform is not bounded on or , and so we cannot use the Riesz–Thorin interpolation theorem directly. To see why we do not have these endpoint bounds, it suffices to compute the Hilbert transform of the simple functions and . We can show, however, that for all Schwartz functions , and this identity can be used in conjunction with the Cauchy–Schwarz inequality to show that the Hilbert transform maps boundedly into itself for all . Interpolation now establishes the bound for all , and the self-adjointness of the Hilbert transform can be used to carry over these bounds to the case. Comparison with the real interpolation method While the Riesz–Thorin interpolation theorem and its variants are powerful tools that yield a clean estimate on the interpolated operator norms, they suffer from numerous defects: some minor, some more severe. Note first that the complex-analytic nature of the proof of the Riesz–Thorin interpolation theorem forces the scalar field to be . For extended-real-valued functions, this restriction can be bypassed by redefining the function to be finite everywhere—possible, as every integrable function must be finite almost everywhere. A more serious disadvantage is that, in practice, many operators, such as the Hardy–Littlewood maximal operator and the Calderón–Zygmund operators, do not have good endpoint estimates. 
In the case of the Hilbert transform in the previous section, we were able to bypass this problem by explicitly computing the norm estimates at several midway points. This is cumbersome and is often not possible in more general scenarios. Since many such operators satisfy the weak-type estimates real interpolation theorems such as the Marcinkiewicz interpolation theorem are better-suited for them. Furthermore, a good number of important operators, such as the Hardy-Littlewood maximal operator, are only sublinear. This is not a hindrance to applying real interpolation methods, but complex interpolation methods are ill-equipped to handle non-linear operators. On the other hand, real interpolation methods, compared to complex interpolation methods, tend to produce worse estimates on the intermediate operator norms and do not behave as well off the diagonal in the Riesz diagram. The off-diagonal versions of the Marcinkiewicz interpolation theorem require the formalism of Lorentz spaces and do not necessarily produce norm estimates on the -spaces. Mityagin's theorem B. Mityagin extended the Riesz–Thorin theorem; this extension is formulated here in the special case of spaces of sequences with unconditional bases (cf. below). Assume: Then for any unconditional Banach space of sequences , that is, for any and any , . The proof is based on the Krein–Milman theorem. See also Marcinkiewicz interpolation theorem Interpolation space Notes References . . Translated from the Russian and edited by G. P. Barker and G. Kuerti. . . External links Theorems involving convexity Theorems in Fourier analysis Theorems in functional analysis Theorems in harmonic analysis Banach spaces Lp spaces Operator theory
Riesz–Thorin theorem
[ "Mathematics" ]
2,551
[ "Theorems in mathematical analysis", "Theorems in functional analysis", "Theorems in harmonic analysis" ]
967,488
https://en.wikipedia.org/wiki/Messier%2077
Messier 77 (M77), also known as NGC 1068 or the Squid Galaxy, is a barred spiral galaxy in the constellation Cetus. It is about away from Earth, and was discovered by Pierre Méchain in 1780, who originally described it as a nebula. Méchain then communicated his discovery to Charles Messier, who subsequently listed the object in his catalog. Both Messier and William Herschel described this galaxy as a star cluster. Today, however, the object is known to be a galaxy. It is one of the brightest Seyfert galaxies visible from Earth and has a D25 isophotal diameter of about . The morphological classification of NGC 1068 in the De Vaucouleurs system is (R)SA(rs)b, where the '(R)' indicates an outer ring-like structure, 'SA' denotes a non-barred spiral, '(rs)' means a transitional inner ring/spiral structure, and 'b' says the spiral arms are moderately wound. Ann et al. (2015) gave it a class of SAa, suggesting tightly wound arms. However, infrared images of the inner part of the galaxy reveal a prominent bar not seen in visual light, and for this reason it is now considered a barred spiral. Messier 77 is an active galaxy with an active galactic nucleus (AGN), which is obscured from view by astronomical dust at visible wavelengths. The diameter of the molecular disk and hot plasma associated with the obscuring material was first measured at radio wavelengths by the VLBA and VLA. The hot dust around the nucleus was subsequently measured in the mid-infrared by the MIDI instrument at the VLTI. It is the brightest and one of the closest and best-studied type 2 Seyfert galaxies, forming a prototype of this class. X-ray source 1H 0244+001 in Cetus has been identified as Messier 77. Only one supernova has been detected in Messier 77. The supernova, named SN 2018ivc, was discovered on 24 November 2018 by the DLT40 Survey. It is a type II supernova, and at discovery it was 15th magnitude and brightening. It has a radio jet consisting of a northeast and a southwest region, caused by interactions with the interstellar medium. In February 2022 astronomers reported a cloud of cosmic dust, detected through infrared interferometry observations, located at the centre of Messier 77 that is hiding a supermassive black hole. In November 2022, the IceCube collaboration announced the detection of a neutrino source emitted by the active galactic nucleus of Messier 77. It is the second detection by IceCube after TXS 0506+056, and only the fourth known source including SN1987A and solar neutrinos. See also List of Messier objects NGC 1106 Another galaxy with an active galactic nucleus. References External links "StarDate: M77 Fact Sheet" Spiral Galaxy M77 @ SEDS Messier pages VLBA image of the month: radio continuum and water masers of NGC 1068 Press release about VLTI observations of NGC 1068 ESO:Dazzling Spiral with an Active Heart incl. Potos & Animation Barred spiral galaxies Seyfert galaxies Radio galaxies Luminous infrared galaxies Cetus 077 NGC objects 02188 10266 041 Astronomical objects discovered in 1780 071 4C objects Discoveries by Pierre Méchain
Messier 77
[ "Astronomy" ]
712
[ "Cetus", "Constellations" ]
967,511
https://en.wikipedia.org/wiki/Messier%2078
Messier 78 or M78, also known as NGC 2068, is a reflection nebula in the constellation Orion. It was discovered by Pierre Méchain in 1780 and included by Charles Messier in his catalog of comet-like objects that same year. M78 is the brightest diffuse reflection nebula of a group of nebulae that includes NGC 2064, NGC 2067 and NGC 2071. This group belongs to the Orion B molecular cloud complex and is about distant from Earth. M78 is easily found in small telescopes as a hazy patch and involves two stars of 10th and 11th magnitude. These two B-type stars, and , are responsible for making the cloud of dust in M78 visible by reflecting their light. The M78 cloud contains a cluster of stars that is visible in the infrared. Due to gravity, the molecular gas in the nebula has fragmented into a hierarchy of clumps, whose cores have masses ranging from to . About 45 variable stars of the T Tauri type, young stars still in the process of formation, are members as well. Similarly, 17 Herbig–Haro objects are known in M78. On May 23, 2024, the European Space Agency released an initial set of images from their Euclid mission. This included an unprecedented image of the region including M78. It showed hundreds of thousands of new objects including sub-stellar sized ones for the first time. Gallery See also List of Messier objects References External links SEDS: Starforming Nebula M78 NightSkyInfo.com – M78 Astronomy Picture of the Day M78 Wide Field 2009 November 26 M78 and Reflecting Dust Clouds in Orion 2010 March 2 Messier 078 Messier 078 Orion–Cygnus Arm Messier 078 078 Messier 078 Astronomical objects discovered in 1780 Discoveries by Pierre Méchain
Messier 78
[ "Astronomy" ]
383
[ "Constellations", "Orion (constellation)" ]
967,512
https://en.wikipedia.org/wiki/Dragonfly%3A%20NASA%20and%20the%20Crisis%20Aboard%20Mir
Dragonfly: NASA and the Crisis Aboard Mir () is a 1999 book by Bryan Burrough about the Russian Mir space station and the cosmonauts and astronauts who served aboard. The story centres on astronaut Jerry Linenger and the events of the Shuttle–Mir programme in 1997. See also The Buran spacecraft, designed as an equivalent to the US Space Shuttle. The Energia rocket, designed to serve as an expendable launch system for the Soviet space programme. Baikonur Cosmodrome, the launch base in Kazakhstan. Ethylene glycol, the anti-freeze which leaked on board Mir. External links Houston, We Have a Problem. New York Times Review NASA Photo Gallery for STS-84 mission 1999 non-fiction books Mir Spaceflight books
Dragonfly: NASA and the Crisis Aboard Mir
[ "Astronomy" ]
158
[ "Outer space stubs", "Astronomy book stubs", "Outer space", "Astronomy stubs" ]
967,532
https://en.wikipedia.org/wiki/Messier%2079
Messier 79 (also known as M79 or NGC 1904) is a globular cluster in the southern constellation Lepus. It was discovered by Pierre Méchain in 1780 and is about 42,000 light-years away from Earth and 60,000 light-years from the Galactic Center. Like Messier 54 (the other extragalactic globular on Messier's list), it is believed not to be native to the Milky Way galaxy at all, but instead to the putative Canis Major Dwarf Galaxy, which is currently experiencing a very close encounter with our galaxy. This is, however, a contentious subject as astronomers are still debating the nature of the Canis Major dwarf galaxy itself. Messier 79 may also be part of the Gaia Sausage. The cluster is being disrupted by the galactic tide, trailing a long tidal tail. Color-magnitude diagram This color-magnitude diagram was made using near-infrared images of the cluster in J and K bands. The J-band magnitude is plotted along the y-axis and the J - K color (the difference between the J- and K-band magnitudes) is plotted along the x-axis. Such a diagram is made rapidly with specialized code for crowded-field photometry. From this, it is evident that most of the bright stars in this cluster are red giants. The elongated branch is the red giant branch. Some of the stars in the diagram, including those extending outward from the red giant branch toward the upper left, are actually foreground stars that are not members of the cluster. Altogether three regions of the Hertzsprung–Russell diagram are present here: the low-mass end of the main sequence, the complete red giant branch and the horizontal branch. Compared to optical bands, in infrared bands the lower main sequence is shallower and the horizontal branch is steeper (the blue end is fainter and the red end is brighter). See also List of Messier objects References External links Messier 79, SEDS Messier pages Messier 79, Galactic Globular Clusters Database page Messier 079 Messier 079 Messier 079 079 Messier 079 Astronomical objects discovered in 1780 Discoveries by Pierre Méchain
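The paragraph above describes how the near-infrared color-magnitude diagram is constructed: J magnitude on the y-axis, J - K color on the x-axis, with brighter (numerically smaller) magnitudes plotted upward. As a rough illustration only, the sketch below shows how such a diagram could be plotted in Python once crowded-field photometry has produced J- and K-band magnitudes; the arrays here are hypothetical placeholders, not data from Messier 79.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical J- and K-band magnitudes for stars detected in a cluster field;
# in practice these would come from crowded-field PSF photometry output.
rng = np.random.default_rng(0)
j_mag = rng.uniform(12.0, 20.0, 2000)        # placeholder J magnitudes
k_mag = j_mag - rng.normal(0.6, 0.2, 2000)   # placeholder K magnitudes

j_minus_k = j_mag - k_mag                    # color index for the x-axis

plt.scatter(j_minus_k, j_mag, s=2, color="k")
plt.gca().invert_yaxis()                     # brighter stars (smaller magnitudes) plotted upward
plt.xlabel("J - K")
plt.ylabel("J")
plt.title("Color-magnitude diagram (schematic)")
plt.show()
```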
Messier 79
[ "Astronomy" ]
437
[ "Lepus (constellation)", "Constellations" ]
967,561
https://en.wikipedia.org/wiki/Uppsala%E2%80%93DLR%20Trojan%20Survey
The Uppsala–DLR Trojan Survey (UDTS, also known as UAO–DLR Trojan Survey) is an astronomical survey to study the movements and locations of asteroids near Jupiter, which include Jupiter trojans and other asteroids whose lines of sight are frequently blocked by the giant planet. The survey was carried out at the Uppsala Astronomical Observatory in Sweden, in collaboration with the German Aerospace Center (DLR). Principal investigators were the astronomers Claes-Ingvar Lagerkvist, Gerhard Hahn, Stefano Mottola, Magnus Lundström and Uri Carsenty. The Uppsala–DLR Trojan Survey, UDTS, should not be confused with its successor, the Uppsala–DLR Asteroid Survey (UDAS), which started shortly after the UDTS concluded. During the course of the survey, two telescopes were used at ESO's La Silla site in northern Chile. In the fall of 1996, the ESO Schmidt telescope surveyed approximately 900 square degrees at Jupiter's Lagrangian point, the location of the so-called Greek camp. Additional positions and magnitudes of asteroids were obtained using the (now decommissioned) 0.61-meter Bochum telescope. There is some notable controversy over P/1997 T3, one of the objects found in this survey, namely an asteroid-like object with a comet-like tail. It is thought that this tail is composed of dust, due to its consistent appearance, and the fact that it is pointing towards the Sun, not away from it. The group of Jupiter trojans contains about 6,000 asteroids. They are named after figures from Greek mythology, typically after the heroes of the Trojan War as narrated in Homer's Iliad. List of discovered minor planets The Minor Planet Center credits the Uppsala–DLR Trojan Survey with the discovery of 62 numbered minor planets during 1996–1997. See also List of asteroid-discovering observatories Uppsala–ESO Survey of Asteroids and Comets, UESAC References External links Comet or Asteroid? Trojans Astronomical surveys Asteroid surveys Uppsala University
Uppsala–DLR Trojan Survey
[ "Astronomy" ]
418
[ "Astronomical surveys", "Works about astronomy", "Astronomical objects" ]
967,636
https://en.wikipedia.org/wiki/Malassezia
Malassezia is a genus of fungi (specifically, a yeast). Some species of Malassezia are found on the skin of animals, including humans. Because Malassezia requires fat to grow, it is most common in areas with many sebaceous glands: the scalp, face, and upper part of the body. Role in human diseases Malassezia infections of human skin can cause or aggravate a variety of conditions, including dandruff, seborrheic dermatitis, and acne. Dermatitis and dandruff When Malassezia grows too rapidly, the natural renewal of cells is disturbed, and dandruff can appear with itching (a similar process may also occur with other fungi or bacteria). Identification of Malassezia on skin has been aided by the application of molecular or DNA-based techniques. These investigations show that M. globosa is the species that causes most skin disease in humans, and that it is the most common cause of dandruff and seborrhoeic dermatitis (though M. restricta is also involved). There can be as many as ten million M. globosa organisms on a human head. A project in 2007 sequenced the genome of dandruff-causing Malassezia globosa and found it to have 4,285 genes. M. globosa uses eight different types of lipase, along with three phospholipases, to break down the oils on the scalp. Any of these 11 proteins would be a suitable target for dandruff medications. Prescription and over-the-counter shampoos containing ketoconazole are commonly used to treat dandruff caused by Malassezia. M. globosa has been predicted to have the ability to reproduce sexually, but this has not been observed. Skin pigmentation disorders In occasional opportunistic infections of the trunk and other locations on humans, some species of Malassezia can cause hypopigmentation or hyperpigmentation. Allergy tests for these fungi are available. The skin rash of tinea versicolor (pityriasis versicolor) is also caused by an infection of this fungus. Cancer Translocation of Malassezia species from the intestines into pancreatic neoplasms has been associated with pancreatic ductal adenocarcinoma, and the fungi may promote tumor progression through activation of host complement. Crohn's and inflammatory bowel disease M. restricta, which is normally found on the skin, is linked to disorders like Crohn's disease and inflammatory bowel disease when found in the gut. This is especially true for those with the N12 CARD9 allele, which provokes a stronger inflammatory response. Malassezia folliculitis Malassezia folliculitis (also called pityrosporum folliculitis) is caused by infection with Malassezia. Systematics Malassezia is the sole genus in family Malasseziaceae, which is the only family in order Malasseziales, itself the single member of class Malasseziomycetes. Due to progressive changes in their nomenclature, some confusion exists about the naming and classification of Malassezia yeast species. Work on these yeasts has been complicated because they require specific growth media and sometimes grow very slowly in laboratory culture. Malassezia was originally identified by the French scientist Louis-Charles Malassez in the late nineteenth century; he associated it with the condition seborrhoeic dermatitis. Raymond Sabouraud identified a dandruff-causing organism in 1904 and called it Pityrosporum Malassezii, honoring Malassez, but at the species level as opposed to the genus level. When it was determined that the organisms were the same, the term "Malassezia" was judged to possess priority. 
In the mid-twentieth century, it was reclassified into two species: Pityrosporum (Malassezia) ovale, which is lipid-dependent and found only on humans, and Pityrosporum (Malassezia) pachydermatis, which is lipophilic but not lipid-dependent and is found on the skin of most animals. P. ovale was later divided into two species, P. ovale and P. orbiculare, but current sources consider these terms to refer to a single species of fungus, with M. furfur the preferred name. Malassezia is the sole genus in the family Malasseziaceae, which was validated by Cvetomir Denchev and Royall T. Moore in 2009. The order Malasseziales had been previously proposed by Moore in 1980, and later emended by Begerow and colleagues in 2000. At this time the order was classified as a member of unknown class placement in the subdivision Ustilaginomycotina. In 2014, Cvetomir and Teodor Denchev circumscribed the class Malasseziomycetes to contain the group. Description Malassezia grows rapidly, typically maturing within 5 days when incubated at temperatures ranging from . Growth is slower at , and certain species struggle at . These organisms can proliferate on media infused with cycloheximide. An essential factor for the growth of Malassezia is the presence of long-chain fatty acids, with M. pachydermatis being an exception. The most conventional cultivation method involves overlaying solid media with a layer of olive oil. However, for nurturing some clinically relevant species, such as the challenging-to-cultivate M. restricta, more intricate culture media may be required. For the most efficient recovery of Malassezia, it has been recommended to collect blood through a lipid infusion catheter and subsequently use lysis-centrifugation—a recommendation backed by multiple comparative studies. The yeast-like cells of Malassezia, measuring between 1.5–4.5 μm by 3–7 μm, are characterised as phialides featuring tiny collarettes (a small, collar-like flange or lip at the mouth of a phialide from which spores or conidia are produced and released). These collarettes are challenging to identify using standard light microscopes. A defining characteristic of cells from this genus is their morphology: one end is round, while the other has a distinctly blunt termination. This latter end is where singular, broad-based bud-like structures emerge, although in certain species, these structures might be narrower. To effectively visualise the organism's shape, a staining technique involving safranin is recommended, followed by observation under oil immersion. Furthermore, Calcofluor-white staining provides an enhanced clarity of the cell wall and its unique contour. While Malassezia typically lacks hyphal elements, rudimentary forms can sporadically be present. Species The Index Fungorum lists 22 species of Malassezia. The following list gives the name, the taxonomic authority (those who first described the fungus, or who transferred it into Malassezia from another genus; standardized author abbreviations are used), and the name of the organism from which the fungus was isolated, if not human. In the mid-1990s, scientists at the Pasteur Institute in Paris, France, discovered additional species. 
Malassezia arunalokei Malassezia brasiliensis – from lesions on the beak of turquoise-fronted amazon parrot Malassezia caprae – from skin of goat Malassezia cuniculi – from healthy skin of external ear canal of rabbit Malassezia dermatis Malassezia equi – from skin of horse Malassezia equina – from skin of horse Malassezia furfur Malassezia globosa Malassezia japonica Malassezia muris – skin of mouse Malassezia nana – from discharge from ear of cat Malassezia obtusa Malassezia ochoterenai Malassezia pachydermatis – from skin of Indian rhinoceros Malassezia psittaci – from lesions on the beak of blue-headed parrot Malassezia restricta Malassezia slooffiae – from skin of pig Malassezia sympodialis Malassezia tropica Malassezia vespertilionis – from vesper bats in subfamily Myotinae Malassezia yamatoensis References Further reading Basidiomycota Parasitic fungi Yeasts Taxa described in 1889 Taxa named by Henri Ernest Baillon Basidiomycota genera
Malassezia
[ "Biology" ]
1,779
[ "Yeasts", "Fungi" ]
967,653
https://en.wikipedia.org/wiki/Plafond
A plafond (French for "ceiling"), in a broad sense, is a (flat, vaulted or dome) ceiling. A plafond can be a product of monumental painting or sculpture. Picturesque plafonds can be painted directly on plaster (as a fresco, oil, tempera, synthetic paints), on a canvas attached to a ceiling (panel), or a mosaic. As a decorative feature of churches and staterooms, plafonds were popular from the 17th century until the beginning of the 19th century. Designs of this period typically used illusionistic ceiling painting showing the architectural structure behind, strongly foreshortened figures, architectural details, and/or the open sky. References
Plafond
[ "Engineering" ]
146
[ "Structural engineering", "Ceilings" ]
967,654
https://en.wikipedia.org/wiki/Bottling%20line
Bottling lines are production lines that fill a liquid product, often a beverage, into bottles on a large scale. Many prepared foods are also bottled, such as sauces, syrups, marinades, oils and vinegars. Bottling lines usually include label application equipment, capping operations, date stamps, etc. Quality assurance verification equipment is often included. Beer bottling process Packaging of bottled beer typically involves drawing the product from a holding tank and filling it into bottles in a filling machine (filler), which are then capped, labeled and packed into cases or cartons. Many smaller breweries send their bulk beer to large facilities for contract bottling—though some will bottle by hand. Virtually all beer bottles are glass. The first step in bottling beer is depalletising, where the empty bottles are removed from the original pallet packaging delivered from the manufacturer, so that individual bottles may be handled. The bottles may then be rinsed with filtered water or air, and may have carbon dioxide injected into them in an attempt to reduce the level of oxygen within the bottle. The bottle then enters a "filler" which fills the bottle with beer and may also inject a small amount of inert gas (usually carbon dioxide or nitrogen) on top of the beer to disperse the oxygen, as oxygen can ruin the quality of the product via oxidation. Finally, the bottles go through a "capper", which applies a bottle cap, sealing the bottle. A few beers are bottled with a cork and cage. Next the bottle enters a labelling machine ("labeller") where a label is applied. To ensure traceability of the product, a lot number, generally the date and time of bottling, may also be printed on the bottle. The product is then packed into boxes and warehoused, ready for sale. Depending on the magnitude of the bottling endeavor, there are many different types of bottling machinery available. Liquid level machines fill bottles so they appear to be filled to the same line on every bottle, while volumetric filling machines fill each bottle with exactly the same amount of liquid. Overflow pressure fillers are the most popular machines with beverage makers, while gravity filling machines are the most cost-effective. In terms of automation, inline filling machines are most popular, but rotary machines are much faster albeit much more expensive. Wine bottling process The process for bottling wine is largely similar to that for bottling beer, except wine bottles differ in volumes and shapes. Traditionally, a cork is used to provide closure to wine bottles. After filling, a bottle travels to a corking machine (corker) where a cork is compressed and pushed into the neck of the bottle. Whilst this is happening, the corker vacuums the air out of the bottle to form a negative pressure headspace. This removes any oxygen from the headspace, which is useful as latent oxygen can ruin the quality of the product via oxidation. A negative pressure headspace will also counteract pressure caused by the thermal expansion of the wine, preventing the cork from being forced from the bottle. Champagnes and sparkling wines may further be sealed with a muselet, which ensures the cork will not explode off in transit. Alternative wine closures such as screw caps are available. Some bottling lines incorporate a fill height detector which rejects under- or over-filled bottles, and also a metal detector. After filling and corking, a plastic or tin capsule is applied to the neck of the bottle by a capsuling machine. 
Next the bottle enters a labeller where a wine label is applied. The product is then packed into boxes and warehoused, ready for sale. See also Beverage can Packaging and labeling References Further reading Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, External links Liquid Filling Lines Wine packaging and storage Brewing Packaging machinery Beer vessels and serving Food storage containers Packaging Food packaging Line
Bottling line
[ "Engineering" ]
796
[ "Packaging machinery", "Industrial machinery" ]
967,673
https://en.wikipedia.org/wiki/News%20ticker
A news ticker (sometimes called a crawler, crawl, slide, zipper, ticker tape, or chyron) is a horizontal or vertical (depending on a language's writing system) text-based display either in the form of a graphic that typically resides in the lower third of the screen space on a television station or network (usually during news programming) or as a long, thin scoreboard-style display seen around the facades of some offices or public buildings dedicated to presenting headlines or minor pieces of news. It is an evolution of the ticker tape, a continuous paper print-out of stock quotes from a printing telegraph which was mainly used in stock exchanges before the advance of technology in the 1960s. News tickers have been used in Europe in countries such as United Kingdom, Germany and Ireland for some years; they are also used in several Asian countries and Australia. In the United States, tickers were long used on a special event basis by broadcast television stations to disseminate weather warnings, school closings, and election results. Sports telecasts occasionally used a ticker to update other contests in progress before the expansion of cable news networks and the internet for news content. In addition, some ticker displays are used to relay continuous stock quotes (usually with a delay of as much as 15 minutes) during trading hours of major stock market exchanges. Most tickers are traditionally displayed in the form of scrolling text running from right to left across the screen or building display (or in the opposite direction for right-to-left writing systems such as Arabic script and Hebrew), allowing for headlines of varying degrees of detail; some used by television broadcasters, however, display stories in a static manner (allowing for the seamless switching of each story individually programmed for display) or utilize a "flipping" effect (in which each individual headline is shown for a few seconds before transitioning to the next, instead of scrolling across the screen, usually resulting in a relatively quicker run through of all of the information programmed into the ticker). Since the growth in usage of the World Wide Web, some news tickers have syndicated news stories posted largely on websites of broadcasters or by other independent news agencies. Current uses Television The presentation of headlines or other information in a news ticker has become a common element of many different news networks. The use of the ticker has differed on a number of channels: News networks and local newscasts commonly use a setup in which news headlines are scrolled across an area near the bottom of the screen, though some variations have formed, such as showing one headline at a time with a scrolling or "flipper" effect. Financial news channels use two or more tickers displaying stock prices and business headlines. Networks with a focus on sports often use a slightly different system, where scores and statuses of ongoing and finished games are displayed one by one, along with minor sports highlights, statistics and sports news headlines. They are typically divided into categories devoted to specific leagues and events (with college basketball and football usually focusing on the top 25 ranked teams on the AP Poll, occasionally supplemented by sections for specific conferences). Some programs, including news-based programs emphasizing viewer interactivity, or special events, may also use tickers to display messages and reactions from viewers and others that relate to the program. 
These comments are often sourced from social networking services such as Facebook and Twitter, typically curating comments from a specific page or hashtag. Due to their current prevalence, they have been occasionally been made targets of pranks and vandalism. In one such example, News 14 Carolina allowed viewers to submit relevant information such as school closings or traffic delays via telephone or the Internet that would be incorporated into the ticker; the system was exploited in February 2004 to display humorous and crude messages, including the infamous "All your base are belong to us". Occasionally messages intended for training accidentally end up being put on the live ticker as happened on BBC News in 2022 when "Weather rain everywhere" and "Manchester United are rubbish" appeared on the live news ticker. Some businesses and organizations have utilized tickers intended for relaying weather-related closings as a surreptitious source for free guerrilla marketing, proclaiming they were open rather than closed and giving their phone number if possible, allowing them to 'advertise' on a television station all day for free. Since then, many stations have required pre-registration of businesses or organizations with an authorized representative and a signed affidavit on company letterhead affirming their authenticity, along with filtering out unfamiliar businesses and organizations, before being able to display their closing announcements. Stations also confirm all closings involving school districts with authorized officials to prevent situations in which students either show up to canceled classes in dangerous conditions, or do not attend school due to an erroneous, prank-submitted, or false listing. On personal computers Various applications have been developed over time to install news tickers on personal computer desktops using RSS feeds from news organizations, which are displayed in a fashion similar to those used by television channels but enable the user to access to underlying news stories, a feature not offered by traditional television channels. The Bloomberg Terminal and other stock market-tracking programs and devices also utilize tickers. A ticker may also be used as an unobtrusive method by businesses in order to deliver important information to their staff. The ticker can be set to reappear, stay on screen, or be put into a retractable mode (where a small tab is left visible on-screen). In the United Kingdom, broadcasters have stopped using this technology as other forms of communications have become available and increased in popularity. BBC News and Sky News discontinued their respective desktop tickers in March 2011 and 2012 to focus on other products, such as smartphone applications, to deliver updated information on breaking news and sport stories. News tickers on buildings Since the advent of the telegraph, newspapers commonly used their buildings to share the latest headlines. At first simple chalkboard signs were used for bulletins, but limelight illumination, electric lights, magic lantern projections, and other novel techniques were later employed. The method of using electric lights to spell out moving letters was invented by Frank C. Reilly (August 20, 1888 – April 10, 1947) and patented in 1923. Reilly called his invention the Motograph News Bulletin. In 1928, The New York Times installed a Motograph News Bulletin to display news headlines on the sides of Times Tower. The display was long, high, and employed over 14,800 light bulbs. 
Popularly known as the "Zipper", the sign remained in use until the building was sold in 1961. The sign was darkened during World War II to comply with wartime lighting restrictions. The Motograph operated until 1994 and was replaced by an electronic version in 1995, which was in turn removed in 2018 due to the replacement of all individual screens on the front of One Times Square with a -tall LED billboard in 2018. Ticker displays appear today on the exterior of the News Corp Building, which houses the headquarters for Fox News Channel/News Corp in the west extension of Manhattan's Rockefeller Center, as well as one that displays delayed stock market data that is located in Times Square. NASDAQ itself features a large display screen on the facade of the NASDAQ MarketSite building in Times Square. The Reuters buildings at Canary Wharf and in Toronto have news and stock tickers; the latter type features market data for the New York Stock Exchange, NASDAQ and London Stock Exchange, while the Toronto building's ticker also includes quotes from the Toronto Stock Exchange. A red-LED ticker was added to the perimeter of 10 Rockefeller Center in 1994, as the building was being renovated to accommodate the studios for NBC's Today. Placed at the juncture of the first and second floors, the ticker is visible to spectators in Rockefeller Plaza and passersby on West 49th Street and updates continuously, even at times when Today is not being produced and broadcast. As of 2015, the ticker strip is only a small part of a large two-floor LCD video display that is placed within the window of the studio showing promotional information. The Martin Place Headquarters of Seven News, the news division of Australian television broadcaster Seven Network, also incorporates a ticker that wraps around the building. In popular culture The use of news tickers has also been parodied on a number of films and television programs, including a 2003 episode of The Simpsons ("Mr. Spritz Goes to Washington"), as well as a sketch featured on Saturday Night Live. Some programs and films such as Austin Powers in Goldmember sometimes place jokes within their parody news crawls. The Onion News Network uses a parody ticker to offer jokes in its online newscasts. The Australian comedy news series CNNNN went a step further: although it featured a joke news ticker throughout the show, one episode featured a news ticker that summarized the initial news ticker, as well as one for the sight impaired, which covered the whole screen. The music video for the Chamillionaire rap single "Hip Hop Police" incorporated a parodical news ticker announcing the arrests of famous musicians. See also Character generator—a means by which news tickers are created Chyron Corporation—a company whose name has become a genericized trademark for a type of news ticker (chyron) Television news screen layout Ticker tape References Digital media Television news Television terminology Film and video technology
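The "On personal computers" passage above notes that desktop news tickers are typically driven by RSS feeds from news organizations. As a minimal sketch of that idea (the feed URL is a placeholder, and the terminal scroll is a stand-in for a real desktop widget, not the implementation used by any particular broadcaster), such a ticker could look like this in Python:

```python
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/news.rss"  # placeholder; any RSS 2.0 feed URL

def fetch_headlines(url):
    """Download an RSS 2.0 feed and return the title of each item."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    return [item.findtext("title", default="") for item in tree.iter("item")]

def scroll(text, width=60, delay=0.15):
    """Print a right-to-left scrolling window over the headline string."""
    padded = " " * width + text + " " * width
    for i in range(len(padded) - width):
        print("\r" + padded[i:i + width], end="", flush=True)
        time.sleep(delay)

if __name__ == "__main__":
    headlines = "  +++  ".join(fetch_headlines(FEED_URL))
    scroll(headlines)
```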
News ticker
[ "Technology" ]
1,929
[ "Multimedia", "Digital media" ]
967,813
https://en.wikipedia.org/wiki/NGC%202403
NGC 2403 (also known as Caldwell 7) is an intermediate spiral galaxy in the constellation Camelopardalis. It is an outlying member of the M81 Group, and is approximately 8 million light-years distant. It bears a similarity to M33, being about 50,000 light years in diameter and containing numerous star-forming H II regions. The northern spiral arm connects it to the star forming region NGC 2404. NGC 2403 can be observed using 10×50 binoculars. NGC 2404 is 940 light-years in diameter, making it one of the largest known H II regions. This H II region represents striking similarity with NGC 604 in M33, both in size and location in galaxy. Supernovae and Supernovae Imposters There have been four reported astronomical transients in the galaxy: SN 1954J was first noticed by Gustav Tammann and Allan Sandage as a "bright blue irregular variable" star, which they named V12. They noted it underwent a major outburst on 2/3 November 1954, which attained a magnitude of 16 at its brightest. In 1972, Fritz Zwicky classified this event as a type V supernova. It was later determined to be a supernova imposter: a highly luminous, very massive eruptive star, surrounded by a dusty nebula, similar to the 1843 Great Eruption of η Carinae in the Milky Way. SN 2002 kg was discovered by LOTOSS (Lick Observatory and Tenagra Observatory Supernova Searches) on 26 October 2002 and initially classified as a type IIn, or possibly the outburst of a luminous blue variable. On 24 August 2021, it was reclassified as a Gap transient. SN 2004dj (type II-P, mag. 11.2) was discovered by Kōichi Itagaki on 31 July 2004. At the time of its discovery, it was the nearest and brightest supernova observed in the 21st century. AT2016ccd, initially designated as SNhunt225, is a luminous blue variable, first discovered by Catalina Real-time Transient Survey (CRTS) and Stan Howerton in December 2013. Outbursts from this star have been observed as recently as November 2021. History The galaxy was discovered by William Herschel in 1788. Edwin Hubble detected Cepheid variables in NGC 2403 using the Hale Telescope, making it the first galaxy beyond the Local Group within which a Cepheid was discovered. By 1963, 59 variables had been found in NGC 2403, of which 17 were eventually confirmed as Cepheids, with periods between 20 and 87 days. As late as 1950 Hubble was using a distance of just under 2 million light years for the galaxy's distance, but by 1968 the analysis of the Cepheids increased this by almost a factor of five, to within 0.2 magnitudes of the current value. Companions NGC 2403 has two known companions. One is the relatively massive dwarf galaxy DDO 44. It is currently being disrupted by NGC 2403, as evidenced by a tidal stream extending on both sides of DDO 44. DDO 44 is approaching NGC 2403 at a distance much closer than typical for dwarf galaxy interactions. It currently has a V-band absolute magnitude of −12.9, but its progenitor was even more luminous. The other known companion is officially named MADCASH J074238+652501-dw, although it is nicknamed MADCASH-1. The name refers to the MADCASH (Magellanic Analog Dwarf Companions and Stellar Halos) project. MADCASH-1 is similar to typical dwarf spheroidal galaxies in the Local Group; it is quite faint, with an absolute V-band magnitude of −7.81, and has only an ancient, metal-poor population of red giant stars. Luminous blue variables in NGC 2403 NGC 2403 has four known luminous blue variables. AT 2016ccd, NGC 2403 V14, NGC 2403 V37, and NGC 2403 V12. 
Not much is known about AT 2016ccd beyond the fact that it is a luminous blue variable. AT 2016ccd has a magnitude of 18–19.95, so it is quite dim. NGC 2403 V14 is better known than AT 2016ccd. NGC 2403 V14 has a radius of 1,260.2 solar radii, a mass of 24 solar masses, and a temperature of 7,041 K. NGC 2403 V14 has a magnitude of 12.9. NGC 2403 V37 is not well known; it is believed to be a luminous blue variable with a magnitude of 12.9. NGC 2403 V12 is a poorly studied luminous blue variable with a magnitude of 6.5. See also Triangulum Galaxy, which looks very similar to NGC 2403. References External links Spiral Galaxy NGC 2403 at the astro-photography site of Mr. Takayuki Yoshida NGC 2403 at ESA/Hubble SEDS – NGC 2403 Intermediate spiral galaxies M81 Group Camelopardalis 2403 03918 21396 007b Astronomical objects discovered in 1788 Discoveries by William Herschel
NGC 2403
[ "Astronomy" ]
1,062
[ "Camelopardalis", "Constellations" ]
967,825
https://en.wikipedia.org/wiki/Tine%20test
The tine test is a multiple-puncture tuberculin skin test used to aid in the medical diagnosis of tuberculosis (TB). The tine test is similar to the Heaf test, although the Mantoux test is usually used instead. There are various forms of the tine tests which usually fall into two categories: the old tine test (OT) and the purified protein derivative (PPD) tine test. Common brand names of the test include Aplisol, Aplitest, Tuberculin PPD TINE TEST, and Tubersol. Procedure This test uses a small "button" that has four to six short needles coated with TB antigens (tuberculin), either an old tuberculin or a PPD-tuberculin. The needles are pressed into the skin (usually on the inner side of the forearm), forcing the antigens into the skin. The test is then read 48 to 72 hours later by measuring the size of the largest papule or induration. Indications are usually classified as positive, negative, or doubtful. Because it is not possible to control precisely the amount of tuberculin used in the tine test, a positive test should be verified using the Mantoux test. PPD Tuberculin is a glycerol extract of the tubercle bacillus. Purified protein derivative (PPD) tuberculin is a precipitate of non-species-specific molecules obtained from filtrates of sterilized, concentrated cultures. It was first described by Robert Koch in 1890 and then Giovanni Petragnani. A batch of PPD created in 1939 serves as the US and international standard, called PPD-S. PPD-S concentration is not standardized for multiple-puncture techniques, and should be designed for the specific multiple-puncture system. Comparison to Mantoux test The American Thoracic Society or Centers for Disease Control and Prevention (CDC) do not recommend the tine test, since the amount of tuberculin that enters the skin cannot be measured. For this reason, the tine test is often considered to be less reliable. Contrary to this, however, studies have shown that the tine test can give results that correlate well to the Mantoux test. If a minor reaction is considered doubtful, the OT test is less accurate and may fail to detect TB, producing a false negative. If all doubtful indications are instead classified as positive, there is no significant difference between the OT test, the PPD tine test, or the Mantoux test. Furthermore, the tine test is faster and easier to administer than the Mantoux test and has been recommended for screening children. References Immunologic tests Skin tests Tuberculosis
Tine test
[ "Biology" ]
566
[ "Immunologic tests" ]
967,933
https://en.wikipedia.org/wiki/Prehensility
Prehensility is the quality of an appendage or organ that has adapted for grasping or holding. The word is derived from the Latin term prehendere, meaning "to grasp". The ability to grasp is likely derived from a number of different origins. The most common are tree-climbing and the need to manipulate food. Examples Appendages that can become prehensile include: Uses Prehensility affords animals a great natural advantage in manipulating their environment for feeding, climbing, digging, and defense. It enables many animals, such as primates, to use tools to complete tasks that would otherwise be impossible without highly specialized anatomy. For example, chimpanzees have the ability to use sticks to obtain termites and grubs in a manner similar to human fishing. However, not all prehensile organs are applied to tool use; the giraffe tongue, for instance, is instead used in feeding and self-cleaning. See also Robot end effector References Animal anatomy Biology terminology Evolutionary biology
Prehensility
[ "Biology" ]
212
[ "Evolutionary biology", "nan" ]
968,133
https://en.wikipedia.org/wiki/Messier%2084
Messier 84 or M84, also known as NGC 4374, is a giant elliptical or lenticular galaxy in the constellation Virgo. Charles Messier discovered the object in 1781 in a systematic search for "nebulous objects" in the night sky. It is the 84th object in the Messier Catalogue and in the heavily populated core of the Virgo Cluster of galaxies, part of the local supercluster. This galaxy has morphological classification E1, denoting it has flattening of about 10%. The extinction-corrected total luminosity in the visual band is about . The central mass-to-light ratio is 6.5, and it steadily increases away from the core up to a limiting value. The visible galaxy is surrounded by a massive dark matter halo. Radio observations and Hubble Space Telescope images of M84 have revealed two jets of matter shooting out from its center as well as a disk of rapidly rotating gas and stars indicating the presence of a supermassive black hole. It also has a few young stars and star clusters, indicating star formation at a very low rate. The number of globular clusters is , which is much lower than expected for an elliptical galaxy. Viewed from Earth, its half-light radius (the angular radius within which half of its light is emitted) is just over an arcminute. Supernovae Three supernovae have been observed in M84: SN 1957B (type Ia, mag. 12.5) was discovered by H. S. Gates on 28 April 1957, and independently by Dr. G. Romano on 18 May 1957. SN 1980I (type Ia, mag. 14) was discovered by M. Rosker on 13 June 1980. Historically, this supernova has been catalogued as belonging to M84, but it may have been in either neighboring galaxy NGC 4387 or M86. SN 1991bg (type Ia-pec, mag. 14) was discovered by Reiki Kushida on 3 December 1991. This high rate of supernovae is rare for elliptical galaxies, which may indicate there is a population of stars of intermediate age in M84. See also List of Messier objects References and footnotes External links StarDate: M84 Fact Sheet SEDS Lenticular Galaxy M84 Elliptical galaxies Lenticular galaxies Virgo Cluster Virgo (constellation) 084 NGC objects 07494 40455 17810318 Discoveries by Charles Messier Radio galaxies 272 4C objects
Messier 84
[ "Astronomy" ]
519
[ "Virgo (constellation)", "Constellations" ]
968,141
https://en.wikipedia.org/wiki/Messier%2085
Messier 85 (also known as M85 or NGC 4382 or PGC 40515 or ISD 0135852) is a lenticular galaxy, or an elliptical galaxy according to other authors, in the constellation Coma Berenices. It is 60 million light-years away, and has a diameter of about across. Pierre Méchain discovered M85 in 1781. It is within the outskirts of the Virgo cluster, and is relatively isolated. Properties M85 is extremely poor in neutral hydrogen and has a very complex outer structure with shells and ripples that are thought to have been caused by a merger with another galaxy that took place between 4 and 7 billion years ago, as well as a relatively young (<3 billion years old) stellar population in its centermost region, some of it in a ring, that may have been created by a late starburst. Like other massive, early-type galaxies, it has different populations of globular clusters. Aside from the typical "red" and "blue" populations, there is also a population with intermediate colors and an even redder population. It is likely transitioning from being a lenticular galaxy into an elliptical galaxy. While indirect methods imply that Messier 85 should contain a central supermassive black hole of around 100 million solar masses, velocity dispersion observations imply that the galaxy may entirely lack a central massive black hole. M85 is interacting with the nearby spiral galaxy NGC 4394, and a small elliptical galaxy called MCG 3-32-38. Compared to other early-type galaxies, M85 emits a relatively smaller proportion of X-rays. Novae and Supernovae SN 1960R (type Ia, mag. 13.5) was discovered by H. S. Gates on 20 December 1960, and independently discovered by Leonida Rosino on 18 January 1961. M85 has been the host of the first luminous red nova identified as such. On 7 January 2006, the Lick Observatory Supernova Search (LOSS) discovered M85 OT2006-1 on the outskirts of the galaxy. SN 2020nlb (type Ia, mag. 17.436) was discovered by the ATLAS telescope in Hawaii on 25 June 2020. This supernova got as bright as magnitude 12. See also List of Messier objects References External links SEDS Lenticular Galaxy M85 Lenticular galaxies Virgo Cluster Coma Berenices 085 NGC objects 07508 40515 Astronomical objects discovered in 1781 Discoveries by Pierre Méchain
Messier 85
[ "Astronomy" ]
510
[ "Coma Berenices", "Constellations" ]
968,155
https://en.wikipedia.org/wiki/Messier%2086
Messier 86 (also known as M86 or NGC 4406) is an elliptical or lenticular galaxy in the constellation Virgo. It was discovered by Charles Messier in 1781. M86 lies in the heart of the Virgo Cluster of galaxies and forms a most conspicuous group with another large galaxy known as Messier 84. It displays the highest blue shift of all Messier objects, as it is, net of its other vectors of travel, approaching the Milky Way at 244 km/s. This is due to both galaxies falling roughly towards the center of the Virgo cluster from opposing ends. Messier 86 is linked by several filaments of ionized gas to the severely disrupted spiral galaxy NGC 4438, indicating that M86 may have stripped some gas and interstellar dust from the spiral. It is also suffering ram-pressure stripping as it moves at high speed through Virgo's intracluster medium, losing its interstellar medium and leaving behind a very long trail of X ray-emitting hot gas that has been detected with the help of the Chandra space telescope. Messier 86 has a rich array of globular clusters, with a total number of around 3,800. Its halo also has a number of stellar streams interpreted as remnants of dwarf galaxies that have been disrupted and absorbed by this galaxy. See also List of Messier objects References External links SEDS Lenticular Galaxy M86 Lenticular galaxies Virgo Cluster Virgo (constellation) 086 NGC objects 07532 40653 Astronomical objects discovered in 1781 Discoveries by Charles Messier
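As a back-of-the-envelope check of the blueshift quoted above (an approach speed of 244 km/s), the non-relativistic Doppler approximation gives the corresponding negative redshift. This is an illustrative calculation, not a figure taken from the article:

```latex
% Non-relativistic Doppler approximation, z \approx v/c, with the
% approach velocity of 244 km/s counted as negative:
\[
  z \;\approx\; \frac{v}{c}
    \;=\; \frac{-244\ \mathrm{km\,s^{-1}}}{2.998\times 10^{5}\ \mathrm{km\,s^{-1}}}
    \;\approx\; -8.1\times 10^{-4},
\]
% a small negative value, i.e. a blueshift, consistent with M86
% approaching the Milky Way rather than receding from it.
```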
Messier 86
[ "Astronomy" ]
316
[ "Virgo (constellation)", "Constellations" ]
968,164
https://en.wikipedia.org/wiki/Messier%2088
Messier 88 (also known as M88 or NGC 4501) is a spiral galaxy about 50 to 60 million light-years away from Earth in the constellation Coma Berenices. It was discovered by Charles Messier in 1781. Properties M88 is one of the fifteen Messier objects that belong to the nearby Virgo Cluster of galaxies. It is galaxy number 1401 in the Virgo Cluster Catalogue (VCC) of 2096 galaxies that are candidate members of the cluster. M88 appears to be on or ending a highly elliptical orbit, currently on an approximate or direct course toward the cluster center, which is occupied by the giant elliptical galaxy M87. It is currently 0.3 to 0.48 million parsecs from the center and will come closest to the core in about 200 to 300 million years. Its motion through the intergalactic medium of its cluster is creating, as expected, ram pressure that is stripping away the outer region of neutral hydrogen. To date, this has been detected along the western, leading edge of the galaxy. This galaxy is inclined to the line of sight by 64°. It is classified as an Sbc spiral, a status between Sb (medium-wound) and Sc (loosely wound) spiral arms. The spiralling arms are very regular and can be followed down to the galactic core. The maximum rotation velocity of the gas is 241.6 ± 4.5 km/s. M88 is classified as a type 2 Seyfert galaxy, which means it produces narrow spectral line emission from highly ionized gas in the nucleus. In the core region there is a central condensation with a 230 parsec diameter, which has two concentration peaks. This condensation is being fed by inflow from the spiral arms. The supermassive black hole at the core of this galaxy has a mass of 10^7.9 solar masses, or about 80 million solar masses. One supernova has been observed in M88: SN 1999cl (type Ia, mag. 16.4). See also List of Messier objects Messier 98 References External links Spiral Galaxy M88 @ SEDS Messier pages Unbarred spiral galaxies Virgo Cluster Coma Berenices 088 NGC objects 07675 41517 Astronomical objects discovered in 1781 Discoveries by Charles Messier
Messier 88
[ "Astronomy" ]
475
[ "Coma Berenices", "Constellations" ]
968,172
https://en.wikipedia.org/wiki/Messier%2089
Messier 89 (M89 for short, also known as NGC 4552) is an elliptical galaxy in the constellation Virgo. It was discovered by Charles Messier on March 18, 1781. M89 is a member of the Virgo Cluster of galaxies. Features Current observations allow the possibility that M89 may be nearly perfectly spherical. Distinct flattening into ellipsoids is found in all easily measurable comparison galaxies out to a few times its distance. The alternative explanation is that it is an ellipsoid oriented so that it appears spherical to an observer on Earth. The galaxy features a surrounding structure of gas and dust, extending up to 150,000 light-years, and jets of heated particles extending up to two-thirds of that distance. This indicates that it may have once been an active quasar or radio galaxy. M89 has an extensive and complex system of surrounding shells and plumes, indicating that it has seen one or several notable mergers. Chandra studies at X-ray wavelengths show two ring-like structures of hot gas in M89's nucleus, suggesting an outburst there 1 to 2 million years ago as well as ram-pressure stripping acting on the galaxy as it moves through Virgo's intracluster medium. The supermassive black hole at the core has a mass of . M89 also has a large array of globular clusters. A 2006 survey estimates that there are 2,000 ± 700 of these within 25′. This compares with the 150 to 200 globular clusters (many of them confirmed) thought to surround the Milky Way. Gallery See also List of Messier objects References External links SEDS: Messier Object 89 Messier 089 Messier 089 Messier 089 089 Messier 089 07760 41968 17810318 Discoveries by Charles Messier
Messier 89
[ "Astronomy" ]
382
[ "Virgo (constellation)", "Constellations" ]
968,179
https://en.wikipedia.org/wiki/Messier%2090
Messier 90 (also known as M90 and NGC 4569) is an intermediate spiral galaxy exhibiting a weak inner ring structure about 60 million light-years away in the constellation Virgo. It was discovered by Charles Messier in 1781. Membership of the Virgo Cluster Messier 90 is a member of the Virgo Cluster, being one of its largest and brightest spiral galaxies, with an absolute magnitude of around −22 (brighter than the Andromeda Galaxy). The galaxy is found about 1.5° from the central subgroup of Messier 87. Due to the galaxy's interaction with the intracluster medium in its cluster, the galaxy has lost much of its interstellar medium. As a result of this process, which is referred to as ram-pressure stripping, the medium and star formation regions appear severely truncated compared to similar galaxies outside the Virgo Cluster and there are even H II regions outside the galactic plane, as well as long (up to 80,000-parsec—that is, 260,000-light-year) tails of ionized gas that has been stripped away. Star formation activity As stated above, the star formation in Messier 90 appears truncated. Consequently, the galaxy's spiral arms appear to be smooth and featureless, rather than knotted like galaxies with extended star formation, which justifies why this galaxy, along with NGC 4921 in the Coma Cluster has been classified as the prototype of an anemic galaxy. Some authors go even further and consider it is a passive spiral galaxy, similar to those found on galaxy clusters with high redshift. However, its center appears to host significant nebula and star formation, where around 50,000 stars of spectral types O and B that formed around 5 to 6 million years ago are set amidst many A-type supergiants that were born in earlier starbursts, between 15 and 30 million years ago. Multiple supernovae (up to 100,000) in the nucleus have produced 'superwinds' that are blowing the galaxy's interstellar medium outward into the intracluster medium collimated in two jets, one of which is being disturbed by interaction with Virgo's intracluster medium as the galaxy moves through it. Blueshift The spectrum of Messier 90 is blueshifted, which indicates that, net of non-aligned vectors of motion, the gap between it and our galaxy is narrowing. The spectra of most galaxies are redshifted. The blueshift was originally used to argue that Messier 90 was actually an object in the foreground of the Virgo Cluster. However, since the phenomenon was limited mostly to galaxies in the same part of the sky as the Virgo Cluster, it appeared that this inference based on the blueshift was incorrect. Instead, many blueshifts exhibit the large range in velocities of objects within the Virgo Cluster. Distance measurements Low levels of H I gas prevents using the Tully–Fisher relation to estimate the distance to Messier 90. Companion galaxies Messier 90 is rich in globular clusters, with around 1,000 of them. The galaxy IC 3583 was once thought to be a satellite of Messier 90; however, it is now thought they are too far away to be interacting at all. Gallery See also List of Messier objects Black Eye Galaxy (Messier 64), a similar spiral galaxy Notes uses a Hubble constant of 75 (km/s)/Mpc to estimate a distance of 16.8 Mpc to NGC 4569. Adjusting for the 2006 value of 70 (km/s)/Mpc we get a distance of 18.0 Mpc. References External links SEDS: Spiral Galaxy M90 Intermediate spiral galaxies Virgo Cluster Virgo (constellation) 090 NGC objects 07786 42089 076 Astronomical objects discovered in 1781 Discoveries by Charles Messier
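The note above rescales a distance estimate from one adopted Hubble constant to another. The arithmetic behind that adjustment is simply that, for a fixed recession velocity, the inferred distance scales inversely with the Hubble constant; written out with the numbers quoted in the note:

```latex
% Distance rescaling for a change of adopted Hubble constant.
% With d = v / H_0, the ratio d_new / d_old = H_{0,old} / H_{0,new}:
\[
  d_{\mathrm{new}}
    \;=\; d_{\mathrm{old}}\,\frac{H_{0,\mathrm{old}}}{H_{0,\mathrm{new}}}
    \;=\; 16.8\ \mathrm{Mpc}\times\frac{75}{70}
    \;\approx\; 18.0\ \mathrm{Mpc},
\]
% matching the adjusted value quoted in the note.
```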
Messier 90
[ "Astronomy" ]
797
[ "Virgo (constellation)", "Constellations" ]
968,202
https://en.wikipedia.org/wiki/Dog%20intelligence
Dog intelligence or dog cognition is the process in dogs of acquiring information and conceptual skills, and storing them in memory, retrieving, combining and comparing them, and using them in new situations. Studies have shown that dogs display many behaviors associated with intelligence. They have advanced memory skills, and are able to read and react appropriately to human body language such as gesturing and pointing, and to understand human voice commands. Dogs demonstrate a theory of mind by engaging in deception, and self-awareness by detecting their own smell during the "sniff test", a proposed olfactory equivalent to the mirror test. Evolutionary perspective Dogs have often been used in studies of cognition, including research on perception, awareness, memory, and learning, notably research on classical and operant conditioning. In the course of this research, behavioral scientists uncovered a surprising set of social-cognitive abilities in the domestic dog, abilities that are neither possessed by dogs' closest canine relatives nor by other highly intelligent mammals such as great apes. Rather, these skills resemble some of the social-cognitive skills of human children. This may be an example of convergent evolution, which happens when distantly related species independently evolve similar solutions to the same problems. For example, fish, penguins and dolphins have each separately evolved flippers as solution to the problem of moving through the water. With dogs and humans, we may see psychological convergence; that is, dogs have evolved to be cognitively more similar to humans than we are to our closest genetic relatives. However, it is questionable whether the cognitive evolution of humans and animals may be called "independent". The cognitive capacities of dogs have inevitably been shaped by millennia of contact with humans. As a result of this physical and social evolution, many dogs readily respond to social cues common to humans, quickly learn the meaning of words, show cognitive bias and exhibit emotions that seem to reflect those of humans. Research suggests that domestic dogs may have lost some of their original cognitive abilities once they joined humans. For example, one study showed compelling evidence that dingoes (Canis dingo) can outperform domestic dogs in non-social problem-solving experiments. Another study indicated that after being trained to solve a simple manipulation task, dogs that are faced with an unsolvable version of the same problem look at a nearby human, while socialized wolves do not. Thus, modern domestic dogs seem to use humans to solve some of their problems for them. In 2014, a whole genome study of the DNA differences between wolves and dogs found that dogs did not show a reduced fear response; they showed greater synaptic plasticity. Synaptic plasticity is widely believed to be the cellular correlate of learning and memory, and this change may have altered the learning and memory abilities of dogs. Most modern research on dog cognition has focused on pet dogs living in human homes in developed countries, a small fraction of the dog population. Dogs from other populations may show different cognitive behaviors. Breed differences possibly could impact on spatial learning and memory abilities. Studies history The first intelligence test for dogs was developed in 1976. It included measurements of short-term memory, agility, and ability to solve problems such as detouring to a goal. 
It also assessed the ability of a dog to adapt to new conditions and cope with emotionally difficult situations. The test was administered to 100 dogs and standardized, and breed norms were developed. Stanley Coren used surveys done by dog obedience judges to rank dog breeds by intelligence and published the results in his 1994 book The Intelligence of Dogs. Perception Perception refers to mental processes through which incoming sensory information is organized and interpreted in order to represent and understand the environment. Perception includes such processes as the selection of information through attention, the organization of sensory information through grouping, and the identification of events and objects. In the dog, olfactory information (the sense of smell) is particularly salient (compared with humans) but the dog's senses also include vision, hearing, taste, touch and proprioception. There is also evidence that dogs sense the Earth's magnetic field. One researcher has proposed that dogs perceive the passing of time through the dissipation of smells. Awareness The concept of object permanence is the ability of an animal to understand that objects continue to exist even when they have moved outside of their field of view. This ability is not present at birth, and developmental psychologist Jean Piaget described six stages in the development of object permanence in human infants. A similar approach has been used with dogs, and there is evidence that dogs go through similar stages and reach the advanced fifth stage by an age of 8 weeks. At this stage they can track "successive visible displacement" in which the experimenter moves the object behind multiple screens before leaving it behind the last one. It is unclear whether dogs reach Stage 6 of Piaget's object permanence development model. A study in 2013 indicated that dogs appear to recognize other dogs regardless of breed, size, or shape, and distinguish them from other animals. In 2014, a study using magnetic resonance imaging demonstrated that voice-response areas exist in the brains of dogs and that they show a response pattern in the anterior temporal voice areas that is similar to that in humans. Dogs can pass the "sniff test" suggesting potential self-awareness in the olfactory sense, and also show awareness of the size and movement of their bodies. Social cognition Social learning: observation and rank Dogs are capable of learning through simple reinforcement (e.g., classical or operant conditioning), but they also learn by watching humans and other dogs. One study investigated whether dogs engaged in partnered play would adjust their behavior to the attention-state of their partner. The experimenters observed that play signals were only sent when the dog was holding the attention of its partner. If the partner was distracted, the dog instead engaged in attention-getting behavior before sending a play signal. Puppies learn behaviors quickly by following examples set by experienced dogs. This form of intelligence is not particular to those tasks dogs have been bred to perform, but can be generalized to various abstract problems. For example, Dachshund puppies were set the problem of pulling a cart by tugging on an attached piece of ribbon in order to get a reward from inside the cart. Puppies that watched an experienced dog perform this task learned the task fifteen times faster than those left to solve the problem on their own. The social rank of dogs affects their performance in social learning situations. 
In social groups with a clear hierarchy, dominant individuals are the more influential demonstrators and the knowledge transfer tends to be unidirectional, from higher rank to lower. In a problem-solving experiment, dominant dogs generally performed better than subordinates when they observed a human demonstrator's actions, a finding that reflects the dominance of the human in dog-human groups. Subordinate dogs learn best from the dominant dog that is adjacent in the hierarchy. Following human cues Dogs show human-like social cognition in various ways. For example, dogs can react appropriately to human body language such as gesturing and pointing, and they also understand human voice commands. In one study, puppies were presented with a box, and shown that, when a handler pressed a lever, a ball would roll out of the box. The handler then allowed the puppy to play with the ball, making it an intrinsic reward. The pups were then allowed to interact with the box. Roughly three quarters of the puppies subsequently touched the lever, and over half successfully released the ball, compared to only 6% in a control group that did not watch the human manipulate the lever. Similarly, dogs may be guided by cues indicating the direction of a human's attention. In one task a reward was hidden under one of two buckets. The experimenter then indicated the location of the reward by tapping the bucket, pointing to the bucket, nodding at the bucket, or simply looking at the bucket. The dogs followed these signals, performing better than chimpanzees, wolves, and human infants at this task; even puppies with limited exposure to humans performed well. Dogs can follow the direction of pointing by humans. New Guinea singing dogs are a half-wild proto-dog endemic to the remote alpine regions of New Guinea and these can follow human pointing as can Australian dingoes. These both demonstrate an ability to read human gestures that arose early in domestication without human selection. Dogs and wolves have also been shown to follow more complex pointing made with body parts other than the human arm and hand (e.g. elbow, knee, foot). Dogs tend to follow hand/arm pointed directions more when combined with eye signaling as well. In general, dogs seem to use human cues as an indication on where to go and what to do. Overall, dogs appear to have several cognitive skills necessary to understand communication as information; however, findings on dogs' understanding of referentiality and others' mental states are controversial and it is not clear whether dog themselves communicate with informative motives. For canines to perform well on traditional human-guided tasks (e.g. following the human point) both relevant lifetime experiences with humans—including socialization to humans during the critical phase for social development—and opportunities to associate human body parts with certain outcomes (such as food being provided by humans, a human throwing or kicking a ball, etc.) are required. In 2016, a study of water rescue dogs that respond to words or gestures found that the dogs would respond to the gesture rather than the verbal command. Memory Episodic memory Dogs have demonstrated episodic-like memory by recalling past events that included the complex actions of humans. In a 2019 study, a correlation has been shown between the size of the dog and the functions of memory and self-control, with larger dogs performing significantly better than smaller dogs in these functions. 
However, in the study brain size did not predict a dog's ability to follow human pointing gestures, nor was it associated with their inferential and physical reasoning abilities. A 2018 study on canine cognitive abilities found that various animals, including pigs, pigeons and chimpanzees, are able to remember the what, where and when of an event, which dogs cannot do. Learning and using words Various studies have shown that dogs readily learn the names of objects and can retrieve an item from among many others when given its name. For example, in 2008, Betsy, a Border Collie, knew over 345 words by the retrieval test, and she was also able to connect an object with a photographic image of the object, despite having seen neither before. In another study, a dog watched as experimenters handed an object back and forth to each other while using the object's name in a sentence. The dog subsequently retrieved the item given its name. In humans, "fast mapping" is the ability to form quick and rough hypotheses about the meaning of a new word after only a single exposure. In 2004, a study with Rico, a Border Collie, showed he was able to fast map. Rico initially knew the labels of over 200 items. He inferred the names of novel items by exclusion, that is, by knowing that the novel item was the one that he did not already know. Rico correctly retrieved such novel items immediately and four weeks after the initial exposure. Rico was also able to interpret phrases such as "fetch the sock" by its component words (rather than considering its utterance to be a single word). Rico could also give the sock to a specified person. This performance is comparable to that of 3-year-old humans. In 2013, a study documented the learning and memory capabilities of a Border Collie, "Chaser", who had learned the names and could associate by verbal command over 1,000 words at the time of its publishing. Chaser was documented as capable of learning the names of new objects "by exclusion", and capable of linking nouns to verbs. It is argued that central to the understanding of the Border Collie's remarkable accomplishments is the dog's breeding background—collies bred for herding work are uniquely suited for intellectual tasks like word association which may require the dog to work "at a distance" from their human companions, and the study credits this dog's selective breeding in addition to rigorous training for her intellectual prowess. Some research has suggested that while dogs can easily make a distinction between familiar known words and nonsensical dissimilar words, they struggle to differentiate between known familiar words and nonsense words that differ by only a single sound, as measurements of the dogs' brain activity showed no difference in response between a known word and a similar but nonsensical word. This would give dogs the word processing capability equivalent to the average 14-month human infant. An fMRI study found that the dog brain distinguished, without training, a familiar from an unfamiliar language. The study also found that older dogs were better at discriminating one language from the other, suggesting an effect of the amount of exposition to the language. Emotions Studies suggest that dogs feel complex emotions, like jealousy and anticipation. However, behavioral evidence of seemingly human emotions must be interpreted with care. For example, in his 1996 book Good Natured, ethologist Frans de Waal discusses an experiment on guilt and reprimands conducted on a female Siberian Husky. 
The dog had the habit of shredding newspapers, and when her owner returned home to find the shredded papers and scold her, she would act guilty. However, when the owner himself shredded the papers without the dog's knowledge, the dog "acted just as 'guilty' as when she herself had created the mess." De Waal concludes that the dog did not display true guilt as humans understand it, but rather simply the anticipation of reprimand. One limitation in the study of emotions in non-human animals is that they cannot verbalize their feelings. However, dogs' emotions can be studied indirectly through cognitive bias tests, which measure a cognitive bias and allow inferences to be made about the mood of the animal. Researchers have found that dogs suffering from separation anxiety have a more negative cognitive bias compared to dogs without separation anxiety. On the other hand, when dogs' separation anxiety is treated with medications and behavior therapy, their cognitive bias becomes less negative than before treatment. Administration of oxytocin, rather than a placebo, also induces a more positive cognitive bias and positive expectations in dogs. It is therefore suggested that the cognitive bias test can be used to monitor positive emotional states, and therefore welfare, in dogs. There is evidence that dogs can discriminate the emotional expressions of human faces. In addition, they seem to respond to faces in somewhat the same way as humans. For example, humans tend to gaze at the right side of a person's face, which may be related to the use of the right brain hemisphere for facial recognition. Research indicates that dogs also fixate on the right side of a human face, but not that of other dogs or other animals. Dogs are the only non-primate species known to do so. Problem solving Dogs have learned to activate a robot to deliver them food rewards. Dogs have been observed to learn to use public transport to arrive at a desired destination. In Moscow, 20 out of 500 dogs learned to commute. Eclipse, a black Labrador in Seattle, would occasionally ride the bus ahead of its owner when eager to get to the dog park. Ratty, a Jack Russell terrier in Yorkshire, England, traveled by bus to be fed at two pubs. Captive-raised dingoes (Canis dingo) can outperform domestic dogs in non-social problem-solving. Another study indicated that after undergoing training to solve a simple manipulation task, dogs faced with an unsolvable version of the same problem look at the human, whereas socialized wolves do not. Modern domestic dogs use humans to solve some of their problems for them. Sex-specific dynamics are an important contributor to individual differences in the cognitive performance of pet dogs in repeated problem-solving tasks. Learning by inference Dogs have been shown to learn by making inferences in a similar way to children. Dogs also have the ability to learn behaviors through watching and interacting with other dogs. In one study, dogs were first introduced to a setting with two bowls, only one of which contained a reward. After four demonstrations showing the dogs that at least one of the bowls held a reward, the empty bowl was lifted, shown to the dog, and laid back down; 33% of the dogs then correctly picked the reward bowl more often than the empty bowl. Theory of mind Theory of mind is the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from one's own. 
There is some evidence that dogs demonstrate a theory of mind by engaging in deception. For example, one observer reported that a dog hid a stolen treat by sitting on it until the rightful owner of the treat left the room. Although this could have been accidental, it suggests that the thief understood that the treat's owner would be unable to find the treat if it were out of view. A study found that dogs are able to identify an object that a human partner is looking for based on its relevance to the partner, and that they are more inclined to indicate an object that is relevant to the partner than an irrelevant one; this suggests that dogs might have a rudimentary version of some of the skills necessary for theory of mind. Dogs can also figure out what a human is seeing. Tool use Dogs have been trained to follow commands to drive cars. See also Animal cognition Cat intelligence Dog behavior List of individual dogs Animals taking public transportation Pig intelligence References Further reading Bradshaw, John. Dog Sense (2012 Basic Books). Coren, Stanley. The Intelligence of Dogs (1994). Hare, Brian & Woods, Vanessa. The Genius of Dogs (2013 Penguin Publishing Group). Reveals research findings about how dogs think and how we humans can have deeper relationships with them. Horowitz, Alexandra. Inside of a Dog: What Dogs See, Smell, and Know (2009 Scribner). Miklosi, Adam. Dog Behaviour, Evolution, and Cognition (2016 Oxford University Press). Provides a basis for a complete dog behavioral biology based on concepts derived from contemporary ethology. Pilley, John and Hinzmann, Hilary. Chaser: Unlocking the Genius of the Dog Who Knows a Thousand Words (2013 Houghton Mifflin Harcourt). Animal intelligence Dogs Ethology
Dog intelligence
[ "Biology" ]
3,765
[ "Behavioural sciences", "Ethology", "Behavior" ]
968,734
https://en.wikipedia.org/wiki/Integrability%20conditions%20for%20differential%20systems
In mathematics, certain systems of partial differential equations are usefully formulated, from the point of view of their underlying geometric and algebraic structure, in terms of a system of differential forms. The idea is to take advantage of the way a differential form restricts to a submanifold, and the fact that this restriction is compatible with the exterior derivative. This is one possible approach to certain over-determined systems, for example, including Lax pairs of integrable systems. A Pfaffian system is specified by 1-forms alone, but the theory includes other types of example of differential system. To elaborate, a Pfaffian system is a set of 1-forms on a smooth manifold (which one sets equal to 0 to find solutions to the system). Given a collection of differential 1-forms on an -dimensional manifold , an integral manifold is an immersed (not necessarily embedded) submanifold whose tangent space at every point is annihilated by (the pullback of) each . A maximal integral manifold is an immersed (not necessarily embedded) submanifold such that the kernel of the restriction map on forms is spanned by the at every point of . If in addition the are linearly independent, then is ()-dimensional. A Pfaffian system is said to be completely integrable if admits a foliation by maximal integral manifolds. (Note that the foliation need not be regular; i.e. the leaves of the foliation might not be embedded submanifolds.) An integrability condition is a condition on the to guarantee that there will be integral submanifolds of sufficiently high dimension. Necessary and sufficient conditions The necessary and sufficient conditions for complete integrability of a Pfaffian system are given by the Frobenius theorem. One version states that if the ideal algebraically generated by the collection of αi inside the ring Ω(M) is differentially closed, in other words then the system admits a foliation by maximal integral manifolds. (The converse is obvious from the definitions.) Example of a non-integrable system Not every Pfaffian system is completely integrable in the Frobenius sense. For example, consider the following one-form : If dθ were in the ideal generated by θ we would have, by the skewness of the wedge product But a direct calculation gives which is a nonzero multiple of the standard volume form on R3. Therefore, there are no two-dimensional leaves, and the system is not completely integrable. On the other hand, for the curve defined by then θ defined as above is 0, and hence the curve is easily verified to be a solution (i.e. an integral curve) for the above Pfaffian system for any nonzero constant c. Examples of applications In pseudo-Riemannian geometry, we may consider the problem of finding an orthogonal coframe θi, i.e., a collection of 1-forms that form a basis of the cotangent space at every point with that are closed (dθi = 0, ). By the Poincaré lemma, the θi locally will have the form dxi for some functions xi on the manifold, and thus provide an isometry of an open subset of M with an open subset of Rn. Such a manifold is called locally flat. This problem reduces to a question on the coframe bundle of M. Suppose we had such a closed coframe If we had another coframe , then the two coframes would be related by an orthogonal transformation If the connection 1-form is ω, then we have On the other hand, But is the Maurer–Cartan form for the orthogonal group. 
Therefore, it obeys the structural equation , and this is just the curvature of M: After an application of the Frobenius theorem, one concludes that a manifold M is locally flat if and only if its curvature vanishes. Generalizations Many generalizations of these integrability conditions exist for differential systems that are not necessarily generated by one-forms. The most famous of these are the Cartan–Kähler theorem, which only works for real analytic differential systems, and the Cartan–Kuranishi prolongation theorem. See the further reading below for details. The Newlander–Nirenberg theorem gives integrability conditions for an almost-complex structure. Further reading Bryant, Chern, Gardner, Goldschmidt, Griffiths, Exterior Differential Systems, Mathematical Sciences Research Institute Publications, Springer-Verlag. Olver, P., Equivalence, Invariants, and Symmetry, Cambridge. Ivey, T., Landsberg, J.M., Cartan for Beginners: Differential Geometry via Moving Frames and Exterior Differential Systems, American Mathematical Society. Dunajski, M., Solitons, Instantons and Twistors, Oxford University Press. Partial differential equations Differential topology Differential systems
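The formulas omitted in the passage above can be restated compactly. As a sketch in LaTeX, and assuming the standard textbook one-form θ = dz − y dx for the non-integrable example (the specific form was not reproduced in the text, so this choice is only illustrative):

% Frobenius condition: the ideal generated by the \alpha_i is differentially closed,
d\mathcal{I} \subset \mathcal{I},
\qquad \text{equivalently} \qquad
d\alpha_i \wedge \alpha_1 \wedge \cdots \wedge \alpha_K = 0 \quad \text{for all } i .

% Assumed non-integrable example on R^3:
\theta = dz - y\,dx, \qquad
d\theta = dx \wedge dy, \qquad
\theta \wedge d\theta = dz \wedge dx \wedge dy \neq 0 .

Since θ ∧ dθ is a nonzero multiple of the volume form, dθ cannot lie in the ideal generated by θ, matching the conclusion above that such a system admits no two-dimensional integral manifolds.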
Integrability conditions for differential systems
[ "Mathematics" ]
1,011
[ "Topology", "Differential topology" ]
968,762
https://en.wikipedia.org/wiki/Messier%2091
Messier 91 (also known as NGC 4548 or M91) is a barred spiral galaxy that is found in the south of Coma Berenices. It is in the local supercluster and is part of the Virgo Cluster of galaxies. It is about 63 million light-years away from our galaxy. It was the last of a group of eight "nebulae" – the term 'galaxy' only coming into use for these objects once it was realized in the 20th century that they were extragalactic – discovered by Charles Messier in 1781. It is the faintest object in the Messier catalog, with an apparent magnitude of 10.2. As a result of a bookkeeping error by Messier, M91 was for a long time one of the few missing entries in the Messier catalog, not matching any known object in the sky. It was not until 1969 that amateur astronomer William C. Williams realized that M91 was NGC 4548, which was catalogued by William Herschel in 1784. Some sources contend the nearby spiral galaxy NGC 4571 was considered as a candidate for this object by Herschel. Observation history The object was discovered in 1781 by Messier who described it as nebula without stars, fainter than M90. Messier mistakenly logged its position from Messier 58, where in fact it should have been Messier 89. William Herschel observed the same object in 1784. In 1969 Williams solved this lost Messier object by measuring its right ascension and declination relative to those of the nearby galaxy M89 (notable reference stars angularly nearby are sparse) – rather than M58, a 9th-magnitude galaxy which Messier recorded in 1778. This amended night sky "star-hopping" reference point matches Messier's figures to 0.1 of an arcminute () in right ascension and 1 in declination, a sixtieth of a degree. Features Inclusion of Messier 91 in the Virgo cluster was confirmed in 1997 from observing Cepheid variables which place it at million light years away. Its bar is very conspicuous – it is seen with position angle of 65 to 245 degrees when being measured from the North direction to the East. There is a countering peculiar (local) velocity toward us through the Virgo cluster of about 700 km/s within the cluster's recession velocity of about 1100 km/s, which produces its observed recessional velocity of only about 400 km/s. Another source gives the latter figure as 803 km/s. Messier 91 is also classified as an anemic galaxy, that is: a spiral galaxy with little star formation and gas compared with other galaxies of its type. See also List of Messier objects References and footnotes External links SEDS: Messier Object 91 Galaxy M91 Barred spiral galaxies Virgo Cluster Coma Berenices 091 NGC objects 07753 41934 Astronomical objects discovered in 1781 Discoveries by Charles Messier
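As a brief check of the velocity figures quoted above (a restatement of the arithmetic implied there, not new data): the observed recessional velocity is the cluster's recession velocity less the galaxy's peculiar motion toward us,

v_obs ≈ 1100 km/s − 700 km/s ≈ 400 km/s,

consistent with the quoted value of about 400 km/s; the alternative figure of 803 km/s comes from a different measurement, not from different arithmetic.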
Messier 91
[ "Astronomy" ]
604
[ "Coma Berenices", "Constellations" ]
968,834
https://en.wikipedia.org/wiki/VO2%20max
{{DISPLAYTITLE:VO2 max}} V̇O2 max (also maximal oxygen consumption, maximal oxygen uptake or maximal aerobic capacity) is the maximum rate of oxygen consumption attainable during physical exertion. The name is derived from three abbreviations: "V̇" for volume (the dot over the V indicates "per unit of time" in Newton's notation), "O2" for oxygen, and "max" for maximum and usually normalized per kilogram of body mass. A similar measure is V̇O2 peak (peak oxygen consumption), which is the measurable value from a session of physical exercise, be it incremental or otherwise. It could match or underestimate the actual V̇O2 max. Confusion between the values in older and popular fitness literature is common. The capacity of the lung to exchange oxygen and carbon dioxide is constrained by the rate of blood oxygen transport to active tissue. The measurement of V̇O2 max in the laboratory provides a quantitative value of endurance fitness for comparison of individual training effects and between people in endurance training. Maximal oxygen consumption reflects cardiorespiratory fitness and endurance capacity in exercise performance. Elite athletes, such as competitive distance runners, racing cyclists or Olympic cross-country skiers, can achieve V̇O2 max values exceeding 90 mL/(kg·min), while some endurance animals, such as Alaskan huskies, have V̇O2 max values exceeding 200 mL/(kg·min). In physical training, especially in its academic literature, V̇O2 max is often used as a reference level to quantify exertion levels, such as 65% V̇O2 max as a threshold for sustainable exercise, which is generally regarded as more rigorous than heart rate, but is more elaborate to measure. Normalization per body mass V̇O2 max is expressed either as an absolute rate in (for example) litres of oxygen per minute (L/min) or as a relative rate in (for example) millilitres of oxygen per kilogram of the body mass per minute (e.g., mL/(kg·min)). The latter expression is often used to compare the performance of endurance sports athletes. However, V̇O2 max generally does not vary linearly with body mass, either among individuals within a species or among species, so comparisons of the performance capacities of individuals or species that differ in body size must be done with appropriate statistical procedures, such as analysis of covariance. Measurement and calculation Measurement Accurately measuring V̇O2 max involves a physical effort sufficient in duration and intensity to fully tax the aerobic energy system. In general clinical and athletic testing, this usually involves a graded exercise test in which exercise intensity is progressively increased while measuring: ventilation and oxygen and carbon dioxide concentration of the inhaled and exhaled air. V̇O2 max is measured during a cardiopulmonary exercise test (CPX test). The test is done on a treadmill or cycle ergometer. In untrained subjects, V̇O2 max is 10% to 20% lower when using a cycle ergometer compared with a treadmill. However, trained cyclists' results on the cycle ergometer are equal to or even higher than those obtained on the treadmill. The classic V̇O2 max, in the sense of Hill and Lupton (1923), is reached when oxygen consumption remains at a steady state ("plateau") despite an increase in workload. The occurrence of a plateau is not guaranteed and may vary by person and sampling interval, leading to modified protocols with varied results. 
Calculation: the Fick equation V̇O2 may also be calculated by the Fick equation: , when these values are obtained during exertion at a maximal effort. Here Q is the cardiac output of the heart, CaO2 is the arterial oxygen content, and CvO2 is the venous oxygen content. (CaO2 – CvO2) is also known as the arteriovenous oxygen difference. The Fick equation may be used to measure V̇O2 in critically ill patients, but its usefulness is low even in non-exerted cases. Using a breath-based VO2 to estimate cardiac output, on the other hand, seems to be reliable enough. Estimation using submaximal exercise testing The necessity for a subject to exert maximum effort in order to accurately measure V̇O2 max can be dangerous in those with compromised respiratory or cardiovascular systems; thus, sub-maximal tests for estimating V̇O2 max have been developed. The heart rate ratio method An estimate of V̇O2 max is based on maximum and resting heart rates. In the Uth et al. (2004) formulation, it is given by: This equation uses the ratio of maximum heart rate (HRmax) to resting heart rate (HRrest) to predict V̇O2 max. The researchers cautioned that the conversion rule was based on measurements on well-trained men aged 21 to 51 only, and may not be reliable when applied to other sub-groups. They also advised that the formula is most reliable when based on actual measurement of maximum heart rate, rather than an age-related estimate. The Uth constant factor of 15.3 is given for well-trained men. Later studies have revised the constant factor for different populations. According to Voutilainen et al. 2020, the constant factor should be 14 in around 40-year-old normal weight never-smoking men with no cardiovascular diseases, bronchial asthma, or cancer. Every 10 years of age reduces the coefficient by one, as well as does the change in body weight from normal weight to obese or the change from never-smoker to current smoker. Consequently, V̇O2 max of 60-year-old obese current smoker men should be estimated by multiplying the HRmax to HRrest ratio by 10. Cooper test Kenneth H. Cooper conducted a study for the United States Air Force in the late 1960s. One of the results of this was the Cooper test in which the distance covered running in 12 minutes is measured. Based on the measured distance, an estimate of V̇O2 max [in mL/(kg·min)] can be calculated by inverting the linear regression equation, giving us: where d12 is the distance (in metres) covered in 12 minutes. An alternative equation is: where d′12 is distance (in miles) covered in 12 minutes. Multi-stage fitness test There are several other reliable tests and V̇O2 max calculators to estimate V̇O2 max, most notably the multi-stage fitness test (or beep test). Rockport fitness walking test Estimation of V̇O2 max from a timed one-mile track walk (as fast as possible) in decimal minutes (, e.g.: 20:35 would be specified as 20.58), sex, age in years, body weight in pounds (, lbs), and 60-second heart rate in beats-per-minute (, bpm) at the end of the mile. The constant is 6.3150 for males, 0 for females. Correlation coefficient for the generalized formula is 0.88. Reference values Men have a V̇O2 max that is 26% higher (6.6 mL/(kg·min)) than women for treadmill and 37.9% higher (7.6 mL/(kg·min)) than women for cycle ergometer on average. V̇O2 max is on average 22% higher (4.5 mL/(kg·min)) when measured using a treadmill compared with a cycle ergometer. 
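The estimation formulas referenced above were omitted from the text; the Python sketch below restates them in commonly cited forms. The Fick relation and the 15.3 heart-rate-ratio factor follow from the surrounding description, while the Cooper-test coefficients (504.9 and 44.73) are the usual published values and should be read as an assumption here rather than as figures given by this article.

def vo2_fick(cardiac_output_l_min, ca_o2_ml_per_l, cv_o2_ml_per_l):
    # Fick principle: V'O2 = Q x (CaO2 - CvO2), returned in mL O2 per minute.
    return cardiac_output_l_min * (ca_o2_ml_per_l - cv_o2_ml_per_l)

def vo2max_heart_rate_ratio(hr_max, hr_rest, factor=15.3):
    # Uth et al. (2004) estimate in mL/(kg*min); 15.3 applies to well-trained men,
    # and, as noted above, later work suggests lower factors (around 14) for other groups.
    return factor * hr_max / hr_rest

def vo2max_cooper(distance_12min_m):
    # Cooper 12-minute run estimate in mL/(kg*min), using the commonly cited
    # coefficients for the inverted regression (assumed, not stated above).
    return (distance_12min_m - 504.9) / 44.73

# Example: HRmax 190 bpm and HRrest 60 bpm give roughly 48 mL/(kg*min);
# a 2,800 m Cooper run gives roughly 51 mL/(kg*min).
print(vo2max_heart_rate_ratio(190, 60), vo2max_cooper(2800))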
Effect of training Non-athletes The average untrained healthy male has a V̇O2 max of approximately 35–40 mL/(kg·min). The average untrained healthy female has a V̇O2 max of approximately 27–31 mL/(kg·min). These scores can improve with training and decrease with age, though the degree of trainability also varies widely. Athletes In sports where endurance is an important component in performance, such as road cycling, rowing, cross-country skiing, swimming, and long-distance running, world-class athletes typically have high V̇O2 max values. Elite male runners can consume up to 85 mL/(kg·min), and female elite runners can consume about 77 mL/(kg·min). Norwegian cyclist Oskar Svendsen holds the record for the highest V̇O2 ever tested with 97.5 mL/(kg·min). Animals V̇O2 max has been measured in other animal species. During loaded swimming, mice had a V̇O2 max of around 140 mL/(kg·min). Thoroughbred horses had a V̇O2 max of around 193 mL/(kg·min) after 18 weeks of high-intensity training. Alaskan huskies running in the Iditarod Trail Sled Dog Race had V̇O2 max values as high as 240 mL/(kg·min). Estimated V̇O2 max for pronghorn antelopes was as high as 300 mL/(kg·min). Limiting factors The factors affecting V̇O2 may be separated into supply and demand. Supply is the transport of oxygen from the lungs to the mitochondria (combining pulmonary function, cardiac output, blood volume, and capillary density of the skeletal muscle) while demand is the rate at which the mitochondria can reduce oxygen in the process of oxidative phosphorylation. Of these, the supply factors may be more limiting. However, it has also been argued that while trained subjects are probably supply limited, untrained subjects can indeed have a demand limitation. General characteristics that affect V̇O2 max include age, sex, fitness and training, and altitude. V̇O2 max can be a poor predictor of performance in runners due to variations in running economy and fatigue resistance during prolonged exercise. The body works as a system. If one of these factors is sub-par, then the whole system's normal capacity is reduced. The drug erythropoietin (EPO) can boost V̇O2 max by a significant amount in both humans and other mammals. This makes EPO attractive to athletes in endurance sports, such as professional cycling. EPO has been banned since the 1990s as an illicit performance-enhancing substance, but by 1998 it had become widespread in cycling and led to the Festina affair as well as being mentioned ubiquitously in the USADA 2012 report on the U.S. Postal Service Pro Cycling Team. Greg LeMond has suggested establishing a baseline for riders' V̇O2 max (and other attributes) to detect abnormal performance increases. Clinical use to assess cardiorespiratory fitness and mortality V̇O2 max/peak is widely used as an indicator of cardiorespiratory fitness (CRF) in select groups of athletes or, rarely, in people under assessment for disease risk. In 2016, the American Heart Association (AHA) published a scientific statement recommending that CRF quantifiable as V̇O2 max/peak be regularly assessed and used as a clinical vital sign; ergometry (exercise wattage measurement) may be used if V̇O2 is unavailable. This statement was based on evidence that lower fitness levels are associated with a higher risk of cardiovascular disease, all-cause mortality, and mortality rates. 
In addition to risk assessment, the AHA recommendation cited the value for measuring fitness to validate exercise prescriptions, physical activity counseling, and improve both management and health of people being assessed. A 2023 meta-analysis of observational cohort studies showed an inverse and independent association between V̇O2 max and all-cause mortality risk. Every one metabolic equivalent increase in estimated cardiorespiratory fitness was associated with an 11% reduction in mortality. The top third of V̇O2 max scores represented a 45% lower mortality in people compared with the lowest third. As of 2023, V̇O2 max is rarely employed in routine clinical practice to assess cardiorespiratory fitness or mortality due to its considerable demand for resources and costs. History British physiologist Archibald Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Hill and German physician Otto Meyerhof shared the 1922 Nobel Prize in Physiology or Medicine for their independent work related to muscle energy metabolism. Building on this work, scientists began measuring oxygen consumption during exercise. Key contributions were made by Henry Taylor at the University of Minnesota, Scandinavian scientists Per-Olof Åstrand and Bengt Saltin in the 1950s and 60s, the Harvard Fatigue Laboratory, German universities, and the Copenhagen Muscle Research Centre. See also Anaerobic exercise Arteriovenous oxygen difference Cardiorespiratory fitness Comparative physiology Oxygen pulse Respirometry Running economy Training effect VDOT vVO2max References Exercise biochemistry Sports terminology Respiratory physiology
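For context on the metabolic-equivalent figures above, one MET is conventionally taken as an oxygen uptake of about 3.5 mL/(kg·min) (a standard convention, not stated in this article), so an estimated fitness of, say, 10 METs corresponds to roughly 10 × 3.5 = 35 mL/(kg·min).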
VO2 max
[ "Chemistry", "Biology" ]
2,589
[ "Biochemistry", "Exercise biochemistry" ]
968,837
https://en.wikipedia.org/wiki/Messier%2092
Messier 92 (also known as M92, M 92, or NGC 6341) is a globular cluster of stars in the northern constellation of Hercules. Discovery It was discovered by Johann Elert Bode on December 27, 1777, then published in the Berliner Astronomisches Jahrbuch during 1779. It was inadvertently rediscovered by Charles Messier on March 18, 1781, and added as the 92nd entry in his catalogue. William Herschel first resolved individual stars in 1783. Visibility It is one of the brighter globular clusters in the northern sky in apparent magnitude, and one of the more luminous in the galaxy in absolute magnitude, but it is often overlooked by amateur astronomers due to its angular proximity to the bright cluster Messier 13, which is about 20% closer. Compared with M13, M92 is only slightly less bright but about one-third less extended. It is visible to the naked eye under very good viewing conditions. With a small telescope, M92 can be seen as a nebulous smudge even in a severely light-polluted sky, and can be further resolved in darker conditions. Characteristics It is also one of the galaxy's oldest clusters. It is around above/below the galactic plane and from the Galactic Center. It is about 26,700 light-years away from the Solar System. The half-light radius, or radius containing the upper half of its light emission, is 1.09 arcminutes (), while the tidal radius, the broadest standard measure, is 15.17 arcminutes. It appears only slightly flattened: its minor axis is about 89% ± 3% of the major axis. As is characteristic of globular clusters, it contains little of any element other than hydrogen and helium; astronomers term this low metallicity. Specifically, relative to the Sun, its iron abundance is [Fe/H] = −2.32 dex, corresponding to about 0.5% of the solar iron abundance. This puts the estimated age range for the cluster at . Its true diameter is about 108 light-years, and it may have a mass corresponding to about 330,000 Suns. The cluster is not yet in, nor guaranteed to undergo, core collapse, and its core radius is about 2 arcseconds (). It is an Oosterhoff type II (OoII) globular cluster, which means it belongs to the group of metal-poor clusters with longer-period RR Lyrae variable stars. The 1997 Catalogue of Variable Stars in Globular Clusters listed 28 candidate variable stars in the cluster, although only 20 have been confirmed. As of 2001, there are 17 known RR Lyrae variables in Messier 92. Ten X-ray sources have been detected within the 1.02 arcminute half-mass radius of the cluster, of which half are candidate cataclysmic variable stars. M92 is approaching us at 112 km/s. Its coordinates indicate that the Earth's North Celestial Pole periodically passes within less than one degree of this cluster during the precession of Earth's axis. Thus, M92 was a "Polarissima Borealis", or "North Cluster", about 12,000 years ago (10,000 BC), and it will be again in about 14,000 years (16,000 AD). The cluster hosts multiple stellar populations: at least two stellar generations, named 1G and 2G, as well as two distinct groups of 2G stars (2GA and 2GB). The helium mass fractions of the 2GA and 2GB stars are higher than that of the 1G stars by 0.01 and 0.04, respectively. Gallery See also List of Messier objects References and footnotes External links Messier 92 @ SEDS Messier pages Messier 92, Galactic Globular Clusters Database page Messier 092 Messier 092 092 Messier 092 Astronomical objects discovered in 1777 Discoveries by Johann Elert Bode
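A quick restatement of the metallicity arithmetic above: [Fe/H] is the base-10 logarithm of the iron-to-hydrogen ratio relative to the Sun, so

10^{−2.32} ≈ 0.0048,

i.e. roughly 0.5% of the solar iron abundance, as stated.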
Messier 92
[ "Astronomy" ]
822
[ "Hercules (constellation)", "Constellations" ]
968,879
https://en.wikipedia.org/wiki/Vortex%20tube
The vortex tube, also known as the Ranque-Hilsch vortex tube, is a mechanical device that separates a compressed gas into hot and cold streams. The gas emerging from the hot end can reach temperatures of , and the gas emerging from the cold end can reach . It has no moving parts and is considered an environmentally friendly technology because it can work solely on compressed air and does not use Freon. Its efficiency is low, however, counteracting its other environmental advantages. Pressurised gas is injected tangentially into a swirl chamber near one end of a tube, leading to a rapid rotation—the first vortex—as it moves along the inner surface of the tube to the far end. A conical nozzle allows gas specifically from this outer layer to escape at that end through a valve. The remainder of the gas is forced to return in an inner vortex of reduced diameter within the outer vortex. Gas from the inner vortex transfers energy to the gas in the outer vortex, so the outer layer is hotter at the far end than it was initially. The gas in the central vortex is likewise cooler upon its return to the starting-point, where it is released from the tube. Method of operation To explain the temperature separation in a vortex tube, there are two main approaches: Fundamental approach: the physics This approach is based on first-principles physics alone and is not limited to vortex tubes only, but applies to moving gas in general. It shows that temperature separation in a moving gas is due only to enthalpy conservation in a moving frame of reference. The thermal process in the vortex tube can be estimated in the following way: The main physical phenomenon of the vortex tube is the temperature separation between the cold vortex core and the warm vortex periphery. The "vortex tube effect" is fully explained with the work equation of Euler, also known as Euler's turbine equation, which can be written in its most general vectorial form as: , where is the total, or stagnation temperature of the rotating gas at radial position , the absolute gas velocity as observed from the stationary frame of reference is denoted with ; the angular velocity of the system is and is the isobaric heat capacity of the gas. This equation was published in 2012; it explains the fundamental operating principle of vortex tubes (Here's a video with animated demonstration of how this works). The search for this explanation began in 1933 when the vortex tube was discovered and continued for more than 80 years. The above equation is valid for an adiabatic turbine passage; it clearly shows that while gas moving towards the center is getting colder, the peripheral gas in the passage is "getting faster". Therefore, vortex cooling is due to angular propulsion. The more the gas cools by reaching the center, the more rotational energy it delivers to the vortex and thus the vortex rotates even faster. This explanation stems directly from the law of energy conservation. Compressed gas at room temperature is expanded in order to gain speed through a nozzle; it then climbs the centrifugal barrier of rotation during which energy is also lost. The lost energy is delivered to the vortex, which speeds its rotation. In a vortex tube, the cylindrical surrounding wall confines the flow at periphery and thus forces conversion of kinetic into internal energy, which produces hot air at the hot exit. Therefore, the vortex tube is a rotorless turboexpander. 
It consists of a rotorless radial inflow turbine (cold end, center) and a rotorless centrifugal compressor (hot end, periphery). The work output of the turbine is converted into heat by the compressor at the hot end. Phenomenological approach This approach relies on observation and experimental data. It is specifically tailored to the geometrical shape of the vortex tube and the details of its flow and is designed to match the particular observables of the complex vortex tube flow, namely turbulence, acoustic phenomena, pressure fields, air velocities and many others. The earlier published models of the vortex tube are phenomenological. They are: Radial pressure difference: centrifugal compression and air expansion Radial transfer of angular momentum Radial acoustic streaming of energy Radial heat pumping More on these models can be found in recent review articles on vortex tubes. The phenomenological models were developed at an earlier time when the turbine equation of Euler was not thoroughly analyzed; in the engineering literature, this equation is studied mostly to show the work output of a turbine; while temperature analysis is not performed since turbine cooling has more limited application unlike power generation, which is the main application of turbines. Phenomenological studies of the vortex tube in the past have been useful in presenting empirical data. However, due to the complexity of the vortex flow this empirical approach was able to show only aspects of the effect but was unable to explain its operating principle. Dedicated to empirical details, for a long time the empirical studies made the vortex tube effect appear enigmatic and its explanation – a matter of debate. History The vortex tube was invented in 1931 by French physicist Georges J. Ranque. It was rediscovered by Paul Dirac in 1934 while he was searching for a device to perform isotope separation, leading to development of the Helikon vortex separation process. German physicist improved the design and published a widely read paper in 1947 on the device, which he called a Wirbelrohr (literally, whirl pipe). In 1954, Westley published a comprehensive survey entitled "A bibliography and survey of the vortex tube", which included over 100 references. In 1951 Curley and McGree, in 1956 Kalvinskas, in 1964 Dobratz, in 1972 Nash, and in 1979 Hellyar made important contribution to the RHVT literature by their extensive reviews on the vortex tube and its applications. From 1952 to 1963, C. Darby Fulton, Jr. obtained four U.S. patents relating to the development of the vortex tube. In 1961, Fulton began manufacturing the vortex tube under the company name Fulton Cryogenics. Fulton sold the company to Vortec, Inc. The vortex tube was used to separate gas mixtures, oxygen and nitrogen, carbon dioxide and helium, carbon dioxide and air in 1967 by Linderstrom-Lang. Vortex tubes also seem to work with liquids to some extent, as demonstrated by Hsueh and Swenson in a laboratory experiment where free body rotation occurs from the core and a thick boundary layer at the wall. Air is separated causing a cooler air stream coming out the exhaust hoping to chill as a refrigerator. In 1988 R. T. Balmer applied liquid water as the working medium. It was found that when the inlet pressure is high, for instance 20-50 bar, the heat energy separation process exists in incompressible (liquids) vortex flow as well. 
Note that this separation is only due to heating; there is no longer cooling observed since cooling requires compressibility of the working fluid. Efficiency Vortex tubes have lower efficiency than traditional air conditioning equipment. They are commonly used for inexpensive spot cooling, when compressed air is available. Applications Current applications Commercial vortex tubes are designed for industrial applications to produce a temperature drop of up to . With no moving parts, no electricity, and no refrigerant, a vortex tube can produce refrigeration up to using 100 standard cubic feet per minute (2.832 m3/min) of filtered compressed air at . A control valve in the hot air exhaust adjusts temperatures, flows and refrigeration over a wide range. Vortex tubes are used for cooling of cutting tools (lathes and mills, both manually-operated and CNC machines) during machining. The vortex tube is well-matched to this application: machine shops generally already use compressed air, and a fast jet of cold air provides both cooling and removal of the chips produced by the tool. This eliminates or drastically reduces the need for liquid coolant, which is messy, expensive, and environmentally hazardous. See also Heat pump Maxwell's demon Windhexe References Further reading G. Ranque, (1933) "Expériences sur la détente giratoire avec productions simultanées d'un echappement d'air chaud et d'un echappement d'air froid," Journal de Physique et Le Radium, Supplement, 7th series, 4 : 112 S – 114 S. H. C. Van Ness, Understanding Thermodynamics, New York: Dover, 1969, starting on page 53. A discussion of the vortex tube in terms of conventional thermodynamics. Mark P. Silverman, And Yet it Moves: Strange Systems and Subtle Questions in Physics, Cambridge, 1993, Chapter 6 Samuel B. Hsueh and Frank R. Swenson,"Vortex Diode Interior Flows," 1970 Missouri Academy of Science Proceedings, Warrensburg, Mo. C. L. Stong, The Amateur Scientist, London: Heinemann Educational Books Ltd, 1962, Chapter IX, Section 4, The "Hilsch" Vortex Tube, p514-519. M. Kurosaka, Acoustic Streaming in Swirling Flow and the Ranque-Hilsch (vortex-tube) Effect, Journal of Fluid Mechanics, 1982, 124:139-172 M. Kurosaka, J.Q. Chu, J.R. Goodman, Ranque-Hilsch Effect Revisited: Temperature Separation Traced to Orderly Spinning Waves or 'Vortex Whistle', Paper AIAA-82-0952 presented at the AIAA/ASME 3rd Joint Thermophysics Conference (June 1982) R. Ricci, A. Secchiaroli, V. D’Alessandro, S. Montelpare. Numerical analysis of compressible turbulent helical flow in a Ranque-Hilsch vortex tube. Computational Methods and Experimental Measurement XIV, pp. 353–364, Ed. C. Brebbia, C.M. Carlomagno, . A. Secchiaroli, R. Ricci, S. Montelpare, V. D’Alessandro. Fluid Dynamics Analysis of a Ranque-Hilsch Vortex-Tube. Il Nuovo Cimento C, vol.32, 2009, . A. Secchiaroli, R. Ricci, S. Montelpare, V. D’Alessandro. Numerical simulation of turbulent flow in a Ranque-Hilsch vortex-tube. International Journal of Heat and Mass Transfer, Vol. 52, Issues 23–24, November 2009, pp. 5496–5511, . N. Pourmahmoud, A. Hassanzadeh, O. Moutaby. Numerical Analysis of The Effect of Helical Nozzles Gap on The Cooling Capacity of Ranque Hilsch Vortex Tube. International Journal of Refrigeration, Vol. 35, Issue 5, 2012, pp. 1473–1483, . M. G. Ranque, 1933, "Experiences sur la detente giratoire avec production simulanees d’un echappement d’air chaud et d’air froid", Journal de Physique et le Radium (in French), Supplement, 7th series, Vol. 4, pp. 112 S–114 S. R. 
Hilsch, 1947, "The Use of the Expansion of Gases in a Centrifugal Field as Cooling Process", Review of Scientific Instruments, Vol. 18, No. 2, pp. 108–113. J Reynolds, 1962, "A Note on Vortex Tube Flows", Journal of Fluid Mechanics, Vol. 14, pp. 18–20. T. T. Cockerill, 1998, "Thermodynamics and Fluid Mechanics of a Ranque-Hilsch Vortex Tube", Ph.D. Thesis, University of Cambridge, Department of Engineering. W. Fröhlingsdorf, and H. Unger, 1999, "Numerical Investigations of the Compressible Flow and the Energy Separation in the Ranque-Hilsch Vortex Tube", Int. J. Heat Mass Transfer, Vol. 42, pp. 415–422. J. Lewins, and A. Bejan, 1999, "Vortex Tube Optimization Theory", Energy, Vol. 24, pp. 931–943. J. P. Hartnett, and E. R. G. Eckert, 1957, "Experimental Study of the Velocity and Temperature Distribution in a high-velocity vortex-type flow", Transactions of the ASME, Vol. 79, No. 4, pp. 751–758. M. Kurosaka, 1982, "Acoustic Streaming in Swirling Flows", Journal of Fluid Mechanics, Vol. 124, pp. 139–172. K. Stephan, S. Lin, M. Durst, F. Huang, and D. Seher, 1983, "An Investigation of Energy Separation in a Vortex Tube", International Journal of Heat and Mass Transfer, Vol. 26, No. 3, pp. 341–348. B. K. Ahlborn, and J. M. Gordon, 2000, "The Vortex Tube as a Classical Thermodynamic Refrigeration Cycle", Journal of Applied Physics, Vol. 88, No. 6, pp. 3645–3653. G. W. Sheper, 1951, Refrigeration Engineering, Vol. 59, No. 10, pp. 985–989. J. M. Nash, 1991, "Vortex Expansion Devices for High Temperature Cryogenics", Proc. of the 26th Intersociety Energy Conversion Engineering Conference, Vol. 4, pp. 521–525. D. Li, J. S. Baek, E. A. Groll, and P. B. Lawless, 2000, "Thermodynamic Analysis of Vortex Tube and Work Output Devices for the Transcritical Carbon Dioxide Cycle", Preliminary Proceedings of the 4th IIR-Gustav Lorentzen Conference on Natural Working Fluids at Purdue, E. A. Groll & D. M. Robinson, editors, Ray W. Herrick Laboratories, Purdue University, pp. 433–440. H. Takahama, 1965, "Studies on Vortex Tubes", Bulletin of JSME, Vol. 8, No. 3, pp. 433–440. B. Ahlborn, and S. Groves, 1997, "Secondary Flow in a Vortex Tube", Fluid Dyn. Research, Vol. 21, pp. 73–86. H. Takahama, and H. Yokosawa, 1981, "Energy Separation in Vortex Tubes with a Divergent Chamber", ASME Journal of Heat Transfer, Vol. 103, pp. 196–203. M. Sibulkin, 1962, "Unsteady, Viscous, Circular Flow. Part 3: Application to the Ranque-Hilsch Vortex Tube", Journal of Fluid Mechanics, Vol. 12, pp. 269–293. K. Stephan, S. Lin, M. Durst, F. Huang, and D. Seher, 1984, "A Similarity Relation for Energy Separation in a Vortex Tube", Int. J. Heat Mass Transfer, Vol. 27, No. 6, pp. 911–920. H. Takahama, and H. Kawamura, 1979, "Performance Characteristics of Energy Separation in a Steam-Operated Vortex Tube", International Journal of Engineering Science, Vol. 17, pp. 735–744. G. Lorentzen, 1994, "Revival of Carbon Dioxide as a Refrigerant", H&V Engineer, Vol. 66. No. 721, pp. 9–14. D. M. Robinson, and E. A. Groll, 1996, "Using Carbon Dioxide in a Transcritical Vapor Compression Refrigeration Cycle", Proceedings of the 1996 International Refrigeration Conference at Purdue, J. E. Braun and E. A. Groll, editors, Ray W. Herrick Laboratories, Purdue University, pp. 329–336. W. A. Little, 1998, "Recent Developments in Joule-Thomson Cooling: Gases, Coolers, and Compressors", Proc. Of the 5th Int. Cryocooler Conference, pp. 3–11. A. P. 
Kleemenko, 1959, "One Flow Cascade Cycle (in schemes of Natural Gas Liquefaction and Separation)", Proceedings of the 10th International Congress on Refrigeration, Pergamon Press, London, p. 34. J. Marshall, 1977, "Effect of Operating Conditions, Physical Size, and Fluid Characteristics on the Gas Separation Performance of a Linderstrom-Lang Vortex Tube", Int. J. Heat Mass Transfer, Vol. 20, pp. 227–231 External links G. J. Ranque's U.S. Patent Detailed explanation of the vortex tube effect with many pictures Oberlin college physics demo Building a Vortex Tube This Old Tony, YouTube Vortex'n 2 This Old Tony, YouTube Cooling technology Thermodynamics Gas technologies
Vortex tube
[ "Physics", "Chemistry", "Mathematics" ]
3,514
[ "Thermodynamics", "Dynamical systems" ]
968,918
https://en.wikipedia.org/wiki/SN%202004dj
SN 2004dj was the brightest supernova since SN 1987A at the time of its discovery. This Type II-P supernova was discovered by Japanese astronomer Kōichi Itagaki on 31 July 2004. At the time of its discovery, its apparent brightness was 11.2 visual magnitude; the discovery occurred after the supernova had reached its peak magnitude. The supernova's progenitor is a star in a young, compact star cluster in the galaxy NGC 2403, in Camelopardalis. The cluster had been cataloged as the 96th object in a list of luminous stars and clusters by Allan Sandage in 1984; the progenitor is therefore commonly referred to as Sandage 96. This cluster is easily visible in a Kitt Peak National Observatory image and appears starlike. External links Light curves and spectra on the Open Supernova Catalog supernovae.net image collection Bright Supernova page on 2004dj References 20040731 Camelopardalis
SN 2004dj
[ "Chemistry", "Astronomy" ]
201
[ "Supernovae", "Astronomical events", "Constellations", "Camelopardalis", "Explosions" ]
968,994
https://en.wikipedia.org/wiki/6SN7
6SN7 is a dual triode vacuum tube with an eight-pin octal base. It provides a medium gain (20 dB). The 6SN7 is basically two 6J5 triodes in one envelope. History The 6SN7 was originally released in 1939. It was officially registered in 1941 by RCA and Sylvania as the glass-cased 6SN7GT, originally listed on page 235 of RCA's 1940 RC-14 Receiving Tube Manual, in the Recently Added section, as: 6SN7-GT. Although the 6S-series tubes are often metal-cased, there was never a metal-envelope 6SN7 (there being no pin available to connect the metal shield); there were, however, a few glass-envelope tubes with a metal band, such as the 6SN7A developed during World War II - slightly improved in some respects but the metal band was prone to splitting. Numerous variations on the 6SN7 type have been offered over the years, including: 7N7 (Sylvania 1940, short-lived loktal-base version), 1633 (RCA 1941, also for 26-V radios), 12SX7 (RCA 1946, intended for use in 12-volt aircraft electronics), 5692 (RCA 1948, a super-premium version - not exactly identical - with guaranteed 10,000-hour lifetime), 6Н8С (Cyrillic, Soviet version, , in Latin letters: 6N8S); 6SN7 DDR, 6Н8М, E1606 (=CV278), OSW3129 versions with different/larger glass envelopes; 6042 (1951, another 1633 type), and 6180 (1952) 6SN7W (1956; a more rugged military version, glass envelope with metal band) The American military designator for the 6SN7GA is VT-231. The British called it CV1988. European designations include the 1942 ECC32 (not an exact equivalent), 13D2 and B65. The 6SN7 has a 6.3 V 600 mA heater/filament. The 12-volt 300 mA filament equivalent is the 12SN7GT or 12SN7GTA. The 14N7 is the Loktal version of the 12SN7GT. There was also a comparatively rare 8V 8SN7 for 450 mA series-string TV sets) and 25 Volt/0.15 Amp heater version: 25SN7GT. Related types The 1937 6F8G was also an octal-based double triode with essentially the same characteristics as the 6SN7 (or two 6J5's), but in a 'Coke Bottle' large (Outline ST-12) glass envelope with a different pin arrangement and utilising a top cap connection for the first triode's grid (making pin 1 available for a metal shield). 6J5 The 6J5, first registered in June 1937, and 6J5GT (registered April 1938; British version L63) were octal single triodes with identical characteristics to one half of a 6SN7. Other equivalents to the 6J5 include: VT-94, 6C2, 6J5M, 38565J; military versions: CV1933, 10E/11448 and CV1934; Loktal base version: 7A4 (military name: CV1770), and 12.6 V heater version: 12J5. They in turn were successors to the 1935 RCA 6C5 and 1938 6P5G. Successors to the 6SN7 The 1954 6CG7 and 6FQ7 are electrically equivalent to the 6SN7, with nine-pin miniature ("Noval") base (RCA, 1951), also made as an 8.4V 450mA series string heater type as the 8CG7. In contrast to what some sources claim, the ECC40 with Rimlock base and introduced by Philips in 1948 cannot be considered a successor to the 6SN7 as the electrical characteristics are too different. The 1946 miniature 12AU7/ECC82, with similar, but not identical, electrical characteristics to the 6SN7 and ECC32, and a filament usable on either 6.3V or 12.6V supplies, is more widely used than the 6CG7/6FQ7. Usage The 6SN7 was used as an audio amplifier in the 1940-1955 period, usually in the driver stages of power amps. 
The designer of the famous Williamson amplifier, one of the first true high-fidelity designs, suggested use of the 6SN7 (or B65) in his 1949 revision since it is similar to the original circuit's L63 (=6J5) British single triodes, four of which were used in each channel of his 1947 circuit. The 6SN7 was one of the most important components of the first programmable electronic digital computer, the ENIAC, which contained several thousand of the tubes. The SAGE computer systems used hundreds of 5692s as flip-flops. With the advent of television, the 6SN7 was well suited for use as a vertical-deflection amplifier. As screen sizes became larger, voltage and power headroom became insufficient. To address this, uprated versions with higher peak voltage and power ratings were introduced. The GE 6SN7GTA (GE, 1950) had anode dissipation uprated to 5.0 watts. The 1954 GE 6SN7GTB also had controlled heater warmup time, better for series heater strings. The 6SN7 was considered to be obsolete by the 1960s, replaced by the 12AU7, and became almost unobtainable. With the introduction of semiconductor electronics, vacuum tubes of all types ceased to be manufactured by the major producers. A small demand for vacuum tubes in guitar amplifiers and very expensive high-fidelity equipment remained. As existing stocks ran out, factories in eastern Europe and China started to manufacture the 6SN7, and higher-gain 6SL7. , 6SN7s and 6SL7s are still manufactured in Russia and by JJ Electronic, and are widely available. See also List of vacuum tubes 12AT7 12AU7 12AX7 References External links The Tube Collectors Association Datasheet on the 6SN7 RCA Receiving Tube Manual, RC-14, Harrison NJ, 1940 RCA receiving Tube Manual, RC-29, harrison NJ, 1973 Sylvania Technical Manual 14th edition (reprint), 2000 GE Techni-Talk, Volume 6 number 5, October–November 1954 Datasheet on the 6CG7 SPICE MODEL Reviews of 6sn7 tubes. Vacuum tubes Guitar amplification tubes
6SN7
[ "Physics" ]
1,440
[ "Vacuum tubes", "Vacuum", "Matter" ]
969,038
https://en.wikipedia.org/wiki/Messier%2094
Messier 94 (also known as NGC 4736, Cat's Eye Galaxy, Crocodile Eye Galaxy, or Croc's Eye Galaxy) is a spiral galaxy in the mid-northern constellation Canes Venatici. It was discovered by Pierre Méchain in 1781, and catalogued by Charles Messier two days later. Although some references describe M94 as a barred spiral galaxy, the "bar" structure appears to be more oval-shaped. The galaxy has two ring structures. Structure M94 is classified as having a low ionization nuclear emission region (LINER) nucleus. LINERs in general are characterized by optical spectra that reveal that ionized gas is present but the gas is only weakly ionized (i.e. the atoms are missing relatively few electrons). M94 has an inner ring with a diameter of 70 arcseconds (″) (given its distance, about ) and an outer ring with a diameter of 600″ (about ). These rings appear to form at resonance points in the disk of the galaxy. The inner ring is the site of strong star formation activity and is sometimes referred to as a starburst ring. This star formation is fueled by gas driven dynamically into the ring by the inner oval-shaped bar-like structure. A 2009 study conducted by an international team of astrophysicists revealed that the outer ring of M94 is not a closed stellar ring, as historically attributed in the literature, but a complex structure of spiral arms when viewed in mid-IR and UV. The study found that the outer disk of this galaxy is active. It contains approximately 23% of the galaxy's total stellar mass and contributes about 10% of the galaxy's new stars. In fact, the star formation rate of the outer disk is approximately two times greater than the inner disk because it is more efficient per unit of stellar mass. There are several possible external events that could have led to the origin of M94's outer disk including the accretion of a satellite galaxy or the gravitational interaction with a nearby star system. However, further research found problems with each of these scenarios. Therefore, the report concludes that the inner disk of M94 is an oval distortion which led to the creation of this galaxy's peripheral disk. In a paper published in 2004, John Kormendy and Robert Kennicutt argued that M94 contains a prototypical pseudobulge. A classical spiral galaxy consists of a disk of gas and young stars that intersects a large sphere (or bulge) of older stars. In contrast, a galaxy with a pseudobulge does not have a large bulge of old stars but instead contain a bright central structure with intense star formation that looks like a bulge when the galaxy is viewed face-on. In the case of M94, this pseudobulge takes the form of a ring around a central oval-shaped region. In 2008 a study was published showing that M94 had very little or no dark matter present. The study analyzed the rotation curves of the galaxy's stars and the density of hydrogen gas and found that ordinary luminous matter appeared to account for all of the galaxy's mass. This result was unusual and somewhat controversial, as current models do not indicate how a galaxy could form without a dark matter halo or how a galaxy could lose its dark matter. Other explanations for galactic rotation curves, such as MOND, also have difficulty explaining this galaxy. This result has yet to be confirmed or accepted by other research groups, however, and has not actually been tested against the predictions of standard galaxy formation models. Location At least two techniques have been used to measure distances to M94. 
The surface brightness fluctuations distance measurement technique estimates distances to spiral galaxies based on the graininess of the appearance of their bulges. The distance measured to M94 using this technique is 17.0 ± 1.4 Mly (5.2 ± 0.4 Mpc). However, M94 is close enough that the Hubble Space Telescope can be used to resolve and measure the fluxes of the brightest individual stars within the galaxy. These measured fluxes can then be compared to the measured fluxes of similar stars within the Milky Way to measure the distance. The estimated distance to M94 using this technique is 15 ± 2 Mly (4.7 ± 0.6 Mpc). Averaged together, these distance measurements give a distance estimate of 16.0 ± 1.3 Mly (4.9 ± 0.4 Mpc). M94 is one of the brightest galaxies within the M94 Group, a group of galaxies that contains between 16 and 24 galaxies. This group is one of many that lie within the Virgo Supercluster (i.e. the Local Supercluster). Although a large number of galaxies may be associated with M94, only a few galaxies near M94 appear to form a gravitationally bound system. Most of the other nearby galaxies appear to be moving with the expansion of the universe. See also List of Messier objects NGC 1512, a galaxy with a similar double ring. NGC 1167, another LINER galaxy. References External links M94- A Panchromatic Perspective M94- A New Optical Perspective Galaxy Messier 94 at the astro-photography site of Mr. T. Yoshida. SEDS: Spiral Galaxy M94 Unbarred spiral galaxies Messier 094 Messier 094 Messier 094 094 Messier 094 07996 43495 Astronomical objects discovered in 1781 Discoveries by Pierre Méchain
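The two estimates quoted above (17.0 ± 1.4 Mly from surface brightness fluctuations and 15 ± 2 Mly from resolved stars) can be combined in the straightforward way sketched below. This is an illustrative back-of-the-envelope combination, not the procedure used in the cited literature, but it lands close to the quoted 16.0 ± 1.3 Mly:

```python
import math

# Two independent distance estimates for M94 (value, 1-sigma error), in Mly,
# as quoted in the text above.
estimates = [(17.0, 1.4), (15.0, 2.0)]

# Simple (unweighted) mean, with errors combined in quadrature.
mean = sum(d for d, _ in estimates) / len(estimates)
err = math.sqrt(sum(e**2 for _, e in estimates)) / len(estimates)
print(f"unweighted mean: {mean:.1f} +/- {err:.1f} Mly")

# Inverse-variance weighted mean, which favours the tighter measurement.
weights = [1.0 / e**2 for _, e in estimates]
wmean = sum(w * d for w, (d, _) in zip(weights, estimates)) / sum(weights)
werr = math.sqrt(1.0 / sum(weights))
print(f"weighted mean:   {wmean:.1f} +/- {werr:.1f} Mly")
```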
Messier 94
[ "Astronomy" ]
1,144
[ "Canes Venatici", "Constellations" ]
969,070
https://en.wikipedia.org/wiki/Messier%20103
Messier 103 (also known as M103, or NGC 581) is a small open cluster of many faint stars in Cassiopeia. It was discovered on 27 March 1781 by Pierre Méchain, but later added as Charles Messier's last deep-sky object in his catalogue. It is located 9,400 light-years from the Sun and is about 15 light years across. It contains two prominent stars: the brightest, at magnitude 10.5, and a magnitude 10.8 red giant near the center of the cluster. Another bright foreground object, the double star Struve 131, is not a member of the cluster. Cluster membership is about 172 stars, based on a greater than 50% probability that each is gravitationally bound to the cluster. M103 is between 12.6 and 25 million years old. Observation history After the discovery of Messier 101 through 103 by Pierre Méchain, Messier later added this open cluster to his own catalogue. In 1783, William Herschel described the region of M103 as containing 14 to 16 pL (pretty large) stars and a great many eS (extremely faint) ones. Åke Wallenquist first identified 40 stars in M103, while Antonín Bečvář raised this to 60. Archinal and Hynes suggest that the cluster has 172 stars. Admiral William Henry Smyth pointed out the cluster's 10.8-magnitude red giant, noting it as a double star on Cassiopeia's knee, about 1° northeast of Delta Cassiopeiae, which is sometimes called Ruchbah or Rukhbah. Telescopic view Messier 103 is an easy object to find, and the cluster is visible even in binoculars or a small telescope. M103 can be seen as a nebulous fan-shaped patch about a fifth of the apparent diameter of the Moon, or 6 arcminutes (6′, 0.1°), across. To find M103, it is suggested that the observer center on Ruchbah, the lowest star of the signature “W” asterism of Cassiopeia. The cluster will appear as a hazy patch in a field about the length of an imaginary line towards Epsilon Cassiopeiae, a northern endpoint of the 'W', and placed on the outer side of the 'W'. Gallery See also List of Messier objects References and footnotes External links Open Cluster M103 @ SEDS Messier pages Open Cluster M103 @ Skyhound.com Open clusters Cassiopeia (constellation) 103 Messier 103 178104?? Perseus Arm Discoveries by Pierre Méchain
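The figures quoted above — a distance of roughly 9,400 light-years, a true size of about 15 light-years, and an apparent diameter of about 6 arcminutes — are mutually consistent under the small-angle approximation. A quick check (illustrative only; the quoted values are rounded):

```python
import math

distance_ly = 9_400      # distance to M103 quoted above
angular_size_arcmin = 6  # apparent diameter quoted above

# Small-angle approximation: physical size ≈ distance × angle (in radians).
angle_rad = math.radians(angular_size_arcmin / 60.0)
size_ly = distance_ly * angle_rad
print(f"implied diameter ≈ {size_ly:.0f} light-years")  # ≈ 16 ly, close to the ~15 ly quoted
```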
Messier 103
[ "Astronomy" ]
560
[ "Cassiopeia (constellation)", "Constellations" ]
969,081
https://en.wikipedia.org/wiki/Messier%2095
Messier 95, also known as M95 or NGC 3351, is a barred spiral galaxy about 33 million light-years away in the zodiac constellation Leo. It was discovered by Pierre Méchain in 1781, and catalogued by compatriot Charles Messier four days later. In 2012 its most recent supernova was discovered. The galaxy has a morphological classification of SB(r)b, with the SBb notation indicating it is a barred spiral with arms that are intermediate on the scale from tightly to loosely wound, and an "(r)" meaning an inner ring surrounds the bar. The latter is a ring-shaped, circumnuclear star-forming region with a diameter of approximately . The spiral structure extends outward from the ring. Its ring structure is about (solar masses) in molecular gas and yields a star formation rate of  yr−1. The star formation is occurring in at least five regions with diameters between 100 and 150 pc that are composed of several star clusters ranging in size from 1.7 to 4.9 pc. These individual clusters contain of stars, and may be on the path to forming globular clusters. A Type II supernova, designated as SN 2012aw, was discovered in M95 in 2012. The light curve of this displayed great flattening after 27 days, thus classifying it as a Type II-P, or "plateau", core-collapse supernova. The disappearance of the progenitor star was later confirmed from near-infrared imaging of the region. The brightness from the presumed red supergiant progenitor allowed its mass to be estimated as . M95 is one of several galaxies within the M96 Group, a group of galaxies in the constellation Leo, the other Messier objects of which are M96 and M105. See also List of Messier objects References External links SEDS: Spiral Galaxy M95 Barred spiral galaxies Messier 095 Messier 095 095 Messier 095 05850 32007 17810320 Discoveries by Pierre Méchain
Messier 95
[ "Astronomy" ]
427
[ "Leo (constellation)", "Constellations" ]
969,113
https://en.wikipedia.org/wiki/Postal%20savings%20system
Postal savings systems provide depositors who do not have access to banks a safe and convenient method to save money. Many nations have operated banking systems involving post offices to promote saving money among the poor. History In 1861, Great Britain became the first nation to offer such an arrangement. It was supported by Sir Rowland Hill, who successfully advocated the penny post, and William Ewart Gladstone, then Chancellor of the Exchequer, who saw it as a cheap way to finance the public debt. At the time, banks were mainly in the cities and largely catered to wealthy customers. Rural citizens and the poor had no choice but to keep their funds at home or on their persons. The original Post Office Savings Bank was limited to deposits of £30 per year with a maximum balance of £150. Interest was paid at the rate of 2.5 percent per annum on whole pounds in the account. Later, the limits were raised to a maximum of £500 per year in deposits with no limit on the total amount. Within five years of the system's establishment, there were over 600,000 accounts and £8.2 million on deposit. By 1927, there were twelve million accounts—one in four Britons—with £283 million (£ million today) on deposit. The British system first offered only savings accounts. In 1880, it also became a retail outlet for government bonds, and in 1916 introduced war savings certificates, which were renamed National Savings Certificates in 1920. In 1956, it launched a lottery bond, the Premium Bond, which became its most popular savings certificate. Post Office Savings Bank became National Savings Bank in 1969, later renamed National Savings and Investments (NS&I), an agency of HM Treasury. While continuing to offer National Savings services, the (then) General Post Office, created the National Giro in 1968 (privatized as Girobank and acquired by Alliance & Leicester in 1989). Many other countries adopted such systems soon afterwards. Japan established a postal savings system in 1875 and the Dutch government started a systems in 1881 under the name Rijkspostspaarbank (national postal savings bank); this was followed by many other countries over the next 50 years. The later part of the 20th century saw a reversal where these systems were abolished or privatized. By country Austria In Austria, the Österreichische Post used to own the Österreichische Postsparkasse (P.S.K.). This financial institute was bought and merged by the BAWAG in 2005. In April 2020, Österreichische Post launched a new postal bank, bank99. Brazil Brazil instituted a postal banking system in 2002, where the national postal service (ECT) formed a partnership with the largest private bank in the country (Bradesco) to provide financial services at post offices. The current partnership is with Bank of Brazil. Today the bank is in a semi-defunct state since 2019, after a decree from the government shut down the branch. Bulgaria In Bulgaria, the postal banking system was a subsidiary of Bulgarian Posts until 1991, when Bulgarian Postbank was created. In the years that followed, Bulgarian Postbank was privatized and the relationship between post offices and bank offices became weaker. Postal banking services ceased to be available in post offices in 2011. Canada Canada Post offered banking services via its Post Office Savings Bank, created by the Post Office Act in April 1868, less than a year following the nation's confederation. A century later, the Post Office Savings Bank was shut down in 1968–69. 
Since at least the early 2010s, postal banking has been discussed and studied periodically, with postal unions backing the idea. In October 2022, Canada Post dipped its toe into the possibility of rolling out postal banking services by offering small personal loans between $1,000-$30,000 in partnership with TD Bank, but no chequing or savings accounts. Less than a month later, in November 2022, the loan program discontinued any new applications. In 2024, Canada Post confirmed it has partnered with KOHO Financial to bring back postal banking by offering chequing and savings accounts; with a range of different accounts, including a basic no-fee account as well as accounts with fees. The date of public nationwide access to these banking services is planned for 2025. Many rural communities, remote indigenous communities, and even some inner-city neighbourhoods, are lacking a local bank or credit union, many with either no reliable access to the internet or no affordable (or free public wifi) for internet banking. These financially underserved communities are left without the ability to have a locally accessible bank account, having a high risk of theft or misappropriation of funds if large amounts of cash are stored at home or else resort to paying costly fees using Canada Post's prepaid reloadable Visa card, inability to get a bank loan for a business, inability to build credit, great difficulty running a business, inability to deposit or cash cheques, or inability to cash cheques for more than a limited amount at retailers (if there's any retailers cashing cheques in their community) along with cheque cashing fees, or cash larger cheques at extremely high-fee payday lenders. The number of bank branches in Canada have been steadily on the decline, from 6,350 in 2014 to 5,783 in 2020; as have credit union branches, from 3,603 in 2002 to 2,336 in 2022. Of the 2,620 small towns and rural communities with post offices in Canada, 1,178 (45%) did not have any bank branches; in over 700 indigenous communities in Canada, over 90% did not have any bank or credit union branches. As of 2018, there are more post offices (6,200) in Canada than bank branches. "Banking deserts" occur in cities also, in Ottawa's downtown Bank Street, there are more high-fees payday lenders than banks. Despite the steady increase of online banking among Canadians, in 2022, the amount of cash in circulation was 25% higher than pre-pandemic levels. In 2020, 40% of transactions under $15 were conducted with cash. The ability to have a chequing account and withdrawal cash in-person from a local branch is vital for teens, low-income adults, adults trying to get out of debt, and vulnerable Canadians such as the disabled (those able to live without formal trusteeship), elderly and technology-illiterate, and those fleeing domestic abuse, as handling cash gives a more concrete understanding of where money is going than making purchases with a card, as well as less fees taking from their already small funds since purchases with cash have no transaction fees, overdraft fees, non-sufficient funds fees, or card loading fees. China In the People's Republic of China, the Postal Savings Bank of China (:zh:中国邮政储蓄银行) was split from China Post in 2007 and established as a state-owned limited company. It continues to provide banking services at post offices and, at the same time, some separated branches. Finland In Finland, Postisäästöpankki ("Post Savings Bank") was founded in 1887. In 1970 its name was shortened to Postipankki ("Post Bank"). 
In 1998 it was changed to a commercial bank named Leonia Bank. Later, it was merged with an insurance company to form Sampo Group, and the bank was renamed Sampo Bank. It had a few own offices, but also post offices performed its banking operations until 2000. In 2007, Sampo Bank was sold to the Danish Danske Bank. France France's postal service, La Poste, offers financial services through the affiliated bank known as La Banque postale. Germany Deutsche Postbank has a postal banking system. Deutsche Postbank was a subsidiary of Deutsche Post until 2008, when 30% of Deutsche Post's shares were sold to Deutsche Bank. Postal banking services are still available at all branches of Deutsche Post and Deutsche Postbank. Greece Greek Postal Savings Bank provided banking services from post offices until 2013 when it was replaced by New TT Hellenic Postbank a subsidiary of Eurobank Group. Hungary The postal savings bank of Hungary was established on 1 February 1886 by order of Lax IX of 1885. This act initially only authorized savings accounts, but was later expanded by Law XXXIV of 1889, which authorized "checks and clearing" starting on 1 January 1890. In 1919 the Postal Savings Bank notes were issued under the decree of the Revolutionary Governing Council of the Hungarian Soviet Republic by the Magyar Postatakarékpénztár (Hungarian Postal Savings Bank). India India Post has provided an avenue for managing savings to the people living in rural or the urban poor, underserved by the formal banking system, since 1882 when Post Office Savings Bank was established. Over time, the scope of financial services provided by India Post grew to include other National Savings Schemes promoted by Government of India. In 2018, India Post Payments Bank (IPPB) was launched as a regulated bank to provide a full set of banking services, as specialised division of India Post. As of January 2022, the bank was serving around 50 million customers. Indonesia Postal savings in Indonesia began with the establishment of the Netherlands Indian Post Office Savings Bank () in 1897. During the Japanese occupation of the Dutch East Indies, it was replaced by the and savings were encouraged by the military administration to support the Greater East Asia War. The Savings Office became the Post Office Savings Bank again () after independence, before renamed into the current State Savings Bank, or Bank Tabungan Negara (BTN) in 1963. Between 1963 and 1968, it became the Fifth Unit of Bank Negara Indonesia during the single-bank system, made to support the guided democracy. Currently, BTN offers a savings plan that allows its users to deposit in post offices. Ireland In Ireland, An Post provide a Post Office Savings Bank Deposit Account. It provides an interest rate of 0.15% which is added to the account at the end of the year. Customers are provided with a physical deposit book and can deposit and withdraw from the account using the deposit book at any Post Office Branch. This service is run on behalf of the National Treasury Management Agency with other "Ireland State Saving" schemes offered by the Irish Government, including Prize Bond. An Post also provide saving stamps for children, from the 1980s stamps cost 50p/50c, each stamp was place in a card. There were 10 places on each side of the card, you could exchange the stamps for their value at any post office. Prior to this stamps cost 10p and allowed children to save just IR£1. An Post also provide separate commercial banking services. 
Between 2006 and 2010 it ran Postbank, a joint venture with Fortis Bank Belgium. It now provides banking service under the brand An Post Money. Israel Israel's postal service Israel Postal Company offers utility payment, savings and checking accounts, as well as foreign currency exchange services from all post offices. Italy In Italy, the Postal savings system is run by Poste italiane, the Italian postal service company. Poste italiane run this service along with Cassa Depositi e Prestiti. Japan Japan Post Bank, part of the post office was the world's largest savings bank with 198 trillion yen (US$1.7 trillion) of deposits as of 2006, much from conservative, risk-averse citizens. The state-owned Japan Post Bank business unit of Japan Post was formed in 2007, as part of a ten-year privatization programme, intended to achieve fully private ownership of the postal system by 2017. Kazakhstan In Kazakhstan, national postal operator, Kazpost, has a banking license and offers banking services in all its branches across the country. Kenya Kenya Post Office Savings Bank (KPOSB/Postbank) Netherlands In 1881 the Dutch government founded the Rijkspostspaarbank (National Postal Savings Bank). In 1986 it was privatised, together with the Postgiro service, as the Postbank N.V. Postbank merged with a commercial bank that would eventually become ING Bank. New Zealand Post Office Savings Bank was established in 1867 by the New Zealand government to give New Zealand investors a place to deposit their savings. This included the provision of children's savings accounts known as Post Office Squirrel savings account. The Post Office Savings bank was split into PostBank in 1987 and was acquired by ANZ Bank New Zealand two years later ending the bank. In 2002 the New Zealand government created a new state owned post bank called Kiwibank as part of the New Zealand Post to again establish a postal savings system. Norway Postbanken was founded in 1948 after major political battle as Norges Postsparebank, however the maximum amount allowed to be saved per person was set to NOK 10,000. In 1948 the bank had services provided at 3,600 post offices and post outlets. It was sold in 1999 and became part of the commercial bank DNB ASA. Philippines The Philippine Postal Savings Bank (PPSB), also known as PostalBank, is the state-owned postal savings system in the Philippines. It is the smallest of the Philippines' three state-owned banks and is governed separately from PhilPost. In late 2017, state bank Land Bank of the Philippines acquired PPSB at zero value and made it as a subsidiary. It is now known as Overseas Filipino Bank. Portugal In Portugal, the CTT own 100% of the Banco CTT, which has been operating since 2015 throughout Portugal. Singapore In Singapore, POSB Bank was established in 1877 as the Post Office Savings Bank. Today, it now operates as part of DBS Bank, after its acquisition on 16 November 1998. South Africa Postbank (South Africa), operated by the South African Post Office (SAPO). Offers transactional, savings, investments, insurance & pension banking services. South Korea Korea Post, operated by South Korean government, has its postal banking and postal insurance business since 1982. Banking counter and ATM is available in all post office, excluding postal agencies and delivery centers. Korea Post ATM is connected with all national and regional banks via KFTC. Banking counter is also opened for Korea Development Bank, Industrial Bank, Citibank Korea, and Jeonbuk Bank customers. 
Sri Lanka In Sri Lanka, National Savings Bank and Sri Lanka Post provide banking services through post offices. Taiwan In Taiwan, the Chunghwa Post provides savings accounts and Visa debit card services in the Free Area of the Republic of China. Thailand Between 1 April 1929 and 31 March 1947, the Post and Telegraph Department of the Ministry of Commerce and Communications of Siam (before becoming Ministry of Economic Affairs in 1932 before being split to Ministry of Economic Affairs and Ministry of Communications in 1942) has run Saving Office before becoming Government Savings Bank (GSB) United Kingdom The Post Office Ltd offers savings accounts based on its brand, and is operated by the Bank of Ireland, a commercial bank, and Family Investments, a friendly society. The Post Office branded services are similar to some of National Savings and Investments' services, and include instant savings, Individual Savings Accounts, seasonal savings and savings bonds. Post Office Ltd also provides a Post Office card account that accepts only direct deposits of certain state pension and welfare payments, permitting cash withdrawals over the counter. This last account is offered in partnership with the Department for Work and Pensions until 2010, through investment banking and asset management company JP Morgan. (This contract has recently been awarded to JP Morgan to run till 2015) United States In the United States, the United States Postal Savings System was established in 1911 under the Act of June 25, 1910 (). It was discontinued by the Act of March 28, 1966 (). Fifty years later, Vermont Sen. Bernie Sanders' 2016 presidential campaign platform included plans for postal banking. In 2018, Massachusetts Sen. Elizabeth Warren and New York Sen. Kirsten Gillibrand supported such a program. In April 2018, Gillibrand introduced S.2755 - Postal Banking Act partly in response to the Trump administration's suspension of payday lending regulation imposed during the Obama administration. In 2020, after Joe Biden defeated Senator Bernie Sanders in the 2020 Democratic presidential primaries, the Biden-Sanders "Unity Task Force” policy recommendations for a Biden administration, released in July, included postal banking. In September 2020, Gillibrand and Sanders announced a newer Postal Banking Act. It would help strengthen the Postal Service's financial situation and help unbanked and underbanked people with savings and checking accounts, debit cards and low-dollar loans that they might otherwise be forced to get from payday lenders at high interest rates. Vietnam Lien Viet Post Joint Stock Commercial Bank or LienVietPostBank (LPB), formerly known as LienVietBank, is a Vietnamese retail bank that provides banking products and services through its own transaction points across 42 cities and provinces and 1,031 postal transaction offices nationwide. LBP is considered to be in the top 10 biggest banks in terms of assets and equity [2] and ranked 36th in VNR500 – Top 500 largest private companies in Vietnam in 2013. The Bank is striving to become the bank for everyone in Vietnam by focusing on banking products for households and small and medium enterprises especially in the agriculture sector, and expanding its activities to rural and remote areas via the post. See also History of banking Banking agent References Further reading “We know exactly who today’s dream killers are”: Why postal banking is so needed – and on the rise (2015-01-20), Salon Banks Postal systems
Postal savings system
[ "Technology" ]
3,594
[ "Transport systems", "Postal systems" ]
969,118
https://en.wikipedia.org/wiki/Dragon%20Data
Dragon Data Ltd. was a Welsh producer of home computers during the early 1980s. These computers, the Dragon 32 and Dragon 64, strongly resembled the Tandy TRS-80 Color Computer ("CoCo")—both followed a standard Motorola datasheet configuration for the three key components (CPU, SAM and VDG). The machines came in both 32 KB and (later) 64 KB versions. History The history of Dragon Data in the period 1982–84 was a checkered one. The company was originally set up by toy company Mettoy, and after initial good sales looked to have a bright future. At its high point it entered negotiations with Rexnord's Tano Corporation to form a North American branch. Mettoy then suffered financial difficulties, casting a shadow on the future of Dragon Data before it was spun off as a separate company. However, a number of circumstances (the delay in introducing the 64K model, poor colour support with a maximum of 4 colours displayable in "graphics mode" and only 2 colours in the highest 256 × 192 pixel mode, the late introduction of the external disk unit and of the supporting OS9-based software) caused the company to lose market share. To combat this, under the control of GEC, Dragon Data worked on the next generation of Dragon computers; the Dragon Alpha (or Professional) and Beta (or 128). These systems only made it to the prototype stage before the business went into receivership and was sold on to the Spanish startup Eurohard in 1984. Eurohard also suffered financial problems and went into receivership a couple of years later after the release of the Dragon 200 (a rebranded Dragon 64). In addition to the Dragon 32 and 64, an MSX-compatible machine, the Dragon MSX reached the prototype stage. References External links A Slayed Beast - History of the Dragon computer – From The DRAGON Archive Dedicated DRAGON wiki Defunct computer hardware companies Defunct computer companies of the United Kingdom Defunct companies of Wales Home computer hardware companies Computer companies established in 1982 Computer companies disestablished in 1984 1982 establishments in Wales 1984 disestablishments in Wales Manufacturing companies of Wales
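The colour limitations mentioned above follow from how little video memory the VDG-era graphics modes could address. A rough sketch of the arithmetic is given below; the 1-bit and 2-bit pixel packing and the 128 × 192 four-colour resolution are typical of Motorola-VDG machines and are assumptions here, not figures stated in the text:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes of RAM needed to hold one frame at the given pixel packing."""
    return width * height * bits_per_pixel // 8

# Highest-resolution mode mentioned above: 256 x 192 with 2 colours (1 bit/pixel).
print(framebuffer_bytes(256, 192, 1))  # 6144 bytes (6 KB)

# A 4-colour mode needs 2 bits/pixel, so for the same memory budget the
# horizontal resolution is halved (128 x 192 is the usual VDG trade-off).
print(framebuffer_bytes(128, 192, 2))  # also 6144 bytes
```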
Dragon Data
[ "Technology" ]
437
[ "Computing stubs", "Computer hardware stubs" ]
969,126
https://en.wikipedia.org/wiki/Protein%20structure
Protein structure is the three-dimensional arrangement of atoms in an amino acid-chain molecule. Proteins are polymers specifically polypeptides formed from sequences of amino acids, which are the monomers of the polymer. A single amino acid monomer may also be called a residue, which indicates a repeating unit of a polymer. Proteins form by amino acids undergoing condensation reactions, in which the amino acids lose one water molecule per reaction in order to attach to one another with a peptide bond. By convention, a chain under 30 amino acids is often identified as a peptide, rather than a protein. To be able to perform their biological function, proteins fold into one or more specific spatial conformations driven by a number of non-covalent interactions, such as hydrogen bonding, ionic interactions, Van der Waals forces, and hydrophobic packing. To understand the functions of proteins at a molecular level, it is often necessary to determine their three-dimensional structure. This is the topic of the scientific field of structural biology, which employs techniques such as X-ray crystallography, NMR spectroscopy, cryo-electron microscopy (cryo-EM) and dual polarisation interferometry, to determine the structure of proteins. Protein structures range in size from tens to several thousand amino acids. By physical size, proteins are classified as nanoparticles, between 1–100 nm. Very large protein complexes can be formed from protein subunits. For example, many thousands of actin molecules assemble into a microfilament. A protein usually undergoes reversible structural changes in performing its biological function. The alternative structures of the same protein are referred to as different conformations, and transitions between them are called conformational changes. Levels of protein structure There are four distinct levels of protein structure. Primary structure The primary structure of a protein refers to the sequence of amino acids in the polypeptide chain. The primary structure is held together by peptide bonds that are made during the process of protein biosynthesis. The two ends of the polypeptide chain are referred to as the carboxyl terminus (C-terminus) and the amino terminus (N-terminus) based on the nature of the free group on each extremity. Counting of residues always starts at the N-terminal end (NH2-group), which is the end where the amino group is not involved in a peptide bond. The primary structure of a protein is determined by the gene corresponding to the protein. A specific sequence of nucleotides in DNA is transcribed into mRNA, which is read by the ribosome in a process called translation. The sequence of amino acids in insulin was discovered by Frederick Sanger, establishing that proteins have defining amino acid sequences. The sequence of a protein is unique to that protein, and defines the structure and function of the protein. The sequence of a protein can be determined by methods such as Edman degradation or tandem mass spectrometry. Often, however, it is read directly from the sequence of the gene using the genetic code. It is strictly recommended to use the words "amino acid residues" when discussing proteins because when a peptide bond is formed, a water molecule is lost, and therefore proteins are made up of amino acid residues. Post-translational modifications such as phosphorylations and glycosylations are usually also considered a part of the primary structure, and cannot be read from the gene. 
For example, insulin is composed of 51 amino acids in 2 chains. One chain has 31 amino acids, and the other has 20 amino acids. Secondary structure Secondary structure refers to highly regular local sub-structures on the actual polypeptide backbone chain. Two main types of secondary structure, the α-helix and the β-strand or β-sheets, were suggested in 1951 by Linus Pauling. These secondary structures are defined by patterns of hydrogen bonds between the main-chain peptide groups. They have a regular geometry, being constrained to specific values of the dihedral angles ψ and φ on the Ramachandran plot. Both the α-helix and the β-sheet represent a way of saturating all the hydrogen bond donors and acceptors in the peptide backbone. Some parts of the protein are ordered but do not form any regular structures. They should not be confused with random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. Several sequential secondary structures may form a "supersecondary unit". Tertiary structure Tertiary structure refers to the three-dimensional structure created by a single protein molecule (a single polypeptide chain). It may include one or several domains. The α-helices and β-pleated-sheets are folded into a compact globular structure. The folding is driven by the non-specific hydrophobic interactions, the burial of hydrophobic residues from water, but the structure is stable only when the parts of a protein domain are locked into place by specific tertiary interactions, such as salt bridges, hydrogen bonds, and the tight packing of side chains and disulfide bonds. The disulfide bonds are extremely rare in cytosolic proteins, since the cytosol (intracellular fluid) is generally a reducing environment. Quaternary structure Quaternary structure is the three-dimensional structure consisting of the aggregation of two or more individual polypeptide chains (subunits) that operate as a single functional unit (multimer). The resulting multimer is stabilized by the same non-covalent interactions and disulfide bonds as in tertiary structure. There are many possible quaternary structure organisations. Complexes of two or more polypeptides (i.e. multiple subunits) are called multimers. Specifically it would be called a dimer if it contains two subunits, a trimer if it contains three subunits, a tetramer if it contains four subunits, and a pentamer if it contains five subunits, and so forth. The subunits are frequently related to one another by symmetry operations, such as a 2-fold axis in a dimer. Multimers made up of identical subunits are referred to with a prefix of "homo-" and those made up of different subunits are referred to with a prefix of "hetero-", for example, a heterotetramer, such as the two alpha and two beta chains of hemoglobin. Homomers An assemblage of multiple copies of a particular polypeptide chain can be described as a homomer, multimer or oligomer. Bertolini et al. in 2021 presented evidence that homomer formation may be driven by interaction between nascent polypeptide chains as they are translated from mRNA by nearby adjacent ribosomes. Hundreds of proteins have been identified as being assembled into homomers in human cells. The process of assembly is often initiated by the interaction of the N-terminal region of polypeptide chains. Evidence that numerous gene products form homomers (multimers) in a variety of organisms based on intragenic complementation evidence was reviewed in 1965. 
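The φ and ψ angles mentioned in the secondary-structure discussion above are ordinary torsion (dihedral) angles computed from four consecutive backbone atom positions. A self-contained sketch of that geometry is shown below; the atom coordinates are placeholders for illustration, not real structure data:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (degrees) defined by four points, e.g. the backbone atoms
    C(i-1)-N(i)-CA(i)-C(i) for phi, or N(i)-CA(i)-C(i)-N(i+1) for psi."""
    b0 = p0 - p1                      # vector from p1 back to p0
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Placeholder coordinates (in angstroms) just to exercise the function.
p = [np.array(c, dtype=float) for c in
     [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (2.0, 1.4, 0.0), (3.4, 1.6, 1.0)]]
print(f"dihedral = {dihedral(*p):.1f} degrees")
```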
Domains, motifs, and folds in protein structure Proteins are frequently described as consisting of several structural units. These units include domains, motifs, and folds. Despite the fact that there are about 100,000 different proteins expressed in eukaryotic systems, there are many fewer different domains, structural motifs and folds. Structural domain A structural domain is an element of the protein's overall structure that is self-stabilizing and often folds independently of the rest of the protein chain. Many domains are not unique to the protein products of one gene or one gene family but instead appear in a variety of proteins. Domains often are named and singled out because they figure prominently in the biological function of the protein they belong to; for example, the "calcium-binding domain of calmodulin". Because they are independently stable, domains can be "swapped" by genetic engineering between one protein and another to make chimera proteins. A conservative combination of several domains that occur in different proteins, such as protein tyrosine phosphatase domain and C2 domain pair, was called "a superdomain" that may evolve as a single unit. Structural and sequence motifs The structural and sequence motifs refer to short segments of protein three-dimensional structure or amino acid sequence that were found in a large number of different proteins Supersecondary structure Tertiary protein structures can have multiple secondary elements on the same polypeptide chain. The supersecondary structure refers to a specific combination of secondary structure elements, such as β-α-β units or a helix-turn-helix motif. Some of them may be also referred to as structural motifs. Protein fold A protein fold refers to the general protein architecture, like a helix bundle, β-barrel, Rossmann fold or different "folds" provided in the Structural Classification of Proteins database. A related concept is protein topology. Protein dynamics and conformational ensembles Proteins are not static objects, but rather populate ensembles of conformational states. Transitions between these states typically occur on nanoscales, and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis. Protein dynamics and conformational changes allow proteins to function as nanoscale biological machines within cells, often in the form of multi-protein complexes. Examples include motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines...Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics. " Proteins are often thought of as relatively stable tertiary structures that experience conformational changes after being affected by interactions with other proteins or as a part of enzymatic activity. However, proteins may have varying degrees of stability, and some of the less stable variants are intrinsically disordered proteins. These proteins exist and function in a relatively 'disordered' state lacking a stable tertiary structure. 
As a result, they are difficult to describe by a single fixed tertiary structure. Conformational ensembles have been devised as a way to provide a more accurate and 'dynamic' representation of the conformational state of intrinsically disordered proteins. Protein ensemble files are a representation of a protein that can be considered to have a flexible structure. Creating these files requires determining which of the various theoretically possible protein conformations actually exist. One approach is to apply computational algorithms to the protein data in order to try to determine the most likely set of conformations for an ensemble file. There are multiple methods for preparing data for the Protein Ensemble Database that fall into two general methodologies – pool and molecular dynamics (MD) approaches (diagrammed in the figure). The pool based approach uses the protein's amino acid sequence to create a massive pool of random conformations. This pool is then subjected to more computational processing that creates a set of theoretical parameters for each conformation based on the structure. Conformational subsets from this pool whose average theoretical parameters closely match known experimental data for this protein are selected. The alternative molecular dynamics approach takes multiple random conformations at a time and subjects all of them to experimental data. Here the experimental data is serving as limitations to be placed on the conformations (e.g. known distances between atoms). Only conformations that manage to remain within the limits set by the experimental data are accepted. This approach often applies large amounts of experimental data to the conformations which is a very computationally demanding task. The conformational ensembles were generated for a number of highly dynamic and partially unfolded proteins, such as Sic1/Cdc4, p15 PAF, MKK7, Beta-synuclein and P27 Protein folding As it is translated, polypeptides exit the ribosome mostly as a random coil and folds into its native state. The final structure of the protein chain is generally assumed to be determined by its amino acid sequence (Anfinsen's dogma). Protein stability Thermodynamic stability of proteins represents the free energy difference between the folded and unfolded protein states. This free energy difference is very sensitive to temperature, hence a change in temperature may result in unfolding or denaturation. Protein denaturation may result in loss of function, and loss of native state. The free energy of stabilization of soluble globular proteins typically does not exceed 50 kJ/mol. Taking into consideration the large number of hydrogen bonds that take place for the stabilization of secondary structures, and the stabilization of the inner core through hydrophobic interactions, the free energy of stabilization emerges as small difference between large numbers. Protein structure determination Around 90% of the protein structures available in the Protein Data Bank have been determined by X-ray crystallography. This method allows one to measure the three-dimensional (3-D) density distribution of electrons in the protein, in the crystallized state, and thereby infer the 3-D coordinates of all the atoms to be determined to a certain resolution. Roughly 7% of the known protein structures have been obtained by nuclear magnetic resonance (NMR) techniques. For larger protein complexes, cryo-electron microscopy can determine protein structures. 
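The pool-based approach described above amounts to a search problem: generate many candidate conformations, compute a theoretical observable for each, and keep a subset whose ensemble average matches the experimental value. A toy sketch of that selection step follows; the single "observable", its numbers, and the greedy strategy are invented for illustration (real pipelines use many observables such as NMR or SAXS restraints and more sophisticated selection):

```python
import random

random.seed(0)

# Toy pool: each "conformation" is reduced to one predicted observable
# (e.g. a radius of gyration in angstroms). All values are invented.
pool = [random.gauss(25.0, 4.0) for _ in range(5000)]
experimental_value = 22.0   # hypothetical measured ensemble average
tolerance = 0.05

def ensemble_average(subset):
    return sum(subset) / len(subset)

# Greedy selection: repeatedly add the candidate that moves the subset
# average closest to the experimental value.
subset = []
for _ in range(50):
    best = min(pool, key=lambda c: abs(ensemble_average(subset + [c]) - experimental_value))
    subset.append(best)
    pool.remove(best)
    if len(subset) >= 10 and abs(ensemble_average(subset) - experimental_value) < tolerance:
        break

print(f"{len(subset)} conformers selected, "
      f"ensemble average = {ensemble_average(subset):.2f} (target {experimental_value})")
```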
The resolution is typically lower than that of X-ray crystallography, or NMR, but the maximum resolution is steadily increasing. This technique is still a particularly valuable for very large protein complexes such as virus coat proteins and amyloid fibers. General secondary structure composition can be determined via circular dichroism. Vibrational spectroscopy can also be used to characterize the conformation of peptides, polypeptides, and proteins. Two-dimensional infrared spectroscopy has become a valuable method to investigate the structures of flexible peptides and proteins that cannot be studied with other methods. A more qualitative picture of protein structure is often obtained by proteolysis, which is also useful to screen for more crystallizable protein samples. Novel implementations of this approach, including fast parallel proteolysis (FASTpp), can probe the structured fraction and its stability without the need for purification. Once a protein's structure has been experimentally determined, further detailed studies can be done computationally, using molecular dynamic simulations of that structure. Protein structure databases A protein structure database is a database that is modeled around the various experimentally determined protein structures. The aim of most protein structure databases is to organize and annotate the protein structures, providing the biological community access to the experimental data in a useful way. Data included in protein structure databases often includes 3D coordinates as well as experimental information, such as unit cell dimensions and angles for x-ray crystallography determined structures. Though most instances, in this case either proteins or a specific structure determinations of a protein, also contain sequence information and some databases even provide means for performing sequence based queries, the primary attribute of a structure database is structural information, whereas sequence databases focus on sequence information, and contain no structural information for the majority of entries. Protein structure databases are critical for many efforts in computational biology such as structure based drug design, both in developing the computational methods used and in providing a large experimental dataset used by some methods to provide insights about the function of a protein. Structural classifications of proteins Protein structures can be grouped based on their structural similarity, topological class or a common evolutionary origin. The Structural Classification of Proteins database and CATH database provide two different structural classifications of proteins. When the structural similarity is large the two proteins have possibly diverged from a common ancestor, and shared structure between proteins is considered evidence of homology. Structure similarity can then be used to group proteins together into protein superfamilies. If shared structure is significant but the fraction shared is small, the fragment shared may be the consequence of a more dramatic evolutionary event such as horizontal gene transfer, and joining proteins sharing these fragments into protein superfamilies is no longer justified. Topology of a protein can be used to classify proteins as well. Knot theory and circuit topology are two topology frameworks developed for classification of protein folds based on chain crossing and intrachain contacts respectively. 
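The roughly 50 kJ/mol figure quoted above for the free energy of stabilization can be turned into an intuition for how rarely a stable protein is found unfolded at equilibrium, using the two-state Boltzmann relation. A rough illustration (two-state assumption, room temperature; the numbers are for orientation only):

```python
import math

R = 8.314e-3     # gas constant, kJ/(mol*K)
T = 298.0        # temperature, K
dG_stab = 50.0   # folded-state stabilization quoted in the text, kJ/mol

# Two-state model: [unfolded]/[folded] = exp(-dG_stab / RT)
K_unfold = math.exp(-dG_stab / (R * T))
fraction_unfolded = K_unfold / (1.0 + K_unfold)
print(f"equilibrium unfolded fraction ≈ {fraction_unfolded:.1e}")   # roughly 2e-9
```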
Computational prediction of protein structure The generation of a protein sequence is much easier than the determination of a protein structure. However, the structure of a protein gives much more insight in the function of the protein than its sequence. Therefore, a number of methods for the computational prediction of protein structure from its sequence have been developed. Ab initio prediction methods use just the sequence of the protein. Threading and homology modeling methods can build a 3-D model for a protein of unknown structure from experimental structures of evolutionarily-related proteins, called a protein family. See also Biomolecular structure Gene structure Nucleic acid structure PCRPi-DB Ribbon diagram 3D schematic representation of proteins References Further reading 50 Years of Protein Structure Determination Timeline - HTML Version - National Institute of General Medical Sciences at NIH External links Protein Structure drugdesign.org Method_for_the_Characterization_of_the_Three-Dimensional_Structure_of_Proteins_Employing_Mass_Spectrometric_Analysis_and_Experimental-Computational_Feedback_Modeling A_Method_for_the_Determination_of_the_Conformation_(Topology)_of_Proteins_Employing_Experimental-Computational_Feedback_Modeling
Protein structure
[ "Chemistry" ]
3,608
[ "Protein structure", "Structural biology" ]
969,136
https://en.wikipedia.org/wiki/Temporal%20Key%20Integrity%20Protocol
Temporal Key Integrity Protocol (TKIP ) is a security protocol used in the IEEE 802.11 wireless networking standard. TKIP was designed by the IEEE 802.11i task group and the Wi-Fi Alliance as an interim solution to replace WEP without requiring the replacement of legacy hardware. This was necessary because the breaking of WEP had left Wi-Fi networks without viable link-layer security, and a solution was required for already deployed hardware. However, TKIP itself is no longer considered secure, and was deprecated in the 2012 revision of the 802.11 standard. Background On October 31, 2002, the Wi-Fi Alliance endorsed TKIP under the name Wi-Fi Protected Access (WPA). The IEEE endorsed the final version of TKIP, along with more robust solutions such as 802.1X and the AES based CCMP, when they published IEEE 802.11i-2004 on 23 July 2004. The Wi-Fi Alliance soon afterwards adopted the full specification under the marketing name WPA2. TKIP was resolved to be deprecated by the IEEE in January 2009. Technical details TKIP and the related WPA standard implement three new security features to address security problems encountered in WEP protected networks. First, TKIP implements a key mixing function that combines the secret root key with the initialization vector before passing it to the RC4 cipher initialization. WEP, in comparison, merely concatenated the initialization vector to the root key, and passed this value to the RC4 routine. This permitted the vast majority of the RC4 based WEP related key attacks. Second, WPA implements a sequence counter to protect against replay attacks. Packets received out of order will be rejected by the access point. Finally, TKIP implements a 64-bit Message Integrity Check (MIC) and re-initializes the sequence number each time when a new key (Temporal Key) is used. To be able to run on legacy WEP hardware with minor upgrades, TKIP uses RC4 as its cipher. TKIP also provides a rekeying mechanism. TKIP ensures that every data packet is sent with a unique encryption key(Interim Key/Temporal Key + Packet Sequence Counter). Key mixing increases the complexity of decoding the keys by giving an attacker substantially less data that has been encrypted using any one key. WPA2 also implements a new message integrity code, MIC. The message integrity check prevents forged packets from being accepted. Under WEP it was possible to alter a packet whose content was known even if it had not been decrypted. Security TKIP uses the same underlying mechanism as WEP, and consequently is vulnerable to a number of similar attacks. The message integrity check, per-packet key hashing, broadcast key rotation, and a sequence counter discourage many attacks. The key mixing function also eliminates the WEP key recovery attacks. Notwithstanding these changes, the weakness of some of these additions have allowed for new, although narrower, attacks. Packet spoofing and decryption TKIP is vulnerable to a MIC key recovery attack that, if successfully executed, permits an attacker to transmit and decrypt arbitrary packets on the network being attacked. The current publicly available TKIP-specific attacks do not reveal the Pairwise Master Key or the Pairwise Temporal Keys. On November 8, 2008, Martin Beck and Erik Tews released a paper detailing how to recover the MIC key and transmit a few packets. This attack was improved by Mathy Vanhoef and Frank Piessens in 2013, where they increase the amount of packets an attacker can transmit, and show how an attacker can also decrypt arbitrary packets. 
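To make the contrast in the key-mixing paragraph above concrete, the sketch below compares the WEP construction (IV simply concatenated with the root key) with a TKIP-style per-packet key derived from the root key, the transmitter address and the packet sequence counter. This is a deliberately simplified stand-in — real TKIP uses a specific two-phase mixing function, not a general-purpose hash — but it shows why every packet ends up with a different RC4 key:

```python
import hashlib
import os

root_key = os.urandom(16)                       # temporal/root key (illustrative)
transmitter_mac = b"\x00\x11\x22\x33\x44\x55"   # illustrative transmitter address

def wep_per_packet_key(iv: bytes) -> bytes:
    # WEP: the 24-bit IV is simply prepended to the root key; the root key
    # bytes are reused unchanged in every packet, which enabled the
    # RC4 related-key attacks mentioned above.
    return iv + root_key

def tkip_style_per_packet_key(sequence_counter: int) -> bytes:
    # TKIP-style idea (simplified stand-in, NOT the real mixing function):
    # mix the root key, transmitter address and 48-bit sequence counter, so
    # no two packets share an RC4 key and out-of-order counters can be rejected.
    tsc = sequence_counter.to_bytes(6, "big")
    return hashlib.sha256(root_key + transmitter_mac + tsc).digest()[:16]

print(wep_per_packet_key(b"\x01\x02\x03").hex())
print(tkip_style_per_packet_key(1).hex())
print(tkip_style_per_packet_key(2).hex())   # a completely different key for the next packet
```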
The basis of the attack is an extension of the WEP chop-chop attack. Because WEP uses a cryptographically insecure checksum mechanism (CRC32), an attacker can guess individual bytes of a packet, and the wireless access point will confirm or deny whether or not the guess is correct. If the guess is correct, the attacker will be able to detect the guess is correct and continue to guess other bytes of the packet. However, unlike the chop-chop attack against a WEP network, the attacker must wait for at least 60 seconds after an incorrect guess (a successful circumvention of the CRC32 mechanism) before continuing the attack. This is because although TKIP continues to use the CRC32 checksum mechanism, it implements an additional MIC code named Michael. If two incorrect Michael MIC codes are received within 60 seconds, the access point will implement countermeasures, meaning it will rekey the TKIP session key, thus changing future keystreams. Accordingly, attacks on TKIP will wait an appropriate amount of time to avoid these countermeasures. Because ARP packets are easily identified by their size, and the vast majority of the contents of this packet would be known to an attacker, the number of bytes an attacker must guess using the above method is rather small (approximately 14 bytes). Beck and Tews estimate recovery of 12 bytes is possible in about 12 minutes on a typical network, which would allow an attacker to transmit 3–7 packets of at most 28 bytes. Vanhoef and Piessens improved this technique by relying on fragmentation, allowing an attacker to transmit arbitrarily many packets, each at most 112 bytes in size. The Vanhoef–Piessens attacks also can be used to decrypt arbitrary packets of the attack's choice. An attacker already has access to the entire ciphertext packet. Upon retrieving the entire plaintext of the same packet, the attacker has access to the keystream of the packet, as well as the MIC code of the session. Using this information the attacker can construct a new packet and transmit it on the network. To circumvent the WPA implemented replay protection, the attacks use QoS channels to transmit these newly constructed packets. An attacker able to transmit these packets may be able to implement any number of attacks, including ARP poisoning attacks, denial of service, and other similar attacks, with no need of being associated with the network. Royal Holloway attack A group of security researchers at the Information Security Group at Royal Holloway, University of London reported a theoretical attack on TKIP which exploits the underlying RC4 encryption mechanism. TKIP uses a similar key structure to WEP with the low 16-bit value of a sequence counter (used to prevent replay attacks) being expanded into the 24-bit "IV", and this sequence counter always increments on every new packet. An attacker can use this key structure to improve existing attacks on RC4. In particular, if the same data is encrypted multiple times, an attacker can learn this information from only 224 connections. While they claim that this attack is on the verge of practicality, only simulations were performed, and the attack has not been demonstrated in practice. NOMORE attack In 2015, security researchers from KU Leuven presented new attacks against RC4 in both TLS and WPA-TKIP. Dubbed the Numerous Occurrence MOnitoring & Recovery Exploit (NOMORE) attack, it is the first attack of its kind that was demonstrated in practice. 
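The packet manipulation described above works because CRC-32 is an affine function of the message over GF(2): XORing a known difference into a packet changes the checksum in a way an attacker can predict without any key, unlike a keyed MIC. The small demonstration below shows that property (it illustrates why CRC-32 cannot serve as a message integrity code; it is not an implementation of the attack itself):

```python
import zlib

def crc32(data: bytes) -> int:
    return zlib.crc32(data) & 0xFFFFFFFF

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packet = b"original packet payload....."
delta = bytearray(len(packet))   # all zeros...
delta[3] ^= 0xFF                 # ...except one flipped byte
delta = bytes(delta)

# Affine property of CRC-32: crc(a XOR d) == crc(a) XOR crc(d) XOR crc(0...0),
# so the checksum of a modified packet is predictable without any secret.
predicted = crc32(packet) ^ crc32(delta) ^ crc32(bytes(len(packet)))
actual = crc32(xor_bytes(packet, delta))
assert predicted == actual
print(f"attacker-predicted CRC of modified packet: {predicted:#010x}")
# A keyed MIC (e.g. an HMAC, or TKIP's Michael code) cannot be predicted this
# way without knowledge of the key.
```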
The attack against WPA-TKIP can be completed within an hour, and allows an attacker to decrypt and inject arbitrary packets. Legacy ZDNet reported on June 18, 2010, that WEP and TKIP would soon be disallowed on Wi-Fi devices by the Wi-Fi Alliance. However, a survey in 2013 showed that TKIP was still in widespread use. The IEEE 802.11n standard prohibits data rates exceeding 54 Mbps if TKIP is used as the Wi-Fi cipher. See also Wireless network interface controller CCMP Wi-Fi Protected Access IEEE 802.11i-2004 References Broken cryptography algorithms Cryptographic protocols Key management Secure communication Wireless networking IEEE 802.11
Temporal Key Integrity Protocol
[ "Technology", "Engineering" ]
1,632
[ "Wireless networking", "Computer networks engineering" ]
969,161
https://en.wikipedia.org/wiki/Leo%20I%20%28dwarf%20galaxy%29
Leo I is a dwarf spheroidal galaxy in the constellation Leo. At a distance of about 820,000 light-years, it is a member of the Local Group of galaxies and is thought to be one of the most distant satellites of the Milky Way galaxy. It was discovered in 1950 by Albert George Wilson on photographic plates of the National Geographic Society – Palomar Observatory Sky Survey, which were taken with the 48-inch Schmidt camera at Palomar Observatory. Visibility Leo I is located only 12 arc minutes from Regulus, the brightest star in the constellation. For that reason, the galaxy is sometimes called the Regulus Dwarf. Scattered light from the star makes studying the galaxy more difficult, and it was not until the 1990s that it was detected visually. The proximity of Regulus and the galaxy's low surface brightness make it a real challenge to observe. Medium-sized amateur telescopes (15 cm or more) and a dark sky appear to be required for a sighting, although reports from April 2013 indicate that Leo I has been sighted with an 11 cm Dobsonian, and even with a refractor as small as 7 cm f/10, under very dark sky conditions. Mass The measurement of radial velocities of some bright red giants in Leo I has made it possible to estimate its mass. It was found to be at least (2.0 ± 1.0) × 10⁷ . The results are not conclusive, and do not exclude or confirm the existence of a large dark matter halo around the galaxy. However, it seems to be certain that the galaxy does not rotate. A kinematic study of Leo I could not place strong constraints on dark matter, but suggested the presence of a black hole of three million solar masses in the center of the galaxy. This would be significant, as it would be the first detection of a central black hole in a dwarf spheroidal galaxy. A black hole of three million solar masses is comparable to the mass of the Milky Way's black hole, Sagittarius A*. However, another study could not confirm this, suggesting at most an intermediate-mass black hole of a few times 10⁵ solar masses. It has been suggested that Leo I is a tidal debris stream in the outer halo of the Milky Way. This hypothesis has not been confirmed, however. Star formation As is typical for a dwarf galaxy, the metallicity of Leo I is very low, only one percent that of the Sun. Gallart et al. (1999) deduced from Hubble Space Telescope observations that the galaxy experienced a major increase (accounting for 70% to 80% of its population) in its star formation rate between 6 Ga and 2 Ga (billion years ago). There is no significant evidence of any stars that are more than 10 Ga old. About 1 Ga ago, star formation in Leo I appears to have dropped suddenly to an almost negligible rate, roughly coinciding with its latest periastron passage of the Milky Way: ram pressure stripping would have removed its gas, cutting off star formation. Some low-level activity may have continued until 200–500 Ma. Leo I is therefore thought to be the youngest dwarf spheroidal satellite galaxy of the Milky Way. In addition, the galaxy may be embedded in a cloud of ionized gas with a mass similar to that of the whole galaxy. References External links SEDS page on Leo I Dwarf spheroidal galaxies Local Group Milky Way Subgroup Leo (constellation) 05470 29488 ?
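The mass quoted above comes from stellar radial velocities; the underlying idea is that the spread of line-of-sight velocities, together with a characteristic radius, sets a dynamical mass of order σ²R/G. The sketch below uses that generic estimator with placeholder inputs (the velocity dispersion, radius and order-unity structure factor are illustrative assumptions, not values from the cited study):

```python
G = 4.301e-3          # gravitational constant in pc * (km/s)^2 per solar mass

def dynamical_mass(sigma_los_km_s, radius_pc, eta=5.0):
    """Order-of-magnitude dynamical mass, M ~ eta * sigma^2 * R / G.
    eta is an order-unity factor that depends on the assumed density
    profile and orbit distribution."""
    return eta * sigma_los_km_s**2 * radius_pc / G

# Placeholder numbers of the right general size for a dwarf spheroidal:
sigma = 9.0      # km/s, hypothetical line-of-sight velocity dispersion
radius = 250.0   # pc, hypothetical characteristic radius
print(f"M ~ {dynamical_mass(sigma, radius):.1e} solar masses")   # ~2e7, same order as the text
```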
Leo I (dwarf galaxy)
[ "Astronomy" ]
703
[ "Leo (constellation)", "Constellations" ]
969,224
https://en.wikipedia.org/wiki/National%20Geographic%20Society%20%E2%80%93%20Palomar%20Observatory%20Sky%20Survey
The National Geographic Society – Palomar Observatory Sky Survey (NGS-POSS, or just POSS, also POSS I) was a major astronomical survey, that took almost 2,000 photographic plates of the night sky. It was conducted at Palomar Observatory, California, United States, and completed by the end of 1958. Observations The photographs were taken with the Samuel Oschin telescope at Palomar Observatory, and the astronomical survey was funded by a grant from the National Geographic Society to the California Institute of Technology. Among the primary minds behind the project were Edwin Hubble, Milton L. Humason, Walter Baade, Ira Sprague Bowen and Rudolph Minkowski. The first photographic plate was exposed on November 11, 1949. 99% of the plates were taken by June 20, 1956, but the final 1% was not completed until December 10, 1958. The survey utilized square photographic plates, covering about 6° of sky per side (approximately 36 square degrees per plate). Each region of the sky was photographed twice, once using a red sensitive Kodak 103a-E plate, and once with a blue sensitive Kodak 103a-O plate. This allowed the color of celestial objects to be recorded. The survey was originally meant to cover the sky from the north celestial pole to -24° declination. This figure specifies the position of the plate center, hence the actual coverage under the original plan would have been to approximately -27°. It was expected that 879 plate pairs would be required. However the Survey was ultimately extended to -30° plate centers, giving irregular coverage to as far south as -34° declination, and utilizing 936 total plate pairs. The limiting magnitude of the survey varied depending on the region of the sky, but is commonly quoted as 22nd magnitude on average. Publication The NGS-POSS was published shortly after the Survey was completed as a collection of 1,872 photographic negative prints each measuring 14" x 14". In the early 1970s there was another "printing" of the Survey, this time on 14" x17" photographic negative prints. The California Institute of Technology bookstore used to sell prints of selected POSS regions. The regions were chosen to support educational exercises and the set was a curriculum teaching tool. In 1962, the Whiteoak Extension, comprising 100 red-sensitive plates extending coverage to -42° declination, was completed and published as identically sized photographic negative prints. The Whiteoak Extension is often found in libraries stored as an appendix or companion to the photographic print edition of the NGS-POSS. This brings the number of prints to 1,972 for most holders of a photographic edition of the NGS-POSS. In 1981, a set of NGS-POSS Transparency Overlay Maps was published by Robert S. Dixon of the Ohio State University. This work is commonly found wherever a photographic print edition of the NGS-POSS is held. Derivative works Many astronomical catalogs are partial derivatives of the NGS-POSS (e.g. Abell Catalog of Planetary Nebulae), which was used for decades for purposes of cataloging and categorizing celestial objects, especially in studies of galaxy morphology. Innumerable astronomical objects were discovered by astronomers studying the NGS-POSS photographs. In 1986, work was begun on a digital version of the NGS-POSS. Eight years later, the scanning of the original NGS-POSS plates was completed. The resulting digital images were compressed and published as the Digitized Sky Survey in 1994. 
The Digitized Sky Survey was made available on a set of 102 CD-ROMs, and can also be queried through several web interfaces. In 1996, an even more compressed version, RealSky, was marketed by the Astronomical Society of the Pacific. In 2001, a catalog identifying over 89 million objects on the NGS-POSS was placed online as part of the Minnesota Automated Plate Scanner Catalog of the POSS I. The catalog was also distributed in a set of 4 DVD-ROMs. The catalog contains accurate sky positions and brightness measurements for all of these objects as well as more esoteric parameters such as ellipticity, position angle, and concentration index. See also Whiteoak extension Southern Sky Survey Palomar Observatory Sky Survey II Two Micron All-Sky Survey Sloan Digital Sky Survey Minnesota Automated Plate Scanner References External links Digitized Sky Survey Minnesota Automated Plate Scanner Catalog of the POSS I Astronomical surveys Astronomical imaging Palomar Observatory 1958 in California 1958 in science Palomar Observatory Sky Survey
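The plate-count arithmetic behind the original plan can be sketched as follows. The calculation assumes idealised, non-overlapping 36-square-degree fields; real plates overlapped, which is one reason the plan called for 879 plate pairs rather than the naive figure this gives.

```python
import math

WHOLE_SKY_SQDEG = 41_253                     # total sky area in square degrees

def sky_area_north_of(dec_deg: float) -> float:
    """Area (sq deg) of the celestial sphere north of a given declination."""
    frac = (1 - math.sin(math.radians(dec_deg))) / 2
    return WHOLE_SKY_SQDEG * frac

area = sky_area_north_of(-27)                # southern edge of the original plan
plates = area / 36                           # ~36 sq deg per plate, no overlap
print(f"{area:.0f} sq deg -> about {plates:.0f} plate fields (plan: 879 pairs)")
```

The naive estimate (roughly 830 fields) falls short of the planned 879 pairs because adjacent plates must overlap to leave no gaps in coverage.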
National Geographic Society – Palomar Observatory Sky Survey
[ "Astronomy" ]
929
[ "Astronomical surveys", "Works about astronomy", "Astronomical objects" ]
969,477
https://en.wikipedia.org/wiki/Preimage%20attack
In cryptography, a preimage attack on cryptographic hash functions tries to find a message that has a specific hash value. A cryptographic hash function should resist attacks on its preimage (set of possible inputs). In the context of attack, there are two types of preimage resistance: preimage resistance: for essentially all pre-specified outputs, it is computationally infeasible to find any input that hashes to that output; i.e., given y, it is difficult to find an x such that h(x) = y. second-preimage resistance: for a specified input, it is computationally infeasible to find another input which produces the same output; i.e., given x, it is difficult to find a second input x′ ≠ x such that h(x) = h(x′). These can be compared with a collision resistance, in which it is computationally infeasible to find any two distinct inputs x, x′ that hash to the same output; i.e., such that h(x) = h(x′). Collision resistance implies second-preimage resistance. Second-preimage resistance implies preimage resistance only if the size of the hash function's inputs can be substantially (e.g., factor 2) larger than the size of the hash function's outputs. Conversely, a second-preimage attack implies a collision attack (trivially, since, in addition to x′, x is already known right from the start). Applied preimage attacks By definition, an ideal hash function is such that the fastest way to compute a first or second preimage is through a brute-force attack. For an n-bit hash, this attack has a time complexity of 2^n, which is considered too high for a typical output size of n = 128 bits. If such complexity is the best that can be achieved by an adversary, then the hash function is considered preimage-resistant. However, there is a general result that quantum computers perform a structured preimage attack in 2^(n/2), which also implies second preimage and thus a collision attack. Faster preimage attacks can be found by cryptanalysing certain hash functions, and are specific to that function. Some significant preimage attacks have already been discovered, but they are not yet practical. If a practical preimage attack is discovered, it would drastically affect many Internet protocols. In this case, "practical" means that it could be executed by an attacker with a reasonable amount of resources. For example, a preimaging attack that costs trillions of dollars and takes decades to preimage one desired hash value or one message is not practical; one that costs a few thousand dollars and takes a few weeks might be very practical. All currently known practical or almost-practical attacks on MD5 and SHA-1 are collision attacks. In general, a collision attack is easier to mount than a preimage attack, as it is not restricted by any set value (any two values can be used to collide). The time complexity of a brute-force collision attack, in contrast to the preimage attack, is only 2^(n/2). Restricted preimage space attacks The computational infeasibility of a first preimage attack on an ideal hash function assumes that the set of possible hash inputs is too large for a brute force search. However if a given hash value is known to have been produced from a set of inputs that is relatively small or is ordered by likelihood in some way, then a brute force search may be effective. Practicality depends on the input set size and the speed or cost of computing the hash function. A common example is the use of hashes to store password validation data for authentication. Rather than store the plaintext of user passwords, an access control system stores a hash of the password.
When a user requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, the thief will only have the hash values, not the passwords. However most users choose passwords in predictable ways and many passwords are short enough that all possible combinations can be tested if fast hashes are used, even if the hash is rated secure against preimage attacks. Special hashes called key derivation functions have been created to slow searches. See Password cracking. For a method to prevent the testing of short passwords see salt (cryptography). See also Birthday attack Cryptographic hash function Hash function security summary Puzzle friendliness Rainbow table Random oracle : Attacks on Cryptographic Hashes in Internet Protocols References Cryptographic attacks
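The passage above explains why password hashes drawn from a small or predictable input set can fall to brute-force search even when the hash itself is preimage-resistant. A minimal sketch of such a restricted-space search, assuming an unsalted fast hash (SHA-256) and a made-up "stolen" digest; the dictionary and the short length limit are illustrative choices, not data from any real breach.

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def sha256_hex(pw: str) -> str:
    # Fast, unsalted hash: exactly the situation the passage warns about.
    return hashlib.sha256(pw.encode()).hexdigest()

# Stolen validation data (illustrative: the SHA-256 of "monkey").
stolen_digest = sha256_hex("monkey")

# The preimage space is restricted: common passwords first, then short strings.
dictionary = ["password", "123456", "letmein", "monkey", "dragon"]
short_guesses = ("".join(t) for n in range(1, 4)
                 for t in product(ascii_lowercase, repeat=n))

for guess in list(dictionary) + list(short_guesses):
    if sha256_hex(guess) == stolen_digest:
        print("recovered password:", guess)
        break
```

A salted, deliberately slow key derivation function (for example via hashlib.pbkdf2_hmac) multiplies the cost of each guess and prevents the same precomputed table from being reused across accounts, which is the defence the section describes.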
Preimage attack
[ "Technology" ]
915
[ "Cryptographic attacks", "Computer security exploits" ]
969,540
https://en.wikipedia.org/wiki/Limiting%20magnitude
In astronomy, limiting magnitude is the faintest apparent magnitude of a celestial body that is detectable or detected by a given instrument. In some cases, limiting magnitude refers to the upper threshold of detection. In more formal uses, limiting magnitude is specified along with the strength of the signal (e.g., "10th magnitude at 20 sigma"). Sometimes limiting magnitude is qualified by the purpose of the instrument (e.g., "10th magnitude for photometry") This statement recognizes that a photometric detector can detect light far fainter than it can reliably measure. The limiting magnitude of an instrument is often cited for ideal conditions, but environmental conditions impose further practical limits. These include weather, moonlight, skyglow, and light pollution. The International Dark-Sky Association has been vocal in championing the cause of reducing skyglow and light pollution. Naked-eye visibility The limiting magnitude for naked eye visibility refers to the faintest stars that can be seen with the unaided eye near the zenith on clear moonless nights. The quantity is most often used as an overall indicator of sky brightness, in that light polluted and humid areas generally have brighter limiting magnitudes than remote desert or high altitude areas. The limiting magnitude will depend on the observer, and will increase with the eye's dark adaptation. On a relatively clear sky, the limiting visibility will be about 6th magnitude. However, the limiting visibility is 7th magnitude for faint stars visible from dark rural areas located from major cities. (See the Bortle scale.) There is even variation within metropolitan areas. For those who live in the immediate suburbs of New York City, the limiting magnitude might be 4.0. This corresponds to roughly 250 visible stars, or one-tenth of the number that is visible under perfectly dark skies. From the boroughs of New York City outside Manhattan (Brooklyn, Queens, Staten Island, and the Bronx), the limiting magnitude might be 3.0, suggesting that at best, only about 50 stars might be seen at any one time. From brightly lit Midtown Manhattan, the limiting magnitude is possibly 2.0, meaning that from the heart of New York City only about 15 stars will be visible at any given time. From relatively dark suburban areas, the limiting magnitude is frequently closer to 5 or somewhat fainter, but from very remote and clear sites, some amateur astronomers can see nearly as faint as 8th magnitude. Many basic observing references quote a limiting magnitude of 6, as this is the approximate limit of star maps which date from before the invention of the telescope. Ability in this area, which requires the use of averted vision, varies substantially from observer to observer, with both youth and experience being beneficial. Limiting magnitude is traditionally estimated by searching for faint stars of known magnitude. In 2013 an app was developed based on Google's Sky Map that allows non-specialists to estimate the limiting magnitude in polluted areas using their phone. Modelling magnitude limits We see stars if they have sufficient contrast against the background sky. A star's brightness (more precisely its illuminance) must exceed the sky's surface brightness (i.e. luminance) by a sufficient amount. Earth's sky is never completely black – even in the absence of light pollution there is a natural airglow that limits what can be seen. The astronomer H.D. 
Curtis reported his naked-eye limit as 6.53, but by looking at stars through a hole in a black screen (i.e. against a totally dark background) was able to see one of magnitude 8.3, and possibly one of 8.9. Naked-eye magnitude limits can be modelled theoretically using laboratory data on human contrast thresholds at various background brightness levels. Andrew Crumey has done this using data from experiments where subjects viewed artificial light sources under controlled conditions. Crumey showed that for a sky background with surface brightness , the visual limit could be expressed as: where is a "field factor" specific to the observer and viewing situation. The very darkest skies have a zenith surface brightness of approximately 22 mag arcsec−2, so it can be seen from the equation that such a sky would be expected to show stars approximately 0.4 mag fainter than one with a surface brightness of 21 mag arcsec−2. Crumey speculated that for most people will lie between about 1.4 and 2.4, with being typical. This would imply at the darkest sites, consistent with the traditionally accepted value, though substantially poorer than what is often claimed by modern amateur observers. To explain the discrepancy, Crumey pointed out that his formula assumed sustained visibility rather than momentary glimpses. He reported that "scintillation can lead to sudden 'flashes' with a brightening of 1 to 2 mag lasting a hundredth of a second." He commented, "The activities of amateur astronomers can lie anywhere between science and recreational sport. If the latter, then the individual's concern with limiting magnitude may be to maximise it, whereas for science a main interest should be consistency of measurement." He recommended that "For the purposes of visibility recommendations aimed at the general public it is preferable to consider typical rather than exceptional performance... Stars should be continuously visible (with direct or averted vision) for some extended period (e.g. at least a second or two) rather than be seen to flash momentarily." Crumey's formula, stated above, is an approximation to a more general one he obtained in photometric units. He obtained other approximations in astronomical units for skies ranging from moderately light polluted to truly dark. If an observer knows their own SQM (i.e. sky brightness measured by a sky quality meter), and establishes their actual limiting magnitude, they can work out their own from these formulae. Crumey recommended that for accurate results, the observer should ascertain the V-magnitude of the faintest steadily visible star to one decimal place, and for highest accuracy should also record the colour index and convert to a standard value. Crumey showed that if the limit is at colour index , then the limit at colour index zero is approximately Some sample values are tabulated below. The general result is that a gain of 1 SQM in sky darkness equates to a gain in magnitude limit of roughly 0.3 to 0.4. Visual magnitude limit with a telescope The aperture (or more formally entrance pupil) of a telescope is larger than the human eye pupil, so collects more light, concentrating it at the exit pupil where the observer's own pupil is (usually) placed. The result is increased illuminance – stars are effectively brightened. At the same time, magnification darkens the background sky (i.e. reduces its luminance). Therefore stars normally invisible to the naked eye become visible in the telescope. 
Further increasing the magnification makes the sky look even darker in the eyepiece, but there is a limit to how far this can be taken. One reason is that as magnification increases, the exit pupil gets smaller, resulting in a poorer image – an effect that can be seen by looking through a small pinhole in daylight. Another reason is that star images are not perfect points of light; atmospheric turbulence creates a blurring effect referred to as seeing. A third reason is that if magnification can be pushed sufficiently high, the sky background will become effectively black, and cannot be darkened any further. This happens at a background surface brightness of approximately 25 mag arcsec−2, where only 'dark light' (neural noise) is perceived. Various authors have stated the limiting magnitude of a telescope with entrance pupil centimetres to be of the form with suggested values for the constant ranging from 6.8 to 8.7. Crumey obtained a formula for as a function of the sky surface brightness, telescope magnification, observer's eye pupil diameter and other parameters including the personal factor discussed above. Choosing parameter values thought typical of normal dark-site observations (e.g. eye pupil 0.7cm and ) he found . Crumey obtained his formula as an approximation to one he derived in photometric units from his general model of human contrast threshold. As an illustration, he calculated limiting magnitude as a function of sky brightness for a 100mm telescope at magnifications ranging from x25 to x200 (with other parameters given typical real-world values). Crumey found that a maximum of 12.7 mag could be achieved if magnification was sufficiently high and the sky sufficiently dark, so that the background in the eyepiece was effectively black. That limit corresponds to = 7.7 in the formula above. More generally, for situations where it is possible to raise a telescope's magnification high enough to make the sky background effectively black, the limiting magnitude is approximated by where and are as stated above, is the observer's pupil diameter in centimetres, and is the telescope transmittance (e.g. 0.75 for a typical reflector). Telescopic limiting magnitudes were investigated empirically by I.S. Bowen at Mount Wilson Observatory in 1947, and Crumey was able to use Bowen's data as a test of the theoretical model. Bowen did not record parameters such as his eye pupil diameter, naked-eye magnitude limit, or the extent of light loss in his telescopes; but because he made observations at a range of magnifications using three telescopes (with apertures 0.33 inch, 6 inch and 60 inch), Crumey was able to construct a system of simultaneous equations from which the remaining parameters could be deduced. Because Crumey used astronomical-unit approximations, and plotted on log axes, the limit "curve" for each telescope consisted of three straight sections, corresponding to exit pupil larger than eye pupil, exit pupil smaller, and sky background effectively black. Bowen's anomalous limit at highest magnification with the 60-inch telescope was due to poor seeing. As well as vindicating the theoretical model, Crumey was able to show from this analysis that the sky brightness at the time of Bowen's observations was approximately 21.27 mag arcsec−2, highlighting the rapid growth of light pollution at Mount Wilson in the second half of the twentieth century. Large observatories Telescopes at large observatories are typically located at sites selected for dark skies. 
They also increase the limiting magnitude by using long integration times on the detector, and by using image-processing techniques to increase the signal to noise ratio. Most 8 to 10 meter class telescopes can detect sources with a visual magnitude of about 27 using a one-hour integration time. Automated astronomical surveys are often limited to around magnitude 20 because of the short exposure time that allows covering a large part of the sky in a night. In a 30 second exposure the 0.7-meter telescope at the Catalina Sky Survey has a limiting magnitude of 19.5. The Zwicky Transient Facility has a limiting magnitude of 20.5, and Pan-STARRS has a limiting magnitude of 24. Even higher limiting magnitudes can be achieved for telescopes above the Earth's atmosphere, such as the Hubble Space Telescope, where the sky brightness due to the atmosphere is not relevant. For orbital telescopes, the background sky brightness is set by the zodiacal light. The Hubble telescope can detect objects as faint as a magnitude of +31.5, and the James Webb Space Telescope (operating in the infrared spectrum) is expected to exceed that. See also Araucaria Project Night sky Dark-sky movement Ricco's law References External links Estimating Limiting Magnitude at NinePlanets.org Telescope Limiting Magnitude Calculator Loss of the Night app for estimating limiting magnitude The Astronomical Magnitude Scale Astronomical Visibility (articles by Andrew Crumey) Observational astronomy
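The telescope rule of thumb cited above, with a constant somewhere between 6.8 and 8.7, is conventionally written as m_lim = C + 5 log10(D) for an aperture D in centimetres; that form is assumed here because the extracted text omits the formula itself. The sketch below evaluates it for a few apertures, using C = 7.7 as the value the article associates with dark skies and high magnification.

```python
import math

def telescope_limiting_magnitude(aperture_cm: float, constant: float = 7.7) -> float:
    """Rule-of-thumb visual limit, assuming the conventional form C + 5*log10(D).

    The constant is the quantity the article says various authors place
    between about 6.8 and 8.7.
    """
    return constant + 5 * math.log10(aperture_cm)

for d_cm in (7, 10, 15, 25, 40):
    print(f"{d_cm:>3} cm aperture -> ~{telescope_limiting_magnitude(d_cm):.1f} mag")
```

With C = 7.7 a 10 cm aperture gives 12.7 mag, matching the figure quoted for the 100 mm example above; sky brightness, magnification, eye pupil and transmittance shift the result, as the fuller model discussed in the text accounts for.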
Limiting magnitude
[ "Astronomy" ]
2,419
[ "Observational astronomy", "Astronomical sub-disciplines" ]
969,582
https://en.wikipedia.org/wiki/Spaghetti%20strap
A spaghetti strap (also called a noodle strap) is a very thin shoulder strap used to support clothing while providing minimal coverage over otherwise bare shoulders. It is commonly used in garments such as swimwear, camisoles, crop tops, brassieres, sundresses, cocktail dresses, and evening gowns, and is so named for its resemblance to the thin pasta strings called spaghetti. Spaghetti straps are fragile and are meant to support only light clothing. Dress codes Spaghetti straps may not meet some dress codes. For example, they are not welcome at Ascot Racecourse, nor in traditionalist societies such as those of Saudi Arabia or Afghanistan. See also Halterneck Sleeveless shirt References 1990s fashion Clubwear Metaphors referring to spaghetti Parts of clothing Undergarments Women's clothing
Spaghetti strap
[ "Technology" ]
158
[ "Components", "Parts of clothing" ]
969,603
https://en.wikipedia.org/wiki/True%20anomaly
In celestial mechanics, true anomaly is an angular parameter that defines the position of a body moving along a Keplerian orbit. It is the angle between the direction of periapsis and the current position of the body, as seen from the main focus of the ellipse (the point around which the object orbits). The true anomaly is usually denoted by the Greek letters ν or θ, or the Latin letter f, and is usually restricted to the range 0–360° (0–2π rad). The true anomaly is one of three angular parameters (anomalies) that define a position along an orbit, the other two being the eccentric anomaly and the mean anomaly. Formulas From state vectors For elliptic orbits, the true anomaly ν can be calculated from orbital state vectors as: ν = arccos( (e · r) / (|e| |r|) ) (if r · v < 0 then replace ν by 2π − ν) where: v is the orbital velocity vector of the orbiting body, e is the eccentricity vector, r is the orbital position vector (segment FP in the figure) of the orbiting body. Circular orbit For circular orbits the true anomaly is undefined, because circular orbits do not have a uniquely determined periapsis. Instead the argument of latitude u is used: u = arccos( (n · r) / (|n| |r|) ) (if rz < 0 then replace u by 2π − u) where: n is a vector pointing towards the ascending node (i.e. the z-component of n is zero). rz is the z-component of the orbital position vector r Circular orbit with zero inclination For circular orbits with zero inclination the argument of latitude is also undefined, because there is no uniquely determined line of nodes. One uses the true longitude l instead: l = arccos( rx / |r| ) (if vx > 0 then replace l by 2π − l) where: rx is the x-component of the orbital position vector r vx is the x-component of the orbital velocity vector v. From the eccentric anomaly The relation between the true anomaly ν and the eccentric anomaly E is: cos ν = (cos E − e) / (1 − e cos E) or using the sine and tangent: sin ν = (√(1 − e²) sin E) / (1 − e cos E), tan ν = sin ν / cos ν = (√(1 − e²) sin E) / (cos E − e) or equivalently: tan(ν/2) = √((1 + e)/(1 − e)) tan(E/2) so ν = 2 arctan( √((1 + e)/(1 − e)) tan(E/2) ). Alternatively, a form of this equation has been derived that avoids numerical issues when the arguments are near ±π, as the two tangents become infinite. Additionally, since E/2 and ν/2 are always in the same quadrant, there will not be any sign problems: ν = E + 2 arctan( (β sin E) / (1 − β cos E) ), where β = e / (1 + √(1 − e²)). From the mean anomaly The true anomaly can be calculated directly from the mean anomaly M via a Fourier expansion involving Bessel functions and the parameter β = e / (1 + √(1 − e²)). Omitting all terms of order e⁴ or higher, it can be written as ν ≈ M + (2e − e³/4) sin M + (5/4) e² sin 2M + (13/12) e³ sin 3M. Note that for reasons of accuracy this approximation is usually limited to orbits where the eccentricity e is small. The expression ν − M is known as the equation of the center, where more details about the expansion are given. Radius from true anomaly The radius r (the distance between the focus of attraction and the orbiting body) is related to the true anomaly by the formula r = a (1 − e²) / (1 + e cos ν), where a is the orbit's semi-major axis. In celestial mechanics, projective anomaly is an angular parameter that defines the position of a body moving along a Keplerian orbit. It is the angle between the direction of periapsis and the current position of the body in the projective space. The projective anomaly is usually restricted to the range 0–360° (0–2π rad). The projective anomaly is one of four angular parameters (anomalies) that define a position along an orbit, the other three being the eccentric anomaly, the true anomaly and the mean anomaly. In projective geometry, the circle, ellipse, parabola and hyperbola are treated as the same kind of quadratic curve.
projective parameters and projective anomaly An orbit type is classified by two projective parameters as follows: circular orbit, elliptic orbit, parabolic orbit, hyperbolic orbit, linear orbit, or imaginary orbit, where the parameters are expressed in terms of the semi-major axis a, the eccentricity e, the perihelion distance, and the aphelion distance. The position and heliocentric distance of the planet can be calculated as functions of the projective anomaly. Kepler's equation The projective anomaly can be calculated from the eccentric anomaly, with a separate case for each orbit type; these relations are called Kepler's equation. Generalized anomaly For an arbitrary constant, a generalized anomaly can be defined that relates the other anomalies; the eccentric anomaly, the true anomaly, and the projective anomaly are special cases corresponding to particular values of that constant. Sato, I., "A New Anomaly of Keplerian Motion", Astronomical Journal Vol.116, pp.2038-3039, (1997) See also Two body problem Mean anomaly Eccentric anomaly Kepler's equation Projective geometry Kepler's laws of planetary motion Projective anomaly Ellipse Hyperbola References Further reading Murray, C. D. & Dermott, S. F., 1999, Solar System Dynamics, Cambridge University Press, Cambridge. Plummer, H. C., 1960, An Introductory Treatise on Dynamical Astronomy, Dover Publications, New York. (Reprint of the 1918 Cambridge University Press edition.) External links Federal Aviation Administration - Describing Orbits Orbits Angle Equations of astronomy
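A short numerical sketch tying the elliptic-orbit relations above together: solve Kepler's equation M = E − e sin E for the eccentric anomaly with Newton's method, convert to the true anomaly with the half-angle relation, and evaluate the radius. The function names and the sample eccentricity are illustrative choices, not part of the article.

```python
import math

def eccentric_anomaly(M: float, e: float, tol: float = 1e-12) -> float:
    """Solve Kepler's equation M = E - e*sin(E) by Newton's method."""
    E = M if e < 0.8 else math.pi
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def true_anomaly(M: float, e: float) -> float:
    """True anomaly from mean anomaly via tan(nu/2) = sqrt((1+e)/(1-e)) tan(E/2)."""
    E = eccentric_anomaly(M, e)
    return 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))

# Example: e = 0.2, one quarter of the period after periapsis (M = pi/2).
e, a = 0.2, 1.0
nu = true_anomaly(math.pi / 2, e)
r = a * (1 - e**2) / (1 + e * math.cos(nu))
print(f"true anomaly = {math.degrees(nu):.2f} deg, radius = {r:.4f} a")
```

Using atan2 on the half-angle components keeps the result in the correct quadrant, which is the same point the text makes about E/2 and ν/2 always lying in the same quadrant.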
True anomaly
[ "Physics", "Astronomy" ]
978
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Concepts in astronomy", "Equations of astronomy", "Wikipedia categories named after physical quantities", "Angle" ]
969,624
https://en.wikipedia.org/wiki/Collision%20attack
In cryptography, a collision attack on a cryptographic hash tries to find two inputs producing the same hash value, i.e. a hash collision. This is in contrast to a preimage attack where a specific target hash value is specified. There are roughly two types of collision attacks: Classical collision attack Find two different messages m1 and m2 such that hash(m1) = hash(m2). More generally: Chosen-prefix collision attack Given two different prefixes p1 and p2, find two suffixes s1 and s2 such that hash(p1 ∥ s1) = hash(p2 ∥ s2), where ∥ denotes the concatenation operation. Classical collision attack Much like symmetric-key ciphers are vulnerable to brute force attacks, every cryptographic hash function is inherently vulnerable to collisions using a birthday attack. Due to the birthday problem, these attacks are much faster than a brute force would be. A hash of n bits can be broken in 2n/2 time steps (evaluations of the hash function). Mathematically stated, a collision attack finds two different messages m1 and m2, such that hash(m1) = hash(m2). In a classical collision attack, the attacker has no control over the content of either message, but they are arbitrarily chosen by the algorithm. More efficient attacks are possible by employing cryptanalysis to specific hash functions. When a collision attack is discovered and is found to be faster than a birthday attack, a hash function is often denounced as "broken". The NIST hash function competition was largely induced by published collision attacks against two very commonly used hash functions, MD5 and SHA-1. The collision attacks against MD5 have improved so much that, as of 2007, it takes just a few seconds on a regular computer. Hash collisions created this way are usually constant length and largely unstructured, so cannot directly be applied to attack widespread document formats or protocols. However, workarounds are possible by abusing dynamic constructs present in many formats. In this way, two documents would be created which are as similar as possible in order to have the same hash value. One document would be shown to an authority to be signed, and then the signature could be copied to the other file. Such a malicious document would contain two different messages in the same document, but conditionally display one or the other through subtle changes to the file: Some document formats like PostScript, or macros in Microsoft Word, have conditional constructs. (if-then-else) that allow testing whether a location in the file has one value or another in order to control what is displayed. TIFF files can contain cropped images, with a different part of an image being displayed without affecting the hash value. PDF files are vulnerable to collision attacks by using color value (such that text of one message is displayed with a white color that blends into the background, and text of the other message is displayed with a dark color) which can then be altered to change the signed document's content. Chosen-prefix collision attack An extension of the collision attack is the chosen-prefix collision attack, which is specific to Merkle–Damgård hash functions. In this case, the attacker can choose two arbitrarily different documents, and then append different calculated values that result in the whole documents having an equal hash value. This attack is normally harder, a hash of n bits can be broken in 2(n/2)+1 time steps, but is much more powerful than a classical collision attack. 
Mathematically stated, given two different prefixes p1, p2, the attack finds two suffixes s1 and s2 such that hash(p1 ∥ s1) = hash(p2 ∥ s2) (where ∥ is the concatenation operation). More efficient attacks are also possible by applying cryptanalysis to specific hash functions. In 2007, a chosen-prefix collision attack was found against MD5, requiring roughly 2^50 evaluations of the MD5 function. The paper also demonstrates two X.509 certificates for different domain names, with colliding hash values. This means that a certificate authority could be asked to sign a certificate for one domain, and then that certificate (specifically its signature) could be used to create a new rogue certificate to impersonate another domain. A real-world collision attack was published in December 2008 when a group of security researchers published a forged X.509 signing certificate that could be used to impersonate a certificate authority, taking advantage of a prefix collision attack against the MD5 hash function. This meant that an attacker could impersonate any SSL-secured website as a man-in-the-middle, thereby subverting the certificate validation built into every web browser to protect electronic commerce. The rogue certificate may not be revocable by real authorities, and could also have an arbitrary forged expiry time. Even though MD5 was known to be very weak in 2004, certificate authorities were still willing to sign MD5-verified certificates in December 2008, and at least one Microsoft code-signing certificate was still using MD5 in May 2012. The Flame malware successfully used a new variation of a chosen-prefix collision attack to spoof code signing of its components by a Microsoft root certificate that still used the compromised MD5 algorithm. In 2019, researchers found a chosen-prefix collision attack against SHA-1 with computing complexity between 2^66.9 and 2^69.4 and cost less than 100,000 US dollars. In 2020, researchers reduced the complexity of a chosen-prefix collision attack against SHA-1 to 2^63.4. Attack scenarios Many applications of cryptographic hash functions do not rely on collision resistance, thus collision attacks do not affect their security. For example, HMACs are not vulnerable. For the attack to be useful, the attacker must be in control of the input to the hash function. Digital signatures Because digital signature algorithms cannot sign a large amount of data efficiently, most implementations use a hash function to reduce ("compress") the amount of data that needs to be signed down to a constant size. Digital signature schemes often become vulnerable to hash collisions as soon as the underlying hash function is practically broken; techniques like randomized (salted) hashing will buy extra time by requiring the harder preimage attack. The usual attack scenario goes like this: Mallory creates two different documents A and B that have an identical hash value, i.e., a collision. Mallory seeks to deceive Bob into accepting document B, ostensibly from Alice. Mallory sends document A to Alice, who agrees to what the document says, signs its hash, and sends the signature to Mallory. Mallory attaches the signature from document A to document B. Mallory then sends the signature and document B to Bob, claiming that Alice signed B. Because the digital signature matches document B's hash, Bob's software is unable to detect the substitution. In 2008, researchers used a chosen-prefix collision attack against MD5 using this scenario, to produce a rogue certificate authority certificate.
They created two versions of a TLS public key certificate, one of which appeared legitimate and was submitted for signing by the RapidSSL certificate authority. The second version, which had the same MD5 hash, contained flags which signal web browsers to accept it as a legitimate authority for issuing arbitrary other certificates. Hash flooding Hash flooding (also known as HashDoS) is a denial of service attack that uses hash collisions to exploit the worst-case (linear probe) runtime of hash table lookups. It was originally described in 2003. To execute such an attack, the attacker sends the server multiple pieces of data that hash to the same value and then tries to get the server to perform slow lookups. As the main focus of hash functions used in hash tables was speed instead of security, most major programming languages were affected, with new vulnerabilities of this class still showing up a decade after the original presentation. To prevent hash flooding without making the hash function overly complex, newer keyed hash functions are introduced, with the security objective that collisions are hard to find as long as the key is unknown. They may be slower than previous hashes, but are still much easier to compute than cryptographic hashes. As of 2021, Jean-Philippe Aumasson and Daniel J. Bernstein's SipHash (2012) is the most widely-used hash function in this class. (Non-keyed "simple" hashes remain safe to use as long as the application's hash table is not controllable from the outside.) It is possible to perform an analogous attack to fill up Bloom filters using a (partial) preimage attack. See also Puzzle friendliness References External links "Meaningful Collisions", attack scenarios for exploiting cryptographic hash collisions Fast MD5 and MD4 Collision Generators - Bishop Fox (formerly Stach & Liu). Create MD4 and MD5 hash collisions using groundbreaking new code that improves upon the techniques originally developed by Xiaoyun Wang. Using a 1.6 GHz Pentium 4, MD5 collisions can be generated in an average of 45 minutes, and MD4 collisions can be generated in an average of 5 seconds. Originally released on 22Jun2006. Cryptographic attacks Cryptographic hash functions
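The quadratic advantage described above (a collision in roughly 2^(n/2) hash evaluations, versus 2^n for a brute-force preimage) is easy to demonstrate on a deliberately truncated hash. A minimal sketch, assuming a toy 32-bit hash built by truncating SHA-256; the function names and the random 8-byte messages are illustrative choices, not part of any published attack, and full-length SHA-256 remains far out of reach of this approach.

```python
import hashlib
import os

def truncated_hash(msg: bytes, n_bits: int = 32) -> int:
    """First n_bits of SHA-256, standing in for an n-bit hash."""
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - n_bits)

def birthday_collision(n_bits: int = 32):
    """Find two distinct messages with equal truncated hashes.

    Expected work is on the order of 2**(n_bits / 2) hash evaluations,
    far below the 2**n_bits cost of a brute-force preimage search.
    """
    seen = {}
    while True:
        msg = os.urandom(8)
        h = truncated_hash(msg, n_bits)
        if h in seen and seen[h] != msg:
            return seen[h], msg
        seen[h] = msg

m1, m2 = birthday_collision(32)
print(m1.hex(), m2.hex(), "collide on the truncated hash")
```

For a 32-bit truncation this typically finishes after tens of thousands of evaluations, illustrating why an n-bit hash offers only about n/2 bits of collision resistance.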
Collision attack
[ "Technology" ]
1,894
[ "Cryptographic attacks", "Computer security exploits" ]
969,684
https://en.wikipedia.org/wiki/Herd
A herd is a social group of certain animals of the same species, either wild or domestic. The form of collective animal behavior associated with this is called herding. These animals are known as gregarious animals. The term herd is generally applied to mammals, and most particularly to the grazing ungulates that classically display this behaviour. Different terms are used for similar groupings in other species; in the case of birds, for example, the word is flocking, but flock may also be used for mammals, particularly sheep or goats. Large groups of carnivores are usually called packs, and in nature a herd is classically subject to predation from pack hunters. Special collective nouns may be used for particular taxa (for example a flock of geese, if not in flight, is sometimes called a gaggle) but for theoretical discussions of behavioural ecology, the generic term herd can be used for all such kinds of assemblage. The word herd, as a noun, can also refer to one who controls, possesses and has care for such groups of animals when they are domesticated. Examples of herds in this sense include shepherds (who tend to sheep), goatherds (who tend to goats), and cowherds (who tend to cattle). The structure and size of herds When an association of animals (or, by extension, people) is described as a herd, the implication is that the group tends to act together (for example, all moving in the same direction at a given time), but that this does not occur as a result of planning or coordination. Rather, each individual is choosing behaviour in correspondence with most other members, possibly through imitation or possibly because all are responding to the same external circumstances. A herd can be contrasted with a coordinated group where individuals have distinct roles. Many human groupings, such as army detachments or sports teams, show such coordination and differentiation of roles, but so do some animal groupings such as those of eusocial insects, which are coordinated through pheromones and other forms of animal communication. A herd is, by definition, relatively unstructured. However, there may be two or a few animals which tend to be imitated by the bulk of the herd more than others. An animal in this role is called a "control animal", since its behaviour will predict that of the herd as a whole. It cannot be assumed, however, that the control animal is deliberately taking a leadership role; control animals are not necessarily socially dominant in conflict situations, though they often are. Group size is an important characteristic of the social environment of gregarious species. Costs and benefits of animals in groups The reason why animals form herds can not always be stated easily, since the underlying mechanisms are diverse and complex. Understanding the social behaviour of animals and the formation of groups has been a fundamental goal in the field of sociobiology and behavioural ecology. Theoretical framework is focused on the costs and benefits associated with living in groups in terms of the fitness of each individual compared to living solitarily. Living in groups evolved independently multiple times in various taxa and can only occur if its benefits outweigh the costs within an evolutionary timescale. Thus, animals form groups whenever this increases their fitness compared to living in solitary. The following includes an outline about some of the major effects determining the trade-offs for living in groups. 
Dilution effect Perhaps the most studied effect of herds is the so-called dilution effect. The key argument is that the risk of being preyed upon for any particular individual is smaller within a larger group, strictly because a predator has to decide which individual to attack. Although the dilution effect is influenced by so-called selfish herding, it is primarily a direct effect of group size instead of the position within a herd. Greater group sizes result in higher visibility and detection rates for predators, but this relation is not directly proportional and saturates at some point, while the risk of being attacked for an individual is directly proportional to group size. Thus, the net effect for an individual in a group concerning its predation risk is beneficial. Whenever groups, such as shoals of fish, synchronize their movements, it becomes harder for predators to focus on particular individuals. However, animals that are weak and slower or on the periphery are preferred by predators, so that certain positions within the group are better than others (see selfish herd theory). For fit animals, being in a group with such vulnerable individuals may thus decrease the chance of being preyed upon even further. Collective vigilance The effect of collective vigilance in social groups has been widely studied within the framework of optimal foraging theory and animal decision making. While animals under the risk of predation are feeding or resting, they have to stay vigilant and watch for predators. It could be shown in many studies (especially for birds) that with increase in group size individual animals are less attentive, while the overall vigilance suffers little (many eyes effect). This means food intake and other activities related to fitness are optimized in terms of time allocation when animals stay in groups. However, some details about this concepts remain unclear. Being the first to detect predators and react accordingly can be advantageous, implying individuals may not fully be able to rely only on the group. Moreover, the competition for food can lead to the misuse of warning calls, as was observed for great tits: If food is scarce or monopolized by dominant birds, other birds (mainly subordinates) use antipredatory warning calls to induce an interruption of feeding and gain access to resources. Another study concerning a flock of geese suggested that the benefits of lower vigilance concerned only those in central positions, due to the fact that the possibly more vulnerable individuals in the flock's periphery have a greater need to stay attentive. This implies that the decrease in overall vigilance arises simply because the geese on the edge of the flock comprise a smaller group when groups get large. A special case of collective vigilance in groups is that of sentinels. Individuals take turn in keeping guard, while all others participate in other activities. Thus, the strength of social bonds and trust within these groups have to be much higher than in the former cases. Foraging Hunting together enables group-living predators, such as wolves and wild dogs, to catch large prey, which they are unable to achieve when hunting alone. Working together significantly improves foraging efficiency, meaning the net energy gain of each individual is increased when animals are feeding collectively. As an example, a group of Spinner dolphins is able to corral fish into a smaller volume, which makes catching them easier, as there is less opportunity for the fish to escape. 
Furthermore, large groups are able to monopolize resources and defend them against solitary animals or smaller groups of the same or different species. It has been shown that larger groups of lions tend to be more successful in protecting prey from hyenas than smaller ones. Being able to communicate the location and type of food to other group members may increase the chance for each individual to find profitable food sources, a mechanism which is known to be used by both bees (via a Waggle dance) and several species of birds (using specific vocalisations to indicate food). In terms of Optimal foraging theory, animals always try to maximize their net energy gain when feeding, because this is positively correlated to their fitness. If their energy requirement is fixed and additional energy is not increasing fitness, they will use as little time for foraging as possible (time minimizers). If on the other hand time allocated to foraging is fixed, an animal's gain in fitness is related to the quantity and quality of resources it feeds on (Energy maximizers). Since foraging may be energetically costly (searching, hunting, handling, etc.) and may induce risk of predation, animals in groups may have an advantage, since their combined effort in locating and handling food will reduce time needed to forage sufficiently. Thus, animals in groups may have shorter searching and handling times as well as an increased chance of finding (or monopolizing) highly profitable food, which makes foraging in groups beneficial for time minimizers and energy maximizers alike. The obvious disadvantage of foraging in groups is (scramble or direct) competition with other group members. In general, it is clear that the amount of resources available for each individual decreases with group size. If the resource availability is critical, competition within the group may get so intense, that animals no longer experience benefits from living in groups. However, only the relative importance of within- and between-group competition determines the optimal group size and ultimately the decision of each individual whether or not to stay in the group. Diseases and parasites Since animals in groups stay near each other and interact frequently, infectious diseases and parasites spread much easier between them compared to solitary animals. Studies have shown a positive correlation between herd size and intensity of infections, but the extent to which this sometimes drastic reduction in fitness governs group size and structure is still unclear. However, some animals have found countermeasures such as propolis in beehives or grooming in social animals. Energetic advantages Staying together in groups often brings energetic advantages. Birds flying together in a flock use aerodynamic effects to reduce energetic costs, e.g. by positioning themselves in a V-shaped formation. A similar effect can be observed when fish swim together in fixed formations. Another benefit of group living occurs when climate is harsh and cold: By staying close together animals experience better thermoregulation, because their overall surface to volume ratio is reduced. Consequently, maintaining adequate body temperatures becomes less energetically costly. Antipredatory behaviour The collective force of a group mobbing predators can reduce risk of predation significantly. Flocks of raven are able to actively defend themselves against eagles and baboons collectively mob lions, which is impossible for individuals alone. 
This behaviour may be based on reciprocal altruism, meaning animals are more likely to help each other if their conspecifics did so earlier. Mating Animals living in groups are more likely to find mates than those living in solitary and are also able to compare potential partners in order to optimize genetic quality for their offspring. Domestic herds Domestic animal herds are assembled by humans for practicality in raising them and controlling them. Their behaviour may be quite different from that of wild herds of the same or related species, since both their composition (in terms of the distribution of age and sex within the herd) and their history (in terms of when and how the individuals joined the herd) are likely to be very different. Human parallels The term herd is also applied metaphorically to human beings in social psychology, with the concept of herd behaviour. However both the term and concepts that underlie its use are controversial. The term has acquired a semi-technical usage in behavioral finance to describe the largest group of market investors or market speculators who tend to "move with the market", or "follow the general market trend". This is at least a plausible example of genuine herding, though according to some researchers it results from rational decisions through processes such as information cascade and rational expectations. Other researchers, however, ascribe it to non-rational process such as mimicry, fear and greed contagion. "Contrarians" or contrarian investors are those who deliberately choose to invest or speculate counter to the "herd". See also Literature Krause, J., & Ruxton, G. D. (2002). Living in groups. Oxford: Oxford University Press. References Ethology Group processes Herding
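The dilution and many-eyes arguments above can be made concrete with a toy model: assume the chance that a predator detects the group rises with group size but saturates, while the chance that any particular member is the victim of a given attack falls as 1/N. The saturating functional form and the constant k below are illustrative assumptions, not parameters taken from the studies cited.

```python
# Toy dilution-effect model: per-individual predation risk versus group size.
def detection_probability(n: int, k: float = 5.0) -> float:
    # Assumed saturating form; k sets how quickly detection levels off.
    return n / (n + k)

def per_individual_risk(n: int) -> float:
    # Probability the group is detected times the chance of being the victim.
    return detection_probability(n) * (1.0 / n)

for n in (1, 2, 5, 10, 50, 100):
    print(f"group size {n:>3}: relative risk {per_individual_risk(n):.3f}")
```

Under these assumptions the per-individual risk falls monotonically with group size, even though larger groups are more conspicuous, which is the net benefit the dilution argument describes; fuller models add position within the group (the selfish herd) and within-group competition for food.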
Herd
[ "Biology" ]
2,364
[ "Behavioural sciences", "Ethology", "Behavior" ]
969,736
https://en.wikipedia.org/wiki/Livestock%20branding
Livestock branding is a technique for marking livestock so as to identify the owner. Originally, livestock branding only referred to hot branding large stock with a branding iron, though the term now includes alternative techniques. Other forms of livestock identification include freeze branding, inner lip or ear tattoos, earmarking, ear tagging, and radio-frequency identification (RFID), which is tagging with a microchip implant. The semi-permanent paint markings used to identify sheep are called a paint or color brand. In the American West, branding evolved into a complex marking system still in use today. History The act of marking livestock with fire-heated marks to identify ownership has origins in ancient times, with use dating back to the ancient Egyptians around 2,700 BCE. Among the ancient Romans, the symbols used for brands were sometimes chosen as part of a magic spell aimed at protecting animals from harm. In English lexicon, the word "brand", common to most Germanic languages (from which root also comes "burn", cf. German Brand "burning, fire"), originally meant anything hot or burning, such as a "firebrand", a burning stick. By the European Middle Ages, it commonly identified the process of burning a mark into stock animals with thick hides, such as cattle, so as to identify ownership under animus revertendi. The practice became particularly widespread in nations with large cattle grazing regions, such as Spain. These European customs were imported to the Americas and were further refined by the vaquero tradition in what today is the southwestern United States and northern Mexico. In the American West, a "branding iron" consisted of an iron rod with a simple symbol or mark, which cowboys heated in a fire. After the branding iron turned red hot, the cowboy pressed the branding iron against the hide of the cow. The unique brand meant that cattle owned by multiple ranches could then graze freely together on the open range. Cowboys could then separate the cattle at "roundup" time for driving to market. Cattle rustlers using running irons were ingenious in changing brands. The most famous brand change involved the making of the X I T brand into the Star-Cross brand, a star with a cross inside. Brands became so numerous that it became necessary to record them in books that the ranchers could carry in their pockets. Laws were passed requiring the registration of brands, and the inspection of cattle driven through various territories. Penalties were imposed on those who failed to obtain a bill of sale with a list of brands on the animals purchased. From the Americas, many cattle branding traditions and techniques spread to Australia, where a distinct set of traditions and techniques developed. Livestock branding has been practiced in Australia since 1866, but after 1897 owners had to register their brands. These fire and paint brands could not then be duplicated legally. Modern use Free-range or open-range grazing is less common today than in the past. However, branding still has its uses. The main purpose is in proving ownership of lost or stolen animals. Many western US states have strict laws regarding brands, including brand registration, and require brand inspections. In many cases, a brand on an animal is considered prima facie proof of ownership. (See Brand Book) In the hides and leather industry, brands are treated as a defect, and can diminish the value of hides. This industry has a number of traditional terms relating to the type of brand on a hide. 
"Colorado branded" (slang "Collie") refers to placement of a brand on the side of an animal, although this does not necessarily indicate the animal is from Colorado. "Butt branded" refers to a hide which has had a brand placed on the portion of the skin covering the rump area of the animal. A native hide is one without a brand. Outside of the livestock industry, hot branding was used in 2003 by tortoise researchers to provide a permanent means of unique identification of individual Galapagos tortoises being studied. In this case, the brand was applied to the rear of the tortoises' shells. This technique has since been superseded by implanted PIT microchips (combined with ID numbers painted on the shell). Methods The traditional cowboy or stockman captured and secured an animal for branding by roping it, laying it over on the ground, tying its legs together, and applying a branding iron that had been heated in a fire. Modern ranch practice has moved toward use of chutes where animals can be run into a confined area and safely secured while the brand is applied. Two types of restraint are the cattle crush or squeeze chute (for larger cattle), which may close on either side of a standing animal, or a branding cradle, where calves are caught in a cradle which is rotated so that the animal is lying on its side. Bronco branding is an old method of catching cleanskin (unbranded) cattle on Top End cattle stations for branding in Australia. A heavy horse, usually with some draught horse bloodlines and typically fitted with a harness horse collar, is used to rope the selected calf. The calf is then pulled up to several sloping topped panels and a post constructed for the purpose in the centre of the yard. The unmounted stockmen then apply leg ropes and pull it to the ground to be branded, earmarked and castrated (if a bull) there. With the advent of portable cradles, this method of branding has been mostly phased out on stations. However, there are now quite a few bronco branding competitions at rodeos and campdrafting days, etc. Some ranches still heat branding irons in a wood or coal fire; others use an electric branding iron or electric sources to heat a traditional iron. Gas-fired branding iron heaters are quite popular in Australia, as iron temperatures can be regulated and there is not the heat of a nearby fire. Regardless of heating method, the iron is only applied for the amount of time needed to remove all hair and create a permanent mark. Branding irons are applied for a longer time to cattle than to horses, due to the differing thicknesses of their skins. If a brand is applied too long, it can damage the skin too deeply, thus requiring treatment for potential infection and longer-term healing. Branding wet stock may result in the smudging of the brand. Brand identification may be difficult on long-haired animals, and may necessitate clipping of the area to view the brand. Horses may also be branded on their hooves, but this is not a permanent mark, so needs to be redone about every six months. In the military, some brands indicated the horses' army and squadron numbers. These identification numbers were used on British army horses so dead horses on the battlefield could be identified. The hooves of the dead horses were then removed and returned to the Horse Guards with a request for replacements. This method was used to prevent fraudulent requests for horses. Merino rams and bulls are sometimes firebranded on their horns for permanent individual identification. 
Temporary branding Some types of identification are not permanent. Temporary branding may be achieved by heat branding so that the hair is burned, but the skin is not damaged. Because this persists only until the animal sheds its hair, it is not considered a properly applied brand. Other temporary, but for a time, persistent marking methods include tagging, and nose printing. Tagging usually uses numbering system as a way to identify animals in a herd. It does this by putting together a letter and number to represent the year born and the birth order, then the tag is either attached to the animal’s ear or to some form of neck collar. Nose printing or use of indelible ink elsewhere on the skin and hair is used at some farms, sales and exhibitions. This method is like fingerprinting: it uses ink and cannot be modified. As hair or skin cells shed, the mark eventually fades. Microchip identification and lip or ear tattooing are generally permanent, though microchips can be removed and tattoos sometimes fade over many years. Microchips are used on many animals, and are particularly popular with horses, as the chip leaves no external marks. Tattooing the inside of the upper lip of horses is required for many racehorses, though in some localities, microchips are beginning to replace tattoos. Temporary branding is particularly common for sheep and goats. Ear marking or tattooing are usually used on goats under eight weeks of age because regular branding would harm them. Techniques similar to these are also used on sheep. Temporary branding on sheep is done with paint, crayons, spray markers, chalk, and much more. These can last for up to several months at a time. The sheep's identification number is painted or sprayed with an indelible but non-toxic paint designed for the purpose onto their sides or back. Freeze branding In stark contrast to traditional hot-iron branding, freeze branding uses an iron that has been chilled with a coolant such as dry ice or liquid nitrogen. Instead of burning a scar into the animal's skin, a freeze brand damages the pigment-producing hair cells, causing the animal's hair to grow back white within the branded area. This white-on-dark pattern is prized by cattle ranchers as its contrast allows some range work to be conducted with binoculars rather than individual visits to every animal. Scientists also value the technique for keeping tabs on studied wildlife without having to approach to read, for example, an ear tag. To apply a freeze brand the hair coat of the animal is first shaved very closely so that bare skin is exposed. Then the frozen iron is pressed to the animal's bare skin for a period of time that varies with both the species of animal and the color of its hair coat. Shorter times are used on dark-colored animals, as this causes follicle melanocyte death and hence permanent pigment loss to the hair when it regrows. Longer timessometimes as little as five additional secondsare needed for animals with white hair coats. In these cases the brand is applied for long enough to outright kill the cells of the growth follicle, preventing them from regrowing new hair filaments and leaving the animal permanently bald in the branded area. The somewhat darker epidermis then contrasts well with a pale animal's coat. Horses are frequently freeze-branded. Neither hogs nor birds can presently be freeze branded successfully, as their hair pigment cells are better protected. 
Other downsides of freeze branding include its time-consuming preparation, greater expense in material and time, low tolerance for sloppy application, long wait until success (sometimes as much as five months) and absence of legal grounding in some American states. When an animal grows a long hair coat the freeze brand is still visible, but its details are not always legible. Thus it is sometimes necessary to shave or closely trim the hair to obtain a sharper view of the freeze brand. Besides livestock, freeze branding can also be used on wild, hairless animals such as dolphins for purposes of tracking individuals. The brand appears as a white mark on their bare skin and can last for decades. Immediately after the freeze branding iron is removed from the skin, an indented outline of the brand will be visible. Within seconds, however, the outline will disappear, and within several minutes after that, the brand outline will reappear as swollen, puffy skin. Once the swelling subsides, for a short time the brand will be difficult or impossible to see, but in a few days the branded skin will begin to flake, and within three to four weeks the brand will begin to take on its permanent appearance. Horse branding regulations In Australia, all Arabians, Part-Bred Arabians, Australian Stock Horses, Quarter Horses and Thoroughbreds must be branded with an owner brand on the near (left) shoulder and an individual foaling drop number (in relation to the other foals) over the foaling year number on the off shoulder. In Queensland, these three brands may be placed on the near shoulder in the above order. Stock Horse and Quarter Horse classification brands are placed on the hindquarters by the classifiers. Thoroughbreds and Standardbreds in Australia and New Zealand are freeze branded. Standardbred brands are in the form of the Alpha Angle Branding System (AABS), which the United States also uses. In the United States, branding of horses is not generally mandated by the government; however, there are a few exceptions: captured Mustangs made available for adoption by the Bureau of Land Management (BLM) are freeze branded on the neck, usually with the AABS or with numbers, for identification. Horses that test positive for equine infectious anemia and are quarantined for life rather than euthanized will be freeze branded for permanent identification. Racehorses of any breed are usually required by state racing commissions to have a lip tattoo, to be identified at the track. Some breed associations have, at times, offered freeze branding as either a requirement for registration or simply as an optional benefit to members, and individual horse owners may choose branding as a means by which to permanently identify their animals. As of 2011, the issue of whether to mandate horses be implanted with RFID microchips under the National Animal Identification System generated considerable controversy in the United States. Symbols and terminology Most brands in the United States include capital letters or numerals, often combined with other symbols such as a slash, circle, half circle, cross, or bar. Brands of this type have a specialized language for "calling" the brand. Some owners prefer to use simple pictures; these brands are called using a short description of the picture (e.g., "rising sun"). Reading a brand aloud is referred to as "calling the brand". Brands are called from left to right, top to bottom, and when one character encloses another, from outside to inside. 
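The calling rules above can be illustrated with a short sketch. The following Python example is a rough illustration rather than any standard notation: the nested-dictionary representation, the placement rule, and the example brands are assumptions made for this sketch, and named forms such as "Rocking" or "Swinging" (which are read modifier-first even though the quarter circle sits below the symbol) are deliberately not covered.

def call_brand(brand):
    # A brand is either a plain string ("M", "rising sun") or a dict of the form
    #   {"symbol": <inner brand>, "modifier": "bar", "placement": "above"}
    # Placements "above", "before" and "enclosing" read the modifier first
    # ("Bar M", "Circle C"); "below" and "after" read it last ("M Bar").
    if isinstance(brand, str):
        return brand
    inner = call_brand(brand["symbol"])          # enclosed characters: outside to inside
    modifier = brand["modifier"].title()
    if brand["placement"] in ("above", "before", "enclosing"):
        return f"{modifier} {inner}"
    return f"{inner} {modifier}"

examples = [
    {"symbol": "M", "modifier": "bar", "placement": "above"},                      # Bar M
    {"symbol": "M", "modifier": "bar", "placement": "below"},                      # M Bar
    {"symbol": "C", "modifier": "circle", "placement": "enclosing"},               # Circle C
    {"symbol": {"symbol": "7", "modifier": "circle", "placement": "enclosing"},
     "modifier": "bar", "placement": "above"},                                     # Bar Circle 7
]
for brand in examples:
    print(call_brand(brand))

Run as written, the sketch prints "Bar M", "M Bar", "Circle C" and "Bar Circle 7", following the left-to-right, top-to-bottom, outside-to-inside order described above. 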
Reading of complex brands and picture brands depends at times upon the owner's interpretation, may vary depending upon location, and it may require an expert to identify some of the more complex marks. In the following list, the term "symbol" generally means a capital letter; lowercase letters are not used. Brands are usually "read" top to bottom and left to right. There are regional variations in how brands are read, and deference is given to the terminology preferred by the owner of the brand. Terms used include: "Bar": a short horizontal line. For example, a short horizontal line over an M or before an M would be read as "Bar M". Similarly, a short horizontal line under an M or after an M would be read as "M Bar". The bar can also be through the middle of the symbol and would be read as "Bar M". "Rail" is alternative terminology to "bar" in some areas, referring to a long horizontal line. For example, a long horizontal line over an M or before an M would be read as "Rail M". Similarly, a long horizontal line under an M or after an M would be read as "M Rail". "Box": a symbol within a square or rectangle, or a square or rectangle by itself. A box with a P inside of it would be read as "Box P". "Circle": a symbol within a circle, or a circle by itself. A circle with a C inside of it would be read as "Circle C". "Half Circle or Quarter Circle": a half or quarter circle above or below a symbol, but not touching the symbol. A K with a half circle above it, open side facing up, would be read as "Half Circle K". A K with a half circle below it, open side facing down, would be read as "K Half Circle". See Rocking below if the circle touches the symbol. "Crazy": an upside-down symbol. An upside-down R would be read as "Crazy R". "Cross": a plus sign. + "Diamond": a symbol within a four-sided box tilted 45 degrees, or such a box by itself. The box sides are of equal length, and the box can be square, taller in height than in width, or greater in width than in height. A rafter can also be read as a "Half Diamond". "Flying": a symbol that starts and ends with a short serif or short horizontal line, one attached at the top left of the symbol and one at the top right, extending outward to the side. "Lazy": symbols turned 90 degrees. A symbol turned 90 degrees, lying on its face (or right-hand side), can be read as "Lazy Down" or "Lazy Right". Similarly, a symbol turned 90 degrees, lying on its back (or left-hand side), can be read as "Lazy Up" or "Lazy Left". A 5 lying on its back, for example, could be read as "Lazy 5", "Lazy Up 5" or "Lazy Left 5". "Over": a symbol over and above another symbol, but not touching the other symbol. An H above a P would be read as "H Over P". "Rafter or Half-diamond": two slashes joined at the top. ∧ An R with two slashes joined at the top above it would be read as "Rafter R". "Reverse": a reversed (mirror-image) symbol. A reversed K would be read as "Reverse K". Reverse is sometimes called "Back" (i.e., a backwards C would be read as "Back C"). "Crazy Reverse": an upside-down, reversed symbol. An upside-down, reversed R would be read as "Crazy Reverse R". "Running": a letter with a curving flare attached to the right side of the top of the letter, extending to the right, with the symbol sometimes also leaning to the right like an italic letter. "Slash": a forward or reverse slash. / \ "Tumbling": a symbol tipped to the right about 45 degrees. 
"Walking": a symbol with a short horizontal line attached to the bottom of the symbol, extending to the right of the symbol. Combinations of symbols can be made with each symbol distinct, or: "Connected" or conjoined, with symbols touching. would be read as "T S connected" or "TS conjoined". "Combined or conjoined": symbols are partially overlaid. would be read as "J K Combined". "Hanging": a symbol beneath another symbol and touching the other symbol. The hanging nomenclature may be omitted when reading the brand, such as a H with a P below it, with the top of the P touching the bottom of the right hand side of the H would be read as " H Hanging P", or just "H P". "Swinging": a symbol beneath a quarter circle, the open side of the quarter circle facing the symbol, with the symbol touching the quarter circle. For example, a H with a quarter circle over it, with the top of the H touching the quarter circle would be read as "Swinging H". "Rocking": a symbol above a quarter circle, the open side of the quarter circle facing the symbol, with the bottom of the symbol touching the quarter circle. For example, a H with a quarter circle under it, with the bottom of the H touching the quarter circle, is read as "Rocking H". Animal welfare concerns Livestock branding causes pain to the animals being branded, seen in behavioural and physiological indicators. Both hot and freeze branding produce thermal injury to the skin, but hot-iron branding creates more inflammation and pain than freeze branding does. Although alternative methods of identification such as ear tags are suggested, the practice of branding is still common worldwide. Standard hot iron branding can take about eight weeks to heal. Use of analgesics helps reduce discomfort. Topical treatments such as cooling gels helps speed healing in pigs, but results are less clear for cattle. Common concerns include how long the animal is restrained, size and location of the brand, and whether analgesics are applied for pain relief. A 2018 study in Sri Lanka, where hot-iron branding is illegal but still widely practiced, concluded that it impairs animal welfare and that there is no real way to improve the procedure. However, this particular study looked at four small dairy farms that used a technique where multiple applications of irons (“drawing”) created large brands extended across the ribs and took at least a full minute to apply and 10 weeks to heal. In contrast, in nations such as the United States and Australia, pre-shaped brands are used to stamp the brand on an animal, applied for 1-5 seconds. Although branding is painful, from a welfare perspective, stamping is preferable over drawing, as less time is needed to apply the brand. See also Animal abuse Animal identification Horse markings Human branding No. 87 Squadron RAF, whose "lazy-S" World War I unit insignia was derived from ranch branding by Joseph Callaghan. Scarification Veterinary ethics References Cattle Cattle breeding Cruelty to animals Horse management Identification of domesticated animals Livestock Radio-frequency identification Symbols Ethically disputed business practices towards animals
Livestock branding
[ "Mathematics", "Engineering" ]
4,333
[ "Radio-frequency identification", "Radio electronics", "Symbols" ]
969,746
https://en.wikipedia.org/wiki/Area%20code%20900
Area code 900 is a telephone area code in the North American Numbering Plan for premium-rate telephone numbers. Area code 900 was installed in 1971. Premium rate services are dialed in the format 1-900-XXX-XXXX. This is often called a 900 number or a 1 900 number ("one-nine-hundred"). A call to a 900 number can result in high per-minute or per-call charges. For example, a "psychic hotline" may charge one rate for the first minute and another for each additional minute. History The first 900 service known to have been used in the United States was for the "Ask President Carter" program in March 1977, for incoming calls to a nationwide talk radio broadcast featuring the newly elected President Jimmy Carter, hosted by anchorman Walter Cronkite. At that time, the intent for area code 900 was as a choke exchange, a code that blocked large numbers of simultaneous callers from jamming up the long-distance network. Numbers with the 900 area code were those which were expected to have a huge number of potential callers, and the 900 area code was screened at the local level to allow only a certain number of the callers in each area to access the nationwide long-distance network for reaching the destination number. Also, the early incarnation of 900 was not billed at premium-rate charges, but rather at regular long-distance charges based on the time of day and day of week that the call was placed. The number used for the radio program was one that was specially arranged by AT&T Corporation, CBS Radio, and the White House, to be free to the calling party. However, by 1980, the 900 area code was completely restructured by AT&T to be the premium-rate special area code which it remains today. At that time, many evening news agencies conducted "pulse polls" that charged $0.50 per call and displayed the results on television. One early use was by Saturday Night Live producers for the sketch "Larry the Lobster", featuring Eddie Murphy. The comedy sketch drew nearly 500,000 calls. AT&T and the producers of SNL split the profits of nearly $250,000. Earlier, 976 numbers used a local prefix of 976 (970 or 540 in some markets, such as New York state), though it was not assigned to a specific telephone exchange. These numbers were dialed as any other number, such as 976-1234. Initially, consumers had no choice regarding access to 900 or 976 numbers with their subscription service. However, in 1987, after a child had accumulated a bill of $17,000, the California Public Utilities Commission required telephone companies to give customers the option of preventing the dialing of premium-rate numbers. From the early 1980s through the early 1990s, it was common to see commercials promoting 1 (900) numbers to children, featuring everything from famous Saturday-morning-cartoon characters to Santa Claus. Due to complaints from parent groups about children not knowing the dangers and high cost of such calls, the FTC enacted new rules, and such commercials directed at children ceased to air on television as of the mid-1990s. Using 900 numbers for adult entertainment lines was a prevalent practice in the early years of the industry. This practice continues, along with the use of these numbers for things such as software technical support, banking access, and stock tips. Adult entertainment 900 numbers have been largely absent from AT&T and MCI since 1991. 
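As an aside, the billing structure described at the top of this article (a separately priced first minute followed by a per-minute rate, on numbers dialed as 1-900-XXX-XXXX) can be made concrete with a small sketch. The Python example below is illustrative only: the rates are invented for the example and the format check is a simplification, not the tariff or validation logic of any real carrier.

import re

# Hypothetical rates, for illustration only.
FIRST_MINUTE_RATE = 3.99        # dollars charged for the first minute
ADDITIONAL_MINUTE_RATE = 1.99   # dollars charged for each additional minute

# Premium-rate services are dialed in the format 1-900-XXX-XXXX.
NANP_900_PATTERN = re.compile(r"^1-900-\d{3}-\d{4}$")

def is_premium_900(number: str) -> bool:
    # True if the dialed string matches the 1-900-XXX-XXXX format.
    return bool(NANP_900_PATTERN.match(number))

def call_charge(minutes: int) -> float:
    # Total charge for a per-minute billed call, with the first minute priced separately.
    if minutes <= 0:
        return 0.0
    return FIRST_MINUTE_RATE + (minutes - 1) * ADDITIONAL_MINUTE_RATE

number = "1-900-555-0123"   # a fictional number used only for this example
if is_premium_900(number):
    # At these hypothetical rates a 10-minute call costs 3.99 + 9 * 1.99 = 21.90
    print(f"{number}: a 10-minute call would cost ${call_charge(10):.2f}")

At the assumed rates, a ten-minute call comes to $21.90, which illustrates how quickly such charges accumulate compared with ordinary long-distance billing. 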
In 1992, the Supreme Court let stand a law passed by Congress that blocked all 900 numbers providing adult content, except for consumers who requested access to a specific number in writing. The law killed the adult 900 number business, which moved over to 800 numbers, where billing had to be done by credit card. Hulk Hogan's Hotline was a lucrative 900 number in the 1990s. Other early leaders in amassing huge volumes of revenue were the New Kids on the Block and Dionne Warwick's Psychic Friends Network. Regulations Consumers in the US have specific rights regarding 900 number calls, as laid down by the Federal Trade Commission, such as the right to a disclaimer at the beginning of the call and a subsequent three-second hang-up grace period, the ability to contest billing errors, a prohibition on marketing to children, and a requirement that telecommunication companies must allow the consumer to block dialing to 900 numbers. US telephone companies are prohibited from disconnecting local service as a means to force payment for 1 (900) calls. Furthermore, in 2002, AT&T withdrew from billing its customers for 900-number charges on behalf of providers. This was followed by other companies throughout the decade until 2012, when Verizon, the final hold-out, also withdrew from passing on the charges. Various attempts have been made by vendors to circumvent these protections by using Caribbean or other international numbers outside Federal Communications Commission jurisdiction to bill US telephone subscribers; the former +1 (809) countries were popular because their numbers, which follow the North American Numbering Plan format, look domestic but are not. The 101XXXX dial-around prefixes were also briefly the target of abuse by premium number providers posing as inter-exchange carriers, a practice which has now been stopped. A loophole which allowed US (but not Canadian) providers in toll-free area code 800 to bill for calls by claiming the subscriber agreed to the charges has also been largely closed by more stringent regulation. See also Telecommunications tariffs Premium SMS References Telephone numbers Telecommunications economics
Area code 900
[ "Mathematics" ]
1,118
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
969,807
https://en.wikipedia.org/wiki/Pit%20sword
The pit sword (also known as a rodmeter) is a blade of metal or plastic that extends into the water beneath the hull of a ship. It is part of the pitometer log, a device for measuring the ship's speed through the water. See also Electromagnetic log Pitot tube References External links Historical site with information on US Navy submarine pitometer log systems during World War II. This site shows the pit sword or rodmeter as it is deployed. Measuring instruments
Pit sword
[ "Technology", "Engineering" ]
97
[ "Measuring instruments" ]
969,821
https://en.wikipedia.org/wiki/JAK-STAT%20signaling%20pathway
The JAK-STAT signaling pathway is a chain of interactions between proteins in a cell, and is involved in processes such as immunity, cell division, cell death, and tumor formation. The pathway communicates information from chemical signals outside of a cell to the cell nucleus, resulting in the activation of genes through the process of transcription. There are three key parts of JAK-STAT signalling: Janus kinases (JAKs), signal transducer and activator of transcription proteins (STATs), and receptors (which bind the chemical signals). Disrupted JAK-STAT signalling may lead to a variety of diseases, such as skin conditions, cancers, and disorders affecting the immune system. Structure of JAKs and STATs Main articles: JAKs and STATs There are four JAK proteins: JAK1, JAK2, JAK3 and TYK2. JAKs contain a FERM domain (approximately 400 residues), an SH2-related domain (approximately 100 residues), a kinase domain (approximately 250 residues) and a pseudokinase domain (approximately 300 residues). The kinase domain is vital for JAK activity, since it allows JAKs to phosphorylate (add phosphate groups to) proteins. There are seven STAT proteins: STAT1, STAT2, STAT3, STAT4, STAT5A, STAT5B and STAT6. STAT proteins contain many different domains, each with a different function, of which the most conserved region is the SH2 domain. The SH2 domain consists of two α-helices and a β-sheet and spans approximately residues 575–680. STATs also have transcriptional activation domains (TADs), which are less conserved and are located at the C-terminus. In addition, STATs also contain tyrosine activation, amino-terminal, linker, coiled-coil and DNA-binding domains. Mechanism The binding of various ligands, usually cytokines such as interferons and interleukins, to cell-surface receptors causes the receptors to dimerize, which brings the receptor-associated JAKs into close proximity. The JAKs then phosphorylate each other on tyrosine residues located in regions called activation loops, through a process called transphosphorylation, which increases the activity of their kinase domains. The activated JAKs then phosphorylate tyrosine residues on the receptor, creating binding sites for proteins possessing SH2 domains. STATs then bind to the phosphorylated tyrosines on the receptor using their SH2 domains, and then they are tyrosine-phosphorylated by JAKs, causing the STATs to dissociate from the receptor. STAT5, at least, requires glycosylation at threonine 92 for strong tyrosine phosphorylation. These activated STATs form hetero- or homodimers, in which the SH2 domain of each STAT binds the phosphorylated tyrosine of the opposite STAT, and the dimer then translocates to the cell nucleus to induce transcription of target genes. STATs may also be tyrosine-phosphorylated directly by receptor tyrosine kinases, but since most receptors lack built-in kinase activity, JAKs are usually required for signalling. Movement of STATs from the cytosol to the nucleus To move from the cytosol to the nucleus, STAT dimers have to pass through nuclear pore complexes (NPCs), which are protein complexes present along the nuclear envelope that control the flow of substances in and out of the nucleus. To enable STATs to move into the nucleus, an amino acid sequence on STATs, called the nuclear localization signal (NLS), is bound by proteins called importins. Once the STAT dimer (bound to importins) enters the nucleus, a protein called Ran (associated with GTP) binds to the importins, releasing them from the STAT dimer. 
The STAT dimer is then free in the nucleus. Specific STATs appear to bind to specific importin proteins. For example, STAT3 proteins can enter the nucleus by binding to importin α3 and importin α6. On the other hand, STAT1 and STAT2 bind to importin α5. Studies indicate that STAT2 requires a protein called interferon regulatory factor 9 (IRF9) to enter the nucleus. Not as much is known about nuclear entrance of other STATs, but it has been suggested that a sequence of amino acids in the DNA-binding domain of STAT4 might allow nuclear import; also, STAT5 and STAT6 can both bind to importin α3. In addition, STAT3, STAT5 and STAT6 can enter the nucleus even if they are not phosphorylated at tyrosine residues. Role of post-translational modifications After STATs are made by protein biosynthesis, they have non-protein molecules attached to them, called post-translational modifications. One example of this is tyrosine phosphorylation (which is fundamental for JAK-STAT signalling), but STATs experience other modifications, which may affect STAT behaviour in JAK-STAT signalling. These modifications include: methylation, acetylation and serine phosphorylation. Methylation. STAT3 can be dimethylated (have two methyl groups) on a lysine residue, at position 140, and it is suggested that this could reduce STAT3 activity. There is debate as to whether STAT1 is methylated on an arginine residue (at position 31), and what the function of this methylation could be. Acetylation. STAT1, STAT2, STAT3, STAT5 and STAT6 have been shown to be acetylated. STAT1 may have an acetyl group attached to lysines at positions 410 and 413, and as a result, STAT1 can promote the transcription of apoptotic genes - triggering cell death. STAT2 acetylation is important for interactions with other STATs, and for the transcription of anti-viral genes. Acetylation of STAT3 has been suggested to be important for its dimerization, DNA-binding and gene-transcribing ability, and IL-6 JAK-STAT pathways that use STAT3 require acetylation for transcription of IL-6 response genes. STAT5 acetylation on lysines at positions 694 and 701 is important for effective STAT dimerization in prolactin signalling. Adding acetyl groups to STAT6 is suggested to be essential for gene transcription in some forms of IL-4 signalling, but not all the amino acids which are acetylated on STAT6 are known. Serine phosphorylation. Most of the seven STATs (except STAT2) undergo serine phosphorylation. Serine phosphorylation of STATs has been shown to reduce gene transcription. It is also required for the transcription of some target genes of the cytokines IL-6 and IFN- γ. It has been proposed that phosphorylation of serine can regulate STAT1 dimerization, and that continuous serine phosphorylation on STAT3 influences cell division. Recruitment of co-activators Like many other transcription factors, STATs are capable of recruiting co-activators such as CBP and p300, and these co-activators increase the rate of transcription of target genes. The coactivators are able to do this by making genes on DNA more accessible to STATs and by recruiting proteins needed for transcription of genes. The interaction between STATs and coactivators occurs through the transactivation domains (TADs) of STATs. The TADs on STATs can also interact with histone acetyltransferases (HATs); these HATs add acetyl groups to lysine residues on proteins associated with DNA called histones. 
Adding acetyl groups removes the positive charge on lysine residues, and as a result there are weaker interactions between histones and DNA, making DNA more accessible to STATs and enabling an increase in the transcription of target genes. Integration with other signalling pathways JAK-STAT signalling is able to interconnect with other cell-signalling pathways, such as the PI3K/AKT/mTOR pathway. When JAKs are activated and phosphorylate tyrosine residues on receptors, proteins with SH2 domains (such as STATs) are able to bind to the phosphotyrosines, and the proteins can carry out their function. Like STATs, the PI3K protein also has an SH2 domain, and therefore it is also able to bind to these phosphorylated receptors. As a result, activating the JAK-STAT pathway can also activate PI3K/AKT/mTOR signalling. JAK-STAT signalling can also integrate with the MAPK/ERK pathway. Firstly, a protein important for MAPK/ERK signalling, called Grb2, has an SH2 domain, and therefore it can bind to receptors phosphorylated by JAKs (in a similar way to PI3K). Grb2 then functions to allow the MAPK/ERK pathway to progress. Secondly, a protein activated by the MAPK/ERK pathway, called MAPK (mitogen-activated protein kinase), can phosphorylate STATs, which can increase gene transcription by STATs. However, although MAPK can increase transcription induced by STATs, one study indicates that phosphorylation of STAT3 by MAPK can reduce STAT3 activity. One example of JAK-STAT signalling integrating with other pathways is Interleukin-2 (IL-2) receptor signalling in T cells. IL-2 receptors have γ (gamma) chains, which are associated with JAK3, which then phosphorylates key tyrosines on the tail of the receptor. Phosphorylation then recruits an adaptor protein called Shc, which activates the MAPK/ERK pathway, and this facilitates gene regulation by STAT5. Alternative signalling pathway An alternative mechanism for JAK-STAT signalling has also been suggested. In this model, SH2 domain-containing kinases can bind to phosphorylated tyrosines on receptors and directly phosphorylate STATs, resulting in STAT dimerization. Therefore, unlike the traditional mechanism, STATs can be phosphorylated not just by JAKs, but by other receptor-bound kinases. So, if one of the kinases (either JAK or the alternative SH2-containing kinase) cannot function, signalling may still occur through activity of the other kinase. This has been shown experimentally. Role in cytokine receptor signalling Given that many JAKs are associated with cytokine receptors, the JAK-STAT signalling pathway plays a major role in cytokine receptor signalling. Since cytokines are substances produced by immune cells that can alter the activity of neighbouring cells, the effects of JAK-STAT signalling are often most apparent in cells of the immune system. For example, JAK3 activation in response to IL-2 is vital for lymphocyte development and function. Also, one study indicates that JAK1 is needed to carry out signalling for receptors of the cytokines IFNγ, IL-2, IL-4 and IL-10. The JAK-STAT pathway in cytokine receptor signalling can activate STATs, which can bind to DNA and allow the transcription of genes involved in immune cell division, survival, activation and recruitment. For example, STAT1 can enable the transcription of genes which inhibit cell division and stimulate inflammation. Also, STAT4 is able to activate NK cells (natural killer cells), and STAT5 can drive the formation of white blood cells. 
In response to cytokines, such as IL-4, JAK-STAT signalling is also able to stimulate STAT6, which can promote B-cell proliferation, immune cell survival, and the production of an antibody called IgE. Role in development JAK-STAT signalling plays an important role in animal development. The pathway can promote blood cell division, as well as differentiation (the process of a cell becoming more specialised). In some flies with faulty JAK genes, too much blood cell division can occur, potentially resulting in leukaemia. JAK-STAT signalling has also been associated with excessive white blood cell division in humans and mice. The signalling pathway is also crucial for eye development in the fruit fly (Drosophila melanogaster). When mutations occur in genes coding for JAKs, some cells in the eye may be unable to divide, and other cells, such as photoreceptor cells, have been shown not to develop correctly. The entire removal of a JAK and a STAT in Drosophila causes the death of Drosophila embryos, whilst mutations in the genes coding for JAKs and STATs can cause deformities in the body patterns of flies, particularly defects in forming body segments. One theory as to how interfering with JAK-STAT signalling might cause these defects is that STATs may directly bind to DNA and promote the transcription of genes involved in forming body segments, and therefore by mutating JAKs or STATs, flies experience segmentation defects. STAT binding sites have been identified on one of these genes, called even-skipped (eve), to support this theory. Of all the segment stripes affected by JAK or STAT mutations, the fifth stripe is affected the most; the exact molecular reasons behind this are still unknown. Regulation Given the importance of the JAK-STAT signalling pathway, particularly in cytokine signalling, there are a variety of mechanisms that cells possess to regulate the amount of signalling that occurs. Three major groups of proteins that cells use to regulate this signalling pathway are protein inhibitors of activated STAT (PIAS), protein tyrosine phosphatases (PTPs) and suppressors of cytokine signalling (SOCS). Computational models of JAK-STAT signalling based on the laws of chemical kinetics have elucidated the importance of these different regulatory mechanisms for JAK-STAT signalling dynamics. Protein inhibitors of activated STATs (PIAS) PIAS form a four-member protein family made up of PIAS1, PIAS3, PIASx, and PIASγ. The proteins add a marker called SUMO (small ubiquitin-like modifier) onto other proteins, such as JAKs and STATs, modifying their function. The addition of a SUMO group onto STAT1 by PIAS1 has been shown to prevent activation of genes by STAT1. Other studies have demonstrated that adding a SUMO group to STATs may block phosphorylation of tyrosines on STATs, preventing their dimerization and inhibiting JAK-STAT signalling. PIASγ has also been shown to prevent STAT1 from functioning. PIAS proteins may also function by preventing STATs from binding to DNA (and therefore preventing gene activation), and by recruiting proteins called histone deacetylases (HDACs), which lower the level of gene expression. Protein tyrosine phosphatases (PTPs) Since adding phosphate groups to tyrosines is such an important part of how the JAK-STAT signalling pathway functions, removing these phosphate groups can inhibit signalling. PTPs are tyrosine phosphatases, so they are able to remove these phosphates and prevent signalling. Three major PTPs are SHP-1, SHP-2 and CD45. 
SHP-1 is mainly expressed in blood cells. It contains two SH2 domains and a catalytic domain (the region of a protein that carries out the main function of the protein) - the catalytic domain contains the amino acid sequence VHCSAGIGRTG (a sequence typical of PTPs). As with all PTPs, a number of amino acid structures are essential for their function: conserved cysteine, arginine and glutamine amino acids, and a loop made of tryptophan, proline and aspartate amino acids (WPD loop). When SHP-1 is inactive, the SH2 domains interact with the catalytic domain, and so the phosphatase is unable to function. When SHP-1 is activated however, the SH2 domains move away from the catalytic domain, exposing the catalytic site and therefore allowing phosphatase activity. SHP-1 is then able to bind and remove phosphate groups from the JAKs associated with receptors, preventing the transphosphorylation needed for the signalling pathway to progress. One example of this is seen in the JAK-STAT signalling pathway mediated by the erythropoietin receptor (EpoR). Here, SHP-1 binds directly to a tyrosine residue (at position 429) on EpoR and removes phosphate groups from the receptor-associated JAK2. The ability of SHP-1 to negatively regulate the JAK-STAT pathway has also been seen in experiments using mice lacking SHP-1. These mice experience characteristics of autoimmune diseases and show high levels of cell proliferation, which are typical characteristics of an abnormally high level of JAK-STAT signalling. Additionally, adding methyl groups to the SHP-1 gene (which reduces the amount of SHP-1 produced) has been linked to lymphoma (a type of blood cancer) . However, SHP-1 may also promote JAK-STAT signalling. A study in 1997 found that SHP-1 potentially allows higher amounts of STAT activation, as opposed to reducing STAT activity. A detailed molecular understanding for how SHP-1 can both activate and inhibit the signalling pathway is still unknown. SHP-2. SHP-2 has a very similar structure to SHP-1, but unlike SHP-1, SHP-2 is produced in many different cell types - not just blood cells. Humans have two SHP-2 proteins, each made up of 593 and 597 amino acids. The SH2 domains of SHP-2 appear to play an important role in controlling the activity of SHP-2. One of the SH2 domains binds to the catalytic domain of SHP-2, to prevent SHP-2 functioning. Then, when a protein with a phosphorylated tyrosine binds, the SH2 domain changes orientation and SHP-2 is activated. SHP-2 is then able to remove phosphate groups from JAKs, STATs and the receptors themselves - so, like SHP-1, can prevent the phosphorylation needed for the pathway to continue, and therefore inhibit JAK-STAT signalling. Like SHP-1, SHP-2 is able to remove these phosphate groups through the action of the conserved cysteine, arginine, glutamine and WPD loop. Negative regulation by SHP-2 has been reported in a number of experiments - one example has been when exploring JAK1/STAT1 signalling, where SHP-2 is able to remove phosphate groups from proteins in the pathway, such as STAT1. In a similar manner, SHP-2 has also been shown to reduce signalling involving STAT3 and STAT5 proteins, by removing phosphate groups. Like SHP-1, SHP-2 is also believed to promote JAK-STAT signalling in some instances, as well as inhibit signalling. For example, one study indicates that SHP-2 may promote STAT5 activity instead of reducing it. Also, other studies propose that SHP-2 may increase JAK2 activity, and promote JAK2/STAT5 signalling. 
It is still unknown how SHP2 can both inhibit and promote JAK-STAT signalling in the JAK2/STAT5 pathway; one theory is that SHP-2 may promote activation of JAK2, but inhibit STAT5 by removing phosphate groups from it. CD45. CD45 is mainly produced in blood cells. In humans it has been shown to be able to act on JAK1 and JAK3, whereas in mice, CD45 is capable of acting on all JAKs. One study indicates that CD45 can reduce the amount of time that JAK-STAT signalling is active. The exact details of how CD45 functions is still unknown. Suppressors of cytokine signalling (SOCS) There are eight protein members of the SOCS family: cytokine-inducible SH2 domain-containing protein (CISH), SOCS1, SOCS2, SOCS3, SOCS4, SOCS5, SOCS6, and SOCS7, each protein has an SH2 domain and a 40-amino-acid region called the SOCS box. The SOCS box can interact with a number of proteins to form a protein complex, and this complex can then cause the breakdown of JAKs and the receptors themselves, therefore inhibiting JAK-STAT signalling. The protein complex does this by allowing a marker called ubiquitin to be added to proteins, in a process called ubiquitination, which signals for a protein to be broken down. The proteins, such as JAKs and the receptors, are then transported to a compartment in the cell called the proteasome, which carries out protein breakdown. SOCS can also function by binding to proteins involved in JAK-STAT signalling and blocking their activity. For example, the SH2 domain of SOCS1 binds to a tyrosine in the activation loop of JAKs, which prevents JAKs from phosphorylating each other. The SH2 domains of SOCS2, SOCS3 and CIS bind directly to receptors themselves. Also, SOCS1 and SOCS3 can prevent JAK-STAT signalling by binding to JAKs, using segments called kinase inhibitory regions (KIRs) and stopping JAKs binding to other proteins. The exact details of how other SOCS function is less understood. Clinical significance Since the JAK-STAT pathway plays a major role in many fundamental processes, such as apoptosis and inflammation, dysfunctional proteins in the pathway may lead to a number of diseases. For example, alterations in JAK-STAT signalling can result in cancer and diseases affecting the immune system, such as severe combined immunodeficiency disorder (SCID). Immune system-related diseases JAK3 can be used for the signalling of IL-2, IL-4, IL-15 and IL-21 (as well as other cytokines); therefore patients with mutations in the JAK3 gene often experience issues affecting many aspects of the immune system. For example, non-functional JAK3 causes SCID, which results in patients having no NK cells, B cells or T cells, and this would make SCID individuals susceptible to infection. Mutations of the STAT5 protein, which can signal with JAK3, has been shown to result in autoimmune disorders. It has been suggested that patients with mutations in STAT1 and STAT2 are often more likely to develop infections from bacteria and viruses. Also, STAT4 mutations have been associated with rheumatoid arthritis, and STAT6 mutations are linked to asthma. Patients with a faulty JAK-STAT signalling pathway may also experience skin disorders. For example, non-functional cytokine receptors, and overexpression of STAT3 have both been associated with psoriasis (an autoimmune disease associated with red, flaky skin). STAT3 plays an important role in psoriasis, as STAT3 can control the production of IL-23 receptors, and IL-23 can help the development of Th17 cells, and Th17 cells can induce psoriasis. 
Also, since many cytokines function through the STAT3 transcription factor, STAT3 plays a significant role in maintaining skin immunity. In addition, because patients with JAK3 gene mutations have no functional T cells, B cells or NK cells, they would more likely to develop skin infections. Cancer Cancer involves abnormal and uncontrollable cell growth in a part of the body. Therefore, since JAK-STAT signalling can allow the transcription of genes involved in cell division, one potential effect of excessive JAK-STAT signalling is cancer formation. High levels of STAT activation have been associated with cancer; in particular, high amounts of STAT3 and STAT5 activation is mostly linked to more dangerous tumours. For example, too much STAT3 activity has been associated with increasing the likelihood of melanoma (skin cancer) returning after treatment and abnormally high levels of STAT5 activity have been linked to a greater probability of patient death from prostate cancer. Altered JAK-STAT signalling can also be involved in developing breast cancer. JAK-STAT signalling in mammary glands (located within breasts) can promote cell division and reduce cell apoptosis during pregnancy and puberty, and therefore if excessively activated, cancer can form. High STAT3 activity plays a major role in this process, as it can allow the transcription of genes such as BCL2 and c-Myc, which are involved in cell division. Mutations in JAK2 can lead to leukaemia and lymphoma. Specifically, mutations in exons 12, 13, 14 and 15 of the JAK2 gene are proposed to be a risk factor in developing lymphoma or leukemia. Additionally, mutated STAT3 and STAT5 can increase JAK-STAT signalling in NK and T cells, which promotes very high proliferation of these cells, and increases the likelihood of developing leukaemia. Also, a JAK-STAT signalling pathway mediated by erythropoietin (EPO), which usually allows the development of red blood cells, may be altered in patients with leukemia. Covid-19 The Janus kinase (JAK)/signal transducer and the activator of the transcription (STAT) pathway were at the centre of attention for driving hyperinflammation in COVID-19, i.e., the SARS-CoV-2 infection triggers hyperinflammation through the JAK/STAT pathway, resulting in the recruitment of dendritic cells, macrophages, and natural killer (NK) cells, as well as differentiation of B cells and T cells progressing towards cytokine storm. Treatments Since excessive JAK-STAT signalling is responsible for some cancers and immune disorders, JAK inhibitors have been proposed as drugs for therapy. For instance, to treat some forms of leukaemia, targeting and inhibiting JAKs could eliminate the effects of EPO signalling and perhaps prevent the development of leukaemia. One example of a JAK inhibitor drug is ruxolitinib, which is used as a JAK2 inhibitor. STAT inhibitors are also being developed, and many of the inhibitors target STAT3. It has been reported that therapies which target STAT3 can improve the survival of patients with cancer. Another drug, called Tofacitinib, has been used for psoriasis and rheumatoid arthritis treatment, and has been approved for treatment of Crohn's disease and ulcerative colitis. See also Janus kinase inhibitor, a type of Janus kinases-blocking drugs used for cancer therapy. Signal transducing adaptor protein, a helper protein used by major proteins in signalling pathways. 
References Further reading External links JAK-STAT, peer-reviewed journal published by Landes Bioscience Jak/Stat pathway (human) on wikipathways Web Site of Austrian Special Research Program (SFB) on Jak STAT signaling Signal transduction Gene expression Transcription factors
JAK-STAT signaling pathway
[ "Chemistry", "Biology" ]
5,697
[ "Gene expression", "Signal transduction", "Molecular genetics", "Induced stem cells", "Cellular processes", "Molecular biology", "Biochemistry", "Neurochemistry", "Transcription factors" ]
970,009
https://en.wikipedia.org/wiki/Helen%20%28play%29
Helen (Ancient Greek: Ἑλένη, Helénē) is a drama by Euripides about Helen, first produced in 412 BC for the Dionysia in a trilogy that also contained Euripides' lost Andromeda. The play has much in common with Iphigenia in Tauris, which is believed to have been performed around the same time. Historical frame Helen was written soon after the Sicilian Expedition, in which Athens had suffered a massive defeat. Concurrently, the sophists – a movement of teachers who incorporated philosophy and rhetoric into their occupation – were beginning to question traditional values and religious beliefs. Within the play's framework, Euripides starkly condemns war, deeming it to be the root of all evil. Background About thirty years before this play, Herodotus argued in his Histories that Helen had never in fact arrived at Troy, but was in Egypt during the entire Trojan War. The Archaic lyric poet Stesichorus had made the same assertion in his "Palinode" (itself a correction to an earlier poem corroborating the traditional characterization that made Helen out to be a woman of ill repute). The play Helen tells a variant of this story, beginning under the premise that rather than running off to Troy with Paris, Helen was actually whisked away to Egypt by the gods. The Helen who escaped with Paris, betraying her husband and her country and initiating the ten-year conflict, was actually an eidolon, a phantom look-alike. After Paris was promised the most beautiful woman in the world by Aphrodite and he judged her fairer than her fellow goddesses Athena and Hera, Hera ordered Hermes to replace Helen, Paris' assumed prize, with a fake. Thus, the real Helen has been languishing in Egypt for years, while the Greeks and Trojans alike curse her for her supposed infidelity. In Egypt, King Proteus, who had protected Helen, has died. His son Theoclymenus, the new king with a penchant for killing Greeks, intends to marry Helen, who after all these years remains loyal to her husband Menelaus. Plot Helen receives word from the exiled Greek Teucer that Menelaus never returned to Greece from Troy and is presumed dead, putting her in the perilous position of being available for Theoclymenus to marry. She consults the prophetess Theonoe, sister to Theoclymenus, to find out Menelaus' fate. Her fears are allayed when a stranger arrives in Egypt and turns out to be Menelaus himself, and the long-separated couple recognize each other. At first, Menelaus does not believe that she is the real Helen, since he has hidden the Helen he won in Troy in a cave. The woman he was shipwrecked with, however, was in reality only a phantom of the real Helen. Before the Trojan War even began, Paris had judged a contest among the goddesses and awarded Aphrodite the title of fairest, since she bribed him with Helen as a bride. To take their revenge on Paris, the remaining goddesses, Athena and Hera, replaced the real Helen with a phantom. Menelaus does not know this until one of his sailors steps in to inform him that the false Helen has disappeared into thin air. The couple still must figure out how to escape from Egypt, and the rumor that Menelaus has died is still in circulation. Thus, Helen tells Theoclymenus that the stranger who came ashore was a messenger sent to tell her that her husband was truly dead. She informs the king that she may marry him as soon as she has performed a ritual burial at sea, thus freeing her symbolically from her first wedding vows. 
The king agrees to this, and Helen and Menelaus use this opportunity to escape on the boat given to them for the ceremony. Theoclymenus is furious when he learns of the trick and nearly murders his sister Theonoe for not telling him that Menelaus is still alive. However, he is prevented by the miraculous intervention of the demi-gods Castor and Polydeuces, brothers of Helen and the sons of Zeus and Leda. Themes Virtue and Oaths: In Helen, Euripides emphasizes the importance of virtue and oaths. Awaiting the return of her husband Menelaus for 17 years — the ten of the Trojan War and another seven for the search — Helen remains faithful to Menelaus and the promises she has made him: Helen made two oaths, one to the Spartan river Eurotas and another on the head of Menelaus himself as sanctifying object. Menelaus also swears fidelity to Helen: so seriously do husband and wife take their vows that they agree to commit suicide and never marry another if their plans fail. Such importance to oath-keeping is consonant with general practice during the time period (Torrance, 2009). With these oaths, Helen and Menelaus declare their love for each other and their desire to live only with the other. These oaths prove their devotion and exemplify the importance of oaths. Given the play’s humor and Euripides’ general challenging of norms and values, it remains uncertain what our playwright’s own views are. Identity and Reputation: Throughout all the different permutations of the story of Helen and the Trojan War, what makes the Trojan war distinctive is the fact that it is always caused, somehow, by Helen as the supreme embodiment of female beauty, whether she is or is not physically in Troy and whether she acts as an enthusiastic partner of Paris or as a reluctant victim of his unwanted rape. Euripides expands more on this idea by presenting his play largely from Helen’s point of view, revealing how she truly feels about being the symbolic villain of the Trojan War. Helen’s character in the play is deeply affected by the losses of the people who have died fighting to bring her back to her homeland and husband and expresses this guilt frequently: “The wrecked city of Ilium / is given up to the teeth of fire, / all through me and the deaths I caused, / all for my name of affliction” (lines 196-198). Despite this guilt, she also feels anger for being made into a symbol that people can project their hate on, even though they do not know her: “I have done nothing wrong and yet my reputation / is bad, and worse than a true evil is it to bear / the burden of faults that are not truly yours” (lines 270-272). Although she spends a lot of the beginning of the play feeling pity for the men who have died and herself as well, Euripides’ Helen is independent, confident, and intelligent. She displays her ability to think on her feet as she formulates a workable plan to return home and as she rejects her husband Menelaus’ cockamamy plans. Therefore, Euripides in his play portrays a living and breathing Helen filled with compassion and wit, not at all similar to the blameworthy person others believe her to be. Translations Edward P. Coleridge, 1891 – prose: full text Arthur S. 
Way, 1912 – verse Philip Vellacott, 1954 – prose and verse Richmond Lattimore, 1956 – verse James Morwood, 1997 – prose Frank McGuinness, 2008 – for Shakespeare's Globe George Theodoridis, 2011 – prose: full text Emily Wilson, 2016 - verse See also Norma Jeane Baker of Troy, 2019 play Richard Strauss's opera Die ägyptische Helena, the libretto for which was adapted by Hugo von Hofmannsthal from the play by Euripides References Torrance, Isabelle. “On Your Head be it Sworn: Oath and Virtue in Euripides' Helen." The Classical Quarterly, vol. 59, no. 1, 2009, pp. 1-7. External links The Real Helen, retelling by W. M. L. Hutchinson. Plays by Euripides Trojan War literature Laconian mythology Egypt in Greek mythology Plays set in ancient Egypt Plays set in ancient Greece Cultural depictions of Helen of Troy Plays adapted into operas Plays based on classical mythology Castor and Pollux
Helen (play)
[ "Astronomy" ]
1,695
[ "Castor and Pollux", "Astronomical myths" ]
970,013
https://en.wikipedia.org/wiki/Jean%20Clouet
Jean (or Janet or Jehannot) Clouet (c. 1485 – 1540/1) was a painter, draughtsman and miniaturist from the Burgundian Netherlands whose known active work period took place in France. He was court painter to French king Francis I. Together with his son François Clouet he is counted among the leading 16th century portrait painters working in France. They are particularly known for their accomplished drawings, using black chalk and pure red chalk. Biography Little is known about the early life of Clouet. Art historians have generally assumed that he was a native of the Burgundian Netherlands, either in French speaking Valenciennes, County of Hainaut or Flemish speaking Brussels, Duchy of Brabant. He may have been the Jehan Cloet from Brussels mentioned in the accounts of the Duke of Burgundy. His father may have been Michel Clauwet or Clauet, a painter from Valenciennes who had settled in Brussels. In a document regarding the succession of his uncle, the painter Simon Marmion, dated 6 May 1499, Michel's two minor children, Janet and Polet, are mentioned, but there is no evidence that this Janet Clauwet was indeed Jean Clouet. Born around 1485 and trained in Flanders, Clouet spent most of his career in France. His connection with the Paris court of the French King Francis I is attested in the court accounts from 1516 until 1537. Originally he was appointed as painter and wardrobe valet at wages of 180 livres tournois. He was promoted to extraordinary valet in 1519 and finally to the new position of painter and gentleman in 1524. In 1522, on the death of the court painter Jean Bourdichon his wages were increased to 240 livres, equal to those received by the official portrait painter Jean Perréal. Perréal's departure in 1527 made Clouet the highest paid ordinary painter, confirming his status as the almost exclusive creator of portraits for the royal family and the court. His title of master painter, likely received in Flanders, also allowed him to work for private patrons, such as the notary of the King Jacques Thiboust, whose portrait he painted in 1516, and his uncle by marriage Pierre Fichepain, who commissioned a Saint Jerome from him in 1522. He lived in the 1520s in Tours, where he met and married his wife Jeanne Boucault, who was the daughter of a goldsmith. The couple had two children, François who would succeed him as a court painter, and Catherine. Catherine married Abel Foullon. Their son Benjamin Foullon (or Foulon) also became a portrait painter and miniaturist. The painter Simon Bélot worked in Jean's workshop in Tours. At the end of the 1520s, the family moved to Paris, where they lived in the rue Sainte-Avoye. From 1540, Clouet, perhaps ill, was replaced in the king's service by his son François. In July 1540, he was godfather to a child of Mathurin Régnier. He died shortly afterwards, in late 1540 or early 1541, and was buried in the Holy Innocents' Cemetery. An act in the Trésor des Chartes, the ancient archives of the French crown, states that Clouet's son Jean would succeed his father as painter and valet from November 1541 and that Jean Clouet was born outside France and never became a naturalized Frenchman. The act also allows François to inherit his father's estate, which otherwise under French law would have escheated to the French crown as Jean was a foreigner. His brother Paul, known as Clouet de Navarre, was in the service of Marguerite d'Angoulême, sister of Francis I, and is referred to in a letter written by Marguerite about 1529. 
Work Jean Clouet was undoubtedly a very skillful portrait painter, although no work in existence has been proved to be his. About 10 to 15 portrait paintings are currently attributed to him, along with a smaller number of miniatures, such as two portraits of Francis I and two smaller ones by his workshop in the Louvre, that of an unknown man at Hampton Court, and that of the Dauphin Francis, son of Francis I, at Antwerp. In 1530 he painted a portrait of the mathematician Oronce Finé, then aged 36; this portrait is now known only through a print. Clouet is generally believed to be the author of a very large number of the 130 portrait drawings now preserved at the Musée Condé in Chantilly, as well as other drawings at the Bibliothèque Nationale de France. Paintings Portrait of Francis I as Saint John the Baptist, 1518, oil on panel, 96.5 x 79 cm, Louvre, Paris. Portrait of a Banker, 1522, oil on panel, 42.5 x 32.7 cm, Saint Louis Art Museum. Portrait of Madeleine of France, c. 1522, oil on panel, 16.1 x 12.7 cm. Portrait of Charlotte of France, c. 1522, oil on panel, 17.78 x 13.34 cm, Minneapolis Institute of Art. Portrait of the Dauphin Francis of France, 1522–1525, 16 x 13 cm, Royal Museum of Fine Arts Antwerp. Portrait of Madame de Canaples, c. 1525, oil on panel, 36 x 28.5 cm, National Gallery of Scotland, Edinburgh. Portrait of Claude de Lorraine, Duke of Guise, 1528–1530, oil on panel, 29 x 26 cm, Palazzo Pitti, Florence. Portrait of Marguerite d'Angoulême, c. 1530, oil on panel, 59.8 x 51.4 cm, Walker Art Gallery, Liverpool. Portrait of Francis I, c. 1530, oil on panel, 96 x 74 cm, Louvre. Portrait of a Man Holding a Volume of Petrarch, formerly called Portrait of Claude d'Urfé, c. 1530–1535, oil on panel, 38.4 x 33 cm, Royal Collection, Hampton Court. Portrait of Guillaume Budé, c. 1536, oil on panel, 39.7 x 34.3 cm, Metropolitan Museum of Art, New York. Drawings and miniatures Seven miniature portraits in the Manuscript of the Gallic War in the Bibliothèque Nationale (13,429) are attributed to Jean Clouet with very strong probability, and to these may be added an eighth in the collection of J. Pierpont Morgan, representing Charles I de Cossé, Maréchal de Brissac, identical in its characteristics with the seven already known. There are other miniatures in the collection of Mr Morgan which may be attributed to Jean Clouet with a strong degree of probability, inasmuch as they closely resemble the portrait drawings at Chantilly and in Paris which are taken to be his work. The collection of drawings preserved in France, and attributed to this artist and his school, comprises portraits of all the important persons of the time of Francis I. In one album of drawings the portraits are annotated by the king himself, and his merry reflections, stinging taunts or biting satires add very largely to a proper understanding of the life of his time and court. Definite evidence, however, is still lacking to establish the attribution of the best of these drawings and of certain oil paintings to Jean Clouet. Notes References Cécile Scailliérez, Francis I by Clouet, Réunion des Musées Nationaux, 1996 Dictionary Bénézit, critical and documentary dictionary of painters, sculptors, designers and writers of all times and all countries, vol. 3, January 1999, p 13440. (), p. 725 Oxford Dictionary edited by Robert Maillard, Universal Dictionary of painting, vol. 2, Smeets Offset BV Weert (Netherlands), October 1975, p 3000. (), p. 
42-43 (en) Peter Mellen, Jean Clouet, complete edition of the drawings, miniatures and paintings, London, New York, Phaidon Press, 1971, 262 p. () Peter Mellen (trans. Anne Roullet), Jean Clouet, Catalogue raisonné of the drawings, miniatures and paintings, Paris, Flammarion, 1971, p. 250 Lawrence Gowing (Pref. Michel Laclotte) The paintings in the Louvre, Paris Editions Nathan, 1988, 686 p. (), p. 204 External links 1480 births 1541 deaths French portrait miniaturists Early Netherlandish painters 15th-century French painters French male painters 16th-century French painters French Renaissance painters Painters from Brussels 1485 births French portrait painters Portrait miniaturists Flemish portrait painters People from Brussels French court painters
Jean Clouet
[ "Engineering" ]
1,800
[ "Design engineering", "Draughtsmen" ]
970,031
https://en.wikipedia.org/wiki/Byzantine%20fault
A Byzantine fault is a condition of a system, particularly a distributed computing system, where a fault occurs such that different symptoms are presented to different observers, including imperfect information on whether a system component has failed. The term takes its name from an allegory, the "Byzantine generals problem", developed to describe a situation in which, to avoid catastrophic failure of a system, the system's actors must agree on a strategy, but some of these actors are unreliable in such a way as to cause other (good) actors to disagree on the strategy and they may be unaware of the disagreement. A Byzantine fault is also known as a Byzantine generals problem, a Byzantine agreement problem, or a Byzantine failure. Byzantine fault tolerance (BFT) is the resilience of a fault-tolerant computer system or similar system to such conditions. Definition A Byzantine fault is any fault presenting different symptoms to different observers. A Byzantine failure is the loss of a system service due to a Byzantine fault in systems that require consensus among multiple components. The Byzantine allegory considers a number of generals who are attacking a fortress. The generals must decide as a group whether to attack or retreat; some may prefer to attack, while others prefer to retreat. The important thing is that all generals agree on a common decision, for a halfhearted attack by a few generals would become a rout, and would be worse than either a coordinated attack or a coordinated retreat. The problem is complicated by the presence of treacherous generals who may not only cast a vote for a suboptimal strategy; they may do so selectively. For instance, if nine generals are voting, four of whom support attacking while four others are in favor of retreat, the ninth general may send a vote of retreat to those generals in favor of retreat, and a vote of attack to the rest. Those who received a retreat vote from the ninth general will retreat, while the rest will attack (which may not go well for the attackers). The problem is complicated further by the generals being physically separated and having to send their votes via messengers who may fail to deliver votes or may forge false votes. Without message signing, Byzantine fault tolerance can only be achieved if the total number of generals is greater than three times the number of disloyal (faulty) generals. There can be a default vote value given to missing messages. For example, missing messages can be given a "null" value. Further, if the agreement is that the null votes are in the majority, a pre-assigned default strategy can be used (e.g., retreat). The typical mapping of this allegory onto computer systems is that the computers are the generals and their digital communication system links are the messengers. Although the problem is formulated in the allegory as a decision-making and security problem, in electronics, it cannot be solved by cryptographic digital signatures alone, because failures such as incorrect voltages can propagate through the encryption process. Thus, a faulty message could be sent such that some recipients detect the message as faulty (bad signature), others see it as having a good signature, and a third group also sees a good signature but with different message contents than the second group. History The problem of obtaining Byzantine consensus was conceived and formalized by Robert Shostak, who dubbed it the interactive consistency problem.
This work was done in 1978 in the context of the NASA-sponsored SIFT project in the Computer Science Lab at SRI International. SIFT (for Software Implemented Fault Tolerance) was the brainchild of John Wensley, and was based on the idea of using multiple general-purpose computers that would communicate through pairwise messaging in order to reach a consensus, even if some of the computers were faulty. At the beginning of the project, it was not clear how many computers in total were needed to guarantee that a conspiracy of n faulty computers could not "thwart" the efforts of the correctly-operating ones to reach consensus. Shostak showed that a minimum of 3n+1 are needed, and devised a two-round 3n+1 messaging protocol that would work for n=1. His colleague Marshall Pease generalized the algorithm for any n > 0, proving that 3n+1 is both necessary and sufficient. These results, together with a later proof by Leslie Lamport of the sufficiency of 3n using digital signatures, were published in the seminal paper, Reaching Agreement in the Presence of Faults. The authors were awarded the 2005 Edsger W. Dijkstra Prize for this paper. To make the interactive consistency problem easier to understand, Lamport devised a colorful allegory in which a group of army generals formulate a plan for attacking a city. In its original version, the story cast the generals as commanders of the Albanian army. The name was changed, eventually settling on "Byzantine", at the suggestion of Jack Goldberg to future-proof any potential offense-giving. This formulation of the problem, together with some additional results, were presented by the same authors in their 1982 paper, "The Byzantine Generals Problem". Mitigation The objective of Byzantine fault tolerance is to be able to defend against failures of system components with or without symptoms that prevent other components of the system from reaching an agreement among themselves, where such an agreement is needed for the correct operation of the system. The remaining operationally correct components of a Byzantine fault tolerant system will be able to continue providing the system's service as originally intended, assuming there are a sufficient number of accurately-operating components to maintain the service. When considering failure propagation only via errors, Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas the fail-stop failure mode simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions on what errors can be created, which means that a failed node can generate arbitrary data, including data that makes it appear like a functioning node to a subset of other nodes. Thus, Byzantine failures can confuse failure detection systems, which makes fault tolerance difficult. Despite the allegory, a Byzantine failure is not necessarily a security problem involving hostile human interference: it can arise purely from physical or software faults. The terms fault and failure are used here according to the standard definitions originally created by a joint committee on "Fundamental Concepts and Terminology" formed by the IEEE Computer Society's Technical Committee on Dependable Computing and Fault-Tolerance and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance. See also dependability. 
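The distinction drawn above between fail-stop and Byzantine failure modes can be made concrete with a small, self-contained sketch. This is an illustration only and is not taken from the cited papers; the node labels and the run_round helper are hypothetical names invented for the example. It simply shows that a crashed (fail-stop) node looks the same to every peer, while a Byzantine node can present different symptoms to different observers.

```python
# Toy single-round message exchange contrasting fail-stop and Byzantine
# behaviour. Illustrative only; node names and helpers are invented.

def run_round(nodes):
    """Each node reports a value to every peer; return what each peer saw."""
    observed = {peer: {} for peer in nodes}
    for sender, behaviour in nodes.items():
        for peer in nodes:
            if peer != sender:
                observed[peer][sender] = behaviour(peer)
    return observed

def honest(peer):
    return "attack"        # same value to every observer

def crashed(peer):
    return None            # fail-stop: silence, seen identically by all peers

def byzantine(peer):
    # Different symptoms to different observers -- the defining trait.
    return "attack" if peer in ("A", "B") else "retreat"

nodes = {"A": honest, "B": honest, "C": crashed, "D": byzantine}
for peer, seen in sorted(run_round(nodes).items()):
    print(peer, seen)

# Every peer records the same observation for C (nothing arrived), so a
# simple failure detector can flag it. Peers disagree about what D said,
# and no single observation reveals whether D or the others are at fault.
```

Run as an ordinary script, the output shows the disagreement about D directly, which is why Byzantine failures confuse failure-detection schemes that only compare local observations.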
Byzantine fault tolerance is only concerned with broadcast consistency, that is, the property that when a component broadcasts a value to all the other components, they all receive exactly this same value, or in the case that the broadcaster is not consistent, the other components agree on a common value themselves. This kind of fault tolerance does not encompass the correctness of the value itself; for example, an adversarial component that deliberately sends an incorrect value, but sends that same value consistently to all components, will not be caught in the Byzantine fault tolerance scheme. Solutions Several early solutions were described by Lamport, Shostak, and Pease in 1982. They began by noting that the Generals' Problem can be reduced to solving a "Commander and Lieutenants" problem where loyal Lieutenants must all act in unison and that their action must correspond to what the Commander ordered in the case that the Commander is loyal: One solution considers scenarios in which messages may be forged, but which will be Byzantine-fault-tolerant as long as the number of disloyal generals is less than one third of the generals. The impossibility of dealing with one-third or more traitors ultimately reduces to proving that the one Commander and two Lieutenants problem cannot be solved, if the Commander is traitorous. To see this, suppose we have a traitorous Commander A, and two Lieutenants, B and C: when A tells B to attack and C to retreat, and B and C send messages to each other, forwarding A's message, neither B nor C can figure out who is the traitor, since it is not necessarily A—the other Lieutenant could have forged the message purportedly from A. It can be shown that if n is the number of generals in total, and t is the number of traitors in that n, then there are solutions to the problem only when n > 3t and the communication is synchronous (bounded delay). The full set of BFT requirements are: For F number of Byzantine failures, there needs to be at least 3F+1 players (fault containment zones), 2F+1 independent communication paths, and F+1 rounds of communication. There can be hybrid fault models in which benign (non-Byzantine) faults as well as Byzantine faults may exist simultaneously. For each additional benign fault that must be tolerated, the above numbers need to be incremented by one. If the BFT rounds of communication don't exist, Byzantine failures can occur even with no faulty hardware. A second solution requires unforgeable message signatures. For security-critical systems, digital signatures (in modern computer systems, this may be achieved, in practice, by using public-key cryptography) can provide Byzantine fault tolerance in the presence of an arbitrary number of traitorous generals. However, for safety-critical systems (where "security" addresses intelligent threats while "safety" addresses the inherent dangers of an activity or mission, i.e., faults due to natural phenomena), error detecting codes, such as CRCs, provide stronger coverage at a much lower cost. (Note that CRCs can provide guaranteed coverage for errors that cryptography cannot. If an encryption scheme could provide some guarantee of coverage for message errors, it would have a structure that would make it insecure.) But neither digital signatures nor error detecting codes such as CRCs provide a known level of protection against Byzantine errors from natural causes. And more generally, security measures can weaken safety and vice versa. 
Thus, cryptographic digital signature methods are not a good choice for safety-critical systems, unless there is also a specific security threat as well. While error detecting codes, such as CRCs, are better than cryptographic techniques, neither provide adequate coverage for active electronics in safety-critical systems. This is illustrated by the Schrödinger CRC scenario where a CRC-protected message with a single Byzantine faulty bit presents different data to different observers and each observer sees a valid CRC. Also presented is a variation on the first two solutions allowing Byzantine-fault-tolerant behavior in some situations where not all generals can communicate directly with each other. There are many systems that claim BFT without meeting the above minimum requirements (e.g., blockchain). Given that there is mathematical proof that this is impossible, these claims need to include a caveat that their definition of BFT strays from the original. That is, systems such as blockchain don't guarantee agreement, they only make disagreement expensive. Several system architectures were designed c. 1980 that implemented Byzantine fault tolerance. These include: Draper's FTMP, Honeywell's MMFCS, and SRI's SIFT. In 1999, Miguel Castro and Barbara Liskov introduced the "Practical Byzantine Fault Tolerance" (PBFT) algorithm, which provides high-performance Byzantine state machine replication, processing thousands of requests per second with sub-millisecond increases in latency. After PBFT, several BFT protocols were introduced to improve its robustness and performance. For instance, Q/U, HQ, Zyzzyva, and ABsTRACTs, addressed the performance and cost issues; whereas other protocols, like Aardvark and RBFT, addressed its robustness issues. Furthermore, Adapt tried to make use of existing BFT protocols, through switching between them in an adaptive way, to improve system robustness and performance as the underlying conditions change. Furthermore, BFT protocols were introduced that leverage trusted components to reduce the number of replicas, e.g., A2M-PBFT-EA and MinBFT. Applications Several examples of Byzantine failures that have occurred are given in two equivalent journal papers. These and other examples are described on the NASA DASHlink web pages. Applications in computing Byzantine fault tolerance mechanisms use components that repeat an incoming message (or just its signature, which can be reduced to just a single bit of information if self-checking pairs are used for nodes) to other recipients of that incoming message. All these mechanisms make the assumption that the act of repeating a message blocks the propagation of Byzantine symptoms. For systems that have a high degree of safety or security criticality, these assumptions must be proven to be true to an acceptable level of fault coverage. When providing proof through testing, one difficulty is creating a sufficiently wide range of signals with Byzantine symptoms. Such testing will likely require specialized fault injectors. Military applications Byzantine errors were observed infrequently and at irregular points during endurance testing for the newly constructed Virginia class submarines, at least through 2005 (when the issues were publicly reported). Cryptocurrency applications The Bitcoin network works in parallel to generate a blockchain with proof-of-work allowing the system to overcome Byzantine failures and reach a coherent global view of the system's state. Some proof of stake blockchains also use BFT algorithms. 
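The replica and quorum arithmetic quoted in the Solutions section above (at least 3F+1 fault containment zones and 2F+1 independent communication paths to tolerate F Byzantine failures), and the "more than two-thirds" agreement rule used by BFT-style proof-of-stake chains, can be illustrated with a short sketch. This is a minimal, hedged illustration of the counting argument only, not an implementation of PBFT or any named protocol; the function names are invented for the example.

```python
# Minimal sketch of the quorum arithmetic behind Byzantine fault tolerant
# replication (n >= 3f + 1 replicas to tolerate f Byzantine faults).
# Illustrative only; these helpers are not part of any real protocol's API.

def min_replicas(f):
    """Smallest replica count able to tolerate f Byzantine replicas."""
    return 3 * f + 1

def max_faults(n):
    """Largest f such that n replicas can still reach agreement (n > 3f)."""
    return (n - 1) // 3

def quorum_size(n):
    """Matching votes needed before a value is accepted.
    With n = 3f + 1 this is 2f + 1, so any two quorums overlap in at
    least f + 1 replicas, at least one of which must be honest."""
    return 2 * max_faults(n) + 1

for f in range(1, 4):
    n = min_replicas(f)
    print(f"f={f}: replicas={n}, quorum={quorum_size(n)}, "
          f"quorum fraction={quorum_size(n) / n:.2f}")

# f=1 gives 4 replicas and a quorum of 3 (75%); as f grows the required
# fraction approaches the "more than two-thirds of validators" figure
# commonly quoted for BFT-style proof-of-stake chains.
```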
Blockchain Technology Byzantine Fault Tolerance (BFT) is a crucial concept in blockchain technology, ensuring that a network can continue to function even when some nodes (participants) fail or act maliciously. This tolerance is necessary because blockchains are decentralized systems with no central authority, making it essential to achieve consensus among nodes, even if some try to disrupt the process. Applications and Examples of Byzantine Fault Tolerance in Blockchain Safety Mechanisms: Different blockchains use various BFT-based consensus mechanisms like Practical Byzantine Fault Tolerance (PBFT), Tendermint, and Delegated Proof of Stake (DPoS) to handle Byzantine faults. These protocols ensure that the majority of honest nodes can agree on the next block in the chain, securing the network against attacks and preventing double-spending and other types of fraud. Practical examples of such networks include Hyperledger Fabric, Cosmos, and Klever, corresponding respectively to the three mechanisms just listed. 51% Attack Mitigation: While traditional blockchains like Bitcoin use Proof of Work (PoW), which is susceptible to a 51% attack, BFT-based systems are designed to tolerate up to one-third of faulty or malicious nodes without compromising the network's integrity. Decentralized Trust: Byzantine Fault Tolerance underpins the trust model in decentralized networks. Instead of relying on a central authority, the network's security depends on the ability of honest nodes to outnumber and outmaneuver malicious ones. Private and Permissioned Blockchains: BFT is especially important in private or permissioned blockchains, where a limited number of known participants need to reach a consensus quickly and securely. These networks often use BFT protocols to enhance performance and security. Applications in aviation Some aircraft systems, such as the Boeing 777 Aircraft Information Management System (via its ARINC 659 SAFEbus network), the Boeing 777 flight control system, and the Boeing 787 flight control systems, use Byzantine fault tolerance; because these are real-time systems, their Byzantine fault tolerance solutions must have very low latency. For example, SAFEbus can achieve Byzantine fault tolerance within the order of a microsecond of added latency. The SpaceX Dragon considers Byzantine fault tolerance in its design. See also References Sources Bashir, Imran. "Blockchain Consensus." Blockchain Consensus - An Introduction to Classical, Blockchain, and Quantum Consensus Protocols. Apress, Berkeley, CA, 2022. External links Byzantine Fault Tolerance in the RKBExplorer Public-key cryptography Distributed computing problems Fault-tolerant computer systems Theory of computation
Byzantine fault
[ "Mathematics", "Technology", "Engineering" ]
3,279
[ "Distributed computing problems", "Reliability engineering", "Computational problems", "Computer systems", "Fault-tolerant computer systems", "Mathematical problems" ]
970,471
https://en.wikipedia.org/wiki/Mace%20%28spray%29
Mace is the brand name of an early type of aerosol self-defense spray invented by Alan Lee Litman in the 1960s. The first commercial product of its type, Litman's design packaged phenacyl chloride (CN) tear gas dissolved in hydrocarbon solvents into a small aerosol spray can, usable in many environments and strong enough to act as a deterrent and incapacitant when sprayed in the face. The brand became a generic trademark: its popularity led to the name "mace" being commonly used for other defense sprays regardless of their composition, and to the term "maced" being used to mean being pepper sprayed. It is unrelated to the spice mace. History The original formulation consisted of 1% chloroacetophenone (CN) in a solvent of 2-butanol, propylene glycol, cyclohexene, and dipropylene glycol methyl ether. Chemical Mace was originally developed in the 1960s by Allan Lee Litman and his wife, Doris Litman, after one of Doris's female colleagues was robbed in Pittsburgh. In 1987, Chemical Mace was sold to Smith & Wesson and manufactured by their Lake Erie Chemical division. Smith & Wesson subsequently transferred ownership to Jon E. Goodrich along with the rest of the chemical division in what is now Mace Security International, which also owns federal trademark registrations for the term "mace". Historically, "chemical mace" referred to an irritant formulation based on the active ingredient phenacyl chloride (CN), used to incapacitate an attacker, whereas "Mace" is a trademarked term applied to personal defense sprays. Though the design has been expanded on, the original chemical mace formula using only CN has since been discontinued. Due to the potentially toxic nature of CN and the generally superior incapacitating qualities of oleoresin capsicum (OC) pepper spray in most situations, the early CN formula has been mostly supplanted by OC formulas in police use, although Mace Security International still retains a popular "Triple Action" formula combining CN, OC and an ultraviolet marker dye. References External links Official site of manufacturer Mace Security International Official Pepper Spray Laws American inventions Lachrymatory agents Products introduced in 1965 Self-defense Stun guns
Mace (spray)
[ "Chemistry" ]
467
[ "Lachrymatory agents", "Chemical weapons" ]
970,483
https://en.wikipedia.org/wiki/Ramsar%20site
A Ramsar site is a wetland site designated to be of international importance under the Ramsar Convention, also known as "The Convention on Wetlands", an international environmental treaty signed on 2 February 1971 in Ramsar, Iran, under the auspices of UNESCO. It came into force on 21 December 1975, when it was ratified by a sufficient number of nations. It provides for national action and international cooperation regarding the conservation of wetlands, and wise sustainable use of their resources. Ramsar treaty participants meet regularly to identify and agree to protect "Wetlands of International Importance", especially those providing waterfowl habitat. , there are 2,521 Ramsar sites around the world, protecting , and 172 national governments are participating. Site listings The non-profit organisation Wetlands International provides access to the Ramsar database via the Ramsar Sites Information Service. Ramsar site criteria A wetland can be considered internationally important if any of the following nine criteria apply: Criterion 1: "it contains a representative, rare, or unique example of a natural or near-natural wetland type found within the appropriate biogeographic region." Criterion 2: "it supports vulnerable, endangered, or critically endangered species or threatened ecological communities." Criterion 3: "it supports populations of plant and/or animal species important for maintaining the biological diversity of a particular biogeographic region." Criterion 4: "it supports plant and/or animal species at a critical stage in their life cycles, or provides refuge during adverse conditions." Criterion 5: "it regularly supports 20,000 or more waterbirds." Criterion 6: "it regularly supports 1% of the individuals in a population of one species or subspecies of waterbird." Criterion 7: "it supports a significant proportion of indigenous fish subspecies, species or families, life-history stages, species interactions and/or populations that are representative of wetland benefits and/or values and thereby contributes to global biological diversity." Criterion 8: "it is an important source of food for fishes, spawning ground, nursery and/or migration path on which fish stocks, either within the wetland or elsewhere, depend." Criterion 9: "it regularly supports 1% of the individuals in a population of one species or subspecies of wetland-dependent non-avian animal species." Classification The Ramsar Classification System for Wetland Type is a wetland classification developed within the Ramsar Convention intended as a means for fast identification of the main types of wetlands for the purposes of the Convention. 
Marine/coastal wetlands Saline water: Permanent: (A) Permanent shallow marine waters: Less than 6m deep at low tide; including sea bays and straits (B) Marine subtidal aquatic beds: Underwater vegetation; including kelp beds and sea grass beds, and tropical marine meadows (C) Coral reefs Shores: (D) Rocky marine shores (E) Sand, shingle or pebble shores Saline or brackish water: Intertidal: (G) Intertidal mud, sand or salt flats (H) Intertidal marshes (I) Intertidal forested wetlands Lagoons: (J) Coastal brackish/saline lagoons Estuarine waters: (F) Estuarine waters Saline, brackish, or fresh water: Subterranean: (Zk(a)) Karst and other subterranean hydrological systems Fresh water: Lagoons: (K) Coastal freshwater lagoons Inland wetlands Fresh water: Flowing water: Permanent: Permanent inland river deltas (L) Permanent rivers/creeks/streams (M) Freshwater springs, oases (Y) Seasonal/intermittent rivers/creeks/streams (N) Lakes/pools: Permanent >8 ha (O) Permanent < 8 ha(Tp) Seasonal / Intermittent > 8 ha (P) Seasonal Intermittent < 8 ha(Ts) Marshes on inorganic soils: Permanent (herb dominated) (Tp) Permanent / Seasonal / Intermittent (shrub dominated)(W) Permanent / Seasonal / Intermittent (tree dominated) (Xf) Seasonal/intermittent (herb dominated) (Ts) Marshes on peat soils: Permanent (non-forested)(U) Permanent (forested)(Xp) Marshes on inorganic or peat soils: Marshes on inorganic or peat soils / High altitude (alpine) (Va) Marshes on inorganic or peat soils / Tundra (Vt) Saline, brackish or alkaline waters: Lakes Permanent (Q) Seasonal/intermittent (R) Marshes/pools Permanent (Sp) Seasonal/intermittent (Ss) Fresh, saline, brackish or alkaline waters: Geothermal (Zg) Subterranean (Zk(b)) Human-made wetlands (1): Aquaculture ponds (2): Ponds (farm and stock ponds, small stock tanks, or area less than 8 ha) (3): Irrigated land (4): Seasonally flooded agricultural land (5): Salt exploitation sites (6): Water Storage areas/Reservoirs (7): Excavations (8): Wastewater treatment areas (9): Canals and drainage channels or ditches (Zk(c)): human-made karst and other subterranean hydrological systems See also List of parties to the Ramsar Convention Montreux Record References External links Ramsar Sites Information Service.org: Official List of all Ramsar Sites website—via Ramsar Sites Information Service Ramsar Sites Information Service.org—images of Ramsar sites Ramsar.org: Ramsar Convention website 1975 in the environment Protected areas established in 1975 Protected areas Sites Wetland conservation Wetlands Wildlife conservation
Ramsar site
[ "Biology", "Environmental_science" ]
1,118
[ "Wildlife conservation", "Hydrology", "Wetlands", "Biodiversity" ]
970,554
https://en.wikipedia.org/wiki/Centaurus%20A/M83%20Group
The Centaurus A/M83 Group is a complex group of galaxies in the constellations Hydra, Centaurus, and Virgo. The group may be roughly divided into two subgroups. The Cen A Subgroup, at a distance of 11.9 Mly (3.66 Mpc), is centered on Centaurus A, a nearby radio galaxy. The M83 Subgroup, at a distance of 14.9 Mly (4.56 Mpc), is centered on the Messier 83 (M83), a face-on spiral galaxy. This group is sometimes identified as one group and sometimes identified as two groups. Hence, some references will refer to two objects named the Centaurus A Group and the M83 Group. However, the galaxies around Centaurus A and the galaxies around M83 are physically close to each other, and both subgroups appear not to be moving relative to each other. The Centaurus A/M83 Group is part of the Virgo Supercluster, the local supercluster of which the Local Group is an outlying member. Members Member identification The brightest group members were frequently identified in early galaxy group identification surveys. However, many of the dwarf galaxies in the group were only identified in more intensive studies. One of the first of these identified 145 faint objects on optical images from the UK Schmidt Telescope and followed these up in hydrogen line emission with the Parkes Radio Telescope and in the hydrogen-alpha spectral line with the Siding Spring 2.3 m Telescope. This identified 20 dwarf galaxies as members of the group. The HIPASS survey, which was a blind radio survey for hydrogen spectral line emission, found five uncatalogued galaxies in the group and also identified five previously-catalogued galaxies as members. An additional dwarf galaxy was identified as a group member in the HIDEEP survey, which was a more intensive radio survey for hydrogen emission within a smaller region of the sky. Several optical surveys later identified 20 more candidate objects to the group. In 2007, the Cen A group membership of NGC 5011C was established. While this galaxy is a well-known stellar system listed with a NGC number, its true identity remained hidden because of coordinate confusion and wrong redshifts in the literature. From 2015 to 2017 a full optical survey was conducted using the Dark Energy Camera, covering 550 square degrees in the sky and doubling the number of known dwarf galaxies in this group. Another deep but spatially limited survey around Centaurus A revealed numerous new dwarfs. The dwarf spheroidal galaxies of the Centaurus A group have been studied and have been found to have old, metal-poor stellar populations similar to those in the Local Group, and follow a similar metallicity–luminosity relation. One dwarf galaxy, KK98 203 (LEDA 166167), has an extended ring of Hα emission. Member list The table below lists galaxies that have been identified as associated with the Centaurus A/M83 Group by I. D. Karachentsev and collaborators. Note that Karachentsev divides this group into two subgroups centered on Centaurus A and Messier 83. Additionally, ESO 219-010, PGC 39032, and PGC 51659 are listed as possibly being members of the Centaurus A Subgroup, and ESO 381-018, NGC 5408, and PGC 43048 are listed as possibly being members of the M83 Subgroup. Although HIPASS J1337-39 is only listed as a possible member of the M83 Subgroup in the later list published by Karachentsev, later analyses indicate that this galaxy is within the subgroup. 
Saviane and Jerjen found that NGC 5011C has an optical redshift of 647 km/s and thus is a member of the Cen A group rather than of the distant Centaurus galaxy cluster as believed since 1983. References Galaxy clusters
Centaurus A/M83 Group
[ "Astronomy" ]
807
[ "Galaxy clusters", "Astronomical objects" ]
970,579
https://en.wikipedia.org/wiki/Nationwide%20Urban%20Runoff%20Program
The Nationwide Urban Runoff Program (NURP) was a research project conducted by the United States Environmental Protection Agency (EPA) between 1979 and 1983. It was the first comprehensive study of urban stormwater pollution across the United States. Study objectives The principal focus areas of the study consisted of: Examine the water quality aspects of urban runoff, and a comparison of results across various urban sites Assess the impact of urban runoff on overall water quality Implement stormwater management best practices. A major component of the project was an analysis of water samples collected during 2,300 storms in 28 major metropolitan areas. Findings Among the conclusions of the report are the following: "Heavy metals (especially copper, lead and zinc) are by far the most prevalent priority pollutant constituents found in urban runoff...Copper is suggested to be the most significant [threat] of the three." "Coliform bacteria are present at high levels in urban runoff." "Nutrients are generally present in urban runoff, but... [generally] concentrations do not appear to be high in comparison with other possible discharges." "Oxygen demanding substances are present in urban runoff at concentrations approximating those in secondary treatment plant discharges." "The physical aspects of urban runoff, e.g. erosion and scour, can be a significant cause of habitat disruption and can affect the type of fishery present." "Detention basins... [and] recharge devices are capable of providing very effective removal of pollutants in urban runoff." "Wet basins (designs which maintain a permanent water pool) have the greatest performance capabilities." "Wetlands are considered to be a promising technique for control of urban runoff quality." An interesting finding of the NURP was that street sweeping was considered to be, "ineffective as a technique for improving the quality of urban runoff". Impact of the report In 1987, the results of the report were used as the basis of an amendment to the Clean Water Act requiring local governments and industry to address the pollution sources indicated by the report. The amendment requires all industrial stormwater dischargers (including many construction sites) and municipal storm sewer systems, affecting virtually all cities and towns in the country, to obtain discharge permits. EPA published national stormwater regulations in 1990 and 1999. EPA and state agencies began issuing stormwater permits in 1991. See Stormwater management permits. About "NURP ponds" The term "NURP ponds" refers to retention basins (also called "wet ponds") that capture sediment from stormwater runoff as it is detained, and that are designed to perform to the level of the more effective ponds observed in the NURP studies. Some practitioners may assume that a "NURP pond" design conforms to some particular standard issued by EPA, but in fact EPA has issued no regulations or other requirements regarding the design of stormwater ponds. (However, some states and municipalities have issued stormwater design manuals, and these publications may include a reference to a "NURP pond".) See also Green infrastructure Stormwater management Water pollution in the United States References External links EPA Stormwater Permit Program EPA Nonpoint Source Management Program Stormwater management Water pollution in the United States United States Environmental Protection Agency
Nationwide Urban Runoff Program
[ "Chemistry", "Environmental_science" ]
648
[ "Water treatment", "Stormwater management", "Water pollution" ]
970,599
https://en.wikipedia.org/wiki/Prefabrication
Prefabrication is the practice of assembling components of a structure in a factory or other manufacturing site, and transporting complete assemblies or sub-assemblies to the construction site where the structure is to be located. Some researchers refer to it as "various materials joined together to form a component of the final installation procedure". The most commonly cited definition is that of Goodier and Gibb (2007), which describes it as the manufacturing and preassembly of a certain number of building components, modules, and elements before their shipment and installation on construction sites. The term prefabrication also applies to the manufacturing of things other than structures at a fixed site. It is frequently used when fabrication of a section of a machine or any movable structure is shifted from the main manufacturing site to another location, and the section is supplied assembled and ready to fit. It is not generally used to refer to electrical or electronic components of a machine, or mechanical parts such as pumps, gearboxes and compressors which are usually supplied as separate items, but to sections of the body of the machine which in the past were fabricated with the whole machine. Prefabricated parts of the body of the machine may be called 'sub-assemblies' to distinguish them from the other components. Process and theory An example from house-building illustrates the process of prefabrication. The conventional method of building a house is to transport bricks, timber, cement, sand, steel and construction aggregate, etc. to the site, and to construct the house on site from these materials. In prefabricated construction, only the foundations are constructed in this way, while sections of walls, floors and roof are prefabricated (assembled) in a factory (possibly with window and door frames included), transported to the site, lifted into place by a crane and bolted together. Prefabrication is used in the manufacture of ships, aircraft and all kinds of vehicles and machines where sections previously assembled at the final point of manufacture are assembled elsewhere instead, before being delivered for final assembly. The theory behind the method is that time and cost are saved if similar construction tasks can be grouped, and assembly line techniques can be employed in prefabrication at a location where skilled labour is available, while congestion at the assembly site, which wastes time, can be reduced. The method finds application particularly where the structure is composed of repeating units or forms, or where multiple copies of the same basic structure are being constructed. Prefabrication avoids the need to transport so many skilled workers to the construction site, and other restricting conditions such as a lack of power, lack of water, exposure to harsh weather or a hazardous environment are avoided. Against these advantages must be weighed the cost of transporting prefabricated sections and lifting them into position as they will usually be larger, more fragile and more difficult to handle than the materials and components of which they are made. History Prefabrication has been used since ancient times. For example, it is claimed that the world's oldest known engineered roadway, the Sweet Track constructed in England around 3800 BC, employed prefabricated timber sections brought to the site rather than assembled on-site.
Sinhalese kings of ancient Sri Lanka have used prefabricated buildings technology to erect giant structures, which dates back as far as 2000 years, where some sections were prepared separately and then fitted together, specially in the Kingdom of Anuradhapura and Polonnaruwa. After the great Lisbon earthquake of 1755, the Portuguese capital, especially the Baixa district, was rebuilt by using prefabrication on an unprecedented scale. Under the guidance of Sebastião José de Carvalho e Melo, popularly known as the Marquis de Pombal, the most powerful royal minister of D. Jose I, a new Pombaline style of architecture and urban planning arose, which introduced early anti-seismic design features and innovative prefabricated construction methods, according to which large multistory buildings were entirely manufactured outside the city, transported in pieces and then assembled on site. The process, which lasted into the nineteenth century, lodged the city's residents in safe new structures unheard-of before the quake. Also in Portugal, the town of Vila Real de Santo António in the Algarve, founded on 30 December 1773, was quickly erected through the use of prefabricated materials en masse. The first of the prefabricated stones was laid in March 1774. By 13 May 1776, the centre of the town had been finished and was officially opened. In 19th century Australia a large number of prefabricated houses were imported from the United Kingdom. The method was widely used in the construction of prefabricated housing in the 20th century, such as in the United Kingdom as temporary housing for thousands of urban families "bombed out" during World War II. Assembling sections in factories saved time on-site and the lightness of the panels reduced the cost of foundations and assembly on site. Coloured concrete grey and with flat roofs, prefab houses were uninsulated and cold and life in a prefab acquired a certain stigma, but some London prefabs were occupied for much longer than the projected 10 years. The Crystal Palace, erected in London in 1851, was a highly visible example of iron and glass prefabricated construction; it was followed on a smaller scale by Oxford Rewley Road railway station. During World War II, prefabricated Cargo ships, designed to quickly replace ships sunk by Nazi U-boats became increasingly common. The most ubiquitous of these ships was the American Liberty ship, which reached production of over 2,000 units, averaging 3 per day. Current uses The most widely used form of prefabrication in building and civil engineering is the use of prefabricated concrete and prefabricated steel sections in structures where a particular part or form is repeated many times. It can be difficult to construct the formwork required to mould concrete components on site, and delivering wet concrete to the site before it starts to set requires precise time management. Pouring concrete sections in a factory brings the advantages of being able to re-use moulds and the concrete can be mixed on the spot without having to be transported to and pumped wet on a congested construction site. Prefabricating steel sections reduces on-site cutting and welding costs as well as the associated hazards. Prefabrication techniques are used in the construction of apartment blocks, and housing developments with repeated housing units. Prefabrication is an essential part of the industrialization of construction. 
The quality of prefabricated housing units has increased to the point that they may not be distinguishable from traditionally built units by those who live in them. The technique is also used in office blocks, warehouses and factory buildings. Prefabricated steel and glass sections are widely used for the exterior of large buildings. Detached houses, cottages, log cabins, saunas, etc. are also sold with prefabricated elements. Prefabrication of modular wall elements allows building of complex thermal insulation, window frame components, etc. on an assembly line, which tends to improve quality over on-site construction of each individual wall or frame. Wood construction in particular benefits from the improved quality. However, tradition often favors building by hand in many countries, and the image of prefab as a "cheap" method only slows its adoption. Nevertheless, current practice already allows modifying the floor plan according to the customer's requirements and selecting the surfacing material, e.g. a personalized brick facade can be masoned even if the load-supporting elements are timber. Today, prefabrication is used in various industries and construction sectors such as healthcare, retail, hospitality, education, and public administration, due to its many advantages over traditional on-site construction, such as reduced installation time and cost savings. It is used in single-story buildings as well as in multi-story projects, and can be applied to a specific part of a project or to the whole of it. The efficiency and speed of execution mean that, in the education sector for example, projects can be carried out without interrupting the operation of the facilities concerned. Prefabrication saves engineering time on the construction site in civil engineering projects. This can be vital to the success of projects such as bridges and avalanche galleries, where weather conditions may only allow brief periods of construction. Prefabricated bridge elements and systems offer bridge designers and contractors significant advantages in terms of construction time, safety, environmental impact, constructibility, and cost. Prefabrication can also help minimize the impact on traffic from bridge building. Additionally, small, commonly used structures such as concrete pylons are in most cases prefabricated. Radio towers for mobile phone and other services often consist of multiple prefabricated sections. Modern lattice towers and guyed masts are also commonly assembled from prefabricated elements. Prefabrication has become widely used in the assembly of aircraft and spacecraft, with components such as wings and fuselage sections often being manufactured in different countries or states from the final assembly site. However, this is sometimes for political rather than commercial reasons, such as for Airbus. Advantages Moving partial assemblies from a factory often costs less than moving pre-production resources to each site Deploying resources on-site can add costs; prefabricating assemblies can save costs by reducing on-site work Factory tools - jigs, cranes, conveyors, etc. - can make production faster and more precise Factory tools - shake tables, hydraulic testers, etc.
- can offer added quality assurance Consistent indoor environments of factories eliminate most impacts of weather on production Cranes and reusable factory supports can allow shapes and sequences without expensive on-site falsework Higher-precision factory tools can aid more controlled movement of building heat and air, for lower energy consumption and healthier buildings Factory production can facilitate more optimal materials usage, recycling, noise capture, dust capture, etc. Machine-mediated parts movement, and freedom from wind and rain can improve construction safety Homogeneous manufacturing allows high standardization and quality control, ensuring quality requirements subject to performance and resistance tests, which also facilitate high scalability of construction projects. The specific production processes in industrial assembly lines allow high sustainability, which enables savings of up to 20% of the total final cost, as well as considerable savings in indirect costs. Disadvantages Transportation costs may be higher for voluminous prefabricated sections (especially sections so big that they constitute oversize loads requiring special signage, escort vehicles, and temporary road closures) than for their constituent materials, which can often be packed more densely and are more likely to fit onto standard-sized vehicles. Large prefabricated sections may require heavy-duty cranes and precision measurement and handling to place in position. Off-site fabrication Off-site fabrication is a process that incorporates prefabrication and pre-assembly. The process involves the design and manufacture of units or modules, usually remote from the work site, and the installation at the site to form the permanent works at the site. In its fullest sense, off-site fabrication requires a project strategy that will change the orientation of the project process from construction to manufacture to installation. Examples of off-site fabrication are wall panels for homes, wooden truss bridge spans, airport control stations. There are four main categories of off-site fabrication, which is often also referred to as off-site construction. These can be described as component (or sub-assembly) systems, panelised systems, volumetric systems, and modular systems. Below these categories different branches, or technologies are being developed. There are a vast number of different systems on the market which fall into these categories and with recent advances in digital design such as building information modeling (BIM), the task of integrating these different systems into a construction project is becoming increasingly a "digital" management proposition. The prefabricated construction market is booming. It is growing at an accelerated pace both in more established markets such as North America and Europe and in emerging economies such as the Asia-Pacific region (mainly China and India). Considerable growth is expected in the coming years, with the prefabricated modular construction market expected to grow at a CAGR (compound annual growth rate) of 8% between 2022 and 2030. It is expected to reach USD 271 billion by 2030. See also Prefabricated home Prefabricated buildings Concrete perpend Panelák Tower block St Crispin's School — an example of a prefabricated school building Nonsuch House, first prefabricated building Agile construction Intermediate good References Sources Manufacturing
Prefabrication
[ "Engineering" ]
2,605
[ "Manufacturing", "Mechanical engineering" ]
970,650
https://en.wikipedia.org/wiki/Specific%20absorption%20rate
Specific absorption rate (SAR) is a measure of the rate at which energy is absorbed per unit mass by a human body when exposed to a radio frequency (RF) electromagnetic field. It is defined as the power absorbed per mass of tissue and has units of watts per kilogram (W/kg). SAR is usually averaged either over the whole body, or over a small sample volume (typically 1 g or 10 g of tissue). The value cited is then the maximum level measured in the body part studied over the stated volume or mass. Calculation SAR for electromagnetic energy can be calculated from the electric field within the tissue as $\mathrm{SAR} = \frac{1}{V}\int_{\text{sample}} \frac{\sigma(\mathbf{r})\,|\mathbf{E}(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\,d\mathbf{r}$, where $\sigma$ is the sample electrical conductivity, $\mathbf{E}$ is the RMS electric field, $\rho$ is the sample density, and $V$ is the volume of the sample. SAR measures exposure to fields between 100 kHz and 10 GHz (known as radio waves). It is commonly used to measure power absorbed from mobile phones and during MRI scans. The value depends heavily on the geometry of the part of the body that is exposed to the RF energy and on the exact location and geometry of the RF source. Thus tests must be made with each specific source, such as a mobile-phone model, and at the intended position of use. Mobile phone SAR testing When measuring the SAR due to a mobile phone the phone is placed against a representation of a human head (a "SAR Phantom") in a talk position. The SAR value is then measured at the location that has the highest absorption rate in the entire head, which in the case of a mobile phone is often as close to the phone's antenna as possible. Measurements are made for different positions on both sides of the head and at different frequencies representing the frequency bands at which the device can transmit. Depending on the size and capabilities of the phone, additional testing may also be required to represent usage of the device while placed close to the user's body and/or extremities. Various governments have defined maximum SAR levels for RF energy emitted by mobile devices: United States: the FCC requires that phones sold have a SAR level at or below 1.6 watts per kilogram (W/kg) taken over the volume containing a mass of 1 gram of tissue that is absorbing the most signal. European Union: CENELEC specify SAR limits within the EU, following IEC standards. For mobile phones, and other such hand-held devices, the SAR limit is 2 W/kg averaged over the 10 g of tissue absorbing the most signal (IEC 62209-1). India: switched from the EU limits to the US limits for mobile handsets in 2012. Unlike the US, India will not rely solely on SAR measurements provided by manufacturers; random compliance tests are done by a government-run Telecommunication Engineering Center (TEC) SAR Laboratory on handsets and 10% of towers. All handsets must have a hands-free mode. SAR values are heavily dependent on the size of the averaging volume. Without information about the averaging volume used, comparisons between different measurements cannot be made. Thus, the European 10-gram ratings should be compared among themselves, and the American 1-gram ratings should only be compared among themselves. To check SAR on your mobile phone, review the documentation provided with the phone, dial *#07# (only works on some models) or visit the manufacturer's website. MRI scanner SAR testing For magnetic resonance imaging the limits (described in IEC 60601-2-33) are slightly more complicated: Note: Averaging time of 6 minutes. (a) Local SAR is determined over the mass of 10 g.
(b) The limit scales dynamically with the ratio "exposed patient mass / patient mass": Normal operating mode: Partial body SAR = 10 W/kg − (8 W/kg × exposed patient mass / patient mass). 1st level controlled: Partial body SAR = 10 W/kg − (6 W/kg × exposed patient mass / patient mass). (c) In cases where the orbit is in the field of a small local RF transmit coil, care should be taken to ensure that the temperature rise is limited to 1 °C. Criticism SAR limits set by law do not consider that the human body is particularly sensitive to the power peaks or frequencies responsible for the microwave hearing effect. Frey reports that the microwave hearing effect occurs with average power density exposures of 400 μW/cm2, well below SAR limits (as set by government regulations). Notes: In comparison to the short term, relatively intensive exposures described above, for long-term environmental exposure of the general public there is a limit of 0.08 W/kg averaged over the whole body. A whole-body average SAR of 0.4 W/kg has been chosen as the restriction that provides adequate protection for occupational exposure. An additional safety factor of 5 is introduced for exposure of the public, giving an average whole-body SAR limit of 0.08 W/kg. FCC advice The FCC guide "Specific Absorption Rate (SAR) For Cell Phones: What It Means For You", after detailing the limitations of SAR values, offers the following "bottom line" editorial: MSBE (minimum SAR with biological effect) In order to find out possible advantages and the interaction mechanisms of electromagnetic fields (EMF), the minimum SAR (or intensity) that could have biological effect (MSBE) would be much more valuable in comparison to studying high-intensity fields. Such studies can possibly shed light on thresholds of non-ionizing radiation effects and cell capabilities (e.g., oxidative response). In addition, it is more likely to reduce the complexity of the EMF interaction targets in cell cultures by lowering the exposure power, which at least reduces the overall rise in temperature. This parameter might differ regarding the case under study and depends on the physical and biological conditions of the exposed target. FCC regulations The FCC regulations for SAR are contained in 47 C.F.R. 1.1307(b), 1.1310, 2.1091, 2.1093 and also discussed in OET Bulletin No. 56, "Questions and Answers About the Biological Effects and Potential Hazards of Radiofrequency Electromagnetic Fields." European regulations Specific energy absorption rate (SAR) averaged over the whole body or over parts of the body, is defined as the rate at which energy is absorbed per unit mass of body tissue and is expressed in watts per kilogram (W/kg). Whole body SAR is a widely accepted measure for relating adverse thermal effects to RF exposure. Legislative acts in the European Union include directive 2013/35/EU of the European Parliament and of the Council of 26 June 2013 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (electromagnetic fields) (20th individual Directive within the meaning of Article 16(1) of Directive 89/391/EEC) and repealing Directive 2004/40/EC) in its annex III "THERMAL EFFECTS" for "EXPOSURE LIMIT VALUES AND ACTION LEVELS IN THE FREQUENCY RANGE FROM 100 kHz TO 300 GHz". 
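As a rough illustration of how the quantities above fit together, the sketch below combines the definition from the Calculation section (conductivity times the squared RMS field divided by density, simplified here to a uniform sample so the volume integral collapses to a single term) with the IEC 60601-2-33 partial-body scaling rule just quoted. The tissue properties, field strength, and masses are invented example values, and the code is a teaching sketch rather than a compliance tool.

```python
# Simplified SAR arithmetic (illustrative example values, not a compliance tool).

def sar_uniform(sigma, e_rms, rho):
    """SAR in W/kg for a uniform sample, where the volume integral of
    sigma * |E|^2 / rho reduces to a single term.
    sigma: conductivity (S/m), e_rms: RMS electric field (V/m), rho: density (kg/m^3)."""
    return sigma * e_rms ** 2 / rho

def mri_partial_body_limit(exposed_mass_kg, patient_mass_kg, first_level=False):
    """Partial-body SAR limit (W/kg) using the IEC 60601-2-33 scaling quoted above:
    10 W/kg minus 8 W/kg (normal mode) or 6 W/kg (first level controlled mode)
    times the exposed-mass fraction."""
    slope = 6.0 if first_level else 8.0
    return 10.0 - slope * (exposed_mass_kg / patient_mass_kg)

# Hypothetical muscle-like tissue exposed to a 60 V/m RMS field.
print(f"SAR estimate: {sar_uniform(sigma=0.8, e_rms=60.0, rho=1050.0):.2f} W/kg")

# A coil exposing roughly 7 kg of a 70 kg patient, normal operating mode.
print(f"Partial-body limit: {mri_partial_body_limit(7.0, 70.0):.1f} W/kg")
```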
See also Dielectric heating Electromagnetic radiation and health References External links Specific Absorption Rate (SAR) for Cellular Telephones at the US Federal Communications Commission (FCC) "Evaluating Compliance with FCC Guidelines for Human Exposure to Radiofrequency Electromagnetic Field" (Supplement C to OET Bulletin 65), June 2001; a detailed technical document about measuring SAR Electromagnetic fields and public health at the World Health Organization (WHO) "An Update on SAR Standards and the Basic Requirements for SAR Assessment" at ETS-Lindgren website (Archive.org link), April 2005 Example of a detailed SAR report from the FCC web site (for an Apple iPod Touch 4th generation); hosted at 3rd party website Manufacturers' SAR official websites Apple Samsung Sony Huawei FCC Regulations OET Bulletin No. 56, "Questions and Answers About the Biological Effects and Potential Hazards of Radiofrequency Electromagnetic Fields." Radiobiology Biophysics Rates
Specific absorption rate
[ "Physics", "Chemistry", "Biology" ]
1,614
[ "Radiobiology", "Radioactivity", "Applied and interdisciplinary physics", "Biophysics" ]
970,663
https://en.wikipedia.org/wiki/Silicon%20Alley
Silicon Alley is an area of high tech companies centered around southern Manhattan's Flatiron district in New York City. The term was coined in the 1990s during the dot-com boom, alluding to California's Silicon Valley tech center. The term has grown somewhat obsolete since 2003 as New York tech companies spread outside of Manhattan, and New York as a whole is now a top-tier global high technology hub. Silicon Alley, once a metonym for the sphere encompassing the metropolitan region's high technology industries, is no longer a relevant moniker as the city's tech environment has expanded dramatically both in location and in its scope. New York City's current tech sphere encompasses a universal array of applications involving artificial intelligence, the internet, new media, financial technology (fintech) and cryptocurrency, biotechnology, game design, and other fields within information technology that are supported by its entrepreneurship ecosystem and venture capital investments. Origin The term Silicon Alley was derived from the long-established Silicon Valley in California. It was originally centered in the Flatiron District, in the vicinity of the Flatiron Building at Fifth Avenue near Broadway and 23rd Street, straddling Midtown and Lower Manhattan. Silicon Alley initially also used to extend to Dumbo, a neighborhood in Brooklyn. Columbia University and NYU's leaderships were especially important in the alley's early development. The term Silicon Alley may have originated in 1995 by a New York staffing recruiter, Jason Denmark, who was supporting clients in the newly dubbed technical hub in downtown Manhattan; in an effort to attract candidates who, at that time, were focusing on positions in Silicon Valley, he posted in public usenet postings of Object Technology Developers, job ads with the Silicon Alley label. "Subject: NYC - silicon ALLEY" shows up in an internet post by Jason Denmark on February 16, 1995; another Jason Denmark post on June 16, 1995, is "Subject: SILICON 'ALLEY' POSITIONS." The first publication to cover Silicon Alley was @NY, an online newsletter founded in the summer of 1995 by Tom Watson and Jason Chervokas. The first magazine to focus on venture capital opportunities in Silicon Alley, AlleyCat News co-founded by Anna Copeland Wheatley and Janet Stites, was launched in the fall of 1996. Courtney Pulitzer branched off from her @The Scene column with @NY and created Courtney Pulitzer's Cyber Scene and her popular networking events Cocktails with Courtney. First Tuesday, co-founded by Vincent Grimaldi de Puget and John Grossbart, became the largest gathering of Silicon Alley, welcoming 500 to 1000 venture capitalists and entrepreneurs every month. It was an initiative of law firm Sonnenschein and the Kellogg School of Management, as well as other corporate founders, including Accenture (then Andersen Consulting), AlleyCat News and Merrill Lynch. Silicon Alley Reporter started publishing in October 1996. It was founded by Jason Calacanis and was in business from 1996 to 2001. @NY, print magazines, and the attending media coverage by the larger New York press helped to popularize both the name, and the idea of New York City as a dot-com center. In 1997, over 200 members and leaders of Silicon Alley joined NYC entrepreneurs, Andrew Rasiej and Cecilia Pagkalinawan to help wire Washington Irving High School to the Internet. 
This response and the Department of Education's growing need for technology integration marked the birth of Making Opportunities for Upgrading Schools and Education (MOUSE), an organization that today serves tens of thousands of underserved youth in schools in five states and over 20 countries. Dot-com bust The rapid growth of internet companies during the 1990s, known as the dot-com bubble, came to a rapid halt during the early 2000s recession. During this economic contraction, many internet companies in Silicon Alley folded. The recession also affected publications that covered the sector. After the dot-com bust, the Silicon Alley Reporter was rebranded as Venture Reporter, in September 2001, and sold to Dow Jones. Self-financed AlleyCat News ceased publication in October 2001. Recovery A couple of years after the dot-com bust, Silicon Alley began making its comeback with the help of NY Tech Meetup, and NextNY. On December 19, 2011, then Mayor Michael R. Bloomberg announced his choice of Cornell University and Technion-Israel Institute of Technology to build a US$2 billion graduate school of applied sciences on Roosevelt Island, with the goal of transforming New York City into the world's premier technology capital. As of 2013, Google's second largest office by number of employees, 111 Eighth Avenue, is located in New York. Verizon Communications, headquartered at 140 West Street in Lower Manhattan, was in 2014 in the final stages of completing a US$3 billion fiber-optic telecommunications upgrade throughout New York City. This revival was not restricted to Lower Manhattan, but was spread throughout New York City. Hence "Silicon Alley" has been considered by some observers to be an obsolete term. See also BioValley Silicon Beach - Westside, Los Angeles Silicon Docks - Dublin, Republic of Ireland Silicon Fen - Cambridge, United Kingdom Silicon Forest - Portland, Oregon Silicon Hills - Austin, Texas Silicon Prairie - Several Midwestern cities Silicon Slopes - Lehi, Utah Silicon Valley - San Jose, California Silicon Wadi - coastal Israel Tech Valley - Hudson Valley, New York References Further reading "How Silicon Alley Growth is Outpacing Silicon Valley," December 10, 2015 New York Post, "Silicon Alley Soaring," January 24, 2012 The New York Times, "Alive and Well in Silicon Alley", March 12, 2006 The New York Times, "New York Isn't Silicon Valley. That’s Why They Like It", March 6, 2010 SiliconAlley.com, "New York's Tax-Free Zones: An Emerging Technology Company's Dream Come True?," July 26,2013 Silicon Alley' Is Dead" "Silicon Valley vs. Silicon Alley: Can New York compete with the best of the west?" Neighborhoods in Manhattan Economy of New York City High-technology business districts in the United States Information technology places Flatiron District
Silicon Alley
[ "Technology" ]
1,248
[ "Information technology", "Information technology places" ]
970,666
https://en.wikipedia.org/wiki/NGC%203115
NGC 3115 (also called the Spindle Galaxy or Caldwell 53) is a field lenticular (S0) galaxy in the constellation Sextans. The galaxy was discovered by William Herschel on February 22, 1787. At about 32 million light-years from Earth, it is several times bigger than the Milky Way. It is a lenticular (S0) galaxy because it contains a disk and a central bulge of stars, but without a detectable spiral pattern. NGC 3115 is seen almost exactly edge-on, but was nevertheless misclassified as elliptical. There is some speculation that NGC 3115, in its youth, was a quasar. One supernova has been observed in NGC 3115: SN 1935B (type and mag. unknown). Star formation NGC 3115 has consumed most of the gas of its youthful accretion disk. It has very little gas and dust left that would trigger new star formation. The vast majority of its component stars are very old. Black hole In 1992, John Kormendy of the University of Hawaii and Douglas Richstone of the University of Michigan announced the detection of a supermassive black hole in the galaxy. Based on orbital velocities of the stars in its core, the central black hole has a mass measured to be approximately one billion solar masses. The galaxy appears to have mostly old stars and little or no activity. The growth of its black hole has also stopped. In 2011, NASA's Chandra X-ray Observatory examined the black hole at the center of the large galaxy. A flow of hot gas toward the supermassive black hole has been imaged, making this the first time clear evidence for such a flow has been observed in any black hole. As gas flows toward the black hole, it becomes hotter and brighter. The researchers found the rise in gas temperature begins at about 700 light years from the black hole, giving the location of the Bondi radius. This suggests that the black hole in the center of NGC 3115 has a mass of about two billion solar masses, supporting previous results from optical observations. This would make NGC 3115 the nearest billion-solar-mass black hole to Earth. See also NGC 5866 – another lenticular galaxy sometimes referred to as the Spindle Galaxy References External links Chandra Press Release SEDS: NGC 3115 Lenticular galaxies Field galaxies Sextans 3115 29265 053b 17870222 UGCA objects
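The Bondi-radius argument sketched above can be made concrete with the relation r_B = 2GM/c_s², where c_s is the sound speed of the hot gas. The minimal Python sketch below assumes a gas temperature of roughly 0.3 keV and a mean molecular weight of 0.6 — values not stated in the text, used only for illustration — and recovers a mass of order two billion solar masses from the quoted ~700-light-year radius.

```python
import math

# Bondi radius: r_B = 2 G M / c_s^2, so M = r_B * c_s^2 / (2 G).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LY = 9.461e15          # light year, m
M_P = 1.673e-27        # proton mass, kg

r_bondi = 700 * LY     # ~700 light-years, from the Chandra result quoted above

# Assumed gas properties (NOT given in the text): kT ~ 0.3 keV, mean molecular
# weight mu ~ 0.6, adiabatic index gamma = 5/3 for a monatomic ideal gas.
kT_joules = 0.3e3 * 1.602e-19
mu, gamma = 0.6, 5.0 / 3.0
c_s = math.sqrt(gamma * kT_joules / (mu * M_P))   # sound speed, m/s

M_bh = r_bondi * c_s**2 / (2 * G)
print(f"sound speed ~ {c_s/1e3:.0f} km/s, black-hole mass ~ {M_bh/M_SUN:.1e} M_sun")
# -> roughly 2e9 solar masses under these assumptions
```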
NGC 3115
[ "Astronomy" ]
499
[ "Sextans", "Constellations" ]
970,932
https://en.wikipedia.org/wiki/Anal%20masturbation
Anal masturbation is an autoerotic practice in which a person masturbates by sexually stimulating their own anus and rectum. Common methods of anal masturbation include manual stimulation of the anal opening and the insertion of an object or objects. Items inserted may be sex toys such as anal beads, butt plugs, dildos, vibrators, or specially designed prostate massagers or enemas. Method Pleasure can be derived from anal masturbation due to the nerve endings in the anal and rectal areas. Men In men, orgasmic function through genitalia depends in part on the healthy functioning of the smooth muscles surrounding the prostate, and of the pelvic floor muscles. Anal masturbation can be especially pleasurable for those with a functioning prostate because it often stimulates the area, which also contains sensitive nerve endings. Some men find the quality of their orgasm to be significantly enhanced by the use of a butt plug or other anally inserted item during sexual activity. It is typical for a man to not reach orgasm as a receptive partner solely from anal sex. Women Some women also engage in anal masturbation. Alfred Kinsey in "Sexual Behavior in the Human Female" documented that "There still [are] other masturbatory techniques which were regularly or occasionally employed by some 11 percent of the females in the sample ... enemas, and other anal insertions, ... were employed." Other methods Enemas can be used as a form of anal masturbation, as noted above by Kinsey, sexual arousal by enemas being known as klismaphilia, but also, enemas or anal douches can, for hygienic reasons, be taken prior to anal masturbation if desired. Autosodomy Autosodomy is the penetration of one's own anus with their own penis. This is possible if the penis is long enough and the genitals are properly maneuvered. Safety Insertion of foreign objects into the anus is not without dangers. Unsafe anal masturbation methods cause harm and a potential trip to the hospital emergency room. However, anal masturbation can be carried out in greater safety by ensuring that the bowel is emptied before beginning, the anus and rectum are sufficiently lubricated and relaxed throughout, and the inserted object is not of too great a size. Objects Some anal stimulators are purposely ribbed or have a wave pattern in order to enhance pleasure and simulate intercourse. Stimulating the rectum with a rough-edged object or a finger (for the purposes of medically stimulating a bowel movement or other reasons) may lead to rectum wall tearing, especially if the fingernail is left untrimmed. Vegetables have rough edges and may have microorganisms on the surface, and thus could lead to infection if not sanitized before use. Risks associated with bleeding Minor injuries that cause some bleeding to the rectum pose measurable risk and often need treatment. Injury can be contained by cessation of anal stimulation at any sign of injury, bleeding, or pain. While minor bleeding may stop of its own accord, individuals with serious injury, clotting problems, or other medical factors could face serious risk and require medical attention. Prolonged or heavy bleeding can indicate a life-threatening situation, as the intestinal wall can be damaged, leading to internal injury of the peritoneal cavity and peritonitis, which can be fatal. Carefully using implements without sharp edges or rough surfaces carries a lower risk of damage to the intestinal wall. 
The treatment for persistent or heavy bleeding will require a visit to an emergency room for a sigmoidoscopy and cauterization in order to prevent further loss of blood. Apart from the volume of blood that is lost into the rectum, other easily observable indications that medical intervention is urgently needed as a result of blood loss are an elevated heart rate, a general feeling of faintness or weakness, and a loss of pleasure from the act. Rectal foreign bodies Butt plugs normally have a flared base to prevent complete insertion and should be carefully sanitized before and after use. Sex toys, including objects for rectal insertion, should not be shared in order to minimize the risk of disease. Objects such as lightbulbs or anything breakable such as glass or wax candles cannot safely be used in anal masturbation, as they may break or shatter, causing highly dangerous medical situations. Some objects can become lodged above the lower colon and could be seriously difficult to remove. Such foreign bodies should not be allowed to remain in place. Medical help should be sought if the object does not emerge on its own. Immediate assistance is recommended if the object is not a proper rectal toy (like a plug or something soft, for example), if it is either too hard, too large, has projections, or slightly sharp edges, or if any trace of injury happens (bleeding, pain, cramps). Small objects with dimensions similar to small stools are less likely to become lodged than medium-sized or large objects as they can usually be expelled by forcing a bowel movement. It is always safest if a graspable part of the object remains outside the body. Hygiene The biological function of the anus is to expel intestinal gas and feces from the body; therefore, when engaging in anal masturbation, hygiene is important. One may wish to cover butt plugs or other objects with a condom before insertion and then dispose of the condom afterward. To minimize the potential transfer of germs between sexual partners, there are practices of safe sex recommended by healthcare professionals. Oral or vaginal infection may occur similarly to penile anus-to-mouth or anilingus practices. See also Anal eroticism Anal sex Prostate massage References Anal eroticism Masturbation Sexual acts
Anal masturbation
[ "Biology" ]
1,202
[ "Sexual acts", "Behavior", "Sexuality", "Mating" ]
14,364,175
https://en.wikipedia.org/wiki/Transmission%20control%20room
A transmission control room (TCR), transmission suite, Tx room, or presentation suite is a room at broadcast facilities and television stations around the world. Compared to a master control room, it is usually smaller in size and is a scaled-down version of centralcasting. A TX room or presentation suite will be staffed 24/7 by presentation coordinators and tape operators and will be fitted out with video play-out systems often using server-based broadcast automation. For operational and content qualitative reasons, not more than two television channels are managed from one TCR. Channels with live content and production switching requirements like sports channels have their own dedicated TCRs. A television station may have several TCRs depending on the number of channels they broadcast. Presentation suite The presentation suite is staffed 24/7 by on-air presentation coordinators who are responsible for the continuity and punctual playout of scheduled broadcast programming. Programming may be live from the television studio or played from video tape or from video server playout. When broadcast programming is 'live' the presentation coordinator will override the broadcast automation system and manually switch television programming. The presentation coordinator will directly coordinate live television programming going to air in consultation with master control and the production assistant (PA) or the director's assistant (DA). The presentation coordinator will arrange program source to be allocated by master control and advise the DA as to the start time and count the production in from 10 seconds to first-frame of picture and the DA will count the production out to the television commercial break and so on it continues to the end of the program. Live programming is unpredictable and will affect the scheduled timing of scheduled programming events; the presentation coordinator adjusts programming to bring the schedule back on time by adding or removing fill content from the playout schedule. Common TCR equipment Broadcast control desk Broadcast automation control computers Production switcher Talkback (recording) Broadcast quality video monitor Waveform monitor SDI audio de-embedder Video playout automation Character generator (CG) titles generator Emergency Alert System encoding/decoding systems See also Master control room Central apparatus room Broadcast engineering References Broadcasting Broadcast engineering Rooms Television terminology
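As a rough illustration of the fill-adjustment bookkeeping described above, the toy Python sketch below computes how much fill content must be added or dropped to hit the next scheduled junction after a live overrun. It is not based on any real playout-automation API; the function and field names are invented for the example.

```python
from datetime import datetime, timedelta

def fill_adjustment(scheduled_junction: datetime, projected_end_of_live: datetime) -> timedelta:
    """Positive result: fill content to add before the next junction.
    Negative result: fill to drop because the live event overran.
    (Illustrative only -- real playout automation systems have their own interfaces.)"""
    return scheduled_junction - projected_end_of_live

junction = datetime(2024, 1, 1, 21, 0, 0)
live_end = datetime(2024, 1, 1, 21, 3, 30)    # live sport overran by 3.5 minutes
print(fill_adjustment(junction, live_end))    # negative timedelta -> drop ~3.5 min of fill
```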
Transmission control room
[ "Engineering" ]
436
[ "Rooms", "Broadcast engineering", "Electronic engineering", "Architecture" ]
14,365,497
https://en.wikipedia.org/wiki/Ethyl%20phenyl%20ether
Ethyl phenyl ether (or phenetole) is an organic compound that belongs to a class of compounds called ethers. Ethyl phenyl ether shares properties typical of ethers, such as volatility, explosive vapors, and the ability to form peroxides. It will dissolve in less polar solvents such as ethanol or ether, but not in polar solvents such as water. Preparation Ethyl phenyl ether can be prepared by the reaction of phenol with diethyl sulfate; this reaction follows an SN2 pathway. See also Anisole Notes Additional references Organic Chemistry, Fessenden & Fessenden, 6th Edition, Ralph J. Fessenden et al. For Antoine constants: http://webbook.nist.gov/cgi/cbook.cgi?ID=C103731&Units=SI&Mask=4#ref-10 Phenol ethers Phenyl compounds
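The NIST reference cited above supplies Antoine constants for estimating the vapor pressure of phenetole. The short Python sketch below shows the Antoine equation itself; the constants in it are placeholders for illustration only and should be replaced with the tabulated values from the cited NIST page.

```python
def antoine_pressure_bar(T_kelvin: float, A: float, B: float, C: float) -> float:
    """Antoine equation: log10(P) = A - B / (T + C).
    NIST tabulates A, B, C so that P comes out in bar with T in kelvin."""
    return 10 ** (A - B / (T_kelvin + C))

# PLACEHOLDER constants for illustration -- look up the actual values for
# phenetole in the NIST WebBook page referenced above before relying on this.
A, B, C = 4.2, 1600.0, -60.0
print(f"illustrative vapor pressure at 298 K: {antoine_pressure_bar(298.15, A, B, C):.4f} bar")
```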
Ethyl phenyl ether
[ "Chemistry" ]
203
[]
14,365,934
https://en.wikipedia.org/wiki/HD%204113
HD 4113 is a dual star system in the southern constellation of Sculptor. It is too faint to be viewed with the naked eye, having an apparent visual magnitude of 7.88. The distance to this star, as estimated by parallax measurements, is 137 light years. It is receding away from the Sun with a radial velocity of +5 km/s. The primary member of this system, component A, is a Sun-like G-type main-sequence star with a stellar classification of G5V. Estimates of its age are five to seven billion years old, and it is spinning with a leisurely projected rotational velocity of 2.3 km/s. The star is metal rich, with nearly the same mass, radius, and luminosity as the Sun. Orbiting this star is a giant planet and a brown dwarf (HD 4113 C); the latter has been directly imaged. It also has a co-moving stellar companion, designated component B, which is a red dwarf with a class of M0–1V at an angular separation of . This angle is equivalent to a projected separation of . The most recent parameters for HD 4113 C as of 2022 come from a combination of data from radial velocity, astrometry, and imaging, showing that it is about 52 times the mass of Jupiter, and on an eccentric orbit with a semi-major axis of about 50.4 AU and an orbital period of about 348 years. Planetary system On 26 October 2007, Tamuz et al. used the radial velocity method to find a planet with a minimum mass one and half times that of Jupiter orbiting at 1.28 AU away from HD 4113 A. The planet's orbit is highly eccentric. See also HD 156846 List of extrasolar planets References External links G-type main-sequence stars M-type main-sequence stars Binary stars Planetary systems with one confirmed planet Brown dwarfs Sculptor (constellation) Durchmusterung objects 004113 003391
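The 1.28 AU separation quoted above, together with the statement that the host star has nearly the same mass as the Sun, fixes the orbital period of HD 4113 Ab through Kepler's third law. The minimal Python sketch below works this out; the 1.0 solar-mass value is an assumption taken from the qualitative description, not a figure given in the text.

```python
import math

def orbital_period_days(a_au: float, m_star_solar: float) -> float:
    """Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[M_sun]."""
    return math.sqrt(a_au ** 3 / m_star_solar) * 365.25

# HD 4113 Ab: a ~ 1.28 AU; host mass assumed to be 1.0 M_sun (described above
# as nearly the same mass as the Sun).
print(f"~{orbital_period_days(1.28, 1.0):.0f} days")   # roughly 530 days
```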
HD 4113
[ "Astronomy" ]
408
[ "Constellations", "Sculptor (constellation)" ]
14,366,026
https://en.wikipedia.org/wiki/HD%20156846
HD 156846 is a binary star system in the equatorial constellation of Ophiuchus, positioned a degree SSE of Messier 9. It has a yellow hue and is just barely bright enough to be visible to the naked eye with an apparent visual magnitude of 6.5. The system is located at a distance of 156 light years from the Sun based on parallax. It is drifting closer with a radial velocity of −68.5 km/s, and is predicted to come to within in about 476,000 years. The primary, component A, is a G-type star with a stellar classification of G1V. The absolute visual magnitude of this star is 1.13 magnitudes above the main sequence, indicating it has evolved slightly off the main sequence. It has 1.35 times the mass of the Sun and 2.12 times the Sun's radius. The star is an estimated 2.8 billion years old and is spinning with a projected rotational velocity of 5 km/s. It is radiating five times the luminosity of the Sun from its photosphere at an effective temperature of 5,969 K. The magnitude 14.4 secondary companion, component B, was discovered by the American astronomer R. G. Aitken in 1910. It lies at an angular separation of from the primary, corresponding to a projected separation of . This is a red dwarf with a class of M4V and has an estimated 59% of the Sun's mass. Planetary system On 26 October 2007, a planet HD 156846 b was found orbiting the primary star by Tamuz, using the radial velocity method. It has an orbital period of and a large eccentricity of 0.85. The estimated mass of this object is, at a minimum, 10.6 times the mass of Jupiter. If it were following the same orbit within the Solar System, it would have a perihelion within the orbit of Mercury and an aphelion outside the orbit of Mars. See also HD 4113 List of extrasolar planets References G-type main-sequence stars M-type main-sequence stars Binary stars Planetary systems with one confirmed planet Ophiuchus Durchmusterung objects 156846 084856 6441
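The Mercury/Mars comparison above follows directly from the quoted eccentricity of 0.85 once a semi-major axis is chosen. The value of roughly 1 AU used in the sketch below is an assumption for illustration (the text does not state the planet's separation); the Mercury and Mars figures are their actual semi-major axes.

```python
# Perihelion q = a(1 - e), aphelion Q = a(1 + e).
# a ~ 1 AU is assumed here purely to illustrate the comparison made for HD 156846 b.
a_au, e = 1.0, 0.85
q, Q = a_au * (1 - e), a_au * (1 + e)
print(f"perihelion ~ {q:.2f} AU (Mercury: 0.39 AU), aphelion ~ {Q:.2f} AU (Mars: 1.52 AU)")
```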
HD 156846
[ "Astronomy" ]
460
[ "Ophiuchus", "Constellations" ]
14,366,227
https://en.wikipedia.org/wiki/HD%2041004
HD 41004 is a visual binary star system in the southern constellation of Pictor. It is too faint to be visible to the naked eye, having a combined apparent visual magnitude of 8.65. The two components have a magnitude difference of 3.7, and share a common proper motion with an angular separation of , as of 2018. The distance to this system is approximately 127 light-years based on parallax. It is drifting further away from the Sun with a radial velocity of +42.5 km/s, having come to within some 831,000 years ago. The primary, component A, is a K-type main-sequence star with a stellar classification of K1V and a visual magnitude of 8.82. Torres et al. (2006) classed it as a K1IV star, suggesting it is a subgiant star that is evolving off the main sequence. It has 89% of the mass of the Sun and 104% of the Sun's radius. The star is radiating 63% of the Sun's luminosity from its photosphere at an effective temperature of 5,255 K. Its smaller companion, designated component B, is a red dwarf with spectral type M2V and apparent magnitude 12.33. It has a projected separation of from the primary. Companions A planet, HD 41004 Ab, was discovered by Zucker et al. and published in 2004. It has a minimum mass 2.56 times that of Jupiter. It orbits the primary star at a separation of 1.70 astronomical units with a high eccentricity of 0.74. HD 41004 Bb is a brown dwarf that, at the time of its discovery, orbited closer to the secondary star than any known extrasolar planet or brown dwarf (a = 0.0177 AU). Because its parent star has a low mass, its orbital velocity is only about 145 km/s, and it completes an orbit in just 1.3 days. Its orbit is circular despite the gravitational effect of HD 41004 A because of the tidal effect of its host star HD 41004 B. A search for cyclotron radiation from the magnetosphere of Bb in 2020 did not find any, indicating the planet is either weakly magnetized, or the emission cone did not point to Earth at the time of observation. References External links K-type main-sequence stars M-type main-sequence stars Brown dwarfs Planetary systems with one confirmed planet Binary stars Pictor CD-48 02083 041004 028393
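The roughly 145 km/s figure for HD 41004 Bb can be checked from the quoted semi-major axis of 0.0177 AU and the 1.3-day period using the circular-orbit approximation v = 2πa/P, as in this minimal Python sketch.

```python
import math

AU = 1.496e11          # metres
a = 0.0177 * AU        # semi-major axis of HD 41004 Bb, from the text above
P = 1.3 * 86400        # orbital period in seconds

v = 2 * math.pi * a / P            # circular-orbit approximation
print(f"orbital speed ~ {v/1e3:.0f} km/s")   # ~148 km/s, consistent with the quoted ~145 km/s
```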
HD 41004
[ "Astronomy" ]
512
[ "Pictor", "Constellations" ]
14,366,304
https://en.wikipedia.org/wiki/HD%2072659
HD 72659 is a star in the equatorial constellation of Hydra. With an apparent visual magnitude of 7.46, this yellow-hued star is too faint to be viewed with the naked eye. Parallax measurements provide a distance estimate of 169.4 light years from the Sun, and it has an absolute magnitude of 3.98. The star is drifting closer with a radial velocity of −18.3 km/s. This is a Sun-like main sequence star with a stellar classification of G2V, indicating that it is generating energy through core hydrogen fusion. It is older than the Sun with an age of about seven billion years, and is spinning with a projected rotational velocity of 5.1 km/s. The star has 7% greater mass than the Sun and a 38% larger radius. It is radiating more than double the Sun's luminosity from its photosphere at an effective temperature of 5,956 K. The metallicity of the stellar atmosphere is similar to that of the Sun. Planetary system An extrasolar planet was discovered orbiting this star in 2003 via the Doppler method. This is a superjovian planet on an eccentric orbit around its host star. In 2022, the inclination and true mass of HD 72659 b were measured via astrometry, along with the detection of a second substellar companion, likely a brown dwarf. See also HD 73256 List of extrasolar planets References External links G-type main-sequence stars Planetary systems with one confirmed planet Hydra (constellation) Durchmusterung objects 072659 042030 J08340320-0134056
HD 72659
[ "Astronomy" ]
342
[ "Hydra (constellation)", "Constellations" ]
14,366,377
https://en.wikipedia.org/wiki/HD%2073256
HD 73256 is a variable star in the southern constellation of Pyxis. It has the variable star designation CS Pyxidis. With a baseline apparent visual magnitude of 8.08, it requires binoculars or a small telescope to view. The star is located at a distance of 120 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +30 km/s. The stellar classification of this star is G8IV-VFe+0.5, which suggests a slightly evolved G-type main-sequence star with a mild overabundance of iron in the spectrum. It is a BY Draconis variable with a period of 13.97 days, showing a variation of 0.03 in magnitude due to chromospheric activity. The star appears overluminous for its class, which may be the result of a high metallicity. The star has roughly the same mass as the Sun and a slightly smaller radius, but radiates only 74% of the Sun's luminosity. It is around 2–3 billion years old and is spinning with a projected rotational velocity of 3.2 km/s. Planetary system In 2003, S. Udry and colleagues reported the discovery of a planet in orbit around HD 73256 using data from the CORALIE spectrograph. This object is a hot Jupiter with at least 1.87 times the mass of Jupiter in an orbit with a period of 2.55 days. Assuming the planet is perfectly grey with no greenhouse or tidal effects, and a Bond albedo of 0.1, the temperature would be about 1300 K. This is close to the value for 51 Pegasi b, and lies between the temperatures predicted for HD 189733 b and HD 209458 b (1180-1392 K) before they were measured. It is a candidate for "near-infrared characterisation with the VLTI Spectro-Imager". In 2018, K. Ment and colleagues reported an attempt to confirm the existence of this planet using Keck/HIRES data, but were unable to do so despite a likelihood of success. Thus the existence of this object is disputed. In 2023, a different substellar companion on a wide orbit, likely a brown dwarf, was discovered using both radial velocity and astrometry. This study also detected HD 73256 b, but did not update its parameters or address the dispute. See also HD 72659 List of extrasolar planets References G-type main-sequence stars G-type subgiants Hypothetical planetary systems BY Draconis variables Pyxis CD-29 06456 073256 042214 Pyxidis, CS
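The ~1300 K equilibrium temperature quoted for HD 73256 b can be reproduced from the 74% solar luminosity, the 2.55-day period, and the assumed Bond albedo of 0.1. The Python sketch below additionally assumes a host mass of about one solar mass (consistent with the description above, but not a stated figure) to get the orbital distance from Kepler's third law; it is an illustration, not the published calculation.

```python
import math

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26       # solar luminosity, W
AU = 1.496e11          # metres

# Semi-major axis from Kepler's third law, assuming a ~1 M_sun host
# (the text gives the 2.55-day period but not the separation).
P_years = 2.55 / 365.25
a = (P_years ** 2) ** (1.0 / 3.0) * AU          # ~0.037 AU

L = 0.74 * L_SUN        # stellar luminosity from the text above
albedo = 0.1            # Bond albedo assumed in the text

# Equilibrium temperature for a grey body re-radiating over its whole surface.
T_eq = (L * (1 - albedo) / (16 * math.pi * SIGMA * a ** 2)) ** 0.25
print(f"T_eq ~ {T_eq:.0f} K")   # roughly 1300 K, matching the figure quoted above
```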
HD 73256
[ "Astronomy" ]
550
[ "Pyxis", "Constellations" ]
14,366,469
https://en.wikipedia.org/wiki/HD%2086081
HD 86081 is a yellow-hued star in the equatorial constellation of Sextans. It has the proper name Bibhā, the Bengali form of a Sanskrit word meaning a bright beam of light. The star is named after the physicist Bibha Chowdhuri (1913-1991), who studied cosmic rays. This name was suggested in the 2019 NameExoWorlds campaign. With an apparent visual magnitude of 8.73, this star is too dim to be viewed with the naked eye but can be seen with a small telescope. It is located at a distance of approximately 340 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +31 km/s. Characteristics The stellar classification of this star is G1V, which indicates this is a G-type main-sequence star that, like the Sun, is generating energy through hydrogen fusion at its core. It is bigger and more massive than the Sun, at 1.46 and 1.21 solar units respectively. The star is an estimated 3.6 billion years old and is spinning with a projected rotational velocity of 5 km/s. It is chromospherically inactive, with no emission seen in the core of the Ca II H and K lines. HD 86081 is radiating 2.9 times the luminosity of the Sun from its photosphere at an effective temperature of 5,973 K. Planetary system Monitoring of this star for radial velocity variations began in November 2005 and the first companion was discovered on April 17, 2006. This hot Jupiter orbits very close to its host star and has an orbital period of 2.1 days, one of the shortest periods ever discovered by this technique. The separation of this exoplanet is sufficiently low that it may have sped up the star's rotation through tidal interaction. HD 86081 shows no evidence of planetary transits in spite of a 17.6% transit probability. There is a linear trend in the star's radial velocity measurements that may be an indicator of additional unseen companions. See also HD 33283 HD 224693 List of extrasolar planets References G-type main-sequence stars Planetary systems with one confirmed planet Sextans BD-03 2815 086081 048711 Bibhā
HD 86081
[ "Astronomy" ]
474
[ "Sextans", "Constellations" ]
14,367,590
https://en.wikipedia.org/wiki/P1-derived%20artificial%20chromosome
A P1-derived artificial chromosome, or PAC, is a DNA construct derived from the DNA of the P1 bacteriophage and from bacterial artificial chromosomes. It can carry large amounts (about 100–300 kilobases) of other sequences for a variety of bioengineering purposes in bacteria. It is an efficient type of cloning vector used to clone DNA fragments (100- to 300-kb insert size; average, 150 kb) in Escherichia coli cells. History of PAC The bacteriophage P1 was first isolated by Dr. Giuseppe Bertani. In his study, he noticed that the lysogen produced abnormal non-continuous phages, and later found that phage P1 was produced from the Lisbonne lysogen strain, in addition to bacteriophages P2 and P3. P1 can package portions of its bacterial host's genome and transfer that DNA to other bacterial hosts, a process known as generalized transduction. Later on, P1 was developed as a cloning vector by Nat Sternberg and colleagues in the 1990s. It is capable of Cre-Lox recombination. The P1 vector system was first developed to carry relatively large DNA fragments in plasmids (95–100 kb). Construction A PAC has two loxP sites, which are recognized by the Cre recombinase encoded by the phage cre gene during Cre-Lox recombination. This process circularizes the DNA strand, forming a plasmid, which can then be inserted into bacteria such as Escherichia coli. The transformation is usually done by electroporation, which uses electricity to allow the plasmids to permeate into the cells. If high expression levels are desired, the P1 lytic replicon can be used in constructs. Electroporation allows for lysogeny of PACs so that they can replicate within cells without disturbing other chromosomes. Comparison with other artificial chromosomes PAC is one of the artificial chromosome vectors. Other artificial chromosomes include the bacterial artificial chromosome, the yeast artificial chromosome and the human artificial chromosome. Compared to other artificial chromosomes, it can carry relatively large DNA fragments, although smaller than those carried by the yeast artificial chromosome (YAC). Advantages of PACs compared to YACs include easier manipulation of the bacterial system, easier separation from host DNA, a higher transformation rate, more stable inserts, and the fact that they are non-chimeric, meaning they do not rearrange and ligate to form new DNA strands; together these make PACs a user-friendly vector choice. Applications PAC is commonly used as a large-capacity vector that allows propagation of large DNA inserts in Escherichia coli. This feature has been used for building genome libraries for humans, mice and other organisms, which supported projects such as the Human Genome Project; such libraries have served as templates for gene sequencing (for example, as gene templates in mouse gene-function analysis); for genome analysis of the specific functions of different genes in more complex organisms (plants, animals, etc.); and to facilitate gene expression. Since PACs are derived from phages, PACs and their variants are also useful in PAC-based phage therapy and antibiotic studies. See also Bacterial artificial chromosome Human artificial chromosome Yeast artificial chromosome References External links Online Medical Dictionary P1-derived artificial chromosome P1-derived artificial chromosome (PAC) definition DNA Bacteriophages Molecular biology
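As a rough illustration of the Cre-Lox step described above, the toy Python sketch below models recombination between two directly repeated loxP sites on a linear molecule, which excises and circularizes the intervening segment while leaving one loxP site behind. It is a simplified model, not a real cloning or bioinformatics tool; the function and test sequence are invented for the example.

```python
# Toy illustration: Cre recombinase acting on two directly repeated loxP sites
# excises the DNA between them as a circle, leaving a single loxP site behind --
# the kind of event that circularizes a P1/PAC construct before electroporation.
LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"  # canonical 34-bp loxP site

def cre_excise(linear_dna: str):
    """Return (remaining_linear, excised_circle) for two direct-repeat loxP sites."""
    first = linear_dna.find(LOXP)
    second = linear_dna.find(LOXP, first + len(LOXP))
    if first == -1 or second == -1:
        raise ValueError("need two loxP sites in direct orientation")
    circle = LOXP + linear_dna[first + len(LOXP):second]   # excised circular product
    remaining = linear_dna[:first] + LOXP + linear_dna[second + len(LOXP):]
    return remaining, circle

remaining, circle = cre_excise("AAAA" + LOXP + "GGGGCCCCTTTT" + LOXP + "AAAA")
print(len(remaining), len(circle))   # 42 and 46 bases in this toy example
```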
P1-derived artificial chromosome
[ "Chemistry", "Biology" ]
699
[ "Biochemistry", "Molecular biology" ]
14,367,845
https://en.wikipedia.org/wiki/Xenobiotic%20metabolism
Xenobiotic metabolism (from the Greek xenos "stranger" and biotic "related to living beings") is the set of metabolic pathways that modify the chemical structure of xenobiotics, which are compounds foreign to an organism's normal biochemistry, such as drugs and poisons. These pathways are a form of biotransformation present in all major groups of organisms, and are considered to be of ancient origin. These reactions often act to detoxify poisonous compounds; however, in cases such as in the metabolism of alcohol, the intermediates in xenobiotic metabolism can themselves be the cause of toxic effects. Xenobiotic metabolism is divided into three phases. In phase I, enzymes such as cytochrome P450 oxidases introduce reactive or polar groups into xenobiotics. These modified compounds are then conjugated to polar compounds in phase II reactions. These reactions are catalysed by transferase enzymes such as glutathione S-transferases. Finally, in phase III, the conjugated xenobiotics may be further processed, before being recognised by efflux transporters and pumped out of cells. The reactions in these pathways are of particular interest in medicine as part of drug metabolism and as a factor contributing to multidrug resistance in infectious diseases and cancer chemotherapy. The actions of some drugs as substrates or inhibitors of enzymes involved in xenobiotic metabolism are a common reason for hazardous drug interactions. These pathways are also important in environmental science, with the xenobiotic metabolism of microorganisms determining whether a pollutant will be broken down during bioremediation, or persist in the environment. The enzymes of xenobiotic metabolism, particularly the glutathione S-transferases are also important in agriculture, since they may produce resistance to pesticides and herbicides. Permeability barriers and detoxification That the exact compounds an organism is exposed to will be largely unpredictable, and may differ widely over time, is a major characteristic of xenobiotic toxic stress. The major challenge faced by xenobiotic detoxification systems is that they must be able to remove the almost-limitless number of xenobiotic compounds from the complex mixture of chemicals involved in normal metabolism. The solution that has evolved to address this problem is an elegant combination of physical barriers and low-specificity enzymatic systems. All organisms use cell membranes as hydrophobic permeability barriers to control access to their internal environment. Polar compounds cannot diffuse across these cell membranes, and the uptake of useful molecules is mediated through transport proteins that specifically select substrates from the extracellular mixture. This selective uptake means that most hydrophilic molecules cannot enter cells, since they are not recognised by any specific transporters. In contrast, the diffusion of hydrophobic compounds across these barriers cannot be controlled, and organisms, therefore, cannot exclude lipid-soluble xenobiotics using membrane barriers. However, the existence of a permeability barrier means that organisms were able to evolve detoxification systems that exploit the hydrophobicity common to membrane-permeable xenobiotics. These systems therefore solve the specificity problem by possessing such broad substrate specificities that they metabolise almost any non-polar compound. Useful metabolites are excluded since they are polar, and in general contain one or more charged groups. 
The detoxification of the reactive by-products of normal metabolism cannot be achieved by the systems outlined above, because these species are derived from normal cellular constituents and usually share their polar characteristics. However, since these compounds are few in number, specific enzymes can recognize and remove them. Examples of these specific detoxification systems are the glyoxalase system, which removes the reactive aldehyde methylglyoxal, and the various antioxidant systems that eliminate reactive oxygen species. Phases of detoxification The metabolism of xenobiotics is often divided into three phases: modification, conjugation, and excretion. These reactions act in concert to detoxify xenobiotics and remove them from cells. Phase I - modification In phase I, a variety of enzymes acts to introduce reactive and polar groups into their substrates. One of the most common modifications is hydroxylation catalysed by the cytochrome P-450-dependent mixed-function oxidase system. These enzyme complexes act to incorporate an atom of oxygen into nonactivated hydrocarbons, which can result in either the introduction of hydroxyl groups or N-, O- and S-dealkylation of substrates. The reaction mechanism of the P-450 oxidases proceeds through the reduction of cytochrome-bound oxygen and the generation of a highly-reactive oxyferryl species; the net reaction is: O2 + NADPH + H+ + RH → NADP+ + H2O + ROH. Phase II - conjugation In subsequent phase II reactions, these activated xenobiotic metabolites are conjugated with charged species such as glutathione (GSH), sulfate, glycine, or glucuronic acid. These reactions are catalysed by a large group of broad-specificity transferases, which in combination can metabolise almost any hydrophobic compound that contains nucleophilic or electrophilic groups. One of the most important of these groups is the glutathione S-transferases (GSTs). The addition of large anionic groups (such as GSH) detoxifies reactive electrophiles and produces more polar metabolites that cannot diffuse across membranes, and may, therefore, be actively transported. Phase III - further modification and excretion After phase II reactions, the xenobiotic conjugates may be further metabolised. A common example is the processing of glutathione conjugates to acetylcysteine (mercapturic acid) conjugates. Here, the γ-glutamate and glycine residues in the glutathione molecule are removed by gamma-glutamyl transpeptidase and dipeptidases. In the final step, the cysteine residue in the conjugate is acetylated. Conjugates and their metabolites can be excreted from cells in phase III of their metabolism, with the anionic groups acting as affinity tags for a variety of membrane transporters of the multidrug resistance protein (MRP) family. These proteins are members of the family of ATP-binding cassette transporters and can catalyse the ATP-dependent transport of a huge variety of hydrophobic anions, and thus act to remove phase II products to the extracellular medium, where they may be further metabolised or excreted. Endogenous toxins The detoxification of endogenous reactive metabolites such as peroxides and reactive aldehydes often cannot be achieved by the system described above. This is the result of these species' being derived from normal cellular constituents and usually sharing their polar characteristics. However, since these compounds are few in number, it is possible for enzymatic systems to utilize specific molecular recognition to recognize and remove them. 
The similarity of these molecules to useful metabolites therefore means that different detoxification enzymes are usually required for the metabolism of each group of endogenous toxins. Examples of these specific detoxification systems are the glyoxalase system, which acts to dispose of the reactive aldehyde methylglyoxal, and the various antioxidant systems that remove reactive oxygen species. History Studies on how people transform the substances that they ingest began in the mid-nineteenth century, with chemists discovering that organic chemicals such as benzaldehyde could be oxidized and conjugated to amino acids in the human body. During the remainder of the nineteenth century, several other basic detoxification reactions were discovered, such as methylation, acetylation, and sulfonation. In the early twentieth century, work moved on to the investigation of the enzymes and pathways that were responsible for the production of these metabolites. This field became defined as a separate area of study with the publication by Richard Williams of the book Detoxication mechanisms in 1947. This modern biochemical research resulted in the identification of glutathione S-transferases in 1961, followed by the discovery of cytochrome P450s in 1962, and the realization of their central role in xenobiotic metabolism in 1963. See also Drug design Drug metabolism Microbial biodegradation Biodegradation Bioremediation Antioxidant SPORCalc, an example process for exploring xenobiotic and drug metabolism databases References Further reading External links Databases Drug metabolism database Directory of P450-containing Systems University of Minnesota Biocatalysis/Biodegradation Database Drug metabolism Small Molecule Drug Metabolism Drug metabolism portal Microbial biodegradation Microbial Biodegradation, Bioremediation and Biotransformation History History of Xenobiotic Metabolism Metabolism
Xenobiotic metabolism
[ "Chemistry", "Biology" ]
1,848
[ "Biochemistry", "Metabolism", "Cellular processes" ]
14,368,179
https://en.wikipedia.org/wiki/Immunocontraception
Immunocontraception is the use of an animal's immune system to prevent it from fertilizing offspring. Contraceptives of this type are not currently approved for human use. Typically immunocontraception involves the administration of a vaccine that induces an adaptive immune response which causes an animal to become temporarily infertile. Contraceptive vaccines have been used in numerous settings for the control of wildlife populations. However, experts in the field believe that major innovations are required before immunocontraception can become a practical form of contraception for human beings. Thus far immunocontraception has focused on mammals exclusively. There are several targets in mammalian sexual reproduction for immune inhibition. They can be organized into three categories. Gamete production Organisms that undergo sexual reproduction must first produce gametes, cells which have half the typical number of chromosomes of the species. Often immunity that prevents gamete production also inhibits secondary sexual characteristics and so has effects similar to castration. Gamete function After gametes are produced in sexual reproduction, two gametes must combine during fertilization to form a zygote, which again has the full typical number of chromosomes of the species. Methods that target gamete function prevent this fertilization from occurring and are true contraceptives. Gamete outcome Shortly after fertilization a zygote develops into a multicellular embryo that in turn develops into a larger organism. In placental mammals this process of gestation occurs inside the reproductive system of the mother of the embryo. Immunity that targets gamete outcome induces abortion of an embryo while it is within its mother's reproductive system. Medical use Immunocontraception in not currently available but is under study. Obstacles Variability of immunogenicity In order for an immunocontraceptive to be palatable for human use, it would need to meet or exceed the efficacy rates of currently popular forms of contraception. Currently the maximum reduction of fertility due to sperm vaccines in laboratory experiments with mice is ~75%. The lack of efficacy is due to variability of immunogenicity from one animal to another. Even when exposed to the exact same vaccine, some animals will produce abundant antibody titers to the vaccine's antigen, while others produce relatively low antibody titers. In the Eppin trial that attained 100% infertility, a small sample size (only 9 monkeys) was used, and even among this small sample 2 monkeys were dropped from the study because they failed to produce sufficiently high antibody titers. This trend—high efficacy when antibody titers are above a threshold coupled with variability in how many animals reach such a threshold—is seen throughout immunocontraception and immune-based birth control research. A long-term study of PZP vaccination in deer that spanned 6 years found that infertility was directly related to antibody titers to PZP. The phase II clinical trial of hCG vaccines was quite successful among women who had antibody titers above 50 ng/mL, but quite poor among those with antibody titers below this threshold. Lack of mucosal immunity Mucosal immunity, which includes immune function in the female reproductive tract, is not as well understood as humoral immunity. This may be an issue for certain contraceptive vaccines. 
For instance, in the second LDH-C4 primate trial that had negative results, all of the immunized macaque monkeys developed high antibody titers against LDH-C4 in serum, but antibodies against LDH-C4 were not found in the monkeys' vaginal fluids. If antibodies against LDH-C4 do indeed inhibit fertilization, then this result highlights how the difference in the functioning of mucosal immunity from humoral immunity may be critical to the efficacy of contraceptive vaccines. Adverse effects Whenever an immune response is provoked, there is some risk of autoimmunity. Therefore, immunocontraception trials typically check for signs of autoimmune disease. One concern with zona pellucida vaccination, in particular, is that in certain cases it appears to be correlated with ovarian pathogenesis. However, ovarian disease has not been observed in every trial of zona pellucida vaccination, and when observed, has not always been irreversible. Gamete production Gonadotropin-releasing hormone The production of gametes is induced in both male and female mammals by the same two hormones: follicle-stimulating hormone (FSH) and luteinizing hormone (LH). The production of these in turn is induced by a single releasing hormone, gonadotropin-releasing hormone (GnRH), which has been the focus of most of the research into immunocontraception against gamete production. GnRH is secreted by the hypothalamus in pulses and travels to the anterior pituitary gland through a portal venous system. There it stimulates the production of FSH and LH. FSH and LH travel through the general circulatory system and stimulate the functioning of the gonads, including the production of gametes and the secretion of sex steroid hormones. Immunity against GnRH thus lessens FSH and LH production which in turn attenuates gamete production and secondary sexual characteristics. While GnRH immunity has been known to have contraceptive effects for some time, only in the 2000s has it been used to develop several commercial vaccines. Equity® Oestrus Control is a GnRH vaccine marketed for use in non-breeding domestic horses. Repro-Bloc is GnRH vaccine marketed for use in domestic animals in general. Improvac® is a GnRH vaccine marketed for use in pigs not as a contraceptive, but as an alternative to physical castration for the control of boar taint. Unlike the other products which are marketed for use in domestic animals, GonaCon™ is a GnRH vaccine being developed as a United States Department of Agriculture initiative for use for control of wildlife, specifically deer. GonaCon has also been used on a trial basis to control kangaroos in Australia. Gamete function The form of sexual reproduction practiced by most placental mammals is anisogamous, requiring two kinds of dissimilar gametes, and allogamous, such that each individual only produces one of the two kinds of gametes. The smaller gamete is the sperm cell and is produced by males of the species. The larger gamete is the ovum and is produced by females of the species. Under this scheme, fertilization requires two gametes, one from an individual of each sex, in order to occur. Immunocontraception targeting the female gamete has focused on the zona pellucida. Immunocontraception targeting the male gamete has involved many different antigens associated with sperm function. Zona pellucida The zona pellucida is a glycoprotein membrane surrounding the plasma membrane of an ovum. The zona pellucida's main function in reproduction is to bind sperm. 
Immunity against zonae pellucidae causes an animal to produce antibodies that themselves are bound by a zona pellucida. Thus when a sperm encounters an ovum in an animal immunized against zonae pellucidae, the sperm cannot bind to the ovum because its zona pellucida has already been occupied by antibodies. Therefore, fertilization does not occur. Early research Work begun by researchers at the University of Tennessee in the 1970s into immunity against zonae pellucidae resulted in its identification as a target antigen for immunocontraception. The zona pellucida's suitability is a result of it being necessary for fertilization and containing at least one antigen that is tissue-specific and not species-specific. The tissue-specificity implies that immunity against zonae pellucidae will not also affect other tissues in the immunized animal's body. The lack of species-specificity implies that zonae pellucidae harvested from animals of one species will induce an immune response in those of another, which makes zona pellucida antigens readily available, since zonae pellucidae can be harvested from farm animals. Zonagen In 1987, a pharmaceutical company called Zonagen (later renamed Repros Therapeutics) was started with the goal of developing zona pellucida vaccines as an alternative to the surgical sterilization of companion animals and eventually as a contraceptive for human use. The products would be based on research being done at the Baylor College of Medicine by Bonnie S. Dunbar that was funded by Zonagen. However, the relationship between Zonagen and Bonnie Dunbar ended acrimoniously in 1993. Despite claims later that year that development of a contraceptive vaccine was imminent and an agreement with Schering AG for funding for joint development of a contraceptive vaccine for human use, no vaccine was made commercially available and the agreement with Schering was terminated after primate studies were disappointing. The company would go on to pursue other projects and be renamed. Application to wildlife population control Also in the late 1980s, research began into the use of vaccines based around zonae pellucidae harvested from pigs for the purpose of wildlife control. Such porcine zona pellucida (PZP) vaccines were tested in captive and domestic horses in 1986 with encouraging results. This led to the first successful field trial of contraceptive vaccines with free-ranging wildlife, which examined PZP vaccines used upon wild horses of Assateague Island National Seashore in 1988. The successful results of the field trial were maintained by annual booster inoculations. Following the success of trials with horses, initial trials using captive animals showed promise for the use of PZP vaccines with white-tailed deer and with African elephants. This led to successful field trials of PZP vaccines in white-tailed deer at the Smithsonian Conservation Biology Institute in Front Royal, VA from September 1992 to September 1994 and in African elephants of Kruger National Park in South Africa in 1996. As a result of these successes, PZP vaccination has become the most popular form of immunocontraception for wildlife. As of 2011, thousands of animals are treated with PZP vaccination every year, including 6 different species of free-ranging wildlife in 52 different locations and 76 captive exotic species in 67 different zoological gardens. 
Bio Farma In 2012, researchers from Brawijaya University in conjunction with pharmaceutical company Bio Farma received a grant from the Indonesian government to develop a zona pellucida contraceptive vaccine for human use. Instead of pigs, the zonae pellucidae for the program are harvested from cows. The program hopes to mass-produce a contraceptive vaccine in Indonesia in 2013 at the earliest. Viral and microbial vectors While contraceptive vaccines can be delivered remotely, they still require administration to each individual animal that is to be made infertile. Thus contraceptive vaccines have been used to control only relatively small populations of wildlife. Australia and New Zealand have large populations of European invasive species for which such approach will not scale. Research in these countries has therefore focused on genetically modifying viruses or microorganisms that infect the unwanted invasive species to contain immunocontraceptive antigens. Such research has included targeting the European rabbit (Oryctolagus cuniculus) in Australia by engineering rabbit zona pellucida glycoproteins into a recombinant myxoma virus. This approach has induced marginal reduction of fertility in laboratory rabbits with some of the glycoproteins. Further improvement of efficacy is necessary before such an approach is ready for field trials. Research has also targeted the house mouse (Mus domesticus) in Australia by engineering murine zona pellucida antigens into a recombinant ectromelia virus and a recombinant cytomegalovirus. The latter approach has induced permanent infertility when injected into laboratory mice. However, there is some attenuation of efficacy when it is actually transmitted virally. In addition to rabbits and mice, this approach has been explored for other animals. Researchers have attempted to replicate similar results when targeting the red fox (Vulpes vulpes) in Australia using such vectors as Salmonella typhimurium, vaccinia, and canine herpesvirus, but no reduction in fertility has been achieved thus far for a variety of reasons. Initial exploration into the control of the common brushtail possum (Trichosurus vulpecula) in New Zealand using the nematode Parastrongyloides trichosuri has identified it as a possible immunocontraceptive vector. Sperm In placental mammals, fertilization typically occurs inside the female in the oviducts. The oviducts are positioned near the ovaries where ova are produced. An ovum therefore needs only to travel a short distance to the oviducts for fertilization. In contrast sperm cells must be highly motile, since they are deposited into the female reproductive tract during copulation and must travel through the cervix (in some species) as well as the uterus and the oviduct (in all species) to reach an ovum. Sperm cells that are motile are spermatozoa. Spermatozoa are protected from the male's immune system by the blood-testis barrier. However, spermatozoa are deposited into the female in semen, which is mostly the secretions of the seminal vesicles, prostate gland, and bulbourethral glands. In this way antibodies generated by the male are deposited into the female along with spermatozoa. Because of this and the extensive travel in the female reproductive tract, spermatozoa are susceptible to anti-sperm antibodies generated by the male in addition to waiting anti-sperm antibodies generated by the female. 
Early research In 1899, the discovery of the existence of antibodies against sperm was made independently both by Serge Metchnikoff of the Pasteur Institute and by Nobel prize laureate Karl Landsteiner. In 1929, the first recorded attempt at immunocontraception was made by Morris Baskin, clinical director of the Denver Maternal Hygiene Committee. In this trial 20 women who were known to have at least 1 prior pregnancy were injected with their husband's semen, and no conception was recorded in 1 year of observation of these couples. A United States patent (number 2103240) was issued in 1937 for this approach as a contraceptive, but no product for widespread consumption ever came from this approach. Renewed interest Throughout the 1990s, there was a resurgence of research in immunocontraception targeting sperm with the hope of developing a contraceptive vaccine for human use. Unlike earlier research which explored the contraceptive effect of immune responses to whole sperm cells, contemporary research has focused on searching for specific molecular antigens that are involved with sperm function. Antigens that have been identified as potential targets for immunocontraception include the sperm-specific peptides or proteins ADAM, LDH-C4, sp10, sp56, P10G, fertilization antigen 1 (FA-1), sp17, SOB2, A9D, CD52, YLP12, Eppin, CatSper, Izumo, sperm associated antigen 9 (SPAG9), 80 kilodalton human sperm antigen (80 kDa HSA), and nuclear autoantigenic sperm protein (tNASP). Early primate trials had mixed results. One study examined the sperm-specific isozyme of human lactate dehydrogenase (LDH-C4) combined with a T-cell epitope to create a synthetic peptide that acted as a more potent chimeric antigen. Vaccination of female baboons with this synthetic peptide resulted in a reduced fertility in the trial. However, a second study that examined vaccination of female macaque monkeys with the same synthetic peptide did not find reduced fertility. Since then, a study examining vaccination based on an epididymal protease inhibitor (Eppin) in male macaque monkeys demonstrated that vaccination against sperm antigens could be an effective, reversible contraceptive in male primates. While 4 of 6 control monkeys impregnated females during the trial, none of the 7 monkeys included in the trial that were vaccinated against Eppin impregnated females, and 4 of these 7 vaccinated monkeys recovered their fertility within a year and a half of observation after the trial. This illustrated that not only could sperm immunocontraception be effective, but it could have several advantages over zona pellucida vaccines. For instance, sperm vaccines could be used by males, in addition to females. Additionally, while there are relatively few glycoproteins in the zona pellucida and thus relatively few target antigens for zona pellucida vaccines, more than a dozen prospective target antigens for the inhibition of sperm function have been identified. This relative abundance of prospective target antigens makes the prospects of a multivalent vaccine better for sperm vaccines. A study which examined the use of one such multivalent vaccine in female macaque monkeys found that the monkeys produced antibodies against all antigens included in the vaccine, suggesting the efficacy of the multivalent approach. 
Finally, while there has been autoimmune ovarian pathogenesis found in some trials using zona pellucida vaccines, anti-sperm antibodies are not likely to have adverse health effects, since anti-sperm antibodies are produced by up to 70% of men who have had vasectomies, and there has been much investigation into possible adverse health side-effects of the vasectomy procedure. Passive immunity A vaccine induces active immunity when antigens are injected into an animal that cause it to produce desired antibodies itself. In passive immunity the desired antibody titers are achieved by injecting antibodies directly into an animal. The efficacy of such an approach for immunocontraception was demonstrated as early as the 1970s with antibodies against zonae pellucidae in mice during the investigation of the mechanism by which such antibodies inhibited fertility. Because the variability of individual immune response is an obstacle to bringing contraceptive vaccines to market, there has been research into the approach of contraception through passive immunization as an alternative that would be of less duration, but be closer to market. Research done using phage display technology on lymphocytes from immunoinfertile men led to the isolation, characterization, and synthesis of specific antibodies that inhibit fertility by acting against several of the known sperm antigens. This detailed molecular knowledge of antisperm antibodies may be of use in the development of a passive immunocontraceptive product. Gamete outcome Human chorionic gonadotropin Most of the research into immunity that inhibits gamete outcome has focused on human chorionic gonadotropin (hCG). hCG is not necessary for fertilization, but is secreted by embryos shortly thereafter. Therefore, immunity against hCG does not prevent fertilization. However, it was found that anti-hCG antibodies prevent marmoset embryos from implanting in the endometrium of their mother's uterus. The main function of hCG is to sustain the ovarian corpus luteum during pregnancy past the time it would normally decay as part of the regular menstrual cycle. For the first 7–9 weeks in humans, the corpus luteum secretes the progesterone necessary to maintain the viability of the endometrium for pregnancy. Therefore, immunity against hCG during this time frame would function as an abortifacient, as confirmed by experiments in baboons. In the scientific literature the more inclusive term "birth control vaccine" rather than "contraceptive vaccine" is used to refer to hCG vaccines. Clinical trials Research begun in the 1970s led to clinical trials in humans of a hCG birth control vaccine. A phase I (safety) clinical trial examined 15 women from clinics in Helsinki, Finland, Uppsala, Sweden, Bahia, Brazil, and Santiago, Chile with a vaccine formed by conjugating the beta subunit of hCG with a tetanus toxoid. The women had previously had tubal ligations. In the trial the immune response was reversible and no significant health issues were found. This was followed by another phase I trial in 1977-1978 examining previously sterilized women at 5 institutions in India with a more potent vaccine that combined the beta subunit of hCG with the alpha subunit of ovine luteinizing hormone to form a heterospecies dimer conjugated with both tetanus toxoid and diphtheria toxoid. The multiple carriers were used because it was found that a small percentage of women acquired carrier-specific immunosuppression due to repeated injection of conjugates with the same carrier. 
This more potent version of the vaccine was used in a phase II (efficacy) trial during 1991-1993 conducted at 3 locations: the All India Institute of Medical Sciences, Safdarjung Hospital in New Delhi, and the Post Graduate Institute of Medical Education and Research in Chandigarh. Primary immunization consisted of 3 injections at 6-week intervals, and 148 women known to be previously fertile completed primary immunization. All women generated antibodies against hCG, but only 119 (80%) generated antibody titers clearly above 50 ng/mL, which was the estimated level for efficacy. Blood samples were taken twice a month, and booster injections were given when antibody titers declined below 50 ng/mL in women who wished to continue using the vaccine. At the completion of the study, after 1224 observed menstrual cycles, only 1 pregnancy had occurred in a woman with an antibody titer above 50 ng/mL, while 26 pregnancies occurred among women whose titers were below 50 ng/mL. Application to cancer therapy Following these clinical trials of hCG vaccination as a birth control method, hCG was discovered to be expressed in certain kinds of malignant neoplasms, including breast cancer, adenocarcinoma of the prostate, progressive vulvar carcinoma, carcinoma of the bladder, pancreatic adenocarcinoma, cervical carcinoma, gastric carcinoma, squamous-cell carcinoma of the oral cavity and oropharynx, lung carcinoma, and colorectal cancer. Therefore, immunity against hCG has applications such as imaging of cancer cells, selective delivery of cytotoxic compounds to tumor cells, and, in at least one case, a direct therapeutic effect by preventing the establishment, inhibiting the growth, and causing the necrosis of tumors. This has led to interest in developing hCG vaccines specifically for cancer treatment. Ongoing research The vaccine tested in the phase II clinical trial in India did not proceed further because it produced antibody titers of 50 ng/mL or more for at least 3 months in only 60% of women in the trial. Ongoing research in hCG birth control vaccines has focused on improving immunogenicity. A vaccine in which the beta subunit of hCG is fused to the B subunit of Escherichia coli heat-labile enterotoxin has been effective in laboratory mice. It has been approved by the Indian National Review Committee on Genetic Manipulation and is being produced for pre-clinical toxicology testing. If it is determined to be safe, it is planned for clinical trials. Wildlife control Immunocontraception is one of the few alternatives to lethal methods for the direct control of wildlife populations. Although research into the use of hormonal contraception for wildlife control as early as the 1950s produced pharmacologically effective products, all of them proved impractical for wildlife control for a variety of reasons. Field trials of immunocontraception in wildlife, on the other hand, established that contraceptive vaccines could be delivered remotely by capture gun, were safe to use in pregnant animals, were reversible, and induced long-lasting infertility, overcoming these practical limitations. One concern about the use of hormonal contraceptives in general, but especially in wildlife, is that the sex steroid hormones that are used are easily passed, often via the food chain, from animal to animal. This can lead to unintended ecological consequences. 
For instance, fish exposed to treated human sewage effluents were found to have concentrations of the synthetic hormone levonorgestrel in blood plasma higher than those found in humans taking hormonal contraceptives. Because the antigens used in contraceptive vaccines are proteins, not steroids, they are not easily passed from animal to animal without loss of function. References Experimental methods of birth control Immune system Theriogenology
Immunocontraception
[ "Biology" ]
5,177
[ "Immune system", "Organ systems" ]
14,368,327
https://en.wikipedia.org/wiki/Roridomyces%20roridus
Roridomyces roridus, commonly known as the dripping bonnet or the slippery mycena, is a species of agaric fungus in the family Mycenaceae. It is whitish or dirty yellow in color, with a broad convex cap in diameter. The stipe is covered with a thick, slippery slime layer. This species can be bioluminescent, and is one of the several causative species of foxfire. See also List of bioluminescent fungi References Bioluminescent fungi Mycenaceae Fungi described in 1815 Fungi of Europe Fungi of North America Taxa named by Elias Magnus Fries Fungus species
Roridomyces roridus
[ "Biology" ]
129
[ "Fungi", "Fungus species" ]
14,368,398
https://en.wikipedia.org/wiki/Capacity%20of%20a%20set
In mathematics, the capacity of a set in Euclidean space is a measure of the "size" of that set. Unlike, say, Lebesgue measure, which measures a set's volume or physical extent, capacity is a mathematical analogue of a set's ability to hold electrical charge. More precisely, it is the capacitance of the set: the total charge a set can hold while maintaining a given potential energy. The potential energy is computed with respect to an idealized ground at infinity for the harmonic or Newtonian capacity, and with respect to a surface for the condenser capacity. Historical note The notion of capacity of a set and of "capacitable" set was introduced by Gustave Choquet in 1950: for a detailed account, see reference . Definitions Condenser capacity Let Σ be a closed, smooth, (n − 1)-dimensional hypersurface in n-dimensional Euclidean space ℝ^n, n ≥ 3; K will denote the n-dimensional compact (i.e., closed and bounded) set of which Σ is the boundary. Let S be another (n − 1)-dimensional hypersurface that encloses Σ: in reference to its origins in electromagnetism, the pair (Σ, S) is known as a condenser. The condenser capacity of Σ relative to S, denoted C(Σ, S) or cap(Σ, S), is given by the surface integral C(Σ, S) = (1/((n − 2)σn)) ∫S′ ∂u/∂ν dσ′, where: u is the unique harmonic function defined on the region D between Σ and S with the boundary conditions u(x) = 1 on Σ and u(x) = 0 on S; S′ is any intermediate surface between Σ and S; ν is the outward unit normal field to S′ and ∂u/∂ν is the normal derivative of u across S′; and σn = 2π^(n/2) ⁄ Γ(n ⁄ 2) is the surface area of the unit sphere in ℝ^n. C(Σ, S) can be equivalently defined by the volume integral C(Σ, S) = (1/((n − 2)σn)) ∫D |∇u|^2 dV. The condenser capacity also has a variational characterization: C(Σ, S) is the infimum of the Dirichlet energy functional I[v] = (1/((n − 2)σn)) ∫D |∇v|^2 dV over all continuously differentiable functions v on D with v(x) = 1 on Σ and v(x) = 0 on S. Harmonic capacity Heuristically, the harmonic capacity of K, the region bounded by Σ, can be found by taking the condenser capacity of Σ with respect to infinity. More precisely, let u be the harmonic function in the complement of K satisfying u = 1 on Σ and u(x) → 0 as x → ∞. Thus u is the Newtonian potential of the simple layer Σ. Then the harmonic capacity or Newtonian capacity of K, denoted C(K) or cap(K), is defined by C(K) = (1/((n − 2)σn)) ∫(ℝ^n ∖ K) |∇u|^2 dV. If S is a rectifiable hypersurface completely enclosing K, then the harmonic capacity can be equivalently rewritten as the integral over S of the outward normal derivative of u: C(K) = (1/((n − 2)σn)) ∫S ∂u/∂ν dσ. The harmonic capacity can also be understood as a limit of the condenser capacity. To wit, let Sr denote the sphere of radius r about the origin in ℝ^n. Since K is bounded, for sufficiently large r, Sr will enclose K and (Σ, Sr) will form a condenser pair. The harmonic capacity is then the limit as r tends to infinity: C(K) = lim(r → ∞) C(Σ, Sr). The harmonic capacity is a mathematically abstract version of the electrostatic capacity of the conductor K and is always non-negative and finite: 0 ≤ C(K) < +∞. A closely related quantity is the Wiener capacity or Robin constant W(K) of K. Logarithmic capacity In two dimensions, the capacity is defined as above, but dropping the factor of (n − 2) in the definition. This is often called the logarithmic capacity; the term logarithmic arises because the potential function goes from being an inverse power to a logarithm in the limit. This is articulated below. It may also be called the conformal capacity, in reference to its relation to the conformal radius.
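As a worked illustration of the harmonic capacity defined above (not part of the original article's text), consider a closed ball; with the 1/((n − 2)σn) normalization used here, its Newtonian capacity reduces to a power of its radius.

```latex
% Capacity potential of the closed ball B_R = { |x| <= R } in R^n, n >= 3:
% u is harmonic outside B_R, equals 1 on the boundary sphere, and vanishes at infinity.
u(x) = \left(\frac{R}{|x|}\right)^{\!n-2}, \qquad |x| \ge R .

% Substituting into the volume-integral definition of the harmonic capacity gives
C(B_R) = \frac{1}{(n-2)\,\sigma_n}\int_{\mathbb{R}^n \setminus B_R} |\nabla u|^{2}\,\mathrm{d}V
       = R^{\,n-2},

% so in three dimensions the Newtonian capacity of a ball is simply its radius R.
```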
Properties The harmonic function u is called the capacity potential, the Newtonian potential when n ≥ 3 and the logarithmic potential when n = 2. It can be obtained via a Green's function as u(x) = ∫ G(x − y) dμ(y), with x a point exterior to S, where G(y) = |y|^(2 − n) when n ≥ 3 and G(y) = log(1 ⁄ |y|) for n = 2. The measure μ is called the capacitary measure or equilibrium measure. It is generally taken to be a Borel measure. It is related to the capacity as μ(K) = C(K). The variational definition of capacity over the Dirichlet energy can be re-expressed as C(K) = (inf I[μ])⁻¹, with the infimum taken over all positive Borel measures μ concentrated on K, normalized so that μ(K) = 1, and with I[μ] the energy integral I[μ] = ∬ G(x − y) dμ(x) dμ(y). Generalizations The characterization of the capacity of a set as the minimum of an energy functional achieving particular boundary values, given above, can be extended to other energy functionals in the calculus of variations. Divergence form elliptic operators Solutions to a uniformly elliptic partial differential equation with divergence form are minimizers of the associated energy functional subject to appropriate boundary conditions. The capacity of a set E with respect to a domain D containing E is defined as the infimum of the energy over all continuously differentiable functions v on D with v(x) = 1 on E and v(x) = 0 on the boundary of D. The minimum energy is achieved by a function known as the capacitary potential of E with respect to D, and it solves the obstacle problem on D with the obstacle function provided by the indicator function of E. The capacitary potential is alternately characterized as the unique solution of the equation with the appropriate boundary conditions. See also References The second edition of these lecture notes, revised and enlarged with the help of S. Ramaswamy, re-typeset, proof read once and freely available for download. Available from Gallica. A historical account of the development of capacity theory by its founder and one of the main contributors; an English translation of the title reads: "The birth of capacity theory: reflections on a personal experience". Available at NUMDAM. Potential theory
Capacity of a set
[ "Mathematics" ]
1,235
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Potential theory" ]
14,368,827
https://en.wikipedia.org/wiki/Ideomotor%20apraxia
Ideomotor Apraxia, often IMA, is a neurological disorder characterized by the inability to correctly imitate hand gestures and voluntarily mime tool use, e.g. pretend to brush one's hair. The ability to spontaneously use tools, such as brushing one's hair in the morning without being instructed to do so, may remain intact, but is often lost. The general concept of apraxia and the classification of ideomotor apraxia were developed in Germany in the late 19th and early 20th centuries by the work of Hugo Liepmann, Adolph Kussmaul, Arnold Pick, Paul Flechsig, Hermann Munk, Carl Nothnagel, Theodor Meynert, and linguist Heymann Steinthal, among others. Ideomotor apraxia was classified as "ideo-kinetic apraxia" by Liepmann due to the apparent dissociation of the idea of the action with its execution. The classifications of the various subtypes are not well defined at present, however, owing to issues of diagnosis and pathophysiology. Ideomotor apraxia is hypothesized to result from a disruption of the system that relates stored tool use and gesture information with the state of the body to produce the proper motor output. This system is thought to be related to the areas of the brain most often seen to be damaged when ideomotor apraxia is present: the left parietal lobe and the premotor cortex. Little can be done at present to reverse the motor deficit seen in ideomotor apraxia, although the extent of dysfunction it induces is not entirely clear. Signs and Symptoms Ideomotor apraxia (IMA) impinges on one's ability to carry out common, familiar actions on command, such as waving goodbye. Persons with IMA exhibit a loss of ability to carry out motor movements, and may show errors in how they hold and move the tool in attempting the correct function. One of the defining symptoms of ideomotor apraxia is the inability to pantomime tool use. As an example, if a normal individual were handed a comb and instructed to pretend to brush his hair, he would grasp the comb properly and pass it through his hair. If this were repeated in a patient with ideomotor apraxia, the patient may move the comb in big circles around his head, hold it upside-down, or perhaps try and brush his teeth with it. The error may also be temporal in nature, such as brushing exceedingly slowly. The other characteristic symptom of ideomotor apraxia is the inability to imitate hand gestures, meaningless or meaningful, on request; a meaningless hand gesture is something like having someone make a ninety-degree angle with his thumb and placing it under his nose, with his hand in the plane of his face. This gesture has no meaning attached to it. In contrast, a meaningful gesture is something like saluting or waving goodbye. An important distinction here is that all of the above refer to actions that are consciously and voluntarily initiated. That is to say that a person is specifically asked to either imitate what someone else is doing or is given verbal instructions, such as "wave goodbye." People with ideomotor apraxia will know what they are supposed to do, e.g. they will know to wave goodbye and what their arm and hand should do to accomplish it, but will be unable to execute the motion correctly. This voluntary type of action is distinct from spontaneous actions. Ideomotor apraxia patients may still retain the ability to perform spontaneous motions; if someone they know leaves the room, for instance, they may be able to wave goodbye to that person, despite being unable to do so at request. 
The ability to perform this sort of spontaneous action is not always retained, however; some affected individuals lose this capability, as well. The recognition of meaningful gestures, e.g. understanding what waving goodbye means when it is seen, seems to be unaffected by ideomotor apraxia. It has also been shown that individuals with ideomotor apraxia may have some deficits in general spontaneous movements. Apraxia patients appear to be unable to tap their fingers as quickly as a control group, with a lower maximum tapping rate correlated with more severe apraxia. It has also been demonstrated that apraxic patients are slower to point at a target light when they do not have sight of their hand as compared with healthy patients under the same conditions. The two groups did not differ when they could see their hands. The speed and accuracy of grasping objects also appear unaffected by ideomotor apraxia. Patients with ideomotor apraxia appear to be much more reliant on visual input when conducting movements than nonapraxic individuals. Cause The most common cause of ideomotor apraxia is a unilateral ischemic lesion to the brain, which is damage to one hemisphere of the brain due to a disruption of the blood supply, as in a stroke. There are a variety of brain areas where lesions have been correlated with ideomotor apraxia. Initially, it was believed that damage to the subcortical white matter tracts, the axons that extend down from the cell bodies in the cerebral cortex, was chiefly responsible for this form of apraxia. Lesions to the basal ganglia may also be responsible, although there is considerable debate as to whether damage to the basal ganglia alone would be sufficient to induce apraxia. Lesions to these lower brain structures have not, however, been shown to be more prevalent in apraxic patients. In fact, these types of lesions are more common in nonapraxic patients. The lesions most associated with ideomotor apraxia are to the left parietal and premotor areas. Patients with lesions to the supplementary motor area have also presented with ideomotor apraxia. Lesions to the corpus callosum can also induce apraxic-like symptoms, with varying effects on the two hands, although this has not been thoroughly studied. In addition to ischemic lesions to the brain, ideomotor apraxia has also been seen in neurodegenerative disorders such as Parkinson's disease, Alzheimer's disease, Huntington's disease, corticobasal degeneration, and progressive supranuclear palsy. Pathophysiology The prevailing hypothesis for the pathophysiology of ideomotor apraxia is that the various brain lesions associated with the disorder somehow disrupt portions of the praxis system. The praxis system comprises the brain regions involved in taking processed sensory input, accessing stored information about tools and gestures, and translating these into a motor output. Buxbaum et al. have proposed that the praxis system involves three distinct parts: stored gesture representations, stored tool knowledge, and a "dynamic body schema." The first two store information about the representation of gestures in the brain and the characteristic movements of tools. The body schema is a brain model of the body and its position in space. The praxis system relates the stored information about a movement type to how the dynamic, i.e. changing, body representation varies as the movement progresses. 
It is still not clear how this system maps out onto the brain itself, although some research has given indications to possible locations for certain portions. The dynamic body schema has been suggested to be localized in the superior posterior parietal cortex. There is also evidence that the inferior parietal lobule may be the locus for storage of the characteristic movements of a tool. This area showed inverse activation to the cerebellum in a study of tool use and tool mime. If the connections between these areas become severed, the praxis system would be disrupted, possibly resulting in the symptoms observed in ideomotor apraxia. Diagnosis There is no one definitive test for ideomotor apraxia; there are several that are used clinically to make an ideomotor apraxia diagnosis. The criteria for a diagnosis are not entirely standardized among clinicians, either for apraxia in general or for distinguishing its subtypes. Almost all the tests laid out here that enable a diagnosis of ideomotor apraxia share a common feature: assessment of the ability to imitate gestures. A test developed by Georg Goldenberg uses imitation assessment of 10 gestures. The tester demonstrates the gesture to the patient and rates him on whether the gesture was correctly imitated. If the first attempt to imitate the gesture was unsuccessful, the gesture is presented a second time; a higher score is given for correct imitation on the first trial, then for the second, and the lowest score is for not correctly imitating the gesture. The gestures used here are all meaningless, such as placing the hand flat on the top of the head or flat outward with the fingers towards the ear. This test is specifically designed for ideomotor apraxia. The main variation from this is in the type and number of gestures used. One test uses twenty-four movements with three trials for each and a trial-based scoring system similar to the Goldenberg protocol. The gestures here are also copied by the patient from the tester and are divided into finger movements, e.g. making a scissor movement with the forefinger and middle finger, and hand and arm movements, e.g. doing a salute. This protocol combines meaningful and meaningless gestures. Another test uses five meaningful gestures, such as waving goodbye or scratching one's head, and five meaningless gestures. Additional differences in this test are a verbal command to initiate the movement and a distinction between accurate performance and inaccurate but recognizable performance. One test utilizes tools, including a hammer and a key, with both a verbal command to use the tools and the patient copying the tester's demonstrated use of the tools. These tests have been shown to be individually unreliable, with considerable variability between the diagnoses delivered by each. If a battery of tests is used, however, the reliability and validity may be improved. It is also highly advisable to include assessments of how the patient performs activities in daily life. One of the newer tests that has been developed may provide greater reliability without relying on a multitude of tests. It combines three types of tool use with imitation of gestures. The tool use section includes having the patient pantomime use with no tool present, with visual contact with the tool, and finally with tactile contact with the tool. This test screens for ideational and ideomotor apraxia, with the second portion aimed specifically at ideomotor apraxia. 
One study showed great potential for this test, but further studies are needed to reproduce these results before this can be said with confidence. This disorder often occurs with other degenerative neurological disorders such as Parkinson's disease and Alzheimer's disease. These comorbidities can make it difficult to pick out the specific features of ideomotor apraxia. The important point in distinguishing ideomotor apraxia is that basic motor control is intact; it is a high level dysfunction involving tool use and gesturing. Additionally, clinicians must be careful to exclude aphasia as a possible diagnosis, as, in the tests involving verbal command, an aphasic patient could fail to perform a task properly because they do not understand what the directions are. Management Given the complexity of the medical problems facing people with ideomotor apraxia, as they are usually experiencing a multitude of other problems, it is difficult to ascertain the impact that it has on their ability to function independently. Deficits due to Parkinson's or Alzheimer's disease could very well be sufficient to mask or make irrelevant difficulties arising from the apraxia. Some studies have shown ideomotor apraxia to independently diminish the patient's ability to function on their own. The general consensus seems to be that ideomotor apraxia does have a negative impact on independence in that it can reduce an individual's ability to manipulate objects, as well as diminishing the capacity for mechanical problem solving, owing to the inability to access information about how familiar parts of the unfamiliar system function. A small subset of patients has been known to spontaneously recover from apraxia; this is rare, however. One possible hope is the phenomenon of hemispheric shift, where functions normally performed by one hemisphere can shift to the other in the event that the first is damaged. This seems to necessitate, however, that some portion of the function is associated with the other hemisphere to begin with. There is dispute over whether the right hemisphere of the cortex is involved at all in the praxis system, as some evidence from patients with severed corpus callosums indicates it may not be. Although there is little that can be done to substantially reverse the effects of ideomotor apraxia, Occupational Therapy can be effective in helping patients regain some functional control. Sharing the same approach in treating ideational apraxia, this is achieved by breaking a daily task (e.g. combing hair) into separate components and teaching each distinct component individually. With ample repetition, proficiency in these movements can be acquired and should eventually be combined to create a single pattern of movement. References Further reading External links Apraxia An Intervention Guide for Occupational Therapists Neurological disorders Complications of stroke Motor control
Ideomotor apraxia
[ "Biology" ]
2,782
[ "Behavior", "Motor control" ]
14,369,238
https://en.wikipedia.org/wiki/Anticachexia
Anticachexia (AN-tee-kuh-KEK-see-uh) is a drug or effect that works against cachexia (loss of body weight and muscle mass). See also Cachexia#Management External links National Cancer Institute Dictionary - Definition of Anticachexia Drugs_acting_on_the_gastrointestinal_system_and_metabolism
Anticachexia
[ "Chemistry" ]
78
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
14,369,471
https://en.wikipedia.org/wiki/Vierordt%27s%20law
Karl von Vierordt in 1868 was the first to record a law of time perception which relates perceived duration to actual duration over different interval magnitudes, and according to task complexity. It states that, retrospectively, "short" intervals of time (e.g., 10 seconds) tend to be overestimated, and "long" intervals of time tend to be underestimated. The other major paradigm of time estimation methodology measures time prospectively. Modern research suggests that "Vierordt’s law is caused by an unnatural yet widely used experimental protocol". See also References Dyschronometria Sequential activities Frequent and infrequent THC consumption Caused by THC (in German) Oxford Caused by emotions Perception Time
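The pattern the law describes, overestimation of short intervals and underestimation of long ones, is often summarized by a simple linear "central tendency" model; the parameterization below is an illustrative sketch rather than a formula taken from Vierordt's own work.

```latex
% Perceived duration is pulled toward an indifference point T_i; a slope a < 1
% makes intervals shorter than T_i overestimated and longer ones underestimated.
T_{\mathrm{perceived}} \;=\; a\,T_{\mathrm{actual}} \;+\; (1-a)\,T_{i}, \qquad 0 < a < 1 .
```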
Vierordt's law
[ "Physics", "Mathematics" ]
152
[ "Physical quantities", "Time", "Time stubs", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
14,369,650
https://en.wikipedia.org/wiki/Strengthening%20mechanisms%20of%20materials
Methods have been devised to modify the yield strength, ductility, and toughness of both crystalline and amorphous materials. These strengthening mechanisms give engineers the ability to tailor the mechanical properties of materials to suit a variety of different applications. For example, the favorable properties of steel result from interstitial incorporation of carbon into the iron lattice. Brass, a binary alloy of copper and zinc, has superior mechanical properties compared to its constituent metals due to solution strengthening. Work hardening (such as beating a red-hot piece of metal on an anvil) has also been used for centuries by blacksmiths to introduce dislocations into materials, increasing their yield strengths. Basic description Plastic deformation occurs when large numbers of dislocations move and multiply so as to result in macroscopic deformation. In other words, it is the movement of dislocations in the material which allows for deformation. If we want to enhance a material's mechanical properties (i.e., increase the yield and tensile strength), we simply need to introduce a mechanism which prohibits the mobility of these dislocations. Whatever the mechanism may be (work hardening, grain size reduction, etc.), it hinders dislocation motion and renders the material stronger than before. The stress required to cause dislocation motion is orders of magnitude lower than the theoretical stress required to shift an entire plane of atoms, so this mode of stress relief is energetically favorable. Hence, the hardness and strength (both yield and tensile) critically depend on the ease with which dislocations move. Pinning points, or locations in the crystal that oppose the motion of dislocations, can be introduced into the lattice to reduce dislocation mobility, thereby increasing mechanical strength. Dislocations may be pinned by stress field interactions with other dislocations and solute particles, or by physical barriers from second-phase precipitates forming along grain boundaries. There are five main strengthening mechanisms for metals; each is a method of preventing dislocation motion and propagation, or of making it energetically unfavorable for the dislocation to move. For a material that has been strengthened by some processing method, the amount of force required to start irreversible (plastic) deformation is greater than it was for the original material. In amorphous materials such as polymers, amorphous ceramics (glass), and amorphous metals, the lack of long range order leads to yielding via mechanisms such as brittle fracture, crazing, and shear band formation. In these systems, strengthening mechanisms do not involve dislocations, but rather consist of modifications to the chemical structure and processing of the constituent material. The strength of materials cannot increase indefinitely. Each of the mechanisms explained below involves some trade-off by which other material properties are compromised in the process of strengthening. Strengthening mechanisms in metals Work hardening The primary species responsible for work hardening are dislocations. Dislocations interact with each other by generating stress fields in the material. The interaction between the stress fields of dislocations can impede dislocation motion by repulsive or attractive interactions. Additionally, if two dislocations cross, dislocation line entanglement occurs, causing the formation of a jog which opposes dislocation motion. 
These entanglements and jogs act as pinning points, which oppose dislocation motion. As both of these processes are more likely to occur when more dislocations are present, there is a correlation between dislocation density and shear strength. The shear strengthening provided by dislocation interactions can be described by Δτ = αGb√ρ, where α is a proportionality constant, G is the shear modulus, b is the Burgers vector, and ρ is the dislocation density. Dislocation density is defined as the dislocation line length per unit volume: ρ = (total length of dislocation lines) ⁄ (volume). Similarly, the axial strengthening will be proportional to the dislocation density. This relationship does not apply when dislocations form cell structures. When cell structures are formed, the average cell size controls the strengthening effect. Increasing the dislocation density increases the yield strength, which results in a higher shear stress required to move the dislocations. This process is easily observed while working a material (by a process of cold working in metals). Theoretically, the strength of a material with no dislocations will be extremely high because plastic deformation would require the breaking of many bonds simultaneously. However, at moderate dislocation density values of around 10^7-10^9 dislocations/m^2, the material will exhibit a significantly lower mechanical strength. Analogously, it is easier to move a rubber rug across a surface by propagating a small ripple through it than by dragging the whole rug. At dislocation densities of 10^14 dislocations/m^2 or higher, the strength of the material becomes high once again. Also, the dislocation density cannot be infinitely high, because then the material would lose its crystalline structure. A rough numerical illustration of the Δτ = αGb√ρ relation is sketched in the example below. Solid solution strengthening and alloying For this strengthening mechanism, solute atoms of one element are added to another, resulting in either substitutional or interstitial point defects in the crystal (see Figure on the right). The solute atoms cause lattice distortions that impede dislocation motion, increasing the yield stress of the material. Solute atoms have stress fields around them which can interact with those of dislocations. The presence of solute atoms imparts compressive or tensile stresses to the lattice, depending on solute size, which interfere with nearby dislocations, causing the solute atoms to act as potential barriers. The shear stress required to move dislocations in a material scales as Δτ ∝ G·ε^(3/2)·√c, where c is the solute concentration and ε is the strain on the material caused by the solute. Increasing the concentration of the solute atoms will increase the yield strength of a material, but there is a limit to the amount of solute that can be added, and one should look at the phase diagram for the material and the alloy to make sure that a second phase is not created. In general, the solid solution strengthening depends on the concentration of the solute atoms, the shear modulus of the solute atoms, the size of the solute atoms, the valency of the solute atoms (for ionic materials), and the symmetry of the solute stress field. The magnitude of strengthening is higher for non-symmetric stress fields because these solutes can interact with both edge and screw dislocations, whereas symmetric stress fields, which cause only volume change and not shape change, can only interact with edge dislocations.
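The following short example gives a rough feel for the forest-hardening relation discussed above; the material constants are assumed, copper-like values chosen purely for illustration, not values taken from the article.

```python
# Taylor-type forest hardening: delta_tau = alpha * G * b * sqrt(rho).
# alpha, G and b below are assumed, roughly copper-like values (illustrative only).
import math

alpha = 0.3          # dimensionless proportionality constant (assumed)
G = 48e9             # shear modulus in Pa (assumed)
b = 0.256e-9         # Burgers vector magnitude in m (assumed)

for rho in (1e10, 1e12, 1e14):   # dislocation densities in m^-2
    delta_tau = alpha * G * b * math.sqrt(rho)
    print(f"rho = {rho:.0e} m^-2  ->  delta_tau = {delta_tau / 1e6:.1f} MPa")
```

As the dislocation density rises by four orders of magnitude, the predicted strengthening grows a hundredfold, consistent with the square-root dependence.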
Precipitation hardening In most binary systems, alloying above a concentration given by the phase diagram will cause the formation of a second phase. A second phase can also be created by mechanical or thermal treatments. The particles that compose the second-phase precipitates act as pinning points in a similar manner to solutes, though the particles are not necessarily single atoms. The dislocations in a material can interact with the precipitate atoms in one of two ways (see Figure 2). If the precipitate atoms are small, the dislocations would cut through them. As a result, new surfaces (b in Figure 2) of the particle would get exposed to the matrix and the particle-matrix interfacial energy would increase. For larger precipitate particles, looping or bowing of the dislocations would occur and result in dislocations getting longer. Hence, at a critical radius of about 5 nm, dislocations will preferably cut across the obstacle, while for a radius of 30 nm, the dislocations will readily bow or loop to overcome the obstacle. The mathematical descriptions are as follows: for particle bowing, Δτ = Gb ⁄ (L − 2r); for particle cutting, Δτ = γπr ⁄ (bL), where L is the spacing between particles, r is the particle radius, and γ is the surface energy of the particle-matrix interface. Dispersion strengthening Dispersion strengthening is a type of particulate strengthening in which incoherent precipitates attract and pin dislocations. These particles are typically larger than those in the Orowan precipitation hardening discussed above. Dispersion strengthening is effective at high temperatures, whereas precipitation strengthening from heat treatments is typically limited to temperatures much lower than the melting temperature of the material. One common type of dispersion strengthening is oxide dispersion strengthening. Grain boundary strengthening In a polycrystalline metal, grain size has a tremendous influence on the mechanical properties. Because grains usually have varying crystallographic orientations, grain boundaries arise. While undergoing deformation, slip motion will take place. Grain boundaries act as an impediment to dislocation motion for the following two reasons: 1. a dislocation must change its direction of motion due to the differing orientation of grains; 2. there is a discontinuity of slip planes from grain one to grain two. The stress required to move a dislocation from one grain to another in order to plastically deform a material depends on the grain size. The average number of dislocations per grain decreases with average grain size (see Figure 3). A lower number of dislocations per grain results in a lower dislocation 'pressure' building up at grain boundaries. This makes it more difficult for dislocations to move into adjacent grains. This relationship is the Hall-Petch relationship and can be mathematically described as σy = σy,0 + k ⁄ √d, where k is a constant, d is the average grain diameter, and σy,0 is the original yield stress. The fact that the yield strength increases with decreasing grain size is accompanied by the caveat that the grain size cannot be decreased infinitely. As the grain size decreases, more free volume is generated, resulting in lattice mismatch. Below approximately 10 nm, the grain boundaries will tend to slide instead, a phenomenon known as grain-boundary sliding. If the grain size gets too small, it becomes more difficult to fit the dislocations in the grain and the stress required to move them is less. It was not possible to produce materials with grain sizes below 10 nm until recently, so the discovery that strength decreases below a critical grain size is still finding new applications. A simple numerical sketch of the Hall-Petch scaling is given in the example below.
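The sketch below simply evaluates the Hall-Petch expression above for a few grain sizes; σy,0 and k are assumed, generic mild-steel-like values used only to show the d^(−1/2) trend.

```python
# Hall-Petch relation: sigma_y = sigma_0 + k / sqrt(d).
# sigma_0 and k are assumed, mild-steel-like constants (illustrative only).
import math

sigma_0 = 70e6      # lattice friction stress in Pa (assumed)
k = 0.74e6          # Hall-Petch coefficient in Pa * m^0.5 (assumed)

for d in (100e-6, 10e-6, 1e-6):   # average grain diameters in m
    sigma_y = sigma_0 + k / math.sqrt(d)
    print(f"d = {d * 1e6:5.1f} um  ->  sigma_y = {sigma_y / 1e6:6.0f} MPa")
```

Halving the grain diameter raises the grain-boundary contribution by a factor of √2; as noted above, the trend reverses once grains shrink below roughly 10 nm.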
Transformation hardening This method of hardening is used for steels. High-strength steels generally fall into three basic categories, classified by the strengthening mechanism employed: (1) solid-solution-strengthened steels (rephos steels); (2) grain-refined steels, or high-strength low-alloy (HSLA) steels; and (3) transformation-hardened steels. Transformation-hardened steels are the third type of high-strength steels. These steels use predominantly higher levels of C and Mn along with heat treatment to increase strength. The finished product will have a duplex microstructure of ferrite with varying levels of degenerate martensite. This allows for varying levels of strength. There are three basic types of transformation-hardened steels. These are dual-phase (DP), transformation-induced plasticity (TRIP), and martensitic steels. The annealing process for dual-phase steels consists of first holding the steel in the alpha + gamma temperature region for a set period of time. During that time, C and Mn diffuse into the austenite, leaving a ferrite of greater purity. The steel is then quenched so that the austenite is transformed into martensite, and the ferrite remains on cooling. The steel is then subjected to a temper cycle to allow some level of martensite decomposition. By controlling the amount of martensite in the steel, as well as the degree of temper, the strength level can be controlled. Depending on processing and chemistry, the strength level can range from 350 to 960 MPa. TRIP steels also use C and Mn, along with heat treatment, in order to retain small amounts of austenite and bainite in a ferrite matrix. Thermal processing for TRIP steels again involves annealing the steel in the alpha + gamma region for a period of time sufficient to allow C and Mn to diffuse into austenite. The steel is then quenched to a point above the martensite start temperature and held there. This allows the formation of bainite, an austenite decomposition product. While at this temperature, more C is allowed to enrich the retained austenite. This, in turn, lowers the martensite start temperature to below room temperature. Upon final quenching, a metastable austenite is retained in the predominantly ferrite matrix along with small amounts of bainite (and other forms of decomposed austenite). This combination of microstructures has the added benefits of higher strengths and resistance to necking during forming. This offers great improvements in formability over other high-strength steels. Essentially, as the TRIP steel is being formed, it becomes much stronger. Tensile strengths of TRIP steels are in the range of 600-960 MPa. Martensitic steels are also high in C and Mn. These are fully quenched to martensite during processing. The martensite structure is then tempered back to the appropriate strength level, adding toughness to the steel. Tensile strengths for these steels range as high as 1500 MPa. Strengthening mechanisms in amorphous materials Polymer Polymers fracture via breaking of inter- and intramolecular bonds; hence, the chemical structure of these materials plays a major role in increasing strength. For polymers consisting of chains which easily slide past each other, chemical and physical cross-linking can be used to increase rigidity and yield strength. In thermoset polymers (thermosetting plastic), disulfide bridges and other covalent cross-links give rise to a hard structure which can withstand very high temperatures. These cross-links are particularly helpful in improving the tensile strength of materials which contain much free volume prone to crazing, typically glassy brittle polymers. 
In thermoplastic elastomers, phase separation of dissimilar monomer components leads to association of hard domains within a sea of soft phase, yielding a physical structure with increased strength and rigidity. If yielding occurs by chains sliding past each other (shear bands), the strength can also be increased by introducing kinks into the polymer chains via unsaturated carbon-carbon bonds. Adding filler materials such as fibers, platelets, and particles is a commonly employed technique for strengthening polymer materials. Fillers such as clay, silica, and carbon network materials have been extensively researched and used in polymer composites in part due to their effect on mechanical properties. Stiffness-confinement effects near rigid interfaces, such as those between a polymer matrix and stiffer filler materials, enhance the stiffness of composites by restricting polymer chain motion. This effect is especially pronounced where fillers are chemically treated to strongly interact with polymer chains, increasing the anchoring of polymer chains to the filler interfaces and thus further restricting the motion of chains away from the interface. Stiffness-confinement effects have been characterized in model nanocomposites, and these studies show that composites with length scales on the order of nanometers increase the effect of the fillers on polymer stiffness dramatically. Increasing the bulkiness of the monomer unit via incorporation of aryl rings is another strengthening mechanism. The anisotropy of the molecular structure means that these mechanisms are heavily dependent on the direction of applied stress. While aryl rings drastically increase rigidity along the direction of the chain, these materials may still be brittle in perpendicular directions. Macroscopic structure can be adjusted to compensate for this anisotropy. For example, the high strength of Kevlar arises from a stacked multilayer macrostructure where aromatic polymer layers are rotated with respect to their neighbors. When loaded obliquely to the chain direction, ductile polymers with flexible linkages, such as oriented polyethylene, are highly prone to shear band formation, so macroscopic structures which place the load parallel to the draw direction would increase strength. Mixing polymers is another method of increasing strength, particularly with materials that show crazing preceding brittle fracture, such as atactic polystyrene (APS). For example, by forming a 50/50 mixture of APS with polyphenylene oxide (PPO), this embrittling tendency can be almost completely suppressed, substantially increasing the fracture strength. Interpenetrating polymer networks (IPNs), consisting of interlacing crosslinked polymer networks that are not covalently bonded to one another, can lead to enhanced strength in polymer materials. The use of an IPN approach imposes compatibility (and thus macroscale homogeneity) on otherwise immiscible blends, allowing for a blending of mechanical properties. For example, silicone-polyurethane IPNs show increased tear and flexural strength over base silicone networks, while preserving the high elastic recovery of the silicone network at high strains. Increased stiffness can also be achieved by pre-straining polymer networks and then sequentially forming a secondary network within the strained material. 
This takes advantage of the anisotropic strain hardening of the original network (chain alignment from stretching of the polymer chains) and provides a mechanism whereby the two networks transfer stress to one another due to the imposed strain on the pre-strained network. Glass Many silicate glasses are strong in compression but weak in tension. By introducing compression stress into the structure, the tensile strength of the material can be increased. This is typically done via two mechanisms: thermal treatment (tempering) or chemical bath (via ion exchange). In tempered glasses, air jets are used to rapidly cool the top and bottom surfaces of a softened (hot) slab of glass. Since the surface cools more quickly, there is more free volume at the surface than in the bulk melt. The core of the slab then pulls the surface inward, resulting in an internal compressive stress at the surface. This substantially increases the tensile strength of the material as tensile stresses exerted on the glass must now resolve the compressive stresses before yielding. Alternately, in chemical treatment, a glass slab containing network formers and modifiers is submerged into a molten salt bath containing ions larger than those present in the modifier. Due to a concentration gradient of the ions, mass transport must take place. As the larger cation diffuses from the molten salt into the surface, it replaces the smaller ion from the modifier. The larger ion squeezing into the surface introduces compressive stress in the glass's surface. A common example is treatment of sodium oxide modified silicate glass in molten potassium chloride. Examples of chemically strengthened glass are Gorilla Glass, developed and manufactured by Corning; AGC Inc.'s Dragontrail; and Schott AG's Xensation. Composite strengthening Many of the basic strengthening mechanisms can be classified based on their dimensionality. At 0-D there is precipitate and solid solution strengthening, with particulates as the strengthening structure; at 1-D there is work/forest hardening, with line dislocations as the hardening mechanism; and at 2-D there is grain boundary strengthening, with the surface energy of granular interfaces providing the strength improvement. The two primary types of composite strengthening, fiber reinforcement and laminar reinforcement, fall in the 1-D and 2-D classes, respectively. The anisotropy of fiber and laminar composite strength reflects these dimensionalities. The primary idea behind composite strengthening is to combine materials with opposite strengths and weaknesses to create a material which transfers load onto the stiffer material but benefits from the ductility and toughness of the softer material. Fiber reinforcement Fiber-reinforced composites (FRCs) consist of a matrix of one material containing parallel embedded fibers. There are two variants of fiber-reinforced composites, one with stiff fibers and a ductile matrix and one with ductile fibers and a stiff matrix. The former variant is exemplified by fiberglass, which contains very strong but delicate glass fibers embedded in a softer plastic matrix resilient to fracture. The latter variant is found in almost all buildings as reinforced concrete, with ductile, high tensile-strength steel rods embedded in brittle, high compressive-strength concrete. In both cases, the matrix and fibers have complementary mechanical properties and the resulting composite material is therefore more practical for applications in the real world. 
For a composite containing aligned, stiff fibers which span the length of the material and a soft, ductile matrix, the following descriptions provide a rough model. Four stages of deformation The condition of a fiber-reinforced composite under applied tensile stress along the direction of the fibers can be decomposed into four stages from small strain to large strain. Since the stress is parallel to the fibers, the deformation is described by the isostrain condition, i.e., the fiber and matrix experience the same strain. At each stage, the composite stress (σc) is given in terms of the volume fractions of the fiber and matrix (Vf, Vm), the Young's moduli of the fiber and matrix (Ef, Em), the strain of the composite (εc), and the stress of the fiber and matrix as read from a stress-strain curve (σf, σm); the corresponding expressions are sketched in the example following this section. In the first stage, both fiber and matrix remain in the elastic strain regime; in this stage, we also note that the composite Young's modulus is a simple weighted sum of the two component moduli. In the second stage, the fiber remains in the elastic regime but the matrix yields and plastically deforms. In the third stage, both fiber and matrix yield and plastically deform; this stage often features significant Poisson strain, which is not captured by the model below. In the fourth stage, the fiber fractures while the matrix continues to plastically deform. While in reality the fractured pieces of fiber still contribute some strength, it is left out of this simple model. Tensile strength Due to the heterogeneous nature of FRCs, they also feature multiple tensile strengths (TS), one corresponding to each component. Given the assumptions outlined above, the first tensile strength would correspond to failure of the fibers, with some support from the matrix plastic deformation strength, and the second to failure of the matrix. Anisotropy (Orientation effects) As a result of the aforementioned dimensionality (1-D) of fiber reinforcement, significant anisotropy is observed in its mechanical properties. The tensile strength of an FRC can be modeled as a function of the misalignment angle (θ) between the fibers and the applied force, the stresses in the parallel and perpendicular cases (σ∥ and σ⊥), and the shear strength of the matrix (τm); representative expressions are given in the example following this section. Small Misalignment Angle (longitudinal fracture) The angle is small enough to maintain load transfer onto the fibers and prevent delamination of fibers, and the misaligned stress samples a slightly larger cross-sectional area of the fiber, so the strength of the fiber is not just maintained but actually increases compared to the parallel case. Significant Misalignment Angle (shear failure) The angle is large enough that the load is not effectively transferred to the fibers and the matrix experiences enough strain to fracture. Near Perpendicular Misalignment Angle (transverse fracture) The angle is close to 90°, so most of the load remains in the matrix and thus tensile transverse matrix fracture is the dominant failure condition. This can be seen as complementary to the small-angle case, with similar form but with an angle of 90° − θ.
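The display below gives one standard textbook form of the stage-by-stage rule-of-mixtures stresses and the misalignment failure criteria referred to above; it is a sketch under the isostrain assumption, not a reproduction of the article's original equations, and the symbols are those introduced in the preceding paragraphs.

```latex
% Isostrain (rule-of-mixtures) composite stress in the four stages of deformation
\text{Stage 1 (both elastic):}\quad    \sigma_c = \varepsilon_c \left( V_f E_f + V_m E_m \right) \\
\text{Stage 2 (matrix yields):}\quad   \sigma_c = V_f E_f \varepsilon_c + V_m \sigma_m(\varepsilon_c) \\
\text{Stage 3 (both yield):}\quad      \sigma_c = V_f \sigma_f(\varepsilon_c) + V_m \sigma_m(\varepsilon_c) \\
\text{Stage 4 (fibers fractured):}\quad \sigma_c = V_m \sigma_m(\varepsilon_c)

% Tensile strength versus fiber misalignment angle \theta
\sigma_{TS}(\theta) \approx
\begin{cases}
\dfrac{\sigma_{\parallel}}{\cos^2\theta}, & \text{small } \theta \ \text{(longitudinal fracture)} \\[6pt]
\dfrac{\tau_m}{\sin\theta\,\cos\theta},   & \text{intermediate } \theta \ \text{(shear failure)} \\[6pt]
\dfrac{\sigma_{\perp}}{\sin^2\theta},     & \theta \to 90^{\circ} \ \text{(transverse fracture)}
\end{cases}
```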
Laminar reinforcement Applications Strengthening of materials is useful in many applications. A primary application of strengthened materials is for construction. In order to have stronger buildings and bridges, one must have a strong frame that can support high tensile or compressive load and resist plastic deformation. The steel frame used to make the building should be as strong as possible so that it does not bend under the entire weight of the building. Polymeric roofing materials would also need to be strong so that the roof does not cave in when there is build-up of snow on the rooftop. Research is also currently being done to increase the strength of metallic materials through the addition of polymer materials such as bonded carbon fiber reinforced polymer (CFRP). Current research Molecular dynamics simulation assisted studies The molecular dynamics (MD) method has been widely applied in materials science as it can yield information about the structure, properties, and dynamics on the atomic scale that cannot be easily resolved with experiments. The fundamental mechanism behind MD simulation is based on classical mechanics, from which we know that the force exerted on a particle is given by the negative gradient of the potential energy with respect to the particle position. Therefore, a standard procedure to conduct MD simulation is to divide the time into discrete time steps and solve the equations of motion over these intervals repeatedly to update the positions and energies of the particles. Direct observation of atomic arrangements and energetics of particles on the atomic scale makes it a powerful tool to study microstructural evolution and strengthening mechanisms. A minimal sketch of such an integration loop is given in the example below.
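The sketch below illustrates the time-stepping procedure just described, using a Lennard-Jones pair potential and the velocity-Verlet integrator in reduced units; the potential, parameter values, and toy configuration are assumptions chosen for illustration and are not the setups used in the studies cited below.

```python
# Minimal molecular-dynamics loop: forces from the negative gradient of a
# Lennard-Jones pair potential, integrated with velocity Verlet (reduced units).
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, F = -dU/dr, for U = 4*eps*((s/r)^12 - (s/r)^6)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv6 = (sigma ** 2 / r2) ** 3
            f_over_r = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2  # |F| divided by r
            forces[i] += f_over_r * rij
            forces[j] -= f_over_r * rij
    return forces

def velocity_verlet(pos, vel, dt=0.005, steps=1000, mass=1.0):
    """Advance positions and velocities over discrete time steps with velocity Verlet."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt ** 2   # position update
        f_new = lj_forces(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt           # velocity update
        f = f_new
    return pos, vel

# Toy configuration: three atoms near their equilibrium spacing, small random velocities.
rng = np.random.default_rng(0)
positions = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [2.24, 0.05, 0.0]])
velocities = 0.01 * rng.standard_normal(positions.shape)
positions, velocities = velocity_verlet(positions, velocities)
print(positions)
```

Production MD codes add neighbor lists, periodic boundary conditions, and thermostats, but the basic structure, a force evaluation followed by a time-step update, is the same.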
Grain boundary strengthening There have been extensive studies on different strengthening mechanisms using MD simulation. These studies reveal microstructural evolution that cannot easily be observed in an experiment or predicted by a simplified model. Han et al. investigated the grain boundary strengthening mechanism and the effects of grain size in nanocrystalline graphene through a series of MD simulations. Previous studies observed inconsistent grain size dependence of the strength of graphene at the nanometer length scale, and the conclusions remained unclear. Therefore, Han et al. utilized MD simulation to observe the structural evolution of graphene with nanosized grains directly. The nanocrystalline graphene samples were generated with random grain shapes and distributions to simulate well-annealed polycrystalline samples. The samples were then loaded with uniaxial tensile stress, and the simulations were carried out at room temperature. By decreasing the grain size of graphene, Han et al. observed a transition from an inverse pseudo Hall-Petch behavior to pseudo Hall-Petch behavior, with a critical grain size of 3.1 nm. Based on the arrangement and energetics of the simulated particles, the inverse pseudo Hall-Petch behavior can be attributed to the creation of stress concentration sites due to the increase in the density of grain boundary junctions. Cracks then preferentially nucleate on these sites and the strength decreases. However, when the grain size is below the critical value, the stress concentration at the grain boundary junctions decreases because of stress cancellation between the 5- and 7-membered-ring defects. This cancellation helps graphene sustain the tensile load and exhibit a pseudo Hall-Petch behavior. This study explains the previous inconsistent experimental observations and provides an in-depth understanding of the grain boundary strengthening mechanism of nanocrystalline graphene, which cannot be easily obtained from either in-situ or ex-situ experiments. Precipitate strengthening There are also MD studies done on precipitate strengthening mechanisms. Shim et al. applied MD simulations to study the precipitate strengthening effects of nanosized body-centered-cubic (bcc) Cu on face-centered-cubic (fcc) Fe. As discussed in the previous section, the precipitate strengthening effects are caused by the interaction between dislocations and precipitates. Therefore, the characteristics of the dislocation play an important role in the strengthening effects. It is known that a screw dislocation in bcc metals has very complicated features, including a non-planar core and the twinning-anti-twinning asymmetry. This complicates the strengthening mechanism analysis and modeling, and these features cannot be easily revealed by high-resolution electron microscopy. Thus, Shim et al. simulated coherent bcc Cu precipitates with diameters ranging from 1 to 4 nm embedded in the fcc Fe matrix. A screw dislocation is then introduced and driven to glide on a {112} plane by an increasing shear stress until it detaches from the precipitates. The shear stress that causes the detachment is regarded as the critical resolved shear stress (CRSS). Shim et al. observed that the screw dislocation velocity in the twinning direction is 2-4 times larger than that in the anti-twinning direction. The reduced velocity in the anti-twinning direction is mainly caused by a transition in the screw dislocation glide from the kink-pair to the cross-kink mechanism. In contrast, a screw dislocation overcomes the precipitates of 1–3.5 nm by shearing in the twinning direction. In addition, it has also been observed that the screw dislocation detachment mechanism with the larger, transformed precipitates involves annihilation-and-renucleation and Orowan looping in the twinning and anti-twinning direction, respectively. Fully characterizing the mechanisms involved experimentally requires intensive transmission electron microscopy analysis, and it is normally hard to give a comprehensive characterization. Solid solution strengthening and alloying A similar study was done by Zhang et al. on the solid solution strengthening of Co, Ru, and Re at different concentrations in fcc Ni. The edge dislocation was positioned at the center of the Ni and its slip system was set to be <110> {111}. Shear stress was then applied to the top and bottom surfaces of the Ni with a solute atom (Co, Ru, or Re) embedded at the center, at 300 K. Previous studies have shown that the general view of size and modulus effects cannot fully explain the solid solution strengthening caused by Re in this system due to their small values. Zhang et al. took a step further and combined first-principles DFT calculations with MD to study the influence of stacking fault energy (SFE) on strengthening, as partial dislocations can easily form in this material structure. MD simulation results indicate that Re atoms exert a strong drag on edge dislocation motion, and the DFT calculation reveals a dramatic increase in SFE, which is due to the interaction between host atoms and solute atoms located in the slip plane. Further, similar relations have also been found in fcc Ni embedded with Ru and Co. Limitation of the MD studies of strengthening mechanisms These studies show great examples of how the MD method can assist the study of strengthening mechanisms and provide more insight on the atomic scale. However, it is important to note the limitations of the method. To obtain accurate MD simulation results, it is essential to build a model that properly describes the interatomic potential based on bonding. The interatomic potentials are approximations rather than exact descriptions of interactions. The accuracy of the description varies significantly with the system and the complexity of the potential form. 
For example, if the bonding is dynamic, which means that the bonding changes depending on atomic positions, a dedicated interatomic potential is required to enable the MD simulation to yield accurate results. Therefore, interatomic potentials need to be tailored based on bonding. The following interatomic potential models are commonly used in materials science: the Born-Mayer potential, the Morse potential, the Lennard-Jones potential, and the Mie potential. Although they give very similar results for the variation of potential energy with respect to the particle position, there is a non-negligible difference in their repulsive tails. These characteristics make each of them better suited to describing materials systems with particular types of chemical bonding. In addition to inherent errors in interatomic potentials, the number of atoms and the number of time steps in MD are limited by the available computational power. Nowadays, it is common to simulate an MD system with many millions of atoms, and even larger simulations have been achieved. However, this still limits the length scale of the simulation to roughly a micron in size. The time steps in MD are also very small, and a long simulation will only yield results at the time scale of a few nanoseconds. To further extend the scale of simulation time, it is common to apply a bias potential that changes the barrier height, thereby accelerating the dynamics. This method is called hyperdynamics. The proper application of this method can typically extend the simulation times to microseconds. Nanostructure fabrication for material strengthening Based on the strengthening mechanisms discussed above, researchers are also working on enhancing strength by purposely fabricating nanostructures in materials. Several representative methods are introduced here, including hierarchical nanotwinned structures, pushing the limit of grain size for strengthening, and dislocation engineering. Hierarchical nanotwinned structures As mentioned previously, hindering dislocation motion imparts great strengthening to materials. Nanoscale twins, crystalline regions related by symmetry, have the ability to effectively block dislocation motion due to the microstructure change at the interface. The formation of hierarchical nanotwinned structures pushes this hindrance effect to the extreme through the construction of a complex 3D nanotwinned network. Thus, the careful design of hierarchical nanotwinned structures is of great importance for producing materials with super strength. For instance, Yue et al. constructed a diamond composite with a hierarchically nanotwinned structure by manipulating the synthesis pressure. The obtained composite showed higher strength than typical engineering metals and ceramics. Pushing the limit of grain size for strengthening The Hall-Petch effect illustrates that the yield strength of materials increases with decreasing grain size. However, many researchers have found that nanocrystalline materials soften when the grain size decreases below a critical point; this is called the inverse Hall-Petch effect. The interpretation of this phenomenon is that extremely small grains are not able to support the dislocation pileup that provides extra stress concentration in larger grains. At this point, the strengthening mechanism changes from dislocation-dominated strain hardening to growth softening and grain rotation. 
Typically, the inverse Hall-Petch effect occurs at grain sizes ranging from about 10 nm to 30 nm, which makes it hard for nanocrystalline materials to achieve very high strength. To push the limit of grain size for strengthening, grain rotation and grain growth can be hindered by stabilizing the grain boundaries. The construction of nanolaminated structures with low-angle grain boundaries is one method of obtaining ultrafine-grained materials with ultrahigh strength. Lu et al. applied very high-rate shear deformation with large strain gradients to the top surface layer of a bulk Ni sample and thereby introduced nanolaminated structures. This material exhibits an ultrahigh hardness, higher than that of any reported ultrafine-grained nickel. The exceptional strength results from the presence of low-angle grain boundaries, whose low-energy states are efficient at enhancing structural stability. Another method of stabilizing grain boundaries is the addition of nonmetallic impurities. Nonmetallic impurities often segregate to grain boundaries and can affect the strength of a material by changing the grain boundary energy. Rupert et al. conducted first-principles simulations to study the impact of common nonmetallic impurities on the Σ5 (310) grain boundary energy in Cu. They claimed that a decrease in the covalent radius of the impurity and an increase in its electronegativity lead to an increase in the grain boundary energy and thereby further strengthen the material. For instance, boron stabilized the grain boundaries by enhancing the charge density among the adjacent Cu atoms, improving the connection between the two grains at the boundary. Dislocation engineering Previous studies of the impact of dislocation motion on strengthening mainly focused on high dislocation densities, which are effective for enhancing strength at the cost of reduced ductility. Engineering the structure and distribution of dislocations is a promising route to improving the overall performance of a material. Solutes tend to segregate to dislocations and are therefore promising tools for dislocation engineering. Kimura et al. conducted atom probe tomography and observed the segregation of niobium atoms to dislocations. The segregation energy was calculated to be almost the same as the grain boundary segregation energy; that is to say, the interaction between niobium atoms and dislocations hindered the recovery of dislocations and thus strengthened the material. Introducing dislocations with heterogeneous characteristics can also be used for strengthening. Lu et al. introduced ordered oxygen complexes into a TiZrHfNb alloy. Unlike traditional interstitial strengthening, the introduction of the ordered oxygen complexes enhanced the strength of the alloy without sacrificing ductility. The mechanism was that the ordered oxygen complexes changed the dislocation motion mode from planar slip to wavy slip and promoted double cross-slip. See also Grain boundary strengthening Precipitation strengthening Solid solution strengthening Strength of materials Tempering (metallurgy) Work hardening References External links Grain boundary strengthening in alumina by rare earth impurities Mechanism of grain boundary strengthening of steels An open source Matlab toolbox for analysis of slip transfer through grain boundaries Materials science
Strengthening mechanisms of materials
[ "Physics", "Materials_science", "Engineering" ]
7,476
[ "Strengthening mechanisms of materials", "Applied and interdisciplinary physics", "Materials science", "nan" ]
14,369,709
https://en.wikipedia.org/wiki/Schottky%20anomaly
The Schottky anomaly is an effect observed in solid-state physics where the specific heat capacity of a solid at low temperature has a peak. It is called anomalous because the heat capacity usually increases with temperature, or stays constant. It occurs in systems with a limited number of energy levels, so that the internal energy E(T) increases in sharp steps, one for each energy level that becomes accessible. Since \( C_v = dE/dT \), the heat capacity shows a large peak as the temperature crosses over from one step to the next. This effect can be explained by looking at the change in entropy of the system. At zero temperature only the lowest energy level is occupied, the entropy is zero, and there is very little probability of a transition to a higher energy level. As the temperature increases, the entropy increases and thus the probability of a transition goes up. As the temperature approaches the difference between the energy levels, there is a broad peak in the specific heat corresponding to a large change in entropy for a small change in temperature. At high temperatures all of the levels are populated evenly, so there is again little change in entropy for a small change in temperature, and thus a lower specific heat capacity. For a two-level system the specific heat coming from the Schottky anomaly has the form \( C_{\mathrm{Schottky}} = N k_B \left(\frac{\Delta}{k_B T}\right)^2 \frac{e^{\Delta/k_B T}}{\left(1 + e^{\Delta/k_B T}\right)^2} \), where Δ is the energy gap between the two levels. This anomaly is usually seen in paramagnetic salts or even ordinary glass (due to paramagnetic iron impurities) at low temperature. At high temperature the paramagnetic spins have many spin states available, but at low temperatures some of the spin states are "frozen out" (having too high an energy due to crystal field splitting), and the entropy per paramagnetic atom is lowered. It was named after Walter H. Schottky. Details In a system where each particle can be in a state of energy 0 or Δ, the expected value of the energy of a particle in the canonical ensemble is \( \langle E \rangle = \frac{\Delta}{e^{\beta\Delta} + 1} \), with \( \beta = 1/(k_B T) \) the inverse temperature and \( k_B \) the Boltzmann constant. The total energy of N independent particles is thus \( E = \frac{N\Delta}{e^{\beta\Delta} + 1} \). The heat capacity is therefore \( C = \frac{dE}{dT} = N k_B (\beta\Delta)^2 \frac{e^{\beta\Delta}}{\left(e^{\beta\Delta} + 1\right)^2} \). Plotting C as a function of temperature, a peak can be seen at approximately \( k_B T \approx 0.42\,\Delta \). In this section Δ denotes the same energy gap between the two levels as in the introductory section. References Thermodynamic properties Condensed matter physics
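As a quick numerical illustration of the two-level result above, the following Python sketch (added for illustration only; the function and variable names are placeholders and are not taken from the cited literature) evaluates the Schottky heat capacity per particle in units of the Boltzmann constant and locates the peak numerically.
import numpy as np

def schottky_heat_capacity(T, delta=1.0, kB=1.0):
    """Heat capacity per particle (in units of kB) of a two-level system
    with energy gap `delta`; T is measured in units of delta/kB."""
    x = delta / (kB * T)                      # x = beta * delta
    return x**2 * np.exp(x) / (1.0 + np.exp(x))**2

T = np.linspace(0.05, 3.0, 2000)              # temperatures in units of delta/kB
C = schottky_heat_capacity(T)
print("peak C/kB = %.3f at kB*T/delta = %.3f" % (C.max(), T[C.argmax()]))
# Expected output: peak C/kB of about 0.44 near kB*T/delta of about 0.42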
Schottky anomaly
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
458
[ "Thermodynamics stubs", "Thermodynamic properties", "Materials science stubs", "Physical quantities", "Quantity", "Phases of matter", "Materials science", "Thermodynamics", "Condensed matter physics", "Condensed matter stubs", "Physical chemistry stubs", "Matter" ]
14,370,049
https://en.wikipedia.org/wiki/Schwarzschild%20criterion
Discovered by Martin Schwarzschild, the Schwarzschild criterion is a criterion in astrophysics according to which a stellar medium is stable against convection when the rate of change of temperature (T) with altitude (z) satisfies \( -\frac{dT}{dz} < \frac{g}{c_p} \), where g is the gravitational acceleration and \( c_p \) is the heat capacity at constant pressure. If a gas is unstable against convection, then an element displaced upwards will keep rising because of its buoyancy, while an element displaced downwards is denser than its surroundings and will continue to sink. The Schwarzschild criterion therefore dictates whether an element of a star will rise or sink when displaced by random fluctuations within the star, or whether the forces the element experiences will return it to its original position. For the Schwarzschild criterion to hold, the displaced element must have a bulk velocity which is highly subsonic. If this is the case, the time over which the pressure surrounding the element changes is much longer than the time it takes for a sound wave to travel through the element and smooth out pressure differences between the element and its surroundings. If this were not the case, the element would not hold together as it traveled through the star. In order to keep rising or sinking in the star, the displaced element must not be able to reach the same density as the gas surrounding it. In other words, it must respond adiabatically to its surroundings. For this to be true, it must move fast enough that there is insufficient time for the element to exchange heat with its surroundings. The Schwarzschild criterion is often written as \( \left(\frac{dT}{dr}\right)_{\mathrm{ad}} > \left(\frac{dT}{dr}\right)_{\mathrm{rad}} \), which indicates that convection takes place whenever the adiabatic temperature gradient is less steep than the radiative temperature gradient (both gradients are usually negative). Stellar-structure models indicate that the two gradients are seldom of the same order of magnitude, so that the smaller can usually be neglected, even if both are always present. See also Archimedes' principle Brunt–Väisälä frequency Convection References Concepts in stellar astronomy
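A minimal numerical illustration of the first form of the criterion is sketched below in Python; it is added for clarity, is not taken from the cited references, and the atmospheric numbers used are rough assumed values chosen only to exercise the inequality.
def schwarzschild_stable(dT_dz, g, c_p):
    """Return True if a layer with temperature gradient dT_dz (K/m) is stable
    against convection, i.e. if -dT/dz is less than the adiabatic lapse rate g/c_p."""
    adiabatic_lapse_rate = g / c_p          # K per metre
    return -dT_dz < adiabatic_lapse_rate

# Example with rough, assumed values for dry air near Earth's surface:
g = 9.81          # m/s^2
c_p = 1005.0      # J/(kg K), dry air
print(schwarzschild_stable(dT_dz=-0.0065, g=g, c_p=c_p))  # -6.5 K/km -> True (stable)
print(schwarzschild_stable(dT_dz=-0.0120, g=g, c_p=c_p))  # -12 K/km  -> False (convective)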
Schwarzschild criterion
[ "Physics", "Astronomy" ]
403
[ "Concepts in astrophysics", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Concepts in stellar astronomy" ]
14,370,302
https://en.wikipedia.org/wiki/Transumbilical%20plane
The transumbilical plane or umbilical plane, one of the transverse planes in human anatomy, is a horizontal line that passes through the abdomen at the level of the navel (or umbilicus). In physical examination, clinicians use the transumbilical plane and its intersection with the median plane to divide the abdomen into four quadrants. References Animal anatomy Anatomical planes
Transumbilical plane
[ "Mathematics" ]
80
[ "Planes (geometry)", "Anatomical planes" ]
14,370,459
https://en.wikipedia.org/wiki/List%20of%20social%20bookmarking%20websites
A social bookmarking website is a centralized online service that allows users to store and share Internet bookmarks. Such a website typically offers a blend of social and organizational tools, such as annotation, categorization, folksonomy-based tagging, social cataloging and commenting. The website may also interface with other kinds of services, such as citation management software and social networking sites. Defunct sites See also Comparison of enterprise bookmarking platforms List of social software List of social networking services Comparison of reference management software Social network aggregation Notes and references bookmarking websites Social bookmarking
List of social bookmarking websites
[ "Technology" ]
121
[ "Mobile content", "Social software" ]
14,371,981
https://en.wikipedia.org/wiki/National%20Institute%20of%20Astrophysics%2C%20Optics%20and%20Electronics
The National Institute of Astrophysics, Optics and Electronics (in Spanish: Instituto Nacional de Astrofísica, Óptica y Electrónica, INAOE) is a Mexican science research institute located in Tonantzintla, Puebla. Founded by presidential decree on November 12, 1971, it has over 100 researchers in Astrophysics, Optics, Electronics and Computing Science, with postgraduate programs in these areas. INAOE is one of 30 public research centers sponsored by the National Council of Science and Technology of Mexico (CONACyT). The Institute, in partnership with the University of Massachusetts Amherst, developed the Large Millimeter Telescope / Gran Telescopio Milimétrico on the Puebla-Veracruz border. The asteroid 14674 INAOE was named after this institute. Structure There are four research departments with a number of research groups and laboratories: Astrophysics Coordinator: José Ramón Valdés Parra Research groups:   Visible Astronomical Instrumentation Laboratory and of High Energies (Esperanza Carrasco-Licea)   Millimeter Wavelength Instrumentation Laboratory   Fourier Spectroscopy Laboratory (Fabián Rosales)   Photographic Plates Collection (Raquel Díaz Hernández) Computer Sciences Coordinator: Ariel Carrasco Ochoa Research groups: Machine Learning and Pattern Recognition Reconfigurable and High Performance Computing Ubiquitous Computing and Processing Biosignal Processing and Medical Computing Robotics Language Technologies Computer Perception Cybersecurity Electronics Coordinator: Alfonso Torres Jacome Research groups: Microelectronics Integrated Circuit Design Electronic Instrumentation Communications Optics Coordinator: Fermín Granados Agustín Research groups: Science Group and Optoelectronics Engineering (CIOE) Image-Science Group and Digital Color Photonics Optics Instrumentation Quantum Optics Diffractive Optics Optoelectronics Imaging Science Biophotonics Optical Communications and Optoelectronics Optic Fibers Holography Imaging and Digital Color Optical Instrumentation Optical Microscopy and Dimensional Metrology Diffractive Optics Biomedical Optics Thin-films See also University of California High-Performance AstroComputing Center References Research institutes in Mexico Astrophysics research institutes Universities and colleges in Puebla Postgraduate schools in Mexico 1971 establishments in Mexico
National Institute of Astrophysics, Optics and Electronics
[ "Physics", "Astronomy" ]
417
[ "Astronomy organization stubs", "Astronomy stubs", "Astronomy organizations", "Astrophysics", "Astrophysics research institutes" ]
14,372,314
https://en.wikipedia.org/wiki/Lateral%20shoot
A lateral shoot, commonly known as a branch, is a part of a plant's shoot system that develops from axillary buds on the stem's surface, extending laterally from the plant's stem. Importance to photosynthesis As a plant grows it requires more energy, and it must also out-compete nearby plants for this energy. One of the ways a plant can compete for this energy is to increase its height; another is to increase its overall surface area. That is to say, the more lateral shoots a plant develops, the more foliage it can support, which increases how much photosynthesis the plant can perform because it provides more area for the plant to take up carbon dioxide and sunlight. Genes, transcription factors, and growth Through testing with Arabidopsis thaliana (a plant considered a model organism for plant genetic studies), genes including MAX1 and MAX2 have been found to affect the growth of lateral shoots. Knockouts of these genes cause abnormal shoot proliferation in the affected plants, implying that they repress this growth in wild-type plants. In another set of experiments with Arabidopsis thaliana testing genes involved in the plant hormone florigen, knocking out the two genes FT and TSF (abbreviations for Flowering Locus T and Twin Sister of FT) appeared to affect lateral shoots negatively. These mutants show slower growth and improper formation of lateral shoots, which could also mean that lateral shoots are important to florigen's function. Along with general growth, there are also transcription factors that directly affect the production of additional lateral shoots, such as the TCP family (Teosinte branched 1/cycloidea/proliferating cell factor), plant-specific proteins that suppress lateral shoot branching. Additionally, the TCP family has been found to be partially responsible for inhibiting the cell's growth-regulating factors, which means it also inhibits cell proliferation. See also Apical dominance Shoot apical meristem Auxin, another plant growth hormone Cell growth References Plant morphology Auxin action
Lateral shoot
[ "Biology" ]
426
[ "Plant morphology", "Plants" ]
14,372,436
https://en.wikipedia.org/wiki/Natural%20remanent%20magnetization
Natural remanent magnetization is the permanent magnetism of a rock or sediment. This preserves a record of the Earth's magnetic field at the time the mineral was laid down as sediment or crystallized in magma, and also of the tectonic movement of the rock over millions of years from its original position. Natural remanent magnetization forms the basis of paleomagnetism and magnetostratigraphy. Igneous rocks Natural remanent magnetism is important when studying igneous rocks, and the majority of such studies are based on them. This is because these rocks record the direction of the magnetic field at the time the rock was formed. By measuring the angle between the present magnetic field and the magnetization direction of the rocks, the inclination can be determined, as well as how much the magnetic field has moved. This is also the most common method used to obtain the remanence direction and strength. The main difficulty arises if the rocks have undergone significant weathering or are overlain by thick layers of sediments. (Shuang Liu, 2018) In 1906 Brunhes discovered Pliocene lavas in France whose magnetization directions were roughly opposite to the present field, with the field that normally points north and down recorded instead as pointing south. He was able to demonstrate that the baked igneous rocks were magnetized with similar polarity to the other igneous rocks. This led to the baked contact test, which can establish relative ages in areas of igneous rocks. (Neil Opdyke, 1996) Types There are several kinds of natural remanent magnetism that can occur in a sample. Many samples have more than one kind superimposed. Thermoremanent magnetization (TRM) is acquired during cooling through the Curie temperature of the magnetic minerals and is the best source of information on the past Earth's field. Magnetization formed by phase change, chemical action or growth of crystals at low temperature is called chemical remanent magnetization. Sediments acquire a depositional remanent magnetization during their formation or a post-depositional remanent magnetization afterwards. Some kinds of remanence are undesirable and must be removed before the useful remanence is measured. One is isothermal remanent magnetization, a component of natural remanent magnetism induced by exposing a particle to a large magnetic field, which flips its lower-coercivity magnetic moments to a field-favored direction. A commonly cited mechanism of isothermal remanent magnetization acquisition is through lightning strikes. Another is viscous remanent magnetization, a remanence acquired when the rock sits in the Earth's field for long periods. The most important component of remanence is acquired when a rock is formed. This is called its primary component or characteristic remanent magnetization. Any later component is called a secondary component. To separate these components, the natural remanent magnetism is stripped away in a stepwise manner using thermal or alternating field demagnetization techniques to reveal the characteristic magnetic component. But not "all magnetic changes resulting from mechanical shock can be removed by AF demagnetization". Marine oil-bearing sandstones are physically unstable mineralogies whose low-field susceptibility and isothermal remanent magnetization increase irreversibly, even after weak mechanical shocks and an AF demagnetization in 100 mT peak alternating fields. Chemical remanent magnetization in magnetite Magnetite is used for measuring chemical remanent magnetization. 
Since the magnetite grows in a magnetic field, its magnetization becomes blocked once the grains pass a certain critical size, and the mineral thereby acquires a chemical remanent magnetization. However, this concept and behavior are still not well understood. (Pick, 1991) A study was also conducted exploring what happens when magnetite undergoes low-temperature oxidation to maghemite. The results showed that this was not a truly effective approach, because the separation between the chemical remanent magnetization and the viscous remanent magnetization formed in the chosen field direction was not effective. (Gapeev, 1991) Uses Remanent magnetism specifically measures how much magnetism is left when the material is removed from a magnetic field. This is used to obtain information on the "concentration, mineralogy, and grain size of the magnetic material". It provides data on the minerals that contribute to the magnetic signal, as well as information on where those minerals come from, their occurrence in soils, and their magnetic behavior. (Singer, 2013) See also Rock magnetism Notes References Ferromagnetism Stratigraphy
Natural remanent magnetization
[ "Chemistry", "Materials_science" ]
909
[ "Magnetic ordering", "Ferromagnetism" ]
14,372,575
https://en.wikipedia.org/wiki/Karahafu
The karahafu is a type of curved gable found in Japanese architecture. It is used on Japanese castles, Buddhist temples, and Shinto shrines. Roofing materials such as tile and bark may be used as coverings. The face beneath the gable may be flush with the wall below, or it may terminate on a lower roof. History Although kara (唐) can be translated as meaning "China" or "Tang", this type of roof with undulating bargeboards is an invention of Japanese carpenters in the late Heian period. It was named thus because the word kara could also mean "peculiar" or "elegant", and was often added to names of objects considered grand or intricate regardless of origin. The karahafu developed during the Heian period and is shown in picture scrolls to decorate gates, corridors, and palanquins. The first known depiction of a karahafu appears on a miniature shrine () in Shōryoin shrine at Hōryū-ji in Nara. The karahafu and its building style (karahafu-zukuri) became increasingly popular during the Kamakura and Muromachi periods, when Japan witnessed a new wave of influences from the Asian continent. During the Kamakura period, Zen Buddhism spread to Japan and the karahafu was employed in many Zen temples. Initially, the karahafu was used only in temples and aristocratic gateways, but starting from the beginning of the Azuchi–Momoyama period, it became an important architectural element in the construction of daimyō mansions and castles. The daimyō's gateway with a karahafu roof was reserved for the shōgun during his onari visits to the retainer, or for the reception of the emperor at shogunate establishments. A structure associated with these social connections naturally imparted special meaning. Gates with a karahafu roof, the karamon (mon meaning "gate"), became a means to proclaim the prestige of a building and functioned as a symbol of both religious and secular architecture. In the Tokugawa shogunate, the karamon gates were a powerful symbol of authority reflected in architecture. Images See also Japanese architecture Japanese castle List of roof shapes Notes References Coaldrake, William. (1996). Architecture and Authority in Japan. London/New York: Routledge. . Sarvimaki Marja. (2000). Structures, Symbols and Meanings: Chinese and Korean Influence on Japanese Architecture. Helsinki University of Technology, Department of Architecture. . Sarvimaki Marja. (2003). "Layouts and Layers: Spatial Arrangements in Japan and Korea". Sungkyun Journal of East Asian Studies, Volume 3, No. 2. Retrieved on May 30, 2009. Parent, Mary Neighbour. (2003). Japanese Architecture and Art Net Users System. Japanese architectural features Roofs
Karahafu
[ "Technology", "Engineering" ]
581
[ "Structural system", "Structural engineering", "Roofs" ]
14,373,561
https://en.wikipedia.org/wiki/Shinya%20Yamanaka
Shinya Yamanaka is a Japanese stem cell researcher and a Nobel Prize laureate. He is a professor and the director emeritus of the Center for iPS Cell (induced Pluripotent Stem Cell) Research and Application, Kyoto University; a senior investigator at the UCSF-affiliated Gladstone Institutes in San Francisco, California; and a professor of anatomy at the University of California, San Francisco (UCSF). Yamanaka is also a past president of the International Society for Stem Cell Research (ISSCR). He received the 2010 BBVA Foundation Frontiers of Knowledge Award in the biomedicine category, the 2011 Wolf Prize in Medicine with Rudolf Jaenisch, and the 2012 Millennium Technology Prize together with Linus Torvalds. In 2012, he and John Gurdon were awarded the Nobel Prize for Physiology or Medicine for the discovery that mature cells can be converted to stem cells. In 2013, he was awarded the $3 million Breakthrough Prize in Life Sciences for his work. Education Yamanaka was born in Higashiōsaka, Japan, in 1962. After graduating from Tennōji High School attached to Osaka Kyoiku University, he received his M.D. degree at Kobe University in 1987 and his Ph.D. degree at Osaka City University, Graduate School of Medicine in 1993. After this, he went through a residency in orthopedic surgery at National Osaka Hospital and a postdoctoral fellowship at the Gladstone Institute of Cardiovascular Disease, San Francisco. Afterwards, he worked at the Gladstone Institutes in San Francisco, US, and Nara Institute of Science and Technology in Japan. Yamanaka is currently a professor and the director emeritus of the Center for iPS Cell Research and Application (CiRA), Kyoto University. He is also a senior investigator at the Gladstone Institutes. Professional career Between 1987 and 1989, Yamanaka was a resident in orthopedic surgery at the National Osaka Hospital. His first operation was to remove a benign tumor from his friend Shuichi Hirata, a task he had not completed after one hour, when a skilled surgeon would have taken ten minutes or so. Some seniors referred to him as "Jamanaka", a pun on the Japanese word for obstacle. From 1993 to 1996, he was at the Gladstone Institute of Cardiovascular Disease. Between 1996 and 1999, he was an assistant professor at Osaka City University Medical School, but found himself mostly looking after mice in the laboratory, not doing actual research. His wife advised him to become a practicing doctor, but instead he applied for a position at the Nara Institute of Science and Technology. He stated that he could and would clarify the characteristics of embryonic stem cells, and this can-do attitude won him the job. From 1999 to 2003, he was an associate professor there, and started the research that would later win him the 2012 Nobel Prize. He became a full professor and remained at the institute in that position from 2003 to 2005. Between 2004 and 2010, Yamanaka was a professor at the Institute for Frontier Medical Sciences, Kyoto University. Between 2010 and 2022, Yamanaka was the director and a professor at the Center for iPS Cell Research and Application (CiRA), Kyoto University. In April 2022, he stepped down as director and became director emeritus of CiRA, while retaining his professorship. In 2006, he and his team generated induced pluripotent stem cells (iPS cells) from adult mouse fibroblasts. iPS cells closely resemble embryonic stem cells, the in vitro equivalent of the part of the blastocyst (the embryo a few days after fertilization) which grows to become the embryo proper. 
They could show that his iPS cells were pluripotent, i.e. capable of generating all cell lineages of the body. Later he and his team generated iPS cells from human adult fibroblasts, again as the first group to do so. A key difference from previous attempts by the field was his team's use of multiple transcription factors, instead of transfecting one transcription factor per experiment. They started with 24 transcription factors known to be important in the early embryo, but could in the end reduce it to four transcription factors – Sox2, Oct4, Klf4 and c-Myc. Yamanaka's Nobel Prize–winning research in iPS cells The 2012 Nobel Prize in Physiology or Medicine was awarded jointly to Sir John B. Gurdon and Shinya Yamanaka "for the discovery that mature cells can be reprogrammed to become pluripotent." Background-different cell types There are different types of stem cells. These are some types of cells that will help in understanding the material. Background-different stem cell techniques Historical background The prevalent view during the early 20th century was that mature cells were permanently locked into the differentiated state and cannot return to a fully immature, pluripotent stem cell state. It was thought that cellular differentiation can only be a unidirectional process. Therefore, non-differentiated egg/early embryo cells can only develop into specialized cells. However, stem cells with limited potency (adult stem cells) remain in bone marrow, intestine, skin etc. to act as a source of cell replacement. The fact that differentiated cell types had specific patterns of proteins suggested irreversible epigenetic modifications or genetic alterations to be the cause of unidirectional cell differentiation. So, cells progressively become more restricted in the differentiation potential and eventually lose pluripotency. In 1962, John B. Gurdon demonstrated that the nucleus from a differentiated frog intestinal epithelial cell can generate a fully functional tadpole via transplantation to an enucleated egg. Gurdon used somatic cell nuclear transfer (SCNT) as a method to understand reprogramming and how cells change in specialization. He concluded that differentiated somatic cell nuclei had the potential to revert to pluripotency. This was a paradigm shift at the time. It showed that a differentiated cell nucleus has retained the capacity to successfully revert to an undifferentiated state, with the potential to restart development (pluripotent capacity). However, the question still remained whether an intact differentiated cell could be fully reprogrammed to become pluripotent. Yamanaka's research Shinya Yamanaka proved that introduction of a small set of transcription factors into a differentiated cell was sufficient to revert the cell to a pluripotent state. Yamanaka focused on factors that are important for maintaining pluripotency in embryonic stem (ES) cells. This was the first time an intact differentiated somatic cell could be reprogrammed to become pluripotent. Knowing that transcription factors were involved in the maintenance of the pluripotent state, he selected a set of 24 ES cell transcriptional factors as candidates to reinstate pluripotency in somatic cells. First, he collected the 24 candidate factors. When all 24 genes encoding these transcription factors were introduced into skin fibroblasts, few actually generated colonies that were remarkably similar to ES cells. 
Secondly, further experiments were conducted with smaller numbers of transcription factors added to identify the key factors, through a very simple and yet sensitive assay system. Lastly, he identified the four key genes. They found that 4 transcriptional factors (Myc, Oct3/4, Sox2 and Klf4) were sufficient to convert mouse embryonic or adult fibroblasts to pluripotent stem cells (capable of producing teratomas in vivo and contributing to chimeric mice). These pluripotent cells are called iPS (induced pluripotent stem) cells; they appeared with very low frequency. iPS cells can be selected by inserting the b-geo gene into the Fbx15 locus. The Fbx15 promoter is active in pluripotent stem cells which induce b-geo expression, which in turn gives rise to G418 resistance; this resistance helps us identify the iPS cells in culture. Moreover, in 2007, Yamanaka and his colleagues found iPS cells with germline transmission (via selecting for Oct4 or Nanog gene). Also in 2007, they were the first to produce human iPS cells. Some issues that current methods of induced pluripotency face are the very low production rate of iPS cells and the fact that the 4 transcriptional factors are shown to be oncogenic. In July 2014, during a scandal involving Japanese stem cell researcher Haruko Obokata fabricating data, doctoring images, and plagiarizing the work of others, Yamanaka faced public scrutiny for his associated work lacking full documentation. Yamanaka denied manipulating images in his papers on embryonic mouse stem cells, but he could not find lab notes to confirm that the raw data was consistent with the published results. Further research and future prospects Since the original discovery by Yamanaka, much further research has been done in this field, and many improvements have been made to the technology. Improvements made to Yamanaka's research as well as future prospects of his findings are as follows: The delivery mechanism of pluripotency factors has been improved. At first retroviral vectors, that integrate randomly in the genome and cause deregulation of genes that contribute to tumor formation, were used. However, now, non-integrating viruses, stabilised RNAs or proteins, or episomal plasmids (integration-free delivery mechanism) are used. Transcription factors required for inducing pluripotency in different cell types have been identified (e.g. neural stem cells). Small substitutive molecules were identified, that can substitute for the function of the transcription factors. Transdifferentiation experiments were carried out. They tried to change the cell fate without proceeding through a pluripotent state. They were able to systematically identify genes that carry out transdifferentiation using combinations of transcription factors that induce cell fate switches. They found trandifferentiation within germ layer and between germ layers, e.g., exocrine cells to endocrine cells, fibroblast cells to myoblast cells, fibroblast cells to cardiomyocyte cells, fibroblast cells to neurons Cell replacement therapy with iPS cells is a possibility. Stem cells can replace diseased or lost cells in degenerative disorders and they are less prone to immune rejection. However, there is a danger that it may introduce mutations or other genomic abnormalities that render it unsuitable for cell therapy. So, there are still many challenges, but it is a very exciting and promising research area. Further work is required to guarantee safety for patients. 
iPS cells from patients with genetic and other disorders can be used medically to gain insights into the disease process. - Amyotrophic lateral sclerosis (ALS), Rett syndrome, spinal muscular atrophy (SMA), α1-antitrypsin deficiency, familial hypercholesterolemia and glycogen storage disease type 1A. - For cardiovascular disease, Timothy syndrome, LEOPARD syndrome, type 1 and 2 long QT syndrome - Alzheimer's, Spinocerebellar ataxia, Huntington's etc. iPS cells provide screening platforms for the development and validation of therapeutic compounds. For example, kinetin was a novel compound identified using iPS cells from familial dysautonomia patients, and beta blockers and ion channel blockers for long QT syndrome were identified with iPS cells. Yamanaka's research has "opened a new door and the world's scientists have set forth on a long journey of exploration, hoping to find our cells' true potential." In 2013, iPS cells were used to generate a human vascularized and functional liver in mice in Japan. Multiple stem cells were used to differentiate the component parts of the liver, which then self-organized into the complex structure. When placed into a mouse host, the liver vessels connected to the host's vessels and the tissue performed normal liver functions, including breaking down drugs and producing liver secretions. In 2022, Yamanaka factors were shown to affect age-related measures in aged mice. Recognition In 2007, Yamanaka was recognized as a "Person Who Mattered" in the Time Person of the Year edition of Time magazine. Yamanaka was also nominated as a 2008 Time 100 Finalist. In June 2010, Yamanaka was awarded the Kyoto Prize for reprogramming adult skin cells to pluripotential precursors. Yamanaka developed the method as an alternative to embryonic stem cells, thus circumventing an approach in which embryos would be destroyed. In May 2010, Yamanaka was given an honorary Doctor of Science degree by Mount Sinai School of Medicine. In September 2010, he was awarded the Balzan Prize for his work on biology and stem cells. Yamanaka has been listed as one of the 15 Asian Scientists To Watch by Asian Scientist magazine on May 15, 2011. In June 2011, he was awarded the inaugural McEwen Award for Innovation; he shared the $100,000 prize with Kazutoshi Takahashi, who was the lead author on the paper describing the generation of induced pluripotent stem cells. In June 2012, he was awarded the Millennium Technology Prize for his work in stem cells. He shared the 1.2 million euro prize with Linus Torvalds, the creator of the Linux kernel. In October 2012, he and fellow stem cell researcher John Gurdon were awarded the Nobel Prize in Physiology or Medicine "for the discovery that mature cells can be reprogrammed to become pluripotent." 2007 – Osaka Science Prize 2007 – Inoue Prize for Science 2007 – Asahi Prize 2007 – Meyenburg Cancer Research Award 2008 – Yamazaki-Teiichi Prize in Biological Science & Technology 2008 – Robert Koch Prize 2008 – Medals of Honor (Japan) (with purple ribbon) 2008 – Shaw Prize in Life Science & Medicine 2008 – Sankyo Takamine Memorial Award 2008 – Massry Prize from the Keck School of Medicine, University of Southern California 2008 – Golden Plate Award of the American Academy of Achievement 2009 – Lewis S. 
Rosenstiel Award for Distinguished Work in Basic Medical Research 2009 – Gairdner Foundation International Award 2009 – Albert Lasker Award for Basic Medical Research 2010 – Balzan Prize for Stem Cells: Biology and potential applications 2010 – March of Dimes Prize in Developmental Biology 2010 – Kyoto Prize in Biotechnology and medical technology 2010 – Person of Cultural Merit 2010 – BBVA Foundation Frontiers of Knowledge Award in the Biomedicine Category 2011 – Albany Medical Center Prize in biomedicine 2011 – Wolf Prize in Medicine 2011 – King Faisal International Prize for Medicine 2011 – McEwen Award for Innovation 2012 – Millennium Technology Prize 2012 – Fellow of the National Academy of Sciences 2012 – Nobel Prize in Physiology or Medicine 2012 – Order of Culture 2013 – Breakthrough Prize in Life Sciences 2013 – Member of the Pontifical Academy of Sciences 2014 – UCSF 150th Anniversary Alumni Excellence Awards 2016 – Honorable Emeritus Professor, Hiroshima University Interest in sports Yamanaka practiced judo (2nd Dan black belt) and played rugby as a university student. He also has a history of running marathons. After a 20-year gap, he competed in the inaugural Osaka Marathon in 2011 as a charity runner with a time of 4:29:53. He took part in Kyoto Marathon to raise money for iPS research since 2012. His personal best is 3:25:20 at 2018 Beppu-Ōita Marathon. See also Catherine Verfaillie List of Japanese Nobel laureates List of Nobel laureates affiliated with Kyoto University Tasuku Honjo References General references: The Discovery and Future of Induced Pluripotent Stem (iPS) Cloning and Stem Cell Discoveries Earn Nobel in Medicine (New York Times, October 8, 2012) Specific citations: External links Shinya Yamanaka, Center for iPS Cell Research and Application (CiRA), Kyoto University International Society for Stem Cell Research (ISSCR) 1962 births Living people 21st-century Japanese biologists Japanese Nobel laureates Academic staff of Kyoto University People from Higashiōsaka Cell biologists Stem cell researchers Biogerontologists Wolf Prize in Medicine laureates Laureates of the Imperial Prize Nobel laureates in Physiology or Medicine Foreign associates of the National Academy of Sciences Members of the French Academy of Sciences Recipients of the Order of Culture Recipients of the Albert Lasker Award for Basic Medical Research Members of the Pontifical Academy of Sciences Kobe University alumni Articles containing video clips Academic staff of Nara Institute of Science and Technology University of California, San Francisco faculty University of California, San Francisco alumni Members of the National Academy of Medicine Kyoto laureates in Advanced Technology
Shinya Yamanaka
[ "Biology" ]
3,418
[ "Stem cell researchers", "Stem cell research" ]
14,375,609
https://en.wikipedia.org/wiki/Ioflupane%20%28123I%29
{{DISPLAYTITLE:Ioflupane (123I)}} Ioflupane (123I) is the international nonproprietary name (INN) of a cocaine analogue which is a neuro-imaging radiopharmaceutical drug, used in nuclear medicine for the diagnosis of Parkinson's disease and the differential diagnosis of Parkinson's disease over other disorders presenting similar symptoms. During the DaT scan procedure it is injected into a patient and viewed with a gamma camera in order to acquire SPECT images of the brain with particular respect to the striatum, a subcortical region of the basal ganglia. The drug is sold under the brand name Datscan and is manufactured by GE Healthcare, formerly Amersham plc. Pharmacology Datscan is a solution of ioflupane (123I) for injection into a living test subject. The iodine introduced during manufacture is a radioactive isotope, iodine-123, and it is the gamma decay of this isotope that is detectable to a gamma camera. 123I has a half-life of approximately 13 hours and a gamma photon energy of 159 keV making it an appropriate radionuclide for medical imaging. The solution also contains 5% ethanol to aid solubility and is supplied sterile since it is intended for intravenous use. Ioflupane has a high binding affinity for presynaptic dopamine transporters (DAT) in the brains of mammals, in particular the striatal region of the brain. A feature of Parkinson's disease is a marked reduction in dopaminergic neurons in the striatal region. By introducing an agent that binds to the dopamine transporters a quantitative measure and spatial distribution of the transporters can be obtained. Method of administration The Datscan solution is supplied ready to inject with a certificate stating the calibration activity and time. The nominal injection activity is 185 MBq and a scan should not be performed with less than 111 MBq. Thyroid blocking via oral administration of 120 mg potassium iodide is recommended to minimize unnecessary excessive uptake of radioiodine. This is typically given 1–4 hours before the injection. The most convenient way to administer the IV dose is via a peripheral intravenous cannula. The scan is carried out 3 to 6 hours post injection. Pharmacokinetics Blood clearance of the radionuclide is rapid in healthy volunteers. Radioactivity was 4.5% of the injected amount 5 min after injection of ioflupane (123I), falling to 2.2% at 30 min, 1.9% at 5 h, and declining to 1.3% at 24 h and 1.1% at 48 h after injection. Values were similar in both whole blood and plasma. Excretion was primarily renal. Risks Common side effects of ioflupane (123I) are headache, vertigo, increased appetite and formication. Less than 1% of patients experience pain at the injection site. The radiation risks are reported as low. The committed effective dose for a single investigation on a 70 kg individual is 4.6 mSv. Pregnant patients should not undergo the test. It is not known if 123I-ioflupane is secreted in breast milk however it is recommended that breastfeeding be interrupted for three days after administration. See also List of cocaine analogues References Radiopharmaceuticals Neuroimaging Medical physics Tropanes Dopamine reuptake inhibitors Stimulants Organofluorides 4-Iodophenyl compounds Extrapyramidal and movement disorders
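As a purely illustrative aside (not taken from the product documentation), the short Python sketch below shows how the roughly 13-hour half-life quoted above constrains the usable window between the nominal 185 MBq dose and the 111 MBq minimum; the function and variable names are placeholders.
import math

HALF_LIFE_H = 13.0   # approximate iodine-123 half-life in hours (from the text)

def activity_after(a0_mbq, hours):
    """Activity (MBq) remaining after `hours`, assuming simple exponential decay."""
    return a0_mbq * 0.5 ** (hours / HALF_LIFE_H)

# A dose calibrated at the nominal 185 MBq decays below the 111 MBq minimum
# after roughly 9-10 hours:
for t in (0, 3, 6, 9, 12):
    print(t, "h:", round(activity_after(185.0, t), 1), "MBq")
# Approximate output: 185, 158, 134, 114, 98 MBq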
Ioflupane (123I)
[ "Physics", "Chemistry" ]
744
[ "Applied and interdisciplinary physics", "Medicinal radiochemistry", "Radiopharmaceuticals", "Medical physics", "Chemicals in medicine" ]
14,378,396
https://en.wikipedia.org/wiki/Mooning%20the%20Cog
Mooning the Cog is a tradition in which hikers bare their buttocks to the trains of the Cog Railway on Mount Washington, the highest peak in New Hampshire. Description Mooning of the Mount Washington Cog Railway trains is most commonly done by thru-hikers, as they pass by on the Appalachian Trail. It is a tradition, believed to date to at least 1987, in which, as the train passes the trail, some hikers choose to drop their drawers and "moon" the passengers. There are several theories as to the reasons for this tradition. One holds that it is an act of protest against the smoke, steam, and noise pollution generated by the railroad, which is known as the "Smog Railway" to some hikers. According to others, it is a reference to the train's original name, "The Railway to the Moon". The practice, though longstanding, is considered offensive by some of the Cog Railway's passengers. An off-duty New Hampshire State Trooper and a Forest Ranger began riding the train and arresting hikers who mooned it. During the autumn of 2007, eight hikers were arrested and were to be charged in a federal court, as the act took place in a National Forest. Sources Appalachian Trail Mount Washington (New Hampshire) Civil disobedience in the United States Gestures Nudity in the United States Protest tactics Buttocks
Mooning the Cog
[ "Biology" ]
285
[ "Behavior", "Gestures", "Human behavior" ]
7,098,529
https://en.wikipedia.org/wiki/Fusion%20of%20horizons
In the philosophy of Hans-Georg Gadamer, a fusion of horizons () is the process through which the members of a hermeneutical dialogue establish the broader context within which they come to a shared understanding. In phenomenology, a horizon refers to the context within which any meaningful presentation is contained. For Gadamer, we exist neither in closed horizons, nor within a horizon that is unique; we must reject both the assumption of absolute knowledge, that universal history can be articulated within a single horizon, and the assumption of objectivity, that we can "forget ourselves" in order to achieve an objective perspective of the other participant. According to Gadamer, since it is not possible to totally remove oneself from one's own broader context (e.g. background, history, culture, gender, language, education, etc.) into an entirely different system of attitudes, beliefs and ways of thinking, in order to gain an understanding from a conversation or dialogue about different cultures we must acquire, through negotiation, "the right horizon of inquiry for the questions evoked by the encounter with tradition"; in order to come to an agreement, the participants must establish a shared context through this "fusion" of their horizons. See also Horizon of expectation Perspectivism Notes References Concepts in epistemology Hans-Georg Gadamer Hermeneutics Phenomenology Social epistemology
Fusion of horizons
[ "Technology" ]
291
[ "Social epistemology", "Science and technology studies" ]
7,098,580
https://en.wikipedia.org/wiki/Elizabeth%20Rather
Elizabeth "Bess" D. Rather (born 1940) is the co-founder of FORTH, Inc. and is a leading expert in the Forth programming language. She became involved with Forth while she was at the University of Arizona, but working part-time for National Radio Astronomy Observatory (NRAO). While she initially aimed to rewrite their systems (written in Forth) in FORTRAN, her discovery of the power of Forth convinced her to leave the University to work for NRAO and Kitt Peak National Observatory, where she wrote the first Forth manual and started popularizing the language in the scientific community. She co-founded FORTH, Inc. with Charles Moore in 1973. Since then, she has become an expert in the language and one of its main proponents. She is an author of several books on the subject and has given many training seminars on its usage. From 1980 to 2006 she was President of FORTH, Inc., headquartered in the Los Angeles area. From 1986 to 1994, she was chair of the Technical Committee X3J14 that developed the ANSI Standard (X3.215-1994) for the Forth programming language. In 2006, she retired and lives in Hawaii, but continues with occasional Forth-related writing and teaching projects. Publications References External links FORTH, Inc. Living people 1940 births University of Arizona people American computer programmers American computer scientists American women computer scientists 21st-century American women
Elizabeth Rather
[ "Technology" ]
286
[ "Computing stubs", "Computer specialist stubs" ]
7,098,644
https://en.wikipedia.org/wiki/System%20Fault%20Tolerance
In computing, System Fault Tolerance (SFT) is a fault tolerant system built into NetWare operating systems. Three levels of fault tolerance exist: SFT I 'Hot Fix' maps out bad disk blocks on the file system level to help ensure data integrity (fault tolerance on the disk-block level) SFT II provides a disk mirroring or duplexing system based on RAID 1; mirroring refers to two disk drives holding the same data, duplexing uses two data channels/controllers to connect the disks (fault tolerance on the disk level and optionally on the data-channel level). SFT III is a server duplexing scheme where if a server fails, a constantly synchronized server seamlessly takes its place (fault tolerance on the system level). References Novell NetWare 4.2 documentation Novell NetWare
System Fault Tolerance
[ "Technology" ]
193
[ "Computing stubs", "Computer science", "Computer science stubs" ]
7,100,728
https://en.wikipedia.org/wiki/Quantum%20vortex
In physics, a quantum vortex represents a quantized flux circulation of some physical quantity. In most cases, quantum vortices are a type of topological defect exhibited in superfluids and superconductors. The existence of quantum vortices was first predicted by Lars Onsager in 1949 in connection with superfluid helium. Onsager reasoned that quantisation of vorticity is a direct consequence of the existence of a superfluid order parameter as a spatially continuous wavefunction. Onsager also pointed out that quantum vortices describe the circulation of superfluid and conjectured that their excitations are responsible for superfluid phase transitions. These ideas of Onsager were further developed by Richard Feynman in 1955 and in 1957 were applied by Alexei Alexeyevich Abrikosov to describe the magnetic phase diagram of type-II superconductors. In 1935 Fritz London published a very closely related work on magnetic flux quantization in superconductors; London's fluxoid can also be viewed as a quantum vortex. Quantum vortices are observed experimentally in type-II superconductors (the Abrikosov vortex), liquid helium, and atomic gases (see Bose–Einstein condensate), as well as in photon fields (optical vortex) and exciton-polariton superfluids. In a superfluid, a quantum vortex "carries" quantized orbital angular momentum, thus allowing the superfluid to rotate; in a superconductor, the vortex carries quantized magnetic flux. The term "quantum vortex" is also used in the study of few-body problems. Under the de Broglie–Bohm theory, it is possible to derive a "velocity field" from the wave function. In this context, quantum vortices are zeros of the wave function, around which this velocity field has a solenoidal shape, similar to that of an irrotational vortex in potential flows of traditional fluid dynamics. Vortex-quantisation in a superfluid In a superfluid, a quantum vortex is a hole with the superfluid circulating around the vortex axis; the inside of the vortex may contain excited particles, air, vacuum, etc. The thickness of the vortex depends on a variety of factors; in liquid helium, the thickness is of the order of a few angstroms. A superfluid has the special property of having a phase, given by the wavefunction, and the velocity of the superfluid is proportional to the gradient of the phase (in the parabolic mass approximation). The circulation around any closed loop in the superfluid is zero if the region enclosed is simply connected. The superfluid is deemed irrotational; however, if the enclosed region actually contains a smaller region with an absence of superfluid, for example a rod through the superfluid or a vortex, then the circulation is \( \oint_C \mathbf{v}\cdot d\mathbf{l} = \frac{\hbar}{m}\oint_C \nabla\phi_v\cdot d\mathbf{l} = \frac{\hbar}{m}\,\Delta\phi_{\mathrm{tot}} \), where \( \hbar \) is the Planck constant divided by \( 2\pi \), m is the mass of the superfluid particle, and \( \Delta\phi_{\mathrm{tot}} \) is the total phase difference around the vortex. Because the wave-function must return to its same value after an integer number of turns around the vortex (similar to what is described in the Bohr model), \( \Delta\phi_{\mathrm{tot}} = 2\pi n \), where n is an integer. Thus, the circulation is quantized: \( \oint_C \mathbf{v}\cdot d\mathbf{l} = \frac{2\pi\hbar}{m}\,n \equiv \frac{h}{m}\,n \). London's flux quantization in a superconductor A principal property of superconductors is that they expel magnetic fields; this is called the Meissner effect. If the magnetic field becomes sufficiently strong it will, in some cases, "quench" the superconductive state by inducing a phase transition. In other cases, however, it will be energetically favorable for the superconductor to form a lattice of quantum vortices, which carry quantized magnetic flux through the superconductor. 
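For a rough sense of scale, the following Python sketch (added for illustration; the constants are standard values and the variable names are placeholders, not taken from the article's sources) evaluates the quantum of circulation h/m for helium-4 and, for comparison, the superconducting flux quantum h/(2e) that appears in the flux-quantization argument developed below.
h = 6.62607015e-34        # Planck constant, J s
m_he4 = 6.6464731e-27     # mass of a helium-4 atom, kg
e = 1.602176634e-19       # elementary charge, C

kappa_0 = h / m_he4       # quantum of circulation in superfluid helium-4
phi_0 = h / (2 * e)       # magnetic flux quantum (Cooper pairs carry charge 2e)

print("circulation quantum h/m: %.2e m^2/s" % kappa_0)   # about 1.0e-7 m^2/s
print("flux quantum h/2e: %.2e Wb" % phi_0)               # about 2.07e-15 Wb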
A superconductor that is capable of supporting vortex lattices is called a type-II superconductor; vortex quantization in superconductors is general. Over some enclosed area S bounded by the contour ∂S, the magnetic flux is \( \Phi = \iint_S \mathbf{B}\cdot d\mathbf{S} = \oint_{\partial S} \mathbf{A}\cdot d\mathbf{l} \), where \( \mathbf{A} \) is the vector potential of the magnetic induction \( \mathbf{B} \). Substituting a result of London's equation, \( \mathbf{j}_s = -\frac{n_s e_s^2}{m}\mathbf{A} + \frac{n_s e_s \hbar}{m}\nabla\phi \), we find (with \( \mathbf{B} = \nabla\times\mathbf{A} \)): \( \Phi = -\frac{m}{n_s e_s^2}\oint_{\partial S}\mathbf{j}_s\cdot d\mathbf{l} + \frac{\hbar}{e_s}\oint_{\partial S}\nabla\phi\cdot d\mathbf{l} \), where \( n_s \), m, and \( e_s \) are, respectively, the number density, mass, and charge of the Cooper pairs. If the region S is large enough so that \( \mathbf{j}_s = 0 \) along \( \partial S \), then \( \Phi = \frac{\hbar}{e_s}\oint_{\partial S}\nabla\phi\cdot d\mathbf{l} = \frac{2\pi\hbar}{e_s}\,n = \frac{h}{e_s}\,n \). The flow of current can cause vortices in a superconductor to move, causing an electric field due to the phenomenon of electromagnetic induction. This leads to energy dissipation and causes the material to display a small amount of electrical resistance while in the superconducting state. Constrained vortices in ferromagnets and antiferromagnets The vortex states in ferromagnetic or antiferromagnetic materials are also important, mainly for information technology. They are exceptional, since in contrast to superfluids or superconducting material the mathematics involved is more subtle: instead of the usual relation in which the curl of the velocity field is proportional to the vorticity \( \vec{\Omega}(x,y,z,t) \) concentrated on the vortex line as a Dirac δ-function, the analogous relation here involves the magnetization vector \( \vec{m}(x,y,z,t) \), which at any point and at any time is subject to the constraint \( m_x^2 + m_y^2 + m_z^2 = M_s^2 \). Here \( M_s \) is constant, the constant magnitude of the non-constant magnetization vector \( \vec{m} \). As a consequence, the vector appearing on the right-hand side of the vortex equation is modified to a more complex effective entity. This leads, among other points, to the following fact: In ferromagnetic or antiferromagnetic material a vortex can be moved to generate bits for information storage and recognition, corresponding, e.g., to changes of the quantum number n. But although the magnetization has the usual azimuthal direction, and although one has vorticity quantization as in superfluids, as long as the circular integration lines surround the central axis at far enough perpendicular distance, this apparent vortex magnetization will change with the distance from an azimuthal direction to an upward or downward one as soon as the vortex center is approached. Thus, for each directional element there are now not two, but four bits to be stored by a change of vorticity: the first two bits concern the sense of rotation, clockwise or counterclockwise; the remaining bits three and four concern the polarization of the central singular line, which may be polarized up- or downwards. The change of rotation and/or polarization involves subtle topology. Statistical mechanics of vortex lines As first discussed by Onsager and Feynman, if the temperature in a superfluid or a superconductor is raised, the vortex loops undergo a second-order phase transition. This happens when the configurational entropy overcomes the Boltzmann factor, which suppresses the thermal generation of vortex lines. The lines form a condensate. Since the centres of the lines, the vortex cores, are normal liquid or normal conductors, respectively, the condensation transforms the superfluid or superconductor into the normal state. The ensembles of vortex lines and their phase transitions can be described efficiently by a gauge theory. Statistical mechanics of point vortices In 1949 Onsager analysed a toy model consisting of a neutral system of point vortices confined to a finite area. He was able to show that, due to the properties of two-dimensional point vortices, the bounded area (and consequently the bounded phase space) allows the system to exhibit negative temperatures. 
Onsager provided the first prediction that some isolated systems can exhibit negative Boltzmann temperature. Onsager's prediction was confirmed experimentally for a system of quantum vortices in a Bose-Einstein condensate in 2019. Pair-interactions of quantum vortices In a nonlinear quantum fluid, the dynamics and configurations of the vortex cores can be studied in terms of effective vortex–vortex pair interactions. The effective intervortex potential is predicted to affect quantum phase transitions and to give rise to different few-vortex molecules and many-body vortex patterns. Preliminary experiments in the specific system of exciton-polariton fluids showed an effective attractive–repulsive intervortex dynamics between two cowinding vortices, whose attractive component can be modulated by the amount of nonlinearity in the fluid. Spontaneous vortices Quantum vortices can form via the Kibble–Zurek mechanism. As a condensate forms by quench cooling, separate protocondensates form with independent phases. As these phase domains merge, quantum vortices can be trapped in the emerging condensate order parameter. Spontaneous quantum vortices were observed in atomic Bose–Einstein condensates in 2008. See also Vortex Optical vortex Macroscopic quantum phenomena Abrikosov vortex Josephson vortex Fractional vortices Superfluid helium-4 Superfluid film Superconductor Type-II superconductor Type-1.5 superconductor Quantum turbulence Bose–Einstein condensate Negative temperature References Vortices Quantum mechanics Superconductivity Superfluidity
Quantum vortex
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,859
[ "Electrical resistance and conductance", "Physical phenomena", "Phase transitions", "Physical quantities", "Vortices", "Superconductivity", "Phases of matter", "Theoretical physics", "Quantum mechanics", "Superfluidity", "Materials science", "Condensed matter physics", "Exotic matter", "Dy...
7,101,410
https://en.wikipedia.org/wiki/Solar%20shingle
Solar shingles, also called photovoltaic shingles, are solar panels designed to look like and function as conventional roofing materials, such as asphalt shingles or slate, while also producing electricity. Solar shingles are a type of solar energy solution known as building-integrated photovoltaics (BIPV). There are several varieties of solar shingles, including shingle-sized solid panels that take the place of a number of conventional shingles in a strip, semi-rigid designs containing several silicon solar cells that are sized more like conventional shingles, and newer systems using various thin-film solar cell technologies that match conventional shingles both in size and flexibility. There are also products that use a more traditional number of silicon solar cells per panel and reach as much as a 100-watt DC rating per shingle. Solar shingles are manufactured by several companies. History Solar shingles became commercially available in 2005. In a 2009 interview with Reuters, a spokesperson for the Dow Chemical Company estimated that their entry into the solar shingle market would generate $5 billion in revenue by 2015 and $10 billion by 2020. Dow solar shingles, known as the POWERHOUSE Solar System, first became available in Colorado in October 2011. A third generation of the POWERHOUSE Solar System was exclusively licensed to RGS Energy for commercialization from 2017 until 2020, when RGS Energy filed for bankruptcy. In October 2016, Tesla entered the solar shingle space in a joint venture with SolarCity. Tesla later acquired SolarCity, and the solar shingle product was described as "a flop" in 2019. Solar marketplace provider EnergySage reviewed the product, now named the Tesla Solar Roof, noting that it had "experienced significant setbacks that have delayed its design, production, and deployment." In January 2022, GAF Materials Corporation announced they would start selling a solar shingle product. Description Solar shingles are photovoltaic modules, capturing sunlight and transforming it into electricity. Most solar shingles can be stapled directly to the roofing cloth. When applied, they have a strip of exposed surface. Different models of shingles have different mounting requirements. Some can be applied directly onto roofing felt intermixed with regular asphalt shingles, while others may need special installation. Some early manufacturers used thin-film solar technologies such as CIGS to produce electricity, which are less common in the solar industry than silicon-based cells. Current manufacturers, such as RGS Energy, CertainTeed, and SunTegra, have chosen to use industry-standard monocrystalline or polycrystalline silicon solar cells in their POWERHOUSE 3.0, Apollo II, and SunTegra Shingle, respectively. The installation methods for some solar shingle solutions can be easier than those for traditional panel installations, because they avoid the need to locate rafters and use a process much more similar to installing asphalt shingles than to installing standard solar panels. Other solar shingles, such as the Tesla Solar Roof, are much more difficult and expensive to install, requiring the removal and replacement of the existing roof. Solar shingled roofs tend to have a deep, dark, purplish-blue or black color, and therefore look similar to other roofs in most situations. Homeowners may prefer solar shingles because large solar panels can be highly visible and spoil the aesthetics of the house.
Cost The cost of solar shingles can range from $3.80 per watt up to $9.00 per watt installed, depending on the manufacturer, technology used, and system size. As of May 2019, the average cost of a traditional, roof-mounted residential solar panel installation in the United States was just above $3.00 per watt, according to the Solar Energy Industry Association. While solar shingles are typically more expensive to install than traditional solar panels, some companies have, since 2014, made strides to narrow the gap between the installed cost of going solar with panels and going solar with shingles. According to Dow Chemical Company reports, a typical residential install consisting of 350 solar shingles can cost at least $20,000; however, depending on the location, federal and state incentives might significantly bring down the cost. Solar contractors typically offer homeowners a full-service price for solar installation, which includes equipment purchasing, permit preparation and filing, registration with the local utility company, workmanship warranties, and complete on-site installation. Because photovoltaic solutions produce power in the form of direct current (DC) and the standard in homes is alternating current (AC), all grid-connected solar installations include an inverter to convert DC to AC. See also Building-integrated photovoltaics Energy development Green technology Solar energy Thin film solar on metal roofs References Solar architecture Solar cells Photovoltaics Sustainable building Roofing materials
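The per-watt figures above translate directly into a rough installed-cost estimate. The sketch below is illustrative only: the 6 kW system size, the chosen per-watt prices, and the 30% incentive rate are assumptions, not figures from the article.

```python
# Rough installed-cost comparison: solar shingles vs. conventional panels.
# The article gives ranges of $3.80-$9.00/W for shingles and ~$3.00/W for panels;
# the system size and incentive rate below are hypothetical.
def installed_cost(system_watts: float, price_per_watt: float, incentive_rate: float = 0.0) -> float:
    """Gross cost minus a simple percentage incentive (e.g., a tax credit)."""
    gross = system_watts * price_per_watt
    return gross * (1.0 - incentive_rate)

SYSTEM_W = 6_000  # hypothetical 6 kW residential system

shingle_low = installed_cost(SYSTEM_W, 3.80, incentive_rate=0.30)
shingle_high = installed_cost(SYSTEM_W, 9.00, incentive_rate=0.30)
panel_typical = installed_cost(SYSTEM_W, 3.00, incentive_rate=0.30)

print(f"Shingles (after incentive): ${shingle_low:,.0f} to ${shingle_high:,.0f}")
print(f"Panels (after incentive):   ${panel_typical:,.0f}")
```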
Solar shingle
[ "Engineering" ]
963
[ "Construction", "Sustainable building", "Building engineering" ]
7,102,158
https://en.wikipedia.org/wiki/A514%20steel
A514 is a particular type of high-strength steel, a quenched and tempered alloy steel, with a yield strength of 100,000 psi (100 ksi, or approximately 700 MPa). The ArcelorMittal trademarked name is T-1. A514 is primarily used as a structural steel for building construction. A517 is a closely related alloy that is used for the production of high-strength pressure vessels. The standard is set by ASTM International, a voluntary standards development organization that sets technical standards for materials, products, systems, and services. Specifications A514 The tensile yield strength of A514 alloys is specified as at least 100 ksi (about 690 MPa) for plate up to a given thickness, together with a minimum ultimate tensile strength that must fall within a specified range. Thicker plates have somewhat lower specified yield and ultimate strengths. A517 A517 steel has equal tensile yield strength, but a slightly higher specified ultimate strength, with separate values given for thinner and thicker plates. Usage A514 steels are used where a weldable, machinable, very high strength steel is required to save weight or meet ultimate strength requirements. They are normally used as structural steel in building construction, cranes, or other large machines supporting high loads. In addition, A514 steels are specified by military standards (ETL 18-11) for use as small-arms firing range baffles and deflector plates. References Steels Structural steel
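Because the specification mixes US customary and SI units, the small sketch below shows where the "approximately 700 MPa" figure comes from; only the 100 ksi yield value is taken from the text, and the helper name is arbitrary.

```python
# Convert the A514 specified minimum yield strength from ksi to MPa.
# 1 ksi = 1,000 psi = 6.894757 MPa.
KSI_TO_MPA = 6.894757

def ksi_to_mpa(ksi: float) -> float:
    return ksi * KSI_TO_MPA

yield_ksi = 100.0  # specified yield strength of A514 from the article
print(f"{yield_ksi:.0f} ksi = {ksi_to_mpa(yield_ksi):.0f} MPa")  # ~689 MPa, i.e. roughly 700 MPa
```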
A514 steel
[ "Engineering" ]
308
[ "Steels", "Structural engineering", "Alloys", "Structural steel" ]
7,102,231
https://en.wikipedia.org/wiki/Ericoid%20mycorrhiza
The ericoid mycorrhiza is a mutualistic relationship formed between members of the plant family Ericaceae and several lineages of mycorrhizal fungi. This symbiosis represents an important adaptation to the acidic and nutrient-poor soils that species in the Ericaceae typically inhabit, including boreal forests, bogs, and heathlands. Molecular clock estimates suggest that the symbiosis originated approximately 140 million years ago. Structure and function Ericoid mycorrhizas are characterized by fungal coils that form in the epidermal cells of the fine hair roots of ericaceous species. Ericoid mycorrhizal fungi establish loose hyphal networks around the outside of hair roots, from which they penetrate the walls of cortical cells to form intracellular coils that can densely pack individual plant cells. However, the fungi do not penetrate the plasma membranes of plant cells. Evidence suggests that coils only function for a period of a few weeks before the plant cell and fungal hyphae begin to degrade. The coil is the site where fungi exchange nutrients obtained from the soil for carbohydrates fixed through photosynthesis by the plant. Ericoid mycorrhizal fungi have been shown to have enzymatic capabilities to break down complex organic molecules. This may allow some ericoid mycorrhizal fungi to act as saprotrophs. However, the primary function of these enzymatic capabilities is likely to access organic forms of nutrients, such as nitrogen, whose mineralized forms are in very limiting quantities in habitats typically occupied by ericaceous plants. Fungal symbionts The majority of research on ericoid mycorrhizal fungal physiology and function has focused on fungal isolates morphologically identified as Rhizoscyphus ericae, in the Ascomycota order Helotiales, now known to be a Pezoloma species. In addition to Rhizoscyphus ericae, it is currently recognized that culturable Ascomycota such as Meliniomyces (closely allied with Rhizoscyphus ericae), Cairneyella variabilis, Gamarada debralockiae and Oidiodendron maius form ericoid mycorrhizas. The application of DNA sequencing to fungal isolates and clones from environmental PCR has uncovered diverse fungal communities in ericoid roots; however, the ability of these fungi to form typical ericoid mycorrhizal coils has not been verified, and some may be non-mycorrhizal endophytes, saprobes, or parasites. In addition to ascomycetes, Sebacina species in the phylum Basidiomycota are also recognized as frequent, but unculturable, associates of ericoid roots, and can form ericoid mycorrhizas. Similarly, basidiomycetes from the order Hymenochaetales have also been implicated in ericoid mycorrhizal formation. Geographic and host distribution The ericoid mycorrhizal symbiosis is widespread. Ericaceae species occupy at least some habitats on all continents except Antarctica. A few lineages within the Ericaceae do not form ericoid mycorrhizas, and instead form other types of mycorrhizas, including manzanita (Arctostaphylos), madrone (Arbutus), and the Monotropoideae. The geographic distribution of many of the fungi is uncertain, primarily because the identification of the fungal partners has not always been easy, especially prior to the application of DNA-based identification methods. Fungi ascribed to Rhizoscyphus ericae have been identified from Northern and Southern Hemisphere habitats, but these are not likely all the same species.
Some studies have also shown that fungal communities colonizing ericoid roots can lack specificity for different species of ericoid plant, suggesting that at least some of these fungi have a broad host range. Economic significance Ericoid mycorrhizal fungi form symbioses with several crop and ornamental species, such as blueberries, cranberries and Rhododendron. Inoculation with ericoid mycorrhizal fungi can influence plant growth and nutrient uptake. However, much less agricultural and horticultural research has been conducted with ericoid mycorrhizal fungi relative to arbuscular mycorrhizal and ectomycorrhizal fungi. External links Mycorrhiza Literature Exchange References Ascomycota Ericaceae Fungal morphology and anatomy Soil biology Symbiosis
Ericoid mycorrhiza
[ "Biology" ]
933
[ "Biological interactions", "Behavior", "Symbiosis", "Soil biology" ]
7,102,272
https://en.wikipedia.org/wiki/Sample%20preparation%20equipment
Sample preparation equipment refers to equipment used for the preparation of physical specimens for subsequent microscopy or related disciplines, including failure analysis and quality control. The equipment includes the following types of machinery: precision cross-sectioning saws, precision lapping and polishing machines, selected area preparation systems, decapsulation machinery (using mechanical, chemical "jet etching" acid, laser, and plasma methodologies), focused ion beam (FIB) systems, anti-reflective coating systems, dimpling equipment, sputter coating equipment, and carbon and metal evaporation systems. Each of these system types incorporates a range of accessories and consumable items that fit the particular system for a specific application. External links Article from MATERIALS WORLD journal discussing the various sample preparation disciplines that allow for failure analysis of electronic materials and components Article from the ULTRA TEC website discussing the backside sample preparation of a packaged electronic device that allows for (through-silicon) backside analysis Article discussing the applications of jet etch equipment Industrial equipment
Sample preparation equipment
[ "Engineering" ]
193
[ "nan" ]
7,102,599
https://en.wikipedia.org/wiki/Lutz%E2%80%93Kelker%20bias
The Lutz–Kelker bias is a supposed systematic bias that results from the assumption that the probability of a star being at distance $r$ increases with the square of the distance, which is equivalent to the assumption that the distribution of stars in space is uniform. In particular, it causes measured parallaxes to stars to be larger than their actual values. The bias towards measuring larger parallaxes in turn results in an underestimate of distance and therefore an underestimate of the object's luminosity. For a given parallax measurement with an accompanying uncertainty, both stars closer and farther may, because of uncertainty in measurement, appear at the given parallax. Assuming uniform stellar distribution in space, the probability density of the true parallax per unit range of parallax will be proportional to $1/\varpi_0^4$ (where $\varpi_0$ is the true parallax), and therefore there will be more stars in the volume shells at farther distance. As a result of this dependence, more stars will have their true parallax smaller than the observed parallax. Thus, the measured parallax will be systematically biased towards a value larger than the true parallax. This causes inferred luminosities and distances to be too small, which poses an apparent problem to astronomers trying to measure distance. The existence (or otherwise) of this bias and the necessity of correcting for it have become relevant in astronomy with the precision parallax measurements made by the Hipparcos satellite and more recently with the high-precision data releases of the Gaia mission. The correction method due to Lutz and Kelker placed a bound on the true parallax of stars. This is not valid because the true parallax (as distinct from the measured parallax) cannot be known. Integrating over all true parallaxes (all space) assumes that stars are equally visible at all distances, and leads to divergent integrals yielding an invalid calculation. Consequently, the Lutz–Kelker correction should not be used. In general, other corrections for systematic bias are required, depending on the selection criteria of the stars under consideration. The scope of effects of the bias is also discussed in the context of the current higher-precision measurements and the choice of stellar sample where the original stellar distribution assumptions are not valid. These differences result in the original discussion of effects being largely overestimated and highly dependent on the choice of stellar sample. It also remains possible that relations to other forms of statistical bias, such as the Malmquist bias, may have a counter-effect on the Lutz–Kelker bias for at least some samples. Mathematical Description Original Description The Distribution Function Mathematically, the Lutz–Kelker bias originates from the dependence of the number density on the observed parallax, which is translated into the conditional probability of parallax measurements. Assuming a Gaussian distribution of the observed parallax about the true parallax due to errors in measurement, we can write the conditional probability distribution function of measuring a parallax $\varpi$ given that the true parallax is $\varpi_0$ as $p(\varpi \mid \varpi_0) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left[-\frac{(\varpi-\varpi_0)^2}{2\sigma^2}\right]$, where $\sigma$ is the measurement uncertainty. Since the estimation is of the true parallax based on the measured parallax, the conditional probability of the true parallax being $\varpi_0$, given that the observed parallax is $\varpi$, is of interest. In the original treatment of the phenomenon by Lutz & Kelker, this probability, using Bayes' theorem, is given as $p(\varpi_0 \mid \varpi) = \frac{p(\varpi \mid \varpi_0)\,p(\varpi_0)}{p(\varpi)}$, where $p(\varpi_0)$ and $p(\varpi)$ are the prior probabilities of the true and observed parallaxes respectively.
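A short numerical sketch of this Bayesian setup follows (not part of the original article). The observed parallax, the 15% fractional error, and the grid are illustrative assumptions; the $1/\varpi_0^4$ prior is the uniform-density volume prior derived in the next subsection.

```python
import numpy as np

# Posterior for the true parallax given a single observed parallax (Lutz-Kelker setup).
# Likelihood: Gaussian measurement error; prior: uniform stellar density in space,
# which expressed in parallax is p(true) proportional to 1 / true**4.
obs_parallax = 10.0           # observed parallax in milliarcseconds (illustrative)
sigma = 0.15 * obs_parallax   # 15% fractional uncertainty (illustrative)

true_parallax = np.linspace(5.0, 20.0, 2001)   # grid of candidate true parallaxes
likelihood = np.exp(-(obs_parallax - true_parallax) ** 2 / (2.0 * sigma ** 2))
prior = true_parallax ** -4.0
posterior = likelihood * prior                 # unnormalized p(true | observed)

mode = true_parallax[np.argmax(posterior)]
print(f"posterior mode = {mode:.2f} mas, observed = {obs_parallax:.1f} mas")
# The mode falls below the observed value: measured parallaxes are biased high,
# so inferred distances and luminosities come out too small.
```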
Dependence on Distance The probability density of finding a star with apparent magnitude $m$ at a distance $r$ can be similarly written as $p(r \mid m) = \frac{p(m \mid r)\,p(r)}{p(m)}$, where $p(m \mid r)$ is the probability density of finding a star with apparent magnitude $m$ at a given distance $r$. Here, $p(m \mid r)$ will be dependent on the luminosity function of the star, which depends on its absolute magnitude. $p(m)$ is the probability density function of the apparent magnitude independent of distance. The probability of a star being at distance $r$ will be proportional to the number of stars in the volume shell at that distance, such that $p(r) \propto n(r)\,r^2$. Assuming a uniform distribution of stars in space, the number density $n(r)$ becomes a constant and we can write $p(r) \propto r^2$, where $r = 1/\varpi_0$. Since we deal with the probability distribution of the true parallax based on a fixed observed parallax, the probability density $p(\varpi)$ becomes irrelevant, and we can conclude that the distribution will have the proportionality $p(\varpi_0) \propto 1/\varpi_0^4$ and thus $p(\varpi_0 \mid \varpi) \propto \frac{1}{\varpi_0^4}\exp\!\left[-\frac{(\varpi-\varpi_0)^2}{2\sigma^2}\right]$. Normalization The conditional probability of the true parallax based on the observed parallax is divergent around zero for the true parallax. Therefore, it is not possible to normalize this probability. Following the original description of the bias, we can define a normalization by including the observed parallax as $p(\varpi_0 \mid \varpi) = \left(\frac{\varpi}{\varpi_0}\right)^{4}\exp\!\left[-\frac{(\varpi_0-\varpi)^2}{2\sigma^2}\right]$. The inclusion of $\varpi^4$ does not affect proportionality since it is a fixed constant. Moreover, in this defined "normalization", we will get a probability of 1 when the true parallax is equal to the observed parallax, regardless of the errors in measurement. Therefore, we can define a dimensionless parallax $Z = \varpi_0/\varpi$ and get the dimensionless distribution of the true parallax as $G(Z) = Z^{-4}\exp\!\left[-\frac{(Z-1)^2}{2(\sigma/\varpi)^2}\right]$. Here, $Z = 1$ represents the point where the measurement in parallax is equal to its true value, where the probability distribution should be centered. However, this distribution, due to the factor $Z^{-4}$, will deviate from the point $Z = 1$ to smaller values. This presents the systematic Lutz–Kelker bias. The value of this bias will be based on the value of $\sigma/\varpi$, the fractional uncertainty in the parallax measurement. Scope of Effects Original Treatment In the original treatment of the Lutz–Kelker bias as it was first proposed, the uncertainty in parallax measurement is considered to be the sole source of bias. As a result of the parallax dependence of stellar distributions, smaller uncertainty in the observed parallax will result in only a slight bias from the true parallax value. Larger uncertainties, in contrast, would yield higher systematic deviations of the observed parallax from its true value. Large errors in parallax measurement become apparent in luminosity calculations and are therefore easy to detect. Consequently, the original treatment of the phenomenon considered the bias to be effective when the uncertainty in the observed parallax, $\sigma$, is close to about 15% of the measured value, $\varpi$. This was a very strong statement, indicating that if the uncertainty in the parallax is about 15–20%, the bias is so effective that we lose most of the parallax and distance information. Several subsequent works on the phenomenon refuted this argument, and it was shown that the scope is actually very sample-based and may be dependent on other sources of bias. Therefore, more recently it is argued that the scope for most stellar samples is not as drastic as first proposed. Subsequent Discussions Following the original statement, the scope of the effects of the bias, as well as its existence and relative methods of correction, have been discussed in many works in recent literature, including subsequent work of Lutz himself.
Several subsequent works state that the assumption of uniform stellar distribution may not be applicable depending on the choice of stellar sample. Moreover, the effects of different distributions of stars in space, as well as of measurement errors, would yield different forms of bias. This suggests the bias is largely dependent on the specific choice of sample and measurement error distributions, although the term Lutz–Kelker bias is commonly used generically for the phenomenon on all stellar samples. It is also questioned whether other sources of error and bias, such as the Malmquist bias, actually counteract or even cancel the Lutz–Kelker bias, so that the effects are not as drastic as initially described by Lutz and Kelker. Overall, such differences suggest that the effects of the bias were largely overestimated in the original treatment. More recently, the effects of the Lutz–Kelker bias became relevant in the context of the high-precision measurements of the Gaia mission. The scope of effects of the Lutz–Kelker bias on certain samples is discussed in the recent Gaia data releases, including the original assumptions and the possibility of different distributions. It remains important to treat bias effects with caution regarding sample selection, as the stellar distribution is expected to be non-uniform at large distance scales. As a result, it is questioned whether correction methods, including the Lutz–Kelker correction proposed in the original work, are applicable for a given stellar sample, since the effects are expected to depend on the stellar distribution. Moreover, following the original description and the dependence of the bias on the measurement errors, the effects are expected to be lower due to the higher precision of current instruments such as Gaia. History The original description of the phenomenon was presented in a paper by Thomas E. Lutz and Douglas H. Kelker in the Publications of the Astronomical Society of the Pacific, Vol. 85, No. 507, p. 573, entitled "On the Use of Trigonometric Parallaxes for the Calibration of Luminosity Systems: Theory", although the effect was known following the work of Trumpler & Weaver in 1953. Discussion of statistical bias in measurements in astronomy dates back to as early as Eddington in 1913. References Astrometry
Lutz–Kelker bias
[ "Astronomy" ]
1,834
[ "Astrometry", "Astronomical sub-disciplines" ]