| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
252,076 | https://en.wikipedia.org/wiki/G%CE%B4%20set | {{DISPLAYTITLE:Gδ set}}
In the mathematical field of topology, a Gδ set is a subset of a topological space that is a countable intersection of open sets. The notation originated from the German nouns Gebiet (open set) and Durchschnitt (intersection).
Historically Gδ sets were also called inner limiting sets, but that terminology is not in use anymore.
Gδ sets, and their dual, Fσ sets, are the second level of the Borel hierarchy.
Definition
In a topological space a Gδ set is a countable intersection of open sets. The Gδ sets are exactly the level Π⁰₂ sets of the Borel hierarchy.
Examples
Any open set is trivially a Gδ set.
The irrational numbers are a Gδ set in the real numbers ℝ. They can be written as the countable intersection of the open sets {q}^c (the superscript denoting the complement) where q is rational.
The set of rational numbers ℚ is not a Gδ set in ℝ. If ℚ were the intersection of open sets Aₙ, each Aₙ would be dense in ℝ because ℚ is dense in ℝ. However, the construction above gave the irrational numbers as a countable intersection of open dense subsets. Taking the intersection of both of these sets gives the empty set as a countable intersection of open dense sets in ℝ, a violation of the Baire category theorem. (Both constructions are written out in the sketch following these examples.)
The continuity set of any real valued function is a Gδ subset of its domain (see the "Properties" section for a more general statement).
The zero-set of a derivative of an everywhere differentiable real-valued function on ℝ is a Gδ set; it can be a dense set with empty interior, as shown by Pompeiu's construction.
The set of functions in C([0,1]) that are not differentiable at any point of [0,1] contains a dense Gδ subset of the metric space C([0,1]).
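Written out explicitly, the irrational/rational constructions from the examples above read as follows (a brief sketch; Aₙ denotes the hypothetical open sets containing ℚ):

```latex
% The irrationals as a countable intersection of open (dense) sets:
\[
\mathbb{R}\setminus\mathbb{Q} \;=\; \bigcap_{q\in\mathbb{Q}} \bigl(\mathbb{R}\setminus\{q\}\bigr).
\]
% If Q were also a G-delta set, say Q = \bigcap_n A_n with each A_n open (hence dense), then
\[
\varnothing \;=\; \mathbb{Q}\cap(\mathbb{R}\setminus\mathbb{Q})
\;=\; \Bigl(\bigcap_{n} A_n\Bigr)\cap\Bigl(\bigcap_{q\in\mathbb{Q}} \mathbb{R}\setminus\{q\}\Bigr),
\]
% an empty countable intersection of dense open sets, contradicting the Baire category theorem.
```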
Properties
The notion of Gδ sets in metric (and topological) spaces is related to the notion of completeness of the metric space as well as to the Baire category theorem. See the result about completely metrizable spaces in the list of properties below. Gδ sets and their complements are also of importance in real analysis, especially measure theory.
Basic properties
The complement of a Gδ set is an Fσ set, and vice versa.
The intersection of countably many Gδ sets is a Gδ set.
The union of finitely many Gδ sets is a Gδ set.
A countable union of Gδ sets (which would be called a Gδσ set) is not a Gδ set in general. For example, the rational numbers do not form a Gδ set in .
In a topological space, the zero set of every real-valued continuous function f is a (closed) Gδ set, since f⁻¹(0) is the intersection of the open sets f⁻¹((−1/n, 1/n)), n = 1, 2, 3, …
In a metrizable space, every closed set is a Gδ set and, dually, every open set is an Fσ set. Indeed, a closed set F is the zero set of the continuous function x ↦ d(x, F), where d indicates the distance from a point to a set. The same holds in pseudometrizable spaces.
In a first countable T1 space, every singleton is a Gδ set.
A subspace A of a completely metrizable space X is itself completely metrizable if and only if A is a Gδ set in X.
A subspace A of a Polish space X is itself Polish if and only if A is a Gδ set in X. This follows from the previous result about completely metrizable subspaces and the fact that every subspace of a separable metric space is separable.
A topological space is Polish if and only if it is homeomorphic to a Gδ subset of a compact metric space.
Continuity set of real valued functions
The set of points where a function f from a topological space to a metric space is continuous is a Gδ set. This is because continuity at a point p can be defined by a Π⁰₂ formula, namely: for all positive integers n, there is an open set U containing p such that d(f(x), f(y)) < 1/n for all x and y in U. If a value of n is fixed, the set of points p for which there is such a corresponding open U is itself an open set (being a union of open sets), and the universal quantifier on n corresponds to the (countable) intersection of these sets. As a consequence, while it is possible for the irrationals to be the set of continuity points of a function (see the popcorn function), it is impossible to construct a function that is continuous only on the rational numbers.
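In symbols, the argument above can be sketched as follows (d denotes the metric on the codomain):

```latex
% The set C(f) of continuity points of f as a countable intersection of open sets:
\[
C(f) \;=\; \bigcap_{n\geq 1} U_n,
\qquad
U_n \;=\; \bigcup\Bigl\{\, U \text{ open} \;:\; d\bigl(f(x),f(y)\bigr) < \tfrac{1}{n} \ \text{for all } x,y\in U \,\Bigr\}.
\]
% Each U_n is open (a union of open sets), so C(f) is a G-delta set.
```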
In the real line, the converse holds as well; for any Gδ subset A of the real line, there is a function f that is continuous exactly at the points in A.
Gδ space
A Gδ space is a topological space in which every closed set is a Gδ set. A normal space that is also a Gδ space is called perfectly normal. For example, every metrizable space is perfectly normal.
See also
Fσ set, the dual concept; contrast the "G" from German Gebiet and the "F" from French fermé (closed).
P-space, any space having the property that every Gδ set is open
Notes
References
General topology
Descriptive set theory | Gδ set | Mathematics | 1,022 |
31,252,114 | https://en.wikipedia.org/wiki/Neoendorphin | Neoendorphins are a group of endogenous opioid peptides derived from the proteolytic cleavage of prodynorphin. They include α-neoendorphin and β-neoendorphin. α-Neoendorphin is present in greater amounts in the brain than β-neoendorphin. Both are products of the dynorphin gene, which also expresses dynorphin A, dynorphin A-(1-8), and dynorphin B. These opioid neurotransmitters are especially active at central nervous system receptors, whose primary function is pain sensation. These peptides all have the consensus amino acid sequence Tyr-Gly-Gly-Phe-Met (met-enkephalin) or Tyr-Gly-Gly-Phe-Leu (leu-enkephalin). Binding of neoendorphins to opioid receptors (OPR) in the dorsal root ganglion (DRG) neurons results in a reduction of the duration of the calcium-dependent action potential. α-Neoendorphin binds OPRD1 (delta), OPRK1 (kappa), and OPRM1 (mu), and β-neoendorphin binds OPRK1. | Neoendorphin | Chemistry,Biology | 299 |
Types
See also
Endorphin
References
Opioid peptides | Neoendorphin | Chemistry,Biology | 299 |
2,208,970 | https://en.wikipedia.org/wiki/Intensity%20interferometer | An intensity interferometer is the name given to devices that use the Hanbury Brown and Twiss effect. In astronomy, the most common use of such an astronomical interferometer is to determine the apparent angular diameter of a radio source or star. If the distance to the object can then be determined by parallax or some other method, the physical diameter of the star can then be inferred. An example of an optical intensity interferometer is the Narrabri Stellar Intensity Interferometer. In quantum optics, some devices which take advantage of correlation and anti-correlation effects in beams of photons might be said to be intensity interferometers, although the term is usually reserved for observatories.
An intensity interferometer is built from two light detectors, typically either radio antennas or optical telescopes with photomultiplier tubes (PMTs), separated by some distance called the baseline. Both detectors are pointed at the same astronomical source, and intensity measurements are then transmitted to a central correlator facility. A major advantage of intensity interferometers is that only the measured intensity observed by each detector must be sent to the central correlator facility, rather than the amplitude and phase of the signal. The intensity interferometer measures interferometric visibilities like all other astronomical interferometers. These measurements can be used to calculate the diameter and limb-darkening coefficients of stars, but intensity interferometers cannot produce aperture synthesis images, as the visibility phase information is not preserved by an intensity interferometer.
References
Telescopes
Interferometric telescopes
Quantum optics | Intensity interferometer | Physics,Astronomy | 329 |
57,778,150 | https://en.wikipedia.org/wiki/Cole%20equation%20of%20state | An equation of state introduced by R. H. Cole, commonly written as
p = B [ (ρ/ρ₀)^γ − 1 ],
where ρ₀ is a reference density, γ is the adiabatic index, and B is a parameter with pressure units.
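As an illustrative sketch of how the equation is typically evaluated (for instance in weakly compressible smoothed-particle hydrodynamics codes), the snippet below uses the parameter names defined above; the numerical values are placeholders, not recommendations.

```python
def cole_pressure(rho, rho0=1000.0, gamma=7.0, B=3.0e8):
    """Cole equation of state: p = B * ((rho / rho0)**gamma - 1).

    rho   -- density
    rho0  -- reference density (same units as rho)
    gamma -- adiabatic index (dimensionless)
    B     -- parameter with pressure units
    """
    return B * ((rho / rho0) ** gamma - 1.0)

# Example: a 1% compression of a water-like fluid (illustrative values only).
print(cole_pressure(1010.0))  # roughly 2.2e7 pressure units
```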
References
External links
Cole equation of state article at sklogwiki
Equations of state | Cole equation of state | Physics,Chemistry | 50 |
41,838,740 | https://en.wikipedia.org/wiki/Micropyle%20%28zoology%29 | A micropyle is a pore in the membrane covering the ovum, through which a sperm enters.
Micropyles are also found in sporozoites of some digenetic microorganisms such as Plasmodium at the anterior part of the cell that ultimately leads towards the apical cap. Examples of other organisms that have micropyles are the Bombyx mandarina and the Ceratitis capitata.
References
Reproduction | Micropyle (zoology) | Biology | 91 |
74,933,186 | https://en.wikipedia.org/wiki/Eta3%20Fornacis | {{DISPLAYTITLE:Eta3 Fornacis}}
Eta3 Fornacis (η3 Fornacis) is an orange giant in the constellation of Fornax. The star has a spectral type of K2III and an apparent magnitude of 5.47. The star is visually close to, but unrelated to, the similar stars η2 Fornacis and η1 Fornacis. The star is located approximately 489 light-years away with a luminosity of about , and is a suspected binary system with the primary being the orange giant.
References
Fornax
Fornacis, Eta2
K-type giants
017829
0851
013265 | Eta3 Fornacis | Astronomy | 139 |
6,138,977 | https://en.wikipedia.org/wiki/EURAMET | EURAMET (European Association of National Metrology Institutes, previously known as EUROMET, the European Collaboration in Measurement Standards) is a collaborative alliance of national metrological organizations from member states of the European Union (EU) and of the European Free Trade Association (EFTA) whose purpose is to achieve higher efficiency by co-ordinating and sharing metrological activities and services.
EURAMET was established on 11 January 2007 in Berlin. Legally it is a company registered under German law with its offices in Braunschweig and on 1 July 2007 took over the role of EUROMET as a Regional Metrology Organisation. EUROMET was created in Madrid, Spain, on 23 September 1987 and became operative on 1 January 1988.
Full membership of EURAMET is restricted to national metrology institutes (NMIs) of EU and EFTA member states, well-established NMIs of other European states, and the European Commission's Institute working in the field of Metrology. Associate membership is available for designated metrology institutes from member states and NMIs which, for various reasons, cannot be full members.
EURAMET coordinates metrological activity at a European level, liaising with the International Organization of Legal Metrology and the International Bureau of Weights and Measures where appropriate. Amongst its publications are various calibration and technical guides and a booklet on European time zones.
See also
WELMEC, a body that promotes European cooperation in the field of legal metrology
References
External links
Measurement
Standards organizations
Organisations based in Braunschweig
Metrology organizations | EURAMET | Physics,Mathematics | 310 |
2,853,242 | https://en.wikipedia.org/wiki/Uranium%20metallurgy | In materials science and materials engineering, uranium metallurgy is the study of the physical and chemical behavior of uranium and its alloys.
Commercial-grade uranium can be produced through the reduction of uranium halides with alkali or alkaline earth metals. Uranium metal can also be made through electrolysis of KUF5 or UF4 dissolved in a molten CaCl2 and NaCl mixture. Very pure uranium can be produced through the thermal decomposition of uranium halides on a hot filament.
The uranium isotope 235U is used as the fuel for nuclear reactors and nuclear weapons. It is the only isotope existing in nature to any appreciable extent that is fissile, that is, fissionable by thermal neutrons. The isotope 238U is also important because it absorbs neutrons to produce a radioactive isotope that subsequently decays to the isotope 239Pu (plutonium), which also is fissile. Uranium in its natural state comprises just 0.71% 235U and 99.3% 238U, and the main focus of uranium metallurgy is the enrichment of uranium through isotope separation.
See also
Nuclear weapon design#Enriched materials
Uranium tile
References
Sources
Uranium
Enriched uranium
Nuclear weapon design
The technology of mining and metallurgy, retrieved 7 October 2005.
External links
The technology of mining and metallurgy
Building nuclear warheads: The process
List of Uranium Alloys
Uranium
Metallurgy | Uranium metallurgy | Chemistry,Materials_science,Engineering | 285 |
68,259,709 | https://en.wikipedia.org/wiki/Valencia%20Koomson | Valencia Joyner Koomson is an American electrical engineer. She is an associate professor in the Department of Electrical and Computer Engineering with secondary appointments in the Department of Computer Science and the Jonathan M. Tisch College of Civic Life at Tufts University. She is the principal investigator for the Advanced Integrated Circuits and Systems Lab at Tufts University.
Early life and education
Koomson was born in Washington, DC and graduated from Benjamin Banneker Academic High School. Her parents, Otis and Vernese Joyner, moved to Washington DC during the Great Migration after living for years as sharecroppers in Wilson County, North Carolina. Her family history can be traced back to the Antebellum South era. Her oldest known relative is Hagar Atkinson, an enslaved African woman whose name is recorded in the will of a plantation owner in Johnston County, North Carolina established in 1746.
Research and career
Koomson attended the Massachusetts Institute of Technology, graduating with a BS in electrical engineering and computer science in 1998 and a Master of Engineering in 1999. She earned her Master of Philosophy from the University of Cambridge in 2000, followed by her PhD in electrical engineering from the same institution in 2003.
Koomson was an adjunct professor at Howard University from 2004 to 2005, and during that period was a Senior Research Engineer at the University of Southern California's Information Sciences Institute (USC/ISI). She was a visiting professor at Rensselaer Polytechnic Institute and Boston University in 2008 and 2013, respectively. Koomson joined Tufts University in 2005 as an assistant professor and became an associate professor in 2011. In 2020, Koomson was named an MLK Visiting Professor at MIT for the academic year 2020/2021.
Her Advanced Integrated Circuits and Systems Lab continues to do research into the design and implementation of innovative high-performance, low-power microsystems, with a focus on the integration of heterogeneous devices/materials (optical, RF, bio/chemical) with silicon circuit architectures to address challenges in high-speed wireless communication, biomedical imaging, and sensing. Recently, Koomson has focused on addressing racial bias in medical devices and algorithms, including the pulse oximeter, a device that became widely used by the public during the COVID-19 pandemic. She has been addressing this concern through the development of technology designed to measure a person's skin tone. This innovation allows the pulse oximeter to emit more light, ensuring individuals with higher melanin levels receive a more accurate reading. Koomson has also been actively engaged with policymakers and scientists, advocating for an FDA review of the biases linked to pulse oximeters. This effort played a pivotal role in orchestrating an FDA forum which gathered in late 2022 to address the issue. She shared with The Tufts Admission Magazine, "I spent one summer contacting our congressional delegation in Massachusetts to ensure lawmakers are aware of these issues and talking to their staff members who focus on health policy. Senator Warren led the charge in 2021 to urge the Food and Drug Administration (FDA) to review this." In addition to her work with medical devices, Koomson played a crucial role in a collaborative team focused on developing a hybrid VLC/RF parking automation system.
Honors and awards
MLK Visiting Professor at MIT, 2020
References
External links
African-American women engineers
21st-century American women engineers
Tufts University faculty
Living people
Year of birth missing (living people)
21st-century African-American women
Educators from Washington, D.C.
Computer engineering
Massachusetts Institute of Technology alumni
Alumni of the University of Cambridge | Valencia Koomson | Technology,Engineering | 723 |
9,753,326 | https://en.wikipedia.org/wiki/Ken%20Ono | Ken Ono (born March 20, 1968) is an American mathematician with fields of study in number theory. He is the STEM Advisor to the Provost and the Marvin Rosenblum Professor of Mathematics at the University of Virginia.
Early life and education
Ono was born on March 20, 1968, in Philadelphia, Pennsylvania. He is the son of mathematician Takashi Ono, who emigrated from Japan to the United States after World War II. Ken Ono was born in the United States as his father returned to the United States from the University of British Columbia in Canada for a position at the University of Pennsylvania.
In the 1980s, Ono attended Towson High School, but he dropped out. He later enrolled at the University of Chicago without a high school diploma. There he raced bicycles, and he was a member of the Pepsi–Miyata Cycling Team.
He received his BA from the University of Chicago in 1989, where he was a member of the Psi Upsilon fraternity. He earned his PhD in 1993 from the University of California, Los Angeles, where his advisor was Basil Gordon. Initially he planned to study medicine, but later switched to mathematics. He attributes his interest in mathematics to his father.
Career
Ono worked as an instructor at Woodbury University from 1991 to 1993, as a visiting assistant professor at the University of Georgia from 1993 to 1994, and as a visiting assistant professor at the University of Illinois at Urbana-Champaign from 1994 to 1995. He was a member of the Institute for Advanced Study from 1995 to 1997.
Ono worked at Pennsylvania State University from 1997 to 2000 as an assistant professor and then as the Louis A. Martarano Professor of Mathematics. He moved to the University of Wisconsin-Madison as an associate professor in 1999, and later became the Solle P. and Margaret Manasse Professor of Letters and Science from 2004 to 2011 and as the Hilldale Professor of Mathematics from 2008 to 2011. He was the Candler Professor of Mathematics at Emory University from 2010 to 2019. In 2019, Ono became the Thomas Jefferson Professor of Mathematics at the University of Virginia, and in Fall 2021 he was named the Marvin Rosenblum Professor of Mathematics and the chairman of the Department of Mathematics. He ended his term as chairman in Fall 2022 to become the STEM Advisor to the Provost at the University of Virginia.
Ono was the Vice President of the American Mathematical Society from 2018 to 2021. He is serving as the section chair for mathematics at the American Association for the Advancement of Science from 2020 to 2023.
Research
In 2000, Ono derived a theory of Ramanujan congruences for the partition function with all prime moduli greater than 3. His paper was published in the Annals of Mathematics. In a joint work with Jan Bruinier, Ono discovered a finite algebraic formula for computing partition numbers.
In 2014, a joint paper by Michael J. Griffin, Ono, and S. Ole Warnaar provided a framework for the Rogers–Ramanujan identities and their arithmetic properties, solving a long-standing mystery stemming from the work of Ramanujan. The findings yield new formulas for algebraic numbers. Their work was ranked 15th among the top 100 stories of 2014 in science by Discover magazine.
In a 2015 joint paper co-authored with John Duncan and Michael Griffin, Ono helped prove the umbral moonshine conjecture. This conjecture was formulated by Miranda Cheng, John Duncan, and Jeff Harvey, and is a generalization of the monstrous moonshine conjecture proved by Richard Borcherds.
In May 2019, Ono published a joint paper (co-authored with Don Zagier and two former students) in the Proceedings of the National Academy of Sciences on the Riemann hypothesis. Their work proves a large portion of the Jensen–Pólya criterion for the Riemann hypothesis. However, the Riemann hypothesis remains unsolved. Their work also establishes the Gaussian Unitary Ensemble random matrix condition in derivative aspect for the derivatives of the Riemann xi function.
Since 2016, Ono has used mathematical analysis and modeling to advise elite competitive swimmers, including some of the 2020 and 2024 Olympians.
Media
Ono wrote, with Amir Aczel as coauthor, an autobiography, emphasizing the inspiration he gained from Ramanujan's mathematical research.
Ono was an Associate Producer and the mathematical consultant for the movie The Man Who Knew Infinity, which starred Jeremy Irons and Dev Patel, based on Ramanujan's biography written by Robert Kanigel.
He starred in a 2022 Super Bowl commercial for Miller Lite beer. He is on the Board of Directors of the Infinity Arts Foundation.
Personal life
From 2012 to 2014, Ono competed in World Triathlon Cross Championships events while representing the United States.
Honors and awards
National Security Agency Young Investigator (1997)
National Science Foundation CAREER Award (1999)
Sloan Research Fellowship (1999)
Packard Fellowship for Science and Engineering (1999)
Presidential Early Career Award for Scientists and Engineers from President Bill Clinton (2000)
Guggenheim Fellowship (2003)
National Science Foundation Director's Distinguished Teaching Scholar Award (2005)
Fellow of the American Mathematical Society (2013)
University of Chicago Alumni Award for Professional Achievement (2023)
Effie Silver Award for Miller 64 Super Bowl ad (2023).
Honorary Fellow of the Indian Academy of Sciences (2024).
Fellow of the Asian American Scholar Forum (2024).
Editorial boards
Ono is on the editorial board of several journals:
Annals of Combinatorics
Communications in Number Theory and Physics
The Ramanujan Journal (Editor-in-Chief)
Research in the Mathematical Sciences (Editor-in-Chief)
Research in Number Theory (Editor-in-Chief)
See also
Ramanujan's ternary quadratic form
References
External links
Ken Ono on The Man Who Knew Infinity and why Ramanujan Matters
1968 births
Living people
Combinatorialists
American number theorists
20th-century American mathematicians
21st-century American mathematicians
Towson High School alumni
University of Chicago alumni
Emory University faculty
University of Virginia faculty
University of Wisconsin–Madison faculty
American academics of Japanese descent
Fellows of the American Mathematical Society
Mathematicians from Philadelphia
Recipients of the Presidential Early Career Award for Scientists and Engineers | Ken Ono | Mathematics | 1,225 |
28,916,502 | https://en.wikipedia.org/wiki/Autochaperone | In molecular biology, autotransporter proteins are proteins secreted out of Gram-negative bacteria. These beta-helical proteins require a domain called the intramolecular autochaperone domain. It shows similarities with other intramolecular chaperone sequences and has a folding-associated function. This increases the efficiency of the process, either by stabilizing the beta-barrel or by promoting the folding of the passenger domain.
The autochaperone domain is usually located between the HSF and the passenger domain. When the passenger domain is translocated, starting with its C terminus, the autochaperone domain is first out. This would result in the formation of a hairpin structure.
See also
Pharmacological chaperone
Protein domain
References
External links
Surface display of proteins by Gram-negative bacterial autotransporters
Adhesion mediated by autotransporters of Gram-negative bacteria: Structural and functional features
Identification of Secretion Determinants of the Bordetella pertussis BrkA Autotransporter
Protein structural motifs
Protein domains | Autochaperone | Chemistry,Biology | 220 |
2,837,716 | https://en.wikipedia.org/wiki/Fretting | Fretting refers to wear and sometimes corrosion damage of loaded surfaces in contact while they encounter small oscillatory movements tangential to the surface. Fretting is caused by adhesion of contact surface asperities, which are subsequently broken again by the small movement. This breaking causes wear debris to be formed.
If the debris and/or surface subsequently undergo chemical reaction, i.e., mainly oxidation, the mechanism is termed fretting corrosion. Fretting degrades the surface, leading to increased surface roughness and micropits, which reduces the fatigue strength of the components.
The amplitude of the relative sliding motion is often in the order of micrometers to millimeters, but can be as low as 3 nanometers.
Typically fretting is encountered in shrink fits, bearing seats, bolted parts, splines, and dovetail connections.
Materials
Steel
Fretting damage in steel can be identified by the presence of a pitted surface and fine 'red' iron oxide dust resembling cocoa powder. Strictly this debris is not 'rust' as its production requires no water. The particles are much harder than the steel surfaces in contact, so abrasive wear is inevitable; however, particulates are not required to initiate fretting.
Aluminium
Fretting in aluminium causes black debris to be present in the contact area due to the fine oxide particles.
Products affected
Fretting examples include wear of drive splines on driveshafts, wheels at the lug bolt interface, and cylinder head gaskets subject to differentials in thermal expansion coefficients.
There is currently a focus on fretting research in the aerospace industry. The dovetail blade-root connection and the spline coupling of gas turbine aero engines experience fretting.
Another example in which fretting corrosion may occur are the pitch bearings of modern wind turbines, which operate under oscillation motion to control the power and loads of the turbine.
Fretting can also occur between reciprocating elements in the human body. Especially implants, for example hip implants, are often affected by fretting effects.
Fretting electrical/electronic connectors
Fretting also occurs on virtually all electrical connectors subject to motion (e.g. a printed circuit board connector plugged into a backplane, as in SOSA/VPX systems). Most board-to-board (B2B) electrical connectors are especially vulnerable if there is any relative motion between the mating connectors. A mechanically rigid connection system is required to hold both halves of a B2B connector motionless (often impossible). Wire-to-board (W2B) connectors tend to be immune to fretting because the wire half of the connector acts as a spring, absorbing relative motion that would otherwise transfer to the contact surfaces of the W2B connector. Very few exotic B2B connectors exist that address fretting by: 1) incorporating springs into the individual contacts or 2) using a Chinese finger trap design to greatly increase the contact area. A connector design that contacts all four sides of a square pin, instead of just one or two, can delay the inevitable fretting to some degree. Keeping contacts clean and lubricated also offers some longevity.
Contact fretting can change the impedance of a B2B connector from milliohms to ohms in just minutes when vibration is present. The relatively soft and thin gold plating used on most high-quality electrical connectors is quickly worn through, exposing the underlying alloy metals, and with fretting debris the impedance rapidly increases. Somewhat counterintuitively, high contact forces on the mated connector pair (thought to help lower impedance and increase reliability) can actually make the rate of fretting even worse.
Fretting in rolling element bearings
In rolling element bearings fretting may occur when the bearings are operating in an oscillating motion. Examples of applications are blade bearings in wind turbines, helicopter rotor pitch bearings, and bearings in robots. If the bearing movement is limited to small motions, the damage caused may be called fretting or false brinelling depending on the mechanism encountered. The main difference is that false brinelling occurs under lubricated and fretting under dry contact conditions. A time-dependent relation between false brinelling and fretting corrosion has been proposed.
Fretting fatigue
Fretting decreases fatigue strength of materials operating under cycling stress. This can result in fretting fatigue, whereby fatigue cracks can initiate in the fretting zone. Afterwards, the crack propagates into the material. Lap joints, common on airframe surfaces, are a prime location for fretting corrosion. This is also known as frettage or fretting corrosion.
Factors affecting fretting
Fretting resistance is not an intrinsic property of a material, or even of a material couple. There are several factors affecting fretting behavior of a contact:
Contact load
Sliding amplitude
Number of cycles
Temperature
Relative humidity
Inertness of materials
Corrosion and resulting motion-triggered contact insufficiency
Mitigation
The fundamental way to prevent fretting is to design for no relative motion of the surfaces at the contact. Surface roughness plays an important role as fretting normally occurs by the contact of the asperities of the mating surfaces. Lubricants are often employed to mitigate fretting because they reduce friction and inhibit oxidation. This may however, also cause the opposite effect as a lower coefficient of friction may lead to more movement. Thus, a solution must be carefully considered and tested.
In the aviation industry, coatings are applied to cause a harder surface and/or influence the friction coefficient.
Soft materials often exhibit higher susceptibility to fretting than hard materials of a similar type. The hardness ratio of the two sliding materials also has an effect on fretting wear. However, softer materials such as polymers can show the opposite effect when they capture hard debris which becomes embedded in their bearing surfaces. They then act as a very effective abrasive agent, wearing down the harder metal with which they are in contact.
See also
References
External links
Fretting and Its Insidious Effects, by EPI Inc.
Assessment Of Cold Welding Between Separable Contact Surfaces Due To Impact And Fretting Under Vacuum
Corrosion
Materials degradation
Tribology | Fretting | Chemistry,Materials_science,Engineering | 1,292 |
12,515,152 | https://en.wikipedia.org/wiki/Aceramarca%20gracile%20opossum | The Aceramarca gracile opossum or Bolivian gracile opossum (Gracilinanus aceramarcae) is a species of opossum. It is native to Bolivia and Peru, where it occurs in tropical elfin forest habitat.
This opossum is mostly arboreal, but it may forage on the ground for food.
This species has been recorded at only six locations, but it is not considered to be threatened because its habitat is relatively secure from deforestation and other threats at this time.
This mouse opossum does not have a pouch. It is reddish or grayish brown in color with a cream-colored belly and a dark eye ring. It is up to long, not including its slender, scaly tail, which may be over long.
References
Opossums
Fauna of the Andes
Marsupials of Bolivia
Marsupials of Peru
EDGE species
Mammals described in 1931
Taxa named by George Henry Hamilton Tate
Taxonomy articles created by Polbot | Aceramarca gracile opossum | Biology | 201 |
66,516,101 | https://en.wikipedia.org/wiki/Rousettus%20bat%20coronavirus%20GCCDC1 | Rousettus bat coronavirus GCCDC1 is a species of coronavirus in the genus Betacoronavirus.
References
Betacoronaviruses | Rousettus bat coronavirus GCCDC1 | Biology | 32 |
23,080,169 | https://en.wikipedia.org/wiki/Lammas%20growth | Lammas growth, also called Lammas leaves, Lammas flush, second shoots, or summer shoots, is a season of renewed growth in some trees in temperate regions put on in July and August (if in the northern hemisphere, January and February if in the southern), that is around Lammas day, August 1.
It can occur in both hardwoods and softwoods. Examples of common trees which exhibit regrowth are oak, ash, beech, sycamore, yew, Scots pine, Sitka spruce, poplar and hawthorn. This secondary growth may be an evolutionary strategy to compensate for leaf damage caused by insects during the spring. It is not present in birch or willow.
Lammas growth declines with the age of the tree, being most vigorous and noticeable in young trees. It differs in nature from spring growth which is fixed when leaves and shoots are laid down in the bud the previous year. The lammas flush involves newly made leaves. One or more of the buds set in the spring on the ends of terminal and lateral stems will break, and begin to grow, producing a new shoot.
References
Plant morphology | Lammas growth | Biology | 229 |
33,063,577 | https://en.wikipedia.org/wiki/List%20of%20Saint%20Patrick%27s%20crosses | A variety of crosses, both designs and physical objects, have been associated with Saint Patrick, the patron saint of Ireland. Traditionally, the cross pattée has been associated with him, but in more recent times, the Saint Patrick's Saltire has also been linked to him.
Some authors have stated, however, that Patrick is not entitled to have a cross as a symbol since he was not a martyr, unlike Saints George and Andrew.
Celtic Cross
It is popularly believed that St. Patrick introduced the Celtic Cross in Ireland, during his conversion of the provincial kings from paganism to Christianity. St Patrick is said to have taken the symbol of the sun and extended one of the lengths to form a melding of the Christian Cross and the sun.
Saltire
Saint Patrick's Saltire is a red saltire on a white field. It is used in the insignia of the Order of Saint Patrick, established in 1783, and after the Acts of Union 1800 it was combined with the Saint George's Cross of England and the Saint Andrew's Cross of Scotland to form the Union Flag of the United Kingdom of Great Britain and Ireland. A saltire was intermittently used as a symbol of Ireland from the seventeenth century, but without reference to Saint Patrick.
The Pepys Library's collection of broadside ballads includes one from called "Teague and Sawney: or The Unfortunate Success of a Dear-Joys Devotion by St. Patrick's Cross. Being Transform'd into the Deel's Whirlegig." It describes an Irishman (Teague) and Scot (Sawney), both stereotypically blockheaded, encountering a windmill for the first time and arguing over whether it is Saint Andrew's Cross or Saint Patrick's Cross.
Cross pattée
Some of the Order of St Patrick's symbols were borrowed from the pre-existing Friendly Brothers of St Patrick, including the motto Quis separabit?; however, the "Saint Patrick's Cross" used in the Friendly Brothers' badge was not a saltire. A 1783 letter to a Dublin newspaper criticising the Order's use of a saltire, asserted that "The Cross generally used on St Patrick's day, by Irishmen, is the Cross-Patee". Whereas Vincent Morley in 1999 characterised the Friendly Brothers' cross as a cross pattée, the Brothers' medallist in 2003 said that the shape varied somewhat, often approximating a Maltese Cross. Varying illustrations of the badge figure in the Brothers' 1763 statute book, a 1786 letter to The Gentleman's Magazine, and a 2008 photograph.
Though both pattée and Patrick begin with pat-, the words are unrelated.
Or a Cross Gules
Henry Gough in 1893 noted that Ireland was represented by a harp in the flags of the Protectorate, but at Oliver Cromwell's funeral the national banners had crosses, Ireland's being a red cross on a gold field. Gough guesses Edward Bysshe may have co-opted the de Burgh Earl of Ulster arms for the purpose. Gough suggests it gained currency in subsequent decades; it was called Patrick's Cross and shown alongside those of George and Andrew in various documents, including a 1697 drawing of William III, and The Irish Compendium of 1722. A 1679 pamphlet account of heraldry states that the arms borne in the Crusades by the Irish Nation were "a red Cross in a yellow Field". In 1688, Randle Holme explicitly calls this (Or a Cross Gules) "St. Patrick's Cross" "for Ireland". The County Galway unit of the Irish Volunteers in 1914 adopted a similar banner because "it was used as the Irish flag in Cromwell's time". The flag used by the King's Own Regiment in the Kingdom of Ireland, established in 1653, was a red saltire on a "taffey" yellow background. The origins of the regimental colours remain a mystery however.
Other heraldic designs
In 1593–94, Irish Catholics in Habsburg Spain made unrealised plans for a "military order of Saint Patrick" to fight in the Nine Years' War, whose knights would wear a cross moline badge.
A 1935 article states that during the Confederation, "the true St. Patrick's Cross was carried as a square flag: a white cross on a green ground, with a red circle."
Monuments
Ancient high crosses called "Saint Patrick's cross" existed at places with legendary associations with the saint: the Rock of Cashel, where he baptised Óengus mac Nad Froích, King of Munster, and Station Island, site of Saint Patrick's Purgatory, in Lough Derg, County Donegal. Until the 18th century there was a "St Patrick's Cross" in Liverpool, marking the spot where he supposedly preached before starting his mission to Ireland.
The arms of Ballina, County Mayo, adopted in 1970, include an image of "St Patrick's cross" carved on a rock in Leigue cemetery, said to date from Patrick's visit there in AD 441.
Saint Patrick's Day badges
It was formerly a common custom to wear a cross made of paper or ribbon on St Patrick's Day. Surviving examples of such badges come in a variety of colours and they were worn upright rather than as saltires.
The second part of Richard Johnson's Seven Champions of Christendom (1608) concludes its fanciful account of St Patrick with, "the Irishmen as well in England as in that Country, do as yet in Honour of his Name, keep one day in the Year Festival, wearing upon their Hats each of them a Cross of red Silk, in Token of his many Adventures, under the Christian Cross". Irish soldiers stationed in Britain in 1628 reportedly wore red crosses on Patrick's Day "after their country manner".
Thomas Dinely, an English traveller in Ireland in 1681, remarked that "the Irish of all stations and condicõns were crosses in their hatts, some of pins, some of green ribbon." Jonathan Swift, writing to "Stella" of Saint Patrick's Day 1713, said "the Mall was so full of crosses that I thought all the world was Irish". The crosses were also associated with Irish regiments, who were reported in 1682 to have been seen wearing crosses of red ribbon on St Patrick's Day; and with the English court, who were said to have worn crosses in honour of St Patrick on the saint's day in 1726. In the 1740s, the badges pinned were multicoloured interlaced fabric. In the 1820s, they were only worn by children, with simple multicoloured daisy patterns. In the 1890s, they were almost extinct, and a simple green Greek cross inscribed in a circle of paper (similar to the Ballina crest pictured). The Irish Times in 1935 reported they were still sold in poorer parts of Dublin, but fewer than those of previous years "some in velvet or embroidered silk or poplin, with the gold paper cross entwined with shamrocks and ribbons".
Others
On the St. Patrick halfpenny, Patrick is depicted holding a crozier headed with a patriarchal cross.
The badge of the Companions of Saint Patrick, a nonpartisan but mainly unionist group of Dublin civic leaders active from 1906 until the 1930s, featured a red Celtic cross on a white background.
Further reading
References
Crosses
Cross symbols
Christianity-related lists
Lists of symbols
Crosses in heraldry
Ireland-related lists | List of Saint Patrick's crosses | Mathematics | 1,520 |
51,123,191 | https://en.wikipedia.org/wiki/Firebase%20Cloud%20Messaging | Firebase Cloud Messaging (FCM), formerly known as Google Cloud Messaging (GCM), is a cross-platform cloud service for messages and notifications for Android, iOS, and web applications, which as of May 2023 can be used at no cost. Firebase Cloud Messaging allows third-party application developers to send notifications or messages from servers hosted by FCM to users of the platform or end users.
The service is provided by Firebase, a subsidiary of Google. On October 21, 2014, Firebase announced it had been acquired by Google for an undisclosed amount. The official Google Cloud Messaging website points to Firebase Cloud Messaging (FCM) as the new version of GCM. Firebase is a mobile platform which supports users in developing mobile and web applications. Firebase Cloud Messaging is one of many products which are part of the Firebase platform. On the platform users can integrate and combine different Firebase features in both web and mobile applications.
History
Firebase Cloud Messaging (FCM) is part of the Firebase platform, which is a cloud service model that automates backend development, or a Backend-as-a-Service (BaaS). After the Firebase company was acquired by Google in 2014, some Firebase platform products or technologies were integrated with Google's existing services. Google's mobile notification service Google Cloud Messaging (GCM) was replaced by FCM in 2016. On April 10, 2018, GCM was deprecated by Google, and on May 29, 2019, the GCM server and client APIs were removed. FCM has become the replacement for GCM. However, FCM is compatible with existing Google Software Development Kits (SDK).
Firebase Cloud Messaging is a cross-platform messaging service on which the user can deliver messages without cost. FCM is compatible with various platforms including Android and iOS. Google launched support for web applications on October 17, 2016, including mobile web applications. On FCM, third-party application developers can send push notifications and messages via an application programming interface (API) to end users. After users give consent to receive push notifications, they are able to receive real-time information or data for syncing.
Development
FCM inherits the core infrastructure of GCM, however, it simplifies the development of the client side. GCM and FCM offer encryption, push notification and messaging, native Android and iOS SDK support. Both require a third-party entity between the client application and the trusted environment which may create delays in the communication path between the mobile terminal and application server. FCM supports server protocols HTTP and XMPP which are identical to GCM protocols.
Developers are not required to write individual registration or subscription retry logic in the client application. FCM and GCM handle messages through the same instructions; however, instead of GCM connection servers, messages are passed through FCM servers. The FCM Software Development Kit (SDK) excludes writing individual registration or subscription retry logic for a shortened client development process. The FCM SDK provides a new notification solution allowing developers to use the serverless Firebase Notifications on a web console, based on Firebase Analytics insights. FCM enables unlimited upstream and downstream messages to be sent.
Key capabilities
Firebase Cloud Messaging has three main capabilities. The first capability is that FCM allows the user to receive notification messages or data messages which can be deciphered by the application code. The second capability is message targeting. Messages are able to be sent to the client application through different methods; from the FCM platform to individual devices, specified device groups or devices which are subscribed to particular topic domains. The third key capability is the connection channel from client applications to the server. FCM allows messages of various types to be sent from selected devices or client apps via the FCM channel.
Technical details
Firebase Cloud Messaging sends notifications and messages to devices which have installed specific Firebase-enabled apps. Adding support for FCM to an application requires multiple steps: add support to the Android Studio project, obtain registration tokens and implement handlers to identify message notifications. The message notifications can be sent via the Firebase console with a select user segmentation option.
FCM architecture
The FCM architecture includes three components: the FCM connection server; a trusted environment with an application server based on HTTP or XMPP and cloud functionality; and a client application. Sending and receiving messages require a secured environment or server to build, direct and send messages, and an iOS, Android or web client application to receive messages. There are two types of messages developers can send with FCM: notification messages and data messages. Notification messages are messages displayed on the device by FCM and are automatically managed by the FCM SDK. Data messages are processed by the client application. Therefore, notification messages are used when the developer requires FCM to handle the notification display for the client applications. Data messages are used when the developer requires the messages to be processed on the client application.
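A minimal sketch of the two message types, using the Firebase Admin SDK for Python; the service-account path and the device registration token are placeholders, and initialization details may differ per project.

```python
import firebase_admin
from firebase_admin import credentials, messaging

# Initialize the Admin SDK with a service-account key (path is a placeholder).
firebase_admin.initialize_app(credentials.Certificate("path/to/service-account.json"))

registration_token = "DEVICE_REGISTRATION_TOKEN"  # issued to the client app instance

# Notification message: display is handled automatically by the FCM SDK on the device.
notification_msg = messaging.Message(
    notification=messaging.Notification(title="Update", body="New content is available"),
    token=registration_token,
)

# Data message: a key-value payload that the client application code processes itself.
data_msg = messaging.Message(
    data={"type": "sync", "resource_id": "42"},  # values must be strings
    token=registration_token,
)

print(messaging.send(notification_msg))  # each send() returns a message ID string
print(messaging.send(data_msg))
```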
FCM can deliver target messages to applications via three methods: to a single device, to a device group or to devices which are subscribed to topics. Developers build and send targeted messages to a select group of users on the ‘Notification composer.’ Messages sent with FCM are integrated with Google Analytics to track user conversion and engagement.
Implementation
The implementation process has two key components. First, a secure environment to send and receive messages is required for FCM or other application servers to facilitate message transaction. Second, a client application of possible types, iOS, Android or web (javaScript), which is also compatible with the selected platform service is needed.
The implementation path for FCM is initiated with the FCM SDK setup following the instructions prescribed for the decided platform. Following setup, the client application must be developed. On the client app, add message handling, topic subscription logic and other required features. During this step, test messages can also be sent from the Notifications composer. The application server is developed next to build the sending logic. The base server environment is created without code.
Architecture flow
Registration of the device and setting it up to enable message reception from FCM is first required. The client application instance will be registered and assigned a registration token or FCM Token, which is issued by the FCM connection servers that will provide the application instance (app instance) a unique identifier. The app instance is then able to send and receive downstream messages. Downstream messaging refers to the sending of a push notification from the application server towards the client application. This process involves four steps. First, after a message is created on the Notifications composer or in another secure environment, a request for the message will be sent to the FCM backend. Second, the FCM backend will receive and accept the message request and prepare the messages for each specified topic, create message metadata such as a message ID and send it to a transport layer, specific to the platform. Third, the message will be sent through the platform-specific transport layer to an online device. The platform-level transport layer is responsible for routing the message to a specific device, handling the delivery of the message and applying specific configurations to the platform. Fourth, the client application will receive the notification or message via their device.
Additional features and tools
Analytics
Firebase offers free and unrestricted analytics tools to assist the user gain insights into the 'ad click' & 'application usage' of end customers. In conjunction with other Firebase features, Firebase Analytics allows the user to explore and use on a range of functionalities such as click-through rates to app crashes.
Firebase Remote Config
It is a simple key–value store that lives in the cloud and enables the user to implement modifications which can be read by the application. The Firebase Remote Config also includes an audience builder, in addition to the basic feature, which helps the user create custom audiences and perform A/B testing.
Cross-platform support
APIs packaged into single SDKs for iOS, Android, JavaScript and C++ in conjunction with the cross-platform support provided by FCM allow the developer to expand across different platforms without infrastructure modification.
Web Push support
Developers can implement the standard IETF Web Push APIs and begin to target web browsers. On Chrome, developers can send messages to Chrome on Android or to Chrome pages on Mac, Windows and Linux. Added features for web push support include topic messaging and the ability to send messages to topic combinations.
Topic Messaging
Developers can send a single message to multiple devices. It is a method of notifying users with common interest topics such as sports events, artists, or music genres. Developers need to publish a message to FCM, which is automatically delivered to devices subscribed to the selected topic. The subscriber count on a single topic or on multiple topics is not limited by the application.
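A short sketch of topic messaging with the same Python Admin SDK (assuming the app has been initialized as in the earlier example; the topic name and tokens are illustrative):

```python
from firebase_admin import messaging

# Subscribe a batch of client registration tokens to a topic (server-side management).
tokens = ["TOKEN_1", "TOKEN_2"]  # placeholder device registration tokens
response = messaging.subscribe_to_topic(tokens, "sports-events")
print(response.success_count, "devices subscribed")

# Publish a single message; FCM fans it out to every device subscribed to the topic.
message = messaging.Message(
    notification=messaging.Notification(title="Kick-off", body="The match has started"),
    topic="sports-events",
)
messaging.send(message)
```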
Topic Combination Messaging
If users are subscribed to multiple topics, developers can use the updated API to avoid publishing the same message to several topics and thus prevent users from receiving duplicate messages. Developers can set specific conditions for FCM to deliver the message only to users who meet the condition criteria.
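For instance, a boolean condition over topic subscriptions (same SDK, illustrative topic names) delivers the message only to devices matching the expression, so a device subscribed to both topics receives it once:

```python
from firebase_admin import messaging

# Only devices whose topic subscriptions satisfy the condition receive the message.
message = messaging.Message(
    notification=messaging.Notification(title="Highlights", body="Your weekly round-up"),
    condition="'sports-events' in topics && 'weekly-digest' in topics",
)
messaging.send(message)
```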
Message Delivery Reports
Message Delivery Reports (MDR) are generated by FCM's reporting tool which allows developers to obtain analytical insights into the message delivery. In the MDR, developers can evaluate the reach of the sent messages to specific users by viewing the data for messages to different FCM SDKs (Android, iOS).
Notification Funnel Analysis
A Notification Funnel Analysis (NFA) is built into the FCM platform. By using this tool, developers can view user behaviour and trends from data around responses to particular notifications. The types of notification data which can be analysed are "Notifications Sent", "Notifications Opened" and number of unique users. An analysis report can be pulled from the NFA. Developers can also customise and build the notification funnels.
Key concerns
Security concerns
FCM shortens the design and implementation process for mobile applications. Due to the available functionality of sending test messages through the Notifications Composer in the Firebase console, the testing process is also shortened. Cloud-based messaging solutions also have security and privacy risks which need to be mitigated and considered before implementation into a project. The development of cloud computing involves an open network structure and elastic pooling of shared resources which increases the need for cloud security measures to be established.
A security concern is the potential exploitation of server keys which are stored in the FCM’s Android application package (APK) files. If exploited, this allows the distribution of push notification messages to any and all users on the Firebase platform. GCM has previously reported security vulnerabilities where phishing and malicious advertisement activities have occurred.
Protection against security threats involves multiple steps and can lead to additional implications. Deactivating the Cloud Messaging service will prevent immediate transactions. However, this could potentially stop other applications installed on the blocked device which rely on the FCM service. A possible solution is to block a specific notification channel or unsubscribe from a topic. Other solutions involve setting up message traffic notification systems to detect malicious information being messaged through the FCM service platform. To implement this solution additional steps are required. The user needs to identify at the start, the connection channel or topic potentially used by the malicious application.
Privacy concerns
Cloud-based messaging also poses privacy risks and issues. Black hat hackers may be able to breach the security of the Firebase Cloud Messaging platform and acquire the registration ID of the user’s application or other sensitive information. Security compromise examples include private messages on a user’s social media account being pushed to the hacker’s device. To ensure the privacy of the platform, the user can build end-to-end protection schemes around the open communication channels provided by the Cloud Messaging Services, which are unsecure. FCM provides users with payload encryption.
References
External links
Firebase Cloud Messaging - official website
Google Cloud Messaging - official website
GCM and FCM Frequently Asked Questions
Mobile telecommunication services
Google Cloud
Push technology | Firebase Cloud Messaging | Technology | 2,460 |
4,364,523 | https://en.wikipedia.org/wiki/Fission%20products%20%28by%20element%29 | This page discusses each of the main elements in the mixture of fission products produced by nuclear fission of the common nuclear fuels uranium and plutonium. The isotopes are listed by element, in order by atomic number.
Neutron capture by the nuclear fuel in nuclear reactors and atomic bombs also produces actinides and transuranium elements (not listed here). These are found mixed with fission products in spent nuclear fuel and nuclear fallout.
Neutron capture by materials of the nuclear reactor (shielding, cladding, etc.) or the environment (seawater, soil, etc.) produces activation products (not listed here). These are found in used nuclear reactors and nuclear fallout. A small but non-negligible proportion of fission events produces not two, but three fission products (not counting neutrons or subatomic particles). This ternary fission usually produces a very light nucleus such as helium (about 80% of ternary fissions produce an alpha particle) or hydrogen (most of the rest produce tritium or to a lesser extent deuterium and protium) as the third product. This is the main source of tritium from light water reactors. Another source of tritium is Helium-6 which immediately decays to (stable) Lithium-6. Lithium-6 produces tritium when hit by neutrons and is one of the main sources of commercially or militarily produced tritium. If the first or only step of nuclear reprocessing is an aqueous solution (as is the case in PUREX) this poses a problem as tritium contamination cannot be removed from water other than by costly isotope separation. Furthermore, a tiny fraction of the free neutrons involved in the operation of a nuclear reactor decay to a proton and a beta particle before they can interact with anything else. Given that protons from this source are indistinguishable from protons from ternary fission or radiolysis of coolant water, their overall proportion is hard to quantify.
Germanium-72, 73, 74, 76
If Germanium-75 is produced, it quickly decays to arsenic. Germanium-76 is essentially stable, only decaying via extremely slow double beta decay to selenium-76.
Arsenic-75
While arsenic presents no radiological hazard, it is extremely chemically toxic. If it is desired to get rid of arsenic (no matter its origin), thermal neutron irradiation of the only stable isotope, 75As, will yield short-lived 76As, which quickly decays to stable 76Se. If arsenic is irradiated with sufficient fast neutrons to cause notable "knockout" (n,2n) or even (n,3n) reactions, isotopes of germanium will be produced instead.
Selenium-77, 78, 79, 80, 82
Se-79, half-life of 327k years, is one of the long-lived fission products. Given the stability of its next lighter and heavier isotopes and the high cross section those isotopes exhibit for various neutron reactions, it is likely that the relatively low yield is due to Se-79 being destroyed in the reactor to an appreciable extent.
Bromine-81
The other stable isotope, 79Br, is "shadowed" by the long half-life of its more neutron-rich isobar 79Se.
Krypton-83, 84, 85, 86
Krypton-85, with a half-life of 10.76 years, is formed by the fission process with
a fission yield of about 0.3%. Only 20% of the fission products of mass 85 become 85Kr itself; the rest passes through a short-lived nuclear isomer and then to stable 85Rb. If irradiated reactor fuel is reprocessed, this radioactive krypton may be released into the air. This krypton release can be detected and used as a means of detecting clandestine nuclear reprocessing. Strictly speaking, the stage which is detected is the dissolution of used nuclear fuel in nitric acid, as it is at this stage that the krypton and other fission gases like the more abundant xenon are released. Despite the industrial applications of Krypton-85 and the relatively high prices of both Krypton and Xenon, they are not currently extracted from spent fuel to any appreciable extent even though Krypton and Xenon both become solid at the temperature of liquid nitrogen and could thus be captured in a cold trap if the flue gas of a voloxidation process were cooled by liquid nitrogen.
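As a worked illustration of the 10.76-year half-life quoted above (the five-year cooling time is only an example):

```python
import math

KR85_HALF_LIFE_YEARS = 10.76  # from the text above

def fraction_remaining(t_years, half_life=KR85_HALF_LIFE_YEARS):
    """Fraction of an initial 85Kr inventory still present after t_years of decay."""
    return math.exp(-math.log(2) * t_years / half_life)

# Example: fuel cooled for 5 years before reprocessing still holds about 72% of its 85Kr.
print(f"{fraction_remaining(5.0):.1%}")
```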
A build-up of fission gases above a certain limit can lead to fuel pin swelling and even puncture, so measuring the fission gases after the fuel is discharged from the reactor is important for burn-up calculations, for studying the behaviour of the fuel inside the reactor and its interaction with the pin materials, for effective utilization of the fuel, and for reactor safety. In addition to that, they are a nuisance in a nuclear reactor due to being neutron poisons, albeit not to the same extent as isotopes of xenon, another noble gas produced by fission.
Rubidium-85, 87
Rubidium-87 has such a long half life as to be essentially stable (longer than the age of the Earth). Rubidium-86 quickly decays to stable Strontium-86 if produced either directly, via (n,2n) reactions in Rubidium-87 or via neutron capture in Rubidium-85.
Strontium-88, 89, 90
The strontium radioisotopes are very important, as strontium is a calcium mimic which is incorporated in bone growth and therefore has a great ability to harm humans. On the other hand, this also allows 89Sr to be used in the open source radiotherapy of bone tumors. This tends to be used in palliative care to reduce the pain due to secondary tumors in the bones.
Strontium-90 is a strong beta emitter with a half-life of 28.8 years. Its fission product yield decreases as the mass of the fissile nuclide increases - fission of produces more than fission of with fission of in the middle. A map of 90Sr contamination around Chernobyl has been published by the IAEA. Due to its very small neutron absorption cross section, Strontium-90 is poorly suited for thermal neutron induced nuclear transmutation as a way of disposing of it.
Strontium-90 has been used in radioisotope thermoelectric generators (RTGs) in the past because of its relatively high power density (0.95 Wthermal/g for the metal, 0.46 Wthermal/g for the commonly used inert perovskite form, strontium titanate) and because it is easily extracted from spent fuel (both native strontium metal and strontium oxide react with water, forming soluble strontium hydroxide). However, the increased availability of renewable energy for off-grid applications formerly served by RTGs, as well as concern about orphan sources, has led to a nigh-total abandonment of strontium-90 in RTGs. The few (largely space-based) applications for RTGs that still exist are largely supplied by plutonium-238 despite its higher cost, as it has a higher power density and a longer half-life, and is more easily shielded since it is an alpha emitter while strontium-90 is a beta emitter.
Yttrium-89 to 91
The only stable yttrium isotope, 89Y, will be found with yield somewhat less than 1% in a fission product mixture which has been allowed to age for months or years, as the next-longest lived yttrium isotopes have half-lives of only 107 days (88Y) or 59 days (91Y). However, a small amount of yttrium-90 will be found in secular equilibrium with its parent strontium-90 unless the two elements are separated from each other.
90Sr decays into 90Y which is a beta emitter with a half-life of 2.67 days.
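The approach to this secular equilibrium can be illustrated numerically. The following minimal Python sketch (not part of the source article; it only uses the half-lives quoted above) evaluates the standard Bateman expression for the activity of 90Y relative to its parent 90Sr in a freshly purified strontium sample:

    import math

    # Half-lives as quoted in the text, converted to days; lambda = ln 2 / T.
    T_SR90 = 28.8 * 365.25
    T_Y90 = 2.67
    lam_sr = math.log(2) / T_SR90
    lam_y = math.log(2) / T_Y90

    def y90_to_sr90_activity_ratio(t_days):
        # Bateman solution for a freshly purified 90Sr sample (no 90Y present at t = 0).
        return lam_y / (lam_y - lam_sr) * (1.0 - math.exp(-(lam_y - lam_sr) * t_days))

    for t in (2.67, 7.0, 14.0, 30.0):
        print(f"{t:5.2f} d  A(90Y)/A(90Sr) = {y90_to_sr90_activity_ratio(t):.3f}")
    # The ratio climbs from about 0.5 after one 90Y half-life toward 1.0 (secular equilibrium).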
90Y is sometimes used for medical purposes and can be obtained either by the neutron activation of stable 89Y or by using a device similar to a technetium cow.
As the half-lives of the unstable yttrium isotopes are low (88Y being the longest-lived at 106 days), yttrium extracted from strontium-free, moderately aged spent fuel has negligible radioactivity. However, the strong gamma emitter will be present as long as its parent nuclide is. Should a nonradioactive sample of yttrium be desired, care must be taken to remove all traces of strontium, and sufficient time to let the short-lived Y-90 (64-hour half-life) decay must be allowed before the product can be used.
Zirconium-90 to 96
A significant amount of zirconium is formed by the fission process; some of this consists of short-lived radionuclides (95Zr and 97Zr, which decay via niobium to molybdenum), while almost 10% of the fission product mixture after years of decay consists of five stable or nearly stable isotopes of zirconium plus 93Zr, which has a half-life of 1.53 million years and is one of the 7 major long-lived fission products. Zirconium is commonly used in the cladding of fuel rods due to its low neutron cross section. However, a small share of this zirconium does capture neutrons and contributes to the overall inventory of radioactive zirconium isotopes. Zircaloy cladding is not commonly reused and neither is fission product zirconium, which could be used in cladding as its relatively weak radioactivity would be of no major concern inside a nuclear reactor. Despite its high yield and long half-life, Zr-93 is generally not deemed to be of major concern as it is not chemically mobile and emits little radiation.
In PUREX plants the zirconium (regardless of source or isotope) sometimes forms a third phase which can be a disturbance in the plant. The third phase is the term in solvent extraction given to a third layer (such as foam and/or emulsion) which forms from the two layers in the solvent extraction process. The zirconium forms the third phase by forming small particles which stabilise the emulsion which is the third phase.
Zirconium-90 mostly forms by successive beta decays from strontium-90 (via yttrium-90). A nonradioactive zirconium sample can be extracted from spent fuel by extracting strontium-90 and allowing enough of it to decay (e.g. in an RTG). The zirconium can then be separated from the remaining strontium, leaving a very isotopically pure Zr-90 sample.
Niobium-95
Niobium-95, with a half-life of 35 days, is initially present as a fission product. The only stable isotope of niobium has mass number 93, and fission products of mass 93 first decay to long-lived zirconium-93 (half-life 1.53 Ma). Niobium-95 will decay to molybdenum-95 which is stable.
Molybdenum-95, 97, 98, 99, 100
The fission product mixture contains significant amounts of molybdenum. Molybdenum-99 is of enormous interest to nuclear medicine as the parent nuclide of technetium-99m, but its short half-life means it will usually have decayed long before the spent fuel is reprocessed. Technetium-99m for medical use can be produced either by fission followed by immediate reprocessing (usually only done in small-scale research reactors) or in particle accelerators. As molybdenum-100 only decays extremely slowly via double beta decay (half-life longer than the age of the universe), the molybdenum content of spent fuel will be essentially stable after a few days have passed to allow the molybdenum-99 to decay.
Technetium-99
99Tc, half-life 211k years, is produced at a yield of about 6% per fission; see also the main fission products page. It is also produced (via the short lived nuclear isomer Technetium-99m) as a decay product of Molybdenum-99. Technetium is particularly mobile in the environment as it forms negatively charged pertechnetate-ions and it presents the biggest radiological hazard among the long lived fission products. Despite being a metal, Technetium usually doesn't form positively charged ions, but Technetium halides like Technetium hexafluoride exist. TcF6 is a nuisance in uranium enrichment as its boiling point () is very close to that of uranium hexafluoride (). The issue is known to enrichment facilities because spontaneous fission also yields small amounts of Technetium (which will be in secular equilibrium with its parent nuclides in natural uranium) but if fluoride volatility is employed for reprocessing, a significant share of the "uranium" fraction of fractional distillation will be contaminated with Technetium requiring a further separation step.
Technetium-99 is suitable for nuclear transmutation by slow neutrons as it has a sufficient thermal neutron cross section and as it has no known stable isotopes. Under neutron irradiation, Tc-99 forms Tc-100, which quickly decays to stable ruthenium-100, a valuable platinum group metal.
Ruthenium-101 to 106
Plenty of radioactive ruthenium-103, ruthenium-106, and stable ruthenium are formed by the fission process. The ruthenium in PUREX raffinate can become oxidized to form volatile ruthenium tetroxide which forms a purple vapour above the surface of the aqueous liquor. The ruthenium tetroxide is very similar to osmium tetroxide; the ruthenium compound is a stronger oxidant which enables it to form deposits by reacting with other substances. In this way the ruthenium in a reprocessing plant is very mobile, difficult to stabilize, and can be found in odd places. It has been called extremely troublesome and has a notorious reputation as an especially difficult product to handle during reprocessing. Voloxidation combined with cold trap collection of the flue gases could recover the volatile ruthenium tetroxide before it can become a nuisance in further processing. After the radioactive isotopes have had time to decay, recovered ruthenium could be sold at its relatively high market value.
In addition, the ruthenium in PUREX raffinate forms a large number of nitrosyl complexes, which makes the chemistry of the ruthenium very complex. The ligand exchange at ruthenium and rhodium tends to be slow, hence it can take a long time for a ruthenium or rhodium compound to react.
At Chernobyl, during the fire, the ruthenium became volatile and behaved differently from many of the other metallic fission products. Some of the particles which were emitted by the fire were very rich in ruthenium.
As the longest-lived radioactive ruthenium isotope, ruthenium-106, has a half-life of only 373.59 days, it has been suggested that the ruthenium and palladium in PUREX raffinate should be used as a source of the metals after allowing the radioactive isotopes to decay. After ten half-lives have passed, less than 0.1% of a radioisotope remains undecayed. For Ru-106 this corresponds to 3,735.9 days, or about 10 years.
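The figure quoted above follows from simple half-life arithmetic, as the short Python sketch below illustrates (illustrative only; the half-life value is the one given in the text):

    # Fraction of a radioisotope left after n half-lives is (1/2)**n.
    half_life_days = 373.59   # 106Ru, as quoted above
    n = 10
    remaining = 0.5 ** n
    print(f"after {n} half-lives: {remaining:.4%} remaining")   # about 0.0977 %
    print(f"elapsed time: {n * half_life_days:.1f} days")       # 3735.9 days, roughly 10.2 years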
Rhodium-103, 105
While less rhodium than ruthenium and palladium is formed (around 3.6% yield), the mixture of fission products still contains a significant amount of this metal. Due to the high prices of ruthenium, rhodium, and palladium, some work has been done on the separation of these metals to enable them to be used at a later date. Because of the possibility of the metals being contaminated by radioactive isotopes, they are not suitable for making consumer products such as jewellery. However, this source of the metals could be used for catalysts in industrial plants such as petrochemical plants.
A dire example of people being exposed to radiation from contaminated jewellery occurred in the United States. It is thought that gold seeds used to contain radon were recycled into jewellery. The gold indeed did contain radioactive decay products of 222Rn.
Some other rhodium isotopes occur only as transitory states in the decay of ruthenium isotopes on their way to stable isotopes of palladium. If the low-level radioactivity of palladium (see below) is deemed excessive - for example for use as an investment or in jewelry - either of its predecessors can be extracted from relatively "young" spent fuel and allowed to decay before the stable end-product of the decay series is extracted.
Palladium-105 to 110
Much palladium forms during the fission process. In nuclear reprocessing, not all of the fission palladium dissolves; also some palladium that dissolves at first comes out of solution later. Palladium-rich dissolver fines (particles) are often removed as they interfere with the solvent extraction process by stabilising the third phase.
The fission palladium can separate during the process in which the PUREX raffinate is combined with glass and heated to form the final high level waste form. The palladium forms an alloy with the fission tellurium. This alloy can separate from the glass.
107Pd is the only long-lived radioactive palladium isotope among the fission products; its beta decay has a long half-life and low decay energy, which allows industrial use of the extracted palladium without isotope separation.
Palladium-109 will most likely have decayed to stable silver-109 by the time reprocessing happens. Before reaching silver-109, the short-lived nuclear isomer silver-109m is passed through. However, unlike technetium-99m, silver-109m has no current use.
Silver-109, 111
While the radioactive silver isotopes that are produced quickly decay away, leaving only stable silver, extracting it for use is not economical except as a byproduct of platinum group metal extraction.
Cadmium-111 to 116
Cadmium is a strong neutron poison and in fact control rods are often made out of cadmium, making the accumulation of cadmium in fuel of particular concern for the maintenance of stable neutron economy. Cadmium is also a chemically poisonous heavy metal, but given the number of neutron absorptions required for transmutation, it is not a high priority target for deliberate transmutation.
Indium-115
While Indium-115 is very slightly radioactive, its half life is longer than the age of the universe and indeed a typical sample of Indium on earth will contain more of this "unstable" isotope than of "stable" Indium-113.
Tin-117 to 126
In a normal thermal reactor, tin-121m has a very low fission product yield; thus, this isotope is not a significant contributor to nuclear waste. Fast fission or fission of some heavier actinides will produce 121mSn at higher yields. For example, its yield from U-235 is 0.0007% per thermal fission and 0.002% per fast fission.
Antimony-121, 123, 124, 125
Antimony-125 decays with a half-life of over two years to tellurium-125m, which itself decays with a half-life of almost two months via isomeric transition to the ground state. While its relatively short half-life and the significant gamma emissions (144.77 keV) of its daughter nuclide make usage in an RTG less attractive, Sb-125 could deliver a relatively high power density of 3.4 Wthermal/g.
Fluoride volatility can recover antimony as the mildly volatile (solid at room temperature) Antimony trifluoride or the more volatile (boiling point ) Antimony pentafluoride.
Tellurium-125 to 132
Tellurium-128 and -130 are essentially stable. They only decay by double beta decay, with half-lives greater than 10^20 years. They constitute the major fraction of naturally occurring tellurium, at 32% and 34% respectively.
Tellurium-132 and its daughter 132I are important in the first few days after a criticality. It was responsible for a large fraction of the dose inflicted on workers at Chernobyl in the first week.
The isobaric decay chain forming 132Te/132I is: tin-132 (half-life 40 s) decaying to antimony-132 (half-life 2.8 minutes), decaying to tellurium-132 (half-life 3.2 days), decaying to iodine-132 (half-life 2.3 hours), which decays to stable xenon-132.
The creation of tellurium-126 is delayed by the long half-life (230 k years) of tin-126.
Iodine-127, 129, 131
131I, with a half-life of 8 days, is a hazard from nuclear fallout because iodine concentrates in the thyroid gland. See also Radiation effects from Fukushima Daiichi nuclear disaster#Iodine-131 and Downwinders#Nevada.
In common with 89Sr, 131I is used for the treatment of cancer. A small dose of 131I can be used in a thyroid function test while a large dose can be used to destroy the thyroid cancer. This treatment will also normally seek out and destroy any secondary tumor which arose from a thyroid cancer. Much of the energy from the beta emission from the 131I will be absorbed in the thyroid, while the gamma rays are likely to be able to escape from the thyroid to irradiate other parts of the body.
Large amounts of 131I were released during an experiment named the Green Run, in which fuel that had only been allowed to cool for a short time after irradiation was reprocessed in a plant whose iodine scrubber was not in operation.
129I, with a half-life almost a billion times as long, is a long-lived fission product. It is among the most troublesome because it accumulates in a relatively small organ (the thyroid), where even its comparatively low radiation dose can cause great damage, as it has a long biological half-life. For this reason, iodine is often considered for transmutation despite the presence of stable 127I in spent fuel. In the thermal neutron spectrum, more iodine-129 is destroyed than newly created, since iodine-128 is short-lived and the isotope ratio is in favor of 129I. Depending on the design of the transmutation apparatus, care must be taken, as xenon, the product of iodine's beta decay, is both a strong neutron poison and a gas that is nigh impossible to chemically "fix" in solid compounds, so it will either escape to the outside air or put pressure on the vessel containing the transmutation target.
127I is stable, the only one of the isotopes of iodine that is nonradioactive. It makes up only about of the iodine in spent fuel, with I-129 about .
Xenon-131 to 136
In reactor fuel, the fission product xenon tends to migrate to form bubbles in the fuel. As caesium-133, -135, and -137 are formed by the beta decay of the corresponding xenon isotopes, this causes the caesium to become physically separated from the bulk of the uranium oxide fuel.
Because 135Xe is a potent nuclear poison with the largest cross section for thermal neutron absorption, the buildup of 135Xe in the fuel inside a power reactor can lower the reactivity greatly. If a power reactor is shut down or left running at a low power level, then large amounts of 135Xe can build up through decay of 135I. When the reactor is restarted or the low power level is increased significantly, 135Xe will be quickly consumed through neutron capture reactions and the reactivity of the core will increase. Under some circumstances, control systems may not be able to respond quickly enough to manage an abrupt reactivity increase as the built-up 135Xe burns off. It is thought that xenon poisoning was one of the factors which led to the power surge which damaged the Chernobyl reactor core.
Caesium-133, 134, 135, 137
Caesium-134 is found in spent nuclear fuel but is not produced by nuclear weapon explosions, as it is only formed by neutron capture on stable Cs-133, which in turn is only produced by the beta decay of Xe-133 (half-life about 5 days). Cs-134 has a half-life of 2 years and may be a major source of gamma radiation in the first 20 years after discharge.
Caesium-135 is a long-lived fission product with much weaker radioactivity. Neutron capture inside the reactor transmutes much of the xenon-135 that would otherwise decay to Cs-135.
Caesium-137, with a half-life of 30 years, is the main medium-lived fission product, along with Sr-90.
Cs-137 is the primary source of penetrating gamma radiation from spent fuel from 10 years to about 300 years after discharge.
It is the most significant radioisotope left in the area around Chernobyl.
Barium-138, 139, 140
Barium is formed in large amounts by the fission process. A short-lived barium isotope was initially confused with radium by some early workers, who were bombarding uranium with neutrons in an attempt to form a new element but instead caused fission, which generated a large amount of radioactivity in the target. Because the chemistry of barium and radium is very similar, the two elements could be co-separated, for instance by precipitation with sulfate anions. Because of this chemical similarity, the early workers thought that the highly radioactive fraction which separated with the "radium" fraction contained a new isotope of radium. Some of this early work was done by Otto Hahn and Fritz Strassmann.
Lanthanides (lanthanum-139, cerium-140 to 144, neodymium-142 to 146, 148, 150, promethium-147, and samarium-149, 151, 152, 154)
Large amounts of the lighter lanthanides (lanthanum, cerium, neodymium, and samarium) are formed as fission products. At Oklo in Africa, where a natural nuclear fission reactor operated over a billion years ago, the isotopic mixture of neodymium is not the same as "normal" neodymium; it has an isotope pattern very similar to the neodymium formed by fission.
In the aftermath of criticality accidents, the level of 140La is often used to determine the fission yield (in terms of the number of nuclei which underwent fission).
Samarium-149 is the second most important neutron poison in nuclear reactor physics. Samarium-151, produced at lower yields, is the third most abundant medium-lived fission product but emits only weak beta radiation. Both have high neutron absorption cross sections, so that much of what is produced in a reactor is later destroyed there by neutron absorption.
Lanthanides are a problem in nuclear reprocessing because they are chemically very similar to actinides and most reprocessing aims at separating some or all of the actinides from the fission products or at least the neutron poisons among them.
External links
The Live Chart of Nuclides – IAEA Color-map of fission product yields, and detailed data by click on a nuclide.
Periodic Table with isotope decay chain displays. Click on element, and then isotope mass number to see the decay chain (link to uranium 235).
References
Inorganic chemistry
Nuclear chemistry
Nuclear physics
Nuclear technology | Fission products (by element) | Physics,Chemistry | 5,720 |
10,803,719 | https://en.wikipedia.org/wiki/String%20operations | In computer science, in the area of formal language theory, frequent use is made of a variety of string functions; however, the notation used is different from that used for computer programming, and some commonly used functions in the theoretical realm are rarely used when programming. This article defines some of these basic terms.
Strings and languages
A string is a finite sequence of characters.
The empty string is denoted by .
The concatenation of two strings and is denoted by , or shorter by .
Concatenating with the empty string makes no difference: .
Concatenation of strings is associative: .
For example, .
A language is a finite or infinite set of strings.
Besides the usual set operations like union, intersection etc., concatenation can be applied to languages:
if both and are languages, their concatenation is defined as the set of concatenations of any string from and any string from , formally .
Again, the concatenation dot is often omitted for brevity.
The language consisting of just the empty string is to be distinguished from the empty language .
Concatenating any language with the former doesn't make any change: ,
while concatenating with the latter always yields the empty language: .
Concatenation of languages is associative: .
For example, abbreviating , the set of all three-digit decimal numbers is obtained as . The set of all decimal numbers of arbitrary length is an example of an infinite language.
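As a rough illustration of these definitions, language concatenation can be sketched in a few lines of Python (the function and variable names here are only for this example):

    # Concatenation of two languages: every string of L1 followed by every string of L2.
    def concat(L1, L2):
        return {s + t for s in L1 for t in L2}

    D = {str(d) for d in range(10)}            # the ten decimal digits
    three_digit = concat(concat(D, D), D)      # all 1000 three-character digit strings
    assert len(three_digit) == 1000
    assert concat({""}, D) == D                # the language {ε} is neutral for concatenation
    assert concat(set(), D) == set()           # concatenating with the empty language yields the empty language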
Alphabet of a string
The alphabet of a string is the set of all of the characters that occur in a particular string. If s is a string, its alphabet is denoted by
The alphabet of a language is the set of all characters that occur in any string of , formally:
.
For example, the set is the alphabet of the string ,
and the above is the alphabet of the above language as well as of the language of all decimal numbers.
String substitution
Let L be a language, and let Σ be its alphabet. A string substitution or simply a substitution is a mapping f that maps characters in Σ to languages (possibly in a different alphabet). Thus, for example, given a character a ∈ Σ, one has f(a)=La where La ⊆ Δ* is some language whose alphabet is Δ. This mapping may be extended to strings as
f(ε)=ε
for the empty string ε, and
f(sa)=f(s)f(a)
for string s ∈ L and character a ∈ Σ. String substitutions may be extended to entire languages as
Regular languages are closed under string substitution. That is, if each character in the alphabet of a regular language is substituted by another regular language, the result is still a regular language.
Similarly, context-free languages are closed under string substitution.
A simple example is the conversion fuc(.) to uppercase, which may be defined e.g. as follows:
For the extension of fuc to strings, we have e.g.
fuc(‹Straße›) = {‹S›} ⋅ {‹T›} ⋅ {‹R›} ⋅ {‹A›} ⋅ {‹SS›} ⋅ {‹E›} = {‹STRASSE›},
fuc(‹u2›) = {‹U›} ⋅ {ε} = {‹U›}, and
fuc(‹Go!›) = {‹G›} ⋅ {‹O›} ⋅ {} = {}.
For the extension of fuc to languages, we have e.g.
fuc({ ‹Straße›, ‹u2›, ‹Go!› }) = { ‹STRASSE› } ∪ { ‹U› } ∪ { } = { ‹STRASSE›, ‹U› }.
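A hypothetical Python sketch of such a substitution is shown below (the helper names f_uc, substitute_string and substitute_language are illustrative, not standard):

    # A substitution maps each character to a language (a set of strings); here f_uc sends
    # letters to their uppercase form, digits to {ε}, and punctuation to {} (the empty language).
    def f_uc(c):
        if c.isalpha():
            return {"SS"} if c == "ß" else {c.upper()}
        if c.isdigit():
            return {""}
        return set()

    def substitute_string(f, s):
        result = {""}                              # f(ε) = {ε}
        for c in s:
            result = {u + v for u in result for v in f(c)}
        return result

    def substitute_language(f, L):
        out = set()
        for s in L:
            out |= substitute_string(f, s)
        return out

    print(substitute_string(f_uc, "Straße"))                   # {'STRASSE'}
    print(substitute_string(f_uc, "u2"))                       # {'U'}
    print(substitute_string(f_uc, "Go!"))                      # set()
    print(substitute_language(f_uc, {"Straße", "u2", "Go!"}))  # {'STRASSE', 'U'}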
String homomorphism
A string homomorphism (often referred to simply as a homomorphism in formal language theory) is a string substitution such that each character is replaced by a single string. That is, , where is a string, for each character .
String homomorphisms are monoid morphisms on the free monoid, preserving the empty string and the binary operation of string concatenation. Given a language , the set is called the homomorphic image of . The inverse homomorphic image of a string is defined as
while the inverse homomorphic image of a language is defined as
In general, , while one does have
and
for any language .
The class of regular languages is closed under homomorphisms and inverse homomorphisms.
Similarly, the context-free languages are closed under homomorphisms and inverse homomorphisms.
A string homomorphism is said to be ε-free (or e-free) if for all a in the alphabet . Simple single-letter substitution ciphers are examples of (ε-free) string homomorphisms.
An example string homomorphism guc can also be obtained by defining it similarly to the above substitution: guc(‹a›) = ‹A›, ..., guc(‹0›) = ε, but letting guc be undefined on punctuation characters.
Examples for inverse homomorphic images are
guc−1({ ‹SSS› }) = { ‹sss›, ‹sß›, ‹ßs› }, since guc(‹sss›) = guc(‹sß›) = guc(‹ßs›) = ‹SSS›, and
guc−1({ ‹A›, ‹bb› }) = { ‹a› }, since guc(‹a›) = ‹A›, while ‹bb› cannot be reached by guc.
For the latter language, guc(guc−1({ ‹A›, ‹bb› })) = guc({ ‹a› }) = { ‹A› } ≠ { ‹A›, ‹bb› }.
The homomorphism guc is not ε-free, since it maps e.g. ‹0› to ε.
A very simple string homomorphism example that maps each character to just a character is the conversion of an EBCDIC-encoded string to ASCII.
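The following Python sketch illustrates a homomorphism of this kind together with a brute-force inverse image (illustrative only; the true inverse image may be infinite, so it is searched here over a finite candidate set):

    # A homomorphism maps each character to a single string; g_uc uppercases letters,
    # erases digits, and is undefined (raises an error) on punctuation.
    def g_uc(s):
        pieces = []
        for c in s:
            if c.isalpha():
                pieces.append("SS" if c == "ß" else c.upper())
            elif c.isdigit():
                pieces.append("")
            else:
                raise ValueError("g_uc is undefined on punctuation")
        return "".join(pieces)

    def image(h, L):
        return {h(s) for s in L}

    def inverse_image(h, L, candidates):
        return {s for s in candidates if h(s) in L}

    print(inverse_image(g_uc, {"SSS"}, {"sss", "sß", "ßs", "a", "bb"}))  # {'sss', 'sß', 'ßs'}
    print(inverse_image(g_uc, {"A", "bb"}, {"a", "bb"}))                 # {'a'}
    print(image(g_uc, inverse_image(g_uc, {"A", "bb"}, {"a", "bb"})))    # {'A'}, not {'A', 'bb'}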
String projection
If s is a string, and is an alphabet, the string projection of s is the string that results by removing all characters that are not in . It is written as . It is formally defined by removal of characters from the right hand side:
Here denotes the empty string. The projection of a string is essentially the same as a projection in relational algebra.
String projection may be promoted to the projection of a language. Given a formal language L, its projection is given by
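A minimal Python sketch of string and language projection (the names are illustrative):

    # Projection keeps only those characters of s that belong to the alphabet sigma.
    def project(s, sigma):
        return "".join(c for c in s if c in sigma)

    def project_language(L, sigma):
        return {project(s, sigma) for s in L}

    print(project("a1b2c3", {"a", "b", "c"}))                 # 'abc'
    print(project_language({"a1", "1b", "ab"}, {"a", "b"}))   # {'a', 'b', 'ab'}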
Right and left quotient
The right quotient of a character a from a string s is the truncation of the character a in the string s, from the right hand side. It is denoted as . If the string does not have a on the right hand side, the result is the empty string. Thus:
The quotient of the empty string may be taken:
Similarly, given a subset of a monoid , one may define the quotient subset as
Left quotients may be defined similarly, with operations taking place on the left of a string.
Hopcroft and Ullman (1979) define the quotient L1/L2 of the languages L1 and L2 over the same alphabet as .
This is not a generalization of the above definition, since, for a string s and distinct characters a, b, Hopcroft's and Ullman's definition implies yielding , rather than .
The left quotient (when defined similar to Hopcroft and Ullman 1979) of a singleton language L1 and an arbitrary language L2 is known as Brzozowski derivative; if L2 is represented by a regular expression, so can be the left quotient.
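The two notions of right quotient can be sketched in Python as follows (illustrative helper names; the Hopcroft–Ullman quotient is computed here only for finite languages):

    # Right quotient of a single character a: drop a trailing a, otherwise return the empty string.
    def right_quotient_char(s, a):
        return s[:-1] if s.endswith(a) else ""

    # Hopcroft-Ullman quotient of two languages: L1 / L2 = { s : s + t is in L1 for some t in L2 }.
    def quotient(L1, L2):
        return {s[:len(s) - len(t)] for s in L1 for t in L2 if s.endswith(t)}

    print(right_quotient_char("abcb", "b"))      # 'abc'
    print(right_quotient_char("abc", "b"))       # ''
    print(quotient({"ab", "aab", "b"}, {"b"}))   # {'a', 'aa', ''}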
Syntactic relation
The right quotient of a subset of a monoid defines an equivalence relation, called the right syntactic relation of S. It is given by
The relation is clearly of finite index (has a finite number of equivalence classes) if and only if the family of right quotients is finite; that is, if
is finite. In the case that M is the monoid of words over some alphabet, S is then a regular language, that is, a language that can be recognized by a finite-state automaton. This is discussed in greater detail in the article on syntactic monoids.
Right cancellation
The right cancellation of a character a from a string s is the removal of the first occurrence of the character a in the string s, starting from the right hand side. It is denoted as and is recursively defined as
The empty string is always cancellable:
Clearly, right cancellation and projection commute:
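A minimal Python sketch of right cancellation (the helper name right_cancel is illustrative):

    # Right cancellation removes the right-most occurrence of the character a from s, if any.
    def right_cancel(s, a):
        i = s.rfind(a)
        return s if i == -1 else s[:i] + s[i + 1:]

    print(right_cancel("banana", "a"))   # 'banan'
    print(right_cancel("banana", "x"))   # 'banana' - nothing to cancel
    print(right_cancel("", "a"))         # ''      - the empty string is always cancellable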
Prefixes
The set of prefixes of a string is the set of all prefixes of the string, with respect to a given language:
where .
The prefix closure of a language is
Example:
A language is called prefix closed if .
The prefix closure operator is idempotent:
The prefix relation is a binary relation such that if and only if . This relation is a particular example of a prefix order.
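A simplified Python sketch of prefixes and prefix closure (illustrative; the restriction of the prefixes to a given language in the definition above is omitted here for brevity):

    # All prefixes of a single string, and the prefix closure of a language.
    def prefixes(s):
        return {s[:i] for i in range(len(s) + 1)}

    def prefix_closure(L):
        out = set()
        for s in L:
            out |= prefixes(s)
        return out

    L = {"abc", "ab"}
    print(prefixes("abc"))                                         # {'', 'a', 'ab', 'abc'}
    print(prefix_closure(L))                                       # {'', 'a', 'ab', 'abc'}
    print(prefix_closure(prefix_closure(L)) == prefix_closure(L))  # True - the operator is idempotent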
See also
Comparison of programming languages (string functions)
Levi's lemma
String (computer science) — definition and implementation of more basic operations on strings
Notes
References
(See chapter 3.)
Formal languages
Relational algebra
Operations | String operations | Mathematics,Technology | 1,926 |
71,989,813 | https://en.wikipedia.org/wiki/Nitryl%20cyanide | Nitryl cyanide is an energetic chemical compound with the formula NCNO2. Nitryl cyanide is a possible precursor to the theoretical explosive 2,4,6-trinitro-1,3,5-triazine.
Synthesis
Nitryl cyanide was first synthesized in 2014. The reaction of nitronium tetrafluoroborate with tert-butyldimethylsilyl cyanide at −30 °C produces nitryl cyanide, with tert-butyldimethylsilyl fluoride and boron trifluoride as byproducts.
The conversion for this method is only 50%, and using an excess of tert-butyldimethylsilyl cyanide causes the yield to drop even further.
References
Nitro compounds
Cyanides
Explosives | Nitryl cyanide | Chemistry | 174 |
39,482,555 | https://en.wikipedia.org/wiki/Life%20Sciences%20Research%20Foundation | The Life Sciences Research Foundation (LSRF) is a postdoctoral fellowship program, with missions "to identify and fund exceptional young scientists at a critical juncture of their training in all areas of basic life sciences" and "to establish partnerships between those who support research in the life sciences and academic institutions for their mutual benefit".
Historical background
LSRF was established in 1983 by Donald D. Brown of the Carnegie Institution for Science Department of Embryology. It is one of four highly competitive postdoctoral awards in the life sciences; each year LSRF receives more than 1000 applications and awards 15-25 fellowships. The Board of Directors also includes Douglas Koshland and Solomon H. Snyder. The 56 sponsors include many top companies in the biotech and pharmaceutical industry.
In 2012, Brown won the Albert Lasker Special Achievement Award in Medical Science, in part for his initiation and 30-year dedication to LSRF.
Alumni
Notable alumni include:
Philip Beachy of Stanford
Ben Barres of Stanford
George M. Church of Harvard
Gerald F. Joyce of the Scripps Research Institute
Robert Sapolsky of Stanford.
References
External links
Fellowships
Life sciences industry | Life Sciences Research Foundation | Biology | 230 |
2,527,244 | https://en.wikipedia.org/wiki/Isotopes%20of%20darmstadtium | Darmstadtium (110Ds) is a synthetic element, and thus a standard atomic weight cannot be given. Like all synthetic elements, it has no stable isotopes. The first isotope to be synthesized was 269Ds in 1994. There are 11 known radioisotopes from 267Ds to 281Ds (with many gaps) and 2 or 3 known isomers. The longest-lived isotope is 281Ds with a half-life of 14 seconds. However, the unconfirmed 282Ds might have an even longer half-life of 67 seconds.
List of isotopes
(# marks values estimated from systematics rather than measured directly; for nuclear isomers the excitation energy above the ground state is listed in place of the isotopic mass.)

Nuclide | Z | N | Isotopic mass (Da) / excitation | Half-life | Decay mode(s) | Daughter isotope(s) | Spin and parity
267Ds | 110 | 157 | 267.14373(22)# | 10(8) μs | α | 263Hs | 3/2+#
269Ds | 110 | 159 | 269.14475(3) | 230(110) μs | α | 265Hs |
270Ds | 110 | 160 | 270.14459(4) | | α | 266Hs | 0+
270mDs | 110 | 160 | 1390(60) keV | | α (70%); IT (30%) | 266Hs; 270Ds | 10−#
271Ds | 110 | 161 | 271.14595(10)# | 144(53) ms | SF (75%); α (25%) | (various); 267Hs |
271mDs | 110 | 161 | 68(27) keV | 1.7(4) ms | α | 267Hs |
273Ds | 110 | 163 | 273.14846(15)# | 240(100) μs | α | 269Hs |
273mDs | 110 | 163 | 198(20) keV | 120 ms | α | 269Hs |
275Ds | 110 | 165 | 275.15209(37)# | | α | 271Hs | 3/2#
276Ds | 110 | 166 | 276.15302(59)# | | SF (57%); α (43%) | (various); 272Hs | 0+
277Ds | 110 | 167 | 277.15576(42)# | | α | 273Hs |
279Ds | 110 | 169 | 279.15998(65)# | | SF (87%); α (13%) | (various); 275Hs |
280Ds | 110 | 170 | 280.16138(80)# | | SF | (various) | 0+
281Ds | 110 | 171 | 281.16455(53)# | 14(3) s | SF (90%); α (10%) | (various); 277Hs |
281mDs | 110 | 171 | 80(240)# keV | 0.9(7) ms | α | 277Hs |
282Ds | 110 | 172 | 282.16617(32)# | 4.2(33) min | α | 278Hs | 0+
Isotopes and nuclear properties
Nucleosynthesis
Superheavy elements such as darmstadtium are produced by bombarding lighter elements in particle accelerators that induce fusion reactions. Whereas most of the isotopes of darmstadtium can be synthesized directly this way, some heavier ones have only been observed as decay products of elements with higher atomic numbers.
Depending on the energies involved, fusion reactions are separated into "hot" and "cold". In hot fusion reactions, very light, high-energy projectiles are accelerated toward very heavy targets (actinides), giving rise to compound nuclei at high excitation energy (~40–50 MeV) that may either fission or evaporate several (3 to 5) neutrons. In cold fusion reactions, the fused nuclei produced have a relatively low excitation energy (~10–20 MeV), which decreases the probability that these products will undergo fission. As the fused nuclei cool to the ground state, they require emission of only one or two neutrons, which allows for the generation of more neutron-rich products. The latter is a concept distinct from the claim that nuclear fusion can be achieved at room-temperature conditions (see cold fusion).
The table below contains various combinations of targets and projectiles which could be used to form compound nuclei with Z = 110.
Cold fusion
Before the first successful synthesis of darmstadtium in 1994 by the GSI team, scientists at GSI also tried to synthesize darmstadtium by bombarding lead-208 with nickel-64 in 1985. No darmstadtium atoms were identified. After an upgrade of their facilities, the team at GSI successfully detected 9 atoms of 271Ds in two runs of their discovery experiment in 1994. This reaction was successfully repeated in 2000 by GSI (4 atoms), in 2000 and 2004 by the Lawrence Berkeley National Laboratory (LBNL) (9 atoms in total) and in 2002 by RIKEN (14 atoms). The GSI team studied the analogous reaction with nickel-62 instead of nickel-64 in 1994 as part of their discovery experiment. Three atoms of 269Ds were detected. A fourth decay chain was measured but was subsequently retracted.
In addition to the official discovery reactions, in October–November 2000, the team at GSI also studied the analogous reaction using a lead-207 target in order to synthesize the new isotope 270Ds. They succeeded in synthesising eight atoms of 270Ds, relating to a ground state isomer, 270Ds, and a high-spin metastable state, 270mDs.
In 1986, a team at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, studied the reaction:
209Bi + 59Co → 267Ds + n
They were unable to detect any darmstadtium atoms. In 1995, the team at LBNL reported that they had succeeded in detecting a single atom of 267Ds using this reaction. However, several decays were not measured and further research is required to confirm this discovery.
Hot fusion
In the late 1980s, the GSI team attempted to synthesize element 110 by bombarding a target consisting of various uranium isotopes—233U, 235U, and 238U—with accelerated argon-40 ions. No atoms were detected; a limiting cross section of 21 pb was reported.
In September 1994, the team at Dubna detected a single atom of 273Ds by bombarding a plutonium-244 target with accelerated sulfur-34 ions.
Experiments were done in 2004 at the Flerov Laboratory of Nuclear Reactions (FLNR) in Dubna studying the fission characteristics of the compound nucleus 280Ds, produced in the reaction:
232Th + 48Ca → 280Ds* → fission
The result revealed how compound nuclei such as this fission predominantly by expelling magic and doubly magic nuclei such as 132Sn (Z = 50, N = 82). No darmstadtium atoms were obtained. A compound nucleus is a loose combination of nucleons that have not arranged themselves into nuclear shells yet. It has no internal structure and is held together only by the collision forces between the target and projectile nuclei. It is estimated that it requires around 10^−14 s for the nucleons to arrange themselves into nuclear shells, at which point the compound nucleus becomes a nuclide, and this number is used by IUPAC as the minimum half-life a claimed isotope must have in order to be recognized as being discovered.
The 232Th+48Ca reaction was attempted again at the FLNR in 2022; it was predicted that the 48Ca-induced reaction leading to element 110 would have a lower yield than those leading to lighter or heavier elements. Seven atoms of 276Ds were reported, with lifetimes ranging between and ; four decayed by spontaneous fission and three decayed via a two-alpha sequence to 272Hs and the spontaneously fissioning 268Sg. The maximum reported cross section for the production of 276Ds was about 0.7 pb and a sensitivity limit an order of magnitude lower was reached. This reported cross section is lower than that of all reactions using 48Ca as a projectile, with the exception of 249Cf + 48Ca, and it further supports the existence of magic numbers at Z = 108, N = 162 and Z = 114, N = 184. In 2023, the JINR team repeated this reaction at a higher beam energy and also found 275Ds. They intend to further study the reaction to search for 274Ds. The FLNR also successfully synthesised 273Ds in the 238U+40Ar reaction.
As decay product
Darmstadtium has been observed as a decay product of copernicium. Copernicium currently has seven known isotopes, five of which have been shown to alpha decay into darmstadtium, with mass numbers 273, 277, and 279–281. To date, all of these bar 273Ds have only been produced by decay of copernicium. Parent copernicium nuclei can be themselves decay products of flerovium or livermorium. Darmstadtium may also have been produced in the electron capture decay of roentgenium nuclei which are themselves daughters of nihonium and moscovium. For example, in 2004, the Dubna team (JINR) identified darmstadtium-281 as a product in the decay of livermorium via an alpha decay sequence:
293Lv → 289Fl + α
289Fl → 285Cn + α
285Cn → 281Ds + α
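The mass and atomic numbers along such a chain follow directly from the fact that each alpha decay removes two protons and two neutrons; a minimal Python sketch (illustrative only):

    # Each alpha decay lowers the atomic number by 2 and the mass number by 4.
    def alpha_chain(z, a, steps):
        return [(z - 2 * i, a - 4 * i) for i in range(steps + 1)]

    # Three alpha decays from livermorium-293 (Z = 116) end at darmstadtium-281 (Z = 110):
    print(alpha_chain(116, 293, 3))   # [(116, 293), (114, 289), (112, 285), (110, 281)]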
Retracted isotopes
280Ds
The first synthesis of element 114 resulted in two atoms assigned to 288Fl, decaying to 280Ds, which underwent spontaneous fission. The assignment was later changed to 289Fl and the darmstadtium isotope to 281Ds. Hence, 280Ds remained unknown until 2016, when it was populated by the hitherto unknown alpha decay of 284Cn (previously, that nucleus was only known to undergo spontaneous fission). The discovery of 280Ds in this decay chain was confirmed in 2021; it undergoes spontaneous fission with a half-life of 360 μs.
277Ds
In the claimed synthesis of 293Og in 1999, the isotope 277Ds was identified as decaying by 10.18 MeV alpha emission with a half-life of 3.0 ms. This claim was retracted in 2001. This isotope was finally created in 2010, and its decay data confirmed that the earlier reported data had been fabricated.
273mDs
In the synthesis of 277Cn in 1996 by GSI (see copernicium), one decay chain proceeded via 273Ds, which decayed by emission of a 9.73 MeV alpha particle with a lifetime of 170 ms. This would have been assigned to an isomeric level. This data could not be confirmed and thus this isotope is currently unknown or unconfirmed.
272Ds
In the first attempt to synthesize darmstadtium, a 10 ms SF activity was assigned to 272Ds in the reaction 232Th(44Ca,4n). Given current understanding regarding stability, this isotope has been retracted from the table of isotopes.
Nuclear isomerism
281Ds
The production of 281Ds by the decay of 289Fl or 293Lv has produced two very different decay modes. The most common and readily confirmed mode is spontaneous fission with a half-life of 11 s. A much rarer and as yet unconfirmed mode is alpha decay by emission of an alpha particle with energy 8.77 MeV with an observed half-life of around 3.7 min. This decay is associated with a unique decay pathway from the parent nuclides and must be assigned to an isomeric level. The half-life suggests that it must be assigned to an isomeric state but further research is required to confirm these reports. It was suggested in 2016 that this unknown activity might be due to 282Mt, the great-granddaughter of 290Fl via electron capture and two consecutive alpha decays.
271Ds
Decay data from the direct synthesis of 271Ds clearly indicates the presence of two nuclear isomers. The first emits alpha particles with energies 10.74 and 10.69 MeV and has a half-life of 1.63 ms. The other only emits alpha particles with an energy of 10.71 MeV and has a half-life of 69 ms. The first has been assigned to the ground state and the latter to an isomeric level. It has been suggested that the closeness of the alpha decay energies indicates that the isomeric level may decay primarily by delayed isomeric transition to the ground state, resulting in an identical measured alpha energy and a combined half-life for the two processes.
270Ds
The direct production of 270Ds has clearly identified two nuclear isomers. The ground state decays by alpha emission into the ground state of 266Hs by emitting an alpha particle with energy 11.03 MeV and has a half-life of 0.10 ms. The metastable state decays by alpha emission, emitting alpha particles with energies of 12.15, 11.15, and 10.95 MeV, and has a half-life of 6 ms. When the metastable state emits an alpha particle of energy 12.15 MeV, it decays into the ground state of 266Hs, indicating that it has 1.12 MeV of excess energy.
Chemical yields of isotopes
Cold fusion
The table below provides cross-sections and excitation energies for cold fusion reactions producing darmstadtium isotopes directly. Data in bold represent maxima derived from excitation function measurements. + represents an observed exit channel.
Fission of compound nuclei with Z = 110
Experiments have been performed in 2004 at the Flerov Laboratory of Nuclear Reactions in Dubna studying the fission characteristics of the compound nucleus 280Ds. The nuclear reaction used is 232Th+48Ca. The result revealed how nuclei such as this fission predominantly by expelling closed shell nuclei such as 132Sn (Z = 50, N = 82).
Theoretical calculations
Decay characteristics
Theoretical calculations using a quantum tunneling model reproduce the experimental alpha decay half-life data. They also predict that the isotope 294Ds would have an alpha decay half-life of the order of 311 years.
Evaporation residue cross sections
The table below contains various target–projectile combinations for which calculations have provided estimates for cross-section yields from various neutron evaporation channels. The channel with the highest expected yield is given.
DNS = Di-nuclear system; σ = cross section
References
Darmstadtium
Darmstadtium | Isotopes of darmstadtium | Chemistry | 3,561 |
20,455,598 | https://en.wikipedia.org/wiki/Polymer%20Battery%20Experiment | The Polymer Battery Experiment (PBEX) demonstrates the charging and discharging characteristics of polymer batteries in the space environment. PBEX validates use of lightweight, flexible battery technology to decrease cost and weight for future military and commercial space systems. PBEX was developed by Johns Hopkins University and is one of four On Orbit Mission Control (OOMC) packages on PicoSat 9:
Polymer Battery Experiment
Ionospheric Occultation Experiment
Coherent Electromagnetic Radio Tomography
Optical Precision Platform Experiment
Specifications
NSSDC ID: 2001-043B-03
Mission: PicoSAT 9
Sources
NASA: Picosat Experiment Package 2001-043B-03 Mainpage
See also
Batteries in space
References
External links
NASA: PicoSAT 9 Mainpage
NASA: Coherent Electromagnetic Radio Tomography Mainpage
NASA: Ionospheric Occultation Experiment Mainpage
Electric battery
Space science experiments
Chemistry experiments | Polymer Battery Experiment | Chemistry | 180 |
11,570,376 | https://en.wikipedia.org/wiki/Cerrena%20unicolor | Cerrena unicolor, commonly known as the mossy maze polypore, is a species of poroid fungus in the genus Cerrena (Family: Polyporaceae). This saprobic fungus causes white rot.
Taxonomy
The fungus was originally described by French botanist Jean Bulliard in 1785 as Boletus unicolor, when all pored fungi were typically assigned to genus Boletus. William Alphonso Murrill transferred it to Cerrena in 1903. The fungus has acquired a long and extensive synonymy as it has been re-described under many different names, and been transferred to many polypore genera.
Description
Cerrena unicolor has fruit bodies that are semicircular, wavy brackets up to 10 centimeters (4 in) wide, in groups of 2-20. Attached to the growing surface without a stalk (sessile), the upper surface is finely hairy, white to grayish brown in color, and zonate, i.e. marked with zones or concentric bands of color. The surface is often green from algal growth. The pore surface is whitish in young specimens, later turning gray in maturity. The arrangement of the pores resembles a maze of slots; the tubes may extend to 4 mm deep. With age, the pores descend into tooth-like structures. The spore print is white.
Spores are elliptical in shape, smooth, hyaline, inamyloid, and have dimensions of 5–7 by 2.5–4 μm. The basidia are 4-spored, 20-25 x 5-6 μm in size and have basal clamps. The cystidia are 40-60 x 4-5 μm and thin-walled.
The hyphal system is trimitic (containing generative, skeletal and binding hyphae). The generative context hyphae are 2-4 μm, thin-walled and nodose-septate, while the skeletal context hyphae are wider, thicker and have no septa. The binding and tramal hyphae are 2-4 μm wide, have thick walls but no septa, and are quite branched.
Similar species
Cerrena unicolor can be easily distinguished from most other polypores by its hairy upper surface and maze-like pores that slowly descend into tooth-like structures. Confusion can arise with other smaller polypores such as the genera Trichaptum or Trametes.
Trichaptum species that are young can easily be distinguished by their purple tinge. Older specimens, however, need careful examination of their pore shapes, which will either be gill-like or angular, versus the maze-like pores of C. unicolor. Trichaptum perrottetii can also have maze-like pores in old age and can instead be distinguished by its flattened forked hairs on the upper surface. In North America T. perrottetii is only known from Florida and Georgia.
Some Trametes species can be similar, with hairy caps and labyrinth-like pores, but they will never have pores descending into tooth-like structures. Trametes species are also more flexible in old age, while C. unicolor will become brittle and break easily. Two Trametes species with maze-like pores, Trametes gibbosa and Trametes aesculi, have lumpy/warted caps.
Trametopsis cervina will have a cinnamon colored pore surface and a less conspicuously zonate cap.
Daedalea quercina has thick walled pores and the upper surface is velvety at most, rather than hairy. Daedaleopsis confragosa has pores that bruise brown and also does not have as hairy of a cap. Both of these species are overall more brown in color, but can fade in age.
Ecology
Cerrena unicolor causes canker rot and decay in paper birch (Betula papyrifera) and sugar maple (Acer saccharum). It causes white rot on deciduous hardwoods, and rarely on conifers. It is found year-round.
When a female wasp of the genus Tremex bores into wood near these fungi, spores will become trapped in the wasp's ovipositor. The spores are carried with the wasp's eggs and will eventually germinate where the eggs are placed. As the spores germinate and form a mycelium, the wasp's eggs will hatch, and the newly-born larvae eat the mycelium. The wasp species Tremex columba requires C. unicolor to grow, as without the interaction, the larvae will die. However, the parasitic wasp genus Megarhyssa will lay its own eggs within the larvae of the Tremex wasp. The larvae of Megarhyssa, when hatched, proceed to eat the larvae of Tremex, helping control the population of Tremex.
Distribution
The fungus has a wide distribution, and is found in Asia, Europe, South America, and North America.
Applications
Cerrena unicolor has been identified as a source of the enzyme laccase. This enzyme has potential applications in a wide variety of bioprocesses. C. unicolor is known to produce laccase in culture at more favorable conditions and in higher yield than other wood rotting fungi, and research is focussing on ways to produce laccase cost-effectively on a large scale.
It is inedible to humans.
References
Fungi described in 1785
Fungi of North America
Fungal plant pathogens and diseases
Inedible fungi
Polyporaceae
Fungus species | Cerrena unicolor | Biology | 1,152 |
4,717,751 | https://en.wikipedia.org/wiki/History%20of%20the%20Dylan%20programming%20language | Dylan programming language history first introduces the history with a continuous text. The second section gives a timeline overview of the history and present several milestones and watersheds. The third section presents quotations related to the history of the Dylan programming language.
Introduction to the history
Dylan was originally developed by Apple Cambridge, then a part of the Apple Advanced Technology Group (ATG). Its initial goal was to produce a new systems and application programming language for the Apple Newton PDA, but it soon became clear that this would take too much time. Walter Smith developed NewtonScript for scripting and application development, and systems programming was done in C. Development of Dylan continued for the Macintosh. The group produced an early Technology Release of its Apple Dylan product, but was dismantled due to internal restructuring before it could finish any truly usable products.
According to Apple Confidential by Owen W. Linzmayer, the original code name for the Dylan project was Ralph, for Ralph Ellison, author of the novel Invisible Man, to reflect its status as a secret research project.
The intended killer application for Dylan was the Apple Newton PDA, but the initial implementation came too late for it and missed its performance and size objectives. Dylan was therefore retargeted toward a general computer programming audience, and to compete in this market it was decided to switch to infix notation.
Andrew Shalit (along with David A. Moon and Orca Starbuck) wrote the Dylan Reference Manual, which served as a basis for work at Harlequin and Carnegie Mellon University. When Apple Cambridge was closed, several members went to Harlequin, which produced a working compiler and development environment for Microsoft Windows. When Harlequin was bought and split up, some of the developers founded Functional Objects. In 2003, the firm contributed its repository to the Dylan open source community. This repository was the foundation of the free and open-source software Dylan implementation Open Dylan.
By 2003, the Dylan community had already proven its commitment to Dylan. In summer 1998, the community had taken over the code of the Carnegie Mellon University (CMU) Dylan implementation developed under the Gwydion project, and founded the open-source project Gwydion Dylan. At that time, CMU had already stopped working on its Dylan implementation because Apple, in its financial crisis, could no longer sponsor the project. CMU thus shifted its research toward the mainstream and toward Java.
Today, Gwydion Dylan and Open Dylan are the only working Dylan compilers. While the former is still a Dylan-to-C compiler, Open Dylan produces native code for Intel processors. Open Dylan was designed to account for the Architecture Neutral Distribution Format (ANDF).
Timeline overview
History by (mostly) quotations
The roots of the programming language Dylan
Dylan was created by the same group at Apple that was responsible for Macintosh Common Lisp. The first implementation had a Lisp-like syntax.
Dylan began with the acquisition of Coral Software, which became ATG East. Coral was marketing Macintosh Common Lisp, and Apple asked them to continue to support MCL and simultaneously develop a new dynamic language with all the programmer power and convenience of Lisp and Smalltalk but with the performance required for production applications
Quoted from MacTech Vol 7 No. 1
In the late 1980s, Apple’s Advanced Technology Group (ATG) saddled themselves with the task of creating a new language, one that would combine the best qualities of dynamic languages like Smalltalk and Lisp, with those of static languages like C++. Recognizing that a language definition alone was insufficient to meet the challenges of developing the next ever-more complex generation of software, ATG further committed the Dylan team (now a part of the Developer Products Group) to developing an attendant development environment that would enable the rapid prototyping and construction of real-world applications
Quoted from MacTech Vol 11 No. 8
The acknowledgments from the First Dylan Manual (1992) states:
Designing Dylan has been a work of many hands.
The primary contributors to the language design were Glenn S. Burke, Robert Cassels, John Hotchkiss, Jeremy A. Jones, David A. Moon, Jeffrey Piazza, Andrew Shalit, Oliver Steele, and Gail Zacharias.
Additional design work and oodles of helpful comments were provided by Jerome T. Coonen, James Grandy, Ike Nassi, Walter R. Smith, Steve Strassmann, and Larry Tesler.
Many more people provided invaluable feedback during the design. Among these were Peter Alley, Kim Barrett, Alan Bawden, Ernie Beernink, Rasha Bozinovic, Steve Capps, Mikel Evins, Gregg Foster, Jed Harris, Alice K. Hartley, Alan Kay, Larry Kenyon, Matthew MacLaurin, John Meier, Richard Mlynarik, Peter Potrebic, David Singer, David C. Smith, Bill St. Clair, Andy Stadler, Joshua Susser, Michael Tibbott, Tom Vrhel, Bob Welland, and Derek White.
Moral and logistical support were provided by Donna Auguste, Chrissy Boggs, James Joaquin, Rick LeFaivre, Becky Mulhearn, David Nagel, Mark Preece, Mary Reagan, Shane Robison, and Susan M. Whittemore.
The Dylan project was directed by Ike Nassi.
This manual was written by Andrew Shalit with contributions from Jeffrey Piazza and David Moon.
The manual was designed by Scott Kim and Steve Strassmann. The typefaces are the Lucida family and Letter Gothic. The cover was designed by Scott Kim.
The Dylan project was funded entirely by the Advanced Technology Group of Apple Computer.
The two non-Apple collaborators were CMU Gwydion and Harlequin.
"I think our general impression was that our influence at CMU was limited to being able to participate in meetings and email discussions where we could try to convince the Apple people to see things our way. There was actually a great deal of consensus about many issues, mainly because the designers were primarily from the Common Lisp community, and saw similar strengths and failings of Common Lisp."
Rob MacLachlan, former member of CMU's Dylan project Gwydion.
CMU still provide an information page about Gwydion.
The roots of changing the syntax from the Lisp way to an infix one
The developers at the Cambridge lab and CMU thought Dylan would be better received by the C/C++ community if its syntax were changed to look more like those languages.
Rob MacLachlan, at Carnegie Mellon during the Dylan project, from comp.lang.dylan:
"In a way, the most remarkable realignment was the decision to ditch the Lisp syntax. This happened after Gwydion was participating in the design effort. We advocated the infix syntax and ditching the lisp/prefix syntax. As I recall, we didn't really expect anyone to listen, but that was exactly what happened. In that case, we may have shifted the balance of power internal to Apple on this issue."
Bruce Hoult replied:
"Which interestingly enough is the reverse of Lisp itself, where John McCarthy originally intended S- expressions to be just a temporary form until the real syntax was developed/implemented."
Oliver Steele in a ll1-discuss:
"Mike Kahl, who designed the infix syntax (and implemented the parser and indenter for it), was trying to make it look like Pascal. At the time (1991?), that probably looked like a better bet than it does today in the world of languages that have mostly converged on the use of punctuation marks as punctuation.
I had actually implemented a more C-like (that is, braces) syntax for Dylan, but dropped it when we hired Mike in order to work on the IDE."
End of Dylan as commercial product
Project death at Apple in 1995
Raffael Cavallaro once provided some insights:
The Apple Dylan project died in early '95 (if memory serves - I was a seed site for Apple Dylan). The Dylan team were under a lot of pressure to get a working release out the door when two things sort of took them by surprise:
1. Apple started to become less profitable because of the Wintel juggernaut. With Apple no longer so profitable, the Apple suits started to look for research projects to axe. Those that didn't seem likely to ship a profitable product in the near future were at the top of the list. Apple Dylan at the time was still not ready for release - it compiled pretty slowly... especially compared to CodeWarrior C/C++, since it hadn't yet been optimized. Apple managers were talking about rewriting it in C++ to make it run faster (not realizing that Common Lisp can be optimized to run as quickly as C/C++).
2. Apple was making the transition to PowerPC, and Apple Dylan still only ran on 68k machines, and only compiled to 68k binaries. So, it was looking like it would be at least another year, maybe two, before there was a usable PowerPC product, so the project was cancelled.
Apple execs killed the Dylan project... because nobody could show them a release-quality product when they started swinging the meat axes.
Gabor Greif:
Spindler, CEO of Apple at that time, stopped Dylan because the engineers working on it were more expensive than Apple could afford back then. Till the end of '95 the core team got a chance to wrap up all they had and package it as a product which came out as the Apple Dylan Technology Release. It featured PPC code generation but did not itself run on PowerPC natively. The development bed was all Common Lisp and there was no PPC MCL (Macintosh Common Lisp) at that time. Later Digitool was paid to port the environment to PPC using their development version of MCL for PPC they were working on. Apple Dylan TR PPC was quietly released 1996. It still runs fine on classic MacOS, dunno about X
The team sometimes hinted that not bootstrapping the environment in Dylan was a mistake. This would have eased the PPC adoption considerably. But in the light of limited resources and a very strong CL background of the members it was understandable.
Oliver Steele:
I'm convinced that Apple Dylan sank because the development team tried to cram all our favorite features into it (mine had to do with the IDE).
From Mike Lockwood, a former member of the Apple Cambridge Labs (originally published on apple.computerhistory.org):
I started my career at Apple in the developer tools group in Cupertino. But after a couple of years I decided to move east, and transferred to the Cambridge office to work on the Dylan project. In April 1995, we were notified that the project would be cancelled and we would all be laid off. But we were not to be laid off immediately. Apple wanted us to stay for 6 months so Dylan could be released as an experimental "technology release". This was apparently done to avoid embarrassment at WWDC the following month. Dylan was announced and hyped heavily at the previous WWDC, and it would look bad if it disappeared the month before the WWDC the following year.
We were offered an incentive bonus to stay until October. It was strange to be given 6 months notice. We all had plenty of time to find new jobs, but it was not much fun to go down with the ship. But one interesting side effect was we had plenty of time to prepare for the layoff.
First thing (after all) was to print T-shirts. We printed T-shirts (at Apple's expense) that said "The power to cancel your very best" on the front. On the back was a screen shot of the Dylan IDE with all of our names listed in a window. In front of that was a dialog box that said "Are you sure you want to cancel the entire Cambridge lab?", with the mouse pointer hovering over the "Cancel" button.
By the day of the layoffs, we were ready. We decorated the entire office with gaudy halloween decorations, including a raven with a motion detector that would caw and flap its wings whenever someone walked by. Someone found an advertisement for the "Beverly Hills 90210" with a picture of Luke Perry, whose character was named Dylan. The ad said "Dylan - one step closer to revenge, or one step closer to death?" The "90210" was changed to the zip code for our office in Cambridge, MA, and were posted in the hallways in the office.
When the HR people arrived from Cupertino, we politely invited them into the conference room and served them apple turnovers. I was very proud that one of my coworkers had the presence of mind to think of that! We were all wearing our layoff T-shirts, except David Moon had his "the journey begins" T-shirt on, with masking tape covering the word "begins" and "ends" written on top of it instead. They called us by name one at a time to receive a folder with all of our layoff paperwork. When the first name was called, we instinctively applauded - it had the feeling of a graduation ceremony.
I guess that is the kind of layoff that could only happen at Apple...
A picture of the shirt can be seen here.
The Death at Harlequin and Functional Objects
Gary M. Palter about Functional Objects and the history of the Dylan project at Harlequin:
In September 1999, Harlequin canceled its Dylan project and laid off the project staff, myself included. In an unusual move, Harlequin transferred the intellectual property rights for its Dylan project to said group. The group decided to continue its efforts to both develop and market its Dylan implementation. Three members of the group, myself included, agreed to commit to a one-year full-time effort to further product development and to raise funding to establish a viable business. We founded Functional Objects, Inc. to pursue these efforts. However, our fund raising efforts were unsuccessful. Functional Objects has been effectively dormant since late 2000. (Quoted from Palter's Resume)
Open Sourcing of CMU Gwydion Project
CMU's Gwydion Project became open source in 1998. Eric Kidd, in a message to the Gwydion Hackers, described the process:
Andreas Bogk and I rescued the source tarball from oblivion. We fought bit rot, made a web site, and started making releases. Other people showed up, and started contributing code. We got in touch with the Gwydion Group at CMU, and they wished us well. The Gwydion Group has given up on Dylan. To the best of my knowledge, they've turned down multiple invitations to participate (or even just subscribe to the mailing lists).
The Gwydion website is http://www.gwydiondylan.org.
Open Sourcing of Harlequin Dylan / Functional Objects Project
Before Functional Objects, formerly Harlequin Dylan, ceased operating in January 2006, it open-sourced its repository in 2004, transferring it to the Gwydion Dylan maintainers. The repository included white papers, design papers, documentation once written for the commercial product, and the code for:
The Dylan Flow Machine (the Harlequin Dylan compiler),
The Interactive Development Environment, which provides features like
Attaching to running applications
High-level code browsing
The Dylan User Interface Management code (a high-level language for GUI programming that is a Dylan implementation and further development of CLIM)
A CORBA implementation
Access to Microsoft component technology: Component Object Model (COM), Object Linking and Embedding (OLE).
A LispWorks-based Dylan emulator, which was used to prototype the Dylan language implementation in a platform-independent way.
The project is now known as Open Dylan and its website is https://opendylan.org.
References
External links
The Dylan Reference Manual
Dylan - An object-oriented dynamic language (An early description of Dylan with the Lisp/Scheme syntax.)
History
Dylan programming language | History of the Dylan programming language | Technology | 3,339 |
645,139 | https://en.wikipedia.org/wiki/Data%20dictionary | A data dictionary, or metadata repository, as defined in the IBM Dictionary of Computing, is a "centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format". Oracle defines it as a collection of tables with metadata. The term can have one of several closely related meanings pertaining to databases and database management systems (DBMS):
A document describing a database or collection of databases
An integral component of a DBMS that is required to determine its structure
A piece of middleware that extends or supplants the native data dictionary of a DBMS
Documentation
The terms data dictionary and data repository indicate a more general software utility than a catalogue. A catalogue is closely coupled with the DBMS software. It provides the information stored in it to the user and the DBA, but it is mainly accessed by the various software modules of the DBMS itself, such as DDL and DML compilers, the query optimiser, the transaction processor, report generators, and the constraint enforcer. On the other hand, a data dictionary is a data structure that stores metadata, i.e., (structured) data about data. The software package for a stand-alone data dictionary or data repository may interact with the software modules of the DBMS, but it is mainly used by the designers, users and administrators of a computer system for information resource management. These systems maintain information on system hardware and software configuration, documentation, applications and users, as well as other information relevant to system administration.
If a data dictionary system is used only by the designers, users, and administrators and not by the DBMS software, it is called a passive data dictionary. Otherwise, it is called an active data dictionary. A passive data dictionary is updated manually and independently of any changes to the DBMS (database) structure. With an active data dictionary, the dictionary is updated first and the corresponding changes occur in the DBMS automatically.
Database users and application developers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases. This typically includes the names and descriptions of various tables (records or entities) and their contents (fields) plus additional details, like the type and length of each data element. Another important piece of information that a data dictionary can provide is the relationships between tables. These are sometimes expressed in entity-relationship diagrams (ERDs) or, if set descriptors are used, by identifying the sets in which database tables participate.
In an active data dictionary constraints may be placed upon the underlying data. For instance, a range may be imposed on the value of numeric data in a data element (field), or a record in a table may be forced to participate in a set relationship with another record-type. Additionally, a distributed DBMS may have certain location specifics described within its active data dictionary (e.g. where tables are physically located).
The data dictionary consists of record types (tables) created in the database by system-generated command files, tailored for each supported back-end DBMS. Oracle has a list of specific views for the "sys" user, which allows users to look up exactly the information that is needed. Command files contain SQL statements for CREATE TABLE, CREATE UNIQUE INDEX, ALTER TABLE (for referential integrity), and so on, using the specific syntax required by that type of database.
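To make the idea concrete, the following is a minimal, hedged sketch rather than an excerpt from any particular product: it uses SQLite (chosen only because it ships with Python) to run a command-file-style DDL script and then read back the engine's own catalog, which plays the role of the low-level data dictionary here. The table names, columns, and index are invented for illustration; other systems expose analogous catalog views instead (for example, Oracle's ALL_TAB_COLUMNS).

```python
# Sketch only: a "command file" of SQL DDL creates the record types, and the
# DBMS's native data dictionary is then queried for the resulting metadata.
import sqlite3

DDL_COMMAND_FILE = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT
);
CREATE UNIQUE INDEX idx_customer_email ON customer (email);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer (customer_id),
    placed_on   TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL_COMMAND_FILE)

# sqlite_master lists schema objects; PRAGMA table_info() lists each column
# with its declared type, NOT NULL flag, default value, and primary-key flag.
for (table_name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
):
    print(table_name)
    for cid, col, col_type, notnull, default, pk in conn.execute(
        f"PRAGMA table_info({table_name})"
    ):
        print(f"  {col:<12} {col_type:<8} not_null={bool(notnull)} pk={bool(pk)}")

conn.close()
```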
There is no universal standard as to the level of detail in such a document.
Middleware
In the construction of database applications, it can be useful to introduce an additional layer of data dictionary software, i.e. middleware, which communicates with the underlying DBMS data dictionary. Such a "high-level" data dictionary may offer additional features and a degree of flexibility that goes beyond the limitations of the native "low-level" data dictionary, whose primary purpose is to support the basic functions of the DBMS, not the requirements of a typical application. For example, a high-level data dictionary can provide alternative entity-relationship models tailored to suit different applications that share a common database. Extensions to the data dictionary also can assist in query optimization against distributed databases. Additionally, DBA functions are often automated using restructuring tools that are tightly coupled to an active data dictionary.
Software frameworks aimed at rapid application development sometimes include high-level data dictionary facilities, which can substantially reduce the amount of programming required to build menus, forms, reports, and other components of a database application, including the database itself. For example, PHPLens includes a PHP class library to automate the creation of tables, indexes, and foreign key constraints portably for multiple databases. Another PHP-based data dictionary, part of the RADICORE toolkit, automatically generates program objects, scripts, and SQL code for menus and forms with data validation and complex joins. For the ASP.NET environment, Base One's data dictionary provides cross-DBMS facilities for automated database creation, data validation, performance enhancement (caching and index utilization), application security, and extended data types. Visual DataFlex provides the ability to use DataDictionaries as class files to form a middle layer between the user interface and the underlying database. The intent is to create standardized rules to maintain data integrity and enforce business rules throughout one or more related applications.
Some industries use generalized data dictionaries as technical standards to ensure interoperability between systems. The real estate industry, for example, abides by RESO's Data Dictionary, with which the National Association of REALTORS mandates its MLSs comply through its policy handbook. This intermediate mapping layer for MLSs' native databases is supported by software companies which provide API services to MLS organizations.
Platform-specific examples
On IBM i, developers use a data description specification (DDS) to describe data attributes in file descriptions that are external to the application program that processes the data. The sys.ts$ table in Oracle stores information about every table in the database; it is part of the data dictionary that is created when the Oracle database is created. Developers may also use DDS in free and open-source software (FOSS) contexts for structured and transactional queries in open environments.
Typical attributes
Here is a non-exhaustive list of typical items found in a data dictionary for columns or fields; a minimal illustrative sketch of one such entry follows the list:
Entity or form name or their ID (EntityID or FormID). The group this field belongs to.
Field name, such as RDBMS field name
Displayed field title. May default to field name if blank.
Field type (string, integer, date, etc.)
Measures such as min and max values, display width, or number of decimal places. Different field types may interpret this differently. An alternative is to have different attributes depending on field type.
Field display order or tab order
Coordinates on screen (if a positional or grid-based UI)
Default value
Prompt type, such as drop-down list, combo-box, check-boxes, range, etc.
Is-required (Boolean) - If 'true', the value cannot be blank, null, or only white-space
Is-read-only (Boolean)
Reference table name, if a foreign key. Can be used for validation or selection lists.
Various event handlers or references to them. Example: "on-click", "on-validate", etc. See event-driven programming.
Format code, such as a regular expression or COBOL-style "PIC" statements
Description or synopsis
Database index characteristics or specification
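As a rough, hypothetical illustration of how the attributes above might be recorded, the sketch below models a single field-level entry as a small Python data structure and derives a display label from it. The attribute names (entity, field_name, is_required, and so on) and the example customer email field are invented for this example and do not follow any particular product or standard.

```python
# Illustrative only: one possible shape for a field-level data dictionary
# entry covering a subset of the attributes listed above.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class FieldEntry:
    entity: str                             # entity or form the field belongs to
    field_name: str                         # RDBMS column name
    title: str = ""                         # displayed title; defaults to field_name
    field_type: str = "string"              # string, integer, date, ...
    max_length: Optional[int] = None        # interpreted according to field_type
    default: Any = None
    prompt_type: str = "text"               # drop-down list, check-box, ...
    is_required: bool = False
    is_read_only: bool = False
    reference_table: Optional[str] = None   # set if the field is a foreign key
    description: str = ""

    def display_title(self) -> str:
        # "Displayed field title. May default to field name if blank."
        return self.title or self.field_name.replace("_", " ").title()

# Example entry, and a trivial use of it to build a form label.
email = FieldEntry(
    entity="customer",
    field_name="email",
    field_type="string",
    max_length=254,
    is_required=True,
    description="Primary contact e-mail address",
)
label = email.display_title() + (" *" if email.is_required else "")
print(label)  # -> "Email *"
```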
See also
Data hierarchy
Data modeling
Database catalog
Database schema
ISO/IEC 11179
Metadata registry
Semantic spectrum
Vocabulary OneSource
Metadata repository
References
External links
Yourdon, Structured Analysis Wiki, Data Dictionaries (Web archive)
Octopai, Data Dictionary vs. Business Glossary
Data management
Data modeling
Knowledge representation
Metadata | Data dictionary | Technology,Engineering | 1,609 |
13,922,392 | https://en.wikipedia.org/wiki/International%20Phytogeographic%20Excursion | The International Phytogeographic Excursions was a series of international meetings in plant geography that significantly contributed to exchange of scientific ideas across national and linguistic barriers and also to the rise of Anglo-American plant ecology. The initiative was taken by the British botanist Arthur Tansley at the International Geographic Congress in Geneva in 1908. Tansley and another early key figure, Henry C. Cowles, were both much-inspired by the new 'ecological plant geography' introduced by Eugenius Warming and its quest for answering why-questions about plant distribution, as opposed to the traditional, merely descriptive 'floristic plant geography'.
The First International Phytogeographic Excursion was held in the British Isles in 1911. It was organized by Arthur Tansley and went through parts of England, Scotland and Ireland.
The participants were:
Eduard Rübel, Switzerland
Carl Schroeter, Switzerland
Oscar Drude, Germany
Paul Graebner, Germany
C.A.M. Lindman, Sweden
G. Claridge Druce, England
Jean Massart, Belgium
C.H. Ostenfeld, Denmark
Frederic Clements, U.S.A.
Henry C. Cowles, U.S.A., who gave a brief report in Science in 1913.
The Second International Phytogeographic Excursion was a travel across North America from July to September 1913. It was hosted by a number of American ecologists led by Henry C. Cowles. The participants were:
Henry C. Cowles, U.S.A.
Frederic Clements, U.S.A.
Edith S. Clements, U.S.A.
Alfred Dachnowsky, U.S.A.
George Fuller, U.S.A.
George E. Nichols, U.S.A.
Willis Linn Jepson, U.S.A.
Heinrich Brockmann-Jerosch, Switzerland
Marie Charlotte Brockmann-Jerosch, Switzerland
Ove Paulsen, Denmark
Carl Skottsberg, Sweden
Eduard Rübel, Switzerland
Karl von Tubeuf, Germany
Carl Schroeter, Switzerland
Theodoor J. Stomps, Netherlands
Arthur Tansley, England
Adolf Engler, Germany
Cecil Crampton, Scotland.
The Third International Phytogeographic Excursion was proposed in 1915, but postponed due to the First World War. It was finally carried through in 1923 in neutral Switzerland, and as noted by John William Harshberger in his report in Ecology, the participants from Germany, France and other nations recently at war coexisted peacefully. The organizers were the Swiss botanists Rübel, Schroeter and H. Brockmann-Jerosch.
The participants were, among others:
Gustaf Einar Du Rietz, Sweden
John William Harshberger, U.S.A.
Jens Holmboe, Norway
Huguet del Villar, Spain
Kaarlo Linkola, Finland
Hugo Osvald, Sweden
Ove Paulsen, Denmark
Robert Lloyd Praeger, Ireland
Constantin von Regel, Lithuania
Edward Salisbury, England
Carl Skottsberg, Sweden
Władysław Szafer, Poland
Heinrich Brockmann-Jerosch, Switzerland
Marie Charlotte Brockmann-Jerosch, Switzerland
Eduard Rübel, Switzerland
Carl Schroeter, Switzerland
Josias Braun-Blanquet, Switzerland
Paul Jaccard, Switzerland
The Fourth International Phytogeographic Excursion was held in Scandinavia in 1925 (July 2 to August 24). It took the form of a trip through Sweden and Norway, starting in Lund in southernmost Sweden, passing through Stockholm, Uppsala and Abisko, then continuing down through Norway and ending in Oslo. It was organized by G. Einar Du Rietz from Uppsala University.
By this time, Warming's 'ecological plant geography' had developed into plant ecology, and the excursion programme returned to 'floristic plant geography'. Through the 1930s and after the Second World War, the International Phytogeographic Excursions continued at regular intervals, but now outside the mainstream of ecology. At the same time, scientific exchange between plant ecologists had found other means.
The Fifth International Phytogeographic Excursion was held in Czechoslovakia in 1928. It was organized by Karel Domin.
The Sixth International Phytogeographic Excursion was held in Romania in 1931.
The Seventh International Phytogeographic Excursion was held in Italy in 1934.
The Eighth International Phytogeographic Excursion went to Morocco and western Algeria in 1936.
1949 Ireland (9th excursion)
1953 Spain (10th excursion)
1956 Eastern Alps (11th excursion)
1958 Czechoslovakia (12th excursion)
1961 Finland and North Norway
1966 French Alps, Switzerland, Eastern Pyrenees
1970 Western Alps (14th excursion)
1971 mainland Greece and Crete (15th excursion)
1978 U.S.A. (16th excursion)
1983 Argentina (17th excursion)
1984 Japan
1989 Poland (19th excursion)
References
Botany
Ecology organizations
History of biology | International Phytogeographic Excursion | Biology | 977 |
254,498 | https://en.wikipedia.org/wiki/Harry%20Potter%20and%20the%20Prisoner%20of%20Azkaban | Harry Potter and the Prisoner of Azkaban is a fantasy novel written by British author J. K. Rowling. It is the third instalment in the Harry Potter series. The novel follows Harry Potter, a young wizard, in his third year at Hogwarts School of Witchcraft and Wizardry. Along with friends Ron Weasley and Hermione Granger, Harry investigates Sirius Black, an escaped prisoner from Azkaban, the wizard prison, believed to be one of Lord Voldemort's old allies.
The book was published in the United Kingdom on 8 July 1999 by Bloomsbury and in the United States on 8 September 1999 by Scholastic, Inc. Rowling found the book easy to write, finishing it just a year after she began writing it. The book sold 68,000 copies in just three days after its release in the United Kingdom and since has sold over three million in the country. The book won the 1999 Whitbread Children's Book Award, the Bram Stoker Award, and the 2000 Locus Award for Best Fantasy Novel and was short-listed for other awards, including the Hugo.
The film adaptation of the novel was released in 2004, grossing more than $796 million and earning critical acclaim. Video games loosely based on Harry Potter and the Prisoner of Azkaban were also released for several platforms, and most obtained favourable reviews.
Plot
During the summer, Harry accidentally performs magic at the home of his Aunt Petunia and Uncle Vernon by blowing up his Aunt Marge, causing her to float into the sky. After this incident, he leaves their house and spends the summer in London. While staying at the Leaky Cauldron inn, Harry is visited by Minister for Magic Cornelius Fudge, who warns him about Sirius Black, a mass-murderer who escaped from the wizard prison Azkaban.
With Black at large, Dementors have been stationed at Hogwarts as a security measure. During a Care of Magical Creatures lesson with Hagrid, Draco Malfoy is injured after provoking a hippogriff named Buckbeak. Draco's father, Lucius Malfoy, gets Hagrid put on trial for owning a dangerous creature. Harry repeatedly faints in the presence of the Dementors, but eventually is taught by Professor Lupin how to repel them using the Patronus Charm. When Harry is unable to participate in weekend trips to Hogsmeade Village, Fred and George give him a magical map that shows him how to get there using a secret passage. At the Three Broomsticks pub, Harry overhears that Black is his godfather, that he betrayed Harry's parents to Voldemort, and that he now seeks to kill Harry as well.
When Ron's pet rat Scabbers disappears, he blames Hermione and her cat Crookshanks. Ron and Hermione stop talking to each other, although Hermione is distraught when Ron survives an attack by Black inside the Gryffindor dormitory. After the attack, Black cannot be found. After Harry, Ron and Hermione learn that Buckbeak will be executed, they console Hagrid, and Ron and Hermione resume their friendship. Ron also finds Scabbers, who was hiding in Hagrid's hut. As the friends make their way back to the castle, Ron is attacked by a large black dog, which drags him through the passageway leading to Hogsmeade. Harry and Hermione give chase, and find themselves in the Shrieking Shack, where the dog is revealed to be Black in his Animagus form. Lupin arrives, and Black states that he intends to kill Scabbers, not Harry. He explains that Scabbers is Peter Pettigrew, who betrayed Harry's parents to Voldemort and framed Black for mass murder. Black and Lupin compel Pettigrew to transform into human form, then haul him back to Hogwarts.
On the way to the castle, the full moon causes Lupin to transform into a werewolf. Pettigrew escapes and is pursued by Black, Harry and Hermione, who encounter Dementors and lose consciousness. They awaken in the castle, where Black is now being held captive. Harry and Hermione proclaim his innocence to Dumbledore, who suggests using Hermione's Time Turner. Harry and Hermione travel back in time and save both Black and Buckbeak, who fly away together. Snape blames Lupin for Black's disappearance and makes his werewolf-identity public, which forces Lupin to resign. On the train back to London, Harry receives a letter from Black, expressing his gratitude to Harry for saving his life.
Publication and reception
Pre-release history
Harry Potter and the Prisoner of Azkaban is the third book in the Harry Potter series. The first, Harry Potter and the Philosopher's Stone, was published by Bloomsbury on 26 June 1997 and the second, Harry Potter and the Chamber of Secrets, was published on 2 July 1998. Rowling started to write the Prisoner of Azkaban the day after she finished The Chamber of Secrets. Rowling said in 2004 that Prisoner of Azkaban was "the best writing experience I ever had...I was in a very comfortable place writing (number) three. Immediate financial worries were over, and press attention wasn't yet by any means excessive".
Critical reception
Upon release, Harry Potter and the Prisoner of Azkaban received mostly positive reviews. The Daily Telegraph surveyed reviews from several British publications, rating the novel on a scale of "Love It", "Pretty Good", "Ok", and "Rubbish": the Daily Telegraph, Guardian, Times, Independent, Sunday Telegraph, and Sunday Times reviews fell under "Love It" and the Observer review under "Pretty Good". The Guardian reported an average rating of 9 out of 10 for the book based on reviews from multiple British newspapers. On BookBrowse, which collects reviews from the American press, the book received a "Critics' Consensus" rating, with the individual media reviews scored on a scale out of five: the Kirkus Reviews and School Library Journal reviews were listed under five and the Publishers Weekly review under four.
Gregory Maguire wrote a review in The New York Times for Prisoner of Azkaban: in it he said, "So far, in terms of plot, the books do nothing new, but they do it brilliantly...so far, so good." In a newspaper review in The New York Times, it was said that "'The Prisoner of Azkaban' may be the best 'Harry Potter' book yet". A reviewer for KidsReads said, "This crisply-paced fantasy will leave you hungry for the four additional Harry books that J.K. Rowling is working on. Harry's third year is a charm. Don't miss it." Kirkus Reviews did not give a starred review but said, "a properly pulse-pounding climax...The main characters and the continuing story both come along so smartly...that the book seems shorter than its page count: have readers clear their calendars if they are fans, or get out of the way if they are not." Martha V. Parravano also gave a positive review for The Horn Book Magazine, calling it "quite a good book." In addition, a Publishers Weekly review said, "Rowling's wit never flags, whether constructing the workings of the wizard world...or tossing off quick jokes...The Potter spell is holding strong".
However, Anthony Holden, who was one of the judges against Prisoner of Azkaban for the Whitbread Award, was negative about the book, saying that the characters are "all black-and-white", and the "story-lines are predictable, the suspense minimal, the sentimentality cloying every page".
In 2012 it was ranked number 12 on a list of the top 100 children's novels published by School Library Journal.
Awards
Harry Potter and the Prisoner of Azkaban won several awards, including the 1999 Booklist Editors' Choice Award, the 1999 Bram Stoker Award for Best Work for Young Readers, the 1999 FCBG Children's Book Award, the 1999 Whitbread Book of the Year for children's books, and the 2000 Locus Award for Best Fantasy Novel. It was also nominated for the 2000 Hugo Award for Best Novel, the first in the series nominated, but lost to A Deepness in the Sky. Prisoner of Azkaban additionally won the 2004 Indian Paintbrush Book Award and the 2004 Colorado Blue Spruce Young Adult Book Award. Additionally, it was named an American Library Association Notable Children's Book in 2000 as well as one of their Best Books for Young Adults. As with the previous two books in the series, Prisoner of Azkaban won the Nestlé Smarties Book Prize Gold Medal for children aged 9–11 and made the top of the New York Times Best Seller list. In both cases, it was the last in the series to do so. However, in the latter case, a Children's Best Sellers list was created just before the release of Harry Potter and the Goblet of Fire in July 2000 in order to free up more room on the original list. In 2003, the novel was listed at number 24 on the BBC's survey The Big Read.
Sales
Prisoner of Azkaban sold more than 68,000 copies in the UK within three days of publication, which made it the fastest selling British book of the time. The sales total by 2012 is said by The Guardian to be 3,377,906.
Editions
Harry Potter and the Prisoner of Azkaban was issued, prior to publication, in two distinct UK proof editions, and one US "Advance Reader's Edition". The first UK proof, in purple wrappers, differs from the second in a number of respects, and is thought to have been printed in a small edition of 50 copies. The second UK proof is in green wrappers and was printed in a somewhat larger run. The US Advance Reader's Edition is the last of its kind in the Harry Potter series, as no Advance Reader's Editions are known for books 4 through 7. The rear wrapper of the Advance Reader's Edition describes the circumstances of the US publication of the book.
Harry Potter and the Prisoner of Azkaban was released in hardcover in the UK on 8 July 1999 and in the US on 8 September. The UK edition was released at the unusually precise time of 3.45pm, so as to avoid children skipping school in order to purchase the book. The first state of the hardback edition features an error on p. 7, with an unintended carriage return in a block quote. Two further issues were released, both fixing the error. Across all three states, 5,150 copies were printed by Clays Ltd.
The British paperback edition was released on 1 April 2000, while the US paperback was released 1 October 2001.
Bloomsbury additionally released an adult edition with a different cover design to the original, in paperback on 10 July 2004 and in hardcover in October 2004. A hardcover special edition, featuring a green border and signature, was released on 8 July 1999. In May 2004, Bloomsbury released a Celebratory Edition, with a blue and purple border. On 1 November 2010, they released the 10th anniversary Signature edition illustrated by Clare Mellinsky and in July 2013 a new adult cover illustrated by Andrew Davidson, both these editions were designed by Webb & Webb Design Limited.
Beginning on 27 August 2013, Scholastic released new covers for the paperback editions of Harry Potter in the United States to celebrate 15 years of the series. The covers were designed by the author and illustrator Kazu Kibuishi.
An illustrated version of Harry Potter and the Prisoner of Azkaban was released on 3 October 2017, illustrated by Jim Kay, who had illustrated the previous two instalments. It includes over 115 new illustrations, and illustrated editions of the following four novels were planned to follow. Jim Kay announced on 6 October 2022 that he would not be illustrating the final two Harry Potter books and that his last work, Harry Potter and the Order of the Phoenix, would be released on 11 October 2022.
Adaptations
Film
The film version of Harry Potter and the Prisoner of Azkaban was released in 2004 and was directed by Alfonso Cuarón from a screenplay by Steve Kloves. The film débuted at number one at the box office and held that position for two weeks. It made a total of $796.7 million worldwide, which made it the second highest-grossing film of 2004 behind Shrek 2. However, among all eight entries in the Harry Potter franchise, Prisoner of Azkaban grossed the lowest; yet among critics and fans, the film is often cited as the best in the franchise – in large part due to Cuarón's stylistic influence. The film ranks at number 471 in Empire magazine's 2008 list of the 500 greatest movies of all time.
Video games
Three video games by different developers, each loosely based on the book, were released in 2004 by Electronic Arts.
References
External links
1999 children's books
1999 British novels
1999 fantasy novels
BILBY Award–winning works
Bloomsbury Publishing books
Bram Stoker Award for Best Work for Young Readers winners
British novels adapted into films
Costa Book Award–winning works
Fiction about shapeshifting
Fiction about size change
Fiction about prison escapes
Fiction about wrongful convictions
Fiction set in 1993
Fiction set in 1994
Novels about revenge
Novels about time travel
Scholastic Corporation books
Sequel novels
Werewolf novels
Children's fantasy novels | Harry Potter and the Prisoner of Azkaban | Physics,Mathematics | 2,795 |
49,632,961 | https://en.wikipedia.org/wiki/List%20of%20security%20assessment%20tools | This is a list of available software and hardware tools that are designed for or are particularly suited to various kinds of security assessment and security testing.
Operating systems and tool suites
Several operating systems and tool suites provide bundles of tools useful for various types of security assessment.
Operating system distributions
Kali Linux (formerly BackTrack), a penetration-test-focused Linux distribution based on Debian
Pentoo, a penetration-test-focused Linux distribution based on Gentoo
ParrotOS, a Linux distribution focused on penetration testing, forensics, and online anonymity
Tools
External links
SecTools.org: Top 125 Network Security Tools – a list of security tools suggested by a community
Computer security
Security assessment tools | List of security assessment tools | Technology | 145 |
2,193,362 | https://en.wikipedia.org/wiki/Chromophore | A chromophore is a molecule which absorbs light at a particular wavelength and reflects color as a result. Chromophores are commonly referred to as colored molecules for this reason. The word is derived from the Greek words for "colour" (chroma) and "carrier" (phoros). Many molecules in nature are chromophores, including chlorophyll, the molecule responsible for the green colors of leaves.
The color that is seen by our eyes is that of the light not absorbed by the reflecting object within a certain wavelength spectrum of visible light. The chromophore indicates a region in the molecule where the energy difference between two separate molecular orbitals falls within the range of the visible spectrum (or in informal contexts, the spectrum under scrutiny). Visible light that hits the chromophore can thus be absorbed by exciting an electron from its ground state into an excited state. In biological molecules that serve to capture or detect light energy, the chromophore is the moiety that causes a conformational change in the molecule when hit by light.
Conjugated pi-bond system chromophores
Just as two adjacent p-orbitals in a molecule form a pi-bond, three or more adjacent p-orbitals in a molecule can form a conjugated pi-system. In a conjugated pi-system, electrons can capture certain photons as they resonate along the extended run of p-orbitals, similar to how a radio antenna detects photons along its length. Typically, the more conjugated (longer) the pi-system is, the longer the wavelength of photon that can be captured. In other words, with every added adjacent double bond in a molecule diagram, the system absorbs progressively longer wavelengths; once the absorption shifts from the ultraviolet into the violet and blue part of the spectrum, the compound begins to appear yellow to the eye. ("Conjugated systems of fewer than eight conjugated double bonds absorb only in the ultraviolet region and are colorless to the human eye"; "Compounds that are blue or green typically do not rely on conjugated double bonds alone.")
In the conjugated chromophores, the electrons jump between energy levels that are extended pi orbitals, created by electron clouds like those in aromatic systems. Common examples include retinal (used in the eye to detect light), various food colorings, fabric dyes (azo compounds), pH indicators, lycopene, β-carotene, and anthocyanins. Various factors in a chromophore's structure go into determining at what wavelength region in a spectrum the chromophore will absorb. Lengthening or extending a conjugated system with more unsaturated (multiple) bonds in a molecule will tend to shift absorption to longer wavelengths. Woodward–Fieser rules can be used to approximate ultraviolet-visible maximum absorption wavelength in organic compounds with conjugated pi-bond systems.
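As a rough illustration of how such an estimate is assembled, the sketch below adds commonly quoted textbook Woodward–Fieser increments for conjugated dienes to a base value. The figures used (about 214 nm for an acyclic or heteroannular diene, 253 nm for a homoannular diene, roughly +30 nm per additional conjugated double bond, +5 nm per alkyl substituent or ring residue, and +5 nm per exocyclic double bond) are approximate and should be checked against a reference table, and the example molecule is hypothetical.

```python
# Rough Woodward-Fieser estimate of the UV-vis absorption maximum (nm) for a
# conjugated diene.  Increment values are the commonly quoted textbook ones
# and are approximate; consult a reference table for serious work.
BASE_NM = {"acyclic_or_heteroannular": 214, "homoannular": 253}
INCREMENTS_NM = {
    "extra_conjugated_double_bond": 30,
    "alkyl_or_ring_residue": 5,
    "exocyclic_double_bond": 5,
}

def estimate_lambda_max(parent: str, **counts: int) -> int:
    """Sum the base value and the per-feature increments."""
    total = BASE_NM[parent]
    for feature, n in counts.items():
        total += INCREMENTS_NM[feature] * n
    return total

# Hypothetical heteroannular diene with three ring-residue substituents and
# one exocyclic double bond: 214 + 3*5 + 1*5 = 234 nm, still in the
# ultraviolet, consistent with short conjugated systems appearing colorless.
print(estimate_lambda_max(
    "acyclic_or_heteroannular",
    alkyl_or_ring_residue=3,
    exocyclic_double_bond=1,
))
```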
Some of these are metal complex chromophores, which contain a metal in a coordination complex with ligands. Examples are chlorophyll, which is used by plants for photosynthesis and hemoglobin, the oxygen transporter in the blood of vertebrate animals. In these two examples, a metal is complexed at the center of a tetrapyrrole macrocycle ring: the metal being iron in the heme group (iron in a porphyrin ring) of hemoglobin, or magnesium complexed in a chlorin-type ring in the case of chlorophyll. The highly conjugated pi-bonding system of the macrocycle ring absorbs visible light. The nature of the central metal can also influence the absorption spectrum of the metal-macrocycle complex or properties such as excited state lifetime. The tetrapyrrole moiety in organic compounds which is not macrocyclic but still has a conjugated pi-bond system still acts as a chromophore. Examples of such compounds include bilirubin and urobilin, which exhibit a yellow color.
Auxochrome
An auxochrome is a functional group of atoms attached to the chromophore which modifies the ability of the chromophore to absorb light, altering the wavelength or intensity of the absorption.
Halochromism
Halochromism occurs when a substance changes color as the pH changes. This is a property of pH indicators, whose molecular structure changes upon certain changes in the surrounding pH. This change in structure affects a chromophore in the pH indicator molecule. For example, phenolphthalein is a pH indicator whose structure changes as the pH changes, as described below:
In a pH range of about 0-8, the molecule has three aromatic rings, all bonded to a tetrahedral sp3-hybridized carbon atom in the middle, which keeps the π-bonding in the aromatic rings from conjugating. Because of their limited extent, the aromatic rings absorb light only in the ultraviolet region, and so the compound appears colorless in the 0-8 pH range. However, as the pH increases beyond 8.2, that central carbon becomes part of a double bond, becoming sp2-hybridized and leaving a p orbital to overlap with the π-bonding in the rings. This makes the three rings conjugate together to form an extended chromophore absorbing longer-wavelength visible light, giving a fuchsia color. At pH ranges outside 0-12, other molecular structure changes result in other color changes; see Phenolphthalein for details.
Common chromophore absorption wavelengths
See also
Biological pigment
Chromatophore
Fluorophore
Litmus
Pharmacophore
Photophore, glandular organ
Pigment
Spectroscopy
Visual phototransduction
Woodward's rules
References
External links
Causes of Color: physical mechanisms by which color is generated.
High Speed Nano-Sized Electronics May be Possible with Chromophores - Azonano.com
Chemical compounds
Color | Chromophore | Physics,Chemistry | 1,229 |
16,290,826 | https://en.wikipedia.org/wiki/PSR%20J0855%E2%88%924644 | PSR J0855-4644 is a pulsar in the constellation Vela, and was at one time thought possibly associated with supernova remnant RX J0852.0-4622. However, this association is considered unlikely since a central compact object with better matching kinematics to the shell has been observed.
References
External links
Simbad
Vela (constellation) | PSR J0855−4644 | Astronomy | 84 |
55,588,470 | https://en.wikipedia.org/wiki/Chondrometer | A chondrometer is a measuring instrument designed to determine the bulk density of grain. Grain density is measured in kilograms per hectolitre (or, in imperial units, pounds per bushel) and is thus also referred to as the hectolitre mass.
Purpose
Density is a guide to wheat quality and determines the price and the space required to store and transport the crop.
Description
A chondrometer consists of a filling hopper, a measuring container, a straightedge, and a weighing instrument. The filling hopper allows the grain to fall into the measuring container in a reproducible pattern as several measurements are taken, and all readings need to be within a strict degree of accuracy. The measuring cylinder has a flat top edge so that it can be levelled using the straightedge (a strickle) to give a set volume. Today, the weighing instrument can be a set of digital scales with an accuracy finer than 0.1 g, though in the past it was a steelyard balance with the measuring cylinder hooking directly onto the scales.
Calculation
Bulk density (kg/hL) = mass of grain captured (kg) / volume of measuring container (L) × 100
The measuring container will usually be 1 L or 0.5 L to make the calculation easy.
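A worked example of the calculation, with invented measurement values, can be sketched in a few lines of Python:

```python
# Bulk density (hectolitre mass) from a chondrometer reading.
# The figures below are made up for illustration; a real reading would use the
# instrument's calibrated container volume and the weighed grain mass.
def bulk_density_kg_per_hl(grain_mass_kg: float, container_volume_l: float) -> float:
    # kg/hL = (kg / L) x 100, since 1 hL = 100 L
    return grain_mass_kg / container_volume_l * 100

# Example: 0.385 kg of grain captured in a 0.5 L measuring container.
print(round(bulk_density_kg_per_hl(0.385, 0.5), 1))  # -> 77.0 kg/hL
```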
References
External links
How to Measure Test Weight of Grain - YouTube
Measuring instruments | Chondrometer | Technology,Engineering | 271 |
21,165,748 | https://en.wikipedia.org/wiki/Drug-induced%20urticaria | Cutaneous reactions are among the most prevalent forms of adverse drug reaction, with drug-induced urticaria ranking as the second most common type, preceded only by drug-induced exanthems. Urticaria, commonly known as hives, manifests as weals, itching, burning, redness, swelling, and angioedema, a rapid swelling of the lower skin layers that is often more painful than pruritic. These symptoms may occur concurrently, successively, or independently. Typically, when a drug triggers urticaria, symptoms appear within 24 hours of ingestion, which aids in identifying the causative agent. Urticaria symptoms usually subside within 1–24 hours, while angioedema may take up to 72 hours to resolve completely.
Mechanisms of drug-induced urticaria
Drug-induced urticaria occurs by immunologic and nonimmunologic mechanisms. The primary mechanism for drug-induced urticaria involves a type-I hypersensitivity reaction mediated by IgE antibodies, commonly observed with ß-lactam use. This immune-mediated reaction necessitates a sensitization period, leading to more severe systemic reactions, including angioedema and anaphylaxis.
Additionally, drug-induced urticaria can result from the activation of the complement cascade, a type-III hypersensitivity mediated by immune complexes. Complement cascade activation generates anaphylatoxins, releasing chemical mediators from basophils and mast cells, subsequently causing urticaria. This mechanism is seen in serum sickness and is associated with systemic symptoms such as fever, joint pain, and neurological symptoms.
Some medications, like opioids and certain other drugs, induce urticaria by directly acting on mast cells, triggering histamine release. Non-steroidal anti-inflammatory drugs (NSAIDs) contribute uniquely to urticaria by inhibiting the COX-1 pathway, leading to increased production of leukotrienes, vasodilators implicated in edema and urticaria. NSAIDs are the most common culprit of drug-induced urticaria and reactions to NSAIDs are often associated with angioedema.
Topical medications, typically cause contact dermatitis, though can also induce urticaria through immune-mediated or non-immunological mechanisms. Antibiotics, often present in topical creams, are a common source of contact urticaria.
Treatment and prevention
Patients experiencing drug-induced urticaria should avoid the causative drug if possible. When avoidance is not feasible, alternatives, such as using selective COX-2 inhibitors in place of typical NSAIDs, may be considered. Evidence suggests that individuals with NSAID-induced urticaria, particularly with angioedema, may develop tolerance over time. Pre-treatment with anti-histamines or leukotriene antagonists can potentially prevent reactions in cases where avoidance or substitution is challenging.
For post-exposure urticaria, discontinuation of the offending medication is crucial. Symptoms typically resolve upon removal of the causal agent, and management may involve anti-histamines or corticosteroids based on the severity of the reaction.
List of medications known to cause urticaria
Nonsteroidal anti-inflammatory agents
Antibiotics: cephalosporins, penicillins, tetracyclines, aminoglycosides, sulfonamides, sorbitol complexes
Angiotensin-converting enzyme (ACE) inhibitors
Hydralazine
Narcotic analgesics
Contrast media
Enzymes: streptokinase, trypsin, chymopapain
Vaccines
Antifungal agents: ketoconazole, fluconazole
Steroids
Polypeptide hormones: insulin, corticotrophin
Vasopressin
Anesthetic agents (local and general)
Quinidine
Anticancer drugs
Muscle myorelaxants (curare)
Mannitol
Dextrans
Protamine
Vitamins
See also
List of cutaneous conditions
Localized heat contact urticaria
Skin lesion
Urticarial dermatoses
References
Drug eruptions
Urticaria and angioedema
Drug-induced diseases | Drug-induced urticaria | Chemistry | 860 |
53,653,732 | https://en.wikipedia.org/wiki/WISEA%201101%2B5400 | WISEA 1101+5400 (full name WISEA J110125.95+540052.8) is a T-type brown dwarf (specifically T5.5) approximately 100 light-years away in the constellation Ursa Major. It was discovered in March 2017 by members of the citizen science project Backyard Worlds. Initial photometric analysis suggested it was a T5.5 dwarf, which was later confirmed by a spectrum of the object obtained with the NASA Infrared Telescope Facility. It is the first confirmed brown dwarf found by the project.
The brown dwarf was identified by several volunteers, including the therapist Rosa Castro, Bob Fletcher, Khasan Mokaev and Tamara Stajic. WISEA 1101+5400 was discovered six days after the launch of the project, and at the time of publication the discovery was the fastest to be published from any Zooniverse project.
The discovery of this brown dwarf allowed the Backyard Worlds collaboration to estimate how many new brown dwarfs the project could discover. This was possible because the brown dwarf is one magnitude fainter than any brown dwarf previously discovered with proper-motion surveys. The team estimated how many new L dwarfs, T dwarfs and Y dwarfs the project would find. As of July 2019, the project had met this estimate in terms of spectroscopically confirmed T and L dwarfs (70 T dwarfs and 61 L dwarfs) and exceeded it in terms of brown dwarf candidates (1,305).
External links
WISEA 1101+5400 on wiseview, a tool created by Backyard Worlds volunteers
Subject 5566284 Zooniverse subject
New Brown Dwarf Found by NASA-funded Citizen Science Project Goddard Media Studios (GSFC)
Citizen Scientists Uncover Cold New World Near the Sun story by AMNH
References
Brown dwarfs
T-type brown dwarfs
Ursa Major
WISE objects | WISEA 1101+5400 | Astronomy | 377 |
12,194,869 | https://en.wikipedia.org/wiki/C3H5NO | {{DISPLAYTITLE:C3H5NO}}
The molecular formula C3H5NO (molar mass: 71.08 g/mol, exact mass: 71.0371 u) may refer to:
Acrylamide (repeating unit in polyacrylamide)
2-Azetidinone
Isoxazoline
Lactonitrile
Oxazoline | C3H5NO | Chemistry | 83 |
69,612,044 | https://en.wikipedia.org/wiki/Scotland%27s%20Churches%20Trust | Scotland's Churches Trust is a Scottish registered charity whose “aims are to advance the preservation, promotion and understanding of Scotland’s rich architectural heritage represented in its churches and places of worship of all denominations.” Its principal activities are “promoting heritage and tourism” and “giving of grants”. It primarily carries out these activities by offering financial support and practical advice for church repairs and modernisation projects, organ recitals and concerts, a church recording scheme and by promoting its fourteen Pilgrim Journeys across Scotland, that include over 500 places of current or former worship.
Formed in 2012 from two older built heritage organisations, the Scottish Churches Architectural Heritage Trust and Scotland's Churches Scheme, it currently has over 1300 churches in its membership.
History
In 1974, broadcaster and writer Magnus Magnusson created The Steeplechase fundraising scheme to help raise funds to preserve Scotland's churches. In 1978 he became the founding chairperson of the Scottish Churches Architectural Heritage Trust, a position he held until 1985. Its primary aim was to assist congregations in the preservation and upkeep of their buildings.
In 1980, the board invited noted fundraiser Florence MacKenzie (1935-2010) to become the Trust's director, in which post she remained until her retirement in 2009. MacKenzie was granted an MBE for her services to the restoration of church buildings in 1996. Other former trustees include Lady Marion Fraser and Lord Penrose.
During its first three decades SCAHT was instrumental in preserving “churches of all sizes – historic and small country kirks as well as synagogues”. These buildings include Kilarrow Parish Church in Islay, St Magnus' in Orkney, St Marnoch's in Angus, Sacred Heart in Wigton, Yester Parish Church in East Lothian and St Michael and All Angels, Inverness.
Founded in 1996, Scotland's Churches Scheme was an ecumenical membership charitable trust that assisted “living” churches work together and make their buildings the focus of their communities by regularly opening their doors and sharing their history and heritage. Among other activities, the Scheme provided a series of “how-to” guides to assist its member churches in researching and presenting their stories, secure their buildings, welcome visitors and record and interpret their graveyards. It also published a series of Regional Guides listing the history and architectural heritage of ecclesiastical buildings across Scotland.
In 2012 the Scottish Churches Architectural Heritage Trust and Scotland's Churches Scheme merged to form Scotland's Churches Trust. HRH Princess Anne, Princess Royal, became its patron and Dr Brian Fraser its first Director. In 2013 the SCT launched Scotland's Pilgrim Journeys, a collection of six trails across Scotland that combined the medieval tradition of pilgrim visits to ecclesiastical sites with contemporary faith tourism.
In recent years the Trust has provided grants towards the costs of major fabric works and minor maintenance activities. It also offers financial support to church organists seeking to improve their skills and to churches offering organ concerts. Its Scottish Pilgrim Journeys initiative has been expanded from six to fourteen different trails across the country.
Governance
Patron: HRH Princess Anne, Princess Royal KG, KT, GCVO, GCStJ, QSO, CD
Hon President: Robin Blair CVO, WS
Vice Presidents:
Trustees:
Director: Dr DJ Johnston-Smith
Chairperson, Board of Trustees: Prof Adam Cumming
Chairperson, Grants Committee: Ros Taylor RIBA
References
External links
Christian charities based in the United Kingdom
Heritage organisations in the United Kingdom
Architectural history | Scotland's Churches Trust | Engineering | 686 |
22,385,872 | https://en.wikipedia.org/wiki/Tom%20Poberezny | Thomas Paul Poberezny (October 3, 1946 – July 25, 2022) was an American aerobatic world champion aviator, as well as chairman of the annual Experimental Aircraft Association (EAA) Fly-In and Convention (now named AirVenture) from 1977 to 2011 and president of EAA from 1989 to 2010, presiding over a time period of expansive growth for the organization and convention. He succeeded his father, Paul Poberezny, who founded them in 1953.
Poberezny was a member of the Eagles Aerobatic Team (originally the Red Devils), which was formed in 1971 and flew for more than 25 years, setting the record for the longest-running aerobatic team with the same members. He led the effort to build what is now known as the EAA Aviation Museum, opened in 1983, and is a co-founder of the Young Eagles, an EAA program created in 1992 to give children the opportunity to experience flight and learn about general aviation, flying more than two million young people since its creation and making it the most successful program of its kind in history. From his involvement in the EAA, Poberezny is often credited with having led the introduction of the light-sport aircraft category in 2004. In 2016, he was inducted into the National Aviation Hall of Fame.
Life and career
Tom Poberezny was born and raised in the greater Milwaukee metropolitan area of Wisconsin, the son of Audrey and Paul Poberezny. He was surrounded by aviation from the very early stages of his life. Because of his father's early key involvement with EAA, the basement of Tom's childhood home in Hales Corners, Wisconsin was considered "the regional social center of [aircraft] homebuilding." Poberezny graduated from Northwestern University in 1970 with a degree in industrial engineering, and became preoccupied with aviation soon after. He joined the US National Unlimited Aerobatic Team and was part of the team that won the World Championship in 1972 at Salon, France. In 1973, he won the individual US National Unlimited Aerobatic Championship.
In 1971, Poberezny, Charlie Hillard, and Gene Soucy formed the aerobatic team The Red Devils (soon renamed the Eagles Aerobatic Team) and went on to perform at airshows until the Daytona Skyfest in 1995. This makes the Eagles the longest-performing aerobatic team in the world with one group of members. Poberezny also appeared as himself in the 1980 movie Cloud Dancer, for which he was the chief pilot and technical advisor.
He was appointed to chairman of the EAA Convention and Fly-In (now known as AirVenture) in 1977. This annual event takes place in Oshkosh, Wisconsin and attracts over 600,000 visitors with 10,000 aircraft from 68 countries, making it the world's largest aviation gathering. Much of the convention's subsequent growth occurred under the leadership of Tom Poberezny, bringing it from a national gathering of homebuilt and small plane enthusiasts to an international event that embraced every aspect of aviation, with a nearly $200 million economic impact on the surrounding area by 2017. In the late 1970s, he led the campaign to build the present-day EAA Aviation Museum at Wittman Regional Airport in Oshkosh, which officially opened in 1983.
In 1989, Poberezny was elected president of the Experimental Aircraft Association. EAA promotes the hobby of building and flying small aircraft and has over 180,000 members worldwide. In 1992 he led the creation of the Young Eagles program, which introduces young people to aviation, with actor Cliff Robertson appointed founding chairman upon its inception. The goal of giving one million kids a ride in an aircraft was met in October 2003; and in July 2016, the two millionth Young Eagle was flown by actor and former chairman of the organization, Harrison Ford.
Poberezny was a member of the Centennial of Flight Commission, a six-person board created by Congress in 1999 to coordinate the nation's celebration of the 100th anniversary of the Wright brothers' 1903 historic first flight. He was also president of the EAA Aviation Foundation, an educational outreach project, and was a founding member of the U.S. Aerobatic Foundation.
Poberezny heavily promoted the EAA's role in the light-sport aircraft category, bringing new opportunities for people to learn to fly or keep flying. It became an official category recognized with an airworthiness certificate by the FAA in 2004.
In March 2009, Paul Poberezny stepped down as chairman of EAA and Tom Poberezny took on these duties as well, with Rod Hightower as president and CEO from September 7, 2010. Tom retained the positions of chairman of both EAA and AirVenture.
On July 26, 2011, Tom Poberezny and the EAA announced that he would be retiring from EAA effective August 1, 2011. The president and CEO, Rod Hightower, would assume Poberezny's duties until a replacement was found. However, on October 22, 2012, Hightower resigned as president and CEO of EAA, and on the same day, former Cessna CEO Jack J. Pelton was elected chairman of the EAA board of directors. Pelton issued a press announcement saying that he would assume all leadership duties of the organization until suitable replacements could be named.
Poberezny served on the boards of several aviation organizations, including the Board of Directors for Garmin International and the Advisory Boards of Aircraft Kit Industry Association (AKIA), Cirrus Aircraft, Citation Jet Pilots Association, and Angel Flight West.
During the 2015 AirVenture convention, Poberezny returned to the show for the first time since his retirement, driving around the grounds in his "Red Three" Volkswagen Beetle.
He died following a brief illness on July 25, 2022, the opening day of AirVenture, and is survived by his wife Sharon and daughter Lesley.
Awards and recognition
Poberezny was inducted into the Wisconsin Aviation Hall of Fame in October 1996. He was also awarded the Distinguished Wisconsin Aviator Award in May 2007. Past recipients of this award include astronaut Mark C. Lee, Major General Albert Wilkening, Major General Fred R. Sloan, and astronaut Jim Lovell. In 2011, Poberezny was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.
In early 2013, Poberezny received the prestigious Living Legend of Aviation award at a ceremony in Beverly Hills, California. Later that year, a campaign and website were launched dedicated to honoring Poberezny and his accomplishments during the 20 years he led EAA. The website also included a Roster of Support for others to add to the cause. Notable proponents of the effort included aerospace engineer Burt Rutan, Cirrus Aircraft CEO and co-founder Dale Klapmeier, and retired test, fighter and air show pilot Bob Hoover.
Tom Poberezny was inducted into the National Aviation Hall of Fame on October 1, 2016 in Dayton, Ohio, making him and Paul Poberezny (1999 inductee) the first father and son duo to be honored by the Hall.
On the day of his death in July 2022, several aviation industry executives offered statements in response. Dale Klapmeier called him a "true aviation hero" and "pillar of this industry", Jack Pelton said "Tom’s legacy is tremendous in the world of aviation with his personal achievements as well as the growth of EAA", and General Aviation Manufacturers Association (GAMA) president and CEO Pete Bunce also paid tribute in a written statement.
References
External links
Profile in the National Aviation Hall of Fame
Biography in the Gathering of Eagles Foundation
Biography in the U.S. Centennial of Flight Commission
1992 Eagles Aerobatic Team article in the Chicago Tribune
Poberezny obituary in the Milwaukee Journal Sentinel
1946 births
2022 deaths
American aerospace businesspeople
Aerobatic pilots
Northwestern University alumni
Aviators from Wisconsin
Experimental Aircraft Association
National Aviation Hall of Fame inductees
American people of Ukrainian descent
People from Hales Corners, Wisconsin
People from Milwaukee | Tom Poberezny | Engineering | 1,662 |
8,077,742 | https://en.wikipedia.org/wiki/Target%20Disk%20Mode | Target Disk Mode (sometimes referred to as TDM or Target Mode) is a boot mode unique to Macintosh computers.
When a Mac that supports Target Disk Mode is started with the 'T' key held down, its operating system does not boot. Instead, the Mac's firmware enables its drives to behave as a SCSI, FireWire, Thunderbolt, or USB-C external mass storage device.
A Mac booted in Target Mode can be attached to the port of any other computer, Mac or PC, where it will appear as an external device. Hard drives within the target Mac, for example, can be formatted or partitioned exactly like any other external drive. Some computers will also make their internal CD/DVD drives and other internal and external peripheral hardware available to the host computer.
Target Disk Mode is useful for accessing the contents of a Mac which cannot load its own operating system. Target Disk Mode is the preferred form of old-computer to new-computer interconnect used by Apple's Migration Assistant. Migration Assistant supports Ethernet (wired) or Wi-Fi, which TDM does not. Neither supports USB; however, Thunderbolt-to-FireWire, Thunderbolt-to-Gigabit-Ethernet, and USB-3.0-to-Gigabit-Ethernet adapters are an option when one of the computers does not have FireWire or Thunderbolt.
History
Apple introduced disk mode access with the original PowerBook 100 and continued to offer it with most subsequent PowerBook series and FireWire-equipped Macs. As long as the requisite software appeared in the system ROM, the Mac could be booted into disk mode.
Target Disk Mode was originally called SCSI Disk Mode, and a special cable (SCSI System Cable) allowed the original PowerBook series to attach to a desktop Mac as an external SCSI disk. A unique system control panel on the PowerBook was used to select a non-conflicting SCSI ID number from the host Mac.
This also made it possible to select the disk in the Startup control panel and boot up from it.
With the change to IDE drives starting with the PowerBook 150 and 190, Apple implemented HD Target Mode, which essentially enabled SCSI Disk Mode by translating the external SCSI commands via the ATA driver. Officially reserved for Apple's portables only, the mode was supported by all PowerBooks except the 140, 145, 145B, 150 and 170. However, SCSI Disk Mode can be implemented unofficially on any Macintosh with an external SCSI port by suspending the startup process with the interrupt switch, as long as all internal drives on the chain can be set to different IDs than the active host system's devices.
When Apple dropped the SCSI interface, starting with the AGP Power Mac G4 and “Pismo” PowerBook G3, FireWire Target Disk Mode replaced the earlier disk mode implementation, and official support was extended beyond laptops to all subsequent Macs with built-in FireWire.
Thunderbolt supports Target Disk Mode.
The 12-inch Retina MacBook (early 2015) has only one expansion port, a USB-C port that supports charging, external displays, and Target Disk Mode. Using Target Disk Mode on this MacBook requires a cable that supports USB 3.0 or USB 3.1, with either a USB-A or USB-C connector on one end and a USB-C connector on the other end for the MacBook.
With the Mac transition to Apple silicon, Apple replaced Target Disk Mode with Mac Sharing Mode.
System requirements
The target computer (the computer to be placed into TDM) must:
Have a FireWire or Thunderbolt port
Have an ATA device at ATA bus 0
Be any Macintosh except the following models:
iMac (Tray-Loading)
Power Macintosh G3 (Blue & White)
iBook G3 models without FireWire
Power Macintosh G4 (PCI Graphics)
MacBook Air (2008-2010)
MacBook (Unibody)
The host computer (the computer into which the Target Disk Mode booted computer is plugged) merely needs to meet the same requirements as for any external mass storage device using the bus in question, and (if access to native Mac formatted partitions such as the boot volume is desired) support for the correct version of Hierarchical File System. On Classic Mac OS, this means FireWire 2.3.3 or later and Mac OS 8.6 or later are required to use a FireWire target.
The host computer may run Microsoft Windows, but with some possible shortcomings: to read a Mac's HFS-formatted partitions, extra drivers such as MacDrive, TransMac, MacDisk, or HFSExplorer are necessary. Users also must ensure their computer possesses appropriate interface hardware in order to physically connect to a Mac in Target Mode. MacDrive also has a read-only option to prevent any accidental editing of the computer in Target Disk Mode; however, this mode cannot be set after an HFS/HFS+ disk is mounted. With the addition of HFS drivers to Apple's Boot Camp, it has also become possible for Macs running Windows to read (but not write) HFS partitions without purchasing software. Users have separated these drivers from the main Boot Camp installer so they can also be installed on other Windows computers. Host computers running Linux are also able to read and write a Mac's HFS- or HFS+-formatted devices through Target Disk Mode. This works out of the box on most distributions, as HFS+ support is part of the Linux kernel. However, these filesystems cannot be checked for errors under Linux, so for shrinking or moving partitions it is preferable to use Mac OS.
See also
LIO Target
NetBoot
Notes
MacOS
Macintosh platform | Target Disk Mode | Technology | 1,188 |
2,681,594 | https://en.wikipedia.org/wiki/Alpha%20Sextantis | Alpha Sextantis (α Sex, α Sextantis) is the brightest star in the equatorial constellation of Sextans. It is visible to the naked eye on a dark night with an apparent visual magnitude of 4.49. The distance to this star, as determined from parallax measurements, is around 280 light years. This is considered an informal "equator star", as it lies less than a quarter of a degree south of the celestial equator. In 1900, it was 7 minutes of arc north of the equator. As a result of the precession of the Earth's rotational axis, it crossed over to the southern hemisphere in December 1923.
The variability of Alpha Sextantis was discovered by Aven Magded Hamadamen and included in the International Variable Star Index. The star undergoes pulsations with a period of 9.1 hours.
This is an evolved A-type giant star with a stellar classification of A0 III. It has around 2.5 times the mass of the Sun and three times the Sun's radius. The abundance of elements is similar to that in the Sun. It radiates 90 times the solar luminosity from its outer atmosphere at an effective temperature of 9,984 K. Alpha Sextantis is nearing the end of its life as a main-sequence star; it is around 385 million years old with a projected rotational velocity of 21 km/s.
References
External links
Astronomy Knowledge Database
A-type giants
Sextans
Sextantis, Alpha
BD+00 2615
Sextantis, 15
087887
049641
03981 | Alpha Sextantis | Astronomy | 330 |
14,755,064 | https://en.wikipedia.org/wiki/DAZ1 | Deleted in azoospermia 1, also known as DAZ1, is a protein which in humans is encoded by the DAZ1 gene.
Function
This gene is a member of the DAZ gene family and is a candidate for the human Y-chromosomal azoospermia factor (AZF). Its expression is restricted to pre-meiotic germ cells, particularly in spermatogonia. It encodes an RNA-binding protein that is important for spermatogenesis. Four copies of this gene are found on chromosome Y within palindromic duplications; one pair of genes is part of the P2 palindrome and the second pair is part of the P1 palindrome. Each gene contains a 2.4 kb repeat including a 72-bp exon, called the DAZ repeat; the number of DAZ repeats is variable and there are several variations in the sequence of the DAZ repeat. Each copy of the gene also contains a 10.8 kb region that may be amplified; this region includes five exons that encode an RNA recognition motif (RRM) domain. This gene contains three copies of the 10.8 kb repeat. However, no transcripts containing three copies of the RRM domain have been described; thus the RefSeq for this gene contains only two RRM domains.
Interactions
DAZ1 has been shown to interact with DAZAP2, DAZL and DAZ associated protein 1.
References
Further reading | DAZ1 | Chemistry | 305 |
60,913 | https://en.wikipedia.org/wiki/Stigmergy | Stigmergy ( ) is a mechanism of indirect coordination, through the environment, between agents or actions. The principle is that the trace left in the environment by an individual action stimulates the performance of a succeeding action by the same or different agent. Agents that respond to traces in the environment receive positive fitness benefits, reinforcing the likelihood of these behaviors becoming fixed within a population over time.
Stigmergy is a form of self-organization. It produces complex, seemingly intelligent structures, without need for any planning, control, or even direct communication between the agents. As such it supports efficient collaboration between extremely simple agents, who may lack memory or individual awareness of each other.
History
The term "stigmergy" was introduced by French biologist Pierre-Paul Grassé in 1959 to refer to termite behavior. He defined it as: "Stimulation of workers by the performance they have achieved." It is derived from the Greek words στίγμα stigma "mark, sign" and ἔργον ergon "work, action", and captures the notion that an agent’s actions leave signs in the environment, signs that it and other agents sense and that determine and incite their subsequent actions.
Later on, a distinction was made between the stigmergic phenomenon, which is specific to the guidance of additional work, and the more general, non-work specific incitation, for which the term sematectonic communication was coined by E. O. Wilson, from the Greek words σῆμα sema "sign, token", and τέκτων tecton "craftsman, builder": "There is a need for a more general, somewhat less clumsy expression to denote the evocation of any form of behavior or physiological change by the evidences of work performed by other animals, including the special case of the guidance of additional work."
Stigmergy is now one of the key concepts in the field of swarm intelligence.
Stigmergic behavior in non-human organisms
Stigmergy was first observed in social insects. For example, ants exchange information by laying down pheromones (the trace) on their way back to the nest when they have found food. In that way, they collectively develop a complex network of trails, connecting the nest in an efficient way to various food sources. When ants come out of the nest searching for food, they are stimulated by the pheromone to follow the trail towards the food source. The network of trails functions as a shared external memory for the ant colony.
In computer science, this general method has been applied in a variety of techniques called ant colony optimization, which search for solutions to complex problems by depositing "virtual pheromones" along paths that appear promising. In the field of artificial neural networks, stigmergy can be used as a computational memory. Federico Galatolo showed that a stigmergic memory can achieve the same performance as more complex and well-established neural network architectures such as LSTM.
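To make the pheromone mechanism concrete, the sketch below runs a bare-bones ant colony optimization loop on a tiny travelling-salesman instance. The distance matrix, parameter values and update rule are illustrative assumptions only, not the scheme of any particular study.

```python
import random

# Toy symmetric distance matrix for 4 cities (illustrative values only).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]   # the shared "trace" in the environment
alpha, beta, rho = 1.0, 2.0, 0.5            # trail weight, heuristic weight, evaporation rate


def build_tour():
    """One ant builds a tour, biased by pheromone strength and inverse distance."""
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = [(pheromone[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                   for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour


def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))


best = None
for _ in range(100):                           # iterations of the colony
    tours = [build_tour() for _ in range(10)]  # 10 ants per iteration
    # Evaporation: old traces fade, letting the colony forget poor routes.
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= (1 - rho)
    # Deposition: shorter tours leave stronger traces on their edges.
    for tour in tours:
        deposit = 1.0 / tour_length(tour)
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
    candidate = min(tours, key=tour_length)
    if best is None or tour_length(candidate) < tour_length(best):
        best = candidate

print(best, tour_length(best))
```

Shorter tours deposit more pheromone on their edges, so over successive iterations the shared trace biases later ants toward good routes, which is the stigmergic effect described above.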
Other eusocial creatures, such as termites, use pheromones to build their complex nests by following a simple decentralized rule set. Each insect scoops up a 'mudball' or similar material from its environment, infuses the ball with pheromones, and deposits it on the ground, initially in a random spot. However, termites are attracted to their nestmates' pheromones and are therefore more likely to drop their own mudballs on top of their neighbors'. The larger the heap of mud becomes, the more attractive it is, and therefore the more mud will be added to it (positive feedback). Over time this leads to the construction of pillars, arches, tunnels and chambers.
Stigmergy has been observed in bacteria, various species of which differentiate into distinct cell types and which participate in group behaviors that are guided by sophisticated temporal and spatial control systems. Spectacular examples of multicellular behavior can be found among the myxobacteria. Myxobacteria travel in swarms containing many cells kept together by intercellular molecular signals. Most myxobacteria are predatory: individuals benefit from aggregation as it allows accumulation of extracellular enzymes which are used to digest prey microorganisms. When nutrients are scarce, myxobacterial cells aggregate into fruiting bodies, within which the swarming cells transform themselves into dormant myxospores with thick cell walls. The fruiting process is thought to benefit myxobacteria by ensuring that cell growth is resumed with a group (swarm) of myxobacteria, rather than isolated cells. Similar life cycles have developed among the cellular slime molds. The best known of the myxobacteria, Myxococcus xanthus and Stigmatella aurantiaca, are studied in various laboratories as prokaryotic models of development.
Analysis of human behavior
Stigmergy, as studied in eusocial creatures and physical systems, has been proposed as a model for analyzing some robotic systems, multi-agent systems, communication in computer networks, and online communities.
On the Internet there are many collective projects where users interact only by modifying local parts of their shared virtual environment. Wikipedia is an example of this. The massive structure of information available in a wiki, or an open source software project such as the FreeBSD kernel could be compared to a termite nest; one initial user leaves a seed of an idea (a mudball) which attracts other users who then build upon and modify this initial concept, eventually constructing an elaborate structure of connected thoughts.
In addition the concept of stigmergy has also been used to describe how cooperative work such as building design may be integrated. Designing a large contemporary building involves a large and diverse network of actors (e.g. architects, building engineers, static engineers, building services engineers). Their distributed activities may be partly integrated through practices of stigmergy.
Analysis of human social movements
The rise of open source software in the 21st century has disrupted the business models of some proprietary software providers, and open content projects like Wikipedia have threatened the business models of companies like Britannica. Researchers have studied collaborative open source projects, arguing they provide insights into the emergence of large-scale peer production and the growth of gift economy.
Stigmergic society
Heather Marsh, associated with the Occupy Movement, Wikileaks, and Anonymous, has proposed a new social system where competition as a driving force would be replaced with a more collaborative society. This proposed society would not use representative democracy but new forms of idea and action based governance and collaborative methods including stigmergy. "With stigmergy, an initial idea is freely given, and the project is driven by the idea, not by a personality or group of personalities. No individual needs permission (competitive) or consensus (cooperative) to propose an idea or initiate a project."
Some at the Hong Kong Umbrella Movement in 2014 were quoted recommending stigmergy as a way forward.
See also
Ant mill
Biosemiotics
Extended mind thesis
Path dependence
Spontaneous order
Watchmaker analogy
r/place
References
Further reading
Systems theory
Self-organization | Stigmergy | Mathematics | 1,466 |
40,729,402 | https://en.wikipedia.org/wiki/Arctic%20moss | Arctic moss is a common name for several plants and may refer to:
Calliergon giganteum, an aquatic moss
Cladonia, a genus of lichens | Arctic moss | Biology | 35 |
27,774,331 | https://en.wikipedia.org/wiki/Mechanics%20of%20human%20sexuality | The mechanics of human sexuality or mechanics of sex, or more formally the biomechanics of human sexuality, is the study of the mechanics related to human sexual activity. Examples of topics include the biomechanical study of the strength of vaginal tissues and the biomechanics of male erectile function. The mechanics of sex under extreme conditions, such as sexual activity in zero gravity in outer space, are also being studied.
Pioneering researchers studied the male and female genitals during coitus (penile-vaginal penetration) with ultrasound technology in 1992 and magnetic resonance imaging (MRI) in 1999, mapping the anatomy of the activity and taking images illustrating the fit of male and female genitals. In the research using MRI, researchers imaged couples performing coitus inside an MRI machine. The magnetic resonance images also showed that the penis has the shape of a boomerang, that one third of its length consists of the root of the penis, and that the vaginal walls wrap snugly around it. Moreover, MRI during coitus indicates that the internal part of the clitoris is stimulated by penile-vaginal movements. These studies highlight the role of the clitoris and indicate that what is termed the G-spot may only exist because the highly innervated clitoris is pulled closely to the anterior wall of the vagina when the woman is sexually aroused and during vaginal penetration.
References
Further reading
Biomechanics
Human sexuality | Mechanics of human sexuality | Physics,Biology | 299 |
55,059,510 | https://en.wikipedia.org/wiki/Immunological%20memory | Immunological memory is the ability of the immune system to quickly and specifically recognize an antigen that the body has previously encountered and initiate a corresponding immune response. Generally, these take the form of secondary, tertiary and other subsequent immune responses to the same antigen. The adaptive immune system and antigen-specific receptor generation (TCR, antibodies) are responsible for adaptive immune memory.
After the inflammatory immune response to a danger-associated antigen, some of the antigen-specific T cells and B cells persist in the body and become long-lived memory T and B cells. After a second encounter with the same antigen, they recognize the antigen and mount a faster and more robust response. Immunological memory is the basis of vaccination. Emerging research shows that even the innate immune system can initiate a more efficient immune response and pathogen elimination after previous stimulation with a pathogen, that is, with its PAMPs or DAMPs. Innate immune memory (also called trained immunity) is neither antigen-specific nor dependent on gene rearrangement; the altered response is instead caused by changes in epigenetic programming and shifts in cellular metabolism. Innate immune memory has been observed in invertebrates as well as in vertebrates.
Adaptive immune memory
Development of adaptive immune memory
Immunological memory arises after a primary immune response against the antigen. It is thus created by each individual after an initial exposure to a potentially dangerous agent. The course of the secondary immune response is similar to that of the primary immune response. After a memory B cell recognizes the antigen, it presents the peptide:MHC II complex to nearby effector T cells, leading to their activation and rapid proliferation.
However, antibodies that were previously created in the body remain, representing the humoral component of immunological memory and an important defensive mechanism in subsequent infections. In addition to these antibodies, a small number of memory T and B cells remain and make up the cellular component of immunological memory. They stay in the blood circulation in a resting state, and on a subsequent encounter with the same antigen these cells are able to respond immediately and eliminate the antigen. Memory cells are long-lived and can last up to several decades in the body.
Immunity to chickenpox, measles, and some other diseases lasts a lifetime. Immunity to many diseases eventually wears off. The immune system's response to a few diseases, such as dengue, counterproductively worsens the next infection (antibody-dependent enhancement).
As of 2019, researchers are still trying to find out why some vaccines produce life-long immunity, while the effectiveness of other vaccines drops to zero in less than 30 years (for mumps) or less than six months (for H3N2 influenza).
Evolution of adaptive immune memory
The evolutionary invention of memory T and B cells is widespread; however, the conditions required to develop this costly adaptation are specific. First, the initial cost of the molecular machinery needed to evolve immune memory is high and demands losses in other host characteristics. Second, organisms with middling or long lifespans have a higher chance of evolving such an apparatus. The cost of this adaptation increases if the host has a middling lifespan, as the immune memory must become effective earlier in life.
Furthermore, research models show that the environment plays an essential role in the diversity of memory cells in a population. Comparing the influence of multiple infections to a specific disease as opposed to disease diversity of an environment provide evidence that memory cell pools accrue diversity based on the number of individual pathogens exposed, even at the cost of efficiency when encountering more common pathogens. Individuals living in isolated environments such as islands have a less diverse population of memory cells, which are, however, present with sturdier immune responses. That indicates that the environment plays a large role in the evolution of memory cell populations.
Previously acquired immune memory can be depleted by measles in unvaccinated children, leaving them at risk of infection by other pathogens in the years after infection.
Memory B cells
Memory B cells are plasma cells that are able to produce antibodies for a long time. Unlike the naive B cells involved in the primary immune response the memory B cell response is slightly different. The memory B cell has already undergone clonal expansion, differentiation and affinity maturation, so it is able to divide multiple times faster and produce antibodies with much higher affinity (especially IgG).
In contrast, the naive plasma cell is fully differentiated and cannot be further stimulated by antigen to divide or increase antibody production. Memory B cell activity in secondary lymphatic organs is highest during the first 2 weeks after infection. Subsequently, after 2 to 4 weeks its response declines. After the germinal center reaction the memory plasma cells are located in the bone marrow which is the main site of antibody production within the immunological memory.
Memory T cells
Memory T cells can be either CD4+ or CD8+. These memory T cells do not require further antigen stimulation to proliferate; therefore, they do not need a signal via MHC. Memory T cells can be divided into two functionally distinct groups based on the expression of the chemokine receptor CCR7. This receptor directs migration into secondary lymphatic organs. Those memory T cells that do not express CCR7 (that is, CCR7-) have receptors that allow them to migrate to the site of inflammation in the tissue and represent an immediate effector cell population. These cells were named effector memory T cells (TEM). After repeated stimulation they produce large amounts of IFN-γ, IL-4 and IL-5. In contrast, CCR7+ memory T cells lack proinflammatory and cytotoxic function but have receptors for lymph node migration. These cells were named central memory T cells (TCM). They effectively stimulate dendritic cells, and after repeated stimulation they are able to differentiate into CCR7- effector memory T cells. Both populations of memory cells originate from naive T cells and remain in the body for several years after initial immunization.
Experimental techniques used to study these cells include measuring antigen-stimulated cell proliferation and cytokine release, staining with peptide-MHC multimers or using an activation-induced marker (AIM) assay.
Innate immune memory
Many invertebrates such as species of fresh water snails, copepod crustaceans, and tapeworms have been observed activating innate immune memory to instigate a more efficient immune response to second encounter with specific pathogens, despite missing an adaptive branch of the immune system. RAG1-deficient mice without functional T and B cells were able to survive the administration of a lethal dose of Candida albicans when exposed previously to a much smaller amount, showing that vertebrates also retain this ability. Despite not having the ability to manufacture antibodies like the adaptive immune system, innate immune system has immune memory properties as well. Innate immune memory (trained immunity) is defined as a long-term functional reprogramming of innate immune cells evoked by exogenous or endogenous insults and leading to an altered response towards a second challenge after returning to a non-activated state.
When innate immune cells receive an activation signal; for example, through recognition of PAMPs with PRRs, they start the expression of proinflammatory genes, initiate an inflammatory response, and undergo epigenetic reprogramming. After the second stimulation, the transcription activation is faster and more robust. Immunological memory was reported in monocytes, macrophages, NK cells, ILC1, ILC2, and ILC3 cells. Concomitantly, some nonimmune cells, for example, epithelial stem cells on barrier tissues, or fibroblasts, change their epigenetic state and respond differently after priming insult.
Mechanism of innate immune memory
At the steady state, unstimulated cells have reduced biosynthetic activities and more condensed chromatin with reduced gene transcription. The interaction of exogenous PAMPs (β-glucan, muramyl peptide) or endogenous DAMPs (oxidized LDL, uric acid) with PRRs initiates a cellular response. The triggered intracellular signaling cascades lead to the upregulation of metabolic pathways such as glycolysis, the Krebs cycle, and fatty acid metabolism. An increase in metabolic activity provides cells with energy and building blocks, which are needed for the production of signaling molecules such as cytokines and chemokines.
Signal transduction changes the epigenetic marks and increases chromatin accessibility, allowing transcription factors to bind and start transcription of genes connected with inflammation. There is an interplay between metabolism and epigenetic changes, because some metabolites such as fumarate and acetyl-CoA can activate or inhibit enzymes involved in chromatin remodeling. After the stimulus subsides, there is no further need for the production of immune factors, and their expression in immune cells is terminated. Several epigenetic modifications created during stimulation remain. A characteristic epigenetic rewiring in trained cells is the accumulation of H3K4me3 on immune gene promoters and the increase of H3K4me1 and H3K27ac on enhancers. Additionally, cellular metabolism does not return to the state before stimulation, and trained cells remain in a prepared state. This status can last from weeks to several months and can be transmitted to daughter cells. Secondary stimulation induces a new response, which is faster and stronger.
Evolution of innate immune memory
Immune memory brings a major evolutionary advantage when the organism faces repeated infections. Inflammation is very costly, and increased effectiveness of the response accelerates pathogen elimination and prevents damage to the host's own tissue. Classical adaptive immune memory evolved in jawed vertebrates and in jawless fish (lampreys), which together represent only about 1% of living organisms. Some form of immune memory is, therefore, reported in other species. In plants and invertebrates, faster kinetics, an increased magnitude of immune response and an improved survival rate can be seen after secondary infection encounters. Immune memory is common across the vast majority of biodiversity on Earth.
It has been proposed that immune memory in innate and adaptive immunity represents an evolutionary continuum in which a more robust immune response evolved first, mediated by epigenetic reprogramming. In contrast, specificity through antigen-specific receptors evolved later in some vertebrates.
Evolutionary mechanisms leading to the development of immunological memory
The emergence of the adaptive immune system is rooted in deep evolutionary history, dating back roughly 500 million years. Recent investigations have found that two major macroevolutionary events led to its emergence: the origin of RAG and two whole rounds of genome duplication (WGD). The earliest evidence for features resembling an adaptive immune system dates to the era when jawed and jawless vertebrates diverged phylogenetically. Early investigations around the 1970s, carried out while groups studied the RAG genome, led to the discovery of unique inverted-repeat flanking signal sequences. These so-called RAG transposons invaded regions of the genome which may have been involved in adaptive immunity. Several works and reviews together suggest that these disruptions could have been selected for rearrangement mechanisms that maintain genomic integrity, which ultimately led to mechanisms such as RAG-mediated diversification in the adaptive immune system. This discovery led to the hypothesis that there was an invasion event of a regulatory element-like region, because these repeats resembled a remnant transposable element. This invasion has been argued to be necessary for the emergence of the BCR- and TCR-dependent immunity now seen in all gnathostomes. According to recent findings, around 450-500 million years ago the vertebrate genome went through two rounds of whole genome duplication, usually referred to as the "2R hypothesis". Such intense genomic events lead to gene sub-functionalization, neofunctionalization or, in many cases, loss of function. Ohno proposed 40 years ago that the evolutionary events which led to whole genome duplication were key for the emergence of the diversity seen in adaptive immunity and memory. Further work illustrates that the newer genic regions which arose from this duplication event are major contributors to today's adaptive immune systems, which control immunological memory in gnathostomes. Okada's work investigating ohnologues that arose from WGD supports the same conclusion: today's adaptive immune systems are remnants of the WGD events.
See also
Immunity (medical)
Seroconversion
Serostatus
Virgin soil epidemic
References
Immune system | Immunological memory | Biology | 2,600 |
8,876,082 | https://en.wikipedia.org/wiki/Kirkman%27s%20schoolgirl%20problem | Kirkman's schoolgirl problem is a problem in combinatorics proposed by Thomas Penyngton Kirkman in 1850 as Query VI in The Lady's and Gentleman's Diary (pg.48). The problem states:
Fifteen young ladies in a school walk out three abreast for seven days in succession: it is required to arrange them daily so that no two shall walk twice abreast.
Solutions
A solution to this problem is an example of a Kirkman triple system, which is a Steiner triple system having a parallelism, that is, a partition of the blocks of the triple system into parallel classes which are themselves partitions of the points into disjoint blocks. Such Steiner systems that have a parallelism are also called resolvable.
There are exactly seven non-isomorphic solutions to the schoolgirl problem, as originally listed by Frank Nelson Cole in Kirkman Parades in 1922. The seven solutions are summarized in the table below, denoting the 15 girls with the letters A to O.
From the number of automorphisms for each solution and the definition of an automorphism group, the total number of solutions including isomorphic solutions is therefore:
15! × (1/168 + 1/168 + 1/24 + 1/24 + 1/12 + 1/12 + 1/21) = 404,756,352,000.
History
The problem has a long and storied history. This section is based on historical work done at different times by Robin Wilson and by Louise Duffield Cummings. The history is as follows:
In 1844, Wesley Woolhouse, the editor of The Lady's and Gentleman's Diary at the time, asked the general question: "Determine the number of combinations that can be made out of n symbols, p symbols in each; with this limitation, that no combination of q symbols, which may appear in any one of them shall be repeated in any other." Only two answers were received, one incorrect and the other correctly answering the question with the quotient of binomial coefficients C(n,q)/C(p,q). As the question did not ask for anything more than the number of combinations, nothing was received about the conditions on n, p, or q when such a solution could be achieved.
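The arithmetic behind that answer is a short double count; the display below sketches it in modern binomial-coefficient notation (the notation is not Woolhouse's).

```latex
% Each block of p symbols contains \binom{p}{q} of the q-element subsets, and
% no q-element subset may appear in two blocks, so the number of blocks is at most
\[
  \binom{n}{q} \Big/ \binom{p}{q},
\]
% with equality precisely when every q-element subset is covered by some block.
```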
In 1846, Woolhouse asked: "How many triads can be made out of n symbols, so that no pair of symbols shall be comprised more than once among them?". This is equivalent to repeating his 1844 question with the values p = 3 and q = 2.
In 1847, at age 41, Thomas Kirkman published his paper titled On a Problem in Combinations which comprehensively described and solved the problem of constructing triple systems of order n where n = 1 or 3 (mod 6). He also considered other values of n even though perfect balance would not be possible. He gave two different sequences of triple systems, one for n = 7, 15, 19, 27, etc., and another for n = 9, 13, 25, etc. Using these propositions, he proved that triple systems exist for all values of n = 1 or 3 (mod 6) (not necessarily resolvable ones, but triple systems in general). He also described resolvable triple systems in detail in that paper, particularly for n = 9 and 15; resolvable triple systems are now known as Kirkman triple systems. He could not conclusively say for what other values of n would resolvable triple systems exist; that problem would not be solved until the 1960s (see below).
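The congruence condition appearing in that result can be recovered from two elementary counts, sketched below.

```latex
% Each of the n symbols must be paired with the other n-1 symbols, two at a time
% within its triples, so every symbol lies in (n-1)/2 triples and n must be odd.
% Counting (symbol, triple) incidences, the total number of triples is
\[
  \frac{1}{3} \cdot n \cdot \frac{n-1}{2} \;=\; \frac{n(n-1)}{6},
\]
% which is an integer for odd n exactly when n \equiv 1 \text{ or } 3 \pmod 6.
```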
In 1850, Kirkman posed the 15 schoolgirl problem, which would become much more famous than the 1847 paper he had already written. Several solutions were received. Kirkman himself gave a solution that later would be found to be isomorphic to Solution I above. Kirkman claimed it to be the only possible solution but that was incorrect. Arthur Cayley's solution would be later found to be isomorphic to Solution II. Both solutions could be embedded in PG(3,2) though that geometry was not known at the time. However, in publishing his solutions to the schoolgirl problem, Kirkman neglected to refer readers to his own 1847 paper, and this omission would have serious consequences for invention and priority as seen below.
Also in 1850, James Joseph Sylvester asked if there could be 13 different solutions to the 15-schoolgirl problem that would use all triples exactly once overall, observing that the total number of triples on 15 girls is C(15,3) = 455 = 13 × 35, thirteen times the 35 triples used in a single week. In words, is it possible for the girls to march every day for 13 weeks, such that every two girls march together exactly once each week and every three girls march together exactly once in the term of 13 weeks? This problem was much harder, and a computational solution would finally be provided in 1974 by RHF Denniston (see below).
In 1852, Robert Richard Anstice provided a cyclic solution, made by constructing the first day's five triples to be 0Gg, AbC, aDE, cef, BdF on the 15 symbols 0ABCDEFGabcdefg and then cyclically shifting each subsequent day by one letter while leaving 0 unchanged (uppercase staying uppercase and lowercase staying lowercase). If the four triples without the 0 element (AbC, aDE, cef, BdF) are taken and uppercase converted to lowercase (abc, ade, cef, bdf) they form what would later be called the Pasch configuration. The Pasch configuration would become important in isomorph rejection techniques in the 20th century.
In 1853, Jakob Steiner, completely unaware of Kirkman's 1847 paper, published his paper titled Combinatorische Aufgabe which reintroduced the concept of triple systems but did not mention resolvability into separate parallel classes. Steiner noted that it is necessary for n to be 1 or 3 (mod 6) but left an open question as to when this would be realized, unaware that Kirkman had already settled that question in 1847. As this paper was more widely read by the European mathematical establishment, triple systems later became known as Steiner triple systems.
In 1859, Michel Reiss answered the questions raised by Steiner, using both methodology and notation so similar to Kirkman's 1847 work (without acknowledging Kirkman), that subsequent authors such as Louise Cummings have called him out for plagiarism. Kirkman himself expressed his bitterness.
In 1860, Benjamin Peirce unified several disparate solutions presented thus far, and showed that there were three possible cyclic solution structures, one corresponding to Anstice's work, one based on Kirkman's solution, and one on Cayley's.
In 1861, James Joseph Sylvester revisited the problem and tried to claim that he had invented it, and that his Cambridge lectures had been the source of Kirkman's work. Kirkman quickly rebuffed his claims, stating that when he wrote his papers he had never been to Cambridge or heard of Sylvester's work. This priority dispute led to a falling out between Sylvester and Kirkman.
In 1861-1862, Kirkman had a falling out with Arthur Cayley over an unrelated matter (Cayley's choosing not to publish a series of papers by Kirkman on group theory and polyhedra which cost Kirkman recognition by the mathematical community in Europe), further contributing to his being sidelined by the mathematics establishment. His comprehensive 1847 paper in particular was forgotten, with many subsequent authors either crediting Steiner or Reiss, unaware of the history.
The schoolgirl puzzle's popularity itself was unaffected by Kirkman's academic conflicts, and in the late 19th and early 20th centuries the puzzle appeared in several recreational mathematics books by Édouard Lucas, Rouse Ball, Wilhelm Ahrens, and Henry Dudeney. In his lifetime, Kirkman would complain about his serious mathematical work being eclipsed by the popularity of the schoolgirl problem. Kirkman died in 1895.
In 1918, Kirkman's serious mathematical work was brought back to wider attention by Louise Duffield Cummings in a paper titled An Undervalued Kirkman Paper which discussed the early history of the field and corrected the historical omission.
At about the same time, Cummings was working with Frank Nelson Cole and Henry Seely White on triple systems. This culminated in their famous and widely cited 1919 paper Complete classification of triad systems on 15 elements which was the first paper to lay out all 80 solutions to the Steiner triple system of size 15. These included both resolvable and non-resolvable systems.
In 1922, Cole published his paper Kirkman Parades which listed for the first time all seven non-isomorphic solutions to the 15 schoolgirl problem, thus answering a long-standing question since the 1850s. The seven Kirkman solutions correspond to four different Steiner systems when resolvability into parallel classes is removed as a constraint. Three of the Steiner systems have two possible ways of being separated into parallel classes, meaning two Kirkman solutions each, while the fourth has only one, giving seven Kirkman solutions overall.
In the 1960s, it was proved that Kirkman triple systems exist for all orders n ≡ 3 (mod 6). This was first proved by Lu Jiaxi in 1965; he submitted it to Acta Mathematica Sinica, but the journal erroneously thought the problem had already been solved and rejected his paper in 1966, which was later found to be a serious mistake. His subsequent academic contributions were disrupted by the Cultural Revolution and rejected again. In 1968, the generalized theorem was proven independently by D. K. Ray-Chaudhuri and R. M. Wilson.
In 1974, RHF Denniston solved the Sylvester problem of constructing 13 disjoint Kirkman solutions and using them to cover all 455 triples on the 15 girls. His solution is discussed below.
Sylvester's problem
James Joseph Sylvester in 1850 asked if 13 disjoint Kirkman systems of 35 triples each could be constructed to use all triples on 15 girls. No solution was found until 1974 when RHF Denniston at the University of Leicester constructed it with a computer. Denniston's insight was to create a single-week Kirkman solution in such a way that it could be permuted according to a specific permutation of cycle length 13 to create disjoint solutions for subsequent weeks; he chose a permutation with a single 13-cycle and two fixed points like (1 2 3 4 5 6 7 8 9 10 11 12 13)(14)(15). Under this permutation, a triple like 123 would map to 234, 345, ... (11, 12, 13), (12, 13, 1) and (13, 1, 2) before repeating. Denniston thus classified the 455 triples into 35 rows of 13 triples each, each row being the orbit of a given triple under the permutation. In order to construct a Sylvester solution, no single-week Kirkman solution could use two triples from the same row, otherwise they would eventually collide when the permutation was applied to one of them. Solving Sylvester's problem is equivalent to finding one triple from each of the 35 rows such that the 35 triples together make a Kirkman solution. He then asked an Elliott 4130 computer to do exactly that search, which took him 7 hours to find this first-week solution, labeling the 15 girls with the letters A to O:
Day 1 ABJ CEM FKL HIN DGO
Day 2 ACH DEI FGM JLN BKO
Day 3 ADL BHM GIK CFN EJO
Day 4 AEG BIL CJK DMN FHO
Day 5 AFI BCD GHJ EKN LMO
Day 6 AKM DFJ EHL BGN CIO
Day 7 BEF CGL DHK IJM ANO
He stopped the search at that point, not looking to establish uniqueness.
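Denniston's construction can be checked mechanically. The sketch below regenerates the remaining twelve weeks by repeatedly applying his permutation (assuming the letters A through M stand for the thirteen cycled positions and N, O for the two fixed points), and then tests that each week is a valid Kirkman arrangement and that the thirteen weeks are pairwise disjoint and together use all 455 triples.

```python
from itertools import combinations

# Denniston's first week, as listed above; the letters A-O name the 15 girls.
WEEK1 = [
    ["ABJ", "CEM", "FKL", "HIN", "DGO"],
    ["ACH", "DEI", "FGM", "JLN", "BKO"],
    ["ADL", "BHM", "GIK", "CFN", "EJO"],
    ["AEG", "BIL", "CJK", "DMN", "FHO"],
    ["AFI", "BCD", "GHJ", "EKN", "LMO"],
    ["AKM", "DFJ", "EHL", "BGN", "CIO"],
    ["BEF", "CGL", "DHK", "IJM", "ANO"],
]
GIRLS = set("ABCDEFGHIJKLMNO")
CYCLE = "ABCDEFGHIJKLM"            # assumed 13-cycle; N and O are the fixed points


def shift(girl, k):
    """Apply Denniston's permutation k times to one girl."""
    if girl in CYCLE:
        return CYCLE[(CYCLE.index(girl) + k) % 13]
    return girl                    # the fixed points N and O stay put


def week(k):
    """Week k (0-based): week 1 with the permutation applied k times."""
    return [[frozenset(shift(g, k) for g in triple) for triple in day]
            for day in WEEK1]


def is_kirkman(wk):
    """Each day partitions the 15 girls; each pair walks together exactly once."""
    pairs = set()
    for day in wk:
        if set().union(*day) != GIRLS:
            return False
        for triple in day:
            for pair in combinations(sorted(triple), 2):
                if pair in pairs:
                    return False
                pairs.add(pair)
    return len(pairs) == 105       # C(15,2) pairs, each seen exactly once


all_triples = set()
for k in range(13):
    wk = week(k)
    assert is_kirkman(wk)
    all_triples.update(t for day in wk for t in day)

# Disjointness and coverage: 13 weeks x 35 triples = 455 = C(15,3).
assert len(all_triples) == 455
print("13 disjoint Kirkman weeks covering all 455 triples")
```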
The American minimalist composer Tom Johnson composed a piece of music called Kirkman's Ladies based on Denniston's solution.
As of 2021, it is not known whether there are other non-isomorphic solutions to Sylvester's problem, or how many solutions there are.
9 schoolgirls and extensions
The equivalent of the Kirkman problem for 9 schoolgirls results in S(2,3,9), an affine plane isomorphic to the following triples on each day:
Day 1: 123 456 789
Day 2: 147 258 369
Day 3: 159 267 348
Day 4: 168 249 357
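These four days can be verified mechanically; the sketch below is one way to check that every day uses each girl exactly once and that every pair of girls shares exactly one triple.

```python
from itertools import combinations

# The four days listed above, with the nine girls labeled 1-9.
# Triples are written with digits in ascending order, so each pair has a canonical form.
days = [
    ["123", "456", "789"],
    ["147", "258", "369"],
    ["159", "267", "348"],
    ["168", "249", "357"],
]

pairs_seen = set()
for day in days:
    # Resolvability: each day must use every girl exactly once.
    assert sorted("".join(day)) == list("123456789")
    for triple in day:
        for pair in combinations(triple, 2):
            assert pair not in pairs_seen   # no pair of girls meets twice
            pairs_seen.add(pair)

# 4 days x 3 triples x 3 pairs = 36 = C(9,2): every pair occurs exactly once.
assert len(pairs_seen) == 36
print("valid resolvable S(2,3,9)")
```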
The corresponding Sylvester problem asks for 7 different S(2,3,9) systems of 12 triples each, together covering all triples. A solution was known to Bays (1917) and was found again, from a different direction, by Earl Kramer and Dale Mesner in a 1974 paper titled Intersections Among Steiner Systems (J. Combinatorial Theory, Vol. 16, pp. 273-285). There can indeed be 7 disjoint S(2,3,9) systems, and all such sets of 7 fall into two non-isomorphic categories of sizes 8640 and 6720, with 42 and 54 automorphisms respectively.
Solution 1:
Day 1 Day 2 Day 3 Day 4
Week 1 ABC.DEF.GHI ADG.BEH.CFI AEI.BFG.CDH AFH.BDI.CEG
Week 2 ABD.CEH.FGI ACF.BGH.DEI AEG.BCI.DFH AHI.BEF.CDG
Week 3 ABE.CDI.FGH ACG.BDF.EHI ADH.BGI.CEF AFI.BCH.DEG
Week 4 ABF.CEI.DGH ACD.BHI.EFG AEH.BCG.DFI AGI.BDE.CFH
Week 5 ABG.CDE.FHI ACH.BEI.DFG ADI.BCF.EGH AEF.BDH.CGI
Week 6 ABH.CDF.EGI ACI.BDG.EFH ADE.BFI.CGH AFG.BCE.DHI
Week 7 ABI.CFG.DEH ACE.BFH.DGI ADF.BEG.CHI AGH.BCD.EFI
Solution 1 has 42 automorphisms, generated by the permutations (A I D C F H)(B G) and (C F D H E I)(B G). Applying the 9! = 362880 permutations of ABCDEFGHI, there are 362880/42 = 8640 different solutions all isomorphic to Solution 1.
Solution 2:
Day 1 Day 2 Day 3 Day 4
Week 1 ABC.DEF.GHI ADG.BEH.CFI AEI.BFG.CDH AFH.BDI.CEG
Week 2 ABD.CEH.FGI ACF.BGH.DEI AEG.BCI.DFH AHI.BEF.CDG
Week 3 ABE.CGH.DFI ACI.BFH.DEG ADH.BGI.CEF AFG.BCD.EHI
Week 4 ABF.CGI.DEH ACE.BDG.FHI ADI.BCH.EFG AGH.BEI.CDF
Week 5 ABG.CDI.EFH ACH.BDF.EGI ADE.BHI.CFG AFI.BCE.DGH
Week 6 ABH.CEI.DFG ACD.BFI.EGH AEF.BCG.DHI AGI.BDE.CFH
Week 7 ABI.CDE.FGH ACG.BDH.EFI ADF.BEG.CHI AEH.BCF.DGI
Solution 2 has 54 automorphisms, generated by the permutations (A B D)(C H E)(F G I) and (A I F D E H)(B G). Applying the 9! = 362880 permutations of ABCDEFGHI, there are 362880/54 = 6720 different solutions all isomorphic to Solution 2.
Thus there are 8640 + 6720 = 15360 solutions in total, falling into two non-isomorphic categories.
In addition to S(2,3,9), Kramer and Mesner examined other systems that could be derived from S(5,6,12) and found that there could be up to 2 disjoint S(5,6,12) systems, up to 2 disjoint S(4,5,11) systems, and up to 5 disjoint S(3,4,10) systems. All such sets of 2 or 5 are respectively isomorphic to each other.
Larger systems and continuing research
In the 21st century, analogues of Sylvester's problem have been visited by other authors under terms like "Disjoint Steiner systems" or "Disjoint Kirkman systems" or "LKTS" (Large Sets of Kirkman Triple Systems), for n > 15. Similar sets of disjoint Steiner systems have also been investigated for the S(5,8,24) Steiner system in addition to triple systems.
Galois geometry
In 1910 the problem was addressed using Galois geometry by George Conwell.
The Galois field GF(2) with two elements is used with four homogeneous coordinates to form PG(3,2) which has 15 points, 3 points to a line, 7 points and 7 lines in a plane. A plane can be considered a complete quadrilateral together with the line through its diagonal points. Each point is on 7 lines, and there are 35 lines in all.
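The point and line counts quoted here follow from standard subspace counting over GF(2); the display below sketches the arithmetic.

```latex
% Points of PG(3,2) are the nonzero vectors of GF(2)^4 (scaling is trivial over GF(2)):
\[
  2^4 - 1 = 15 .
\]
% Lines correspond to 2-dimensional subspaces, counted by ordered pairs of linearly
% independent vectors in the whole space and within a single line:
\[
  \frac{(2^4 - 1)(2^4 - 2)}{(2^2 - 1)(2^2 - 2)} = \frac{15 \cdot 14}{3 \cdot 2} = 35 ,
\]
% and each point lies on (2^4 - 2)/(2^2 - 2) = 7 of them.
```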
The lines of PG(3,2) are identified by their Plücker coordinates in PG(5,2) with 63 points, 35 of which represent lines of PG(3,2). These 35 points form the surface S known as the Klein quadric. For each of the 28 points off S there are 6 lines through it which do not intersect S.
As there are seven days in a week, the heptad is an important part of the solution:
A heptad is determined by any two of its points. Each of the 28 points off S lies in two heptads. There are 8 heptads. The projective linear group PGL(3,2) is isomorphic to the alternating group on the 8 heptads.
The schoolgirl problem consists in finding seven lines in the 5-space which do not intersect and such that any two lines always have a heptad in common.
Spreads and packing
In PG(3,2), a partition of the points into lines is called a spread, and a partition of the lines into spreads is called a packing or parallelism. There are 56 spreads and 240 packings. When Hirschfeld considered the problem in his Finite Projective Spaces of Three Dimensions (1985), he noted that some solutions correspond to packings of PG(3,2), essentially as described by Conwell above, and he presented two of them.
Generalization
The problem can be generalized to n girls, where n must be an odd multiple of 3 (that is, n ≡ 3 (mod 6)), walking in triplets for (n - 1)/2 days, with the requirement, again, that no pair of girls walk in the same row twice. The solution to this generalisation is a Steiner triple system, an S(2, 3, 6t + 3) with parallelism (that is, one in which each of the 6t + 3 elements occurs exactly once in each block of 3-element sets), known as a Kirkman triple system. It is this generalization of the problem that Kirkman discussed first, while the famous special case was only proposed later. A complete solution to the general case was published by D. K. Ray-Chaudhuri and R. M. Wilson in 1968, though it had already been solved by Lu Jiaxi in 1965, but had not been published at that time.
Many variations of the basic problem can be considered. Alan Hartman solves a problem of this type with the requirement that no trio walks in a row of four more than once using Steiner quadruple systems.
More recently, a similar problem known as the Social Golfer Problem has gained interest; it deals with 32 golfers who want to play with different people each day in groups of 4, over the course of 10 days.
Because this is a regrouping strategy in which all groups are orthogonal, the process of organising a large group into smaller groups such that no two people share the same group twice can be referred to as orthogonal regrouping.
The Resolvable Coverings problem considers the general case of n girls and groups of size g, where each pair of girls must be in the same group at some point, using as few days as possible. This can, for example, be used to schedule a rotating table plan, in which each pair of guests must at some point be at the same table.
The Oberwolfach problem, of decomposing a complete graph into edge-disjoint copies of a given graph, also generalizes Kirkman's schoolgirl problem. Kirkman's problem is the special case of the Oberwolfach problem in which the graph consists of five disjoint triangles.
See also
Cooperative learning strategy for increasing interaction within classroom teaching
Dobble card game
Progressive dinner party designs
Speed Networking events
Sports Competitions
Combinatorics
R M Wilson
Dijen K. Ray-Chaudhuri
Discrete mathematics
Notes
References
External links
String (March, 2015) - Solution visualised, Stack Exchange
Combinatorial design
Mathematical problems
Families of sets | Kirkman's schoolgirl problem | Mathematics | 4,411 |
16,675,717 | https://en.wikipedia.org/wiki/Rotane | A rotane is a hydrocarbon consisting of a central cycloalkane ring with cyclopropane units spiro-linked to each corner. The systematic naming pattern for these molecules is "[n]rotane", where n is the number of atoms in the central ring.
The simplest such chemical, [3]rotane, consists solely of a branched array of spiro-cyclopropane units, and is thus a branched triangulane.
References
Hydrocarbons
Cyclopropanes
Spiro compounds | Rotane | Chemistry | 113 |
21,183,896 | https://en.wikipedia.org/wiki/CACNA1I | Calcium channel, voltage-dependent, T type, alpha 1I subunit, also known as CACNA1I or Cav3.3 is a protein which in humans is encoded by the CACNA1I gene.
Function
Voltage-dependent calcium channels can be distinguished based on their voltage-dependence, deactivation, and single-channel conductance. Low-voltage-activated calcium channels are referred to as 'T' type because their currents are both transient, owing to fast inactivation, and tiny, owing to small conductance. T-type channels are thought to be involved in pacemaker activity, low-threshold calcium spikes, neuronal oscillations and resonance, and rebound burst firing.
See also
T-type calcium channel
References
External links
Ion channels
Integral membrane proteins | CACNA1I | Chemistry | 163 |
48,189,396 | https://en.wikipedia.org/wiki/CssII | Centruroides suffusus suffusus toxin II (CssII) is a scorpion β-toxin from the venom of the scorpion Centruroides suffusus suffusus. CssII primarily affects voltage-gated sodium channels by causing a hyperpolarizing shift of voltage dependence, a reduction in peak transient current, and the occurrence of resurgent currents.
Sources
Centruroides suffusus suffusus is a Mexican scorpion from the genus Centruroides belonging to the family Buthidae. C. suffusus suffusus has at least seven different β-toxins, of which CssII is considered the major toxin in the venom affecting mammals. The single gene for CssII has been identified, and cloned using E. coli, resulting in recombinant CssII.
Chemistry
CssII is a single chain miniprotein, consisting of 66 amino acids:
Lys-Glu-Gly-Tyr-Leu-Val-Ser-Lys-Ser-Thr-Gly-Cys-Lys-Tyr-Glu-Cys-Leu-Lys-Leu-Gly-Asp-Asn-Asp-Tyr-Cys-Leu-Arg-Glu-Cys-Lys-Gln-Gln-Tyr-Gly-Lys-Ser-Ser-Gly-Gly-Tyr-Cys-Tyr-Ala-Phe-Ala-Cys-Trp-Cys-Thr-His-Leu-Tyr-Glu-Gln-Ala-Val-Val-Trp-Pro-Leu-Pro-Asn-Lys-Thr-Cys-Asn
It has four disulfide bridges and its scaffold is formed by a single α-helix, and a three-stranded β-sheet structure. Typical for Css β-toxins, no methionine and isoleucine amino acids occur in the miniprotein. CssII’s characteristics include the replacement of proline in position 59 with tyrosine, differentiating it from all other α- and β-toxins. Moreover, glutamine (position 32) and histidine (position 57) replace lysine and glycine residues respectively, differentiating CssII from all other β-toxins. The protein is amidated at the C-terminal end.
CssII has been successfully produced in E. coli, resulting in a recombinant variant of CssII. rCssII is not amidated at the C-terminal, resulting in a slightly lower weight. Recombinant CssII was shown to exhibit similar toxicity to native CssII. However, His-tagged CssII (HisrCssII) was shown to be less toxic to mice.
Target
CssII targets voltage-gated sodium channels, and has the highest affinity for Nav1.6 channels. CssII is thought to bind to a receptor site only accessible when the sodium channel is in its open state. Within this site Css toxin is hypothesized to bind to the residues of the IIS3-S4 loop, as well as the extracellular IIS4 end.
Nav1.6 channels are primarily expressed in the central nervous system, as well as the heart and glia cells.
Mode of action
As CssII is a β-toxin, it binds to site 4 on the sodium channel, thus primarily affecting the voltage sensor domain of the channel. It is thought that CssII binding to the voltage sensor domain is dependent on a conformational change of the sodium channel. The binding to site 4 induces a negative shift in voltage dependence, resulting in the aberrant opening of sodium channels. In addition, CssII reduces the peak transient current of the Nav1.6 channels, and causes the occurrence of resurgent currents in cells that otherwise would not exhibit this behavior. These effects might arise from different channel binding sites for CssII, as the effects on the resurgent current occur earlier than the left shift of activation, and the transient peak current reduction. However, these additional binding sites have not yet been defined.
CssII may also indirectly affect the reuptake of GABA. This interaction is thought to be due to a change in membrane potential that inhibits sodium-dependent reuptake of GABA.
Toxicity
The LD50 in mice is 25 μg/kg for subcutaneous injections and 0.60 μg/kg for intracerebroventricular injections. Bark scorpion venom is generally considered neurotoxic, and stings can be fatal. Buthidae stings are highly prevalent, especially in Mexico, with more than 200,000 stings annually.
Treatment
Envenomation by C. suffusus suffusus can be treated with antivenom, such as Alacramyn. A non-toxic recombinant variant of CssII that is able to displace native CssII facilitates the production of specific antibodies that could protect against the C. suffusus suffusus sting.
References
Ion channel toxins
Scorpion toxins
Neurotoxins | CssII | Chemistry | 1,107 |
34,303,509 | https://en.wikipedia.org/wiki/Research%20Works%20Act | The Research Works Act, 102 H.R. 3699, was a bill that was introduced in the United States House of Representatives at the 112th United States Congress on December 16, 2011, by Representative Darrell Issa (R-CA) and co-sponsored by Carolyn B. Maloney (D-NY). The bill contained provisions to prohibit open-access mandates for federally funded research and effectively revert the United States' National Institutes of Health Public Access Policy, which requires taxpayer-funded research to be freely accessible online. If enacted, it would have also severely restricted the sharing of scientific data. The bill was referred to the House Committee on Oversight and Government Reform, of which Issa is the chair. Similar bills were introduced in 2008 and 2009 but have not been enacted since.
On February 27, 2012, Elsevier, a major publisher, announced that it was withdrawing support for the Act. Later that day, Issa and Maloney issued a statement saying that they would not push for legislative action on the bill.
Reception
The bill was supported by the Association of American Publishers (AAP) and the Copyright Alliance.
The Scholarly Publishing and Academic Resources Coalition, the Alliance for Taxpayer Access, the American Library Association, the International Society for Computational Biology, the Confederation of Open Access Repositories, and prominent open science and open access advocates criticized the Research Works Act, some of them urging scholarly societies to resign from the AAP because of its support for the bill. Several AAP members, including MIT Press, Rockefeller University Press, Nature Publishing Group, and the American Association for the Advancement of Science, stated their opposition to the bill but signaled no intention to leave the association. Other AAP members stated their opposition to the bill, as did the Association of American Universities (AAU) and the Association of Public and Land-grant Universities. Several public health groups also opposed the bill.
Opponents stressed particularly the effects on public availability of biomedical research results, such as those funded by NIH grants, submitting that under the bill "taxpayers who already paid for the research would have to pay again to read the results". Mike Taylor from the University of Bristol said that the bill's denial of access to scientific research would cause "preventable deaths in developing countries" and "an incalculable loss to science", and said Representatives Issa and Maloney were motivated by multiple donations they had received from the academic publisher Elsevier.
An online petition – The Cost of Knowledge – inspired by British mathematician and Fields medalist Timothy Gowers to raise awareness of the bill, to call for lower prices for journals and to promote increased open access to information, was signed by more than 10,000 scholars. Signatories vowed to withhold their support from Elsevier journals as editors, reviewers or authors "unless they radically change how they operate". On February 27, 2012, Elsevier announced its withdrawal of support for the bill, citing concerns from journal authors, editors, and reviewers. While participants in the boycott celebrated the dropping of support for the Research Works Act, Elsevier denied that their action was a result of the boycott and stated that they took this action at the request of those researchers who did not participate in the boycott.
Related legislation and executive action
The Research Works Act followed other attempts to challenge institutional open-access mandates in the US. On September 9, 2008, an earlier bill aimed at reversing the NIH's Public Access Policy – the Fair Copyright in Research Works Act, or Conyers Bill – was introduced as 110 H. R. 6845 in the House of Representatives at the 110th United States Congress by U.S. Representative John Conyers (D-MI), with three cosponsors. It was referred to the House Committee on the Judiciary, to which Conyers delivered an introduction on September 10, 2008. After the start of the 111th United States Congress, Conyers and six cosponsors reintroduced the bill to the House of Representatives as 111 H. R. 801 on February 3, 2009. It was on the same day referred to the House Committee on the Judiciary and on March 16 to the Subcommittee on Courts and Competition Policy.
On the other hand, the Federal Research Public Access Act proposed to expand the open public access mandate to research funded by eleven U.S. federal agencies. Originally introduced to the Senate in 2006 by John Cornyn (R-TX) with two cosponsors, it was reintroduced in 2009 by Lieberman, co-sponsored by Cornyn, and again in 2012. These bills proposed requiring that those eleven agencies with research expenditures over $100 million create online repositories of journal articles of the research completed by that agency and make them publicly available without charge within six months after it has been published in a peer-reviewed journal. On February 22, 2013 the Obama administration issued a similar policy memorandum, directing Federal agencies with more than $100 million in annual research and development expenditures to develop plans to make research freely available to the public within one year of publication in most cases.
Later developments
The controversy about the Research Works Act finally ended on August 25, 2022, when the US Office of Science and Technology Policy under the Biden administration issued a contractual mandate to make all publications reporting studies funded by the U.S. federal government freely available without delay, thus ending over 50 years of the serials crisis, albeit only for U.S. contributions.
See also
PubMed Central
Open-access journal
References
External links
H.R. 3699 on Thomas – Library of Congress
H.R. 3699 on GovTrack
Notes on the Research Works Act from the Harvard Open Access Project
Internet access
Open access (publishing)
United States intellectual property law
Proposed legislation of the 112th United States Congress | Research Works Act | Technology | 1,168 |
28,084,571 | https://en.wikipedia.org/wiki/Frozen%20Ark | The Frozen Ark is a charitable frozen zoo project created jointly by the Zoological Society of London, the Natural History Museum and University of Nottingham. The project aims to preserve the DNA and living cells of endangered species to retain the genetic knowledge for the future. The Frozen Ark collects and stores samples taken from animals in zoos and those threatened with extinction in the wild. Its current director is Michael W. Bruford (Cardiff University). The Frozen Ark was a finalist for the Saatchi & Saatchi Award for World Changing Ideas in 2006.
The project was founded by Ann Clarke, her husband Bryan Clarke and Dame Anne McLaren. Since Bryan Clarke's death in 2014, the Frozen Ark's interim director has been Mike Bruford.
References
External links
A video on the Frozen Ark
Conservation projects
Cryobiology
Zoology
Zoological Society of London
Biorepositories
Rare breed conservation
Natural History Museum, London
University of Nottingham | Frozen Ark | Physics,Chemistry,Biology | 186 |
29,016,014 | https://en.wikipedia.org/wiki/Prodigy%20house | Prodigy houses are large and showy English country houses built by courtiers and other wealthy families, either "noble palaces of an awesome scale" or "proud, ambitious heaps" according to taste. The prodigy houses stretch over the periods of Tudor, Elizabethan, and Jacobean architecture, though the term may be restricted to a core period of roughly 1570 to 1620. Many of the grandest were built with a view to housing Elizabeth I and her large retinue as they made their annual royal progress around her realm. Many are therefore close to major roads, often in the English Midlands.
The term originates with the architectural historian Sir John Summerson, and has been generally adopted. He called them "... the most daring of all English buildings." The houses fall within the broad style of Renaissance architecture, but represent a distinctive English take on the style, mainly reliant on books for their knowledge of developments on the Continent. Andrea Palladio (1508–1580) was already dead before the prodigy houses reached their peak, but it has conveniently been said that his more restrained classical style did not reach England until the work of Inigo Jones in the 1620s, and that as regards ornament, French and Flemish Northern Mannerist decoration was more influential than Italian.
Elizabeth I travelled through southern England in annual summer "progresses", staying at the houses of wealthy courtiers. On these trips she went as far north as Coventry, and planned a visit to Shrewsbury (where she intended to watch plays staged by Thomas Ashton), but this leg was cancelled because of illness.
The hosts were expected to house the monarch in style, and provide sufficient accommodation for about 150 travelling members of the court, for whom temporary buildings might need to be erected. Elizabeth was not slow to complain if she felt her accommodation had not been appropriate, and did so even about two of the largest prodigy houses, Theobalds House and Old Gorhambury House (the former destroyed, the latter ruined).
Partly as a result of this imperative, but also general increasing wealth, there was an Elizabethan building boom, with large houses built in the most modern styles by courtiers, wealthy from acquired monastic estates, who wished to display their wealth and status. A characteristic was the large area of glass – a new feature that superseded the need for easily defended external walls and announced the owners' wealth. Hardwick Hall, for example, was proverbially described as "Hardwick Hall, more glass than wall." Many other smaller prodigy houses were built by businessmen and administrators, as well as long-established families of the peerage and gentry. The large Doddington Hall, Lincolnshire, was built between 1593 and 1600 by Robert Smythson for Thomas Tailor, who was the recorder to the Bishop of Lincoln; "Tailor was a lawyer and therefore rich", says Simon Jenkins.
Some recent uses of the term extend the meaning to describe large ostentatious houses in the United States of later periods, such as colonial mansions in Virginia, first so described by the American writer Cary Carson.
Style
In many respects the style of the houses varies greatly, but consistent features are a love of glass, a high elevation, symmetrical exteriors, consistency between all sides of the building, a rather square plan, often with tower pavilions at the corners that rise above the main roofline, and a decorated skyline. Altogether "...a strange amalgam of exuberant pinnacles and turrets, native Gothic mullioned windows, and Renaissance decoration." Many houses stand alone, with stables and other outbuildings at a discreet distance. Glass was then an expensive material, and its use on a large scale a demonstration of wealth. The large windows required mullions, normally in stone even in houses mainly in brick. For the main structure, stone is preferred, often as a facing over brick, but some buildings use mostly brick, for example Hatfield House, following the precedent of Hampton Court Palace and other earlier houses. Though there were often reminiscences of the medieval castle, the houses were exceptionally without defences, compared to contemporary Italian and French equivalents.
To have two internal courtyards, requiring a very large building, was a status symbol, found at Audley End, Blickling Hall, and others. By the end of the Elizabethan period this sprawling style, essentially developing the form of late medieval buildings like Knole in Kent (which has a total of 7 courtyards), and many Oxbridge colleges, was giving way to more compact high-rising structures with a coherent and dramatic structural plan, making the whole form of the building visible from outside the house. Hardwick Hall, Burghley House, and on a smaller scale Wollaton Hall, exemplify this trend. The outer exteriors of the house are more decorated than internal exteriors such as courtyards, the reverse of the usual priority in medieval houses. The common E- and H-shaped plans, and in effect incorporating an imposing gatehouse into the main facade, rather than placing it on the far side of an initial courtyard, increased the visibility of the most grandly decorated parts of the exterior.
The classical orders were often used as decoration, piled up one above the other on the storeys over the main entrance. But, with a few exceptions such as Kirby Hall, columns were restricted to such individual features; in other buildings, such as the Bodleian Library, similar "Towers of the Five Orders" sit at the centre of frankly Gothic facades. At Longleat and Wollaton shallow pilasters are used across the facades. A crib-book, The First and Chief Grounds of Architecture by John Shute (1563) had been commissioned or sponsored by "Protector Somerset", John Dudley, 1st Duke of Northumberland, and is recorded in the libraries of many important clients of buildings, along with Sebastiano Serlio's Architettura, initially in Italian or another language until 1611, when Robert Peake published four of the volumes in English. The heavily illustrated books on ornament by the Netherlander Hans Vredeman de Vries (1560s onwards) and German Wendel Dietterlin (1598) supplied much of the Northern Mannerist decorative detail such as strapwork. It is evident from surviving letters that courtiers took a keen and competitive interest in architectural matters.
Interiors
Inside, most houses still had a large hall in the medieval style, often with a stone or wood screen at one end. But this was only used for eating in by the servants, except on special occasions. The main room for the family to eat and live in was the great chamber, usually on the first floor (above the ground floor), a continuation of late medieval developments. In the 16th century a withdrawing room was usually added between the great chamber and the principal bedroom, as well as the long gallery. The parlour was another name for a more private room, and increasingly there were a number of these in larger houses, where the immediate family would now usually eat, and where they might retreat entirely in cold weather. Although the first modern corridor in England was probably built in this period, in 1579, they remained rare, and houses continued to have most rooms only accessible through other rooms, with the most intimate spaces of the family at the end of a suite.
Staircases became wide and elaborate, and normally made of oak; Burghley and Hardwick are exceptions using stone. The new concept of a large long gallery was an important space, and many houses had spaces for entertaining on the top floor, whether small rooms in towers on the roof, or the very large top-floor rooms at Hardwick and Wollaton. Meanwhile, the servants lived on the ground floor. This might be seen as a lingering memory of the medieval castle, where domestic spaces were often placed high above the soldiery, and viewpoints were highly functional, and is a feature rarely found in subsequent large houses for two centuries or more. At Hardwick the windows increase in size as the storeys rise up, reflecting the increasing status of the rooms. In several houses the mostly flat roof itself was part of the reception spaces, with banqueting houses in the towers that were only accessible from "the leads", and a layout that allowed walking around to admire the views.
Architects
The designers are often unclear, and the leading figures had a background in one of the specialisms of building. Sometimes owners played a part in the detailed design, though the age of the gentleman amateur architect mostly came later. Few original drawings survive, though there are some by the architect-mason Robert Smythson (1535–1614) who was an important figure; many houses at least show his influence. Robert Lyminge was in charge of Hatfield and Blickling. John Thorpe laid the foundation stone of Kirby Hall as a five-year old (his father was chief mason, and children were often asked to perform this ritual) and is associated with Charlton House, Longford Castle, Condover Hall and the original Holland House, and perhaps Rushton Hall and Audley End. The demand for skilled senior builders, able to design and manage projects or parts of them, exceeded supply, and, at least in the largest houses, they appear to have been usually given a great deal of freedom in deciding the actual design by their mainly absentee clients.
History
The first "prodigy house" might be said to be Henry VII's Richmond Palace, completed in 1501 but now destroyed, although as a royal palace it does not strictly fit the definition. Hampton Court Palace, built by Cardinal Wolsey but taken over by the king on his fall, is certainly an example. The trend continued through the reigns of Henry VIII, Elizabeth, and into the reign of James I, when it reached its height. Henry was a prolific builder himself, though little of his work survives, but the prudent Elizabeth (like her siblings) built nothing herself, instead encouraging her courtiers to "...build on a scale which in the past would have been seen as a dynastic threat."
Others see the original Somerset House in the Strand, London as the first prodigy house, or at least the first English attempt at a thoroughly and consistently classical style. With some other Châteaux of the Loire Valley, the Château de Chambord of François I of France (built 1519–1547) had many features of the English houses, and certainly influenced Henry VIII's Nonsuch Palace.
Important political families such as the Cecils and Bacons were serial builders of houses. These newly-risen families were typically the most frenetic builders. Sites were chosen for their potential convenience for royal progresses, rather than being the centre of landholdings, which were looked after by agents, or any local political powerbase.
The term prodigy house ceases to be used for houses built after about 1620. Despite some features of more strictly classical houses like Wilton House (rebuilding begun 1630) continuing those of the prodigy house, the term is not used of them. Much later houses like Houghton Hall and Blenheim Palace show a lingering fondness for elements of the 16th-century prodigy style.
In the 19th century Jacobethan revivals began, most spectacularly at Harlaxton Manor, which Anthony Salvin began in 1837. This manages to impart a Baroque swagger to the Northern Mannerist vocabulary. Mentmore Towers, by Joseph Paxton, is an enormous revival of a Smythson-type style, and like Westonbirt House (Lewis Vulliamy, 1860s) and Highclere Castle (by Sir Charles Barry 1839–42, used for filming Downton Abbey), is something of an inflated Wollaton. The royal Sandringham House in Norfolk includes prodigy elements in its mixed styles. Apart from private houses, elements of the prodigy style were popular for at least the exteriors of all other types of public buildings, and office buildings designed to impress.
Many of the houses were later demolished, in the English Civil War or other times, and many smothered by later rebuilding. But the period retained a prestige, especially for families who rose to prominence during it, and in many the exteriors at least were largely retained. The north fronts of The Vyne and Lyme Park are examples of a slightly incongruous mixture of the Elizabethan and Palladian in a single facade.
Criticism
The houses attracted criticism from the first, surprisingly often from their owners. The flattering poem To Penshurst by Ben Jonson (1616), contrasts Penshurst Place, a large and important late medieval house that was extended in a similar style under Elizabeth, with prodigy houses:
Thou art not, Penshurst, built to envious show,
Of touch or marble; nor canst boast a row
Of polished pillars, or a roof of gold;
Thou hast no lantern, whereof tales are told,
Or stair, or courts; but stand’st an ancient pile, ...
And though thy walls be of the country stone,
They’re reared with no man’s ruin, no man’s groan;
There’s none that dwell about them wish them down;
But all come in, the farmer and the clown, ...
Now, Penshurst, they that will proportion thee
With other edifices, when they see
Those proud, ambitious heaps, and nothing else,
May say their lords have built, but thy lord dwells.
As new fashions in architecture took over, the prodigy houses came to seem old-fashioned, and by the standards of Palladian architecture often over-fussy and over-decorated. Even though the style was being revived in his time, in 1905 the American architectural historian Charles Herbert Moore held that: "While one great house of the period differs from another in unimportant ways, those in which ornaments are extensively applied are without exception disfigured by them. The Elizabethan architectural ornamentation is at once pretentious and grotesquely ugly." In particular "few are more tasteless and pretentious than Woolaton Hall", which he analyses.
Alternatives
Though the style became dominant for very large houses from around 1570, there were alternatives. At Kenilworth Castle, Robert Dudley, 1st Earl of Leicester did not want to lose the historic royal associations of his building, and from 1563 modernised and extended it to harmonize the old and new, though the expanses of glass still impressed Midlanders. Bolsover Castle, Broughton Castle, Haddon Hall and Carew Castle in Wales were other sympathetic expansions of a medieval castle. The vernacular half-timbered style retained some popularity for gentry houses like Speke Hall and Little Moreton Hall, mostly in areas short of good building stone.
Earlier, Compton Wynyates (begun c. 1481, greatly extended 1515–1525) was a resolutely unsymmetrical jumble of essentially medieval styles, including prominent half-timbering on the gables of the facade. It also nestles in a hollow, as medieval houses often did, avoiding the worst of the wind. In contrast, prodigy houses, like castles before them, often deliberately chose exposed sites where they could command the landscape (Wollaton, Hardwick); their owners mostly did not anticipate being there in winter.
Examples
Essentially intact
(especially on the exterior)
Burghley House, Cambridgeshire
Longleat House, Wiltshire
Hatfield House, Hertfordshire
Wollaton Hall, Nottingham
Hardwick Hall, Derbyshire
Longford Castle, Wiltshire
Castle Ashby House, Northamptonshire
Montacute House, Somerset
Bramshill House, Hampshire
Aston Hall, Birmingham
Charlton Park, Wiltshire
Barrington Court, Somerset, early Elizabethan E plan
Astley Hall, Chorley, Lancashire
Doddington Hall, Lincolnshire
Fountains Hall, North Yorkshire, built with stone from Fountains Abbey next door
Charlton House, London, relatively modest, to house James I's young son
East Barsham Manor, Norfolk
Burton Constable Hall, Yorkshire (exterior)
Early Henrician examples
Hampton Court Palace
Hengrave Hall, Suffolk
Sutton Place, Surrey
Part-destroyed
Audley End, Essex, part destroyed
Kirby Hall, Northamptonshire, part destroyed shell
Layer Marney Hall, Essex, Henrician and only ever part-built
Berry Pomeroy, Devon, Built by the Seymours but never completed
Now destroyed
Nonsuch Palace, Surrey, a royal palace of Henry VIII, now destroyed
Theobalds House
Holdenby House
Old Gorhambury House, Hertfordshire
Worksop Manor
Rocksavage, Cheshire
Wimbledon House
Oxwich Castle, West Glamorgan, substantial ruins remain
Notes
References
For individual houses, see Airs, Jenkins, Norwich, and of course the Pevsner Architectural Guides
Airs, Malcolm, The Buildings of Britain, A Guide and Gazetteer, Tudor and Jacobean, 1982, Barrie & Jenkins (London)
Barbagli, Marzio, Kertzer, David I. (eds.), The History of the European Family: Family life in early modern times (1500–1789), The History of the European Family, 2001, Yale University Press, 9780300094947, 9780300089714, google books
Esher, Lionel, The Glory of the English House, 1991, Barrie and Jenkins
Girouard, Mark, Life in the English Country House: A Social and Architectural History, 1978, Yale, Penguin etc.
Jenkins, Simon, England's Thousand Best Houses, 2003, Allen Lane
Mooney, Barbara Burlison, Prodigy Houses of Virginia: Architecture and the Native Elite, 2008, University of Virginia Press
Musson, Jeremy, How to Read a Country House, 2005, Ebury Press
Norwich, John Julius, The Architecture of Southern England, Macmillan, London, 1985
Ridley, Jasper, A Brief History of the Tudor Age, 2002, Hachette UK, 2013 ed., 9781472107954, google books
Song, Eric B., Dominion Undeserved: Milton and the Perils of Creation, 2013, Cornell University Press
Strong, Roy, The Spirit of Britain, 1999, Hutchison, London
Summerson (1980), Summerson, John, The Classical Language of Architecture, 1980 edition, Thames and Hudson World of Art series
Summerson (1993), Summerson, John, Architecture in Britain, 1530 to 1830, 1993 edition, Yale University Press Pelican History of Art, Yale University Press, 9780300058864
Williams, Penny, The Later Tudors: England, 1547–1603, Volume 2 of The New Oxford history of England, 1998 revised edition, Oxford University Press
Further reading
Mark Girouard: Montacute House, Somerset (1964); Robert Smythson and the Architecture of the Elizabethan Era (1966); Hardwick Hall (1976); Robert Smythson and the Elizabethan Country House (1983); Elizabethan Architecture: Its Rise and Fall, 1540–1640 (2009)
Architectural history
16th-century architecture in England
Elizabethan architecture
Tudor architecture
Jacobean architecture
17th-century architecture in England | Prodigy house | Engineering | 3,893 |
52,338,899 | https://en.wikipedia.org/wiki/Self-making%20bed | A self-making bed (also known as a smart bed) is designed to automatically rearrange the bedding on a bed and prepare itself for use.
History
In 2008, inventor Enrico Berruti featured his self-making bed, dubbed "Selfy", at The International Exhibition of Inventions in Geneva, Switzerland. The bed makes itself by stretching and smoothing the sheets over the mattress by using metal rails that connect to the bed sheets alongside the bed.
In 2017, the company Smartduvet released a fabric layer that makes the bed through a network of air chambers. It is a breathable layer made of lightweight material; when activated, it inflates the air chambers, pulling the duvet and sheets back into position. Using an app, the user can preset a different bed-making time for each day of the week. It does not replace the existing bed and is non-permanent, so it can be used with an existing duvet and duvet cover.
See also
Bed-making
Smart home
References
Beds
Domestic life
Home automation | Self-making bed | Technology,Biology | 215 |
842,479 | https://en.wikipedia.org/wiki/Prestel | Prestel was the brand name of a videotex service launched in the UK in 1979 by Post Office Telecommunications, a division of the British Post Office. It had around 95,500 attached terminals at its peak, and was a forerunner of the internet-based online services developed in the late 20th and early 21st centuries. Prestel was discontinued in 1994 and its assets sold by British Telecom to a company consortium.
A subscriber to Prestel used an adapted TV set with a keypad or keyboard, a dedicated terminal, or a microcomputer to interact with a central database via an ordinary phoneline. Prestel offered hundreds of thousands of pages of general and specialised information, ranging from consumer advice to financial data, as well as services such as home banking, online shopping, travel booking, telesoftware, and messaging.
In September 1982, to mark Information Technology Year, the Royal Mail issued two commemorative stamps, one of which featured a Prestel TV set and keyboard.
In April 1984, British Telecom won a Queen's Award for Technological Achievement for the development of Prestel.
History
Invention and development
In 1970, Samuel Fedida, a research engineer who had worked at English Electric and a US consultancy company, joined the Post Office as head of the Computer Applications Research Division. Within a year, he had completed the initial design of a viewdata system (the generic term in use at the time) for the general public: it would comprise information stored on a central computer accessed over the public phone network using modified televisions as terminals. By early 1973, the Post Office had decided to develop an experimental system, and was working with the BBC, the Independent Broadcasting Authority, and standards organisations to develop compatible standards for teletext and viewdata. During 1974, it decided to commercialise the viewdata concept.
Pilot trial
The first public demonstration of viewdata took place in London in 1975 during Eurocomp, the European Computing Conference on Communications Networks, where Fedida presented a paper on the technology and the potential appeal, as the Post Office saw it, of a public interactive information service.
Further demonstrations followed, and based on the favourable reactions of TV manufacturers and potential providers of information and services, the Post Office decided to run a pilot trial. It also agreed with potential information providers (IPs) that it would not select IPs or exert editorial control over what they put on the system.
The two-year pilot service began in January 1976. By mid-1977, IPs included the Consumers' Association, the British Farm Produce Council, British Rail, London Transport, the Open University, the London Stock Exchange, the Institute for Scientific Information, and National Giro. Interviewed by The Times, Fedida was quoted as saying that the Post Office saw viewdata playing several roles: as a "centralised information source", an "intelligent interface" to specialised scientific and technical data, a "communication machine" for passing messages, a personal information store, a new information distribution medium, a "channel for education in the home", and as providing an "advanced calculator service".
Test service
After some delay, the Post Office launched a test service of Prestel, as it was now called, in October 1978. At the end of December, there were 95,500 information pages, growing at a rate of 3,500 per week, and just over 300 users, increasing by 30–50 per week.
Commercial launch
In March 1979, the Post Office opened a limited "London Residential Service" for subscribers in the capital. The full commercial service launched in September 1979; the director of Prestel stated that there were over 130,000 pages in the database and 1363 "sets" connected to the system at the start of that month.
By February 1980, there were 131 IPs and 116 sub-IPs. The Post Office categorised the IPs as follows: national and local newspaper groups; magazine and other publishing groups; central government departments, and other agencies (such as the British Tourist Authority and the British Library); nationalised industries (including British Airways, Sealink, and British Rail), and companies in other fields of business, such as banks and travel agencies; new companies set up to exploit the viewdata medium, and those expanding from an existing base of online services, such as Reuters; associations; software companies; and miscellaneous.
Particularly popular were the travel-oriented nationalised industries; new companies, such as Fintel; and the Consumers' Association. Overall, popular topics included games, quizzes, jokes, and horoscopes; the Stock Market, company information, and business news; travel and holiday information; national news, sports, and "What's On" locally; cars; and consumer advice. This was reflected in advertisements for Prestel.
Writing in the winter 1980/81 issue of British Telecom Journal, Prestel's public relations manager stated there were over 7,500 sets attached to the system, 170,000 frames in use, and more than 400 IPs and sub-IPs. By the end of 1981, according to Butler Cox, a management consultancy, Prestel had 2,000 residential and 11,000 business users, with 14,000 "terminals" in use. The service was within local call reach of 62% of phone subscribers in Britain. IPs numbered 153, with 593 sub-IPs. Users accessed 190,000 frames per day, and the average time on the system, for each user per day, was 9 minutes. There were 193,000 frames available, including 2,000 response frames.
March 1982 saw the launch of the Prestel Gateway service. This enabled users to connect, via the Prestel network, to external computers operated by IPs or other companies. Travel agents, for example, used Gateway to connect to tour operators' systems and make reservations.
User charges
At the launch of the commercial service in September 1979, and in addition to phone charges, users were charged 3p per minute online to Prestel from 8 am to 6 pm Monday to Friday, and 3p for three minutes at other times. Installing a jack cost £13, with a quarterly rental of 50p. Business users paid an additional standing charge (i.e., a flat charge regardless of usage) of £12 per quarter.
By October 1982, the online usage charge had risen to 5p per minute (8 am to 6 pm Monday to Friday and also 8 am to 1 pm on Saturdays, free at other times), the business standing charge to £15 per quarter, residential users now paid £5 per quarter, and jack installation cost "from £15", with a 15p quarterly rental fee.
Growth
From September 1986, under page *656#, Prestel's publicity department published a "Factframe" showing, at the end of each month, the average number of terminals attached and the respective percentages in businesses and in homes; the number of frames available and the number of frame accesses per week; and the number of messages sent per week. Actual subscriber figures were not published; Thomas et al. (1992) suggest these were "significantly less" than the number of terminals, as "businesses were assumed to 'attach' more than one terminal", and note that British Telecom stopped publishing figures at the end of 1988.
In September 1982, The Times reported there were 18,000 users, of whom 3,000 were residential. Noting that British Telecom had originally forecast 50,000 users at this point, the report went on to outline a new approach to attracting them, quoting senior managers from British Telecom and the head of a joint venture. The plans involved the introduction of a home banking service; the marketing of a Prestel adaptor for computer terminals to the business and higher education sectors; and the launch of Micronet 800, a service for microcomputer users.
Six months later, in February 1983, the same newspaper recorded 22,400 users, of whom 15% were residential, writing that the future of Prestel "could be in doubt by 1985 if it is not approaching profitability."
In mid-1984, the UK Department of Trade and Industry issued a booklet stating that the availability of travel information, the launch of Micronet 800, and the provision nationwide of the messaging service, Mailbox, had contributed to a rise to 45,000 attached terminals by June of that year. 61% were in businesses, and 39% in homes. In that month, on average, the Prestel database contained 320,000 frames that were accessed 14.6 million times. 17 Prestel Gateways to external computers were in operation. For July, the Butler Cox consultancy recorded 47,000 users (60% business, 40% residential), and a total of 1,200 IPs and sub-IPs.
After another year, in mid-1985, The Times stated there were 53,000 "terminals, adapted televisions, microcomputers or specially designed units" attached to Prestel, with residential users now accounting for 45% of the total. In the reporter's view, this represented "a change of fortune for [a service] deemed commercially dubious by many commentators." The figure of 65,000 was reached at the beginning of 1986; about a third were Micronet 800 subscribers. Prestel had reportedly traded at a profit from the previous October onwards. Commenting in September 1986 on what it referred to as "only 70,000 users ... growing at a rate of ... a few hundred customers a week", The Times declared that Prestel "had failed to live up to expectations", comparing it unfavourably to the French Minitel videotex service and to British Telecom's own Telecom Gold electronic mail service.
Writing in The Guardian just before Christmas 1988, Jack Schofield reported that Prestel "had become reclusive" about user numbers, with the Factframe, "[a]fter prompting, ... finally updated this summer ... claim[ing] 90,000 users", while the figure of "only 75,000" was being quoted by the British Telecom manager responsible for the service. In January 1989, drawing on what turned out to be the final Factframe, published at the end of 1988, Schofield wrote that "After ten years, [Prestel] has yet to achieve the number of users it expected to get in its first year", quoting a figure of 95,460 terminals attached. This was the highest figure claimed during the lifetime of Prestel.
Decline
In October 1991, British Telecom closed Micronet 800, stating, in a letter to customers, that "With over 10,000 members, Micronet is easily the largest online service in the UK specialising in microcomputing. However, it is still not large enough to enable us to maintain a cost-effective service and provide the extra facilities requested by our customers." Membership had decreased from a peak of around 20,000. The Guardian attributed this to the introduction by British Telecom of an off-peak Prestel time-charge in mid-1988, discouraging the use of Micronet's popular "Chatline" service. The Times agreed, and also pointed to a steep rise in subscription charges, opining that "BT's failure to provide even this committed group with an economic ... service means that Prestel is destined ... for businesses." The closure in April 1991 of Homelink, the home banking service launched in 1983 by the Nottingham Building Society, also contributed to shrinking the number of Prestel subscribers.
During 1991, Prestel was closed to residential users. Towards the end of 1993, it was reported that British Telecom was planning to close Prestel altogether: according to the company, of the around 35,000 subscribers at that point, only some 2,500 used the service regularly.
Closure
British Telecom closed Prestel in early 1994, selling it to a consortium. It was rebranded as "New Prestel", focusing on the provision of financial data to businesses. In mid-1996, New Prestel transferred to the Web, becoming the Internet service provider (ISP) "Prestel On-line".
In 1999, the financial data component of Prestel On-line was bought by the company Financial Express to become "Financial Express Prestel". The service component merged with the ISP Demon Internet, which ran a "Prestel Internet Service". This closed in 2002.
Database
Pages and frames
Information on Prestel was held in a database of "pages". Each page corresponded to a screenful of information, and had a unique number up to nine digits long. A page could have up to 26 sub-pages, with each sub-page labelled with a letter from "a" to "z". A sub-page was called a "frame": the page itself was frame "a". Neither pages nor frames could scroll.
Each frame had 24 lines of 40 characters each, like the display format used by the Ceefax and ORACLE teletext services. The top line showed the name of the Information Provider (IP), the page number, and the price. The bottom line was reserved for system messages, leaving 22 lines available for the IP to present information to the user.
An IP rented a three-digit number as its master page. For example, the Meteorological Office's was 209, and the numbers identifying all its pages began with these digits, such as 20971, the page for "Aviation forecasts".
Single- and double-digit pages were reserved by Prestel for system information purposes, such as page 1, which showed the main index. Pages starting with 9 were for account and other system management functions: page 92, for example, showed details of a Prestel user's bill.
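As a rough illustration of the addressing scheme described above (a hypothetical sketch, not Prestel's actual software), the following Python models a frame with its 24-by-40 display grid, the 'a' to 'z' frame labels, and the rule that an IP's pages all begin with its three-digit master-page number.

import string

class Frame:
    ROWS, COLS = 24, 40                    # teletext-style display grid
    def __init__(self, page_number: str, label: str = "a"):
        assert page_number.isdigit() and 1 <= len(page_number) <= 9
        assert label in string.ascii_lowercase
        self.page_number = page_number
        self.label = label                 # frame "a" is the page itself
        # line 1 (IP name, page number, price) and line 24 (system messages)
        # are reserved, leaving 22 lines for the IP's information
        self.body = [" " * self.COLS] * (self.ROWS - 2)

    @property
    def address(self) -> str:
        return self.page_number + self.label

def owned_by(page_number: str, master_page: str) -> bool:
    # an IP's pages all share its three-digit master-page prefix
    return page_number.startswith(master_page)

aviation = Frame("20971")                  # the Met Office example from the text
print(aviation.address)                    # -> 20971a
print(owned_by("20971", "209"))            # -> True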
When preparing and editing a page, an IP could use upper- and lower-case letters, digits, punctuation marks, a few arithmetic symbols, and a set of "mosaic characters" for composing rudimentary graphics. By embedding cursor-control characters in the page, simple animations could be produced by rewriting parts of the screen already displayed. These were called "dynamic frames".
The IP's name on line 1 occupied at least 43 bytes, depending on the number of control characters involved, so the space available for the IP's data on-screen was a maximum of 877 characters. A line could occupy all forty of the character positions available, or be terminated early with a control character. Each control character consumed two bytes, so the more complex the page, the less information could be shown.
Most frames were set up to provide information. Other types were for messaging, or provided a gateway to other computer-based services. A "follow-on" type could also be specified: this caused the following frame to be automatically displayed as soon as the current frame had finished being transmitted. For dynamic frames, this provided a way to continue animations that could not fit within the number of characters available in one frame alone.
This follow-on frame facility was also used for telesoftware, enabling computer programs, such as those for the BBC Micro, to be downloaded from Prestel.
Links
A page could be directly linked to up to ten other pages by specifying, during editing, the number of the page whose content would be displayed when a user pressed a digit from 0 to 9 on their keypad or keyboard. Double-digit links, such as "56", were achieved by linking the first digit to an intermediate, stepping-stone frame on the IP's database: this, in turn, connected the second digit to the target page.
The content of pages ranged between two poles: at one, a menu listing the topics available and the number to key to reach them, with no, or minimal, further information (referred to as an "index page"); and at the other, a screenful of information with few, if any, links to other pages (an "information page"). According to Rex Winsbury, a media journalist and editorial director of Fintel, a major IP, as experience with the viewdata medium grew, IPs "gave information on all or most pages, simply varying the amount according to the number of routings [links] that have to be given as well."
Structures
When the public Prestel service began in 1979, a user connecting to the system was presented with the main index page. As they made and keyed successive menu choices, they moved down a subject hierarchy, from the general to the specific, to finish with the information page they sought. The Post Office, academics, and the media referred to this hierarchical database arrangement as a tree structure or "inverted tree".
Though simple in theory, in practice this structure could lead a user to a dead end: they might find that how a subject was described in a menu did not match what they saw on the final destination page, or formed only part of what they were looking for, or provided information without the means to look up related material. Going back through the sequence of menu choices (using the *# command) to try another series of links was limited to three steps in all.
As Prestel developed, IPs accommodated the particularities of the different types of information and services they provided, and the expectations of their users, through the extensive use of backlinks and crosslinks between their pages. This resulted in a variety of database structures that acquired labels such as cartwheels, ring-of-rings, Chinese lanterns and lobster-pots to help visualise how pages were connected.
Navigation and search
There were three basic navigation commands:
*number# took the user directly to the first frame of the page number specified: for example, *5052# displayed the contents of 5052a onscreen;
# moved the user successively forward through the frames: 5052b, 5052c, and so on;
*# returned the user to the previous page in strict sequence, and could be repeated three times.
Keyword access was introduced in 1987, with *keyword# taking the user directly to the subject (or subject index) specified.
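To make the keying grammar concrete, here is a minimal, hypothetical interpreter for the commands above; it is only an illustration of the syntax, not the behaviour of the real Prestel software, and the keyword table shown is invented.

def interpret(keys: str, keywords: dict) -> str:
    # keys is the sequence the user typed on the keypad or keyboard
    if keys == "#":
        return "display the next frame in sequence"
    if keys == "*#":
        return "return to the previous page (repeatable up to three times)"
    if keys.startswith("*") and keys.endswith("#") and len(keys) > 2:
        target = keys[1:-1]
        if target.isdigit():
            return f"jump to page {target}, frame a"
        page = keywords.get(target)        # keyword access, from 1987
        return f"jump to page {page}" if page else "unknown keyword"
    return "invalid key sequence"

keywords = {"weather": "209"}              # invented example entry
print(interpret("*5052#", keywords))       # -> jump to page 5052, frame a
print(interpret("#", keywords))            # -> display the next frame in sequence
print(interpret("*weather#", keywords))    # -> jump to page 209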
A topic index, updated daily, was published on page 199, and an IP index on page 198. A printed A–Z directory of the topics available on Prestel, with the appropriate page number to key, was sent to new users. From 1987, the topic names could also be used as keywords. Every two months, users were sent a magazine, Connexions, that included an updated directory, and the directory was also incorporated into the quarterly Prestel Business Directory created by the Financial Times. Micronet 800, an IP, visualised the relationships between its pages in a London Tube-style schematic map as part of a guide for users.
Information providers
There were two types of information provider (IP): main IPs, and sub-IPs.
Charges
Page rental
A main IP rented pages from the Post Office (initially) or British Telecom (later), and controlled a three-digit master-page in the database. In 1982, this cost an annual £5,500 for a basic package, equivalent to around £29,000 in 2021.
The basic package included 100 frames; the ability to enter and amend information, retrieve response frames, and store 10 completed response frames; staff training in editing (a two-day seminar), and a copy of the IP editing manual; and, if required, bulk update facilities and an annual print-out of frames in use. Additional frames were available, in batches of 500, for £500 a year (over £2,600 in 2021), while using "Closed User Groups" (CUGs) and the sub-IP facility each cost £250 annually (over £1,300 in 2021).
Sub-IPs, those with smaller requirements or budgets, rented pages from a main IP. A main IP could rent out pages at the market rate; such IPs were known as "umbrella" IPs. Sub-IPs paid a per-minute charge for editing online: in 1982, this was 8p per minute from Monday to Friday between 8 am and 6 pm, and 8p per four-minute block at all other times. Sub-IPs had a four-digit (or more) master-page within a main IP's area. Generally speaking, they could only edit existing pages, and were not able to create or delete them.
Prestel Gateway
The cost to an IP of connecting an external computer to the Prestel system varied according to the number of simultaneous users required, the distance between Prestel and the IP's computer, and whether the connection was made using a private line or via the PSS packet-switched network. There were also time and data-volume charges. Other factors to be taken into account included the traffic pattern (i.e., the expected volume and frequency of data flows), the response time required (as perceived by a user), the size of the database to be accessed, and the changeability of the information stored.
In 1985, British Telecom estimated that for an IP using a typical minicomputer (such as the PDP-11) located 100 km from London and handling up to 10 users simultaneously at peak times, the one-off software set-up cost would be at least £16,000, communication costs would range from £4,280 to £5,550 a year (depending on the type of connection), and Prestel usage would cost £8,600 a year.
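Taken at face value, those 1985 figures imply the following totals for such an IP; this is simple arithmetic on the numbers quoted above, not a British Telecom pricing formula.

setup = 16_000                        # one-off software set-up, "at least"
comms_low, comms_high = 4_280, 5_550  # annual communication costs, by connection type
usage = 8_600                         # annual Prestel usage

first_year = (setup + comms_low + usage, setup + comms_high + usage)
later_years = (comms_low + usage, comms_high + usage)

print(f"first year:  £{first_year[0]:,} to £{first_year[1]:,}")    # £28,880 to £30,150
print(f"later years: £{later_years[0]:,} to £{later_years[1]:,}")  # £12,880 to £14,150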
Relationships
Several typical relationships developed between umbrella IPs and their sub-IP clients. A sub-IP could be:
An independent supplier of information, with exclusive or partial editorial control and full or partial editing rights.
An organisation making information available to an IP, sometimes on a royalty basis.
An organisation advertising on an IP's pages.
An individual authoring articles or columns for an IP, usually on a royalty basis.
In addition, the IP Micronet 800 used the sub-IP facility to offer the "Gallery" service, where a group, club, or individual could rent one or a number of frames cheaply, and for short periods if required.
An analysis in 1981 of the pros and cons of using an umbrella IP to publish information on Prestel concluded that if the owner of the information needed less than 500 frames, it would be cheaper to use an umbrella IP, but if over 5000, this would be more expensive than doing it themselves. In between these two figures, speed, convenience, and the need for design skills favoured using an IP, while going it alone assured confidentiality and provided more control.
Editing pages
There were two ways to edit pages: directly, by creating or amending them using special editing keyboards while connected online to the main Update Computer; or offline, creating pages locally and uploading them in bulk. Bulk update required that pages be created offline using editing terminals that could store pages, or by using microcomputers. The pages were then either transmitted to the Update Computer online as a batch via a special dialup port and protocol, or sent on magnetic tape to the Update Centre (UDC), where they were uploaded.
Using the online editor, IPs were also able to view information about a page hidden from ordinary users, such as the time and date of its last update, whether the frame was in a Closed User Group (CUG), the price-to-view (if any), and the "frame count", the number of times the frame had been accessed.
IPs and sub-IPs accessed the Edit computer using their normal ID and password, but had a separate password to access the editing facility. Bulk uploads only required the edit password and the IP's account number.
Information and services
Prestel's pre-launch promotional material focused on the general public. When the service launched in late 1979, Post Office Telecommunications took a hands-off approach towards managing whatever IPs placed on the system. This changed in early 1980, when British Telecom (its successor) started targeting the business, professional and hobbyist markets via joint ventures with companies and organisations with specialised expertise.
By the mid-1980s, the specialised services on Prestel included:
Prestel CitiService, involving the London Stock Exchange and ICV Information Systems, targeted three groups: the business community as a whole, with mainly company information; private investors in a closed user group, offering regularly updated share prices; and for brokers and other investment professionals, continuously updated share prices, also in a closed user group.
British Telecom Travel Service provided travel agents with information from tour operators, airlines, and other transport operators, and enabled online reservations. Services for other users included flight arrivals and departures, car rental, and exchange rates.
Prestel Farmlink packaged information for farmers from the Ministry of Agriculture, Fisheries and Food, the Meat and Livestock Commission, the Meteorological Office, and others. A link to Prestel CitiService provided farm commodity prices, and farmers could calculate, online, weekly wages and the formulation of feedstuffs.
Banking: the Nottingham Building Society offered Homelink, and the Bank of Scotland HOBS, the Home & Office Banking Service. Subscribers were provided with free or subsidised Prestel terminals.
Prestel Microcomputing offered downloadable software (telesoftware), noticeboards, newsletters, and reviews. It incorporated Micronet 800 from EMAP, Viewfax 258, and Clubspot 810.
Prestel Education targeted schools and colleges, and provided course and careers advice, educational software, and help with using computers.
British Telecom Insurance Services provided financial information to insurance intermediaries and enabled them to get online quotes from major insurance companies.
Prestel Teleshopping was a specialised e-commerce service for the residential market, and involved Littlewoods, Grattan, and Kays Catalogues, among others.
Prestel for Medical Practitioners packaged material from bodies such as the Royal College of General Practitioners, the British Medical Association, and the Department of Health and Social Security with drug data from pharmaceutical companies, information on locum vacancies, conference and training diaries, and research news.
Messaging
Response frames
A "response frame" enabled a user to send a message to an IP using a preformatted page to order goods or services or to transmit data. The user's name and other information needed (such as their address) were automatically added to the frame from their Prestel account details.
Initially, response frames had to be collected by an IP from each IRC in turn; later, they were ingathered at the UDC, where the IP concerned could retrieve them. Eventually, with the introduction of Mailbox, response frames could be retrieved from any IRC.
Mailbox
Prestel Mailbox was launched in 1983. Initially hosted on a computer in London, it was later made available UK-wide.
The entry page for Prestel Mailbox was *7#. This linked to pages where new messages could be composed, stored messages retrieved, and standard, pre-formatted messages completed; many designs were available, including greetings cards and seasonal messages such as valentines.
To prepare a basic message, a blank message page (directly accessible via *77#) was displayed, with the sender's Mailbox number pre-filled and blank fields shown for entering the recipient's number and the message text. As messages could only occupy a single frame, there was space for up to 100 words or so, and fewer if graphics were used. After addressing (with a Mailbox number) and writing the message, the user was offered the choice of keying 1 to send it, or keying 2 to not send it. Successful dispatch led to a confirmation page; if there were problems, such as a mistake in entering the recipient's Mailbox number, an error message was displayed. To send the message to more than one recipient meant re-keying the text into a fresh message frame, although some microcomputers allowed the original message to be stored and then copy-pasted instead.
Prestel Mailbox numbers were based on the last nine digits of a user's phone number. For example, the Mailbox number for Prestel HQ, with the phone number 01-822-2211, was 018222211, while for a user in Manchester with the number 061-228-7878, it was 612287878. In keeping with phone directory practice at the time, Prestel Mailbox numbers were published in a list accessible from page *486#. Ex-directory Mailbox numbers were available on request.
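The numbering rule above is easy to express in code. The following one-function Python sketch (illustrative only) derives a Mailbox number as the last nine digits of a phone number and checks it against the two examples given.

def mailbox_number(phone: str) -> str:
    # keep only the digits, then take the last nine
    digits = "".join(ch for ch in phone if ch.isdigit())
    return digits[-9:]

assert mailbox_number("01-822-2211") == "018222211"    # Prestel HQ
assert mailbox_number("061-228-7878") == "612287878"   # the Manchester example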
When a user connected to Prestel, a Mailbox banner on their Welcome page alerted them if they had any new messages. Similarly, when a user signed off via *90#, a warning would appear if any new messages had arrived in the meantime, with the option to read them before disconnecting. Messages were retrieved from page *930#, where they were presented in chronological order. After reading a new message, a user had to choose between deleting or saving it before the next message was presented. Initially, only three messages could be saved at a time; these stored messages were accessible via page *931#.
Using this first version of the basic Mailbox service was free of charge.
Telex Link
Prestel Mailbox was extended in 1984 to give access to the Telex service via "Telex Link". On *8#, the Telex Link entry page, a message could be composed, the destination country chosen, and the telex number entered before sending the telex like a standard message. Telex Link added the necessary telex codes and tried to send the message several times before confirming receipt (or failure) via Mailbox.
A telex could be sent to a Mailbox user from any telex terminal by using 295141 TXLINK G, the Telex Link number, as the telex address, and entering "MBX", followed by the Prestel user's Mailbox number, as the first line of the telex. An incoming telex appeared to the Prestel recipient as an ordinary Mailbox message, with the telex number of the sender added at the top of the screen.
Sending a telex cost 50p for UK destinations, £1.00 for Europe, £2.00 for North America, £3.00 for elsewhere in the world, and £5.00 for sending to ships (via INMARSAT). There was no charge for receiving one.
Telex Link was upgraded in 1987, with connections to more telex lines and faster delivery times, and its address changed to 934999 TXLINK G.
Mailbox upgrade
A new messaging system was introduced in July 1989. This enabled messages up to five frames long, storing messages before sending, sending to multiple recipients (either individually or via a mailing list), message forwarding, and acknowledgment of receipt.
Sending a basic message without using any of these new facilities remained free: all the new options were charged at 1p per use per recipient. For the first time, sending spam was permitted at a cost of 20p per message per recipient. In addition, the stored message facility was replaced by a summary page listing all the messages, both new and old, that were waiting: the user could then pick which message to view, rather than needing to read through them in chronological order.
Message statistics
By 1984, Prestel users were sending messages at the rate of around 71,000 per month via a computer in London. In September 1985, after Mailbox became a national service, the chief executive of the part of British Telecom responsible for Prestel stated that 100,000 "electronic mail messages" were being sent each week, with 60,000 terminals attached to the system.
Hack
A security breach of the Prestel mailbox of Prince Philip, Duke of Edinburgh occurred in November 1984 as part of a wider hack of Prestel.
Infrastructure
Terminals
During the development phase of Prestel, British Telecom's research department produced a Prestel terminal specification. This formed the basis of design and type-approval discussions with, initially, manufacturers of TVs, and later with suppliers of other forms of terminal.
Several types of Prestel terminal were produced:
integrated residential terminals, typically based on television sets;
integrated business terminals;
adaptors for television sets;
adaptors for microcomputers, with associated or standalone editing software;
editing terminals.
Network
Configuration and growth
In March 1979, the Post Office launched a limited "London Residential Service" for subscribers in the capital. This was based on the computer used in an earlier test phase to both store the Prestel database and enable IPs to make updates to their pages.
When the full commercial service launched in September 1979, three new computer centres were opened in London. Two, known as Byron and Juniper, were "Information Retrieval Centres" (IRCs): their computers each contained a copy of the Prestel database, and were accessible by users. The third, Duke, was Prestel's "Update Centre" (UDC): IPs used this to create, modify or delete their pages, with their updates sent to the IRCs. A fourth IRC, Dickens, opened in Birmingham in December.
IRCs were connected to the UDC in a star network configuration using leased-line connections (based on the X.25 protocol) operating at 2400 baud. This network handled about 2,000 Prestel terminals and provided users with over 160,000 pages supplied by around 130 IPs. By mid-1981, this arrangement had been replaced by dedicated X.25 circuits using the then-new PSS packet-switched network and operating at 4.8 kbit/s.
Each IRC typically housed two information retrieval computers, though some in London had a single machine. IRCs were usually located in major telephone exchanges, rather than data-processing centres, to accommodate the extensive communications equipment needed: exchange buildings could more easily house the large numbers of rack-mounted 1200/75 baud modems and associated cabling required, as well as the GEC multiplexers connecting the modems to the computers.
By June 1980, the network had grown to four individual information-retrieval computers in London, and six others installed in pairs in each of Birmingham, Edinburgh and Manchester, making ten in all.
These ten computers could initially connect to around 1000 user ports, expandable to 2000. At this point, the Prestel database contained about 164,000 pages with expandability to up to 260,000 built in: allowing for system management pages, this arrangement capped the size of the public database at around 250,000 frames.
By September 1980, there were five IRC machines in London and pairs of machines in Birmingham, Nottingham, Edinburgh, Glasgow, Manchester, Liverpool and Belfast, offering a total of 914 user ports. Further IRCs were planned in Luton, Reading, Sevenoaks, Brighton, Leeds, Newcastle, Cardiff, Bristol, Bournemouth, Chelmsford and Norwich by the end of 1980.
By the end of 1980, 1500 user ports were available. By July 1981, the number of IRC computers had grown to 18: this increased the proportion of phone subscribers who could dial-up Prestel at local rates from 30% to 62%. By 1984, the short dialling codes 618 and 918 could be used in most of the UK for access at local call rates.
International access
In late 1981, an IRC called Jefferson opened in Boston, Massachusetts, giving US subscribers access to Prestel via the American Telenet packet-switched network.
Mailbox computer
Mailbox, the Prestel messaging service, was launched on the Enterprise computer, and allowed messaging only between users accessing that machine. By 1984, Mailbox had been rolled out nationwide using a dedicated computer in London known as Pandora.
Hardware
Prestel's computers were based on the GEC 4000 series minicomputer. The main IRC machines were originally model GEC 4082s equipped with 384 Kbyte memory-core stores, six 70 Mbyte hard disk drives, and 100 ports. This set-up accommodated an initial 1500 Prestel users.
Each IRC computer had 208 ports. With eight reserved for testing and control, a computer could support up to 200 simultaneous Prestel users. For the ordinary user, access was via an asynchronous, duplex interface provided by banks of multiplexers. These, in turn, were accessed via standard modems, operating at 1200/75 bit/s, directly connected to the public phone network.
Besides the multiplexers required to support 1200/75 dial-up access, the Update Centre machines were also connected to special modems that handled online bulk updating by IPs. Banks of 300/300 bit/s full-duplex asynchronous V.21 modems supported direct IP-computer-to-Prestel-computer links, while 1200 bit/s half-duplex V.23 modems supported access by IPs using editing terminals that stored frames offline before uploading them. In addition, twin 9-track NRZI tape decks of 800 bytes/inch capacity were provided for bulk offline updates.
Though categorised as a minicomputer, GEC 4000 series machines were large: one occupied several standard computer cabinets each standing high by wide. The CDC 9762 hard disc drives were housed separately in large, stand-alone units about the size of a domestic washing machine. A GEC machine cost over £200,000 at standard prices, in addition to which were the costs of the associated communications equipment. Combining the two to assemble a single IRC was a major undertaking, and took some 15 months from order placement to commissioning.
Software
GEC 4000 series computers could run on several operating systems. The Prestel machines used OS4000, which was developed by GEC and supported BABBAGE, the high-level assembler in which all Prestel software was written.
The pilot-trial system had five core software components: process, process, process, -handler process, and several processes. received data from a Prestel user; accepted characters, one at a time, from and fed them to a ; the key frame-getter fetched a fresh page or the next frame of an already-displayed page from . then displayed a whole frame, preceded by a clear-screen command, to the user.
The commercial service had several important additional functions, including an editing program and bulk update facilities, closed user groups, messages, user billing and IP revenue allocation, optional additional user passwords, error-reporting routines, system manager facilities, and statistics-collecting routines.
In 1987, a Prestel Admin computer was introduced to support the user registration process. It captured a new user's details from the paper Prestel application form, transferred the data to the relevant Prestel computer, and then printed the welcome letter to be sent to the user concerned.
Monitoring
Users' connections to Prestel were monitored by a device known as VAMPIRE (Viewdata Access Monitor and Priority Incident Reporting Equipment). Via private circuits connected to an IRC computer's ports, this produced a continuously updated display on a monitoring screen at the Prestel Regional Centre responsible for an IRC. The screen showed a matrix of small squares, each corresponding to a port on an IRC computer. Free ports were green, occupied ones yellow, incoming calls-to-connect by Prestel users were pale blue, and faulty ports red. In this way, the overall status of an IRC machine could be summarised and seen at a glance.
The response time of the Prestel system was measured by a microcomputer-based device known as PET. This monitored frame retrieval times for users and how quickly frame-editing commands issued by IP editors were implemented. PET operated in conjunction with a hardware performance monitor that recorded central processing unit and disk-drive usage.
Public take-up
Writing in early 1979 about the test service that had launched in October 1978, a Post Office executive concluded that:
While teletext services were provided free of charge as part of regular television broadcasts, Prestel was transmitted via telephone lines to a set-top box, computer, or dedicated terminal: gaining access to the service involved arranging for a Post Office engineer to first install a connection point known as a Jack 96A. Thereafter it was necessary to pay both a monthly subscription and the cost of local telephone calls. On top of this, some content was sold on a paid-for basis: each Prestel page carried a price in the top right-hand corner, and a single page could cost up to 99 pence.
The original idea was to persuade consumers to buy a modified television set with an inbuilt modem and a keypad remote control in order to access the service, but no more than a handful of models were ever marketed, and they were expensive. Eventually set-top boxes became available, and some organisations supplied these as part of their subscription package: for example, branded Tandata terminals were provided by the Nottingham Building Society for its customers, who could make financial transactions via Prestel.
Because the transmission of Prestel over telephone lines did not use an error-correction protocol, it was prone to interference from line-noise, which would result in garbled text. This was particularly problematic with early home modems, which used acoustic couplers.
Regardless of the hardware, Prestel was expensive, and as a result, only gained limited market penetration, with a total of around 90,000 subscribers at its peak. The largest user-groups were Micronet 800 with 20,000 and Prestel Travel with 6,500 subscribers respectively.
Having developed Prestel as a way of maximising telephone line use, the Post Office and subsequently British Telecom provided only the framework for Prestel, delegating the provision of information to information providers. Nevertheless, considerable investment was required in Prestel's infrastructure, though with information providers paying rental charges and users installation and rental fees, the outcome was considered likely to be profitable. A mass public service was envisaged, with considerable public take-up, but a lack of compelling content and services gave domestic users, in particular, the impression that Prestel was something that would cost a lot for relatively little in return. That said, it was predicted that eventually "Prestel – or another viewdata system – will be ubiquitous."
International sales
Prestel software and knowhow was sold to several countries, including Austria, Australia, former West Germany, the then-British colony of Hong Kong, Hungary, Italy, Malaysia, the Netherlands, New Zealand, Singapore, and former Yugoslavia.
See also
Notes
References
Further reading
Examines the development, marketing, and public reception of Prestel within the digital platform economy emerging at the time.
The chapter covering Prestel describes its history in the context of the 1980s boom in Britain of home computing, and has a particular focus on marketing strategies.
Surveys the social history of television as a developing technology and the social history of how what was developed was used; anticipates, from a mid-1970s perspective, potential innovations (including "interactive devices"); and provides an overview of the situation in 1990.
Analyses the evolution of British computer networks and the Internet between 1970 and 1995, with the Prestel chapter focusing on Prestel's communications infrastructure, how this enabled the services offered, and marketing decisions and campaigns.
This multi-authored work covers relations between the interest groups involved in providing a videotex service on Prestel, videotex's impact on the press, editorial issues, economic aspects, and likely technological developments.
Focuses on the practicalities and economics of creating information on Prestel, and on Prestel's relationships with the British newspaper and broader print publishing industry of the late 1970s.
Co-authored by one of Prestel's inventors, Samuel Fedida, this describes the genesis and context of viewdata, the components of the initial Prestel service, and potential developments from the perspective of 1979.
External links
A User's View of Prestel in an archived copy of Creative Computing magazine
Prestel at Celebrating the Viewdata Revolution
Objects catalogued under "telecommunications /Prestel" and "computing & data-processing /Prestel" in the Science Museum Group's collections
Catalogue records with "Prestel" in the title in BT Group's digital archives
BT Group
History of telecommunications in the United Kingdom
Legacy systems
Pre–World Wide Web online services
Videotex | Prestel | Technology | 9,231 |
20,607,025 | https://en.wikipedia.org/wiki/Google%20Native%20Client | Google Native Client (NaCl) is a discontinued sandboxing technology for running either a subset of Intel x86, ARM, or MIPS native code, or a portable executable, in a sandbox. It allows safely running native code from a web browser, independent of the user operating system, allowing web apps to run at near-native speeds, which aligns with Google's plans for ChromeOS. It may also be used for securing browser plugins, and parts of other applications or full applications such as ZeroVM.
To demonstrate the readiness of the technology, on 9 December 2011, Google announced the availability of several new Chrome-only versions of games known for their rich and processor-intensive graphics, including Bastion (no longer supported on the Chrome Web Store). NaCl runs hardware-accelerated 3D graphics (via OpenGL ES 2.0), sandboxed local file storage, dynamic loading, full screen mode, and mouse capture. There were also plans to make NaCl available on handheld devices.
Portable Native Client (PNaCl) is an architecture-independent version. PNaCl apps are compiled ahead-of-time. PNaCl is recommended over NaCl for most use cases. The general concept of NaCl (running native code in web browser) has been implemented before in ActiveX, which, while still in use, has full access to the system (disk, memory, user-interface, registry, etc.). Native Client avoids this issue by using sandboxing.
An alternative by Mozilla was asm.js, which also allows applications written in C or C++ to be compiled to run in the browser and also supports ahead-of-time compilation, but is a subset of JavaScript and hence backwards-compatible with browsers that do not support it directly.
On 12 October 2016, a comment on the Chromium issue tracker indicated that Google's Pepper and Native Client teams had been destaffed. On 30 May 2017, Google announced the deprecation of PNaCl in favor of WebAssembly. Although Google initially planned to remove PNaCl in the first quarter of 2018, and later in the second quarter of 2019, it was removed in June 2022 (together with Chrome Apps).
Overview
Native Client was an open-source project developed by Google. Games such as Quake, XaoS, Battle for Wesnoth, Doom, Lara Croft and the Guardian of Light, From Dust, and MAME, as well as the sound processing system Csound, have been ported to Native Client. Native Client has been available in the Google Chrome web browser since version 14, and has been enabled by default since version 31, when the Portable Native Client (PNaCl, pronounced: pinnacle) was released. Native Client has also been used to safely run downloaded code in software other than web browsers, like in the Dæmon game engine.
An ARM implementation was released in March 2010. x86-64, IA-32, and MIPS were also supported.
To run an application portably under PNaCl, it must be compiled to an architecture-agnostic and stable subset of the LLVM intermediate representation bytecode. The executables are called PNaCl executables (pexes). The PNaCl Toolchain makes .pexe files; NaCl Toolchain .nexe files. The magic number of .nexe files is 0x7F 'E' 'L' 'F', which is ELF. In Chrome, they are translated to architecture-specific executables so that they can be run.
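Since a .nexe is an ELF binary, the quoted magic number is simply the standard four-byte ELF signature, which can be checked directly. The following Python sketch is illustrative only; the file name is a placeholder:

```python
def has_elf_magic(path: str) -> bool:
    # The first four bytes of an ELF file are 0x7F followed by 'E', 'L', 'F'.
    with open(path, "rb") as f:
        return f.read(4) == b"\x7fELF"

# print(has_elf_magic("hello_world.nexe"))  # hypothetical file name
```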
NaCl uses software fault detection and isolation for sandboxing on x86-64 and ARM. The x86-32 implementation of Native Client is notable for its novel sandboxing method, which makes use of the x86 architecture's rarely used segmentation facility. Native Client sets up x86 segments to restrict the memory range that the sandboxed code can access. It uses a code verifier to prevent use of unsafe instructions such as those that perform system calls. To prevent the code from jumping to an unsafe instruction hidden in the middle of a safe instruction, Native Client requires that all indirect jumps be jumps to the start of 32-byte-aligned blocks, and instructions are not allowed to straddle these blocks. Because of these constraints, C and C++ code must be recompiled to run under Native Client, which provides customized versions of the GNU toolchain, specifically GNU Compiler Collection (GCC), GNU Binutils, and LLVM.
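As a toy illustration of the alignment rule (this is not the actual NaCl verifier, which analyses machine code; it only sketches the arithmetic involved):

```python
BUNDLE_SIZE = 32  # indirect jumps may only target the start of a 32-byte block

def is_valid_indirect_target(address: int) -> bool:
    return address % BUNDLE_SIZE == 0

def mask_target(address: int) -> int:
    # Clearing the low five bits forces any computed target onto a block boundary,
    # so a jump can never land in the middle of an instruction sequence.
    return address & ~(BUNDLE_SIZE - 1)

assert is_valid_indirect_target(mask_target(0x12345678))
```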
Native Client is licensed under a BSD-style license.
Native Client uses Newlib as its C library, but a port of GNU C Library (GNU libc) is also available.
Pepper
NaCl denotes sodium chloride, common table salt; as a pun, the name Pepper was also used. Pepper API is a cross-platform, open-source API for creating Native Client modules. Pepper Plugin API, or PPAPI, is a cross-platform API for Native Client-secured web browser plugins, first based on Netscape's NPAPI, then rewritten from scratch. It was used in Chromium and Google Chrome to enable the PPAPI version of Adobe Flash and the built-in PDF viewer.
PPAPI
On 12 August 2009, a page on Google Code introduced a new project, Pepper, and the associated Pepper Plugin API (PPAPI), "a set of modifications to NPAPI to make plugins more portable and more secure". This extension is designed specifically to ease implementing out-of-process plugin execution. Further, the goals of the project are to provide a framework for making plugins fully cross-platform. Topics considered include:
Uniform semantics for NPAPI across browsers.
Execution in a separate process from the renderer-browser.
Standardize rendering using the browser's compositing process.
Defining standardized events, and 2D rasterizing functions.
Initial attempt to provide 3D graphics access.
Plugin registry.
The Pepper API also supports Gamepads (version 19) and WebSockets (version 18).
Initially, Google's open source browser, Chromium, was the only web browser to use the new browser plug-in model. As of 2020, Pepper is supported by Chrome, Chromium and Blink layout engine-based browsers such as Opera and Microsoft Edge.
In August 2020, Google announced that support for PPAPI would be removed from Google Chrome and Chromium in June 2022.
PPAPI in Firefox
Firefox developers stated in 2014 that they would not support Pepper, as there was no full specification of the API beyond its implementation in Chrome, which itself was designed for use with the Blink layout engine only and had private APIs, specific to the Flash Player plugin, which were not documented. In October 2016 Mozilla announced that it had re-considered and was exploring whether to incorporate the Pepper API and PDFium in future releases of Firefox; however, no such steps were taken. In July 2017, Adobe deprecated Flash and announced its end-of-life at the end of 2020. By January 2021, Adobe Flash Player, Google Chrome, Firefox, Safari, and Windows received updates disabling or entirely removing Flash.
Applications
One website used NaCl on the server to let users experiment with the Go programming language from their browsers.
Usage outside of web browsers
The open-source Unvanquished game makes use of Native Client in the Dæmon game engine as a replacement for the Q3VM (Quake III virtual machine). In that game engine, the Native Client sandbox is used to safely run arbitrary game code (mods) downloaded from game servers. Using Native Client makes it possible for gameplay developers to use the C++ language for games running in the virtual machine, to use C++ libraries, to share code between the game and the engine, and to get better performance than with the Q3VM.
Reception
Some groups of browser developers supported the Native Client technology while others did not.
Supporters
Chad Austin (of IMVU) praised the way Native Client can bring high-performance applications to the web (with about 5% penalty compared to native code) in a secure way, while also accelerating the evolution of client-side applications by giving a choice of the programming language used (besides JavaScript).
Id Software's John D. Carmack praised Native Client at QuakeCon 2012, saying: "if you have to do something inside a browser, Native Client is much more interesting as something that started out as a really pretty darn clever x86 hack in the way that they could sandbox all of this in user mode interestingly. It's now dynamic recompilation, but something that you program in C or C++ and it compiles down to something that's going to be not your -O4 optimization level for completely native code but pretty damn close to native code. You could do all of your evil pointer chasings, and whatever you want to do as a to-the-metal game developer."
Detractors
Other IT professionals were more critical of this sandboxing technology, as it raised substantial interoperability issues.
Mozilla's vice president of products, Jay Sullivan, said that Mozilla has no plans to run native code inside the browser, as "These native apps are just little black boxes in a webpage. [...] We really believe in HTML, and this is where we want to focus."
Mozilla's Christopher Blizzard criticized NaCl, claiming that native code cannot evolve in the same way that the source code-driven web can. He also compared NaCl to Microsoft's ActiveX technology, plagued with DLL Hell.
Håkon Wium Lie, Opera's CTO, believed that "NaCl seems to be 'yearning for the bad old days, before the web'", and that "Native Client is about building a new platform – or porting an old platform into the web [...] it will bring in complexity and security issues, and it will take away focus from the web platform."
Second generation
The second generation of sandboxing developed at Google is gVisor. It is intended to replace NaCl in Google Cloud, more specifically in Google App Engine. Google has also been promoting WebAssembly.
See also
Application virtualization
Emscripten
Sandboxie, running Windows programs in a sandbox
WebAssembly, a bytecode standard for web browsers
XAML Browser Applications (XBAP)
References
External links
– Technical talk at Google I/O 2009
A list of OSS projects ported to Native Client
Native Client source code in Git
Game engine-focused introduction to Native Client with a comparison between the Quake3 Virtual Machine and PNaCL
Examples
Folding@home
PNaCl examples (runs in Chrome 31+, PNaCl, i.e. no installation needed)
Native Client SDK Gallery
torapp.info, vector editor, especially powerful for security printing (not PNaCl)
NACLBox, a port of DOSBox to Native Client (PNaCl)
SodaSynth, a synthesizer for Native Client (not PNaCl)
Abadía del crimen, a port of the SDL version of Vigasoco (remake of La Abadía del Crimen) to Native Client (PNaCl)
Bennugd, a port of Bennugd Videogames examples to Native Client (PNaCl)
Computer security software
Software using the BSD license
Native Client
Cross-platform free software | Google Native Client | Engineering | 2,407 |
27,153,398 | https://en.wikipedia.org/wiki/Natural%20logarithm%20of%202 | In mathematics, the natural logarithm of 2 is the unique real number argument such that the exponential function equals two. It appears regularly in various formulas and is also given by the alternating harmonic series. The decimal value of the natural logarithm of 2 truncated at 30 decimal places is given by:
The logarithm of 2 in other bases is obtained with the formula
$\log_b 2 = \frac{\ln 2}{\ln b}.$
The common logarithm in particular is $\log_{10} 2 \approx 0.301029995663981$.
The inverse of this number is the binary logarithm of 10:
$\log_2 10 = \frac{1}{\log_{10} 2} \approx 3.321928095$.
By the Lindemann–Weierstrass theorem, the natural logarithm of any natural number other than 0 and 1 (more generally, of any positive algebraic number other than 1) is a transcendental number. It is also contained in the ring of algebraic periods.
Series representations
Rising alternate factorial
$\ln 2 = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$
This is the well-known "alternating harmonic series".
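A short Python experiment (illustrative only) makes the slow convergence of this series concrete; the error after n terms is on the order of 1/n:

```python
import math

def alt_harmonic(n_terms: int) -> float:
    # Partial sum of 1 - 1/2 + 1/3 - 1/4 + ...
    return sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))

for n in (10, 1_000, 100_000):
    print(n, alt_harmonic(n), abs(alt_harmonic(n) - math.log(2)))
```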
Binary rising constant factorial
Other series representations
using
(sums of the reciprocals of decagonal numbers)
Involving the Riemann Zeta function
(γ is the Euler–Mascheroni constant and ζ is Riemann's zeta function.)
BBP-type representations
(See more about Bailey–Borwein–Plouffe (BBP)-type representations.)
Applying the three general series for natural logarithm to 2 directly gives:
Applying them to gives:
Applying them to gives:
Applying them to gives:
Representation as integrals
The natural logarithm of 2 occurs frequently as the result of integration. Some explicit formulas for it include:
$\ln 2 = \int_1^2 \frac{dx}{x} = \int_0^1 \frac{dx}{1+x}.$
Other representations
The Pierce expansion is
The Engel expansion is
The cotangent expansion is
The simple continued fraction expansion is
$[0; 1, 2, 3, 1, 6, \ldots]$,
which yields rational approximations, the first few of which are 0, 1, 2/3, 7/10, 9/13 and 61/88.
This generalized continued fraction:
,
also expressible as
Bootstrapping other logarithms
Given a value of $\ln 2$, a scheme of computing the logarithms of other integers is to tabulate the logarithms of the prime numbers and, in the next layer, the logarithms of the composite numbers based on their factorizations.
This employs $\ln(2^{j}\,3^{k}\,5^{l}\,7^{m}\cdots) = j\ln 2 + k\ln 3 + l\ln 5 + m\ln 7 + \cdots$
In a third layer, the logarithms of rational numbers are computed with $\ln\tfrac{p}{q} = \ln p - \ln q$, and logarithms of roots via $\ln\sqrt[n]{c} = \tfrac{1}{n}\ln c$.
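A minimal Python sketch of this layering follows; the prime logarithms are simply taken from the standard library here, since the point is only the bookkeeping (names and structure are illustrative):

```python
import math

# First layer: tabulated logarithms of the primes.
prime_log = {2: math.log(2), 3: math.log(3), 5: math.log(5), 7: math.log(7)}

def log_from_factorization(factors: dict) -> float:
    """Second layer: factors maps prime -> exponent, e.g. 360 = 2**3 * 3**2 * 5."""
    return sum(e * prime_log[p] for p, e in factors.items())

print(log_from_factorization({2: 3, 3: 2, 5: 1}))  # ln 360
print(math.log(360))                                # same value, for comparison
```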
The logarithm of 2 is useful in the sense that the powers of 2 are rather densely distributed; finding powers close to powers of other numbers is comparatively easy, and series representations of are found by coupling 2 to with logarithmic conversions.
Example
If with some small , then and therefore
Selecting represents by and a series of a parameter that one wishes to keep small for quick convergence. Taking , for example, generates
This is actually the third line in the following table of expansions of this type:
Starting from the natural logarithm of one might use these parameters:
Known digits
This is a table of recent records in calculating digits of . As of December 2018, it has been calculated to more digits than any other natural logarithm of a natural number, except that of 1.
See also
Rule of 72#Continuous compounding, in which ln 2 figures prominently
Half-life#Formulas for half-life in exponential decay, in which ln 2 figures prominently
Erdős–Moser equation: all solutions must come from a convergent of ln 2.
References
External links
Logarithms
Mathematical constants
Real transcendental numbers | Natural logarithm of 2 | Mathematics | 691 |
2,479,857 | https://en.wikipedia.org/wiki/StrongSwan | strongSwan is a multiplatform IPsec implementation. The focus of the project is on authentication mechanisms using X.509 public key certificates and optional storage of private keys and certificates on smartcards through a PKCS#11 interface and on TPM 2.0.
Overview
The project is maintained by Andreas Steffen who is a professor emeritus for Security in Communications with the University of Applied Sciences in Rapperswil, Switzerland.
As a descendant of the FreeS/WAN project, strongSwan continues to be released under the GPL license. It supports certificate revocation lists and the Online Certificate Status Protocol (OCSP). A unique feature is the use of X.509 attribute certificates to implement access control schemes based on group memberships. StrongSwan interoperates with other IPsec implementations, including various Microsoft Windows and macOS VPN clients. The current version of strongSwan fully implements the Internet Key Exchange (IKEv2) protocol defined by RFC 7296.
Features
strongSwan supports IKEv1 and fully implements IKEv2.
IKEv1 and IKEv2 features
strongSwan offers plugins, enhancing its functionality. The user can choose among three crypto libraries (legacy [non-US] FreeS/WAN, OpenSSL, and gcrypt).
Using the openssl plugin, strongSwan supports Elliptic Curve Cryptography (ECDH groups and ECDSA certificates and signatures) both for IKEv2 and IKEv1, so that interoperability with Microsoft's Suite B implementation on Vista, Win 7, Server 2008, etc. is possible.
Automatic assignment of virtual IP addresses to VPN clients from one or several address pools using either the IKEv1 ModeConfig or IKEv2 Configuration payload. The pools are either volatile (i.e. RAM-based) or stored in a SQLite or MySQL database (with configurable lease-times).
The ipsec pool command line utility allows the management of IP address pools and configuration attributes like internal DNS and NBNS servers.
IKEv2 only features
The IKEv2 daemon is inherently multi-threaded (16 threads by default).
The IKEv2 daemon comes with a High-Availability option based on Cluster IP where currently a cluster of two hosts does active load-sharing and each host can take over the ESP and IKEv2 states without rekeying if the other host fails.
The following EAP authentication methods are supported: AKA and SIM including the management of multiple [U]SIM cards, MD5, MSCHAPv2, GTC, TLS, TTLS. EAP-MSCHAPv2 authentication based on user passwords and EAP-TLS with user certificates are interoperable with the Windows 7 Agile VPN Client.
The EAP-RADIUS plugin relays EAP packets to one or multiple AAA servers (e.g. FreeRADIUS or Active Directory).
Support of RFC 5998 EAP-Only Authentication in conjunction with strong mutual authentication methods like e.g. EAP-TLS.
Support of RFC 4739 IKEv2 Multiple Authentication Exchanges.
Support of RFC 5685 IKEv2 Redirection.
Support of the RFC 4555 Mobility and Multihoming Protocol (MOBIKE) which allows dynamic changes of the IP address and/or network interface without IKEv2 rekeying. MOBIKE is also supported by the Windows 7 Agile VPN Client.
The strongSwan IKEv2 NetworkManager applet supports EAP, X.509 certificate and PKCS#11 smartcard based authentication. Assigned DNS servers are automatically installed and removed again in /etc/resolv.conf.
Support of Trusted Network Connect (TNC). A strongSwan VPN client can act as a TNC client and a strongSwan VPN gateway as a Policy Enforcement Point (PEP) and optionally as a co-located TNC server. The following TCG interfaces are supported: IF-IMC 1.2, IF-IMV 1.2, IF-PEP 1.1, IF-TNCCS 1.1, IF-TNCCS 2.0 (RFC 5793 PB-TNC), IF-M 1.0 (RFC 5792 PA-TNC), and IF-MAP 2.0.
The IKEv2 daemon has been fully ported to the Android operating system including integration into the Android VPN applet. It has also been ported to the Maemo, FreeBSD and macOS operating systems.
KVM simulation environment
The focus of the strongSwan project lies on strong authentication by means of X.509 certificates, as well as the optional safe storage of private keys on smart cards using the standardized PKCS#11 interface, strongSwan certificate check lists and On-line Certificate Status Protocol (OCSP).
An important capability is the use of X.509 Certificate Attributes, which permits it to utilize complex access control mechanisms on the basis of group memberships.
strongSwan comes with a simulation environment based on KVM. A network of eight virtual hosts allows the user to enact a multitude of site-to-site and roadwarrior VPN scenarios.
See also
Libreswan
Openswan
References
External links
strongSwan Documentation
Free security software
Cryptographic software
Key management
IPsec
Virtual private networks | StrongSwan | Mathematics | 1,119 |
5,578,915 | https://en.wikipedia.org/wiki/Norcamphor | Norcamphor is an organic compound, classified as a bicyclic ketone. It is an analog of camphor, but without the three methyl groups. A colorless solid, it is used as a building block in organic synthesis. Norcamphor is prepared from norbornene via the 2-formate ester, which is oxidized. It is a useful precursor to norborneols.
See also
Norbornane
References
Ketones
Cyclopentanes
Norbornanes | Norcamphor | Chemistry | 102 |
6,879,482 | https://en.wikipedia.org/wiki/PCMark | PCMark is a computer benchmark tool developed by UL (formerly Futuremark) to test the performance of a PC at the system and component level. In most cases, the tests in PCMark are designed to represent typical home user workloads. Running PCMark produces a score with higher numbers indicating better performance. Several versions of PCMark have been released. Scores cannot be compared across versions since each includes different tests.
Versions
Controversy
A 2008 Ars Technica article showed a VIA Nano gaining significant performance in the benchmark after its CPUID was changed to identify it as an Intel processor. This was because Intel compilers generate conditional code that uses more advanced instructions only for CPUs that identify themselves as Intel.
See also
Benchmark (computing)
3DMark
Futuremark
References
External links
PCMark benchmarks
Unsupported PCMark benchmarks (PCMark2002 - 7)
UL Benchmarks
Benchmarks (computing)
Software developed in Finland | PCMark | Technology | 178 |
1,928,089 | https://en.wikipedia.org/wiki/Conservation%20status | The conservation status of a group of organisms (for instance, a species) indicates whether the group still exists and how likely the group is to become extinct in the near future. Many factors are taken into account when assessing conservation status: not simply the number of individuals remaining, but the overall increase or decrease in the population over time, breeding success rates, and known threats. Various systems of conservation status are in use at international, multi-country, national and local levels, as well as for consumer use such as sustainable seafood advisory lists and certification. The two international systems are by the International Union for Conservation of Nature (IUCN) and The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
International systems
IUCN Red List of Threatened Species
The IUCN Red List of Threatened Species by the International Union for Conservation of Nature is the best known worldwide conservation status listing and ranking system. Species are classified by the IUCN Red List into nine groups set through criteria such as rate of decline, population size, area of geographic distribution, and degree of population and distribution fragmentation.
Also included are species that have gone extinct since 1500 CE. When discussing the IUCN Red List, the official term "threatened" is a grouping of three categories: critically endangered, endangered, and vulnerable.
Extinct (EX) – There are no known living individuals
Extinct in the wild (EW) – Known only to survive in captivity, or as a naturalized population outside its historic range
Critically Endangered (CR) – Highest risk of extinction in the wild
Endangered (EN) – Higher risk of extinction in the wild
Vulnerable (VU) – High risk of extinction in the wild
Near Threatened (NT) – Likely to become endangered in the near future
Conservation Dependent (CD) – Low risk; is conserved to prevent being near threatened, certain events may lead it to being a higher risk level
Least concern (LC) – Very Low risk; does not qualify for a higher risk category and not likely to be threatened in the near future. Widespread and abundant taxa are included in this category.
Data deficient (DD) – Not enough data to make an assessment of its risk of extinction
Not evaluated (NE) – Has not yet been evaluated against the criteria.
The Convention on International Trade in Endangered Species of Wild Fauna and Flora
The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) went into force in 1975. It aims to ensure that international trade in specimens of wild animals and plants does not threaten their survival. Many countries require CITES permits when importing plants and animals listed on CITES.
Multi-country systems
In the European Union (EU), the Birds Directive and Habitats Directive are the legal instruments which evaluate the conservation status within the EU of species and habitats.
NatureServe conservation status focuses on Latin America, the United States, Canada, and the Caribbean. It has been developed by scientists from NatureServe, The Nature Conservancy, and a network of natural heritage programs and data centers. It is increasingly integrated with the IUCN Red List system. Its categories for species include: presumed extinct (GX), possibly extinct (GH), critically imperiled (G1), imperiled (G2), vulnerable (G3), apparently secure (G4), and secure (G5). The system also allows ambiguous or uncertain ranks including inexact numeric ranks (e.g. G2?), and range ranks (e.g. G2G3) for when the exact rank is uncertain. NatureServe adds a qualifier for captive or cultivated only (C), which has a similar meaning to the IUCN Red List extinct in the wild (EW) status.
The Red Data Book of the Russian Federation is used within the Russian Federation, and also accepted in parts of Africa.
National systems
In Australia, the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act) describes lists of threatened species, ecological communities and threatening processes. The categories resemble those of the 1994 IUCN Red List Categories & Criteria (version 2.3). Prior to the EPBC Act, a simpler classification system was used by the Endangered Species Protection Act 1992. Some state and territory governments also have their own systems for conservation status. The codes for the Western Australian conservation system are given at Declared Rare and Priority Flora List (abbreviated to DECF when using in a taxobox).
In Belgium, the Flemish Research Institute for Nature and Forest publishes an online set of more than 150 nature indicators in Dutch.
In Canada, the Committee on the Status of Endangered Wildlife in Canada (COSEWIC) is a group of experts that assesses and designates which wild species are in some danger of disappearing from Canada. Under the Species at Risk Act (SARA), it is up to the federal government, which is politically accountable, to legally protect species assessed by COSEWIC.
In China, the State, provinces and some counties have determined their key protected wildlife species. There is the China red data book.
In Finland, many species are protected under the Nature Conservation Act, and through the EU Habitats Directive and EU Birds Directive.
In Germany, the Federal Agency for Nature Conservation publishes "red lists of endangered species".
India has the Wild Life Protection Act, 1972, Amended 2003 and the Biological Diversity Act, 2002.
In Japan, the Ministry of Environment publishes a Threatened Wildlife of Japan Red Data Book.
In the Netherlands, the Dutch Ministry of Agriculture, Nature and Food Quality publishes a list of threatened species, and conservation is enforced by the Nature Conservation Act 1998. Species are also protected through the Wild Birds and Habitats Directives.
In New Zealand, the Department of Conservation publishes the New Zealand Threat Classification System lists. Threatened species or subspecies are assigned one of seven categories: Nationally Critical, Nationally Endangered, Nationally Vulnerable, Declining, Recovering, Relict, or Naturally Uncommon. While the classification looks only at a national level, many species are unique to New Zealand, and species which are secure overseas are noted as such.
In Russia, the Red Book of Russian Federation came out in 2001, it contains categories defining preservation status for different species. In it there are 8 taxa of amphibians, 21 taxa of reptiles, 128 taxa of birds, and 74 taxa of mammals, in total 231. There are also more than 30 regional red books, for example the red book of the Altaic region which came out in 1994.
In South Africa, the South African National Biodiversity Institute, established under the National Environmental Management: Biodiversity Act, 2004, is responsible for drawing up lists of affected species, and monitoring compliance with CITES decisions. It is envisaged that previously diverse Red lists would be more easily kept current, both technically and financially.
In Thailand, the Wild Animal Reservation and Protection Act of BE 2535 defines fifteen reserved animal species and two classes of protected species, of which hunting, breeding, possession, and trade are prohibited or restricted by law. The National Park, Wildlife and Plant Conservation Department of the Ministry of Natural Resources and Environment is responsible for the regulation of these activities.
In Ukraine, the Ministry of Environment Protection maintains list of endangered species (divided into seven categories from "0" - extinct to "VI" - rehabilitated) and publishes it in the Red Book of Ukraine.
In the United States of America, the Endangered Species Act of 1973 created the Endangered Species List.
Consumer guides
Some consumer guides for seafood, such as Seafood Watch, divide fish and other sea creatures into three categories, analogous to conservation status categories:
Red ("say no" or "avoid")
Yellow or orange ("think twice", "good alternatives" or "some concerns")
Green ("best seafood choices")
The categories do not simply reflect the imperilment of individual species, but also consider the environmental impacts of how and where they are fished, such as through bycatch or ocean bottom trawlers. Often groups of species are assessed rather than individual species (e.g. squid, prawns).
The Marine Conservation Society has five levels of ratings for seafood species, as displayed on their FishOnline website.
See also
Conservation status of wolves in Europe
Conservation biology
Convention on the Conservation of Migratory Species of Wild Animals
Lazarus taxon
List of endangered species in North America
Listing priority number
Lists of extinct animals
Lists of organisms by population
Living Planet Index
Red List Index
Regional Red List
Reintroduction
References
External links
Search the IUCN Red List
IUCN Red List Categories and Criteria Version 3.1 (archived 23 March 2014)
Evolutionary biology terminology
Conservation biology
Environmental conservation
Environmental terminology
NatureServe | Conservation status | Biology | 1,729 |
592,830 | https://en.wikipedia.org/wiki/Monkeys%20and%20apes%20in%20space | Before humans went into space in the 1960s, several other animals were launched into space, including numerous other primates, so that scientists could investigate the biological effects of spaceflight. The United States launched flights containing primate passengers primarily between 1948 and 1961 with one flight in 1969 and one in 1985. France launched two monkey-carrying flights in 1967. The Soviet Union and Russia launched monkeys between 1983 and 1996. Most primates were anesthetized before lift-off.
Over thirty-two non-human primates flew in the space program; none flew more than once. Numerous backup primates also went through the programs but never flew. Monkeys and non-human apes from several species were used, including rhesus macaque, crab-eating macaque, squirrel monkeys, pig-tailed macaques, and chimpanzees.
United States
The first primate launched on a high-altitude rocket flight, although not a space flight, was Albert I, a rhesus macaque, who on June 18, 1948, rode a V-2 rocket high into Earth's atmosphere. Albert I died of suffocation during the flight and may actually have died in the cramped space capsule before launch.
On June 14, 1949, Albert II survived a sub-orbital V-2 flight into space (but died on impact after a parachute failure) to become the first monkey, first primate, and first mammal in space. His flight reached about 134 km (83 mi) – past the Kármán line of 100 km which designates the beginning of space.
On September 16, 1949, Albert III died below the Kármán line, at 35,000 feet (10.7 km), in an explosion of his V2. On December 8, Albert IV, the second mammal in space, flew on the last monkey V-2 flight and died on impact after another parachute failure after reaching 130.6 km. Alberts, I, II, and IV were rhesus macaques while Albert III was a crab-eating macaque.
Monkeys later flew on Aerobee rockets. On April 18, 1951, a monkey, possibly called Albert V, died due to parachute failure. Yorick, also called Albert VI, along with 11 mouse crewmates, reached 236,000 ft (72 km, 44.7 mi) and survived the landing, on September 20, 1951, the first monkey to do so (the dogs Dezik and Tsygan had survived a trip to space in July of that year), although he died two hours later. Two of the mice also died after recovery; all of the deaths were thought to be related to stress from overheating in the sealed capsule in the New Mexico sun while awaiting the recovery team. Albert VI's flight surpassed the 50-mile boundary the U.S. used for spaceflight but was below the international definition of space. Patricia and Mike, two cynomolgus monkeys, flew on May 21, 1952, and survived, but their flight was only to 26 kilometers.
On December 13, 1958, Gordo, also called Old Reliable, a squirrel monkey, survived being launched aboard Jupiter AM-13 by the US Army. After flying for over 1,500 miles and reaching a height of 310 miles (500 km) before returning to Earth, Gordo landed in the South Atlantic and was killed due to mechanical failure of the parachute recovery system in the rocket nose cone.
On May 28, 1959, aboard the JUPITER AM-18, Miss Able, a rhesus macaque, and Miss Baker, a squirrel monkey from Peru, flew a successful mission. Able was born at the Ralph Mitchell Zoo in Independence, Kansas. They traveled in excess of 16,000 km/h, and withstood 38 g (373 m/s2). Able died June 1, 1959, while undergoing surgery to remove an infected medical electrode, from a reaction to the anesthesia. Baker became the first monkey to survive the stresses of spaceflight and the related medical procedures. Baker died November 29, 1984, at the age of 27 and is buried on the grounds of the United States Space & Rocket Center in Huntsville, Alabama. Able was preserved, and is now on display at the Smithsonian Institution's National Air and Space Museum. Their names were taken from the 1943–1955 US military phonetic alphabet.
On December 4, 1959, from Wallops Island, Virginia, Sam, a rhesus macaque, flew on the Little Joe 2 in the Mercury program to 53 miles high. On January 21, 1960, Miss Sam, also a rhesus macaque, followed, on Little Joe 1B although her flight was only to in a test of emergency procedures.
Chimpanzees Ham and Enos also flew in the Mercury program, with Ham becoming the first great ape or Hominidae in space. The names "Sam" and "Ham" were acronyms. Sam was named in homage to the School of Aerospace Medicine at Brooks Air Force Base in San Antonio, Texas, and the name "Ham" was taken from Holloman Aerospace Medicine at Holloman Air Force Base, New Mexico. Ham and Enos were among 60 chimpanzees brought to New Mexico by the U.S. Air Force for space flight tests. Six were selected to be trained at Cape Canaveral by Tony Gentry et al.
Goliath, a squirrel monkey, died in the explosion of his Atlas rocket on November 10, 1961. A rhesus macaque called Scatback flew a sub-orbital flight on December 20, 1961, but was lost at sea after landing.
Bonny, a pig-tailed macaque, flew on Biosatellite 3, a mission which lasted from June 29 to July 8, 1969. This was the first multi-day monkey flight but came after longer human spaceflights were common. He died within a day of landing.
Spacelab 3 on the Space Shuttle flight STS-51-B featured two squirrel monkeys named No. 3165 and No. 384-80. The flight was from April 29 to May 6, 1985.
France
France launched a pig-tailed macaque named Martine on a Vesta rocket on March 7, 1967, and another named Pierrette on March 13. These suborbital flights reached and , respectively. Martine became the first monkey to survive more than a couple of hours after flying above the international definition of the edge of space (Ham and Enos, launched earlier by the United States, were chimpanzees).
Soviet Union and Russia
The Soviet /Russian space program used only rhesus macaques in its Bion satellite program in 1980s and 1990s. The names of the monkeys began with sequential letters of the Russian alphabet (А, Б, В, Г, Д, Е, Ё, Ж, З...). The animals all survived their missions but for a single fatality in post-flight surgery, after which the program was canceled.
The first monkeys launched by Soviet space program, Abrek and Bion, flew on Bion 6. They remained aloft from December 14, 1983 – December 20, 1983.
Next came Bion 7 with monkeys Verny and Gordy from July 10, 1985 – July 17, 1985.
Then Dryoma and Yerosha on Bion 8 from September 29, 1987 – October 12, 1987. After returning from space Dryoma was presented to Cuban leader Fidel Castro.
Bion 9 with monkeys Zhakonya and Zabiyaka followed from September 15, 1989, to September 28, 1989. The two took the space endurance record for monkeys at 13 days, 17 hours in space.
Monkeys Ivasha and Krosh flew on Bion 10 from December 29, 1992, to January 7, 1993. Krosh produced offspring, after rehabilitation upon returning to Earth.
Lapik and Multik were the last monkeys in space until Iran launched one of its own in 2013. The pair flew aboard Bion 11 from December 24, 1996, to January 7, 1997. Upon return, Multik died while under anesthesia for US biopsy sampling on January 8. Lapik nearly died while undergoing the identical procedure. No follow-up research has been conducted to determine whether these two incidents, together with the 1959 loss of the US monkey Able in post-flight surgery, contraindicate the administration of anesthesia during or shortly after spaceflights. Further US support of the Bion program was canceled.
Argentina
On December 23, 1969, as part of the 'Operación Navidad' (Operation Christmas), Argentina launched Juan (a tufted capuchin, native to Argentina's Misiones Province) using a two-stage Rigel 04 rocket. It ascended perhaps up to 82 kilometers and then was recovered successfully. Other sources give 30, 60 or 72 kilometers. All of these are below the international definition of space (100 km). Later, on February 1, 1970, the experience was repeated with a female monkey of the same species using an X-1 Panther rocket. Although it reached a higher altitude than its predecessor, it was lost after the capsule's parachute failed.
China
The PRC spacecraft Shenzhou 2 launched on January 9, 2001. It is rumored that inside the reentry module (precise information is lacking due to the secrecy surrounding China's space program) a monkey, dog, and rabbit rode aloft in a test of the spacecraft's life support systems. The SZ2 reentry module landed in Inner Mongolia on January 16. No images of the recovered capsule appeared in the press, leading to the widespread inference that the flight ended in failure. According to press reports citing an unnamed source, a parachute connection malfunction caused a hard landing.
Iran
On January 28, 2013, AFP and Sky News reported that Iran had sent a monkey in a "Pishgam" rocket to a height of and retrieved "shipment". Iranian media gave no details on the timing or location of the launch, while details that were reported raised questions about the claim. Pre-flight and post-flight photos clearly showed different monkeys. The confusion was due to the publishing of an archive photo from 2011 by the Iranian Student News Agency (ISNA). According to Jonathan McDowell, a Harvard astronomer, "They just mixed that footage with the footage of the 2013 successful launch."
On December 14, 2013, AFP and BBC reported that Iran again sent a monkey to space and safely returned it. Rhesus macaques Aftab (2013.01.28) and Fargam (2013.12.14) were each launched separately into space and safely returned. Researchers continue to study the effects of the space trip on their offspring.
In popular culture
The 2014 animated series All Hail King Julien: Exiled features a horde of highly intelligent chimpanzee cosmonauts, whom they claim the USSR abandoned on a Madagascar islet following the end of the Space Race. Although faithful to "Mother Russia", the chimpanzees vow to take revenge on humankind for declaring their obsolescence.
See also
Laika
Soviet space dogs
Ham (chimpanzee)
Human spaceflight
Animals in space
Space exploration
List of individual apes
List of individual monkeys
Alice King Chatham (sculptor who designed oxygen masks and safety gear for animals in the U.S. space program)
Captain Simian & the Space Monkeys (1996 television series)
Space Chimps (2008 film)
One Small Step: The Story of the Space Chimps (2008 documentary)
Animal testing on non-human primates
References
Further reading
Animals in Space: From Research Rockets to the Space Shuttle, Chris Dubbs and Colin Burgess, Springer-Praxis Books, 2007
External links
ape-o-naut
NPR article on the 50th anniversary of Able and Baker's flight
A humorous look at monkey astronaut names
Monkey astronauts
One Small Step: The Story of the Space Chimps Official Documentary Site
Argentina and the Conquest of Space (Spanish)
Animals in space
Monkeys
Collection of the Smithsonian Institution
Space | Monkeys and apes in space | Chemistry,Biology | 2,460 |
734,787 | https://en.wikipedia.org/wiki/Automatic%20differentiation | In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation, computational differentiation, and differentiation arithmetic is a set of techniques to evaluate the partial derivative of a function specified by a computer program. Automatic differentiation is a subtle and central tool to automatize the simultaneous computation of the numerical values of arbitrarily complex functions and their derivatives with no need for the symbolic representation of the derivative, only the function rule or an algorithm thereof is required . Auto-differentiation is thus neither numeric nor symbolic, nor is it a combination of both. It is also preferable to ordinary numerical methods: In contrast to the more traditional numerical methods based on finite differences, auto-differentiation is 'in theory' exact, and in comparison to symbolic algorithms, it is computationally inexpensive.
Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor of more arithmetic operations than the original program.
Difference from other differentiation methods
Automatic differentiation is distinct from symbolic differentiation and numerical differentiation.
Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.
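A small Python experiment (purely illustrative) shows the round-off problem for numerical differentiation: the central-difference estimate of the derivative of sin at x = 1 first improves and then degrades as the step size shrinks, because cancellation error grows, whereas automatic differentiation involves no step size at all:

```python
import math

f, x = math.sin, 1.0
exact = math.cos(x)  # known derivative, used only for comparison

for h in (1e-2, 1e-5, 1e-8, 1e-11):
    central = (f(x + h) - f(x - h)) / (2 * h)
    print(f"h={h:.0e}  error={abs(central - exact):.2e}")
```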
Applications
Because of its efficiency and accuracy in computing first and higher order derivatives, auto-differentiation is a widely used technique with diverse applications in scientific computing and mathematics, and numerous computational implementations exist, among them INTLAB, Sollya, and InCLosure. In practice, there are two types (modes) of algorithmic differentiation: a forward type and a reverse type. The two modes are complementary, and both have a wide variety of applications in, e.g., non-linear optimization, sensitivity analysis, robotics, machine learning, computer graphics, and computer vision. Automatic differentiation is particularly important in the field of machine learning. For example, it allows one to implement backpropagation in a neural network without a manually computed derivative.
Forward and reverse accumulation
Chain rule of partial derivatives of composite functions
Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions. For the simple composition y = f(g(h(x))), with intermediate variables w0 = x, w1 = h(w0), w2 = g(w1), and w3 = f(w2) = y,
the chain rule gives ∂y/∂x = (∂y/∂w2) (∂w2/∂w1) (∂w1/∂w0).
Two types of automatic differentiation
Usually, two distinct modes of automatic differentiation are presented.
forward accumulation (also called bottom-up, forward mode, or tangent mode)
reverse accumulation (also called top-down, reverse mode, or adjoint mode)
Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute ∂w1/∂x, then ∂w2/∂x, and at last ∂y/∂x), while reverse accumulation has the traversal from outside to inside (first compute ∂y/∂w2, then ∂y/∂w1, and at last ∂y/∂x). More succinctly,
Forward accumulation computes the recursive relation ∂wi/∂x = (∂wi/∂w(i−1)) (∂w(i−1)/∂x) with w3 = y, and
Reverse accumulation computes the recursive relation ∂y/∂wi = (∂y/∂w(i+1)) (∂w(i+1)/∂wi) with w0 = x.
The value of the partial derivative, called the seed, is propagated forward or backward and is initially ∂x/∂x = 1 (forward mode) or ∂y/∂y = 1 (reverse mode). Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one (∂xi/∂xi = 1) and the derivatives with respect to all others are set to zero (∂xj/∂xi = 0 for j ≠ i). In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass.
Which of these two types should be used depends on the sweep count. The computational complexity of one sweep is proportional to the complexity of the original code.
Forward accumulation is more efficient than reverse accumulation for functions f : R^n → R^m with n ≪ m, as only n sweeps are necessary, compared to m sweeps for reverse accumulation.
Reverse accumulation is more efficient than forward accumulation for functions f : R^n → R^m with n ≫ m, as only m sweeps are necessary, compared to n sweeps for forward accumulation.
Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse accumulation.
Forward accumulation was introduced by R.E. Wengert in 1964. According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown. Seppo Linnainmaa published reverse accumulation in 1976.
Forward accumulation
In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the inner functions in the chain rule: ∂y/∂x = (∂y/∂w2) (∂w2/∂x) = (∂y/∂w2) ((∂w2/∂w1) (∂w1/∂x)) = ...
This can be generalized to multiple variables as a matrix product of Jacobians.
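Spelled out as the standard multivariable chain rule (a general statement, not specific to any particular AD tool), for y = f(g(x)) with g : R^n → R^m and f : R^m → R^k this reads
J_{f∘g}(x) = J_f(g(x)) · J_g(x),   and correspondingly   ẏ = J_f(g(x)) · J_g(x) · ẋ,
so a forward sweep pushes a tangent vector ẋ through the Jacobian product from right to left, producing one column of the full Jacobian per sweep.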
Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable w is augmented with its derivative ẇ = ∂w/∂x (stored as a numerical value, not a symbolic expression),
as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule.
Using the chain rule, if wi has predecessors wj in the computational graph: ẇi = Σ_{j ∈ pred(i)} (∂wi/∂wj) ẇj.
As an example, consider the function: y = f(x1, x2) = x1·x2 + sin(x1).
For clarity, the individual sub-expressions have been labeled with the variables w1 = x1, w2 = x2, w3 = w1·w2, w4 = sin(w1), and w5 = w3 + w4 = y.
The choice of the independent variable with respect to which differentiation is performed affects the seed values ẇ1 and ẇ2. Given interest in the derivative of this function with respect to x1, the seed values should be set to: ẇ1 = 1 and ẇ2 = 0.
With the seed values set, the values propagate using the chain rule as shown. Figure 2 shows a pictorial depiction of this process as a computational graph.
{| class="wikitable"
!Operations to compute value !!Operations to compute derivative
|-
| w1 = x1 || ẇ1 = 1 (seed)
|-
| w2 = x2 || ẇ2 = 0 (seed)
|-
| w3 = w1 · w2 || ẇ3 = w2 · ẇ1 + w1 · ẇ2
|-
| w4 = sin(w1) || ẇ4 = cos(w1) · ẇ1
|-
| w5 = w3 + w4 || ẇ5 = ẇ3 + ẇ4
|}
To compute the gradient of this example function, which requires not only ∂y/∂x1 but also ∂y/∂x2, an additional sweep is performed over the computational graph using the seed values ẇ1 = 0 and ẇ2 = 1.
Implementation
Pseudocode
Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression Z to be derived with regard to a variable V. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, 0 otherwise. Then the partial function as well as the partial derivative are evaluated.
tuple<float,float> evaluateAndDerive(Expression Z, Variable V) {
if isVariable(Z)
if (Z = V) return {valueOf(Z), 1};
else return {valueOf(Z), 0};
else if (Z = A + B)
{a, a'} = evaluateAndDerive(A, V);
{b, b'} = evaluateAndDerive(B, V);
return {a + b, a' + b'};
else if (Z = A - B)
{a, a'} = evaluateAndDerive(A, V);
{b, b'} = evaluateAndDerive(B, V);
return {a - b, a' - b'};
else if (Z = A * B)
{a, a'} = evaluateAndDerive(A, V);
{b, b'} = evaluateAndDerive(B, V);
return {a * b, b * a' + a * b'};
}
C++
#include <iostream>
struct ValueAndPartial { float value, partial; };
struct Variable;
struct Expression {
virtual ValueAndPartial evaluateAndDerive(Variable *variable) = 0;
};
struct Variable: public Expression {
float value;
Variable(float value): value(value) {}
ValueAndPartial evaluateAndDerive(Variable *variable) {
float partial = (this == variable) ? 1.0f : 0.0f;
return {value, partial};
}
};
struct Plus: public Expression {
Expression *a, *b;
Plus(Expression *a, Expression *b): a(a), b(b) {}
ValueAndPartial evaluateAndDerive(Variable *variable) {
auto [valueA, partialA] = a->evaluateAndDerive(variable);
auto [valueB, partialB] = b->evaluateAndDerive(variable);
return {valueA + valueB, partialA + partialB};
}
};
struct Multiply: public Expression {
Expression *a, *b;
Multiply(Expression *a, Expression *b): a(a), b(b) {}
ValueAndPartial evaluateAndDerive(Variable *variable) {
auto [valueA, partialA] = a->evaluateAndDerive(variable);
auto [valueB, partialB] = b->evaluateAndDerive(variable);
return {valueA * valueB, valueB * partialA + valueA * partialB};
}
};
int main () {
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Variable x(2), y(3);
Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
float xPartial = z.evaluateAndDerive(&x).partial;
float yPartial = z.evaluateAndDerive(&y).partial;
std::cout << "∂z/∂x = " << xPartial << ", "
<< "∂z/∂y = " << yPartial << std::endl;
// Output: ∂z/∂x = 7, ∂z/∂y = 8
return 0;
}
Reverse accumulation
In reverse accumulation AD, the dependent variable to be differentiated is fixed and the derivative is computed with respect to each sub-expression recursively. In a pen-and-paper calculation, the derivative of the outer functions is repeatedly substituted in the chain rule: ∂y/∂x = (∂y/∂w1) (∂w1/∂x) = ((∂y/∂w2) (∂w2/∂w1)) (∂w1/∂x) = ...
In reverse accumulation, the quantity of interest is the adjoint, denoted with a bar (w̄); it is the derivative of a chosen dependent variable y with respect to a subexpression w: w̄ = ∂y/∂w.
Using the chain rule, if wi has successors wj in the computational graph: w̄i = Σ_{j ∈ succ(i)} w̄j (∂wj/∂wi).
Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list (however, Wengert published forward accumulation, not reverse accumulation), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as rematerialization. Checkpointing is also used to save intermediary states.
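The following is a minimal, illustrative sketch of such a tape in C++; the TapeEntry, Tape, and gradient names are invented for this example and do not correspond to any particular AD library. Each recorded entry stores the indices of its inputs and the local partial derivatives, and a single reverse sweep over the recorded list accumulates the adjoints:
#include <iostream>
#include <vector>
// One tape entry: indices of up to two inputs (-1 if unused) and the local
// partial derivatives of this entry's output with respect to those inputs.
struct TapeEntry {
    int input1, input2;
    float partial1, partial2;
};
struct Tape {
    std::vector<TapeEntry> entries; // entry i describes how variable i was computed
    std::vector<float> values;      // primal values, recorded during the forward pass
    int variable(float value) {
        entries.push_back({-1, -1, 0.0f, 0.0f});
        values.push_back(value);
        return (int)values.size() - 1;
    }
    int add(int a, int b) {
        entries.push_back({a, b, 1.0f, 1.0f});           // d(a+b)/da = d(a+b)/db = 1
        values.push_back(values[a] + values[b]);
        return (int)values.size() - 1;
    }
    int mul(int a, int b) {
        entries.push_back({a, b, values[b], values[a]}); // d(a*b)/da = b, d(a*b)/db = a
        values.push_back(values[a] * values[b]);
        return (int)values.size() - 1;
    }
    // Reverse sweep: propagate adjoints from the result back to the inputs.
    std::vector<float> gradient(int result) {
        std::vector<float> adjoint(values.size(), 0.0f);
        adjoint[result] = 1.0f; // seed: d(result)/d(result) = 1
        for (int i = result; i >= 0; --i) {
            if (entries[i].input1 >= 0) adjoint[entries[i].input1] += entries[i].partial1 * adjoint[i];
            if (entries[i].input2 >= 0) adjoint[entries[i].input2] += entries[i].partial2 * adjoint[i];
        }
        return adjoint;
    }
};
int main() {
    // Example: z = x * (x + y) + y * y at (x, y) = (2, 3), as in the listings below.
    Tape tape;
    int x = tape.variable(2), y = tape.variable(3);
    int z = tape.add(tape.mul(x, tape.add(x, y)), tape.mul(y, y));
    std::vector<float> adjoint = tape.gradient(z);
    std::cout << "z = " << tape.values[z] << ", ∂z/∂x = " << adjoint[x]
              << ", ∂z/∂y = " << adjoint[y] << std::endl;
    // Output: z = 19, ∂z/∂x = 7, ∂z/∂y = 8
    return 0;
}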
The operations to compute the derivative using reverse accumulation for the example function are shown in the table below (note the reversed order):
{| class="wikitable"
!Operations to compute derivative
|-
| w̄5 = 1 (seed)
|-
| w̄4 = w̄5
|-
| w̄3 = w̄5
|-
| w̄2 = w̄3 · w1
|-
| w̄1 = w̄3 · w2 + w̄4 · cos(w1)
|}
The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function y = f(x) in the primal causes x̄ = ȳ f′(x) in the adjoint; etc.
Implementation
Pseudo code
Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression Z to be derived and seed with the derived value of the parent expression. For the top expression, Z derived with regard to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current seed value to the derivative expression.
void derive(Expression Z, float seed) {
if isVariable(Z)
partialDerivativeOf(Z) += seed;
else if (Z = A + B)
derive(A, seed);
derive(B, seed);
else if (Z = A - B)
derive(A, seed);
derive(B, -seed);
else if (Z = A * B)
derive(A, valueOf(B) * seed);
derive(B, valueOf(A) * seed);
}
C++
#include <iostream>
struct Expression {
float value;
virtual void evaluate() = 0;
virtual void derive(float seed) = 0;
};
struct Variable: public Expression {
float partial;
Variable(float value) {
this->value = value;
partial = 0.0f;
}
void evaluate() {}
void derive(float seed) {
partial += seed;
}
};
struct Plus: public Expression {
Expression *a, *b;
Plus(Expression *a, Expression *b): a(a), b(b) {}
void evaluate() {
a->evaluate();
b->evaluate();
value = a->value + b->value;
}
void derive(float seed) {
a->derive(seed);
b->derive(seed);
}
};
struct Multiply: public Expression {
Expression *a, *b;
Multiply(Expression *a, Expression *b): a(a), b(b) {}
void evaluate() {
a->evaluate();
b->evaluate();
value = a->value * b->value;
}
void derive(float seed) {
a->derive(b->value * seed);
b->derive(a->value * seed);
}
};
int main () {
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Variable x(2), y(3);
Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
z.evaluate();
std::cout << "z = " << z.value << std::endl;
// Output: z = 19
z.derive(1);
std::cout << "∂z/∂x = " << x.partial << ", "
<< "∂z/∂y = " << y.partial << std::endl;
// Output: ∂z/∂x = 7, ∂z/∂y = 8
return 0;
}
Beyond forward and reverse accumulation
Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of f : R^n → R^m with a minimum number of arithmetic operations is known as the optimal Jacobian accumulation (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.
Automatic differentiation using dual numbers
Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers.
Replace every number x with the number x + x′ε, where x′ is a real number, but ε is an abstract number with the property ε² = 0 (an infinitesimal; see Smooth infinitesimal analysis). Using only this, regular arithmetic gives (x + x′ε) + (y + y′ε) = (x + y) + (x′ + y′)ε and (x + x′ε) · (y + y′ε) = xy + (x y′ + x′ y)ε + x′ y′ ε² = xy + (x y′ + x′ y)ε,
using ε² = 0.
Now, polynomials can be calculated in this augmented arithmetic. If P(x) = p0 + p1 x + p2 x² + ⋯ + pn xⁿ, then P(x + x′ε) = P(x) + P′(x) x′ ε,
where P′ denotes the derivative of P with respect to its first argument, and x′, called a seed, can be chosen arbitrarily.
The new arithmetic consists of ordered pairs, elements written ⟨x, x′⟩, with ordinary arithmetic on the first component and first order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions gives a list of the basic arithmetic and some standard functions for the new arithmetic, for example ⟨u, u′⟩ + ⟨v, v′⟩ = ⟨u + v, u′ + v′⟩, ⟨u, u′⟩ · ⟨v, v′⟩ = ⟨u v, u′ v + u v′⟩, and sin⟨u, u′⟩ = ⟨sin(u), u′ cos(u)⟩,
and in general for the primitive function g, g(⟨u, u′⟩, ⟨v, v′⟩) = ⟨g(u, v), g_u(u, v) u′ + g_v(u, v) v′⟩,
where g_u and g_v are the derivatives of g with respect to its first and second arguments, respectively.
When a binary basic arithmetic operation is applied to mixed arguments (the pair ⟨u, u′⟩ and the real number c), the real number is first lifted to ⟨c, 0⟩. The derivative of a function f at the point x0 is now found by calculating f(⟨x0, 1⟩) using the above arithmetic, which gives ⟨f(x0), f′(x0)⟩ as the result.
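As a small worked check of this arithmetic (an illustrative example, not taken from the original text), let f(x) = x² + 1 and evaluate at x0 = 3:
f(⟨3, 1⟩) = ⟨3, 1⟩ · ⟨3, 1⟩ + ⟨1, 0⟩ = ⟨9, 3·1 + 1·3⟩ + ⟨1, 0⟩ = ⟨10, 6⟩,
so the first component recovers f(3) = 10 and the second component recovers f′(3) = 6.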
Implementation
An example implementation based on the dual number approach follows.
C++
#include <iostream>
struct Dual {
float realPart, infinitesimalPart;
Dual(float realPart, float infinitesimalPart=0): realPart(realPart), infinitesimalPart(infinitesimalPart) {}
Dual operator+(Dual other) {
return Dual(
realPart + other.realPart,
infinitesimalPart + other.infinitesimalPart
);
}
Dual operator*(Dual other) {
return Dual(
realPart * other.realPart,
other.realPart * infinitesimalPart + realPart * other.infinitesimalPart
);
}
};
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Dual f(Dual x, Dual y) { return x * (x + y) + y * y; }
int main () {
Dual x = Dual(2);
Dual y = Dual(3);
Dual epsilon = Dual(0, 1);
Dual a = f(x + epsilon, y);
Dual b = f(x, y + epsilon);
std::cout << "∂z/∂x = " << a.infinitesimalPart << ", "
<< "∂z/∂y = " << b.infinitesimalPart << std::endl;
// Output: ∂z/∂x = 7, ∂z/∂y = 8
return 0;
}
Vector arguments and functions
Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute y′ = ∇f(x)·x′, the directional derivative of f at x in the direction x′, this may be calculated as ⟨y, y′⟩ = f(⟨x1, x′1⟩, …, ⟨xn, x′n⟩) using the same arithmetic as above. If all the elements of the gradient ∇f are desired, then n function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient.
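As an illustration, reusing the Dual struct and example function f from the C++ listing above (and replacing that listing's main), a single evaluation with both infinitesimal parts seeded by the components of a direction vector yields the directional derivative; the direction (1, 2) is an arbitrary choice for this sketch:
int main () {
    // Directional derivative of z = x * (x + y) + y * y at (x, y) = (2, 3)
    // in the direction (r1, r2) = (1, 2): seed both infinitesimal parts at once.
    float r1 = 1, r2 = 2;
    Dual z = f(Dual(2, r1), Dual(3, r2));
    // infinitesimalPart = (∂z/∂x) * r1 + (∂z/∂y) * r2 = 7 * 1 + 8 * 2 = 23
    std::cout << "∇z · r = " << z.infinitesimalPart << std::endl;
    return 0;
}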
High order and many variables
The above arithmetic can be generalized to calculate second order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted.
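A minimal sketch of this idea in C++, truncated at degree two (the Taylor2 name and the restriction to addition and multiplication are choices made only for this example): each value carries the coefficients of c0 + c1·t + c2·t², and seeding an independent variable as (x, 1, 0) makes c1 the first derivative and 2·c2 the second derivative of any result.
#include <iostream>
// Truncated Taylor series of degree 2: c0 + c1*t + c2*t^2.
struct Taylor2 {
    float c0, c1, c2;
    Taylor2(float c0, float c1 = 0, float c2 = 0): c0(c0), c1(c1), c2(c2) {}
    Taylor2 operator+(Taylor2 o) { return {c0 + o.c0, c1 + o.c1, c2 + o.c2}; }
    Taylor2 operator*(Taylor2 o) {
        // Cauchy product, discarding terms of degree three and higher
        return {c0 * o.c0, c0 * o.c1 + c1 * o.c0, c0 * o.c2 + c1 * o.c1 + c2 * o.c0};
    }
};
int main() {
    Taylor2 x(2, 1);        // seed: x = 2, dx/dx = 1
    Taylor2 y = x * x * x;  // y = x^3
    std::cout << "y = " << y.c0 << ", y' = " << y.c1
              << ", y'' = " << 2 * y.c2 << std::endl;
    // Output: y = 8, y' = 12, y'' = 12
    return 0;
}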
Implementation
Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading.
Source code transformation (SCT)
The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.
Source code transformation can be implemented for all programming languages, and it is also easier for the compiler to do compile time optimizations. However, the implementation of the AD tool itself is more difficult and the build system is more complex.
Operator overloading (OO)
Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations. Due to the inherent operator overloading overhead on each loop, this approach usually demonstrates weaker speed performance.
Operator overloading and source code transformation
Overloaded operators can be used to extract the valuation graph, followed by automatic generation of the AD version of the primal function at run-time. Unlike classic operator-overloading AAD, such an AD function does not change from one iteration to the next. Hence there is no operator-overloading or tape-interpretation run-time overhead per Xi sample.
With the AD function being generated at runtime, it can be optimised to take into account the current state of the program and precompute certain values. In addition, it can be generated in a way that consistently utilizes native CPU vectorization to process chunks of 4 (AVX2) or 8 (AVX512) doubles of user data, giving a speed-up of roughly 4× to 8×. Taking multithreading into account, such an approach can lead to a final acceleration on the order of 8 × #Cores compared to traditional AAD tools. A reference implementation is available on GitHub.
See also
Differentiable programming
Notes
References
Further reading
External links
www.autodiff.org, An "entry site to everything you want to know about automatic differentiation"
Automatic Differentiation of Parallel OpenMP Programs
Automatic Differentiation, C++ Templates and Photogrammetry
Automatic Differentiation, Operator Overloading Approach
Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface Automatic Differentiation of Fortran programs
Description and example code for forward Automatic Differentiation in Scala
finmath-lib stochastic automatic differentiation, Automatic differentiation for random variables (Java implementation of the stochastic automatic differentiation).
Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem
C++ Template-based automatic differentiation article and implementation
Tangent Source-to-Source Debuggable Derivatives
Exact First- and Second-Order Greeks by Algorithmic Differentiation
Adjoint Algorithmic Differentiation of a GPU Accelerated Application
Adjoint Methods in Computational Finance Software Tool Support for Algorithmic Differentiation
More than a Thousand Fold Speed Up for xVA Pricing Calculations with Intel Xeon Scalable Processors
Sparse truncated Taylor series implementation with VBIC95 example for higher order derivatives
Differential calculus
Computer algebra
Articles with example pseudocode
Articles with example Python (programming language) code
Articles with example C++ code | Automatic differentiation | Mathematics,Technology | 5,083 |
24,950,345 | https://en.wikipedia.org/wiki/Microscale%20thermophoresis | Microscale thermophoresis (MST) is a technology for the biophysical analysis of interactions between biomolecules. Microscale thermophoresis is based on the detection of a temperature-induced change in fluorescence of a target as a function of the concentration of a non-fluorescent ligand. The observed change in fluorescence is based on two distinct effects. On the one hand it is based on a temperature related intensity change (TRIC) of the fluorescent probe, which can be affected by binding events. On the other hand, it is based on thermophoresis, the directed movement of particles in a microscopic temperature gradient. Any change of the chemical microenvironment of the fluorescent probe, as well as changes in the hydration shell of biomolecules result in a relative change of the fluorescence detected when a temperature gradient is applied and can be used to determine binding affinities. MST allows measurement of interactions directly in solution without the need of immobilization to a surface (immobilization-free technology).
Applications
Affinity
between any kind of biomolecules including proteins, DNA, RNA, peptides, small molecules, fragments and ions
for interactions with high molecular weight complexes, large molecule assemblies, even with liposomes, vesicles, nanodiscs, nanoparticles and viruses
in any buffer, including serum and cell lysate
in competition experiments (for example with substrate and inhibitors)
Stoichiometry
Thermodynamic parameters
MST has been used to estimate the enthalpic and entropic contributions to biomolecular interactions.
Additional information
Sample property (homogeneity, aggregation, stability)
Multiple binding sites, cooperativity
Technology
MST is based on the quantifiable detection of a fluorescence change in a sample when a temperature change is applied. The fluorescence of a target molecule can be extrinsic or intrinsic (aromatic amino acids) and is altered in temperature gradients due to two distinct effects. On the one hand, there is the temperature related intensity change (TRIC), which describes the intrinsic property of fluorophores to change their fluorescence intensity as a function of temperature. The extent of the change in fluorescence intensity is affected by the chemical environment of the fluorescent probe, which can be altered in binding events due to conformational changes or proximity of ligands. On the other hand, MST is also based on the directed movement of molecules along temperature gradients, an effect termed thermophoresis. A spatial temperature difference ΔT leads to a change in molecule concentration in the region of elevated temperature, quantified by the Soret coefficient S_T: c_hot/c_cold = exp(−S_T ΔT). Both TRIC and thermophoresis contribute to the recorded signal in MST measurements in the following way: ∂(cF)/∂T = c ∂F/∂T + F ∂c/∂T. The first term in this equation, c ∂F/∂T, describes TRIC as a change in fluorescence intensity (F) as a function of temperature (T), whereas the second term, F ∂c/∂T, describes thermophoresis as the change in particle concentration (c) as a function of temperature. Thermophoresis depends on the interface between molecule and solvent. Under constant buffer conditions, thermophoresis probes the size, charge and solvation entropy of the molecules. The thermophoresis of a fluorescently labeled molecule A typically differs significantly from the thermophoresis of a molecule-target complex AT due to size, charge and solvation entropy differences. This difference in the molecule's thermophoresis is used to quantify the binding in titration experiments under constant buffer conditions.
The thermophoretic movement of the fluorescently labelled molecule is measured by monitoring the fluorescence distribution F inside a capillary. The microscopic temperature gradient is generated by an IR laser, which is focused into the capillary and is strongly absorbed by water. The temperature of the aqueous solution in the laser spot is raised by ΔT = 1–10 K. Before the IR laser is switched on, a homogeneous fluorescence distribution F_cold is observed inside the capillary. When the IR laser is switched on, two effects occur on the same time-scale, contributing to the new fluorescence distribution F_hot. The thermal relaxation induces a binding-dependent drop in the fluorescence of the dye due to its local environment-dependent response to the temperature jump (TRIC). At the same time molecules typically move from the locally heated region to the outer cold regions. The local concentration of molecules decreases in the heated region until it reaches a steady-state distribution.
While the mass diffusion D dictates the kinetics of depletion, S_T determines the steady-state concentration ratio c_hot/c_cold = exp(−S_T ΔT) ≈ 1 − S_T ΔT under a temperature increase ΔT. The normalized fluorescence F_norm = F_hot/F_cold measures mainly this concentration ratio, in addition to TRIC ∂F/∂T. In the linear approximation we find: F_norm = 1 + (∂F/∂T − S_T) ΔT. Due to the linearity of the fluorescence intensity and the thermophoretic depletion, the normalized fluorescence from the unbound molecule F_norm(A) and the bound complex F_norm(AT) superpose linearly. By denoting x the fraction of molecules bound to targets, the changing fluorescence signal during the titration of target T is given by: F_norm = (1 − x) F_norm(A) + x F_norm(AT).
Quantitative binding parameters are obtained by using a serial dilution of the binding substrate. By plotting F_norm against the logarithm of the different concentrations of the dilution series, a sigmoidal binding curve is obtained. This binding curve can directly be fitted with the nonlinear solution of the law of mass action, with the dissociation constant K_D as result.
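For a simple 1:1 binding model, one common form of that law-of-mass-action solution (stated here for illustration; it is a standard textbook expression rather than one taken from a specific instrument's analysis software) gives the bound fraction x in terms of the total concentration [A] of the labeled molecule, the total concentration [T] of the titrated partner, and K_D:
x = ([A] + [T] + K_D − sqrt(([A] + [T] + K_D)² − 4 [A][T])) / (2 [A]),
which, inserted into F_norm = (1 − x) F_norm(A) + x F_norm(AT), is fitted to the titration series to extract K_D.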
References
Biochemistry methods
Protein methods
Biophysics
Molecular biology
Laboratory techniques | Microscale thermophoresis | Physics,Chemistry,Biology | 1,290 |
20,957,324 | https://en.wikipedia.org/wiki/Bondi%20accretion | In astrophysics, the Bondi accretion (also called Bondi–Hoyle–Lyttleton accretion), named after Hermann Bondi, is spherical accretion onto a compact object traveling through the interstellar medium. It is generally used in the context of neutron star and black hole accretion. To achieve an approximate form of the Bondi accretion rate, accretion is assumed to occur at a rate
Ṁ ≃ π R² ρ v,
where:
ρ is the ambient density,
v is the object's velocity, or the sound speed c_s in the surrounding medium if the object's speed is lower than the sound speed,
R is the Bondi radius, defined as R = 2GM/v², with G the gravitational constant and M the mass of the accreting object.
The Bondi radius comes from setting escape velocity equal to the sound speed and solving for radius. It represents the boundary between subsonic and supersonic infall. Substituting the Bondi radius in the above equation yields:
Ṁ ≃ 4π ρ G² M² / v³.
These are only scaling relations rather than rigorous definitions. A more complete solution can be found in Bondi's original work and two other papers.
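Written out, the escape-velocity argument mentioned above (a standard textbook step, included here for completeness) is: setting v_esc = sqrt(2GM/R) equal to the sound speed c_s and solving for R gives R = 2GM/c_s², and inserting this into Ṁ ≃ π R² ρ v with v ≈ c_s reproduces the 1/v³ scaling of the rate quoted above.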
Application to accreting protoplanets
When a planet is forming in a protoplanetary disk, it needs the gas in the disk to fall into its Bondi sphere in order for the planet to be able to accrete an atmosphere. For a massive enough planet, the initial accreted gas can quickly fill up the Bondi sphere. At this point, the atmosphere must cool and contract (through the Kelvin–Helmholtz mechanism) for the planet to be able to accrete more of an atmosphere.
Bibliography
Bondi (1952) MNRAS 112, 195, link
Mestel (1954) MNRAS 114, 437, link
Hoyle and Lyttleton (1941) MNRAS 101, 227
References
Interstellar media
Equations of astronomy | Bondi accretion | Physics,Astronomy | 352 |
13,992,374 | https://en.wikipedia.org/wiki/Urban%20Land | Urban Land is a magazine published by the Urban Land Institute (ULI). It is published four times a year and is headquartered in Washington, D.C. Urban Land's articles cover a wide range of international topics, while concentrating on the needs of professionals in the real estate development and land use industry. Urban Land magazine regularly publishes original stories and commentaries from notable land use leaders and urban thinkers. Past and current contributors have included such individuals as former Bogotá mayor Enrique Peñalosa; urban scholar Richard Florida; economist and Brookings Institution fellow Anthony Downs; former director of land-use planning for The Conservation Fund Ed McMahon; former Secretary of Housing and Urban Development (HUD) Henry Cisneros; and Brookings Institution fellow Christopher B. Leinberger.
Background
ULI first began publishing Urban Land magazine in 1941 as a newsletter for members. Its first issue was four pages in length with the title typewritten as News Bulletin. In 2010, the magazine launched its web counterpart (urbanland.uli.org) which protected articles behind a member-only paywall. In 2011, ULI removed the magazine's paywall, making it one of the industry's first open access magazines.
Apgar Urban Land Awards
Since 1991, the magazine's annual ULI Apgar Urban Land Award has recognized the most significant Urban Land magazine articles "that best contribute to the mission and current priorities of the institute." The award, founded by ULI Governor Mahlon “Sandy” Apgar, was first presented from 1991 to 2006. ULI reintroduced the award in 2012, and its winners are currently selected by a committee that judges the articles on the criteria of relevance to current industry issues; the clarity of the author's argument and presentation; the strength of the author's analyses; and the overall value of the article in advancing the goals of ULI.
References
Urban planning
Open access publications
Business magazines published in the United States
Magazines established in 1941
Magazines published in Washington, D.C. | Urban Land | Engineering | 408 |
7,552,376 | https://en.wikipedia.org/wiki/Broadcast%20signal%20intrusion | A broadcast signal intrusion is the hijacking of broadcast signals of radio, television stations, cable television broadcast feeds or satellite signals without permission or licence. Hijacking incidents have involved local TV and radio stations as well as cable and national networks.
Although television, cable, and satellite broadcast signal intrusions tend to receive more media coverage, radio station intrusions are more frequent, as many simply rebroadcast a signal received from another radio station. All that is required is an FM transmitter that can overpower the same frequency as the station being rebroadcast (limited by the inverse-square law). Other methods that have been used in North America to intrude on legal broadcasts include using a directional antenna to overpower the uplink frequency of a broadcast relay station, breaking into the transmitter area and splicing audio directly into the feed, and cyberattacks on internet-connected broadcasting equipment.
As a cable television operator connects itself in the signal path between individual stations and the system's subscribers, broadcasters have fallen victim to signal tampering on cable systems on multiple occasions.
Notable incidents
Soviet pirate broadcasting (1960s–1980s)
Broadcast signal intrusion was a common practice in the Soviet Union during the 1970s and 1980s due to the absence of and high demand for any non-government broadcasting. As early as 1966, there was a report of an incident in the city of Kaluga where an 18-year-old had broadcast a hoax announcement that nuclear war had broken out with the United States.
In the mid-1970s so many pirates were operating around the city of Arkhangelsk, especially at night, that local people were urged to telephone reports of violators to a special number.
Hijackers using call signs such as "Cucumber", "Radio Millimeter", "Green Goat", "Fortune", and others, would overpower the signal on relay stations for wired radio networks to transmit their programming, or transmit into wired radio networks during gaps in regular programming. Even though the incidents appear to have been fairly common according to reports from the BBC, most were not publicly acknowledged for policy reasons. Reports in newspapers typically referred to the hijackers as "radio hooligans broadcasting drivel, rudeness, vulgarity, uncensored expressions, and trashy music". State news organizations also spread propaganda against such pirate broadcasters, claiming that they had interfered with a state frequency used by Aeroflot, "preventing a doctor in an air ambulance from transmitting information about a patient".
Southern Television (1977)
On November 26, 1977, an audio message, purporting to come from outer space and conveyed by an individual named 'Vrillon' of the 'Ashtar Galactic Command', was broadcast during an ITN news bulletin on Southern Television in the United Kingdom. The intrusion did not entirely affect the video signal but replaced the program audio with a six-minute speech about the destiny of the human race and a disaster to affect "your world and the beings on other worlds around you". The IBA confirmed that it was the first time such a transmission had been made.
"Telewizja Solidarność" (TV Solidarity) (1985)
In September 1985, four astronomers at Poland's University of Toruń (Zygmunt Turło, Leszek Zaleski, Piotr Łukaszewski, and Jan Hanasz) used a ZX Spectrum home computer, a synchronizing circuit, and a transmitter to superimpose messages in support of the labor movement Solidarność (Solidarity) over state-run television broadcasts in Toruń, including an episode of 07 zgłoś się. The messages read "Dość podwyżek cen, kłamstw I represji. Solidarność Toruń" ("Enough price increases, lies, and repressions. Solidarity Toruń") and "Bojkot wyborów naszym obowiązkiem." ("It is our duty to boycott the election", referring to the Sejm elections of 1985) with the Solidarity logo. The four men were eventually discovered and were charged with "possession of an unlicensed radio transmitter and publication of materials that could cause public unrest". At their sentencing, the judge noted their prize-winning work in the Polish scientific community and gave each of them probation and a fine of the equivalent of US$100 each (or 3,000,000 old złoty, 300 PLN in today's currency).
Captain Midnight (1986)
At 12:32 a.m. Eastern Time on April 27, 1986, HBO (Home Box Office) had its satellite signal feed from its operations center on Long Island in Hauppauge, New York interrupted by a man calling himself "Captain Midnight". The interruption occurred during a presentation of The Falcon and the Snowman. The intrusion lasted between 4 and 5 minutes and was seen by viewers along the East Coast. The man, who during the interruption also threatened to hijack the signals of Showtime and The Movie Channel, was later caught and identified as John R. MacDougall of Ocala, Florida. He was prosecuted shortly thereafter. Authorities were tipped off by a man from Wisconsin in a phone booth at a rest area of Interstate 75 in Gainesville, Florida. The man filing the report said that he overheard MacDougall bragging about the incident.
MacDougall's guilt was confirmed by an FCC investigation that showed he was alone at Central Florida Teleport at the time of the incident and a recording of the jamming video showed that the text was created by a character generator at that location. He was charged with transmitting without a radio license in violation of . MacDougall pled guilty and was fined $5,000 and served a year of probation. Ambiguity about whether the 47 USC 301 charge was applicable since the transmitter had a license resulted in the passage of which made satellite jamming a felony.
MacDougall was able to perform the intrusion while working a second job as a master control operator at a satellite teleport in Florida, where he worked to make ends meet due to declining income from his satellite TV equipment business. He stated that he did it because he was frustrated with HBO's service rates and that it was hurting his business selling satellite dishes (hence his second job at the teleport). The message, placed over SMPTE color bars, broadcast by MacDougall read:
The Playboy Channel religious message (1987)
A broadcast of the movie "Three Daughters" on the Playboy Channel was disrupted with a text-only religious message on Sunday, September 6, 1987. The message read, "Thus sayeth the Lord thy God: Remember the Sabbath and keep it holy. Repent, the kingdom of Heaven is at hand." (from the Bible verses Exodus 20:8 and Matthew 4:17).
Thomas Haynie, an employee of the Christian Broadcasting Network, was convicted of satellite piracy in connection with the incident. Haynie, who pleaded his innocence, was the first person convicted under a new federal law which had made satellite hacking a felony following the Captain Midnight incident.
According to investigators, it was the religious content of the transmission and the type of equipment used that drew them to CBN. The jamming signal left behind subtle technical clues that were captured on a VHS recording made at the Playboy Channel's uplink at the time of the event – like finding "fingerprints" in the video. After investigators were confident that they identified the brand of transmitter and character generator from the video, they concluded that CBN was the culprit. Haynie, of Virginia Beach, Virginia, was on duty at his job as an uplink engineer at the time of the jamming.
CBN maintained that the FCC's case was entirely circumstantial since there were no witnesses and the signal could not be traced to a point of origin. During the investigation, experts on both sides attempted to recreate the incident with CBN's equipment. According to CBN spokesman Dino McCann, they were unsuccessful. Furthermore, CBN asserted that there was not enough power for Haynie to jam Playboy's signal but during the trial, government witnesses said the CBN station was capable of interfering with satellite transmissions.
After initially being deadlocked, the jury eventually sided with the prosecution and convicted Haynie on two of six counts. (Haynie was acquitted of similar charges of interfering with the American Exxxtasy channel; a recording of the event was of such poor quality that it was unusable.) Haynie received three years of probation, a $1,000 fine, and 150 hours of community service.
Max Headroom incidents (1987)
On the night of November 22, 1987, an unidentified man wearing a Max Headroom mask appeared on the signals of two television stations in Chicago, Illinois. WGN-TV, owned by Tribune Broadcasting, was hijacked first. The intrusion occurred during the sports report on its 9:00 p.m. newscast and lasted about 25 seconds. Next came PBS affiliate WTTW, where the man was seen and heard uttering garbled remarks before dropping his pants, partially exposing his buttocks, and was then spanked with a flyswatter by a woman wearing a French maid costume before normal programming resumed. This second interception occurred at about 11:00 p.m. during an episode of the Doctor Who serial, "Horror of Fang Rock", and lasted almost 90 seconds. None of the individuals responsible for the intrusion have been identified. This incident got the attention of the CBS Evening News the next day and was talked about nationwide. The HBO incident was also mentioned in the same news report.
WKCR-FM (1995)
WKCR-FM was allegedly hijacked mid-broadcast once around 1995. The interruption reportedly began with eerie screeches and was followed by silence, then by a woman reciting obituaries, including those of Frank Oppenheimer and several victims of the 1988 bombing of Pan Am Flight 103. After a couple of minutes, the normal broadcast was restored. A recording of the incident was uploaded to 4chan in 2013.
Falun Gong hijackings (2002)
On February 16, 2002, television signals in the Chinese city of Anshan were briefly hijacked by members of the Falun Gong religious movement in order to clarify the events of the Tiananmen Square self-immolation incident of the previous year. On March 5, 2002, further intrusions took place on cable television channels in the cities of Changchun and Songyuan, protesting persecution by the Chinese government. Different sources vary as to the length of the intrusion, with figures cited including 10 minutes, 50 minutes or even as long as four hours. In September of the same year, 15 people were convicted of roles in the incident and were given prison terms of up to 20 years. On September 9, Falun Gong followers again disrupted broadcasting, this time targeting nationwide satellite broadcasting. By 2010, several of those involved had reportedly died in prison.
WBLI and WBAB (2006)
On the morning of Wednesday, May 17, 2006, the signal of Babylon, New York, FM radio station WBAB was hijacked for about 90 seconds while the signal jammers broadcast the song "Nigger Hatin' Me" by 1960s-era white supremacist country singer Johnny Rebel. Roger Luce, the station's morning host, said at the time, "I've never seen this in 22 years on this radio station[...] Whatever that was[...] it was very racist." Former program director John Olsen said, "This was not some child's prank, this was a federal offense."
The hijack was likely accomplished by overpowering the studio transmitter link (STL) signal to the transmitter in Dix Hills. A signal hijacking with the same song happened to WBAB's sister station WBLI about two weeks earlier on a Sunday night.
Lebanon War (2006)
During the 2006 Lebanon War, Israel overloaded the satellite transmission of Hezbollah's Al Manar TV to broadcast anti-Hezbollah propaganda. One spot showed Hezbollah leader Hassan Nasrallah with crosshairs superimposed on his image followed by three gunshots and a voice saying "Your day is coming" and shots of the Israeli Air Force destroying targets in Lebanon.
Zombie apocalypse Emergency Alert System hijackings (2013)
On February 11, 2013, Great Falls, Montana, CBS affiliate KRTV had their Emergency Alert System hijacked with an audible message warning viewers that "the bodies of the dead are rising from their graves and attacking the living." Later the same night in Marquette, Michigan, and the early morning hours in La Crosse, Wisconsin, the same type of hijacking and reference to a "zombie invasion" was made over the EAS systems of CBS affiliate WKBT-DT, ABC affiliate WBUP and PBS member station WNMU during primetime programming. Shortly afterward, PBS affiliate KENW of Portales, New Mexico, was struck with a similar hacking incident, repeating similar information regarding zombies; however, this led to the arrest of the hacker of the four television stations.
"The Winker's Song (Misprint)" incidents (2017)
In June and July 2017, Mansfield 103.2 FM, a local radio station in Mansfield, Nottinghamshire, England in the United Kingdom, had its signal intruded upon at least eight times during outside broadcasts. During these intrusions, "The Winker's Song (Misprint)" was played. As of the time of those reports, the perpetrator had not been identified.
Russian invasion of Ukraine (2022–2023)
On February 26, 2022, the hacker group Anonymous, as part of the cyber war declared to Russia, hacked several pro-Kremlin TV channels (Channel One Russia, Russia-1 and others), broadcasting a poem written by the singer Monatik about the Russo-Ukrainian war with its footage and Ukrainian music.
On May 9, 2022, during Russia's Victory Day parade in Moscow, Russian TV listings were hacked to display information about Russia's war crimes and to protest Russia's invasion of Ukraine using various messages. The name of every TV station was changed to "blood is on your hands", and other phrases used in the TV listings included "television and the authorities are lying" and "your hands are covered in blood from the deaths of thousands of Ukrainians and their children".
On June 5, 2023, Russian radio stations in regions bordering Ukraine were hacked, broadcasting a fake radio address claiming to be from Putin, declaring martial law, a nationwide military mobilization, and for residents to evacuate deeper into Russia.
Other incidents
Television signal intrusions
From October 1 to 4, 1959, the video portions of the first three games of the 1959 World Series on WLUC-TV in Marquette, Michigan (then a primary CBS and secondary NBC-ABC affiliate) were blacked out after a disgruntled former WLUC employee sabotaged the station's Lathrop microwave relay station; he wanted his engineering job back after being fired by station manager John Borgen for "insubordination reasons". Borgen said in a statement that the former employee, 36-year-old Harold William Lindgren of Marquette, suffered from mental health issues and planned his revenge on the station by taking a steel wool scouring pad from his wife's sink and driving thirty-five miles south to Lathrop on the evening of September 30, climbing a fence around the transmitter and, using a ball-point pen, stuffing the pad into a pipe of the relay equipment the following evening; Borgen called the interruption "a diabolical plot". The blackouts led engineers from Chicago and Green Bay to investigate for four days before Lindgren was arrested by Michigan State Police, facing a possible four-year jail sentence. Lindgren told court officials that he wanted to "jam the signal" out of frustration at Borgen for firing him several weeks earlier, on September 10, 1959.
In 1971, several television stations in Manila complained of unauthorized broadcasts of pornographic films–known in local vernacular as bomba films–being aired on their channels at midnight after the stations' sign-off. They were receivable only in Manila and in the neighboring city of Makati, indicating the usage of a weaker transmitter, and were of poor quality and had no sound. Leonardo Garcia of the Radio Control Office hypothesized that the culprit was either misusing television equipment or was an "amateur electronic buff", and wryly observed, "Whoever he is, he must be a good Catholic. He stopped showing bombas during Holy Week."
On July 26, 1980, Fred Barber, then-vice president and general manager for WSB-TV in Atlanta, Georgia, told The Atlanta Constitution that he fired two longtime master control operators from the station after they purposely replaced two seconds of a Georgia Forestry Commission commercial during a break of The Plastic Man Comedy/Adventure Show with a still-photo of a naked woman appearing on screen. Barber said that the two fired operators were known to be responsible for signing the station off the air and back on the air, and replied that the nighttime operator at the time was apparently in a habit of leaving random pieces on tape. At the time of the incident, one of the station's morning employees failed to notice the video and it mistakenly aired on broadcast.
On November 10, 1982, thirty seconds of an overnight broadcast of the movie The Green Slime on then-new WVAH-TV in Charleston, West Virginia were replaced with pornographic movie content recorded from the Escapade Channel. Manager Gary Dreispul stated that the station's technician mistakenly hit a patch panel and walked away before looking at the on-air monitor. Horrified upon realizing his mistake, the technician ran back to the panel and pressed the button to go dark. Dreispul stated that the technician was suspended pending the investigation and called the incident a "very bad and careless mistake".
On December 24, 1985, an unidentified engineer from CBS affiliate WSBT-TV in South Bend, Indiana was fired after abruptly replacing 20 to 30 minutes of CBS News Nightwatch with pornographic content from the Playboy Channel alongside a movie entitled Birds in Paradise. The station's president, E. Barry Smith, told the South Bend Tribune that the unidentified employee flipped a satellite-tuner switch during the first twenty-to-thirty minutes of the Nightwatch broadcast and was quickly dismissed by the station's management before publicly apologizing to the station's viewers.
During the second inning of Game 1 of the 1988 World Series (known for Kirk Gibson's famous walk-off home run) on October 15, 1988, an unidentified technician from NBC affiliate WMGT-TV in Macon, Georgia, was fired after the station's on-air feed replaced ten seconds of the World Series with a black-and-white pornographic film during the network's broadcast. The broadcast signal hijacking made statewide headlines. The station's manager, L.A. Sturdivant, released a statement explaining that the broadcast intrusion was triggered by accident rather than deliberately planned, and was being "treated as a serious matter." After three days of investigation, Sturdivant identified the most likely cause as the since-fired technician having accidentally flipped the wrong switch on a master control panel, causing the NBC broadcast to switch from the Ku-band signal carrying the World Series over to the C-band satellite signal carrying the X-rated material. Despite the incident most likely having occurred at the station's studio, the station control panel's wiring was rerouted during the investigation. Officials put forth other theories that could explain the incident, such as a videotape having been brought into the studio and watched by the technician, or deliberate sabotage by an outside prankster in similar fashion to WSB-TV's 1980 broadcast signal interruption, but Sturdivant let it be known he still believed an accidental signal switching to be the most likely cause.
On February 10, 1999, a repeat broadcast of The Simpsons episode "Lisa's Rival" on KFXK-TV in Longview, Texas was briefly interrupted by four seconds of a pornographic movie. The station's manager, Mark McCay, reported to the Associated Press that quick reactions from station employees immediately ended the hijack by inserting a 20-second promotional ad for the station and resuming the Simpsons episode. After several minutes, a character generator scroll appeared on-screen apologizing to the viewers and promising an investigation. The unidentified technician who was responsible for viewing the tape was dismissed 40 minutes after the incident.
On January 4, 2000, a broadcast of the children's television series Teletubbies on GMA Network in the Philippines was replaced by a still photo of actress Rosanna Roces for several seconds. The photo shows one of Roces's breasts exposed, prompting a warning from the Movie and Television Review and Classification Board (MTRCB). GMA officials stated that the incident was accidental, and was caused by an errant employee who pressed a button whilst helping repair a computer.
On January 3, 2007, in Australia, during a broadcast of an episode of the Canadian television series Mayday (known in Australia as Air Crash Investigation) on the Seven Network, an audio signal unexpectedly started playing, clearly saying in an American accent, "Jesus Christ, help us all, Lord." This same voice message continued to repeat itself over and over during the show for a total of six minutes. A spokesman for Seven later denied that the transmission was a prank or a security breach and claimed that the repeated line was part of the original broadcast and said, "Jesus Christ, one of the Nazarenes", although there is hardly any similarity between the two phrases. A subsequent investigation by independent researchers revealed that the invading transmission was actually from a videotaped news broadcast of a civilian truck being ambushed in the Iraq War on September 20, 2005. It remains unknown whether or not this was an intentional act of television piracy or a genuine glitch of some sort.
On March 12, 2007, during a 9 p.m. airing of an Ion Life rebroadcast of a Tom Brokaw-hosted NBC special, State of U.S. Health Care, on Phoenix, Arizona, TV station KPPX-TV, a station employee inserted about 30 seconds of a pornographic film into the broadcast, prompting telephone calls to local news media outlets and the local cable provider, Cox Communications. Parent company Ion Media Networks conducted a rigorous investigation into what they called "an intolerable act of human sabotage", and shortly thereafter, announced that the employee found to be responsible had been fired, threatening further legal action.
On June 17, 2007, an intrusion incident occurred on Czech Television's Sunday morning program Panorama, which shows panoramic shots of Prague and various locations across the country, especially mountain resorts. One of the cameras, located in Černý Důl in the Giant Mountains, had been tampered with on-site and its video stream was replaced with the hackers' own, which contained CGI of a small nuclear explosion in the local landscape, ending in white noise. The broadcast looked authentic enough; the only clue for the viewers was the Web address of the artist group Ztohoven, which had already performed several reality hacking incidents before. Czech Television considered legal action against the group, and tourism workers in the area expressed outrage (since the program serves to promote tourism in the areas shown).
On July 13, 2007, a grainy photo of a man and woman interrupted Washington, D.C., ABC affiliate WJLA-TV's digital or HD signal. The picture was not transmitted over the analog signal, however. The incident was deemed a genuine signal intrusion by various websites but has since been confirmed to be the result of an older HDTV encoder malfunctioning in the early morning hours and going undetected. Station management stated that the image was from an advertisement for The Oprah Winfrey Show.
During the 2014 Gaza War, Hamas hacked Channel 10 (Israel) with messages threatening Israelis to stay longer in the bomb shelters and showing pictures of the wounded Gazans.
In March 2017, intruders broadcast pornographic content for approximately 15 minutes on Touba TV, an Islamic TV channel in Senegal run by the Mouride Sufi order. In a statement, the channel's management "unreservedly condemn[ed] this criminal act which seems to be sabotage and a satanic trick".
In August 2020, Pakistani television news channel Dawn News was compromised by Indian hackers. At around 3:30 pm IST, while a commercial was being broadcast, it displayed an overlay of the Indian national flag with the message “Happy Independence Day” (referring to 15 August, Independence Day of India). Dawn News issued a statement saying they are investigating the matter.
On October 8, 2022, during the Mahsa Amini protests in Iran, the state-run TV channel Islamic Republic TV was hacked by a group going by the name of "Adalat Ali". The screen briefly showed a man in a mask, before switching to a black screen containing Supreme Leader Ali Khamenei engulfed in CGI flames with a target on his forehead, pictures of four women recently killed in the protests and the audio message "Women, life, freedom" on repeat. The hack lasted 12 seconds, before cutting back to a bewildered TV presenter.
In February 2023, the Romanian public television broadcaster TVR was hijacked twice by a person from Prahova County. Said person managed to hijack a transmission from the Craiova branch of TVR. A message was transmitted, which did not appear on the screen (a test card appeared), but on the monitors in the directorate. The message is as follows:
”Către Televiziunea Română. Aștept cu drag după telejurnal ca cineva de la tehnic să mă contacteze pentru a discuta breșele de securitate descoperite și despre problemele pe care le-am văzut pe posturi. Văd că cineva vă tot sabotează. Vă voi lăsa adresa de mail la final de jurnal. Cu drag, un telespectator fidel”
Which translates to:
”To the Romanian Television. I look forward after the news to be contacted by a technician to discuss the discovered security breaches and the problems I saw on the TV channels. I see that someone keeps sabotaging you. I will leave an e-mail address at the end of the news report. With love, a loyal viewer.”
Cable network feed intrusions
On April 27, 1983, a Cox Cable headend in Santa Barbara, California, replaced the video feed of KNXT's Channel 2 Eyewitness News with softcore pornography for 15 minutes, while audio of the newscast was still presented throughout its entirety. Tom Pratt, the programming manager for Cox, said that not all of the viewership's households were affected by the hijack, but added that the "incident was not accidental". Cox contacted the local authorities to investigate the situation. Although staff at Cox received no reports or complaints from the public at the time (with the exception of two journalists who witnessed the incident), there were reports the following morning stating that employees had witnessed the movie beforehand.
On May 2, 2007, a Comcast headend replaced Playhouse Disney's program Handy Manny with hard-core pornography for viewers in Lincroft, New Jersey. Comcast stated it was investigating the event's cause but did not announce its findings to the public.
On February 1, 2009, a Comcast headend in Tucson, Arizona, replaced NBC affiliate KVOA's signal with graphic footage from the pornographic video Wild Cherries 5 in portions of Arizona for 28 seconds, interrupting Super Bowl XLIII between the Arizona Cardinals and the Pittsburgh Steelers during the fourth quarter. Comcast claimed "Our initial investigation suggests this was an isolated malicious act. We are conducting a thorough investigation to determine who was behind this." KVOA also announced that it would be investigating the incident. On February 4, 2011, 38-year-old former Cox Cable employee Frank Tanori Gonzalez of Marana was arrested by the FBI and local police in connection with the case. Later that October, Gonzalez pleaded guilty to two counts of computer tampering and was sentenced to three years of probation, as well as a $1,000 fine to the Arizona attorney general's anti-racketeering fund.
In the morning hours of March 16, 2010, Raleigh area Time Warner Cable's transmission from both Kids and Kids Preschool On Demand channels in the Research Triangle counties of Johnston, Wake, Wayne, and Wilson in North Carolina (including the cities of Raleigh and Goldsboro) was replaced by Playboy TV for approximately two hours, while other TWC cable systems in the area outside the four counties only revealed a black screen. An executive from TWC stated to local news station WRAL that it "was a technical malfunction that caused the wrong previews to be shown" on their kids' on-demand channels.
On April 20, 2012, three minutes of a gay pornographic film was broadcast during a morning news show on the Channel Zero-owned independent station CHCH-DT in Hamilton, Ontario, for Shaw Cable viewers. The night before, a cable was cut; while it was being fixed on the morning of the incident, the adult programming was spliced into CHCH's feed.
Satellite feed intrusions
On September 7, 2012, the Disney Junior block on Disney Channel was interrupted on Dish Network, with six minutes of Lilo & Stitch replaced by a portion of a hardcore pornographic movie.
On March 11, 2016, private satellite dish owners in Israel watching HaAh HaGadol (the Israeli version of Big Brother) on Channel 2 had their show interrupted by propaganda videos from Hamas. The disruption lasted a little over three and a half minutes.
Radio signal intrusions
The BBC's radio broadcast of a musical program on October 14, 1941 was interrupted by a Nazi German heckler shouting false statements to Britons about "how much money Winston Churchill has been paid by Germans," and saying that "the Germans have been swindled, were led up the garden path and sold to America." The voice came to be targeted as "Harassing Harry", a counterpart of Russia's "Ivan the Terrible" who heckled German broadcasters, and became a welcome diversion to British broadcasting. The wavering intensity of the voice often gave it a quality similar to that of Donald Duck, which led some listeners to name him "Von Donald." Later, the heckler began interspersing comments between British news bulletins from the BBC. Among the hecklings, the voice told listeners to wait for the following day after a headline on the Germans keeping up their pressure on Ukraine, and claimed that a big offensive sweeping over Northern France had been shot down.
The Federal Bureau of Investigation made major headlines on November 24, 1943 after reporting a 90-second interruption in which a Nazi man spoke rapidly over a CBS Radio program on WOKO-AM in Albany, New York. The FBI later stated that the interruption was attributed to a mistake in telephone transmission "or possibly from an enemy broadcast" and that there was no "direct allegation" of the latter.
In April 2016, multiple radio stations in the United States were hacked in order to broadcast an explicit podcast about the furry fandom. The hackers targeted individual Barix audio streaming devices that were findable on the search engine Shodan, logged into them, and locked out the owners while airing the podcast.
On November 10, 2017, intruders broadcast ISIS propaganda for 30 minutes on the Malmö station of Swedish radio network Mix Megapol. A spokesperson for Bauer Media Group, Mix Megapol's owners, stated that "somebody interfered with our frequency using a pirate transmitter" and that they would contact the Swedish Post and Telecom Authority.
During the 2020 United States presidential election, the radio station WWEI in Springfield, Massachusetts, was hijacked and interrupted with a voice that said, "Don't be a chump, vote for Trump."
See also
Pirate radio
Pirate television
Culture jamming
Radio jamming
Zoombombing
References
External links
CBS News report on Max Headroom Chicago Takeover at YouTube
Statement made by art group ZTOHOVEN regarding their attack at the public service broadcaster in the Czech Republic
An artistic group interfered with the Czech TV broadcast with fictitious nuclear explosion
Video of the "Telewizja Solidarność" signal intrusions at YouTube
Polish Tv pirate (This page has moved)
Federal Communications Commission
Satellite television
Cable television
Culture jamming
Culture jamming techniques
Broadcast engineering
Security
Obscenity controversies in television
Pirate television | Broadcast signal intrusion | Engineering | 6,725 |
1,085,585 | https://en.wikipedia.org/wiki/FreeJ | FreeJ is a modular software vision mixer for Linux systems. It is capable of real-time video manipulation, for amateur and professional uses. It can be used as an instrument in the fields of dance theater, VJing and television. FreeJ supports the input of multiple layers of video footage, which can be filtered through special-effect-chains, and then mixed for output.
History
Denis Rojo (aka Jaromil) is the original author, and as of 2013 is the current maintainer. Since 0.7 was released, Silvano Galliani (aka kysucix) joined the core development team, implementing several new enhancements.
Features
FreeJ can be operated in real-time from a command line console (S-Lang), and also remotely operated over the network via a Secure Shell (SSH) connection. The software provides an interface for behavior-scripting (currently accessible through JavaScript). Also, it can be used to render media to multiple screens, remote setups, encoders, and live Internet stream servers.
FreeJ can overlay, mask, transform, and filter multiple layers of footage on the screen. It supports an unlimited number of layers that can be mixed, regardless of the source. It can read input from varied sources: movie files, webcams, TV cards, images, renders and Adobe Flash animations.
FreeJ can produce a stream to an Icecast server with the video being mixed and the audio grabbed from the soundcard. The resulting stream is accessible to any computer able to decode media encoded with the Theora codec.
The console interface of FreeJ is accessible via SSH and can be run as a background process. The remote interface offers simultaneous access from multiple remote locations.
References
External links
Free video software
Television technology | FreeJ | Technology | 364 |
3,559,747 | https://en.wikipedia.org/wiki/Staggered%20conformation | In organic chemistry, a staggered conformation is a chemical conformation of an ethane-like moiety abcX–Ydef in which the substituents a, b, and c are at the maximum distance from d, e, and f; this requires the torsion angles to be 60°. It is the opposite of an eclipsed conformation, in which those substituents are as close to each other as possible.
Such a conformation exists in any open chain single chemical bond connecting two sp3-hybridised atoms, and is normally a conformational energy minimum. For some molecules such as those of n-butane, there can be special versions of staggered conformations called gauche and anti; see first Newman projection diagram in conformational isomerism.
Staggered/eclipsed configurations also distinguish different crystalline structures of e.g. cubic/hexagonal boron nitride, and diamond/lonsdaleite.
See also
Alkane stereochemistry
Eclipsed conformation
References
Stereochemistry | Staggered conformation | Physics,Chemistry | 219
6,962,728 | https://en.wikipedia.org/wiki/Magnetic%20resonance%20elastography | Magnetic resonance elastography (MRE) is a form of elastography that specifically leverages MRI to quantify and subsequently map the mechanical properties (elasticity or stiffness) of soft tissue. First developed and described at Mayo Clinic by Muthupillai et al. in 1995, MRE has emerged as a powerful, non-invasive diagnostic tool, namely as an alternative to biopsy and serum tests for staging liver fibrosis.
Diseased tissue (e.g. a breast tumor) is often stiffer than the surrounding normal (fibroglandular) tissue, providing motivation to assess tissue stiffness. This principle of operation is the basis for the longstanding practice of palpation, which, however, is limited (except at surgery) to superficial organs and pathologies, and by its subjective, qualitative nature, depending on the skill and touch sensitivity of the practitioner. Conventional imaging techniques of CT, MRI, US, and nuclear medicine are unable to offer any insight on the elastic modulus of soft tissue. MRE, as a quantitative method of assessing tissue stiffness, provides reliable insight to visualize a variety of disease processes which affect tissue stiffness in the liver, brain, heart, pancreas, kidney, spleen, breast, uterus, prostate, and skeletal muscle.
MRE is conducted in three steps: first, a mechanical vibrator is used on the surface of the patient's body to generate shear waves that travel into the patient's deeper tissues; second, an MRI acquisition sequence measures the propagation and velocity of the waves; and finally this information is processed by an inversion algorithm to quantitatively infer and map tissue stiffness in 3-D. This stiffness map is called an elastogram, and is the final output of MRE, along with conventional 3-D MRI images as shown on the right.
Mechanics of soft tissue
MRE quantitatively determines the stiffness of biological tissues by measuring its mechanical response to an external stress. Specifically, MRE calculates the shear modulus of a tissue from its shear-wave displacement measurements. The elastic modulus quantifies the stiffness of a material, or how well it resists elastic deformation as a force is applied. For elastic materials, strain is directly proportional to stress within an elastic region. The elastic modulus is seen as the proportionality constant between stress and strain within this region. Unlike purely elastic materials, biological tissues are viscoelastic, meaning that it has characteristics of both elastic solids and viscous liquids. Their mechanical responses depend on the magnitude of the applied stress as well as the strain rate. The stress-strain curve for a viscoelastic material exhibits hysteresis. The area of the hysteresis loop represents the amount of energy lost as heat when a viscoelastic material undergoes an applied stress and is distorted. For these materials, the elastic modulus is complex and can be separated into two components: a storage modulus and a loss modulus. The storage modulus expresses the contribution from elastic solid behavior while the loss modulus expresses the contribution from viscous liquid behavior. Conversely, elastic materials exhibit a pure solid response. When a force is applied, these materials elastically store and release energy, which does not result in energy loss in the form of heat.
Yet, MRE and other elastography imaging techniques typically utilize a mechanical parameter estimation that assumes biological tissues to be linearly elastic and isotropic for simplicity purposes. The effective shear modulus μ can be expressed with the following equation:
μ = E / (2(1 + ν)),
where E is the elastic modulus of the material and ν is the Poisson's ratio.
The Poisson's ratio for soft tissues is approximated as 0.5, so the ratio between the elastic modulus and the shear modulus equals 3 (E ≈ 3μ). This relationship can be used to estimate the stiffness of biological tissues based on the shear modulus calculated from shear-wave propagation measurements. A driver system produces and transmits acoustic waves set at a specific frequency (50–500 Hz) to the tissue sample. At these frequencies, the velocity of shear waves can be about 1–10 m/s. The effective shear modulus can be calculated from the shear wave velocity with the following:
μ = ρ·v_s²,
where ρ is the tissue density and v_s is the shear wave velocity.
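As a minimal numerical sketch of these two relationships (the density, wave speed, and variable names below are illustrative assumptions, not values from any particular MRE study):

    # Estimate tissue stiffness from an MRE shear-wave measurement, assuming a
    # linearly elastic, isotropic, nearly incompressible tissue (nu ~ 0.5).
    def shear_modulus(density_kg_m3, shear_wave_speed_m_s):
        # mu = rho * v_s^2
        return density_kg_m3 * shear_wave_speed_m_s ** 2

    def elastic_modulus(mu_pa, poissons_ratio=0.5):
        # E = 2 * (1 + nu) * mu, which reduces to E ~ 3 * mu for nu = 0.5
        return 2.0 * (1.0 + poissons_ratio) * mu_pa

    rho = 1000.0   # kg/m^3, soft tissue is roughly as dense as water (assumed)
    v_s = 2.0      # m/s, within the 1-10 m/s range quoted above (assumed)
    mu = shear_modulus(rho, v_s)   # 4000 Pa = 4 kPa
    print(f"shear modulus ~ {mu / 1000:.1f} kPa, elastic modulus ~ {elastic_modulus(mu) / 1000:.1f} kPa")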
Recent studies have been focused on incorporating mechanical parameter estimations into post-processing inverse algorithms that account for the complex viscoelastic behavior of soft tissues. Creating new parameters could potentially increase the specificity of MRE measurements and diagnostic testing.
Applications
Liver
Liver fibrosis is a common condition arising in many liver diseases. Progression of fibrosis can lead to cirrhosis and end-stage liver disease. MRE-based measurement of liver stiffness has emerged as the most accurate non-invasive technique for detecting and staging liver fibrosis. MRE provides quantitative maps of tissue stiffness over large regions of the liver. Abnormally increased liver stiffness is a direct consequence of liver fibrosis. The diagnostic performance of MRE in assessing liver fibrosis has been established in multiple studies.
Liver MRE examinations are performed in MRI systems that have been equipped for the technique. Patients should fast for 3 to 4 hours prior to their MRE exam to allow for the most accurate measurement of liver stiffness. Patients lie supine in the MRI scanner for the examination. A special device is placed on the right side of the chest wall over the liver to apply gentle vibration which generates propagating shear waves in the liver. Imaging for MRE is very quick, with data acquired in a series of 1-4 periods of breath-holding, each lasting 15–20 seconds.
A standardized approach for performing and analyzing liver MRE exams has been documented by the RSNA Quantitative Imaging Biomarkers Alliance. The technical success rate of liver MRE is very high (95–100%).
Brain
MRE of the brain was first presented in the early 2000s. Elastogram measures have been correlated with memory tasks, fitness measures, and progression of various neurodegenerative conditions. For example, regional and global decreases in brain viscoelasticity have been observed in Alzheimer's disease and multiple sclerosis. It has been found that as the brain ages, it loses its viscoelastic integrity due to degeneration of neurons and oligodendrocytes. A recent study looked into both the isotropic and anisotropic stiffness in brain and found a correlation between the two and with age, particularly in gray matter.
MRE may also have applications for understanding the adolescent brain. Recently, it was found that adolescents have regional differences in brain viscoelasticity relative to adults.
MRE has also been applied to functional neuroimaging. Whereas functional magnetic resonance imaging (fMRI) infers brain activity by detecting relatively slow changes in blood flow, functional MRE is capable of detecting neuromechanical changes in the brain related to neuronal activity occurring on the 100-millisecond scale.
Kidney
MRE has also been applied to investigate the biomechanical properties of the kidney. The feasibility of clinical renal MRE was first reported in 2011 for healthy volunteers and in 2012 for renal transplant patients. Renal MRE is more challenging than MRE of larger organs such as the brain or liver due to fine mechanical features in the renal cortex and medulla as well as the acoustically shielded position of the kidneys within the abdominal cavity. To overcome these challenges, researchers have been looking at different passive drivers and imaging techniques to best deliver shear waves to the kidneys. Studies investigating renal diseases such as renal allograft dysfunction, lupus nephritis, immunoglobulin A nephropathy (IgAN), diabetic nephrology, renal tumors and chronic kidney disease demonstrate that kidney stiffness is sensitive to kidney function and renal perfusion.
Prostate
The prostate can also be examined by MRE, in particular for the detection and diagnosis of prostate cancer. To ensure good shear wave penetration in the prostate gland, different actuator systems were designed and evaluated. Preliminary results in patients with prostate cancer showed that changes in stiffness allowed differentiation of cancerous tissue from normal tissue. Magnetic Resonance Elastography has been successfully used in patients with prostate cancer showing high specificity and sensitivity in differentiating prostate cancer from benign prostatic diseases (see figure on right (b)). Even higher specificity of 95% for prostate cancer was achieved when Magnetic Resonance Elastography was combined with systematic image interpretation using PI-RADS (version 2.1).
Pancreas
The pancreas is one of the softest tissues in the abdomen. Given that pancreatic diseases including pancreatitis and pancreatic cancer significantly increase stiffness, MRE is a promising tool for diagnosing benign and malignant conditions of the pancreas. Abnormally high pancreatic stiffness was detected by MRE in patients with both acute and chronic pancreatitis. Pancreatic stiffness was also used to distinguish pancreatic malignancy from benign masses and to predict the occurrence of pancreatic fistula after pancreaticoenteric anastomosis. Quantification of the volume of pancreatic tumors based on tomoelastographic measurement of stiffness was found to be excellently correlated with tumor volumes estimated by contrast-enhanced computed tomography. In patients with pancreatic ductal adenocarcinoma stiffness was found to be elevated in the tumor as well as in pancreatic parenchyma distal to the tumor, suggesting heterogeneous pancreatic involvement (figure on right (c)).
See also
Strain–encoded magnetic resonance imaging
Tomoelastography
References
Medical imaging
Magnetic resonance imaging | Magnetic resonance elastography | Chemistry | 1,996 |
5,561,097 | https://en.wikipedia.org/wiki/Flood%20bypass | A flood bypass is a region of land or a large man-made structure that is designed to convey excess flood waters from a river or stream in order to reduce the risk of flooding on the natural river or stream near a key point of interest, such as a city. Flood bypasses, sometimes called floodways, often have man-made diversion works, such as diversion weirs and spillways, at their head or point of origin. The main body of a flood bypass is often a natural flood plain. Many flood bypasses are designed to carry enough water such that combined flows down the original river or stream and flood bypass will not exceed the expected maximum flood flow of the river or stream.
Flood bypasses are typically used only during major floods and act in a similar nature to a detention basin. Since the area of a flood bypass is significantly larger than the cross-sectional area of the original river or stream channel from which water is diverted, the velocity of water in a flood bypass will be significantly lower than the velocity of the flood water in the original system. These low velocities often cause increased sediment deposition in the flood bypass, thus it is important to incorporate a maintenance program for the entire flood bypass system when it is not being actively used during a flood operation.
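The velocity drop follows directly from the continuity equation Q = A·v: for the same diverted discharge, velocity scales inversely with flow area. A minimal sketch (the discharge and cross-sectional areas below are illustrative assumptions):

    # Continuity equation Q = A * v: the same diverted discharge spread over a
    # much larger bypass cross-section moves far more slowly, which promotes
    # sediment deposition in the bypass.
    Q = 500.0             # m^3/s diverted flood discharge (assumed)
    channel_area = 250.0  # m^2 cross-section of the original channel (assumed)
    bypass_area = 5000.0  # m^2 cross-section of the flood bypass (assumed)

    v_channel = Q / channel_area  # 2.0 m/s
    v_bypass = Q / bypass_area    # 0.1 m/s
    print(f"channel velocity {v_channel:.1f} m/s, bypass velocity {v_bypass:.2f} m/s")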
When not being used to convey water, flood bypasses are sometimes used for agricultural or environmental purposes. The land is often owned by a public authority and then rented to farmers or ranchers, who in turn plant crops or herd livestock that feed off the flood plain. Since the flood bypass is subjected to sedimentation during flood events, the land is often very productive and even a loss of crops due to flooding can sometimes be recovered due to the high yield of the land during the non-flood periods.
Examples
Bonnet Carré Spillway
Eastside Bypass
Fargo-Moorhead Area Diversion Project
Yolo Bypass
Hydraulic engineering
Hydrology
Flood control | Flood bypass | Physics,Chemistry,Engineering,Environmental_science | 383 |
2,861,533 | https://en.wikipedia.org/wiki/Heap%20leaching | Heap leaching is an industrial mining process used to extract precious metals, copper, uranium, and other compounds from ore using a series of chemical reactions that absorb specific minerals and re-separate them after their division from other earth materials. Similar to in situ mining, heap leach mining differs in that it places ore on a liner, then adds the chemicals via drip systems to the ore, whereas in situ mining lacks these liners and pulls pregnant solution up to obtain the minerals. Heap leaching is widely used in modern large-scale mining operations as it produces the desired concentrates at a lower cost compared to conventional processing methods such as flotation, agitation, and vat leaching.
Additionally, dump leaching is an essential part of most copper mining operations and, along with other factors, determines the quality grade of the produced material.
Because dump leaching can contribute substantially to the economic viability of a mining operation, it is advantageous to include the results of the leaching operation in the overall economic evaluation of the project.
The process has ancient origins; one of the classical methods for the manufacture of copperas (iron sulfate) was to heap up iron pyrite and collect the leachate from the heap, which was then boiled with iron to produce iron(II) sulfate.
Process
The mined ore is usually crushed into small chunks and heaped on an impermeable plastic or clay lined leach pad where it can be irrigated with a leach solution to dissolve the valuable metals. While sprinklers are occasionally used for irrigation, more often operations use drip irrigation to minimize evaporation, provide more uniform distribution of the leach solution, and avoid damaging the exposed mineral. The solution then percolates through the heap and leaches both the target and other minerals. This process, called the "leach cycle," generally takes from one or two months for simple oxide ores (e.g. most gold ores) to two years for nickel laterite ores. The leach solution containing the dissolved minerals is then collected, treated in a process plant to recover the target mineral and in some cases precipitate other minerals, and recycled to the heap after reagent levels are adjusted. Ultimate recovery of the target mineral can range from 30% of contained run-of-mine dump leaching sulfide copper ores to over 90% for the ores that are easiest to leach, some oxide gold ores.
The essential questions to address during the process of the heap leaching are:
Can the investment of crushing the ore be justified by the potential increase in recovery and rate of recovery?
How should the concentration of acid be altered over time in order to produce a solution that can be economically treated?
How does the form of a heap affect the recovery and solution grade?
Under any given set of circumstances, what type of recovery can be expected before the leach solution quality drops below a critical limit?
What recovery (quantifiable measure) can be expected?
In recent years, the addition of an agglomeration drum has improved on the heap leaching process by allowing for a more efficient leach. The rotary drum agglomerator works by taking the crushed ore fines and agglomerating them into more uniform particles. This makes it much easier for the leaching solution to percolate through the pile, making its way through the channels between particles.
The addition of an agglomeration drum also has the added benefit of being able to pre-mix the leaching solution with the ore fines to achieve a more concentrated, homogeneous mixture and allow the leach to begin prior to the heap.
Although heap leach design has made significant progress over the last few years through the use of new materials and improved analytical tools, industrial experience shows that there are significant benefits from extending the design process beyond the liner and into the rock pile itself. Characterization of the physical and hydraulic (hydrodynamic) properties of ore-for-leach focuses on the direct measurement of the key properties of the ore, namely:
The relationship between heap height and ore bulk density (density profile)
The relationship between bulk density and percolation capacity (conductivity profile)
The relationship between the bulk density, porosity and its components (micro and macro)
The relationship between the moisture content and percolation capacity (conductivity curve)
The relationship between the aforementioned parameters and the ore preparation practices (mining, crushing, agglomeration, curing, and method of placement)
Theoretical and numerical analysis, and operational data show that these fundamental mechanisms are controlled by scale, dimensionality, and heterogeneity, all of which adversely affect the scalability of metallurgical and hydrodynamic properties from the lab to the field. The dismissal of these mechanisms can result in a number of practical and financial problems that will resonate throughout the life of the heap impacting the financial return of the operation. Through procedures that go beyond the commonly employed metallurgical testing and the integration of data gleaned through real time 3D monitoring, a more complete representative characterization of the physicochemical properties of the heap environment is obtained. This improved understanding results in a significantly higher degree of accuracy in terms of creating a truly representative sample of the environment within the heap.
By adhering to the characterization identified above, a more comprehensive view of heap leach environments can be realized, allowing the industry to move away from the de facto black-box approach to a physicochemically inclusive industrial reactor model.
Precious metals
The crushed ore is irrigated with a dilute alkaline cyanide solution. The solution containing the dissolved precious metals in a pregnant solution continues percolating through the crushed ore until it reaches the liner at the bottom of the heap where it drains into a storage (pregnant solution) pond. After separating the precious metals from the pregnant solution, the dilute cyanide solution (now called "barren solution") is normally re-used in the heap-leach-process or occasionally sent to an industrial water treatment facility where the residual cyanide is treated and residual metals are removed. In very high rainfall areas, such as the tropics, in some cases there is surplus water that is then discharged to the environment, after treatment, posing possible water pollution if treatment is not properly carried out.
The production of one gold ring through this method, can generate 20 tons of waste material.
During the extraction phase, the gold ions form complex ions with the cyanide:
Au⁺ (s) + 2 CN⁻ (aq) → [Au(CN)₂]⁻ (aq)
Recuperation of the gold is readily achieved with a redox-reaction:
2 [Au(CN)₂]⁻ (aq) + Zn (s) → [Zn(CN)₄]²⁻ (aq) + 2 Au (s)
The most common methods to remove the gold from solution are either using activated carbon to selectively absorb it, or the Merrill-Crowe process where zinc powder is added to cause a precipitation of gold and zinc. The fine product can be either doré (gold-silver bars) or zinc-gold sludge that is then refined elsewhere.
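From the cementation reaction above, one mole of zinc displaces two moles of gold, which fixes the stoichiometric zinc demand. A rough sketch of that arithmetic (function and variable names are illustrative; real Merrill-Crowe plants dose zinc well above the stoichiometric minimum):

    # Stoichiometric zinc demand for Merrill-Crowe cementation:
    # 2 [Au(CN)2]- + Zn -> [Zn(CN)4]2- + 2 Au, i.e. 1 mol Zn per 2 mol Au.
    M_AU = 196.97  # g/mol, molar mass of gold
    M_ZN = 65.38   # g/mol, molar mass of zinc

    def zinc_required_kg(gold_recovered_kg):
        moles_au = gold_recovered_kg * 1000.0 / M_AU
        moles_zn = moles_au / 2.0      # from the balanced equation
        return moles_zn * M_ZN / 1000.0

    # About 0.17 kg of zinc per kg of gold at stoichiometry
    print(f"{zinc_required_kg(1.0):.2f} kg Zn per kg Au")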
Copper ores
The method is similar to the cyanide method above, except sulfuric acid is used to dissolve copper from its ores. The acid is recycled from the solvent extraction circuit (see solvent extraction-electrowinning, SX/EW) and reused on the leach pad. A byproduct is iron(II) sulfate, jarosite, which is produced as a byproduct of leaching pyrite, and sometimes even the same sulfuric acid that is needed for the process. Both oxide and sulfide ores can be leached, though the leach cycles are much different and sulfide leaching requires a bacterial, or bio-leach, component.
In 2011 leaching, both heap leaching and in-situ leaching, produced 3.4 million metric tons of copper, 22 percent of world production. The largest copper heap leach operations are in Chile, Peru, and the southwestern United States.
Although heap leaching is a low cost-process, it normally has recovery rates of 60-70%. It is normally most profitable with low-grade ores. Higher-grade ores are usually put through more complex milling processes where higher recoveries justify the extra cost. The process chosen depends on the properties of the ore.
The final product is cathode copper.
Nickel ores
This method is an acid heap leaching method like that of the copper method in that it utilises sulfuric acid instead of cyanide solution to dissolve the target minerals from crushed ore. The amount of sulfuric acid required is much higher than for copper ores, as high as 1,000 kg of acid per tonne of ore, but 500 kg is more common. The method was originally patented by Australian miner BHP and is being commercialized by Cerro Matoso in Colombia, a wholly owned subsidiary of BHP; Vale in Brazil; and European Nickel for the rock laterite deposits of Turkey, Talvivaara mine in Finland, the Balkans, and the Philippines. There currently are no operating commercial scale nickel laterite heap leach operations, but there is a sulphide HL operating in Finland.
Nickel recovery from the leach solutions is much more complex than for copper and requires various stages of iron and magnesium removal, and the process produces both leached ore residue ("ripios") and chemical precipitates from the recovery plant (principally iron oxide residues, magnesium sulfate and calcium sulfate) in roughly equal proportions. Thus, a unique feature of nickel heap leaching is the need for a tailings disposal area.
The final product can be nickel hydroxide precipitates (NHP) or mixed metal hydroxide precipitates (MHP), which are then subject to conventional smelting to produce metallic nickel.
Uranium ores
Similar to copper oxide heap leaching, also using dilute sulfuric acid. Rio Tinto is commercializing this technology in Namibia and Australia; the French nuclear fuel company Orano, in Niger with two mines and Namibia; and several other companies are studying its feasibility.
The final product is yellowcake and requires significant further processing to produce fuel-grade feed.
Apparatus
While most mining companies have shifted from a previously accepted sprinkler method to the percolation of slowly dripping choice chemicals including cyanide or sulfuric acid closer to the actual ore bed, heap leach pads have not changed too much throughout the years. There are still four main categories of pads: conventional, dump leach, valley fills, and on/off pads. Typically, each pad only has a single, geomembrane liner for each pad, with a minimum thickness of 1.5mm, usually thicker.
The conventional pads simplest in design are used for mostly flat or gentle areas and hold thinner layers of crushed ore. Dump leach pads hold more ore and can usually handle a less flat terrain. Valley fills are pads situated at valley bottoms or levels that can hold everything falling into it. On/off pads involve putting significantly larger loads on the pads and removing and reloading it after every cycle.
Many of these mines which previously had digging depths of about 15 meters are digging deeper than ever before to mine materials, approximately 50 meters, sometimes more, which means that, in order to accommodate all of the ground being displaced, pads will have to hold higher weights from more crushed ore being contained in a smaller area (Lupo 2010). With that increase in build up comes in potential for decrease in yield or ore quality, as well as potential either weak spots in the lining or areas of increased pressure buildup. This build up still has the potential to lead to punctures in the liner. As of 2004 cushion fabrics, which could reduce potential punctures and their leaking, were still being debated due to their tendency to increase risks if too much weight on too large a surface was placed on the cushioning (Thiel and Smith 2004). In addition, some liners, depending on their composition, may react with salts in the soil as well as acid from the chemical leaching to affect the successfulness of the liner. This can be amplified over time.
Environmental concerns
Effectiveness
Heap leach mining works well for large volumes of low grade ores, as reduced metallurgical treatment (comminution) of the ore is required in order to extract an equivalent amount of minerals when compared to milling. The significantly reduced processing costs are offset by the reduced yield of usually approximately 60-70%. The amount of overall environmental impact caused by heap leaching is often lower than more traditional techniques. It also requires less energy consumption to use this method, which many consider to be an environmental alternative.
Government regulation
In the United States, the General Mining Law of 1872 gave rights to explore and mine on public domain land; the original law did not require post-mining reclamation (Woody et al. 2011). Mined land reclamation requirements on federal land depended on state requirements until the passage of the Federal Land Policy and Management Act in 1976. Currently, mining on federal land must have a government-approved mining and reclamation plan before mining can start. Reclamation bonds are required. Mining on either federal, state, or private land is subject to the requirements of the Clean Air Act and the Clean Water Act.
One solution proposed to reclamation problems is the privatization of the land to be mined (Woody et al. 2011).
Cultural and social concerns
With the rise of the environmentalist movement has also come an increased appreciation for social justice, and mining has showed similar trends lately. Societies located near potential mining sites are at increased risk to be subjected to injustices as their environment is affected by the changes made to mined lands—either public or private—that could eventually lead to problems in social structure, identity, and physical health (Franks 2009). Many have argued that by cycling mine power through local citizens, this disagreement can be alleviated, since both interest groups would have shared and equal voice and understanding in future goals. However, it is often difficult to match corporate mining interests with local social interests, and money is often a deciding factor in the successes of any disagreements. If communities are able to feel like they have a valid understanding and power in issues concerning their local environment and society, they are more likely to tolerate and encourage the positive benefits that come with mining, as well as more effectively promote alternative methods to heap leach mining using their intimate knowledge of the local geography (Franks 2009).
Examples
See also
Environmental justice
Gold cyanidation
Gold extraction
In-situ leach
Mineral processing
Tailings
Yellowcake
References
External links
Heap leaching into groundwater is a major health concern from Rensselaer Polytechnic Institute school of engineering
European Nickel PLC official website
USGS 2005 Minerals Yearbook - Nickel
Metallurgical processes | Heap leaching | Chemistry,Materials_science | 3,037 |
18,213,407 | https://en.wikipedia.org/wiki/Kinetic%20capillary%20electrophoresis | Kinetic capillary electrophoresis or KCE is capillary electrophoresis of molecules that interact during electrophoresis.
KCE was introduced and developed by Professor Sergey Krylov and his research group at York University, Toronto, Canada. It serves as a conceptual platform for development of homogeneous chemical affinity methods for studies of molecular interactions (measurements of binding and rate constants) and affinity purification (purification of known molecules and search for unknown molecules). Different KCE methods are designed by varying initial and boundary conditions – the way interacting molecules enter and exit the capillary. Several KCE methods were described: non-equilibrium capillary electrophoresis of the equilibrium mixtures (NECEEM), sweeping capillary electrophoresis (SweepCE), and plug-plug KCE (ppKCE).
External links
More detailed description and several applications of KCE methods (measuring equilibrium and rate constants of molecular interactions, quantitative affinity analysis of proteins, thermochemistry of protein–ligand interactions, selection of aptamers, determination of temperature inside a capillary) can be found in a PDF presentation: KCE is a conceptual platform for kinetic homogeneous affinity methods.
References
Electrophoresis
Chemical kinetics | Kinetic capillary electrophoresis | Chemistry,Biology | 257 |
16,521,792 | https://en.wikipedia.org/wiki/Particle%20size%20analysis | Particle size analysis, particle size measurement, or simply particle sizing, is the collective name of the technical procedures, or laboratory techniques which determines the size range, and/or the average, or mean size of the particles in a powder or liquid sample.
Particle size analysis is part of particle science, and it is generally carried out in particle technology laboratories.
The particle size measurement is typically achieved by means of devices, called Particle Size Analyzers (PSA), which are based on different technologies, such as high definition image processing, analysis of Brownian motion, gravitational settling of the particle and light scattering (Rayleigh and Mie scattering) of the particles.
The particle size can have considerable importance in a number of industries including the chemical, food, mining, forestry, agriculture, cosmetics, pharmaceutical, energy, and aggregate industries.
Particle size analysis based on light scattering
Particle size analysis based on light scattering has widespread application in many fields, as it allows relatively easy optical characterization of samples enabling improved quality control of products in many industries including pharmaceutical, food, cosmetic, and polymer production. Recent years have seen many advancements in light scattering technologies for particle characterization.
For particles in the lower nanometer to lower micrometer range, dynamic light scattering (DLS) has now become an industry standard technique. It is also by far the most widely used light scattering technique for particle characterization in the academic world. This method analyzes the fluctuations of scattered light by particles in suspension when illuminated with a laser to determine the velocity of the Brownian motion, which can then be used to obtain the hydrodynamic size of particles using the Stokes-Einstein relationship. DLS is a fast and non-invasive technique, which is also precise and highly repeatable. Furthermore, since the technique is based on the measurement of light scattering as a function of time, the technique is considered absolute and the DLS instruments do not require calibration. Amongst its disadvantages is the fact that it does not properly resolve highly polydisperse samples, while the presence of large particles can affect size accuracy. Other scattering techniques have emerged, such as nanoparticle tracking analysis (NTA), which tracks individual particle movement through scattering using image recording. NTA also measures the hydrodynamic size of particles from the diffusion coefficient but is capable of overcoming some of the limitations posed by DLS. The next generation of NTA technology is called interferometric nanoparticle tracking analysis (iNTA) and is based on the interferometric scattering microscopy (iSCAT). In contrast to NTA, iNTA has a superior size resolution and gives access to the effective refractive index of the particles.
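The Stokes–Einstein step that DLS relies on can be sketched in a few lines; the diffusion coefficient, temperature, and viscosity below are illustrative assumptions (water at 25 °C):

    import math

    def hydrodynamic_diameter(D_m2_s, temperature_K=298.15, viscosity_Pa_s=8.9e-4):
        """Stokes-Einstein relation: d_h = k_B * T / (3 * pi * eta * D)."""
        k_B = 1.380649e-23  # Boltzmann constant, J/K
        return k_B * temperature_K / (3.0 * math.pi * viscosity_Pa_s * D_m2_s)

    D = 4.9e-12  # m^2/s, assumed diffusion coefficient measured by DLS
    print(f"hydrodynamic diameter ~ {hydrodynamic_diameter(D) * 1e9:.0f} nm")  # ~100 nm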
While the above-mentioned techniques are best suited for measuring particles typically in the submicron region, particle size analyzers (PSAs) based on static light scattering or laser diffraction (LD) have become the most popular and widely used instruments for measuring particles from hundreds of nanometers to several millimeters. Similar scattering theory is also utilized in systems based on non-electromagnetic wave propagation, such as ultrasonic analyzers. In LD PSAs, a laser beam is used to irradiate a dilute suspension of particles. The light scattered by the particles in the forward direction is focused by a lens onto a large array of concentric photodetector rings. The smaller the particle is, the larger the scattering angle of the laser beam is. Thus, by measuring the angle-dependent scattered intensity, one can infer the particle size distribution using Fraunhofer or Mie scattering models. In the latter case, prior knowledge of the refractive index of the particle being measured as well as the dispersant is required.
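The inverse relationship between size and scattering angle can be illustrated with the Fraunhofer approximation, in which the first diffraction minimum of a spherical particle much larger than the wavelength falls at sin θ ≈ 1.22 λ/d (commercial instruments use the full Mie solution; the wavelength and sizes below are assumptions):

    import math

    def first_minimum_angle_deg(diameter_m, wavelength_m=633e-9):
        """Fraunhofer approximation: sin(theta) ~ 1.22 * lambda / d, valid for d >> lambda."""
        s = min(1.22 * wavelength_m / diameter_m, 1.0)
        return math.degrees(math.asin(s))

    # Smaller particles push the diffraction pattern out to larger angles.
    for d in (100e-6, 10e-6, 2e-6):
        print(f"{d * 1e6:5.0f} um particle -> first minimum at {first_minimum_angle_deg(d):5.1f} deg")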
Commercial LD PSAs have gained popularity due to their broad dynamic range, rapid measurement, high reproducibility and the capability to perform online measurements. However, these devices are generally large in size (~700 × 300 × 450 mm), heavy (~30 kg) and expensive (in the 50–200 K€ range). On the one hand, the large size of common devices is due to the large distance needed between the sample and the detectors to provide the desired angular resolution. Furthermore, their high price is mainly due to the use of expensive laser sources and a large number of detectors, i.e., one sensor for each scattering angle to be monitored. Some commercial devices contain up to twenty sensors. This complexity of commercial LD PSAs, together with the fact that they often require maintenance and highly trained personnel, make them impractical in the majority of online industrial applications, which require the installation of probes in processing environments, often at multiple locations. An alternative method for PSD is cuvette-based SPR technique, that simultaneously measures the particle size ranging 10 nm-10 μm and concentration in a standard spectrophotometer. The optical filter inserted in the cuvette consists of nano-photonic crystals with very high angular resolution, which enables the analysis of PSD by automatically quantifying Mie scattering and Rayleigh scattering.
The application of LD PSAs is also normally restricted to dilute suspensions. This is because the optical models used to estimate the particle size distribution (PSD) are based on a single scattering approximation. In practice, most industrial processes require measuring concentrated suspensions, where multiple scattering becomes a prominent effect. Multiple scattering in dense media leads to an underestimation of the particle size since the light scattered by the particles encounters diffraction points multiple times before reaching the detector, which in turn increases the apparent scattering angle. To overcome this issue, LD PSAs require appropriate sampling and dilution systems, which increase capital investments and operational costs. Another approach is to apply multiple scattering correction models together with the optical models to compute the PSD. A large number of algorithms for multiple scattering correction can be found in the literature. However, these algorithms typically require implementing a complex correction, which increases the computation time and is often not suitable for online measurements. An alternative approach to compute the PSD without the use of optical models and complex correction factors is to apply machine learning (ML) techniques.
Microfluidic diffusional sizing
Microfluidic diffusional sizing (MDS) is a method of particle size analysis dependent on the diffusion of particles within a laminar flow. The method has found applications in proteomics and related fields where nano-sized particles may vary in size depending on their environment.
Paints and Coatings
Typically, paints and coatings are subjected to multiple rounds of particle size analysis, as the particle size of the individual components influences parameters as diverse as tint strength, hiding power, gloss, viscosity, stability and weather resistance.
Mining and Building Materials
The size of materials being processed in an operation is very important. Having oversize material being conveyed will cause damage to equipment and slow down production. Particle-size analysis also helps the effectiveness of SAG Mills when crushing material.
In the building industry, the particle size can directly affect the strength of the final material, as is observed for cement. Two of the most widely used techniques for the particle size characterization of minerals are sieving and laser diffraction. These techniques are faster and cheaper compared to image-based techniques.
Food and Beverages Industry
The optimization of the particle size distribution facilitates the pumping, mixing and transportation of foodstuff. Particle size analysis is usually done with any milled food, such as coffee, flour, cocoa powder. It is especially helpful with chocolate quality to ensure there is a consistent taste and feeling when eaten. Furthermore, in the case of food emulsions, particle size analysis is relevant to predict stability and shelf-life, and optimize homogenization.
Agriculture
The gradation of soils, or soil texture, affects water and nutrient holding and drainage capabilities. For sand-based soils, particle size can be the dominant characteristic affecting soil performance and hence crop yield. Sieving has long been the technique of choice for soil texture analysis, although laser diffraction instruments are increasingly used as they considerably speed up the analytical process, and provide highly reproducible results.
Particle size analysis in the agriculture industry is paramount because unwanted materials will contaminate products if they are not detected. By having an automated particle size analyzer, companies can closely monitor their processes.
Forestry
Wood particles used to make various types of products rely on particle-size analysis to maintain high quality standards. By doing so, companies reduce waste and become more productive.
Aggregates
Having properly sized particles allows aggregate companies to create long-lasting roads and other products. Particle size analysis is also routinely conducted on bitumen emulsions to predict their stability and their behavior.
Biology
Particle size analyzers are also used in biology to measure protein aggregation.
DLS is a particularly appreciated technique for the characterization of nanoparticles designed for drug delivery, such as vaccines. DLS instruments are for instance part of the quality control process for mRNA vaccines formulated in lipid nanoparticle carriers.
Selecting the most appropriate technique for size analysis
There is a large number of methods for the determination of particle size, and it is important to acknowledge that these different methods are not expected to give identical results. The size of a particle depends on the method used for its measurement, and it is important to choose the method that is most relevant to the application.
The "See also" section covers many of these techniques. In most of them, the particle size is inferred from a measurement of, for example: light scattering; electrical resistance; particle motion, rather than a direct measurement of particle diameter. This enables rapid measurement of a particle size distribution by an instrument, but does require some form of calibration or assumptions regarding the nature of the particles. Most often this includes the assumption of spherical particles, thus giving a result which is an equivalent spherical diameter. Thus, it is usual for measured particle size distributions to be different when comparing the results between different equipment. The most appropriate method to use is normally the one where the method is aligned to the end use of the data.
For example, to choose whether a chemical compound should be measured by dynamic light scattering or laser diffraction, one generally considers the expected size range, the sample type (liquid or solid), the amount of sample available, the chemical stability, as well its application field. If designing a sedimentation vessel, then a sedimentation technique for sizing is most relevant. However, this approach is often not possible, and an alternative technique must be used. An online Expert system to assist in the selection (and elimination) of particle size analysis equipment has been developed.
See also
Sieving
Sieve analysis
Laser diffraction analysis
Sedimentation
Elutriation
Microscope counting
Coulter counter
Dynamic light scattering
Imaging Particle Analysis
Aerosol mass spectrometry
Obscuration SPOS
References
Particulates
Analytical chemistry
Laboratory techniques | Particle size analysis | Chemistry | 2,201 |
13,146,531 | https://en.wikipedia.org/wiki/Differentiation%20of%20trigonometric%20functions | The differentiation of trigonometric functions is the mathematical process of finding the derivative of a trigonometric function, or its rate of change with respect to a variable. For example, the derivative of the sine function is written sin′(a) = cos(a), meaning that the rate of change of sin(x) at a particular angle x = a is given by the cosine of that angle.
All derivatives of circular trigonometric functions can be found from those of sin(x) and cos(x) by means of the quotient rule applied to functions such as tan(x) = sin(x)/cos(x). Knowing these derivatives, the derivatives of the inverse trigonometric functions are found using implicit differentiation.
Proofs of derivatives of trigonometric functions
Limit of sin(θ)/θ as θ tends to 0
The diagram at right shows a circle with centre O and radius r = 1. Let two radii OA and OB make an arc of θ radians. Since we are considering the limit as θ tends to zero, we may assume θ is a small positive number, say 0 < θ < π/2, so that the arc lies in the first quadrant.
In the diagram, let R1 be the triangle OAB, R2 the circular sector OAB, and R3 the triangle OAC.
The area of triangle OAB is:
Area(R1) = (1/2)·|OA|·|OB|·sin θ = (1/2) sin θ.
The area of the circular sector OAB is:
Area(R2) = (1/2) θ.
The area of the triangle OAC is given by:
Area(R3) = (1/2)·|OA|·|AC| = (1/2) tan θ.
Since each region is contained in the next, one has:
(1/2) sin θ < (1/2) θ < (1/2) tan θ.
Moreover, since sin θ > 0 in the first quadrant, we may divide through by (1/2) sin θ, giving:
1 < θ/sin θ < 1/cos θ, and hence cos θ < sin θ/θ < 1.
In the last step we took the reciprocals of the three positive terms, reversing the inequalities.
We conclude that for 0 < θ < π/2, the quantity sin(θ)/θ is always less than 1 and always greater than cos(θ). Thus, as θ gets closer to 0, sin(θ)/θ is "squeezed" between a ceiling at height 1 and a floor at height cos(θ), which rises towards 1; hence sin(θ)/θ must tend to 1 as θ tends to 0 from the positive side:
lim (θ→0⁺) sin(θ)/θ = 1.
For the case where θ is a small negative number, −π/2 < θ < 0, we use the fact that sine is an odd function:
sin(θ)/θ = sin(−θ)/(−θ), so the two-sided limit is also 1.
Limit of (cos(θ)-1)/θ as θ tends to 0
The last section enables us to calculate this new limit relatively easily. This is done by employing a simple trick. In this calculation, the sign of θ is unimportant.
Using the identity cos²θ − 1 = −sin²θ (obtained by multiplying the numerator and denominator by cos θ + 1), so that
(cos θ − 1)/θ = −(sin θ/θ)·(sin θ/(cos θ + 1)),
the fact that the limit of a product is the product of limits, and the limit result from the previous section, we find that:
lim (θ→0) (cos θ − 1)/θ = −1 · 0 = 0.
Limit of tan(θ)/θ as θ tends to 0
Using the limit for the sine function, the fact that the tangent function is odd, and the fact that the limit of a product is the product of limits, we find:
lim (θ→0) tan θ/θ = lim (θ→0) (sin θ/θ)·(1/cos θ) = 1 · 1 = 1.
Derivative of the sine function
We calculate the derivative of the sine function from the limit definition:
sin′(θ) = lim (δ→0) [sin(θ + δ) − sin(θ)]/δ.
Using the angle addition formula sin(θ + δ) = sin θ cos δ + sin δ cos θ, we have:
sin′(θ) = lim (δ→0) [sin θ cos δ + sin δ cos θ − sin θ]/δ = lim (δ→0) [sin θ·(cos δ − 1)/δ + cos θ·(sin δ/δ)].
Using the limits for the sine and cosine functions:
sin′(θ) = sin θ · 0 + cos θ · 1 = cos θ.
Derivative of the cosine function
From the definition of derivative
We again calculate the derivative of the cosine function from the limit definition:
cos′(θ) = lim (δ→0) [cos(θ + δ) − cos(θ)]/δ.
Using the angle addition formula cos(θ + δ) = cos θ cos δ − sin θ sin δ, we have:
cos′(θ) = lim (δ→0) [cos θ cos δ − sin θ sin δ − cos θ]/δ = lim (δ→0) [cos θ·(cos δ − 1)/δ − sin θ·(sin δ/δ)].
Using the limits for the sine and cosine functions:
cos′(θ) = cos θ · 0 − sin θ · 1 = −sin θ.
From the chain rule
To compute the derivative of the cosine function from the chain rule, first observe the following three facts:
cos θ = sin(π/2 − θ), sin θ = cos(π/2 − θ), and sin′(θ) = cos θ.
The first and the second are trigonometric identities, and the third is proven above. Using these three facts, we can write the following,
cos(x) = sin(π/2 − x).
We can differentiate this using the chain rule. Letting f(x) = sin x and g(x) = π/2 − x, we have:
cos′(x) = (f ∘ g)′(x) = f′(g(x))·g′(x) = cos(π/2 − x)·(−1) = −sin(x).
Therefore, we have proven that
cos′(x) = −sin(x).
Derivative of the tangent function
From the definition of derivative
To calculate the derivative of the tangent function tan θ, we use first principles. By definition:
tan′(θ) = lim (δ→0) [tan(θ + δ) − tan(θ)]/δ.
Using the well-known angle formula tan(θ + δ) = (tan θ + tan δ)/(1 − tan θ tan δ), we have:
tan′(θ) = lim (δ→0) (1/δ)·[(tan θ + tan δ)/(1 − tan θ tan δ) − tan θ] = lim (δ→0) [tan δ·(1 + tan²θ)]/[δ·(1 − tan θ tan δ)].
Using the fact that the limit of a product is the product of the limits:
tan′(θ) = lim (δ→0) (tan δ/δ) · lim (δ→0) (1 + tan²θ)/(1 − tan θ tan δ).
Using the limit for the tangent function, and the fact that tan δ tends to 0 as δ tends to 0:
tan′(θ) = 1 · (1 + tan²θ)/(1 − 0) = 1 + tan²θ.
We see immediately that:
tan′(θ) = 1 + tan²θ = (cos²θ + sin²θ)/cos²θ = 1/cos²θ = sec²θ.
From the quotient rule
One can also compute the derivative of the tangent function using the quotient rule.
tan′(θ) = (sin θ/cos θ)′ = [cos θ·cos θ − sin θ·(−sin θ)]/cos²θ = (cos²θ + sin²θ)/cos²θ.
The numerator can be simplified to 1 by the Pythagorean identity, giving us,
1/cos²θ.
Therefore,
tan′(θ) = 1/cos²θ = sec²θ.
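As a quick symbolic sanity check of the limits and derivatives obtained above (not part of the original derivations; this sketch assumes the SymPy library is available):

    import sympy as sp

    x = sp.symbols('x')

    # The three limits used in the proofs
    assert sp.limit(sp.sin(x) / x, x, 0) == 1
    assert sp.limit((sp.cos(x) - 1) / x, x, 0) == 0
    assert sp.limit(sp.tan(x) / x, x, 0) == 1

    # The derivatives of the circular functions
    assert sp.diff(sp.sin(x), x) == sp.cos(x)
    assert sp.diff(sp.cos(x), x) == -sp.sin(x)
    assert sp.simplify(sp.diff(sp.tan(x), x) - 1 / sp.cos(x) ** 2) == 0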
Proofs of derivatives of inverse trigonometric functions
The following derivatives are found by setting a variable y equal to the inverse trigonometric function that we wish to take the derivative of. Using implicit differentiation and then solving for dy/dx, the derivative of the inverse function is found in terms of y. To convert dy/dx back into being in terms of x, we can draw a reference triangle on the unit circle, letting θ be y. Using the Pythagorean theorem and the definition of the regular trigonometric functions, we can finally express dy/dx in terms of x.
Differentiating the inverse sine function
We let
y = arcsin(x),
Where
−π/2 ≤ y ≤ π/2.
Then
sin(y) = x.
Taking the derivative with respect to x on both sides and solving for dy/dx:
cos(y)·dy/dx = 1, so dy/dx = 1/cos(y).
Substituting in cos(y) = √(1 − sin²(y)) from above,
dy/dx = 1/√(1 − sin²(y)).
Substituting in x = sin(y) from above,
dy/dx = 1/√(1 − x²).
Differentiating the inverse cosine function
We let
y = arccos(x),
Where
0 ≤ y ≤ π.
Then
cos(y) = x.
Taking the derivative with respect to x on both sides and solving for dy/dx:
−sin(y)·dy/dx = 1, so dy/dx = −1/sin(y).
Substituting in sin(y) = √(1 − cos²(y)) from above, we get
dy/dx = −1/√(1 − cos²(y)).
Substituting in x = cos(y) from above, we get
dy/dx = −1/√(1 − x²).
Alternatively, once the derivative of arcsin(x) is established, the derivative of arccos(x) follows immediately by differentiating the identity arcsin(x) + arccos(x) = π/2, so that d/dx arccos(x) = −d/dx arcsin(x) = −1/√(1 − x²).
Differentiating the inverse tangent function
We let
y = arctan(x),
Where
−π/2 < y < π/2.
Then
tan(y) = x.
Taking the derivative with respect to x on both sides and solving for dy/dx:
Left side:
d/dx tan(y) = sec²(y)·dy/dx = (1 + tan²(y))·dy/dx,
using the Pythagorean identity sec²(y) = 1 + tan²(y).
Right side:
d/dx x = 1.
Therefore,
(1 + tan²(y))·dy/dx = 1, so dy/dx = 1/(1 + tan²(y)).
Substituting in x = tan(y) from above, we get
dy/dx = 1/(1 + x²).
Differentiating the inverse cotangent function
We let
where . Then
Taking the derivative with respect to on both sides and solving for dy/dx:
Left side:
using the Pythagorean identity
Right side:
Therefore,
Substituting ,
Alternatively, as the derivative of arctan(x) is derived as shown above, then using the identity arccot(x) = π/2 − arctan(x) it follows immediately that d/dx arccot(x) = −1/(1 + x²).
Differentiating the inverse secant function
Using implicit differentiation
Let
Then
(The absolute value in the expression is necessary as the product of secant and tangent in the interval of y is always nonnegative, while the radical is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.)
Using the chain rule
Alternatively, the derivative of arcsecant may be derived from the derivative of arccosine using the chain rule.
Let
Where
and
Then, applying the chain rule to :
Differentiating the inverse cosecant function
Using implicit differentiation
Let
Then
(The absolute value in the expression is necessary as the product of cosecant and cotangent in the interval of y is always nonnegative, while the radical is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.)
Using the chain rule
Alternatively, the derivative of arccosecant may be derived from the derivative of arcsine using the chain rule.
Let
Where
and
Then, applying the chain rule to :
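The same kind of symbolic check works for the inverse functions derived in this section (a sketch assuming SymPy; its inverse trigonometric functions are named asin, acos, atan and acot):

    import sympy as sp

    x = sp.symbols('x')

    assert sp.diff(sp.asin(x), x) == 1 / sp.sqrt(1 - x**2)
    assert sp.diff(sp.acos(x), x) == -1 / sp.sqrt(1 - x**2)
    assert sp.diff(sp.atan(x), x) == 1 / (1 + x**2)
    assert sp.diff(sp.acot(x), x) == -1 / (1 + x**2)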
See also
References
Bibliography
Handbook of Mathematical Functions, Edited by Abramowitz and Stegun, National Bureau of Standards, Applied Mathematics Series, 55 (1964)
Articles containing proofs
Differential calculus
Mathematical identities | Differentiation of trigonometric functions | Mathematics | 1,531 |
70,121,241 | https://en.wikipedia.org/wiki/Alterococcus%20agarolyticus | Alterococcus agarolyticus is a Gram-negative, facultatively anaerobic, halophilic and thermophilic bacterium from the genus of Alterococcus.
References
Verrucomicrobiota
Bacteria described in 1999 | Alterococcus agarolyticus | Biology | 53 |
73,457,377 | https://en.wikipedia.org/wiki/N%2CN%27-Diallyl-L-tartardiamide | {{DISPLAYTITLE:N,N'-Diallyl-L-tartardiamide}}
N,N′-Diallyl-L-tartardiamide (DATD) is a crosslinking agent for polyacrylamide gels, e.g., as used for SDS-PAGE. Compared to bisacrylamide gels, DATD gels have a stronger interaction with glass, and therefore are used in applications where the polyacrylamide gel acts as a "plug" structural component at the bottom of a gel electrophoresis apparatus, thereby preventing a weak discontinuous gel from sliding out from or otherwise moving within the apparatus. Unlike bisacrylamide-polyacrylamide gels, DATD-polyacrylamide gels can be conveniently dissolved using periodic acid due to the presence of vicinal diols in DATD. DATD is the slowest polyacrylamide crosslinker tested, and can act as an inhibitor of polymerization at high concentrations.
See also
bisacrylamide
References
Acrylamides
Monomers | N,N'-Diallyl-L-tartardiamide | Chemistry,Materials_science | 238 |
13,351,034 | https://en.wikipedia.org/wiki/Werner%20Kuhn%20%28chemist%29 | Werner Kuhn (February 6, 1899 – August 27, 1963) was a Swiss physical chemist who developed the first model of the viscosity of polymer solutions using statistical mechanics. He is known for being the first to apply Boltzmann's entropy formula, $S = k_B \ln W$,
to the modeling of rubber molecules, i.e. the "rubber band entropy model", molecules which he imagined as chains of N independently oriented links of length b with an end-to-end distance of r. This model, which resulted in the derivation of the thermal equation of state of rubber, has since been extrapolated to the entropic modeling of proteins and other conformational polymer chained molecules attached to a surface.
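For readers unfamiliar with the model, the standard Gaussian (freely jointed) chain form of this entropy is sketched below. This is a textbook approximation rather than a quotation of Kuhn's original derivation; $N$, $b$ and $r$ are as defined above, $S_0$ is an $r$-independent constant, and $k_B$ is the Boltzmann constant.

```latex
% Gaussian-chain approximation for a freely jointed chain of N links of length b
% with end-to-end distance r (valid for r much smaller than the contour length Nb):
S(r) = k_B \ln W(r) \approx S_0 - \frac{3 k_B r^2}{2 N b^2},
\qquad
f = -T\,\frac{\partial S}{\partial r} = \frac{3 k_B T}{N b^2}\, r .
```

The retractive force grows linearly with both extension and absolute temperature, which is the entropic "rubber band" behaviour referred to above.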
Kuhn received a degree in chemical engineering at the Eidgenössische Technische Hochschule (ETH, Federal Institute of Technology), in Zürich, and later a doctorate (1923) in physical chemistry. He was appointed professor of physical chemistry at the University of Kiel (1936–39) and then returned to Switzerland as director of the Physico-Chemical Institute of the University of Basel (1939–63), where he also served as rector (1955–56).
In a 1951 lecture along with his student V.B. Hargitay, he was the first to hypothesize the countercurrent multiplier mechanism in the mammalian kidney, later to be discovered in many other similar biological systems.
See also
Excluded volume
Kuhn length
References
External links
Hirsch, Warren (2003). "Disorder in un-stretched rubberbands", JCE, February Vol. 80, No. 2, p. 145
Thermodynamicists
1899 births
1963 deaths
Swiss physical chemists | Werner Kuhn (chemist) | Physics,Chemistry | 352 |
36,353,404 | https://en.wikipedia.org/wiki/Hypocreopsis%20rhododendri | Hypocreopsis rhododendri is an ascomycete fungus. It is commonly known as hazel gloves due to the resemblance of its orange-brown, radiating lobular ascocarp to rubber gloves, and because it is found on hazel (Corylus avellana) stems.
Distribution
Hypocreopsis rhododendri is found on the hyperoceanic west coasts of Britain and Ireland, in the Atlantic Pyrenees in south western France, and in the Appalachian Mountains in the eastern United States.
Habitat
In the Appalachian mountains, H. rhododendri was originally found growing on Rhododendron maximum, and was subsequently found on Kalmia latifolia and Quercus sp.
In Europe, H. rhododendri is found in Atlantic hazel woodland, mainly on hazel stems. It has never been found on Rhododendron species.
Host
Although H. rhododendri is found on woody stems, it has been suggested that it is not a wood-decay fungus, but is instead a parasite of the wood-decay fungus Hymenochaete corrugata.
References
External links
Report on hazel gloves Hypocreopsis rhododendri, a UK BAP ascomycete fungus. English Nature Research Report.
Hazel gloves. Scottish Natural Heritage.
Scottish Fungi: Hazel gloves research news.
Fungi described in 1922
Fungi of Europe
Fungi of North America
Hypocreaceae
Fungus species | Hypocreopsis rhododendri | Biology | 308 |
1,174,172 | https://en.wikipedia.org/wiki/Current%20loop | In electrical signalling an analog current loop is used where a device must be monitored or controlled remotely over a pair of conductors. Only one current level can be present at any time.
A major application of current loops is the industry de facto standard 4–20 mA current loop for process control applications, where they are extensively used to carry signals from process instrumentation to proportional–integral–derivative (PID) controllers, supervisory control and data acquisition (SCADA) systems, and programmable logic controllers (PLCs). They are also used to transmit controller outputs to the modulating field devices such as control valves. These loops have the advantages of simplicity and noise immunity, and have a large international user and equipment supplier base. Some 4–20 mA field devices can be powered by the current loop itself, removing the need for separate power supplies, and the "smart" Highway Addressable Remote Transducer (HART) Protocol uses the loop for communications between field devices and controllers. Various automation protocols may replace analog current loops, but 4–20 mA is still a principal industrial standard.
Process control 4–20 mA loops
In industrial process control, analog 4–20 mA current loops are commonly used for electronic signalling, with the two values of 4 and 20 mA representing 0–100% of the range of measurement or control. These loops are used both for carrying sensor information from field instrumentation and carrying control signals to the process modulating devices, such as a valve.
The key advantages of the current loop are:
The loop can often power the remote device, with power supplied by the controller, thus removing need for power cabling. Many instrumentation manufacturers produce 4–20 mA sensors which are "loop powered".
The "live" or "elevated" zero of 4 mA allows powering of the device even with no process signal output from the field transmitter.
The accuracy of the signal is not affected by voltage drop in the interconnecting wiring.
It has high noise immunity, as it is low-impedance circuit, usually through twisted-pair conductors.
It is self-monitoring; currents less than 3.8 mA or more than 20.5 mA are taken to indicate a fault.
It can be carried over long cables up to the limit of the resistance for the voltage used.
Inline displays can be inserted and powered by the loop, as long as total allowable loop resistance is not exceeded.
Easy conversion to voltage using a resistor.
Loop-powered "I to P" (current to pressure) converters can convert the 4–20 mA signal to a 3–15 psi pneumatic output for control valves, allowing easy integration of 4–20 mA signals into existing pneumatic plant.
Field instrumentation measurements include pressure, temperature, level, flow, pH or other process variables. A current loop can also be used to control a valve positioner or other output actuator. Since input terminals of instruments may have one side of the current loop input tied to the chassis ground (earth), analog isolators may be required when connecting several instruments in series.
The relationship between current value and process variable measurement is set by calibration, which assigns different ranges of engineering units to the span between 4 and 20 mA. The mapping between engineering units and current can be inverted, so that 4 mA represents the maximum and 20 mA the minimum.
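As an illustration of this scaling, the sketch below maps a loop current to engineering units. The 4 mA and 20 mA endpoints and the fault thresholds of 3.8 mA and 20.5 mA follow the figures quoted in this article; the function name and the 0-10 bar transmitter range are hypothetical, and the 250 ohm sense resistor is a common industry choice rather than a figure from this article.

```python
# Map a 4-20 mA loop current to an engineering-unit value, treating currents
# outside the 3.8-20.5 mA band (figures quoted above) as loop faults.
def scale_current(current_ma, lo_eng, hi_eng, inverted=False):
    """Convert a loop current (mA) to engineering units over a 4-20 mA span."""
    if current_ma < 3.8 or current_ma > 20.5:
        raise ValueError(f"loop fault: {current_ma:.2f} mA is outside the valid band")
    fraction = (current_ma - 4.0) / 16.0        # 0.0 at 4 mA, 1.0 at 20 mA
    if inverted:                                # 4 mA = maximum, 20 mA = minimum
        fraction = 1.0 - fraction
    return lo_eng + fraction * (hi_eng - lo_eng)

# Example: a hypothetical pressure transmitter calibrated 0-10 bar.
print(scale_current(12.0, 0.0, 10.0))   # mid-scale -> 5.0 bar
# A 250 ohm sense resistor turns the same 4-20 mA span into 1-5 V:
print(4e-3 * 250, 20e-3 * 250)          # -> 1.0 V, 5.0 V
```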
Active and passive devices
Depending on the source of current for the loop, devices may be classified as active (supplying or "sourcing" power) or passive (relying on or "sinking" loop power). For example, a chart recorder may provide loop power to a pressure transmitter. The pressure transmitter modulates the current on the loop to send the signal to the strip chart recorder, but does not in itself supply power to the loop and so is passive. Another loop may contain two passive chart recorders, a passive pressure transmitter, and a 24 V battery (the battery is the active device). Note that a 4-wire instrument has a power-supply input separate from the current loop.
Panel mount displays and chart recorders are commonly termed "indicator devices" or "process monitors". Several passive indicator devices may be connected in series, but a loop must have only one transmitter device and only one power source (active device).
Evolution of analogue control signals
The 4–20 mA convention was born in the 1950s out of the earlier highly successful 3–15 psi pneumatic control signal standard, when electronics became cheap and reliable enough to emulate the older standard electrically. The 3–15 psi standard had the same features of being able to power some remote devices, and have a "live" zero. However, the 4–20 mA standard was better suited to the electronic controllers being developed at the time.
The transition was gradual and has extended into the 21st century, due to the huge installed base of 3–15 psi devices. As the operation of pneumatic valves over motorised valves has many cost and reliability advantages, pneumatic actuation is still an industry standard. To allow the construction of hybrid systems, where the 4–20 mA is generated by the controller, but allows the use of pneumatic valves, a range of current to pressure (I to P) converters are available from manufacturers. These are usually local to the control valve and convert 4–20 mA to 3–15 psi (or 0.2–1.0 bar). This signal is then fed to the valve actuator or, more commonly, a pneumatic positioner. The positioner is a dedicated controller which has a mechanical linkage to the actuator movement. This ensures that problems of friction are overcome and the valve control element moves to the desired position. It also allows the use of higher air pressures for valve actuation.
With the development of cheap industrial micro-processors, "smart" valve positioners have become available since the mid-1980s and are very popular for new installations. These include an I to P converter, plus valve position and condition monitoring. These latter are fed back over the current loop to the controller, using protocols such as HART.
Long circuits
Analog current loops were historically occasionally carried between buildings by dry pairs in telephone cables leased from the local telephone company. 4–20 mA loops were more common in the days of analog telephony. These circuits require end-to-end direct current (DC) continuity, and unless a dedicated wire pair was hardwired, their use ceased with the introduction of semiconductor switching. DC continuity is not available over a microwave radio, optical fibre, or a multiplexed telephone circuit connection.
Basic DC circuit theory shows that the current is the same all along the line. It was common to see 4–20 mA circuits that had loop lengths in miles or circuits working over telephone cable pairs that were longer than ten thousand feet end-to-end. There are still legacy systems in place using this technology. In Bell System circuits, voltages up to 125 VDC were employed.
Discrete control
Discrete control functions can be represented by discrete levels of current sent over a loop. This would allow multiple control functions to be operated over a single pair of wires. Currents required for a specific function vary from one application or manufacturer to another. There is no specific current that is tied to a single meaning. It is almost universal that 0 mA indicates the circuit has failed. In the case of a fire alarm, 6 mA could be normal, 15 mA could mean a fire has been detected, and 0 mA would produce a trouble indication, telling the monitoring site the alarm circuit had failed. Some devices, such as two-way radio remote control consoles, can reverse the polarity of currents and can multiplex audio onto a DC current.
These devices can be employed for any remote control need a designer might imagine. For example, a current loop could actuate an evacuation siren or command synchronized traffic signals.
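A minimal sketch of decoding such discrete levels follows. The 0 mA, 6 mA and 15 mA fire-alarm levels are the ones quoted above; the tolerance band and function name are arbitrary illustrative choices.

```python
# Decode a discrete-control loop current into a state, using the fire-alarm
# levels quoted above (0 mA = circuit failed, 6 mA = normal, 15 mA = fire).
# The +/- 1 mA tolerance band is an arbitrary choice for illustration.
FIRE_ALARM_LEVELS = {0.0: "trouble (circuit failed)", 6.0: "normal", 15.0: "fire detected"}

def decode_state(current_ma, levels=FIRE_ALARM_LEVELS, tolerance_ma=1.0):
    for nominal, state in levels.items():
        if abs(current_ma - nominal) <= tolerance_ma:
            return state
    return "undefined level"

print(decode_state(5.7))    # -> normal
print(decode_state(15.3))   # -> fire detected
print(decode_state(0.1))    # -> trouble (circuit failed)
```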
Two-way radio use
Current loop circuits are one possible way used to control radio base stations at distant sites. The two-way radio industry calls this type of remote control DC remote. This name comes from the need for DC circuit continuity between the control point and the radio base station. A current loop remote control saves the cost of extra pairs of wires between the operating point and the radio transceiver. Some equipment, such as the Motorola MSF-5000 base station, uses currents below 4 mA for some functions. An alternative type, the tone remote, is more complex but requires only an audio path between control point and base station.
For example, a taxi dispatch base station might be physically located on the rooftop of an eight-story building. The taxi company office might be in the basement of a different building nearby. The office would have a remote control unit that would operate the taxi company base station over a current loop circuit. The circuit would normally be over a telephone line or similar wiring. Control function currents come from the remote control console at the dispatch office end of a circuit. In two-way radio use, an idle circuit would normally have no current present.
In two-way radio use, radio manufacturers use different currents for specific functions. Polarities are changed to get more possible functions over a single circuit. For example, imagine one possible scheme where the presence of these currents causes the base station to change state:
no current means receive on channel 1, (the default).
+6 mA might mean transmit on channel 1
−6 mA might mean stay in receive mode but switch to channel 2. So long as the −6 mA current were present, the remote base station would continue to receive on channel 2.
−12 mA might command the base station to transmit on channel 2.
This circuit is polarity-sensitive. If a telephone company cable splicer accidentally reversed the conductors, selecting channel 2 would lock the transmitter on.
Each current level could close a set of contacts, or operate solid-state logic, at the other end of the circuit. That contact closure caused a change of state on the controlled device. Some remote control equipment could have options set to allow compatibility between manufacturers. That is, a base station that was configured to transmit with a +18 mA current could have options changed to (instead) make it transmit when +6 mA was present.
In two-way radio use, AC signals were also present on the circuit pair. If the base station were idle, receive audio would be sent over the line from the base station to the dispatch office. In the presence of a transmit command current, the remote control console would send audio to be transmitted. The voice of the user in the dispatch office would be modulated and superimposed over the DC current that caused the transmitter to operate.
See also
Current source – a current loop transmitter
Current-to-voltage converter
Highway Addressable Remote Transducer Protocol
NAMUR German industry standards body defining fault levels for 4–20 mA
Piping and instrumentation diagram Gives the control scheme and associated piping and vessels.
References
Further reading
Lipták, Béla G. Instrumentation Engineers' Handbook. Process Measurement and Analysis. CRC Press. 2003. HB.
External links
Fundamentals, System Design, and Setup for the 4 to 20 mA Current Loop
What Voltage do I need to operate my 4...20 mA transducer?
Current signal systems
How to read current loop using arduino
Types of 4-20 mA Current Loop
Communication circuits
Control engineering
Electronics standards
Industrial automation
Analog communication interfaces | Current loop | Engineering | 2,309 |
49,548,197 | https://en.wikipedia.org/wiki/Passive%20Wi-Fi | Passive Wi-Fi is a refinement of Wi-Fi technology that uses passive reflection to reduce energy consumption.
Wi-Fi energy use
Wi-Fi use can account for up to 60 percent of a smartphone's energy consumption. When not connected to a network, Wi-Fi consumes energy because the device constantly searches for a signal.
Backscattering
The technique communicates via backscattering, reflecting incoming radio waves sent from a separate device. The technique is similar to contactless RFID chip cards although unlike such cards, the new technique does not require a special device to read the signal.
The project effectively decoupled the analog and the digital radio signals. Power-intensive functions – like producing a signal at a specific frequency – are assigned to a single device in the network that is plugged into the grid. Smartphones modify and reflect this signal to communicate with the router. Prototype passive devices transferred data as far as 100 feet and through walls at 11 megabits per second. The system used tens of microwatts of power, roughly one ten-thousandth (10⁻⁴) of the energy used by conventional Wi-Fi devices, and one thousandth the energy of the Bluetooth LE and Zigbee communications standards.
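As a rough back-of-the-envelope illustration of those figures (assuming 30 microwatts as a representative value for "tens of microwatts"; this calculation is not from the cited research):

```python
# Rough energy-per-bit estimate from the figures quoted above:
# "tens of microwatts" while transferring 11 megabits per second.
power_w = 30e-6            # assumed 30 uW as a representative "tens of microwatts"
bitrate_bps = 11e6         # 11 Mbit/s, per the prototype figure above
energy_per_bit_j = power_w / bitrate_bps
print(f"{energy_per_bit_j * 1e12:.2f} pJ per bit")   # a few picojoules per bit
```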
Applications
Applications include smart home devices such as smoke detectors, temperature sensors and security cameras that will no longer require a power source.
References
Wi-Fi
Internet of things | Passive Wi-Fi | Technology | 273 |
13,182,827 | https://en.wikipedia.org/wiki/Weld%20quality%20assurance | Weld quality assurance is the use of technological methods and actions to test or assure the quality of welds, and secondarily to confirm the presence, location and coverage of welds. In manufacturing, welds are used to join two or more metal surfaces. Because these connections may encounter loads and fatigue during product lifetime, there is a chance they may fail if not created to proper specification.
Weld testing and analysis
Methods of weld testing and analysis are used to assure the quality and correctness of the weld after it is completed. This term generally refers to testing and analysis focused on the quality and strength of the weld but may refer to technological actions to check for the presence, position, and extent of welds. These are divided into destructive and non-destructive methods. A few examples of destructive testing include macro etch testing, fillet-weld break tests, transverse tension tests, and guided bend tests. Other destructive methods include acid etch testing, back bend testing, tensile strength break testing, nick break testing, and free bend testing. Non-destructive methods include fluorescent penetrant tests, magnaflux tests, eddy current (electromagnetic) tests, hydrostatic testing, tests using magnetic particles, X-rays and gamma ray-based methods, and acoustic emission techniques. Other methods include ferrite and hardness testing.
Imaging-based methods
Industrial Radiography
X-ray-based weld inspection may be manual, performed by an inspector on X-ray-based images or video, or automated using machine vision. Gamma rays can also be used.
Visible light imaging
Inspection may be manual, conducted by an inspector using imaging equipment, or automated using machine vision. Since the similarity of materials between weld and workpiece, and between good and defective areas, provides little inherent contrast, the latter usually requires methods other than simple imaging.
One (destructive) method involves the microscopic analysis of a weld cross-section.
Ultrasonic- and acoustic-based methods
Ultrasonic testing uses the principle that a gap in the weld changes the propagation of ultrasonic sound through the metal. One common method uses single-probe ultrasonic testing involving operator interpretation of an oscilloscope-type screen.
Another approach senses with a 2D array of ultrasonic sensors. Conventional, phased-array and time-of-flight diffraction (TOFD) methods can be combined into the same piece of test equipment.
Acoustic emission methods monitor for the sound created by the loading or flexing of the weld.
Peel testing of spot welds
This method includes tearing the weld apart and measuring the size of the remaining weld.
Weld monitoring
Weld monitoring methods ensure the weld's quality and correctness during welding. The term is generally applied to automated monitoring for weld-quality purposes and secondarily for process-control purposes such as vision-based robot guidance. Visual weld monitoring is also performed during the welding process.
In vehicular applications, weld monitoring aims to enable improvements in the quality, durability, and safety of vehicles – with cost savings from avoiding recalls to fix the large proportion of systemic quality problems that arise from suboptimal welding. Quality monitoring of automatic welding can save production downtime and reduce the need for product reworking and recall.
Industrial monitoring systems encourage high production rates and reduce scrap costs.
Inline coherent imaging
Inline coherent imaging (ICI) is a recently developed interferometric technique based on optical coherence tomography that is used for quality assurance of keyhole laser beam welding, a welding method that is gaining popularity in a variety of industries. ICI aims a low-powered broadband light source through the same optical path as the primary welding laser. The beam enters the keyhole of the weld and is reflected back into the head optics by the bottom of the keyhole. An interference pattern is produced by combining the reflected light with a separate beam that has traveled through a path of a known distance. This interference pattern is then analyzed to obtain a precise measurement of the depth of the keyhole. Because these measurements are acquired in real-time, ICI can also be used to control the laser penetration depth by using the depth measurement in a feedback loop that modulates the laser's output power.
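As a toy illustration of the feedback idea described above: the sketch below adjusts laser power so a measured keyhole depth tracks a target. All numbers, the assumed power-to-depth relation and the plain proportional controller are hypothetical simplifications, not the behaviour of any real ICI system.

```python
# Toy proportional feedback loop: adjust laser power so the measured keyhole
# depth (as an ICI sensor would report it) tracks a target depth.
# Every constant here is hypothetical; this only illustrates the control idea.
def simulate_depth_control(target_mm=1.5, steps=20, gain=200.0):
    power_w = 1000.0                   # starting laser power (hypothetical)
    depth_mm = 0.0
    for _ in range(steps):
        depth_mm = 0.0012 * power_w    # crude assumed power-to-depth relation
        error = target_mm - depth_mm   # "measured" depth vs. target
        power_w += gain * error        # proportional correction of laser power
    return power_w, depth_mm

power, depth = simulate_depth_control()
print(f"settled at {power:.0f} W, depth {depth:.2f} mm")
```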
Transient thermal analysis method
Transient thermal analysis is used for a range of weld optimization tasks.
Signature image processing method
Signature image processing (SIP) is a technology for analyzing electrical data collected from welding processes. Acceptable welding requires exact conditions; variations in conditions can render a weld unacceptable. SIP allows the identification of welding faults in real time, measures the stability of welding processes, and enables the optimization of welding processes.
Development
The idea of using electrical data analyzed by algorithms to assess the quality of the welds produced in robotic manufacturing emerged in 1995 from research by Associate Professor Stephen Simpson at the University of Sydney on the complex physical phenomena that occur in welding arcs. Simpson realized that a way of determining the quality of a weld could be developed without a definitive understanding of those phenomena.
The development involved:
a method for handling sampled data blocks by treating them as phase-space portrait signatures with appropriate image processing. Typically, one second's worth of sampled welding voltage and current data are collected from GMAW pulse or short arc welding processes. The data is converted to a 2D histogram, and signal-processing operations such as image smoothing are performed.
a technique for analyzing welding signatures based on statistical methods from the social sciences, such as principal component analysis. The relationship between the welding voltage and the current reflects the state of the welding process, and the signature image includes this information. Comparing signatures quantitatively using principal component analysis allows the spread of signature images to be assessed, enabling faults to be detected and identified. The system includes algorithms and mathematics appropriate for real-time welding analysis on personal computers, and the multidimensional optimization of fault-detection performance using experimental welding data. Comparing signature images from moment to moment in a weld provides a useful estimate of how stable the welding process is. "Through-the-arc" sensing, by comparing signature images when the physical parameters of the process change, leads to quantitative estimates, for example of the position of the weld bead.
Unlike systems that log information for later study or use X-rays or ultrasound to check samples, SIP technology looks at the electrical signal and detects faults when they occur.
Data blocks of 4,000 points of electrical data are collected four times a second and converted to signature images. After image processing operations, statistical analyses of the signatures provide a quantitative assessment of the welding process, revealing its stability and reproducibility and providing fault detection and process diagnostics. A similar approach, using voltage-current histograms and a simplified statistical measure of distance between signature images, has been evaluated for tungsten inert gas (TIG) welding by researchers from Osaka University.
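A minimal sketch of this signature-image idea follows. The 4,000-sample block size comes from the article; the bin count, the box smoothing and the use of a simple principal-component distance are illustrative assumptions, not the published WeldPrint algorithms.

```python
# Build a 2D voltage-current "signature image" from one block of sampled
# welding data and compare two such signatures. Block size (4,000 samples)
# follows the article; all other choices here are illustrative only.
import numpy as np

def signature_image(voltage, current, bins=32):
    """2D histogram of (voltage, current) samples, lightly smoothed and normalised."""
    hist, _, _ = np.histogram2d(voltage, current, bins=bins)
    padded = np.pad(hist, 1, mode="edge")   # crude 3x3 box smoothing
    smooth = sum(padded[i:i + bins, j:j + bins] for i in range(3) for j in range(3)) / 9.0
    return smooth / smooth.sum()

def signature_distance(sig_a, sig_b, components=5):
    """Distance between two signatures in a small principal-component subspace."""
    stacked = np.vstack([sig_a.ravel(), sig_b.ravel()])
    centred = stacked - stacked.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt[:components].T
    return float(np.linalg.norm(proj[0] - proj[1]))

rng = np.random.default_rng(0)
block = 4000                      # one second of sampled data, per the article
good = signature_image(rng.normal(24, 1, block), rng.normal(150, 10, block))
faulty = signature_image(rng.normal(27, 3, block), rng.normal(120, 25, block))
print(signature_distance(good, good), signature_distance(good, faulty))
```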
Industrial application
SIP provides the basis for the WeldPrint system, which consists of a front-end interface and software based on the SIP engine and relies on electrical signals alone. It is designed to be non-intrusive and sufficiently robust to withstand harsh industrial welding environments. The first major purchaser of the technology, GM Holden, provided feedback that allowed the system to be refined in ways that increased its industrial and commercial value. Improvements in the algorithms, including multiple parameter optimization with a server network, have led to an order-of-magnitude improvement in fault-detection performance over the past five years.
WeldPrint for arc welding became available in mid-2001. About 70 units have been deployed since 2001, about 90% used on the shop floors of automotive manufacturing companies and their suppliers. Industrial users include Lear (UK), Unidrive, GM Holden, Air International and QTB Automotive (Australia). Units have been leased to Australian companies such as Rheem, Dux, and OneSteel for welding evaluation and process improvement.
The WeldPrint software received the Brother business software of the year award (2001); in 2003, the technology received the A$100,000 inaugural Australasian Peter Doherty Prize for Innovation;
and WTi, the University of Sydney's original spin-off company, received an AusIndustry Certificate of Achievement in recognition of the development.
SIP has opened opportunities for researchers to use it as a measurement tool both in welding
and in related disciplines, such as structural engineering. Research opportunities have opened up in the application of biomonitoring of external EEGs, where SIP offers advantages in interpreting the complex signals.
Weld mapping
Weld mapping is the process of assigning information to a weld repair or joint to enable easy identification of weld processes, production (welders, their qualifications, date welded), quality (visual inspection, NDT, standards and specifications) and traceability (tracking weld joints and welded castings, the origin of weld materials).
Weld mapping should also incorporate a pictorial identification to represent the weld number on the fabrication drawing or casting repair. Military, nuclear and commercial industries possess unique quality standards (e.g., ISO, CEN, ASME, ASTM, AWS, NAVSEA) which direct weld mapping procedures and specifications, both in metal casting in which defects are removed and filled in via GTAW (TIG welding) or SMAW (stick welding) processes, or fabrication of weld joints which primarily involves GMAW (MIG welding).
See also
Welding defect
Industrial radiography
Robot welding
Pipeline and Hazardous Materials Safety Administration
References
Further reading
ISO 3834-1: "Quality requirements for fusion welding of metallic materials. Criteria for the selection of the appropriate level of quality requirements" 2005)
ISO 3834-2: "Quality requirements for fusion welding of metallic materials. Comprehensive quality requirements" (2005)
ISO 3834-3: "Quality requirements for fusion welding of metallic materials. Standard quality requirements" (2005)
ISO 3834-4: "Quality requirements for fusion welding of metallic materials. Elementary quality requirements" (2005)
ISO 3834-5: "Quality requirements for fusion welding of metallic materials. Documents with which it is necessary to conform to claim conformity to the quality requirements of ISO 3834-2, ISO 3834-3 or ISO 3834-4"
ISO/TR 3834-6: "Quality requirements for fusion welding of metallic materials. Guidelines on implementing ISO 3834" (2007)
Welding | Weld quality assurance | Engineering | 2,111 |
1,419,696 | https://en.wikipedia.org/wiki/The%20Cult%20of%20Mac | The Cult of Mac is a book by Leander Kahney. The book discusses fanaticism about the Apple product line and brand loyalty. Kahney released a later book titled The Cult of iPod.
The cover of the book features the Apple logo shaved into the back of a person's head.
See also
Apple evangelist
Reality distortion field
Criticism of Apple Inc.#Comparison with a cult
Footnotes
Books about Apple Inc.
2004 non-fiction books
No Starch Press books | The Cult of Mac | Technology | 96 |
65,597,539 | https://en.wikipedia.org/wiki/Budget-balanced%20mechanism | In mechanism design, a branch of economics, a weakly-budget-balanced (WBB) mechanism is a mechanism in which the total payment made by the participants is at least 0. This means that the mechanism operator does not incur a deficit, i.e., does not have to subsidize the market. Weak budget balance is considered a necessary requirement for the economic feasibility of a mechanism. A strongly-budget-balanced (SBB) mechanism is a mechanism in which the total payment made by the participants is exactly 0. This means that all payments are made among the participants - the mechanism has neither a deficit nor a surplus. The term budget-balanced mechanism is sometimes used as a shorthand for WBB, and sometimes as a shorthand for SBB.
Weak budget balance
A simple example of a WBB mechanism is the Vickrey auction, in which the operator wants to sell an object to one of n potential buyers. Each potential buyer bids a value; the highest bidder wins the object and pays the second-highest bid. As all bids are positive, the total payment is trivially positive too.
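As a minimal illustration of why the total payment is nonnegative (a toy sketch; names and values are arbitrary):

```python
# Toy second-price (Vickrey) auction: the highest bidder wins and pays the
# second-highest bid, so the operator's total revenue is never negative.
def vickrey_auction(bids):
    """bids: dict mapping bidder name -> nonnegative bid. Returns (winner, payment)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment

winner, payment = vickrey_auction({"alice": 10.0, "bob": 7.0, "carol": 4.0})
print(winner, payment)   # alice pays 7.0; total payment 7.0 >= 0, so the mechanism is WBB
```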
As an example of a non-WBB mechanism, consider its extension to a bilateral trade setting. Here, there is a buyer and a seller; the buyer has a value of b and the seller has a cost of s. Trade should occur if and only if b > s. The only truthful mechanism that implements this solution must charge a trading buyer the cost s and pay a trading seller the value b; but since b > s, this mechanism runs a deficit. In fact, the Myerson–Satterthwaite theorem says that every Pareto-efficient truthful mechanism must incur a deficit.
McAfee developed a solution to this problem for a large market (with many potential buyers and sellers): McAfee's mechanism is WBB, truthful and almost Pareto-efficient - it performs all efficient deals except at most one. McAfee's mechanism has been extended to various settings, while keeping its WBB property. See double auction for more details.
Strong budget balance
In a strongly-budget-balanced (SBB) mechanism, all payments are made between the participants themselves. An advantage of SBB is that all the gain from trade remains in the market; thus, the long-term welfare of the traders is larger and their tendency to participate may be higher.
McAfee's double-auction mechanism is WBB but not SBB - it may have a surplus, and this surplus may account for almost all the gain from trade. There is a simple SBB mechanism for bilateral trading: trade occurs iff b > s, and in this case the buyer pays (b+s)/2 to the seller. Since the payment goes directly from the buyer to the seller, the mechanism is SBB; however, it is not truthful, since the buyer can gain by bidding b′ < b and the seller can gain by bidding s′ > s. Recently, some truthful SBB mechanisms for double auction have been developed. Some of them have been generalized to multi-sided markets.
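A minimal sketch of this midpoint mechanism (illustrative only; the function name and sample values are arbitrary), with a numeric example of the incentive to misreport:

```python
# Midpoint bilateral-trade mechanism: trade iff the buyer's bid exceeds the
# seller's ask, at price (b + s) / 2. The buyer's payment equals the seller's
# receipt (strong budget balance), but the mechanism is not truthful, as the
# example below shows.
def midpoint_trade(buyer_bid, seller_ask):
    if buyer_bid > seller_ask:
        price = (buyer_bid + seller_ask) / 2.0
        return {"trade": True, "buyer_pays": price, "seller_receives": price}
    return {"trade": False, "buyer_pays": 0.0, "seller_receives": 0.0}

# True value b = 10, true cost s = 4.
print(midpoint_trade(10, 4))   # truthful bids: price 7, buyer utility 10 - 7 = 3
print(midpoint_trade(6, 4))    # buyer under-bids: price 5, buyer utility 10 - 5 = 5 > 3
```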
See also
Balanced budget - a budget in which revenues are equal to expenditures
Government budget balance - a financial statement presenting the government's proposed revenues and spending for a financial year.
Balanced budget amendment - a constitutional rule requiring that a state cannot spend more than its income.
References
Mechanism design
Auction theory | Budget-balanced mechanism | Mathematics | 708 |
10,548,751 | https://en.wikipedia.org/wiki/United%20States%20biological%20weapons%20program | The United States biological weapons program officially began in spring 1943 on orders from U.S. President Franklin D. Roosevelt. Research continued following World War II as the U.S. built up a large stockpile of biological agents and weapons. Over the course of its 27-year history, the program weaponized and stockpiled seven bio-agents: Bacillus anthracis (anthrax), Francisella tularensis (tularemia), Brucella spp. (brucellosis), Coxiella burnetii (Q-fever), Venezuelan equine encephalitis virus, Botulinum toxin (botulism), and Staphylococcal enterotoxin B. The US also pursued basic research on many more bio-agents. Throughout its history, the U.S. bioweapons program was secret. It was later revealed that laboratory and field testing (some of the latter using simulants on non-consenting individuals) had been common. The official policy of the United States was first to deter the use of bio-weapons against U.S. forces and secondarily to retaliate if deterrence failed.
In 1969, President Richard Nixon ended all offensive (i.e., non-defensive) aspects of the U.S. bio-weapons program. In 1975 the U.S. ratified both the 1925 Geneva Protocol and the 1972 Biological Weapons Convention (BWC)—international treaties outlawing biological warfare.
History
Early history (1918–1941)
Initial interest in any form of biological warfare came at the close of World War I. The only agent the U.S. tested was the toxin ricin, a product of the castor plant. The U.S. conducted tests concerning two methods of ricin dissemination: the first, which involved adhering the toxin to shrapnel for delivery by artillery shell, was successful; the second, delivering an aerosol cloud of ricin, was proven less successful in these tests. Neither delivery method was perfected before the war in Europe ended.
In the early 1920s, suggestions that the U.S. begin a biological weapons program came from within the Chemical Warfare Service (CWS). Chief of the CWS, Amos Fries, decided that such a program would not be "profitable" for the U.S. Japan's Shiro Ishii began promoting biological weapons during the 1920s and toured biological research facilities worldwide, including in the United States. Though Ishii concluded that the U.S. was developing a bio-weapons program, he was incorrect. In fact, Ishii concluded that each major power he visited was developing a bio-weapons program. As the interwar period continued, the United States did not emphasize biological weapons development or research. While the U.S. was spending very little time on biological weapons research, its future allies and enemies in the upcoming second World War were researching the potential of biological weapons as early as 1933.
World War II (1941–45)
Despite the World War I-era interest in ricin, as World War II erupted, the United States Army still maintained the position that biological weapons were, for the most part, impractical. Other nations, notably France, Japan and the United Kingdom, thought otherwise and had begun their own biological weapons programs. Thus, as late as 1942 the U.S. had no biological weapons capabilities. Initial interest in biological weapons by the Chemical Warfare Service began in 1941. That fall, U.S. Secretary of War Henry L. Stimson requested that the National Academy of Sciences (NAS) undertake consideration of U.S. biological warfare. He wrote to Dr. Frank B. Jewett, then president of the NAS:
Because of the dangers that might confront this country from potential enemies employing what may be broadly described as biological warfare, it seems advisable that investigations be initiated to survey the present situation and the future possibilities. I am therefore, asking if you will undertake the appointment of an appropriate committee to survey all phases of this matter. Your organization already has before it a request from The Surgeon General for the appointment of a committee by the Division of Medical Sciences of the National Research Council to examine one phase of the matter.
In response the NAS formed a committee, the War Bureau of Consultants (WBC), which issued a report on the subject in February 1942. The report, among other items, recommended the research and development of an offensive biological weapons program.
The British, and the research undertaken by the WBC, pressured the U.S. to begin biological weapons research and development and in November 1942 U.S. President Franklin Roosevelt officially approved an American biological weapons program. In response to the information provided by the WBC, Roosevelt ordered Stimson to form the War Research Service (WRS). Established within the Federal Security Agency, the WRS' stated purpose was to promote "public security and health", but, in reality, the WRS was tasked with coordinating and supervising the U.S. biological warfare program. In the spring of 1943 the U.S. Army Biological Warfare Laboratories were established at Camp Detrick (now Fort Detrick) in Frederick, Maryland.
Though initially, under George Merck, the WRS contracted several universities to participate in the U.S. biological weapons program, the program became large quickly and before long it was under the full control of the CWS. By November 1943 the biological weapons facility at Detrick was completed, in addition, the United States constructed three other facilities - a biological agent production plant at Vigo County near Terre Haute, Indiana (Vigo Ordnance Plant), a field-testing site on Horn Island in Mississippi (Horn Island Testing Station), and another field site near Granite Peak in Utah (Granite Peak Installation). According to an official history of the period, "the elaborate security precautions taken [at Camp Detrick] were so effective that it was not until January 1946, 4 months after VJ Day, that the public learned of the wartime research in biological weapons".
Cold War (1947–1969)
Following World War II, the United States biological warfare program progressed into an effective, military-driven research and production program, covered in controversy and secrecy. Production of U.S. biological warfare agents went from "factory-level to laboratory-level". By 1950 the principal U.S. bio-weapons facility was located at Camp Detrick in Maryland under the auspices of the Research and Engineering Division of the U.S. Army Chemical Corps. Most of the research and development was done there, while production and testing occurred at Pine Bluff, Arkansas, and Dugway Proving Ground, Utah. Pine Bluff Arsenal began production of weapons-grade agents by 1954. From 1952 to 1954 the Chemical Corps maintained a biological weapons research and development facility at Fort Terry on Plum Island, New York. Fort Terry's focus was on anti-animal biological weapon research and development; the facility researched more than a dozen potential BW agents. From the end of World War II through the Korean War, the U.S. Army, the Chemical Corps and the U.S. Air Force all expanded their biological warfare programs significantly, especially concerning delivery systems. Throughout the Cold War, the United States and the Soviet Union would combine to produce enough biological weapons to kill everyone on Earth.
At the trial of John W. Powell and two other defendants for sedition for reporting that the U.S. used biological weapons during the Korean War, the U.S. Attorney in the case, Robert H. Schnacke and the former Chief of the Special Operations Division at Ft. Detrick during the Korean War (and long-time U.S. Chemical Corps officer), John L. Schwab, entered sworn affidavits that the U.S. Army had the capability to use both offensive and defensive biological and chemical weapons "during the period from January 1, 1949 through July 27, 1953.... based upon resources available and retained only within the continental limits of the United States."
Another substantive expansion phase came during the Kennedy-Johnson years, after McNamara initiated Project 112 as a comprehensive initiative, starting in 1961. Despite an increase in testing, the readiness for biological warfare remained limited after this program. A 10 November 1969 report by the Interdepartmental Political-Military Group to the Nixon administration found that the American BW capability was limited.
Field testing of the biological weapons was completed covertly and successfully with simulants and agents dispersed over wide, open areas. The first American large-scale aerosol vulnerability test, code-named Operation Sea-Spray, occurred in the San Francisco Bay Area in September 1950, using two types of bacteria, Bacillus globigii and Serratia marcescens, and fluorescent particles. Bacillus species were chosen in these tests because of their spore-forming abilities and their similarities to Bacillus anthracis, a causing agent of anthrax. S. marcescens was used because it is easily identifiable from its red pigment. In 1966, the New York City Subway was contaminated with Bacillus globigii in an attempt to simulate the spreading of anthrax in a large urban population. More field tests involving pathogenic species were conducted at Dugway Proving Ground, Utah and anti-animal studies were conducted at Eglin Air Force Base, Florida.
At the time, many scientists disagreed with the creation of biological weapons. Theodor Rosebury, who previously worked as a supervisor at Camp Detrick, issued a warning against the development of biological weapons during the Cold War. In 1945, Rosebury left Camp Detrick during a period of time when scientists could publish the results of their research. Rosebury published Peace or Pestilence? in 1949, which explained his views on why biological weapons should be banned by world powers. By the time his book was available, publications were becoming more restricted and the extent of the Soviet threat of biological weapons was being overstated by Congress and the media. In 1969, Harvard biologist Matthew Meselson argued that the biological warfare programs would eventually hurt US security because potential enemy nations could easily emulate these weapons.
The general population remained uninformed of any breakthroughs concerning biological warfare. This included new production plants for anthrax, brucellosis, and anti-crop agents, as well as the development of the cluster bomb. The U.S. public was also unaware of ongoing studies, particularly the environmental and open-air experiments that were taking place. One of the more controversial experiments was conducted in 1951, when a disproportionate number of African Americans were exposed to the fungus Aspergillus fumigatus, to see if they were more susceptible to infection. Some scientists reasoned that such knowledge would help them prepare a defense against a more deadly form of the fungus. The same year, workers at the Norfolk Supply Center in Norfolk, Virginia, were unknowingly exposed to Aspergillus fumigatus spores. Another case of human research was the biodefense medical research program, Operation Whitecoat. This decade-long experiment on volunteer Seventh Day Adventist servicemen exposed them to tularaemia via aerosols. They were then treated with antibiotics. The goal of the experiment, unknown to the volunteers, was to standardize tularaemia bomb-fill for attacks on civilian populations.
In the 1960s, the U.S. changed its main approach from biological agents aimed to kill to those that would incapacitate. In 1964, research programs studied Enterotoxin type B, which can cause food poisoning. New research initiatives also included prophylaxis, the preventive treatment of diseases. Pathogens studied included the biological agents causing a myriad of diseases such as anthrax, glanders, brucellosis, melioidosis, Venezuelan equine encephalitis, Q fever, coccidioidomycosis, and other plant and animal pathogens.
The Vietnam War brought public awareness to the U.S. biological weapons program. The use of chemicals, riot-control agents, and herbicides like Agent Orange drew international criticism, and negatively affected the U.S. public opinion on the development of biological weapons. Highly controversial human research programs and open air experiments were discovered. Jeanne Guillemin, wife of biologist Matthew Meselson, summarized the controversy.
The Nixon administration felt an urgent need to respond to the growing negative perception of biological weapons. The realization that biological weapons may become the poor man's atom bomb also contributed to the end of the U.S. biological weapons program. Subsequently, President Nixon announced that the U.S. was unilaterally renouncing its biological warfare program, ultimately signing the Biological and Toxin Weapons Convention in 1972.
End of the program (1969–1973)
President Richard M. Nixon issued his "Statement on Chemical and Biological Defense Policies and Programs" on November 25, 1969, in a speech from Fort Detrick. The statement officially ended all U.S. offensive biological weapons programs. Nixon noted that biological weapons were unreliable and stated:
The United States shall renounce the use of lethal biological agents and weapons, and all other methods of biological warfare. The United States will confine its biological research to defensive measures such as immunization and safety measures.
In his speech Nixon called his move "unprecedented"; and it was in fact the first review of the U.S. biological warfare program since 1954. Despite the lack of review, the biological warfare program had increased in cost and size since 1961. From the onset of the U.S. biological weapons program in 1943 through the end of World War II the United States spent $400 million on biological weapons, mostly on research and development. The budget for fiscal year 1966 was $38 million. When Nixon ended the program the budget was $300 million annually. Nixon's statement confined all biological weapons research to defensive-only and ordered the destruction of the existing U.S. biological arsenal.
U.S. biological weapons stocks were destroyed over the next few years. A $12 million disposal plan was undertaken at Pine Bluff Arsenal, where all U.S. anti-personnel biological agents were stored. That plan was completed in May 1972 and included decontamination of facilities at Pine Bluff. Other agents, including anti-crop agents such as wheat stem rust, were stored at Beale Air Force Base and Rocky Mountain Arsenal. These anti-crop agents, along with agents at Fort Detrick used for research purposes were destroyed in March 1973.
Geneva Protocol and BWC
The 1925 Geneva Protocol, ratified by most major powers in the 1920s and 30s, had still not been ratified by the United States at the dawn of World War II. Among the Protocol's provisions was a ban on bacteriological warfare. The Geneva Protocol had encountered opposition in the U.S. Senate, in part due to strong lobbying against it by the Chemical Warfare Service, and it was never brought to the floor for a vote when originally introduced. Regardless, on June 8, 1943, President Roosevelt affirmed a no-first-use policy for the United States concerning biological weapons. Even with Roosevelt's declaration opposition to the Protocol remained strong; in 1949 the Protocol was among several old treaties returned to President Harry S. Truman unratified.
When Nixon ended the U.S. bio-weapons program in 1969 he also announced that he would resubmit the Geneva Protocol to the U.S. Senate. This was a move Nixon was considering as early as July 1969. The announcement included language that indicated the Nixon administration was moving toward an international agreement on an outright ban on bio-weapons. Thus, the Nixon administration became the world's leading anti-biological weapons voice calling for an international treaty. The Eighteen Nation Disarmament Committee was discussing a British draft of a biological weapons treaty which the United Nations General Assembly approved in 1968 and that NATO supported. These arms control talks would eventually lead to the Biological Weapons Convention, the international treaty outlawing biological warfare. Prior to the Nixon announcement only Canada supported the British draft. Beginning in 1972, the Soviet Union, United States and more than 100 other countries signed the BWC. The United States ratified the Geneva Protocol in 1975.
Agents studied and weaponized
When the U.S. biological warfare program ended in 1969 it had developed six mass-produced, battle-ready biological weapons in the form of agents that cause anthrax, tularemia, brucellosis, Q-fever, Venezuelan equine encephalitis virus, and botulism. In addition staphylococcal enterotoxin B was produced as an incapacitating agent. In addition to the agents that were ready to be used, the U.S. program conducted research into the weaponization of more than 20 other agents. They included: smallpox, EEE and WEE, AHF, Hantavirus, BHF, Lassa fever, melioidosis, plague, yellow fever, psittacosis, typhus, dengue fever, Rift Valley fever (RVF), CHIKV, late blight of potato, rinderpest, Newcastle disease, bird flu, and the toxin ricin.
Besides the numerous pathogens that afflict human beings, the U.S. had developed an arsenal of anti-agriculture biological agents. These included rye stem rust spores (stored at Edgewood Arsenal, 1951–1957), wheat stem rust spores (stored at the same facility 1962–1969), and the causative agent of rice blast (stored at Fort Detrick 1965–1966).
A U.S. facility at Fort Terry focused primarily on anti-animal biological agents. The first agent that was a candidate for development was foot and mouth disease (FMD). Besides FMD, five other top-secret biological weapons projects were commissioned on Plum Island. The other four programs researched included RVF, rinderpest, African swine fever, plus eleven miscellaneous exotic animal diseases. The eleven miscellaneous pathogens were: Blue tongue virus, bovine influenza, bovine virus diarrhea (BVD), fowl plague, goat pneumonitis, mycobacteria, "N" virus, Newcastle disease, sheep pox, Teschers disease, and vesicular stomatitis.
Work on delivery systems for the U.S. bioweapons arsenal led to the first mass-produced biological weapon in 1952, the M33 cluster bomb. The M33's sub-munition, the pipe-bomb-like cylindrical M114 bomb, was also completed and battle-ready by 1952. Other delivery systems researched and at least partially developed during the 1950s included the E77 balloon bomb and the E86 cluster bomb. The peak of U.S. biological weapons delivery system development came during the 1960s. Production of cluster bomb submunitions began to shift from cylindrical to spherical bomblets, which had a larger coverage area. Development of the spherical E120 bomblet took place in the early 1960s as did development of the M143 bomblet, similar to the chemical M139 bomblet. The experimental Flettner rotor bomblet was also developed during this time period. The Flettner rotor was called, "probably one of the better devices for disseminating microorganisms", by William C. Patrick III.
Alleged uses
Korean War
In 1952, during the Korean War, the Chinese and North Koreans insinuated that mysterious outbreaks of disease in North Korea and China were due to U.S. biological attacks. Despite contrary assertions from the International Red Cross and World Health Organization, whom the Chinese denounced as being dominated by US influence and thus biased, the Chinese government pursued an investigation by the World Peace Council. A committee led by Joseph Needham gathered evidence for a report that included testimony from eyewitnesses, doctors, and four American Korean War prisoners who confirmed use of biological weapons by the U.S. In eastern Europe, China, and North Korea it was widely believed that the accusations were true. A 1988 book Korea: The Unknown War, by Western historians Jon Halliday and Bruce Cumings, also suggested the claims might be true.
In 1998, Canadian researchers and historians Stephen Endicott and Edward Hagerman of York University made the case that the accusations were true in their book, The United States and Biological Warfare: Secrets from the Early Cold War and Korea. The book received mostly positive reviews, out of a collection of 20 reviews cited, 2 were negative, calling it "bad history" and "appalling", while others praised the authors, "Endicott and Hagerman is far and away the most authoritative work on the subject" and "the most impressive, expertly researched and, as far as the official files allow, the best-documented case for the prosecution yet made". In the same year Endicott's book was published, Kathryn Weathersby and Milton Leitenberg of the Cold War International History Project at the Woodrow Wilson Center in Washington released a cache of Soviet and Chinese documents that claimed to have revealed that the biowarfare allegation was an elaborate disinformation campaign by the communists. In addition, a Japanese journalist claims to have seen similar evidence of a Soviet disinformation campaign and that the evidence supporting its occurrence was faked. In 2001, anti-communist historian Herbert Romerstein supported Weathersby and Leitenberg, criticizing Endicott's research for using evidence provided by the Chinese government.
In March 2010, the allegations were investigated by the Al Jazeera English news program People & Power. In this program, Professor Mori Masataka investigated historical artifacts in the form of bomb casings from US biological weapons, contemporary documentary evidence and eyewitness testimonies. He concluded that the United States did, in fact, test biological weapons on North Korea during the Korean War.
In September 2020, U.S. author Jeffrey Kaye published a set of declassified CIA communications reports (COMINT) that documented the responses of military units of the Korean People's Army and the Chinese People's Volunteer Army as they were apparently under attack by biological weapons, particularly the dropping of bacteria-laden insects. Some of these COMINT reports were also published a few months previously in Nicholson Baker's book, Baseless. One report from an identified Chinese military unit on February 26, 1952, said, "yesterday it was discovered that in our bivouac area there was a real flood of bacteria and germs from a plane by the enemy. Please supply us immediately with an issue of DDT that we may combat this menace, stop the spread of this plague, and eliminate all bacteria." In another example, on March 6, 1952, the 23rd Brigade of the Korean People's Army sent a "long detailed... message to one of its subordinate battalions" suggesting preventive measures be taken against "bacteria" dropped by UN aircraft, apparently in the area around Sariwon. The report stated that "three persons... became suddenly feverish", presumably in their unit. Their nervous systems were said to have become "benumbed".
Cuba
It has been rumored that the U.S. employed biological weapons against the Communist island nation of Cuba. Noam Chomsky claimed that evidence exists implicating the U.S. in biological warfare in Cuba. These claims are disputed.
Allegations in 1962 held that CIA operatives had contaminated a shipment of sugar while it was in storage in Cuba. Also in 1962, a Canadian agricultural technician assisting the Cuban government claimed he was paid $5,000 to infect Cuban turkeys with the deadly Newcastle disease. Though the technician later claimed he had just pocketed the money, many Cubans and some US citizens believed a clandestinely administered biological weapons agent was responsible for a subsequent outbreak of the disease in Cuban turkeys.
In 1971 the first serious outbreak of African Swine Fever in the Western Hemisphere occurred in Cuba. The Cuban government alleged that U.S. covert biological warfare was responsible for this outbreak, which led to the preemptive slaughter of 500,000 pigs. The outbreak was labeled the "most alarming event" of 1971 by the United Nations Food and Agricultural Organization. Six years after the event, the newspaper Newsday, citing an anonymous former CIA agent, claimed that anti-Castro saboteurs with at least the tacit backing of U.S. Central Intelligence Agency officials introduced African swine fever virus into Cuba six weeks before the outbreak in 1971 to destabilize the Cuban economy and encourage domestic opposition to Fidel Castro. According to the Newsday report, the virus was allegedly delivered to the operatives from an army base in the Panama Canal Zone by an unnamed U.S. intelligence source. Evidence linking these incidents to biological warfare has not been confirmed; however, according to Keith Bolender, a French scientist analyzing the situation concluded that it was not possible for the outbreak to have occurred naturally.
Accusations have continued to come out of Havana alleging continued U.S. use of bio-weapons on the island after the official end of the U.S. biological weapons program in 1973. The Cuban government blamed the U.S. for a 1981 outbreak of dengue fever that sickened more than 300,000. Dengue is a vector-borne disease usually carried by mosquitoes, the same species of yellow-fever mosquitoes (Aedes aegypti) utilized in Operation Big Buzz in 1955. Dengue 2 killed 158 people that year in Cuba, including 101 children under 15. Hemorrhagic Dengue 2 had not appeared in the Caribbean until this point and the two closest islands, Jamaica and Bahamas, reported no cases during this time. According to Ariel Alonso Pérez, the fever appeared simultaneously in three separate areas (Havana, Cienfuegos, and Camagüey) hundreds of miles apart and that examination of visitors from areas known to have Dengue found that none had brought the virus with them and none of the original victims had made contact with foreigners or exited the country. Tensions between the two countries, coupled with confirmed U.S. research into entomological warfare during the 1950s, made these charges seem not implausible to some scientists and historians.
Since July 1981, Cuba has had widespread sugar cane rust, African Swine Fever, tobacco blue mold, Dengue 2, meningitis, hemorrhagic conjunctivitis, and several parasites targeting staple crops such as rice, corn, and potatoes. None of these had been present in the region before 1960.
Experimentation and testing
Entomological testing
The United States seriously researched the potential of entomological warfare (EW) during the Cold War. EW is a specific type of biological warfare which aims to use insects as weapons, either directly or through their potential to act as vectors. During the 1950s the United States conducted a series of field tests using entomological weapons. Operation Big Itch, in 1954, was designed to test munitions loaded with uninfected fleas (Xenopsylla cheopis). In May 1955 over 300,000 yellow fever mosquitoes (Aedes aegypti) were dropped over parts of the U.S. state of Georgia to determine if the air-dropped mosquitoes could survive to take meals from humans. The mosquito tests were known as Operation Big Buzz. The U.S. engaged in at least two other EW testing programs, Operation Drop Kick and Operation May Day. A 1981 Army report outlined these tests as well as multiple cost-associated issues that occurred with EW.
Clinical trials
Operation Whitecoat involved the controlled testing of many serious agents on military personnel who had consented to experimentation, and who understood the risks involved. No deaths are known to have resulted from this program.
Vulnerability field tests
In military venues
In August 1949 a U.S. Army Special Operations Division, operating out of Fort Detrick in Maryland, set up its first test at The Pentagon in Washington, D.C. Operatives sprayed harmless bacteria into the building's air conditioning system and observed as the microbes spread throughout the Pentagon.
The U.S. military acknowledges that it tested several chemical and biological weapons on U.S. military personnel at desert facilities, including the East Demilitarization Area near the Deseret Chemical Depot/Deseret Chemical Test Center at Fort Douglas, Utah, but takes the position that the tests have contributed to long-term illnesses in only a handful of exposed personnel.
Veterans who took part believe they were also exposed to Agent Orange. The Department of Veterans Affairs denies almost all claims for care and compensation made by veterans who believe they got sick as a result of the tests. The U.S. military for decades remained silent about "Project 112" and its victims, a slew of tests overseen by the Army's Deseret Test Center in Salt Lake City. Project 112 starting in the 1960s tested chemical and biological agents, including VX, sarin and E. coli, on military personnel who did not know they were being tested. After the Defense Department finally acknowledged conducting the tests on unwitting human subjects, it agreed to help the Veterans' Affairs Department track down those who were exposed, but a Government Accountability Office report in 2008 scolded the military for ceasing the effort.
In civilian venues
Between 1941 and the mid-1960s, some medical experiments were conducted on a large scale on civilians who had not consented to participate. Often, these experiments took place in urban areas in order to test dispersion methods. Questions were raised about detrimental health effects after experiments in San Francisco, California, were followed by a spike in hospital visits. The San Francisco test involved a U.S. Navy ship that in 1951 sprayed Serratia marcescens from the bay; it traveled more than 30 miles. In 1977, however, the Centers for Disease Control and Prevention reported that there was no association between the testing and the occurrence of pneumonia or influenza.
Scientists tested biological agents, including Bacillus globigii, which were thought to be harmless, at public places such as subways. In what became known as the Subway Experiment, light bulbs containing Bacillus globigii were dropped in New York City's subway system; the dispersal was strong enough to have affected people prone to illness. Based on the circulation measurements, thousands of people could have been killed if a dangerous microbe had been released in the same manner. Another dispersion test involved laboratory personnel disguised as passengers spraying harmless bacteria in Washington National Airport.
A jet aircraft released material over Victoria, Texas, that was monitored in the Florida Keys.
GAO Report
In February 2008, the Government Accountability Office (GAO) released report GAO-08-366 titled, "Chemical and Biological Defense, DOD and VA Need to Improve Efforts to Identify and Notify Individuals Potentially Exposed during Chemical and Biological Tests." The report stated that tens of thousands of military personnel and civilians may have been exposed to biological and chemical substances through DOD tests. In 2003, the DOD reported it had identified 5,842 military personnel and estimated 350 civilians as being potentially exposed during the testing, known as Project 112.
The GAO criticized as premature the U.S. Department of Defense's (DOD) 2003 decision to stop searching for people affected by the tests. The GAO report also found that the DOD made no effort to inform civilians of exposure, and that the United States Department of Veterans Affairs (VA) is failing to use available resources to inform veterans of possible exposure or to determine whether they are deceased. After the DOD halted efforts to find those who may have been affected by the tests, veteran health activists and others identified approximately 600 additional individuals who were potentially exposed during Project 112. Some of the individuals were identified after the GAO reviewed records stored at the Dugway Proving Ground; others were identified by the Institute of Medicine. Many of the newly identified suffer from long-term illnesses that may have been caused by the biological or chemical testing.
Current (post-1969) bio-defense program
Both the U.S. bio-weapons ban and the Biological Weapons Convention restricted any work in the area of biological warfare to research defensive in nature. In reality, this gives BWC member-states wide latitude to conduct biological weapons research because the BWC contains no provisions for monitoring or enforcement. The treaty, essentially, is a gentlemen's agreement amongst members backed by the long-prevailing thought that biological warfare should not be used in battle.
After Nixon declared an end to the U.S. bio-weapons program, debate in the Army centered around whether or not toxin weapons were included in the president's declaration. Following Nixon's November 1969 order, scientists at Fort Detrick worked on one toxin, Staphylococcus enterotoxin type B (SEB), for several more months. Nixon ended the debate when he added toxins to the bio-weapons ban in February 1970. The U.S. also ran a series of experiments with anthrax, code named Project Bacchus, Project Clear Vision and Project Jefferson in the late 1990s and early 2000s.
In recent years certain critics have claimed that the U.S. stance on biological warfare and the use of biological agents has differed from historical interpretations of the BWC. For example, it is said that the U.S. now maintains that Article I of the BWC (which explicitly bans bio-weapons) does not apply to "non-lethal" biological agents. The previous interpretation was said to be in line with a definition laid out in Public Law 101-298, the Biological Weapons Anti-Terrorism Act of 1989. That law defined a biological agent as:
any micro-organism, virus, infectious substance, or biological product that may be engineered as a result of biotechnology, or any naturally occurring or bioengineered component of any such microorganism, virus, infectious substance, or biological product, capable of causing death, disease, or other biological malfunction in a human, an animal, a plant, or another living organism; deterioration of food, water, equipment, supplies, or material of any kind ...
According to the Federation of American Scientists, U.S. work on non-lethal agents exceeds limitations in the BWC.
During the 2022 Russian invasion of Ukraine, the Russians claimed that they had come across "US military-run biolabs in Ukraine" supposedly developing biological weapons. The Ukraine biolabs conspiracy theory was rejected as without evidence by the US, Ukraine, the United Nations, Russian scientists, and Reuters, who stated that the labs perform public health research. The US dismissed the allegations as propaganda and disinformation, stating the labs focused on preventing the outbreak of infectious diseases and developing vaccines. The laboratories were first established under the Nunn–Lugar Cooperative Threat Reduction program to secure and dismantle the remnants of the Soviet biological weapons program, and since then have been used to monitor and prevent new epidemics. The laboratories are publicly listed, not secret, and are operated by their own countries, such as Ukraine, not by the US. According to PolitiFact, as part of a continuation of international agreements to reduce biological threats, the Department of Defense has provided "technical support to the Ukrainian Ministry of Health since 2005 to improve public health laboratories," but does not control or provide personnel to the public health facilities.
According to the 2008 report by the U.S. Congressional Research Service, "Developments in biotechnology, including genetic engineering, may produce a wide variety of live agents and toxins that are difficult to detect and counter; and new chemical warfare agents and mixtures of chemical weapons and biowarfare agents are being developed . . . Countries are using the natural overlap between weapons and civilian applications of chemical and biological materials to conceal chemical weapon and bioweapon production."
See also
History of biological warfare
Human experimentation in the United States
Iraqi biological weapons program
Operation Sea-Spray
Frank Olson
Project SHAD
Soviet biological weapons program
United States Army Biological Warfare Laboratories
United States and weapons of mass destruction
United States chemical weapons program
Allegations of biological warfare in the Korean War
2001 anthrax attacks
References
Further reading
Cirincione, Joseph, et al. Deadly Arsenals: Nuclear, Biological, and Chemical Threats, (Google Books), Carnegie Endowment, 2005, .
Croddy, Eric and Wirtz, James J. Weapons of Mass Destruction: An Encyclopedia of Worldwide Policy, Technology, and History, (Google Books), ABC-CLIO, 2005, .
"Global Guide to Bioweapons", Nova Online – "Bioterror", PBS, accessed January 7, 2009.
Guillemin, Jeanne. Biological Weapons: From the Invention of State-sponsored Programs to Contemporary Bioterrorism, ( Internet Archive), Columbia University Press, 2005, pp. 63, 122–27. .
Khardori, Nancy. Bioterrorism Preparedness: Medicine – Public Health – Policy, (Google Books), Wiley-VCH, 2006, .
Miller, Judith, Engelberg, Stephen and Broad, William J. Germs: Biological Weapons and America's Secret War, (Google Books), Simon and Schuster, 2002, .
Department of the Army, U.S. Army Activity in the U.S. Biological Warfare Programs, 2 volumes; 24 February 1977.
External links
"The Living Weapon", American Experience, PBS, link to full one-hour video included, accessed January 12, 2009.
biological weapons
biological weapons
Military projects of the United States
Biological weapons by country | United States biological weapons program | Technology,Engineering | 7,667 |
35,342,049 | https://en.wikipedia.org/wiki/Illumina%20dye%20sequencing | Illumina dye sequencing is a technique used to determine the series of base pairs in DNA, also known as DNA sequencing. The reversible terminator chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed by Shankar Balasubramanian and David Klenerman of Cambridge University, who subsequently founded Solexa, a company later acquired by Illumina. This sequencing method is based on reversible dye-terminators that enable the identification of single nucleotides as they are washed over DNA strands. It can also be used for whole-genome and region sequencing, transcriptome analysis, metagenomics, small RNA discovery, methylation profiling, and genome-wide protein-nucleic acid interaction analysis.
Overview
This works in three basic steps: amplify, sequence, and analyze. The process begins with purified DNA. The DNA is fragmented and adapters are added that contain segments that act as reference points during amplification, sequencing, and analysis. The modified DNA is loaded onto a flow cell where amplification and sequencing will take place. The flow cell contains nanowells that space out fragments and help with overcrowding. Each nanowell contains oligonucleotides that provide an anchoring point for the adapters to attach. Once the fragments have attached, a phase called cluster generation begins. This step makes about a thousand copies of each fragment of DNA and is done by bridge amplification PCR. Next, primers and modified nucleotides are washed onto the chip. These nucleotides have a reversible fluorescent blocker so the DNA polymerase can only add one nucleotide at a time onto the DNA fragment. After each round of synthesis, a camera takes a picture of the chip. A computer determines what base was added by the wavelength of the fluorescent tag and records it for every spot on the chip. After each round, non-incorporated molecules are washed away. A chemical deblocking step is then used to remove the 3’ fluorescent terminal blocking group. The process continues until the full DNA molecule is sequenced. With this technology, thousands of places throughout the genome are sequenced at once via massive parallel sequencing.
Procedure
Genomic Library
After the DNA is purified, a DNA library (genomic library) needs to be generated. There are two ways a genomic library can be created: sonication and tagmentation. With tagmentation, transposases randomly cut the DNA into fragments between 50 and 500 bp and add adaptors simultaneously. A genomic library can also be generated by using sonication to fragment genomic DNA. Sonication fragments DNA into similar sizes using ultrasonic sound waves. Right and left adapters then need to be attached by T7 DNA polymerase and T4 DNA ligase after sonication. Strands that fail to have adapters ligated are washed away.
Adapters
Adapters contain three different segments: the sequence complementary to solid support (oligonucleotides on flow cell), the barcode sequence (indices), and the binding site for the sequencing primer. Indices are usually six base pairs long and are used during DNA sequence analysis to identify samples. Indices allow for up to 96 different samples to be run together; this is also known as multiplexing. During analysis, the computer will group all reads with the same index together. Illumina uses a "sequence by synthesis" approach. This process takes place inside of an acrylamide-coated glass flow cell. The flow cell has oligonucleotides (short nucleotide sequences) coating the bottom of the cell, and they serve as the solid support to hold the DNA strands in place during sequencing. As the fragmented DNA is washed over the flow cell, the appropriate adapter attaches to the complementary solid support.
Bridge amplification
Once attached, cluster generation can begin. The goal is to create hundreds of identical strands of DNA. Some will be the forward strand; the rest, the reverse. This is why right and left adapters are used. Clusters are generated through bridge amplification. DNA polymerase moves along a strand of DNA, creating its complementary strand. The original strand is washed away, leaving only the reverse strand. At the top of the reverse strand there is an adapter sequence. The DNA strand bends and attaches to the oligo that is complementary to the top adapter sequence. Polymerases attach to the reverse strand, and its complementary strand (which is identical to the original) is made. The now double stranded DNA is denatured so that each strand can separately attach to an oligonucleotide sequence anchored to the flow cell. One will be the reverse strand; the other, the forward. This process is called bridge amplification, and it happens for thousands of clusters all over the flow cell at once.
Clonal amplification
Over and over again, DNA strands will bend and attach to the solid support. DNA polymerase will synthesize a new strand to create a double stranded segment, and that will be denatured so that all of the DNA strands in one area are from a single source (clonal amplification). Clonal amplification is important for quality control purposes. If a strand is found to have an odd sequence, then scientists can check the reverse strand to make sure that it has the complement of the same oddity. The forward and reverse strands act as checks to guard against artefacts. Because Illumina sequencing uses DNA polymerase, base substitution errors have been observed, especially at the 3' end. Paired end reads combined with cluster generation can confirm an error took place. The reverse and forward strands should be complementary to each other, all reverse reads should match each other, and all forward reads should match each other. If a read is not similar enough to its counterparts (with which it should be a clone), an error may have occurred. A minimum threshold of 97% similarity has been used in some labs' analyses.
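As a rough illustration of this kind of quality check (a minimal Python sketch, not part of any Illumina software; the function names, the toy reads, and the consensus string are hypothetical), reads can be compared against their cluster's consensus and flagged when identity falls below the 97% threshold mentioned above:

    def percent_identity(read, consensus):
        # Fraction of positions (over the shorter length) at which the two sequences agree.
        n = min(len(read), len(consensus))
        matches = sum(1 for a, b in zip(read[:n], consensus[:n]) if a == b)
        return matches / n if n else 0.0

    def flag_suspect_reads(reads, consensus, threshold=0.97):
        # Return the reads whose identity to the cluster consensus falls below the threshold.
        return [r for r in reads if percent_identity(r, consensus) < threshold]

    # The second read carries two mismatches near its 3' end and is flagged.
    cluster = ["ACGTACGTAC", "ACGTACGTTT", "ACGTACGTAC"]
    print(flag_suspect_reads(cluster, "ACGTACGTAC"))  # ['ACGTACGTTT']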
Sequence by synthesis
At the end of clonal amplification, all of the reverse strands are washed off the flow cell, leaving only forward strands. A primer attaches to the forward strand's adapter primer binding site, and a polymerase adds a fluorescently tagged dNTP to the DNA strand. Only one base is able to be added per round due to the fluorophore acting as a blocking group; however, the blocking group is reversible. Using the four-color chemistry, each of the four bases has a unique emission, and after each round, the machine records which base was added. Once the color is recorded, the fluorophore is washed away and another dNTP is washed over the flow cell and the process is repeated.
Starting with the launch of the NextSeq and later the MiniSeq, Illumina introduced a new two-color sequencing chemistry. Nucleotides are distinguished by either one of two colors (red or green), no color ("black") or combining both colors (appearing orange as a mixture between red and green).
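The two-color scheme amounts to a small lookup from channel signals to bases. The sketch below is illustrative Python; the particular base-to-channel assignment follows the way two-channel chemistry is commonly described and is not stated in this article, so treat it as an assumption:

    # Two-channel base calling: (red_signal, green_signal) -> base.
    # Assignment as commonly described for two-channel chemistry; illustrative only.
    TWO_CHANNEL = {
        (True,  True):  "A",   # signal in both channels, appearing orange
        (True,  False): "C",   # red channel only
        (False, True):  "T",   # green channel only
        (False, False): "G",   # dark, no label
    }

    signals = [(True, True), (False, False), (True, False), (False, True)]
    print("".join(TWO_CHANNEL[s] for s in signals))  # AGCT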
Once the DNA strand has been read, the strand that was just added is washed away. Then, the index 1 primer attaches, polymerizes the index 1 sequence, and is washed away. The strand forms a bridge again, and the 3' end of the DNA strand attaches to an oligo on the flow cell. The index 2 primer attaches, polymerizes the sequence, and is washed away.
A polymerase sequences the complementary strand on top of the arched strand. They separate, and the 3' end of each strand is blocked. The forward strand is washed away, and the process of sequence by synthesis repeats for the reverse strand.
Data analysis
The sequencing occurs for millions of clusters at once, and each cluster has ~1,000 identical copies of a DNA insert. The sequence data is analyzed by finding reads with overlapping regions and lining them up into longer contiguous sequences called contigs. If a reference sequence is known, the contigs are then compared to it for variant identification.
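As a toy illustration of overlap-based alignment (a greatly simplified Python sketch, not a real assembler; the reads and the minimum-overlap value are invented), two reads can be merged wherever a sufficiently long suffix of one exactly matches a prefix of the other:

    def merge_if_overlap(a, b, min_overlap=3):
        # Merge read b onto read a if a suffix of a matches a prefix of b.
        for k in range(min(len(a), len(b)), min_overlap - 1, -1):
            if a[-k:] == b[:k]:
                return a + b[k:]
        return None  # no sufficient overlap found

    print(merge_if_overlap("ACGTACGT", "ACGTTTGA"))  # ACGTACGTTTGA (4-base overlap)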
This piecemeal process allows scientists to see the complete sequence even though an unfragmented sequence was never run; however, because Illumina read lengths are not very long (HiSeq sequencing can produce read lengths around 90 bp long), it can be a struggle to resolve short tandem repeat areas. Also, if the sequence is de novo and a reference does not exist, repeated areas can cause a lot of difficulty in sequence assembly. Additional difficulties include base substitutions (especially at the 3' end of reads) by inaccurate polymerases, chimeric sequences, and PCR-bias, all of which can contribute to generating an incorrect sequence.
Comparison with other sequencing methods
This technique offers several advantages over traditional sequencing methods such as Sanger sequencing. Sanger sequencing requires two reactions, one for the forward primer and another for the reverse primer. Unlike Illumina, Sanger sequencing uses fluorescently labeled dideoxynucleoside triphosphates (ddNTPs) to determine the sequence of the DNA fragment. ddNTPs are missing the 3' OH group and terminate DNA synthesis permanently. In each reaction tube, dNTPs and ddNTPs are added, along with DNA polymerase and primers. The ratio of ddNTPs to dNTPs matters since the template DNA needs to be completely synthesized, and an overabundance of ddNTPs will create multiple fragments of the same size and position of the DNA template. When the DNA polymerase adds a ddNTP the fragment is terminated and a new fragment is synthesized. Each fragment synthesized is one nucleotide longer than the last. Once the DNA template has been completely synthesized, the fragments are separated by capillary electrophoresis. At the bottom of the capillary tube a laser excites the fluorescently labeled ddNTPs and a camera captures the color emitted.
Due to the automated nature of Illumina dye sequencing, it is possible to sequence multiple strands at once and obtain actual sequencing data quickly. With Sanger sequencing, only one strand can be sequenced at a time, and the process is relatively slow. Illumina also uses only DNA polymerase, as opposed to the multiple, expensive enzymes required by other sequencing techniques (e.g. pyrosequencing).
References
DNA sequencing methods | Illumina dye sequencing | Biology | 2,131 |
17,919,325 | https://en.wikipedia.org/wiki/Piperitone | Piperitone is a natural monoterpene ketone which is a component of some essential oils. Both stereoisomers, the D-form and the L-form, are known. The D-form has a peppermint-like aroma and has been isolated from the oils of plants from the genera Cymbopogon, Andropogon, and Mentha. The L-form has been isolated from Sitka spruce.
Occurrence
Piperitone is found in many essential oils, including those of over thirty species of the genus Eucalyptus. High levels are present in certain species of Eucalyptus and Mentha. In the genus Eucalyptus, the highest concentrations are found in Eucalyptus dives. Both enantiomers occur naturally. In Eucalyptus species, (-)-piperitone is present; in mint species, (+)-piperitone is found; and some plants contain racemic piperitone.
Properties
Piperitone is a colorless liquid with a distinct peppermint odor.
Production
Piperitone can be synthesized from isopropyl acetoacetate and 3-buten-2-one.
The primary source of D/L-piperitone is from Eucalyptus dives, produced mainly in South Africa.
Reactions
Piperitone is used as the principal raw material for the production of synthetic menthol and thymol. The reduction to menthol is achieved using hydrogen and a nickel catalyst. Oxidation to thymol is accomplished with iron(III) chloride and acetic acid. It also forms adducts with benzaldehyde and hydroxylamine (an oxime), which were historically useful for compound identification by the melting points of the derivatives. Under light exposure, piperitone undergoes photodimerization, forming a polycyclic compound with a cyclobutane ring.
References
Ketones
Monoterpenes
Cyclohexenes | Piperitone | Chemistry | 386 |
25,250,224 | https://en.wikipedia.org/wiki/Fluoroethyl | Fluoroethyl is an organofluorine functional group in chemistry. Its chemical formulas are CH3CHF- (1-fluoroethyl) and -CH2CH2F (2-fluoroethyl). The general formulas of a compound containing this group are R-CHFCH3 and R-CH2CH2F, where R stands for an organyl group. An example of a compound containing the fluoroethyl group is (2-fluoroethyl)benzene, PhCH2CH2F, where Ph stands for phenyl.
See also
Trifluoromethyl
References
Haloalkyl groups | Fluoroethyl | Chemistry | 107 |
166,354 | https://en.wikipedia.org/wiki/Barotropic%20vorticity%20equation | The barotropic vorticity equation assumes the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height. In other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and the Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow Arctic highs) and warm-core lows (such as tropical cyclones).
A simplified form of the vorticity equation for an inviscid, divergence-free flow (solenoidal velocity field), the barotropic vorticity equation can simply be stated as

    \frac{D\eta}{Dt} = 0 ,

where \frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{v}\cdot\nabla is the material derivative and

    \eta = \zeta + f

is absolute vorticity, with ζ being relative vorticity, defined as the vertical component of the curl of the fluid velocity, and f is the Coriolis parameter

    f = 2\Omega \sin\varphi ,

where Ω is the angular frequency of the planet's rotation (Ω = 7.2921 × 10^-5 rad/s for the earth) and φ is latitude.

In terms of relative vorticity, the equation can be rewritten as

    \frac{D\zeta}{Dt} = -\beta v ,

where β = \frac{\partial f}{\partial y} is the variation of the Coriolis parameter with distance y in the north–south direction and v is the component of velocity in this direction.
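For a concrete sense of the quantities involved, the Coriolis parameter and its meridional gradient follow directly from the definitions above; the short Python sketch below uses Earth's rotation rate and mean radius (standard textbook values, not taken from this article):

    import math

    OMEGA = 7.2921e-5       # Earth's rotation rate, rad/s
    EARTH_RADIUS = 6.371e6  # Earth's mean radius, m

    def coriolis_parameter(lat_deg):
        # f = 2 * Omega * sin(phi)
        return 2 * OMEGA * math.sin(math.radians(lat_deg))

    def beta(lat_deg):
        # beta = df/dy = 2 * Omega * cos(phi) / a
        return 2 * OMEGA * math.cos(math.radians(lat_deg)) / EARTH_RADIUS

    print(coriolis_parameter(45.0))  # ~1.03e-4 s^-1
    print(beta(45.0))                # ~1.6e-11 m^-1 s^-1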
In 1950, Charney, Fjørtoft, and von Neumann integrated this equation (with an added diffusion term on the right-hand side) on a computer for the first time, using an observed field of 500 hPa geopotential height for the first timestep. This was one of the first successful instances of numerical weather prediction.
See also
Barotropic
References
External links
http://www.met.reading.ac.uk/~ross/Science/BarVor.html
Equations of fluid dynamics | Barotropic vorticity equation | Physics,Chemistry | 421 |
16,961,891 | https://en.wikipedia.org/wiki/Self-confirming%20equilibrium | In game theory, self-confirming equilibrium is a generalization of Nash equilibrium for extensive form games, in which players correctly predict the moves their opponents make, but may have misconceptions about what their opponents would do at information sets that are never reached when the equilibrium is played. Self-confirming equilibrium is motivated by the idea that if a game is played repeatedly, the players will revise their beliefs about their opponents' play if and only if they observe these beliefs to be wrong.
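Schematically (a hedged restatement along the lines of Fudenberg and Levine's formulation, with notation chosen here purely for illustration), a strategy profile σ is a self-confirming equilibrium if each player i holds beliefs μ_i about opponents' play such that

    \sigma_i \in \arg\max_{s_i} \, u_i(s_i, \mu_i)
    \qquad \text{and} \qquad
    \mu_i \text{ coincides with } \sigma_{-i} \text{ at every information set reachable under } \sigma .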
Consistent self-confirming equilibrium is a refinement of self-confirming equilibrium that further requires that each player correctly predicts play at all information sets that can be reached when the player's opponents, but not the player herself, deviate from their equilibrium strategies. Consistent self-confirming equilibrium is motivated by learning models in which players are occasionally matched with "crazy" opponents, so that even if they stick to their equilibrium strategy themselves, they eventually learn the distribution of play at all information sets that can be reached if their opponents deviate.
References
Game theory equilibrium concepts | Self-confirming equilibrium | Mathematics | 213 |
5,063,129 | https://en.wikipedia.org/wiki/Park%20Grass%20Experiment | The Park Grass Experiment is a biological study originally set up to test the effect of fertilizers and manures on hay yields. The scientific experiment is located at the Rothamsted Research in the English county of Hertfordshire, and is notable as one of the longest-running experiments of modern science, as it was initiated in 1856 and has been continually monitored ever since.
The experiment was originally designed to answer agricultural questions but has since proved an invaluable resource for studying natural selection and biodiversity. The treatments under study were found to be affecting the botanical make-up of the plots and the ecology of the field and it has been studied ever since. In spring, the field is a colourful tapestry of flowers and grasses, some plots still having the wide range of plants that most meadows probably contained hundreds of years ago.
Over its history, Park Grass has:
demonstrated that conventional field trials probably underestimate threats to plant biodiversity from long term changes, such as soil acidification,
shown how plant species richness, biomass and pH are related,
demonstrated that competition between plants can make the effects of climatic variation on communities more extreme,
provided one of the first demonstrations of local evolutionary change under different selection pressures and
endowed us with an archive of soil and hay samples that have been used to track the history of atmospheric pollution, including nuclear fallout.
Bibliography
Rothamsted Research: Classical Experiments
Biodiversity
Ecological experiments
Grasslands
1856 establishments in England | Park Grass Experiment | Biology | 285 |
47,348,378 | https://en.wikipedia.org/wiki/KTDU-35 | The KTDU-35 (GRAU Index 11D62) was a Soviet spacecraft propulsion system composed of two liquid rocket engines, the primary, S5.60 (SKD) and the secondary S5.35 (DKD), fed from the same propellant tanks. Both engines burn UDMH and AK27I in the gas generator cycle. It was designed by OKB-2, the famous Isaev Design Bureau, for the original Soyuz programme.
Within the Soyuz and Progress, the SKD is the primary engine and the DKD is the backup engine for main orbital correction and de-orbit operations. The engines generate (SKD) or (DKD) of thrust with a specific impulse of 278 seconds and 270 seconds, respectively. The SKD nozzle is fixed in the aft of the craft, and the dual DKD nozzles are on either side. The spacecraft attitude system (DPO) is responsible for pointing the vehicle in the correct direction and keeping it that way during SKD burns.
Versions
This engine has been used in three variants:
S5.53: Orbital correction engine for the lunar version of the Soyuz.
S5.60 (AKA KTDU-35 GRAU Index 11D62): Version for the LEO version of the Soyuz.
S5.66 (AKA KTDU-66): Maneuvering engine version for the Salyut 1 and Salyut 4 stations. Increased burn time to 1000 seconds and increased number of starts. Also was composed of primary and secondary engines.
See also
Soyuz 7K-OK
Soyuz 7K-OKS
Soyuz 7K-T
Soyuz 7K-TM
Progress 7K-TG
Isaev
S5.4
References
External links
KB KhIMMASH Official Page (in Russian)
Rocket engines of Russia
Rocket engines of the Soviet Union
Rocket engines using hypergolic propellant | KTDU-35 | Astronomy | 390 |
56,981,810 | https://en.wikipedia.org/wiki/ETrice | eTrice is a CASE tool for the development of real-time software. It is an official Eclipse project.
The software architecture tool eTrice implements the domain-specific language Real-Time Object-Oriented Modeling (ROOM). It provides code generators for C, C++, and Java. Each release is accompanied by tutorials, and training is provided.
Since ObjecTime Developer went out of support, eTrice is the only remaining implementation of ROOM.
Literature
Bran Selic, Garth Gullekson, Paul T. Ward: Real-Time Object-Oriented Modeling. John Wiley & Sons Inc, New York 1994,
New Edition: Bran Selic, Garth Gullekson, Paul T. Ward: Real-Time Object-Oriented Modeling. MBSE4U, Hamburg 2023,
References
External links
eTrice project at eclipse.org
Real-time technology
Eclipse (software)
Software using the Eclipse Public License | ETrice | Technology | 183 |
77,250,718 | https://en.wikipedia.org/wiki/UGC%20711 | UGC 711 is a relatively nearby spiral galaxy located in the constellation of Cetus. Estimated to lie 77 million light-years from Earth, the galaxy has a luminosity class of IV and an HI line width region. It belongs to the equatorial region of the Eridanus Void with an arcsec approximation of ≈ 250.
Morphology
UGC 711 is considered a low-surface brightness galaxy (LSB) with a diffuse stellar disk.
With a measured surface brightness about 1 magnitude fainter than the value of μB,0 = 21.65 mag arcsec−2 given by K.C. Freeman, UGC 711 is one of the best-studied superthin galaxies, a classification defined by an atypically flat appearance when seen edge-on. It has a flat structure with an angular diameter of only about a = 40 arcsec, but a major-to-minor axis ratio greater than 7.
The rotational velocity of UGC 711 is said to be only Vcirc = 92 km s−1 according to measurements from Hyperleda.
References
Cetus
Spiral galaxies
00711
Low surface brightness galaxies
+00-04-008
004063 | UGC 711 | Astronomy | 242 |
33,482,931 | https://en.wikipedia.org/wiki/Smart%20traffic%20light | Smart traffic lights or Intelligent traffic lights are a vehicle traffic control system that combines traditional traffic lights with an array of sensors and artificial intelligence to intelligently route vehicle and pedestrian traffic. They can form part of a bigger intelligent transport system.
Research
A technology for smart traffic signals has been developed at Carnegie Mellon University and is being used in a pilot project in Pittsburgh in an effort to reduce vehicle emissions in the city. Unlike other dynamic control signals that adjust the timing and phasing of lights according to limits that are set in controller programming, this system combines existing technology with artificial intelligence.
The signals communicate with each other and adapt to changing traffic conditions to reduce the amount of time that cars spend idling. Using fiber optic video receivers similar to those already employed in dynamic control systems, the new technology monitors vehicle numbers and makes changes in real time to avoid congestion wherever possible. Initial results from the pilot study are encouraging: the amount of time that motorists spent idling at lights was reduced by 40% and travel times across the city were reduced by 25%.
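The details of the Pittsburgh system are proprietary, but the flavor of adaptive timing can be conveyed with a toy Python sketch (purely illustrative; the proportional rule, the 90-second cycle, and the queue counts below are invented and are not the actual Carnegie Mellon algorithm):

    def allocate_green(queues, cycle_s=90, min_green_s=10):
        # Split a fixed cycle among approaches in proportion to their queue lengths,
        # while guaranteeing every approach a minimum green time.
        total = sum(queues.values())
        spare = cycle_s - min_green_s * len(queues)
        return {
            approach: min_green_s + (spare * q / total if total else spare / len(queues))
            for approach, q in queues.items()
        }

    print(allocate_green({"north": 12, "south": 4, "east": 8, "west": 0}))
    # The busiest approach gets the largest share; every approach keeps at least 10 s.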
Possible benefits
Companies involved in developing smart traffic management systems include BMW and Siemens, who unveiled their system of networked lights in 2010. This system works with the anti-idling technology that many cars are equipped with, to warn them of impending light changes. This should help cars that feature anti-idling systems to use them more intelligently, and the information that networks receive from the cars should help them to adjust light cycling times to make them more efficient.
A patent published on March 1, 2016, by John F. Hart Jr. describes a "smart" traffic control system that "sees" traffic approaching an intersection and reacts according to what is needed to keep the flow of vehicles at the most efficient rate. By anticipating the needs of approaching vehicles, as opposed to reacting to them after they arrive and stop, this system has the potential to save motorists time while cutting down harmful emissions.
Romanian and US research teams believe that the time spent by motorists waiting for lights to change could be reduced by over 28% with the introduction of smart traffic lights, and that CO2 emissions could be cut by as much as 6.5%.
A major use of Smart traffic lights could be as part of public transport systems. The signals can be set up to sense the approach of buses or trams and change the signals in their favour, thus improving the speed and efficiency of sustainable transport modes.
Obstacles to widespread introduction
The main stumbling block to the widespread introduction of such systems is the fact that most vehicles on the road are unable to communicate with the computer systems that town and city authorities use to control traffic lights. However, a trial in Harris County, Texas, uses a simple system based on signals received from drivers' cell phones, and it has found that even if only a few drivers have their phones switched on, the system is still able to produce reliable data on traffic density. This means that the adoption of smart traffic lights around the world could begin as soon as a reasonable minority of vehicles were fitted with the technology to communicate with the computers that control the signals, rather than having to wait until the majority of cars had such technology.
The first experiment
In July 2019 the first experiment with a traffic signal regulated by 100% "connected" vehicles was carried out at the University of Calabria (Unical), with the help of common commercial smartphones, by a team of researchers working for Unical and the innovative start-up SOMOS.
Simpler systems
In the United Kingdom, lights that changed to red when sensing that an approaching motorist was traveling too fast were being trialled in Swindon in 2011, to see if they are more effective at reducing the number of accidents on the road than the speed cameras that preceded them and which were removed following a council decision in 2008. These lights are more focused on encouraging motorists to obey the law but if they prove to be a success then they could pave the way for more sophisticated systems to be introduced in the UK.
Previous research
In addition to the findings of the Romanian and US researchers mentioned above, scientists in Dresden, Germany came to the conclusion that smart traffic lights could handle their task more efficiently without human interface.
See also
Traffic light control and coordination
Level crossing
Pedestrian crossing
Scalable Urban Traffic Control
Traffic optimization
References
Smart devices
Traffic signals | Smart traffic light | Technology | 868 |
30,856,833 | https://en.wikipedia.org/wiki/C22H42O4 | {{DISPLAYTITLE:C22H42O4}}
The molecular formula C22H42O4 (molar mass: 370.56 g/mol) may refer to:
Bis(2-ethylhexyl) adipate
Dioctyl adipate
Docosanedioic acid
Molecular formulas | C22H42O4 | Physics,Chemistry | 68 |
8,203,821 | https://en.wikipedia.org/wiki/Nader%20Engheta | Nader Engheta (; born October 8, 1955) is an Iranian-American scientist. He has made pioneering contributions to the fields of metamaterials, transformation optics, plasmonic optics, nanophotonics, graphene photonics, nano-materials, nanoscale optics, nano-antennas and miniaturized antennas, physics and reverse-engineering of polarization vision in nature, bio-inspired optical imaging, fractional paradigm in electrodynamics, and electromagnetics and microwaves.
Background
Engheta was born on October 8, 1955 in Tehran. After earning a B.S. degree from the school of engineering (Daneshkadeh-e-Fanni) of the University of Tehran, he left for the United States in the summer of 1978 and earned his master's and PhD degrees from Caltech.
He is one of the original pioneers of the field of modern metamaterials, and is the originator of the fields of near-zero-index metamaterials, plasmonic cloaking and optical nano circuitry (optical metatronics,).
In his metamaterial-based optical nanocircuitry, properly designed nanostructures function as lumped optical circuit elements such as optical capacitors, optical inductors, and optical resistors. These are the building blocks for the metatronic circuits operating with light. This concept has been verified and realized experimentally by him and his research group at the University of Pennsylvania, and it provides a new circuit paradigm for information processing at the nanoscale.
His near-zero-index structures exhibit unique properties in light-matter interaction that have provided exciting possibilities in nanophotonics.
His plasmonic cloaking ideas have led to new methods in stealth physics.
He and his group have developed several areas and concepts in the fields of metamaterials and plasmonic optics, including: (1) ‘extreme-parameter metamaterials’ and 'epsilon-near-zero (ENZ) metamaterials'; (2) the concept of Omega structures, as one of the building blocks of structured materials; (3) ultrathin cavities and waveguides, with sizes beyond diffraction limits, providing possibilities for unprecedented miniaturization of devices; (4) supercoupling phenomena between waveguides using low-permittivity ENZ metamaterials; (5) extended Purcell effects in nano-optics using the ENZ phenomena, in which enhanced photon density of states occurs in a relatively large area with essentially uniform phase; (6) far-field subwavelength imaging lens based on ENZ hyperbolic metamaterials; (7) scattering-cancellation-based plasmonic cloaking and transparency; (8) merging the field of graphene with the field of metamaterials and plasmonic optics in the infrared regime, providing the roadmaps for one-atom-thick optical devices and one-atom-thick information processing; (9) microwave artificial chirality; (10) “signal-processing” metamaterials and “meta-machine”; and (11) “digital” metamaterials.
He is currently the H. Nedwill Ramsey Professor at the University of Pennsylvania, Philadelphia, Pennsylvania, USA, affiliated with the departments of Electrical and Systems Engineering, Bioengineering, Materials Science and Engineering, and Physics and Astronomy.
Awards and honors
Professor Engheta has received the following honors and awards:
Elected Member Academia Europaea (2024)
Caltech Distinguished Alumni Award (2023)
Elected to the American Academy of Arts and Sciences (2023)
Franklin Medal in Electrical Engineering (2023)
Hermann Anton Haus Lecture, MIT (April 13, 2022)
Isaac Newton Medal (2020)
Max Born Award (2020)
Canadian Academy of Engineering, International Fellow (2019)
Ellis Island Medal of Honor from the Ellis Island Honors Society (2019)
Pioneer Award in Nanotechnology from IEEE Nanotechnology Council (2018)
Highly Cited Researcher (Clarivate Analytics, Top 1% Researcher most cited) (2017 & 2018)
William Streifer Scientific Achievement Award from IEEE Photonics Society (2017)
Beacon of Photonics Industry Award from Photonics Media (2017)
Honorary Doctorate from National Technical University Kharkov Polytechnic Institute (2017)
Honorary Doctorate from University of Stuttgart, Germany (2016)
Honorary Doctorate in Technology from Aalto University in Finland (2016)
SPIE Gold Medal (2015)
Vannevar Bush Faculty Fellow Award from the US Department of Defense (2015)
Distinguished Achievement Award from the IEEE Antennas and Propagation Society (2015)
Wheatstone Lecture in King's College London (2015)
Balthasar van der Pol Gold Medal from URSI (International Union of Radio Science) (2014)
Inaugural SINA Award in Engineering (SINA: "Spirit of Iranian Noted Achiever") (2013)
Benjamin Franklin Key Award (2013)
IEEE Electromagnetics Award (2012)
Fellow of the Institute of Physics (UK) (2020)
Fellow of the Union Radio-Scientifique Internationale (URSI: International Union of Radio Science) (since 2017)
Fellow of the US National Academy of Inventors (NAI) (2015)
Fellow of the Materials Research Society (MRS) (since 2015)
Fellow of the SPIE- The International Society for Optical Engineering (since 2011)
Fellow of the American Association for the Advancement of Science (AAAS) (since 2010)
Fellow of the American Physical Society (APS) (since November 2008)
Fellow of the Optical Society of America (OSA) (since March 1999)
Fellow of the Institute of Electrical and Electronics Engineers IEEE (since January 1996)
Recipient of the George H. Heilmeier Faculty Award 2008 for Excellence in Research
In Scientific American Magazine List of 50 Leaders in Science and Technology, 2006
Endowed Scholarly H. Nedwill Ramsey Professorship, U. of Pennsylvania, January 2005 – present
IEEE Third Millennium Medal
Guggenheim Fellowship (1999)
UPS Foundation Distinguished Educator term Chair
Fulbright Naples Chair Award (1998)
S. Reid Warren Jr. Award (two times: 1993 and 2001)
IEEE Antennas and Propagation Society (AP-S) Distinguished Lecturer for 1997–1999
W. M. Keck Foundation's Engineering Teaching Excellence Award (1995)
Christian F. and Mary R. Lindback Foundation Award (1994)
NSF Presidential Young Investigator (PYI) Award (1989)
Frequent plenary and keynote speaker at many conferences
Books
See also
Iranian science
References
External links
Dr. Nader Engheta in Scientific American 50 list
For Nader Engheta's Recent Research Papers, see https://www.seas.upenn.edu/~engheta/publications-intro-page.htm
For Press Releases and News Media Reports on some of Nader Engheta's Research, see https://www.seas.upenn.edu/~engheta/news-1.htm
Doing Math at the Speed of Light - A TEDx talk by Dr. Engheta
Iranian electrical engineers
American people of Iranian descent
Scientists from Tehran
1955 births
Living people
Metamaterials scientists
University of Pennsylvania faculty
California Institute of Technology alumni
University of Tehran alumni
Fellows of the American Association for the Advancement of Science
Fellows of the American Physical Society
Fellows of Optica (society)
Fellows of the IEEE
Iranian expatriate academics
21st-century American engineers
Optical engineers
Benjamin Franklin Medal (Franklin Institute) laureates | Nader Engheta | Materials_science | 1,534 |
42,388,127 | https://en.wikipedia.org/wiki/Lenovo%20A526 | Lenovo A526 is a dual-SIM, quad-core MediaTek Cortex-A7 based smartphone launched on 2 April 2014 and released on 5 April 2014.
Design and features
The CPU is a MediaTek Cortex-A7 1.3 GHz quad-core processor. It has a 4.5 inch FWVGA screen with a resolution of 480x854 px.
RAM memory is 1 GB, internal eMMC memory size is 4 GiB. Additionally, the smartphone supports external microSD/microSDHC card of a capacity up to 32 GB.
The Li-Po battery capacity is 2000 mAh. Lenovo A526 is running Android 4.2, Jelly Bean operating system.
References
A526
Android (operating system) devices
Mobile phones introduced in 2014
Discontinued smartphones | Lenovo A526 | Technology | 163 |
3,615,907 | https://en.wikipedia.org/wiki/Oblique%20wing | An oblique wing (also called a slewed wing) is a variable geometry wing concept. On an aircraft so equipped, the wing is designed to rotate on center pivot, so that one tip is swept forward while the opposite tip is swept aft. By changing its sweep angle in this way, drag can be reduced at high speed (with the wing swept) without sacrificing low speed performance (with the wing perpendicular). This is a variation on the classic swing-wing design, intended to simplify construction and retain the center of gravity as the sweep angle is changed.
History
The oldest examples of this technology are the unrealized German aircraft projects Blohm & Voss P.202 and Messerschmitt Me P.1009-01 from the year 1944, based on a Messerschmitt patent. After the war, constructor Dr. Richard Vogt was brought to the US during Operation Paperclip.
The oblique wing concept was resurrected by Robert T. Jones in the 1950s, an aeronautical engineer at the NASA Ames Research Center, Moffett Field, California. Analytical and wind tunnel studies initiated by Jones at Ames indicated that a transport-size oblique-wing aircraft, flying at speeds up to Mach 1.4 (1.4 times the speed of sound), would have substantially better aerodynamic performance than aircraft with more conventional wings.
In the 1970s, an uncrewed propeller-driven aircraft was constructed and tested at Moffett Field. Known as the NASA Oblique Wing, the project pointed out a craft's unpleasant characteristics at large sweep angles.
So far, only one crewed aircraft, the NASA AD-1, has been built to explore this concept. It flew a series of flight tests starting in 1979. This aircraft demonstrated a number of serious roll-coupling modes and further experimentation ended.
Theory
The general approach is to design an aircraft that performs with high efficiency as the Mach number increases from takeoff to cruise conditions (M ~ 0.8, for a commercial aircraft). Since two different types of drag dominate in each of these two flight regimes, uniting high performance designs for each regime into a single airframe is problematic.
At low Mach numbers induced drag dominates drag concerns. Airplanes during takeoff and gliders are most concerned with induced drag. One way to reduce induced drag is to increase the effective wingspan of the lifting surface. This is why gliders have such long, narrow wings. An ideal wing has infinite span and induced drag is reduced to a two–dimensional property. At lower speeds, during takeoffs and landings, an oblique wing would be positioned perpendicular to the fuselage like a conventional wing to provide maximum lift and control qualities. As the aircraft gained speed, the wing would be pivoted to increase the oblique angle, thereby reducing the drag due to wetted area, and decreasing fuel consumption.
Alternatively, at Mach numbers increasing towards the speed of sound and beyond, wave drag dominates design concerns. As the aircraft displaces the air, a sonic wave is generated. Sweeping the wings away from the nose of the aircraft can keep the wings aft of the sonic wave, greatly reducing drag. Unfortunately, for a given wing design, increasing sweep decreases the aspect ratio. At high speeds, both subsonic and supersonic, an oblique wing would be pivoted at up to 60 degrees to the aircraft's fuselage for better high-speed performance. The studies showed these angles would decrease aerodynamic drag, permitting increased speed and longer range with the same fuel expenditure.
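The two effects can be put in rough quantitative terms with textbook relations (a Python sketch under standard lifting-line and simple-sweep assumptions, not a design tool; the lift coefficient, aspect ratios, and Oswald efficiency below are illustrative values, not figures from this article):

    import math

    def induced_drag_coeff(cl, aspect_ratio, e=0.9):
        # Lifting-line estimate: CD_i = CL^2 / (pi * e * AR)
        return cl ** 2 / (math.pi * e * aspect_ratio)

    def normal_mach(mach, sweep_deg):
        # Mach number component normal to a wing swept by sweep_deg degrees.
        return mach * math.cos(math.radians(sweep_deg))

    print(induced_drag_coeff(cl=0.5, aspect_ratio=10))  # ~0.0088
    print(induced_drag_coeff(cl=0.5, aspect_ratio=4))   # ~0.0221
    print(normal_mach(1.4, 60))                         # 0.7, subsonic normal to the wing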
Fundamentally, it appears that no design can be completely optimised for both flight regimes. However, the oblique wing shows promise of getting close. By actively increasing sweep as Mach number increases, high efficiency is possible for a wide range of speeds.
Robert T. Jones theorised that an oblique flying wing could drastically improve commercial air transportation, reducing fuel costs and noise in the vicinity of airports. Military operations include the possibility of a long–endurance fighter/attack vehicle.
NASA OFW airliner research
There have been investigations into an OFW platform being developed into a transcontinental airliner. NASA Ames performed a preliminary design study of a theoretical 500-seat supersonic airliner using the concept in 1991. Following this study, NASA built a small remote-controlled demonstrator aircraft with a 20-foot (6.1m) wingspan. It flew only once, for four minutes in May 1994, but in doing so, it demonstrated stable flight with oblique wing sweep from 35 degrees to 50 degrees. Despite this success, the NASA High Speed Research program, and further oblique wing studies, were canceled.
DARPA Oblique Flying-Wing (OFW) Project
The United States Defense Advanced Research Projects Agency (DARPA) awarded Northrop Grumman a $10.3 million (USD) contract for risk reduction and preliminary planning for an X-plane OFW demonstrator, known as the Switchblade. That program was eventually cancelled, citing difficulties with control systems.
The program aimed at producing a technology demonstrator aircraft to explore the various challenges which the radical design entails. The proposed aircraft would be a pure flying wing (an aircraft with no other auxiliary surfaces such as tails, canards or a fuselage) where the wing is swept with one side of the aircraft forward, and one backwards in an asymmetric fashion. This aircraft configuration is believed to give it a combination of high speed, long range and long endurance. The program entailed two phases. Phase I was to explore the theory and result in a conceptual design, while Phase II covered the design, manufacture and flight test of an aircraft. The program hoped to produce a dataset that can then be used when considering future military aircraft designs.
Wind tunnel tests for the aircraft design were completed. The design was noted to be "workable and robust." The program was concluded before a flight demonstrator was constructed.
See also
Asymmetrical aircraft
Circular wing
References
Further reading
Thinking Obliquely, Larrimer, Bruce I., NASA (2013)
External links
Oblique Flying Wings: An Introduction and White Paper - Desktop Aeronautics, Inc., 2005
Aircraft configurations
Aircraft wing design
Asymmetrical aircraft
Wing configurations | Oblique wing | Physics,Engineering | 1,253 |
35,467,350 | https://en.wikipedia.org/wiki/Morchella%20diminutiva | Morchella diminutiva is a species of fungus in the family Morchellaceae native to North America. Described as new to science in 2012, it occurs in eastern North America, usually near Fraxinus americana and Liriodendron tulipifera, but also under other hardwoods like species of Carya.
References
External links
diminutiva
Edible fungi
Fungi described in 2012
Fungi of North America
Fungus species | Morchella diminutiva | Biology | 88 |
3,246,442 | https://en.wikipedia.org/wiki/Allura%20Red%20AC | Allura Red AC, also known as FD&C Red 40 or E129, is a red azo dye commonly used in food. It was developed in 1971 by the Allied Chemical Corporation, who gave the substance its name.
It is usually supplied as its red sodium salt but can also be used as the calcium and potassium salts. These salts are soluble in water. In solution, its maximum absorbance lies at about 504 nm.
Allura Red AC is manufactured by coupling diazotized 5-amino-4-methoxy-2-toluenesulfonic acid with 6-hydroxy-2-naphthalene sulfonic acid in an azo coupling reaction.
Use as a consumable coloring agent
Allura Red AC is a popular dye used worldwide. Annual production in 1980 was greater than 2.3 million kilograms. It was introduced as a replacement for amaranth in the United States.
The European Union approved Allura Red AC as a food colorant in 1994, but EU countries' local laws banning food colorants were preserved.
In the United States, Allura Red AC is approved by the FDA for use in cosmetics, drugs, and food. When prepared as a lake pigment it is disclosed as Red 40 Lake or Red 40 Aluminum Lake. It is used in some tattoo inks and is used in many products, such as cotton candy, soft drinks, cherry-flavored products, children's medications, and dairy products. It is occasionally used to dye medicinal tablets, such as the antihistamine fexofenadine, to help with identification. It is by far the most commonly used red dye in the United States, completely replacing amaranth (Red 2) and also replacing erythrosine (Red 3) in most applications due to the negative health effects of those two dyes.
Studies on safety
Allura Red has been heavily studied by food safety groups in North America and Europe, and remains in wide use. However, chronic exposure to the dye has been shown to increase susceptibility to bowel disorders in mice. The dye has been shown to damage the DNA of mice.
The UK's Food Standards Agency commissioned a study of six food dyes (tartrazine, Allura Red, Ponceau 4R, Quinoline Yellow, Sunset Yellow, and carmoisine), dubbed the "Southampton 6", together with sodium benzoate (a preservative), on children in the general population, who consumed them in beverages. The study found "a possible link between the consumption of these artificial colours and a sodium benzoate preservative and increased hyperactivity" in the children; the advisory committee to the FSA that evaluated the study also determined that, because of study limitations, the results could not be extrapolated to the general population, and further testing was recommended.
The European Food Safety Authority (EFSA), with a stronger emphasis on the precautionary principle, required labelling and temporarily reduced the acceptable daily intake (ADI) for the food colorings; the UK FSA called for voluntary withdrawal of the colorings by food manufacturers. However, in 2009, the EFSA re-evaluated the data at hand and determined that "the available scientific evidence does not substantiate a link between the color additives and behavioral effects", and in 2014, after further review of the data, the EFSA restored the prior ADI levels. In 2015, the EFSA found that the exposure estimates did not exceed the ADI of 7 mg/kg per day in any population.
The US FDA did not make changes following the publication of the Southampton study. Following a citizen petition filed by the Center for Science in the Public Interest in 2008, requesting the FDA ban several food additives, the FDA commenced a review of the available evidence but found no evidence to justify changes.
Allura Red AC has previously been banned in Denmark, Belgium, France, Switzerland, and Sweden. This changed in 2008, when the EU adopted a common framework for authorizing food additives, under which Allura Red AC is not currently banned. In Norway and Iceland, it was banned between 1978 and 2001, a period in which azo dyes were only legally used in alcoholic beverages and some fish products.
References
External links
Allura Red AC on PubChem
International Programme on Chemical Safety
List of Foods and Drugs containing Red Dye #40
2-Naphthols
Azo dyes
Benzenesulfonates
Food colorings
Naphthalenesulfonates
Organic sodium salts
E-number additives
Products introduced in 1971
American inventions
20th-century inventions
1971 in science | Allura Red AC | Chemistry | 950 |
56,221,934 | https://en.wikipedia.org/wiki/Dataism | Dataism is a term that has been used to describe the mindset or philosophy created by the emerging significance of big data. It was first used by David Brooks in The New York Times in 2013. The term has been expanded to describe what historian Yuval Noah Harari, in his book Homo Deus: A Brief History of Tomorrow from 2015, calls an emerging ideology or even a new form of religion, in which "information flow" is the "supreme value". In art, the term was used by Albert-László Barabási to refer to an artistic movement that uses data as its primary source of inspiration.
History
"If you asked me to describe the rising philosophy of the day, I'd say it is Data-ism", wrote David Brooks in The New York Times in February 2013. Brooks argued that in a world of increasing complexity, relying on data could reduce cognitive biases and "illuminate patterns of behavior we haven't yet noticed".
In 2015, Steve Lohr's book Data-ism examined how big data is transforming society, using the term to describe the big data revolution.
In Homo Deus: A Brief History of Tomorrow (published in English in 2016), Yuval Noah Harari argues that all competing political or social structures can be seen as data processing systems: "Dataism declares that the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing" and "we may interpret the entire human species as a single data processing system, with individual humans serving as its chips." According to Harari, a Dataist should want to "maximise dataflow by connecting to more and more media". Harari predicts that the logical conclusion of this process is that, eventually, humans will give algorithms the authority to make the most important decisions in their lives, such as whom to marry and which career to pursue. Harari argues that Aaron Swartz could be called the "first martyr" of Dataism.
In 2022, Albert-László Barabási adopted the term "Dataism" for an artistic movement that positions data as the central means of understanding nature, society, technology, and human essence. The movement underscores the necessity for art to integrate with data to stay relevant in contemporary society.
Dataism responds to the intricacy and interconnectedness of modern social, economic, and technological realms, which exceed individual understanding. Advocating for the use of methodologies from various fields like science, business, and politics in art, Dataism sees this fusion as essential for art to retain its significance and influence.
Criticism
Commenting on Harari's characterisation of Dataism, security analyst Daniel Miessler believes that Dataism does not present the challenge to the ideology of liberal humanism that Harari claims, because humans will simultaneously be able to believe in their own importance and that of data.
Harari himself raises some criticisms, such as the problem of consciousness, which Dataism is unlikely to illuminate. Humans may also find out that organisms are not algorithms, he suggests. Dataism further implies that all data, even personal data, should be public in order for the system to work as a whole, a requirement that is already meeting resistance.
Other analysts, such as Terry Ortleib, have looked at the extent to which Dataism poses a dystopian threat to humanity.
The Facebook–Cambridge Analytica data scandal showed how political actors used Facebook users' data to build detailed psychological profiles, which were then used to manipulate users on the network. A team of data analysts reproduced the kind of AI technology Cambridge Analytica had built around Facebook data and arrived at the following rules of thumb: 10 likes allow a machine to know a person as well as a coworker does, 70 likes as well as a friend, 150 likes as well as a parent, 300 likes as well as a lover, and beyond that it may be possible to know a person better than they know themselves.
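As a rough illustration of the like-count thresholds quoted above, the sketch below maps a number of likes to the comparison the analysts reported. It is a minimal sketch for illustration only; the function name, labels, and lookup structure are assumptions, not code from Cambridge Analytica or from the analysts who reproduced the work.

# Minimal illustration: map a like count to the reported comparison.
# The thresholds (10, 70, 150, 300) are the figures quoted above; everything
# else here is hypothetical.
def profiling_comparison(num_likes: int) -> str:
    """Return whom, per the cited figures, the model knows the person as well as."""
    thresholds = [
        (300, "a lover"),
        (150, "a parent"),
        (70, "a friend"),
        (10, "a coworker"),
    ]
    for cutoff, comparison in thresholds:
        if num_likes >= cutoff:
            return comparison
    return "no meaningful comparison"

print(profiling_comparison(120))  # prints "a friend"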
See also
Transhumanism
Futurism
Surveillance capitalism
Facebook–Cambridge Analytica data scandal
References
External links
Techopedia definition of Dataism
Wired: 'Homo sapiens is an obsolete algorithm', by Yuval Noah Harari
Steve Lohr on Data-ism
The Dataist Organization
Digital Revolution
Philosophy of life
Philosophical schools and traditions
Big data
Posthumanism
Data
Philosophy of computer science
Philosophy of artificial intelligence | Dataism | Mathematics,Technology | 885 |