id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
5,452,205 | https://en.wikipedia.org/wiki/Trading%20zones | The metaphor of a trading zone has been applied to collaborations in science and technology. The metaphor is based on anthropological studies of how different cultures are able to exchange goods despite differences in language and culture.
Overview
Peter Galison introduced the "trading zone" metaphor to explain how physicists from different paradigms went about collaborating with each other, and with engineers, to develop particle detectors and radar.
According to Galison, "Two groups can agree on rules of exchange even if they ascribe utterly different significance to the objects being exchanged; they may even disagree on the meaning of the exchange process itself. Nonetheless, the trading partners can hammer out a local coordination, despite vast global differences. In an even more sophisticated way, cultures in interaction frequently establish contact languages, systems of discourse that can vary from the most function-specific jargons, through semispecific pidgins, to full-fledged creoles rich enough to support activities as complex as poetry and metalinguistic reflection" (Galison 1997, p. 783)
In the case of radar, for example, the physicists and engineers had to gradually develop what was effectively a pidgin or creole language involving shared concepts like ‘equivalent circuits’ that the physicists represented symbolically in terms of field theory and the engineers saw as extensions of their radio toolkit.
Exchange via an "agent"
Exchanges across disciplinary boundaries can also be carried out with the help of an agent: namely, a person who is familiar enough with the language of two or more cultures to facilitate trade.
At one point in the development of MRI, surgeons saw a lesion where an engineer familiar with the device would have recognized an artifact produced by the way the device was being used. It took someone with expertise in both physics and surgery to see how each of the different disciplines viewed the device, and develop procedures for correcting the problem (Baird & Cohen, 1999). The ability to converse expertly in more than one discipline is called interactional expertise (Collins & Evans, 2002).
Areas of application
The U.S. National Nanotechnology Initiative calls for a “broadly inclusive interdisciplinary dialogue on nanotechnology” that would incorporate a wide range of stakeholders (http://www.nano.gov/html/society/Responsible_Development.htm). Such a dialogue will require developing creoles that allow different stakeholders to communicate, as well as interactional expertise (Gorman, Groves, & Catalano, 2004).
The convergence between nano, bio, information and cognitive technologies will place an even greater premium on the development of trading zones and interactional expertise (Gorman, 2004).
Computer science education requires development of trading zones between experts in the social and learning sciences and computer scientists (Fincher & Petre, 2004). Each of these communities uses different methods and speaks a different language, hence the need for a creole and also for interactional experts.
Managing environmental systems like the Everglades also requires the development of trading zones (http://www-personal.umich.edu/~bwfuller/Trading_Zone_Paper--Boyd_Fuller--Distribution--Jan_1-05.pdf). Brad Allenby suggests the development of a new kind of expertise in Earth Systems Engineering and Management, which will include an interactional component (Allenby, 2005).
A workshop at Arizona State University on Trading Zones, Interactional Expertise and Interdisciplinary Collaboration raised the possibility of applying these concepts to other applications like global health and service science, and also identified avenues for future research (https://archive.today/20121215123346/http://bart.tcc.virginia.edu/Tradzoneworkshop/index.htm).
See also
Creole language
Interactional expertise
Pidgin
Boundary object
Boundary-work
Science and technology studies
References
Allenby, B. (2005). Technology at the global scale: Integrative cognitivism and Earth Systems Engineering Management. In M. E. Gorman, R. D. Tweney, D. C. Gooding & A. Kincannon (Eds.), Scientific and technological thinking (pp. 303–344). Mahwah, NJ: Lawrence Erlbaum Associates.
Baird, D., & Cohen, M. (1999). Why trade? Perspectives on Science, 7(2), 231–254.
Collins, H. M., & Evans, R. (2002). The third wave of science studies. Social Studies of Science, 32(2), 235–296.
Fincher, S., & Petre, M. (2004). Computer science education research. London; New York: Taylor & Francis.
Galison, P. (1997). Image & logic: A material culture of microphysics. Chicago: The University of Chicago Press.
Gorman, M. E. (2004). Collaborating on Convergent Technologies: Education and Practice. In M. C. Roco & C. D. Montemagno (Eds.), The coevolution of human potential and converging technologies (Vol. 1013, pp. 25–37). New York: The New York Academy of Sciences.
Gorman, M. E., Groves, J. F., & Catalano, R. K. (2004). Societal dimensions of nanotechnology. IEEE Technology and Society Magazine, 29(4), 55–64.
External links
Trading Zones Workshop
Science and technology studies | Trading zones | [
"Technology"
] | 1,129 | [
"Science and technology studies"
] |
5,452,285 | https://en.wikipedia.org/wiki/Space%20Brothers | Space Brothers may refer to:
The Space Brothers, a British trance music act
Space Brothers (manga), a Japanese manga series by Chūya Koyama about two brothers wanting to go into space, which has been adapted into a live action film and anime series
Nordic aliens, sometimes referred to as 'space brothers' | Space Brothers | [
"Technology"
] | 63 | [
"UFO conspiracy theories",
"Science and technology-related conspiracy theories"
] |
5,452,591 | https://en.wikipedia.org/wiki/Warner%20%26%20Swasey%20Company | The Warner & Swasey Company was an American manufacturer of machine tools, instruments, and special machinery. It operated as an independent business firm, based in Cleveland, from its founding in 1880 until its acquisition in 1980. It was founded as a partnership in 1880 by Worcester Reed Warner (1846–1929) and Ambrose Swasey (1846–1937). The company was best known for two general types of products: astronomical telescopes and turret lathes. It also did a large amount of instrument work, such as equipment for astronomical observatories and military instruments (rangefinders, optical gunsights, etc.). The themes that united these various lines of business were the crafts of toolmaking and instrument-making, which have often overlapped technologically. In the decades after World War II, it also entered the heavy equipment industry with its acquisition of the Gradall brand.
History
In 1866, Swasey and Warner met as fellow apprentices at the Exeter Machine Works in Exeter, New Hampshire. Within a few years they went together to Pratt & Whitney in Hartford, Connecticut, which was one of the leading machine tool builders of the era. There they both rose through the ranks, with Warner rising to be in charge of an assembly floor and Swasey rising to be foreman of the gear-cutting department. There Swasey invented the epicycloidal milling machine for cutting true theoretical curves for the milling cutters used for cutting gears.
In 1880, Swasey and Warner resigned from Pratt & Whitney in order to start a machine-tool-building business together. They investigated Chicago as a place to build their works, but they perceived the Chicago of 1880 as too far west and lacking a sufficient labor pool of skilled machinists. So they went to Cleveland, Ohio, where their company would stay for the next century. They worked together for 20 years without a formal corporate agreement, during which time their partnership's principal products were various models of lathes and milling machines. From the beginning, the partners built both machine tools and telescopes, which reflected their interests in toolmaking, instrument-making, and astronomy.
After nearly 20 years of successful growth, the partners realized that their business was growing enough that it should be given a formal corporate structure, so in 1900 they reorganized it under the official name of The Warner & Swasey Company.
During the early- to mid-20th century, the company was well known in American industry. Its products, both turret lathes and instruments, played very prominent roles in the war efforts for both world wars. Warner & Swasey took part in the transition to numerical control and computer numerical control machine tools during the 1950s through 1970s, but like many other machine tool builders during those decades, it ultimately was affected by the prevailing winds of merger and acquisition in the industry. Bendix Corporation acquired Warner & Swasey in 1980 for nearly $300 million, beating out a competing bid by AMCA International.
In 2019, the Warner & Swasey Company Building was listed on the National Register of Historic Places.
Products
Telescopes
The first Warner & Swasey telescope, built in 1881, was sold to Beloit College for its new Smith Observatory and had a 9.5-inch lens made by Alvan Clark & Sons. Among the notable instruments the company built were the telescopes for Lick Observatory (1888, 36-inch, refracting); the United States Naval Observatory (1893); Yerkes Observatory (according to the 50th-anniversary book, this was a 40-inch refracting telescope completed in time for display at the World's Columbian Exposition of 1893, although its installation at Yerkes was apparently in 1897); and Canada's Dominion Astrophysical Observatory (1916, 72-inch, reflecting). In 1919, the company's founders donated their private observatory in East Cleveland, Ohio to Case Western Reserve University. Today's Warner and Swasey Observatory grew from that facility.
The company's 50th-anniversary book describes the firm's giant-telescope-building work as unprofitable overall but a labor of technological love.
List of observatories with Warner & Swasey telescopes
Bosque Alegre Observatory, National University of Cordoba, ARG
Burrell Memorial Observatory, Baldwin Wallace University, USA
Crane Observatory, Washburn University, USA
Chabot Space & Science Center, Oakland, California, USA
Drake Municipal Observatory, Des Moines, Iowa, USA
Dominion Astrophysical Observatory, NRC, Canada
Dudley Observatory, miSci, Schenectady, NY, USA
Durfee High School, Fall River, Massachusetts, USA
Fuertes Observatory (Irving Porter Church Memorial Telescope), Cornell University, USA
Hildene Astronomy Club (Robert Todd Lincoln Telescope), Manchester, Vermont, USA
James Observatory, Millsaps College, Jackson, Mississippi, USA
Kirkwood Observatory, Indiana University, USA
Lee Observatory, American University of Beirut, Lebanon
Lick Observatory, University of California, USA
McDonald Observatory (Otto Struve Telescope), University of Texas at Austin, USA
Moraine Farm Observatory (Col. Deeds 7" Refractor), Col. Deeds Homestead, currently owned by Kettering Health Network, Dayton, OH, USA
Painter Hall Observatory, University of Texas at Austin, USA
Perkins Telescope, Lowell Observatory, USA
McKim Observatory, DePauw University, USA
Mueller Observatory, Cleveland Museum of Natural History, USA
Ritter Observatory, University of Toledo, USA
Spacewatch 0.9-meter Telescope, Kitt Peak, University of Arizona, USA
Stephens Memorial Observatory (Cooley Telescope - 9-inch Refractor), Hiram College, USA
Swasey Observatory, Denison University, USA
Tate Laboratory, School of Physics and Astronomy, University of Minnesota, USA
United States Naval Observatory (USNO), United States Navy, USA
University of Illinois Observatory, Urbana, Illinois, USA
Theodor Jacobsen Observatory, University of Washington, USA
Warner and Swasey Observatory, Case Western Reserve University, USA
Yerkes Observatory, Williams Bay, Wisconsin, USA
Turret lathes
Warner & Swasey was one of the premier brands in heavy turret lathes between the 1910s and 1960s. Its chief competitors in this market segment included Jones & Lamson (Springfield, VT, USA), Gisholt (Madison, WI, USA), and Alfred Herbert Ltd (Coventry, UK).
Military instruments
Military instrument contracts were an important line of work for the company. The U.S. government referred many problems concerning such instruments to the company during the Spanish–American War (1898). Instruments produced included "range finders of several types, gun-sight telescopes, battery commanders' telescopes, telescopic musket sights, and prism binoculars". Presumably, the range finders included the company's depression position finder. During World War I, three important kinds of instrument were produced: "musket sights, naval gun sights, and panoramic sights".
Construction equipment
The Warner & Swasey Construction Equipment Division, with five product lines, was founded in 1946 with the development of the first production hydraulic excavator, the GRADALL®. This machine was new technology for the industry and proved highly versatile and productive across a variety of work. The DUPLEX TRUCK® Company of Lansing, Michigan, a heavy-duty and specialized truck manufacturer, was acquired in 1955 to supply truck chassis for the GRADALL and for future Warner & Swasey backhoe excavator and crane products.
In 1957 the company sought broader penetration of the hydraulic excavator market. It acquired the Badger Machine Company of Winona, Minnesota, whose six HOPTO® hydraulic excavator models complemented the Gradall models. Badger had been formed in 1946 and developed a tractor-mounted hydraulic backhoe dubbed the "HOPTO" (Hydraulically Operated Power Take-Off), as it was driven by the tractor's power take-off.
In 1967 the company acquired the Sargent Engineering Corporation of Fort Dodge, Iowa, a manufacturer of hydraulic cranes. Its six Sargent Hydra-Tower Crane models enabled the company to move into another large segment of the construction industry using hydraulic machinery. That same year the company partnered with a Canadian paper industry association to manufacture the Arbomatik, a line of hydraulic tree-harvesting equipment. Through this diversification into hydraulic construction equipment, the growing popularity and productivity of such machinery yielded strong business growth for the Warner & Swasey Company of Cleveland, Ohio from 1946 through 1977. Badger Machine was sold to the Alvis International Group in 1978.
Gradall
In 1946, the Warner & Swasey Company acquired the patent rights to manufacture the Gradall telescopic boom excavator from the brothers Ray and Koop Ferwerda and their manufacturing company, the FWF Corporation, of Beachwood, Ohio. The Gradall, a type of hydraulic machinery, became a business of the new owner as the Gradall Division, with operations in Cleveland. In 1946, the Gradall was the first production hydraulic excavator designed and manufactured in the United States. In July 1950, Gradall manufacturing operations were moved to New Philadelphia, Ohio, where they continue (as of 2017) as Gradall Industries, Inc., a global manufacturer of telescopic boom excavators and industrial maintenance machinery. Following the purchase of Warner & Swasey by Bendix in 1980 and the purchase of Bendix by Allied Corp in 1983, ownership of Gradall shifted multiple times in the 1980s. Following the acquisition by Allied, Gradall was sold to a group of local executives who formed a partnership called GBKS. In 1985, ICM Industries, a Chicago consulting firm, purchased Gradall. In 1995 Morgan, Lewis, Githens & Ahn, a New York City investment firm, acquired the company and directed an IPO, but retained a controlling interest. From 1999 to 2006, Gradall was owned by JLG. In 2006 Gradall was acquired by the Alamo Group of Seguin, Texas and was formally renamed Gradall Industries.
See also
James Hartness, president of competitor Jones & Lamson Machine Company, a contemporary of Worcester Reed Warner and Ambrose Swasey who shared their avocations of developing better telescopes and better turret lathes.
References
Further reading
External links
26-inch USNO Refracting Telescope
The Beautiful Early Telescopes of Warner & Swasey, Including the J.A. Brashear and C.S. Hastings Optical Collaboration, abstract of lecture by John W. Briggs, Yerkes Observatory, at the 112th annual meeting of the Astronomical Society of the Pacific, Pasadena, CA, July 15, 2000
International Catalog of Sources: Warner & Swasey Company records to 1919
1900-1985
The story of Warner & Swasey telescopes by Ernest N. Jennison
Smith Observatory History
Warner & Swasey Company at Abandoned
Companies established in 1880
Instrument-making corporations
Machine tool builders
Defunct technology companies of the United States
Telescope manufacturers
Manufacturing companies based in Cleveland
1880 establishments in Ohio
Bendix Corporation | Warner & Swasey Company | [
"Astronomy"
] | 2,295 | [
"Telescope manufacturers",
"People associated with astronomy"
] |
5,452,697 | https://en.wikipedia.org/wiki/Dember%20effect | In physics, the Dember effect is when the electron current from a cathode subjected to both illumination and simultaneous electron bombardment is greater than the sum of the photoelectric current (here denoted I_ph) and the secondary emission current (I_se).
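In symbols (notation introduced here for clarity, not taken from the original source), the effect is the statement that the supplementary current is positive:

\Delta I = I_{total} - (I_{ph} + I_{se}) > 0

where I_total is the current measured under simultaneous illumination and electron bombardment.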
History
Discovered by Harry Dember (1882–1943) in 1925, this effect is due to the sum of the excitations of an electron by two means: photonic illumination and electron bombardment (i.e. the sum of the two excitations extracts the electron). In Dember’s initial study, he referred only to metals; however, more complex materials have been analyzed since then.
Photoelectric effect
The photoelectric effect due to the illumination of the metallic surface extracts electrons (if the photon energy is greater than the work function) and excites those electrons that the photons lack the energy to extract.
In a similar process, the electron bombardment of the metal both extracts and excites electrons inside the metal.
If one considers a constant I_ph and increases I_se, it can be observed that the supplementary current ΔI has a maximum of about 150 times I_ph.
On the other hand, considering a constant I_se and increasing the intensity of the illumination, the supplementary current ΔI tends to saturate. This is due to the exhaustion, by the photoelectric effect, of all the electrons excited (sufficiently) by the primary electrons of I_se.
See also
Anomalous photovoltaic effect
Photo-Dember
References
Further reading
External links
Harry Dember (German Wikipedia)
Electrical phenomena | Dember effect | [
"Physics"
] | 301 | [
"Physical phenomena",
"Electrical phenomena"
] |
5,452,760 | https://en.wikipedia.org/wiki/Protein%20precursor | A protein precursor, also called a pro-protein or pro-peptide, is an inactive protein (or peptide) that can be turned into an active form by post-translational modification, such as breaking off a piece of the molecule or adding on another molecule. The name of the precursor for a protein is often prefixed by pro-. Examples include proinsulin and proopiomelanocortin, which are both prohormones.
Protein precursors are often used by an organism when the subsequent protein is potentially harmful, but needs to be available on short notice and/or in large quantities. Enzyme precursors are called zymogens or proenzymes. Examples are enzymes of the digestive tract in humans.
Some protein precursors are secreted from the cell. Many of these are synthesized with an N-terminal signal peptide that targets them for secretion. Like other proteins that contain a signal peptide, their name is prefixed by pre-. They are thus called pre-pro-proteins or pre-pro-peptides. The signal peptide is cleaved off in the endoplasmic reticulum. An example is preproinsulin.
Pro-sequences are areas in the protein that are essential for its correct folding, usually in the transition of a protein from an inactive to an active state. Pro-sequences may also be involved in pro-protein transport and secretion.
Pro-domain (or prodomain) is the domain of a proprotein.
References
External links | Protein precursor | [
"Chemistry"
] | 309 | [
"Biochemistry stubs",
"Protein stubs"
] |
5,452,870 | https://en.wikipedia.org/wiki/Microbial%20fuel%20cell | A microbial fuel cell (MFC) is a type of bioelectrochemical fuel cell system, also known as a micro fuel cell, that generates electric current by diverting electrons produced from the microbial oxidation of reduced compounds (the fuel, or electron donor) at the anode to oxidized compounds such as oxygen (the oxidizing agent, or electron acceptor) at the cathode through an external electrical circuit. MFCs produce electricity by using the electrons derived from biochemical reactions catalyzed by bacteria.
MFCs can be grouped into two general categories: mediated and unmediated. The first MFCs, demonstrated in the early 20th century, used a mediator: a chemical that transfers electrons from the bacteria in the cell to the anode. Unmediated MFCs emerged in the 1970s; in this type of MFC the bacteria typically have electrochemically active redox proteins such as cytochromes on their outer membrane that can transfer electrons directly to the anode. In the 21st century MFCs have started to find commercial use in wastewater treatment.
History
The idea of using microbes to produce electricity was conceived in the early twentieth century. Michael Cressé Potter initiated the subject in 1911. Potter managed to generate electricity from Saccharomyces cerevisiae, but the work received little coverage. In 1931, Barnett Cohen created microbial half fuel cells that, when connected in series, were capable of producing over 35 volts with only a current of 2 milliamps.
A study by DelDuca et al. used hydrogen produced by the fermentation of glucose by Clostridium butyricum as the reactant at the anode of a hydrogen and air fuel cell. Though the cell functioned, it was unreliable owing to the unstable nature of hydrogen production by the micro-organisms. This issue was resolved by Suzuki et al. in 1976, who produced a successful MFC design a year later.
In the late 1970s, little was understood about how microbial fuel cells functioned. The concept was studied by Robin M. Allen and later by H. Peter Bennetto. People saw the fuel cell as a possible method for the generation of electricity for developing countries. Bennetto's work, starting in the early 1980s, helped build an understanding of how fuel cells operate and he was seen by many as the topic's foremost authority.
In May 2007, the University of Queensland, Australia completed a prototype MFC as a cooperative effort with Foster's Brewing. The prototype, a 10 L design, converted brewery wastewater into carbon dioxide, clean water and electricity. The group had plans to create a pilot-scale model for an upcoming international bio-energy conference.
Definition
A microbial fuel cell (MFC) is a device that converts chemical energy to electrical energy by the action of microorganisms. These electrochemical cells are constructed using either a bioanode and/or a biocathode. Most MFCs contain a membrane to separate the compartments of the anode (where oxidation takes place) and the cathode (where reduction takes place). The electrons produced during oxidation are transferred directly to an electrode or to a redox mediator species, and the electron flux moves to the cathode. The charge balance of the system is maintained by ionic movement inside the cell, usually across an ionic membrane. Most MFCs use an organic electron donor that is oxidized to produce CO2, protons, and electrons. Other electron donors have been reported, such as sulfur compounds or hydrogen. The cathode reaction uses a variety of electron acceptors, most often oxygen (O2). Other electron acceptors studied include metal recovery by reduction, water reduction to hydrogen, nitrate reduction, and sulfate reduction.
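Where oxygen serves as the electron acceptor, the cathode reaction is the standard oxygen reduction half-reaction, shown here for reference:

O2 + 4H+ + 4e− → 2H2O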
Applications
Power generation
MFCs are attractive for power generation applications that require only low power, but where replacing batteries may be impractical, such as wireless sensor networks.
Wireless sensors powered by microbial fuel cells can then, for example, be used for remote monitoring (conservation).
Virtually any organic material could be used to feed the fuel cell, including coupling cells to wastewater treatment plants. Chemical process wastewater and synthetic wastewater have been used to produce bioelectricity in dual- and single-chamber mediator-less MFCs (with uncoated graphite electrodes).
Higher power production was observed with a biofilm-covered graphite anode. Fuel cell emissions are well under regulatory limits. MFCs convert energy more efficiently than standard internal combustion engines, which are limited by the Carnot efficiency. In theory, an MFC is capable of energy efficiency far beyond 50%. Rozendal produced hydrogen with one-eighth the energy input of conventional hydrogen production technologies.
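For context, the Carnot bound that limits heat engines, and that electrochemical converters such as MFCs are not subject to, is the standard formula

\eta_{Carnot} = 1 - \frac{T_c}{T_h}

where T_c and T_h are the absolute temperatures of the cold and hot reservoirs; for example, T_c = 300 K and T_h = 600 K give a ceiling of 50%, before any practical losses.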
Moreover, MFCs can also work at a smaller scale. Electrodes in some cases need only be 7 μm thick by 2 cm long, such that an MFC can replace a battery. It provides a renewable form of energy and does not need to be recharged.
MFCs operate well in mild conditions, 20 °C to 40 °C and at pH of around 7 but lack the stability required for long-term medical applications such as in pacemakers.
Power stations can be based on aquatic plants such as algae. If sited adjacent to an existing power system, the MFC system can share its electricity lines.
Education
Soil-based microbial fuel cells serve as educational tools, as they encompass multiple scientific disciplines (microbiology, geochemistry, electrical engineering, etc.) and can be made using commonly available materials, such as soils and items from the refrigerator. Kits for home science projects and classrooms are available. One example of microbial fuel cells being used in the classroom is in the IBET (Integrated Biology, English, and Technology) curriculum for Thomas Jefferson High School for Science and Technology. Several educational videos and articles are also available from the International Society for Microbial Electrochemistry and Technology (ISMET Society).
Biosensor
The current generated from a microbial fuel cell is directly proportional to the organic-matter content of wastewater used as the fuel. MFCs can measure the solute concentration of wastewater (i.e., as a biosensor).
Wastewater is commonly assessed for its biochemical oxygen demand (BOD) values. BOD values are determined by incubating samples for 5 days with proper source of microbes, usually activated sludge collected from wastewater plants.
An MFC-type BOD sensor can provide real-time BOD values. Oxygen and nitrate are preferred electron acceptors over the anode and interfere with current generation from an MFC. Therefore, MFC BOD sensors underestimate BOD values in the presence of these electron acceptors. This can be avoided by inhibiting aerobic and nitrate respiration in the MFC using terminal oxidase inhibitors such as cyanide and azide. Such BOD sensors are commercially available.
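The proportionality between current and organic-matter content described above is what a BOD calibration exploits. A minimal sketch of the calibration inversion follows, assuming a linear response fitted beforehand on standards of known BOD; the function name and numbers are illustrative, not taken from any specific commercial sensor.

def estimate_bod(current_ma, slope, intercept):
    """Invert a linear calibration current = slope * BOD + intercept
    to estimate BOD (mg/L) from a measured MFC current (mA)."""
    return (current_ma - intercept) / slope

# Example: assumed calibration of 0.02 mA per mg/L with a 0.05 mA baseline.
print(estimate_bod(1.25, slope=0.02, intercept=0.05))  # -> 60.0 mg/L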
The United States Navy is considering microbial fuel cells for environmental sensors. The use of microbial fuel cells to power environmental sensors could provide power for longer periods and enable the collection and retrieval of undersea data without a wired infrastructure. The energy created by these fuel cells is enough to sustain the sensors after an initial startup time. Due to undersea conditions (high salt concentrations, fluctuating temperatures and limited nutrient supply), the Navy may deploy MFCs with a mixture of salt-tolerant microorganisms that would allow for a more complete utilization of available nutrients. Shewanella oneidensis is their primary candidate, but other heat- and cold-tolerant Shewanella spp may also be included.
A first self-powered and autonomous BOD/COD biosensor has been developed that enables detection of organic contaminants in freshwater. The sensor relies only on power produced by MFCs and operates continuously without maintenance. It turns on an alarm to report the contamination level: an increased signal frequency warns of a higher contamination level, while a low frequency indicates a low contamination level.
Biorecovery
In 2010, A. ter Heijne et al. constructed a device capable of producing electricity and reducing Cu2+ ions to copper metal.
Microbial electrolysis cells have been demonstrated to produce hydrogen.
Wastewater treatment
MFCs are used in water treatment to harvest energy via anaerobic digestion. The process can also reduce pathogens. However, it requires temperatures upwards of 30 °C and an extra step to convert biogas to electricity. Spiral spacers may be used to increase electricity generation by creating a helical flow in the MFC. Scaling up MFCs is a challenge because power output is difficult to maintain over larger surface areas.
Types
Mediated
Most microbial cells are electrochemically inactive. Electron transfer from microbial cells to the electrode is facilitated by mediators such as thionine, pyocyanin, methyl viologen, methyl blue, humic acid, and neutral red. Most available mediators are expensive and toxic.
Mediator-free
Mediator-free microbial fuel cells use electrochemically active bacteria such as Shewanella putrefaciens and Aeromonas hydrophila to transfer electrons directly from the bacterial respiratory enzyme to the electrode. Some bacteria are able to transfer their electron production via the pili on their external membrane. Mediator-free MFCs are less well characterized, such as the strain of bacteria used in the system, type of ion-exchange membrane and system conditions (temperature, pH, etc.)
Mediator-free microbial fuel cells can run on wastewater and derive energy directly from certain plants and O2. This configuration is known as a plant microbial fuel cell. Possible plants include reed sweetgrass, cordgrass, rice, tomatoes, lupines and algae. Given that the power is obtained using living plants (in situ-energy production), this variant can provide ecological advantages.
Microbial electrolysis
One variation of the mediator-less MFC is the microbial electrolysis cell (MEC). While MFCs produce electric current by the bacterial decomposition of organic compounds in water, MECs partially reverse the process to generate hydrogen or methane by applying a voltage to bacteria. This supplements the voltage generated by the microbial decomposition of organics, leading to the electrolysis of water or methane production. A complete reversal of the MFC principle is found in microbial electrosynthesis, in which carbon dioxide is reduced by bacteria using an external electric current to form multi-carbon organic compounds.
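At the cathode of a hydrogen-producing MEC, the applied voltage drives the standard hydrogen evolution half-reaction, shown here for reference:

2H+ + 2e− → H2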
Soil-based
Soil-based microbial fuel cells adhere to the basic MFC principles, whereby soil acts as the nutrient-rich anodic media, the inoculum and the proton exchange membrane (PEM). The anode is placed at a particular depth within the soil, while the cathode rests on top the soil and is exposed to air.
Soils naturally teem with diverse microbes, including electrogenic bacteria needed for MFCs, and are full of complex sugars and other nutrients that have accumulated from plant and animal material decay. Moreover, the aerobic (oxygen consuming) microbes present in the soil act as an oxygen filter, much like the expensive PEM materials used in laboratory MFC systems, which cause the redox potential of the soil to decrease with greater depth. Soil-based MFCs are becoming popular educational tools for science classrooms.
Sediment microbial fuel cells (SMFCs) have been applied for wastewater treatment. Simple SMFCs can generate energy while decontaminating wastewater. Most such SMFCs contain plants to mimic constructed wetlands. By 2015 SMFC tests had reached more than 150 L.
In 2015 researchers announced an SMFC application that extracts energy and charges a battery. Salts dissociate into positively and negatively charged ions in water and move toward and adhere to the respective negative and positive electrodes, charging the battery and making it possible to remove the salt, effecting microbial capacitive desalination. The microbes produce more energy than is required for the desalination process. In 2020, a European research project achieved the treatment of seawater into fresh water for human consumption with an energy consumption of around 0.5 kWh/m3, an 85% reduction relative to the energy consumption of state-of-the-art desalination technologies. Furthermore, the biological process from which the energy is obtained simultaneously purifies residual water for discharge into the environment or reuse in agricultural/industrial applications. This was achieved in the desalination innovation center that Aqualia opened in Denia, Spain early in 2020.
Phototrophic biofilm
Phototrophic biofilm MFCs (PBMFCs) use a phototrophic biofilm anode containing photosynthetic microorganisms such as chlorophyta and cyanophyta. They carry out photosynthesis and thus produce organic metabolites and donate electrons.
One study found that PBMFCs display a power density sufficient for practical applications.
The sub-category of phototrophic MFCs that use purely oxygenic photosynthetic material at the anode are sometimes called biological photovoltaic systems.
Nanoporous membrane
The United States Naval Research Laboratory developed nanoporous membrane microbial fuel cells that use a non-PEM to generate passive diffusion within the cell. The membrane is a nanoporous polymer filter (nylon, cellulose, or polycarbonate). It offers power densities comparable to Nafion (a well-known PEM) with greater durability. Porous membranes allow passive diffusion, thereby reducing the power that must be supplied to the MFC to keep the PEM active and increasing the total energy output.
MFCs that do not use a membrane can deploy anaerobic bacteria in aerobic environments. However, membrane-less MFCs experience cathode contamination by the indigenous bacteria and the power-supplying microbe. The novel passive diffusion of nanoporous membranes can achieve the benefits of a membrane-less MFC without worry of cathode contamination. Nanoporous membranes are also 11 times cheaper than Nafion (Nafion-117, $0.22/cm2 vs. polycarbonate, <$0.02/cm2).
Ceramic membrane
PEM membranes can be replaced with ceramic materials. Ceramic membrane costs can be as low as $5.66/m2. The macroporous structure of ceramic membranes allows for good transport of ionic species.
The materials that have been successfully employed in ceramic MFCs are earthenware, alumina, mullite, pyrophyllite, and terracotta.
Generation process
When microorganisms consume a substance such as sugar in aerobic conditions, they produce carbon dioxide and water. However, when oxygen is not present, they may produce carbon dioxide, hydrons (hydrogen ions), and electrons, as described below for sucrose:

C12H22O11 + 13H2O → 12CO2 + 48H+ + 48e−
Microbial fuel cells use inorganic mediators to tap into the electron transport chain of cells and channel electrons produced. The mediator crosses the outer cell lipid membranes and bacterial outer membrane; then, it begins to liberate electrons from the electron transport chain that normally would be taken up by oxygen or other intermediates.
The now-reduced mediator exits the cell laden with electrons that it transfers to an electrode; this electrode becomes the anode. The release of the electrons recycles the mediator to its original oxidized state, ready to repeat the process. This can happen only under anaerobic conditions; if oxygen is present, it will collect the electrons, as it has more free energy to release.
Certain bacteria can circumvent the use of inorganic mediators by making use of special electron transport pathways known collectively as extracellular electron transfer (EET). EET pathways allow the microbe to directly reduce compounds outside of the cell, and can be used to enable direct electrochemical communication with the anode.
In MFC operation, the anode is the terminal electron acceptor recognized by bacteria in the anodic chamber. Therefore, the microbial activity is strongly dependent on the anode's redox potential. A Michaelis–Menten curve was obtained between the anodic potential and the power output of an acetate-driven MFC. A critical anodic potential seems to provide maximum power output.
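For reference, the canonical Michaelis–Menten saturation form is

v = \frac{v_{max}[S]}{K_m + [S]}

where v is the rate, [S] the substrate concentration, v_max the saturating rate, and K_m the half-saturation constant. Reading the cited fit with power output in the role of v and anodic potential in the role of [S] is an illustrative mapping, not necessarily the original study's exact parameterization.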
Potential mediators include neutral red, methylene blue, thionine, and resorufin.
Organisms capable of producing an electric current are termed exoelectrogens. In order to turn this current into usable electricity, exoelectrogens have to be accommodated in a fuel cell.
The mediator and a micro-organism, such as yeast, are mixed together in a solution to which a substrate such as glucose is added. This mixture is placed in a sealed chamber to prevent oxygen from entering, thus forcing the micro-organism to undertake anaerobic respiration. An electrode is placed in the solution to act as the anode.
In the second chamber of the MFC is another solution and the positively charged cathode. It is the equivalent of the oxygen sink at the end of the electron transport chain, external to the biological cell. The solution is an oxidizing agent that picks up the electrons at the cathode. As with the electron chain in the yeast cell, this could be a variety of molecules such as oxygen, although a more convenient option is a solid oxidizing agent, which requires less volume.
Connecting the two electrodes is a wire (or other electrically conductive path). Completing the circuit and connecting the two chambers is a salt bridge or ion-exchange membrane. This last feature allows the protons produced, as described in the sucrose reaction above, to pass from the anode chamber to the cathode chamber.
The reduced mediator carries electrons from the cell to the electrode. Here the mediator is oxidized as it deposits the electrons. These then flow across the wire to the second electrode, which acts as an electron sink. From here they pass to an oxidizing material. Meanwhile, the hydrogen ions/protons are moved from the anode to the cathode via a proton exchange membrane such as Nafion. They move down the concentration gradient and combine with oxygen, which requires an electron; this generates current, and the consumption of the hydrogen sustains the concentration gradient.
Algal biomass has been observed to give high energy yields when used as the substrate in microbial fuel cells.
Applications in environmental remediation
Microbial fuel cells (MFCs) have emerged as promising tools for environmental remediation due to their unique ability to utilize the metabolic activities of microorganisms for both electricity generation and pollutant degradation. MFCs find applications across diverse contexts in environmental remediation. One primary application is in bioremediation, where the electroactive microorganisms on the MFC anode actively participate in the breakdown of organic pollutants, providing a sustainable and efficient method for pollutant removal. Moreover, MFCs play a significant role in wastewater treatment by simultaneously generating electricity and enhancing water quality through the microbial degradation of contaminants. These fuel cells can be deployed in situ, allowing for continuous and autonomous remediation in contaminated sites. Furthermore, their versatility extends to sediment microbial fuel cells (SMFCs), which are capable of removing heavy metals and nutrients from sediments. By integrating MFCs with sensors, they enable remote environmental monitoring in challenging locations. The applications of microbial fuel cells in environmental remediation highlight their potential to convert pollutants into a renewable energy source while actively contributing to the restoration and preservation of ecosystems.
Challenges and advances
Microbial fuel cells (MFCs) offer significant potential as sustainable and innovative technologies, but they are not without their challenges. One major obstacle lies in the optimization of MFC performance, which remains a complex task due to various factors including microbial diversity, electrode materials, and reactor design. The development of cost-effective and long-lasting electrode materials presents another hurdle, as it directly affects the economic viability of MFCs on a larger scale. Furthermore, the scaling up of MFCs for practical applications poses engineering and logistical challenges. Nonetheless, ongoing research in microbial fuel cell technology continues to address these obstacles. Scientists are actively exploring new electrode materials, enhancing microbial communities to improve efficiency, and optimizing reactor configurations. Moreover, advancements in synthetic biology and genetic engineering have opened up possibilities for designing custom microbes with enhanced electron transfer capabilities, pushing the boundaries of MFC performance. Collaborative efforts between multidisciplinary fields are also contributing to a deeper understanding of MFC mechanisms and expanding their potential applications in areas such as wastewater treatment, environmental remediation, and sustainable energy production.
See also
Biobattery
Cable bacteria
Dark fermentation
Electrohydrogenesis
Electromethanogenesis
Fermentative hydrogen production
Glossary of fuel cell terms
Hydrogen hypothesis
Hydrogen technologies
Photofermentation
Bacterial nanowires
References
Further reading
External links
DIY MFC Kit
BioFuel from Microalgae
Sustainable and efficient biohydrogen production via electrohydrogenesis – November 2007
Microbial Fuel Cell blog: a research-type blog on common techniques used in MFC research
Microbial Fuel Cells: a website originating from several of the research groups currently active in the MFC research domain
Microbial Fuel Cells from Rhodoferax ferrireducens: an overview from the Science Creative Quarterly
Building a Two-Chamber Microbial Fuel Cell
Discussion group on Microbial Fuel Cells
Innovation company developing MFC technology
Bioelectrochemistry
Fuel cells
Hydrogen biology
Renewable energy | Microbial fuel cell | [
"Chemistry"
] | 4,475 | [
"Electrochemistry",
"Bioelectrochemistry"
] |
5,453,142 | https://en.wikipedia.org/wiki/Biodiversity%20of%20Israel%20and%20Palestine | The biodiversity of Israel and Palestine is the fauna, flora and fungi of the geographical region of Israel and of the Palestinian National Authority (the West Bank and the Gaza Strip). This geographical area within the historical region of Palestine extends from the Jordan River and Wadi Araba in the east, to the Mediterranean Sea and the Sinai desert in the west, to Lebanon in the north, and to the Gulf of Aqaba (Eilat) in the south.
The area is part of the Palearctic realm, located in the Mediterranean Basin, whose climate supports the Mediterranean forests, woodlands, and scrub biome. This includes the Eastern Mediterranean conifer-sclerophyllous-broadleaf forests and the Southern Anatolian montane conifer and deciduous forests ecoregions.
There are five geographical zones and the climate varies from semi-arid to temperate to subtropical. The region is home to a variety of plants and animals; at least 47,000 living species have been identified, with another 4,000 assumed to exist. At least 116 mammal species are native to Palestine/Israel, as well as 511 bird species, 97 reptile species, and 7 amphibian species. There are also an estimated 2,780 plant species.
Geography
The region of Palestine with the Gaza Strip, Israel, and the West Bank are located at the eastern end of the Mediterranean Sea, traditionally called the Levant. Israel is bounded on the north by Lebanon and on the northeast by Syria. Jordan lies to the east and southeast of the West Bank and Israel; Israel and the Gaza Strip are bordered on the southwest by the Egyptian Sinai Peninsula and on the west by the Mediterranean Sea.
Climate
The region is divided into three major climate zones, and one microclimate zone:
The Mediterranean climate zone, characterized by long, hot, rainless summers and relatively short, cool, rainy winters. Rainfall ranges from about 400 mm per year (in the south, around Gaza) to 1,200 mm per year (at the northernmost end of Israel). The Mediterranean landscapes include several kinds of forest, garrigue, scrubland, marsh and savanna-like grassland. The fauna and flora are mostly of European origin.
The Steppe climate zone. It is a narrow strip (no wider than 60 km, and mostly much narrower) between the Mediterranean zone and the Desert zone. Rainfall varies from 400 mm per year to 200 mm per year. This climate zone includes mostly low grasslands and hardy forms of scrub. The fauna and flora are mostly of Asian and Saharan origin.
The Desert climate zone, the largest climate zone of Israel and Palestine, covers the country's southern half as the Negev, while the Judean Desert extends to the Dead Sea region through the West Bank and into the southern Jordan Valley. Rainfall is as low as 32 mm per year in the southernmost tip of Palestine/Israel in the Arabah valley. This dry climate zone supports scattered shrub vegetation or desert grassland in its wetter parts. In the more arid regions, the vegetation is confined to dry riverbeds and gullies, known as wadis, and in some places it is almost absent. In some of the greater valleys, desert scrub and acacia woodland are to be found. The fauna and flora are mostly of Saharan origin. Sudanese flora is present as well.
The Tropical (Sudanese) microclimate by the springs of the Judean Desert. This refers mostly to the Ein Gedi spring and the Arugot creek. Due to the high aquifer in the region and the steady, hot climate of the Judean Desert, a tropical savanna-related flora (not rainforest, as many think) of East African origin has become established in the area of the springs. The fauna is that of the Desert zone.
The climate is determined by the location between the subtropical aridity of the Sahara and the Arabian deserts, and the subtropical humidity of the Levant or eastern Mediterranean. The climate conditions are highly variable within the area and modified locally by altitude, latitude, and the proximity to the Mediterranean Sea.
Taxonomic classification
The following is a taxonomical classification up to Phylum level of all species found in the area.
Kingdom Protista
Kingdom Fungi
Phyla: Ascomycota, Basidiomycota, Glomeromycota, Myxomycota, Zygomycota, Chytridiomycota, Anamorphic fungi
Kingdom Plantae
Phyla: Anthocerophyta, Bryophyta, Charophyta, Chlorophyta, Cycadophyta, Equisetophyta, Ginkgophyta, Gnetophyta, Hepatophyta, Lycopodiophyta, Magnoliophyta, Ophioglossophyta, Pinophyta, Psilotophyta, Pteridophyta
Kingdom Animalia
Subkingdom: Parazoa
Phylum: Porifera
Subkingdom: Agnotozoa
Phyla: Orthonectida, Placozoa, Rhombozoa
Subkingdom: Eumetazoa
Superphylum: Radiata
Phyla: Cnidaria, Ctenophora
Bilateria: Protostomia
Phyla: Acanthocephala, Annelida, Arthropoda, Brachiopoda, Bryozoa, Chaetognatha, Cycliophora, Echiura, Entoprocta, Gastrotricha, Gnathostomulida, Kinorhyncha, Loricifera, Micrognathozoa, Mollusca, Myzostomida, Nematoda, Nematomorpha, Nemertea, Onychophora, Phoronida, Platyhelminthes, Priapula, Rotifera, Sipuncula, Tardigrada
Bilateria: Deuterostomia
Phyla: Echinodermata, Hemichordata, Chordata, Xenoturbellida
Online databases
The following are well-prepared online databases on biodiversity in the Palestine/Israel area.
Flora of Israel Online, Hebrew University, Jerusalem
BioGIS: Israel Biodiversity Information System
Botanical Garden University E Book
Biodiversity of Jordan and Israel
Articles
Bird Species in Israel
Societies
Palestine Wildlife Society
Israel Nature and National Parks Protection Authority
Bird watching
Birding Israel: the world of birds, birding tours and birdwatching in Israel
International Center for the Study of Bird Migration
The Israeli Center for Yardbirds
Jerusalem Bird Observatory
Photographers
Rittner Oz: Amazing collection of great images of small creatures in the area with fine scientific classification
Ilia Shalamaev: Excellent photographs of wildlife in the area
Vadim Onishchenko
The Edge: A searchable collection of nature photos. Search for Palestine, Israel in the database. Site created by Niall Benvie
Neighbouring countries
The Arabian Oryx Website
Selected images
See also
List of rivers of Israel
References
Environment of the State of Palestine
Environment of Israel
Environment of Palestine (region)
Natural history of Israel
Biodiversity
Natural history of Palestine (region)
"Biology"
] | 1,479 | [
"Biodiversity"
] |
5,453,152 | https://en.wikipedia.org/wiki/Law%20of%20attraction%20%28New%20Thought%29 | The law of attraction is the New Thought spiritual belief that positive or negative thoughts bring positive or negative experiences into a person's life. The belief is based on the idea that people and their thoughts are made from "pure energy" and that like energy can attract like energy, thereby allowing people to improve their health, wealth, or personal relationships. There is no empirical scientific evidence supporting the law of attraction, and it is widely considered to be pseudoscience or religion couched in scientific language. This belief has alternative names that have varied in popularity over time, including manifestation and lucky girl syndrome.
Advocates generally combine cognitive reframing techniques with affirmations and creative visualization to replace limiting or self-destructive ("negative") thoughts with more empowered, adaptive ("positive") thoughts. A key component of the philosophy is the idea that in order to effectively change one's negative thinking patterns, one must also "feel" (through creative visualization) that the desired changes have already occurred. This combination of positive thought and positive emotion is believed to allow one to attract positive experiences and opportunities by achieving resonance with the proposed energetic law.
While some supporters of the law of attraction refer to scientific theories and use them as arguments in favor of it, it has no demonstrable scientific basis. A number of scientists have criticized the misuse of scientific concepts by its proponents. Recent empirical research has shown that while individuals who hold manifestation and law of attraction beliefs often report higher perceived levels of success, these beliefs are also associated with higher risk-taking behaviors, particularly financial risks, and with a susceptibility to bankruptcy.
History
The New Thought movement grew out of the teachings of Phineas Quimby in the early 19th century. Early in his life, Quimby was diagnosed with tuberculosis. Early 19th century medicine had no reliable cure for tuberculosis. Quimby took to horse riding and noted that intense excitement temporarily relieved him from his affliction. This method for relieving his pain and seemingly subsequent recovery prompted Quimby to pursue a study of "Mind over Body". Although he never used the words "Law of Attraction", he explained this in a statement that captured the concept in the field of health:
Historian Mitch Horowitz noted that the term "Law of Attraction" first appeared in 1855 in The Great Harmonia, vol. IV, by American Spiritualist Andrew Jackson Davis, in a context alluding to the human soul and spheres of the afterlife.
The first articulator of the law of attraction as general principle was Prentice Mulford. Mulford, a pivotal figure in the development of New Thought thinking, discusses the law at length in his essay "The Law of Success", published 1886–1887. In this, Mulford was followed by other New Thought authors, such as Henry Wood (starting with his God's Image in Man, 1892), and Ralph Waldo Trine (starting with his first book, What All the World's A-Seeking, 1896). For these authors, the law of attraction is concerned not only about health but every aspect of life.
The 20th century saw a surge in interest in the subject, with many books being written about it, amongst which are some of the best-selling books of all time: Think and Grow Rich (1937) by Napoleon Hill, The Power of Positive Thinking (1952) by Norman Vincent Peale, and You Can Heal Your Life (1984) by Louise Hay. The Abraham-Hicks material is based primarily around the law of attraction.
In 2006, the concept of the law of attraction gained renewed exposure with the release of the film The Secret (2006), which was then developed into a book of the same title in the same year. The movie and book gained widespread media coverage. It was followed by a sequel, The Power (2010), which describes the law of attraction as the law of love.
A modernized version of the law of attraction is known as manifestation, which refers to various self-help strategies that can purportedly make an individual's wishes come true by mentally visualizing them. Manifestation techniques involve positive thinking or directing requests to "the universe" as well as actions on the part of the individual.
Lucky girl syndrome
An incarnation of the law of attraction appearing in the early 2020s is known as lucky girl syndrome. According to Women's Health this is "the idea that you can attract things you want (like luck, money, love, etc.) by repeating mantras and truly believing things will work out for you." In early 2023 AARP explained that "The newest self-help craze, lucky girl syndrome is Gen Z's spin on books like The Power of Positive Thinking, The Secret and Manifest Your Destiny: The Nine Spiritual Principles for Getting Everything You Want. This year's version, however, puts the emphasis on luck and consistently reminding yourself that the universe is conspiring to make good things happen for you because you are a lucky person." The BBC reported that "There isn't scientific evidence for it" and "some have labeled it 'smuggest TikTok trend yet'".
A January 2023 article in CNET explained that "thousands of people across TikTok have posted videos about how this manifestation strategy has changed their lives, bringing them new opportunities they never expected. Manifestation is the concept of thinking things into being -- by believing something enough, it will happen."
Also in January 2023, Today.com reported that "Different manifestation techniques are taking over TikTok, and "lucky girl syndrome" is the latest way people claim to achieve the life they desire." It also said that "Videos detailing the power of positive thinking have amassed millions of views on TikTok, and manifestation experts seem to approve." The article also quoted a manifestation coach as saying "the lucky girl mindset is, indeed, a true practice of manifestation", and that it has been around for years.
As reported by Vox, "If 2020 was the year that TikTokers discovered The Secret – that is, the idea that you can make anything you want happen if you believe in it enough – then the two years that followed are when they've tried to rebrand it into perpetual relevance. Its most recent makeover is something rather ominously called 'lucky girl syndrome...'" The article also reported that "What lucky girl syndrome – and The Secret, and the 'law of attraction', or the 'law of assumption', and prosperity gospel, and any of the other branches of this kind of New Age thinking – really amounts to, though, is 'manifesting', or the practice of repeatedly writing or saying declarative statements in the hopes that they will soon become true." The Vox article concludes "It never hurts to be curious, though. When you come across a shiny new term on TikTok, it's worth interrogating where it came from, and whether the person using it is someone worth listening to. Often, it's not that they're any better at living than you are; they're just better at marketing it."
Attempting to explain the attraction of lucky girl syndrome, Parents interviewed an LCSW therapist for teens and their families on the subject who opined that "It makes us feel like we're in control of our lives. Gen Z is constantly exposed to bad news, from layoffs to political conflicts to the student loan crisis. It makes sense that they'd be drawn to something that would make them feel a greater sense of agency and control."
The Conversation warned of the negative side of lucky girl syndrome, saying that what most videos on the topic suggest is "that what you put out to the universe is what you will get in return. So if you think you're poor or unsuccessful, this is what you'll get back. Obviously, this is quite an unhelpful message, which likely won't do much for the self-esteem of people who don't feel particularly lucky, let alone those facing significant hardship."
Also regarding negative consequences, Harper's Bazaar warned that lucky girl syndrome has much in common with toxic positivity and that "If you try it, and it doesn't work for you, it could become yet another stick to beat yourself with. If you already feel vulnerable or wobbly, this could well be something else that makes you feel bad about yourself... it ignores the fact that life is not fair. And it ignores that some people are more privileged than others. It doesn't take into account the systemic and structural biases and inequalities that exist in the world."
Descriptions
Proponents believe that the law of attraction is always in operation and that it brings to each person the conditions and experiences that they predominantly think about, or which they desire or expect.
Charles Haanel wrote in The Master Key System (1912):
Ralph Trine wrote in In Tune with the Infinite (1897):
In her 2006 documentary The Secret, Rhonda Byrne emphasized thinking about what each person wants to obtain, but also infusing the thought with the maximum possible amount of emotion. She claims the combination of thought and feeling is what attracts the desire. Another similar book is James Redfield's The Celestine Prophecy, which says reality can be manifested by man. The Power of Your Subconscious Mind by Joseph Murphy says readers can achieve seemingly impossible goals by learning how to bring the mind itself under control. The Power by Rhonda Byrne and The Alchemist by Paulo Coelho are similar. While there are personal testimonies claiming that methods based on The Secret and the law of attraction have worked, a number of skeptics have criticized Byrne's film and book. The New York Times Book Review called The Secret pseudoscience and an "illusion of knowledge".
Philosophical and religious basis
The New Thought concept of the law of attraction is rooted in ideas that come from various philosophical and religious traditions. In particular, it has been inspired by Hermeticism, New England transcendentalism, specific verses from the Bible, and Hinduism.
Hermeticism influenced the development of European thought in the Renaissance. Its ideas were transmitted partly through alchemy. In the 18th century, Franz Mesmer studied the works of alchemists such as Paracelsus and van Helmont. Van Helmont was a 17th-century Flemish physician who proclaimed the curative powers of the imagination. This led Mesmer to develop his ideas about animal magnetism, which Phineas Quimby, the founder of New Thought, studied.
The Transcendentalist movement developed in the United States immediately before the emergence of New Thought and is thought to have had a great influence on it. George Ripley, an important figure in that movement, stated that its leading idea was "the supremacy of mind over matter".
New Thought authors often quote certain verses from the Bible in the context of the law of attraction. An example is Mark 11:24: "Therefore I tell you, whatever you ask in prayer, believe that you have received it, and it will be yours."
In the late 19th century Swami Vivekananda traveled to the United States and gave lectures on Hinduism. These talks greatly influenced the New Thought movement and in particular, William Walker Atkinson who was one of New Thought's pioneers.
Criticism
The law of attraction has been popularized in the early 21st century by books and films such as The Secret. The 2006 film and the subsequent book use interviews with New Thought authors and speakers to explain the principles of the proposed metaphysical law that one can attract anything that one thinks about consistently. Writing for the Committee for Skeptical Inquiry, Mary Carmichael and Ben Radford wrote that "neither the film nor the book has any basis in scientific reality", and that its premise contains "an ugly flipside: if you have an accident or disease, it's your fault".
Others have questioned the references to modern scientific theory, and have maintained, for example, that the law of attraction misrepresents the electrical activity of brainwaves. Victor Stenger and Leon Lederman were critical of attempts to use quantum mysticism to bridge any unexplained or seemingly implausible effects, believing these to be traits of modern pseudoscience.
Skeptical Inquirer magazine criticized the lack of falsifiability and testability of these claims. Critics have asserted that the evidence provided is usually anecdotal and that, because of the self-selecting nature of the positive reports, as well as the subjective nature of any results, these reports are susceptible to confirmation bias and selection bias. Physicist Ali Alousi, for instance, criticized it as unmeasurable and questioned the likelihood that thoughts can affect anything outside the head.
The mantra of The Secret, and by extension the law of attraction, is as follows: positive thoughts and positive visualization will have a direct impact on the self. While positivity can improve one's quality of life and resilience through hardship, it can also be misguiding. Holding the belief that positive thinking will manifest positivity in one's life diminishes the value of hard work and perseverance, as in the 1970s pursuit of "self-esteem-based education".
Notable supporters
In 1897, Ralph Waldo Trine wrote In Tune with the Infinite. In the second paragraph of chapter 9 he writes, "The Law of Attraction works unceasingly throughout the universe, and the one great and never changing fact in connection with it is, as we have found, that like attracts like."
In 1904, Thomas Troward, a strong influence in the New Thought Movement, gave a lecture in which he claimed that thought precedes physical form and "the action of Mind plants that nucleus which, if allowed to grow undisturbed, will eventually attract to itself all the conditions necessary for its manifestation in outward visible form."
In 1906, William Walker Atkinson used the phrase in his New Thought Movement book Thought Vibration or the Law of Attraction in the Thought World, stating that "like attracts like".
In his 1910 book The Science of Getting Rich, Wallace D. Wattles espoused similar principles: that simply believing in the object of one's desire and focusing on it will lead to that object or goal being realized on the material plane (Wattles claims in the preface and later chapters that his premise stems from the monistic Hindu view that God provides everything and can deliver what is focused on). The book also claims negative thinking will manifest negative results.
In 1915, Theosophical author William Quan Judge used the phrase in The Ocean of Theosophy.
In 1919, another theosophical author, Annie Besant, discussed the "Law of Attraction". Besant compared her version of it to gravitation, and said that the law represented a form of karma.
Napoleon Hill published two books on the theme. The first, The Law of Success in 16 Lessons (1928), directly and repeatedly references the Law of Attraction and proposes that it operates by use of radio waves transmitted by the brain. The second, Think and Grow Rich (1937), went on to sell 100 million copies by 2015. Hill insisted on the importance of controlling one's own thoughts in order to achieve success, as well as the energy that thoughts have and their ability to attract other thoughts. He mentions a "secret" to success and promises to describe it indirectly at least once in every chapter. It is never named, and he says that discovering it on one's own is far more beneficial. Many people have argued over what it actually is; some claim it is the law of attraction. Hill states the "secret" is mentioned no fewer than a hundred times, yet the word "attract" appears fewer than 30 times in the text.
In 1944, Neville Goddard published Feeling Is the Secret, which promoted creative visualization and emotional feeling as a form of meditation to receive desires from the universe. His second book on the topic, Out of This World (1949), explored the reasoning behind the so-called "feeling" and how assumptions, if repeated enough, can "harden into fact". In his third book, The Power of Awareness (1952), Goddard explains the concept of "I am", reasoning that the human subconscious mind has a "god-given" ability to manifest and create reality if it is impressed by the feeling.
In 1960, W. Clement Stone and Napoleon Hill co-wrote Success Through a Positive Mental Attitude.
In his 1988 book The American Myth of Success, Richard Weiss states that the principle of "non-resistance" is a popular concept of the New Thought movement and is taught in conjunction with the law of attraction.
In 2008, Esther and Jerry Hicks' book Money and the Law of Attraction: Learning to Attract Health, Wealth & Happiness appeared on the New York Times Best Seller list.
See also
List of New Thought writers
Cosmic ordering
Hermeticism
Efficacy of prayer
Internal locus of control
Law of contagion
Magical thinking
Medical students' disease
Mind over matter
Positive mental attitude
Priming (psychology)
Prosperity theology
Pygmalion effect
Self-fulfilling prophecy
Sympathetic magic
References
Magical thinking
New Age
New Thought beliefs
Quantum mysticism | Law of attraction (New Thought) | [
"Physics"
] | 3,518 | [
"Quantum mechanics",
"Quantum mysticism"
] |
5,453,292 | https://en.wikipedia.org/wiki/Threaded%20pipe | A threaded pipe is a pipe with screw-threaded ends for assembly.
Tapered threads
The threaded pipes used in some plumbing installations for the delivery of gases or liquids under pressure have a tapered thread that is slightly conical (in contrast to the parallel-sided cylindrical threads commonly found on bolts and leadscrews). The seal provided by a threaded pipe joint depends upon multiple factors: the labyrinth seal created by the threads; a positive seal between the threads created by thread deformation when they are tightened to the proper torque; and sometimes the presence of a sealing coating, such as thread seal tape or a liquid or paste pipe sealant such as pipe dope. Tapered thread joints typically do not include a gasket.
Especially precise threads are known as "dry fit" or "dry seal" and require no sealant for a gas-tight seal. Such threads are needed where the sealant would contaminate or react with the media inside the piping, e.g., oxygen service.
Tapered threaded fittings are sometimes used on plastic piping. Due to the wedging effect of the tapered thread, extreme care must be used to avoid overtightening the joint; an overstressed female fitting may split days, weeks, or even years after initial installation. Therefore, many municipal plumbing codes restrict the use of threaded plastic pipe fittings.
Both British standard and National pipe thread standards specify a thread taper of 1:16; the change in diameter is one sixteenth the distance travelled along the thread. The nominal diameter is achieved some small distance (the "gauge length") from the end of the pipe.
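As a worked illustration of the 1:16 taper (the distances here are illustrative examples, not values taken from any particular standard), the diameter change over a given axial travel is simply:

```latex
\Delta D = \frac{\Delta x}{16}
\qquad\text{e.g.}\qquad
\Delta x = 8~\text{mm} \;\Rightarrow\; \Delta D = \tfrac{8}{16}~\text{mm} = 0.5~\text{mm}
```

So moving 8 mm along the thread axis changes the diameter by half a millimetre.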
Straight threads
Pipes may also be threaded with straight (cylindrical) threaded sections, in which case the threads do not themselves provide any sealing function other than some labyrinth seal effect, which may not be enough to satisfy either functional or code requirements. Instead, an O-ring seated between the shoulder of the male pipe section and an interior surface of the female fitting provides the seal.
See also
AN thread
British Standard Pipe thread (BSP)
Buttress thread
Fire hose thread
Garden hose thread
National pipe thread (NPT)
Nipple (plumbing)
O-ring boss seal
Panzergewinde (steel conduit thread)
Piping
Plumbing
Screw thread
Tap and die
Thread angle
United States Standard thread
External links
NPT Vs. NPTF Taper Pipe Threads
Newman Tools Inc. and J.W. WINCO, INC. show the Whitworth form BSP or ISO pipe thread.
Piping
Plumbing | Threaded pipe | [
"Chemistry",
"Engineering"
] | 504 | [
"Building engineering",
"Chemical engineering",
"Plumbing",
"Construction",
"Mechanical engineering",
"Piping"
] |
5,453,466 | https://en.wikipedia.org/wiki/Daylighting%20%28streams%29 | Daylighting is the opening up and restoration of a previously buried watercourse, one which had at some point been diverted below ground. Typically, the rationale behind returning the riparian environment of a stream, wash, or river to a more natural above-ground state is to reduce runoff, create habitat for species in need of it, or improve an area's aesthetics. In the United Kingdom, the practice is also known as deculverting.
In addition to its use in urban design and planning the term also refers to the public process of advancing such projects. According to the Planning and Development Department of the City of Berkeley, "A general consensus has developed that protecting and restoring natural creeks' functions is achievable over time in an urban environment while recognizing the importance of property rights."
Systems
Natural drainage systems
Natural drainage systems help manage stormwater by infiltrating and slowing the flow of stormwater, filtering and bioremediating pollutants by soils and plants, reducing impervious surfaces, using porous paving, increasing vegetation, and improving related pedestrian amenities. Natural features—open, vegetated swales, stormwater cascades, and small wetland ponds—mimic the functions of nature lost to urbanization. At the heart are plants, trees, and the deep, healthy soils that support them. All three combine to form a "living infrastructure" that, unlike pipes and vaults, increase in functional value over time.
Some efforts to blend urban development with natural systems use innovative drainage design and landscaping instead of traditional curbs and gutters, pipes and vaults. One such demonstration project in the Pipers Creek watershed reduced imperviousness by more than 18 percent. The project built bioswales, landscape elements intended to remove silt and pollution from surface runoff, and planted 100 evergreen trees and 1,100 shrubs. From 2001 to 2003, the project reduced the volume of stormwater leaving the street in a two-year storm event by 98%. Such a reduction can limit storm damage to water quality and to habitat for species such as the iconic salmon. The engineering alternatives carry a relatively high initial price, since they usually replace existing, albeit life-limited, structures; moreover, conventional systems generally do not consider full cost accounting. Natural drainage system alternatives can also provide returns on investment by improving urban environments.
The Street Edge Alternatives (SEA) street breaks most of the conventions of 150 years of standard American street design. Narrow, curved streets, open drainage swales, and an abundance of diverse plants and trees welcome pedestrians as well as diverse species. Adjacent residents maintain city infrastructure in the form of street "gardens" in front of their homes, visually integrating the neighborhood along the street. The natural drainage system united the community visually, environmentally, and socially. The 110th Cascades SEA (2002–2003) is a creek-like cascade of stair-stepped natural, seasonal pools that intercept, infiltrate, slow, and filter the stormwater draining through the project.
Example projects
Viable, daylighted streams exist only where neighbourhoods are intimately connected to restoration and stewardship values in their watersheds, since the health of an urban stream cannot long survive carelessness or neglect. With impervious surfaces having replaced most of the natural ground cover in urban environments, habitat for wildlife is dramatically reduced compared to historic baselines. Hydrologic changes have resulted, and impervious waterways carry non-point pollution directly through urban creeks. One effective solution is to restore streams and riparian habitat. This improves the entire urban watershed, far beyond the riparian channel itself. Wild et al. 2011 described the first known online map and database of urban river daylighting projects, and Wild et al. 2019 published a geospatial database of such schemes. The University of Waterloo has documented a very similar list featuring many of the same stream daylighting projects around the globe.
Switzerland
Zürich
The City of Zürich's stream daylighting policy has long received the attention of researchers and is considered by some to be unique in the world. Adopted in 1986, it has so far resulted in the daylighting of nearly 21 kilometres of Zürich's buried streams. The positive impact on water quality and biodiversity has been significant. There are also benefits for stormwater management, as well as socio-cultural benefits such as an enhanced public realm and educational opportunities.
Canada
Vancouver, British Columbia
In the 1880s there were over 50 wild salmon streams in Vancouver alone. However, as Vancouver grew, these streams were lost to urbanization. They were covered by roads, homes, and businesses. They were also lost when they were buried beneath sewers or culverts.
The City of Vancouver and its residents are now making an effort to uncover these lost streams and restore them back to their natural state.
Hastings Creek
The Hastings Creek Stream Daylighting Project was originally proposed in 1994 as a way to manage storm water and for aesthetic purposes. The idea was to bring the stream back to its once natural formation which would improve the surrounding habitat for wildlife as well as the originally proposed purposes. This project's plan was finalized in 1997, and work began the same year.
The stream had existed in Hastings Park until 1935, when the park became focused on entertainment rather than its original purpose, which, when it was given to the city in 1889, was to be a retreat for those with a passion for the outdoors. As the Pacific National Exhibition (PNE) grounds continued to expand, there was a continuing loss of natural woodlands, greenery, and waterways. It was not until the 1980s that the surrounding community began to look at upholding the park's original purpose again.
The daylighting project made major progress in 2013 in Creekway Park, which was originally a parking lot. The daylighted stream will one day connect the Sanctuary in Hastings Park to the Burrard Inlet; the progress made in Creekway Park is a major step towards this goal. The project also improved pedestrian and bikeway transit. The stream is now able to receive stormwater from the surrounding area, which reduces the load on the municipality's storm sewers. Storms in early autumn provide the water flow for the creek, meaning that flow varies throughout the year; during the late summer months, the moist soil is relied upon to maintain the area's vegetation. This variation in flow does not allow for salmon migration through the creek; however, it does house trout, as well as vegetation that aids in the filtration of the stormwater entering the creek.
Spanish Banks
Located upstream from the Spanish Banks waterfront, one of the highest-profile creeks in Metro Vancouver became open to salmon in 2000. In a collaborative project between the Spanish Banks Streamkeepers Association and the Department of Fisheries and Oceans Canada, barriers to fish passage were removed and habitat structure was added. Spanish Banks Creek was previously diverted through a culvert underneath a parking lot, but the lower reaches of the creek have been revitalized. The banks were stabilized with riprap, large woody debris was added for habitat cover, and spawning gravels were added in appropriate areas. Rigorous effectiveness monitoring has not been performed, but a few dozen coho and chum salmon are known to spawn there annually in a self-sustaining population. Maintenance of the creek is provided by the Spanish Banks Streamkeepers Association, a local volunteer stewardship group.
St. George Rainway
The East Vancouver neighborhood of Mount Pleasant has officially incorporated into its community plan a project to restore St. George Creek, a tributary of the False Creek watershed. St. George Street is the site of this former stream, which now flows through the sewers and a culvert. This paved street will be converted into a shared-use path, riparian habitat, and urban greenspace.
St. George Creek once spawned salmon and trout, and hosted a diverse riparian ecosystem. The restoration of this habitat using the rainway proposal would allow for salmon spawning, recreational and educational opportunities, and improve the community's access to nature and transportation alternatives. The proposal would pass the following community centres: Great Northern Way Campus, St. Francis Xavier School, Mt. Pleasant Elementary, Florence N. Elementary, Kivan Boys and Girls Club, Robson Park Family Centre.
Detailed landscape designs have been produced, and incorporated into the community plan of Mount Pleasant neighborhood. Project leaders from the False Creek watershed Society and Vancouver Society of Storytelling have collaborated with Mount Pleasant Elementary students to create a street mural drawing attention to the belowground stream. To date, the mural is the only physical progress on the project.
Tatlow Creek
This is a future project that aims ultimately to close the gap in the Seaside Greenway and link it to the Burrard Bridge. The City of Vancouver began the project in 2013, after its approval on July 29 of the same year. Volunteer Park is located in Kitsilano at the corner of Point Grey Road and Macdonald Street; this is where the main daylighting work for the area is planned to occur.
Phase one is currently in progress: Point Grey Road has been closed to through motor traffic in order to turn the street into a greenway for cycling and walking. This part of the project is expected to be complete by summer 2014.
Phase two of this project is looking to include the daylighting of Tatlow Creek which is located in Volunteer Park. This phase must go through the City Council and the Park Board capital planning process for the 2015-2017 Capital Plan before any plans can be finalized.
Tatlow Creek had been scheduled for daylighting in 1996, with the project to start in 1997. The project was deemed feasible, and the stormwater was to be diverted back into the natural creek bed and tunneled under Point Grey Road. When this was not done, the project was proposed again by a UBC master's student as the Tatlow Creek Revitalization Project. If completed as phase two of the new Park Board project, it would allow for salmon and trout spawning.
Caledon, Ontario
Credit River: East Credit Tributary
Credit Valley Conservation (CVC) worked with a private landowner to daylight 500 m of coldwater stream on their Caledon family farm. The project emerged from a decision to replace a failing tile drain on the farm property with a stream. The stream was buried in an agricultural tile in the early 1980s to facilitate agricultural operations. CVC worked collaboratively with the landowners to design and construct a new stream, stream-side grassland and wetland in 2017. The project improved biodiversity and ecosystem health. Nine species of fish have been recorded in the stream, and Bobolink and Eastern Meadowlark (both threatened bird species) use the planted riparian grassland. Frogs and toads are also thriving in the new wetland. In addition to the newly created stream, CVC removed a perched culvert downstream that was preventing fish passage to allow downstream fish populations to reach the new stream.
In January 2018, the landowners received the Ontario Heritage Trust Lieutenant Governor's Award for Conservation Excellence in recognition of the project's contribution to conservation.
The project was funded by the Fisheries and Oceans Canada, Peel Rural Water Quality Program and the Species at Risk Farm Incentive Program.
France
Ile de France
La Bièvre river
Partial reopening of sections, and re-naturalisation, of the La Bièvre river in the Île-de-France region (flowing from the south towards Paris, where it joins the Seine):
600-metre section in Fresnes in 2003
900-metre section in Verrières-le-Buisson/Massy in 2006
600-metre section in L'Haÿ-les-Roses in 2016
600-metre section between Arcueil and Gentilly in 2021
Re-naturalisation in 2020 of a section from Bièvres to Igny, from a relatively straight caisson-reinforced embankment to a meandering stream (excess flow diverted into a pipe).
United States
California
Codornices Creek and Strawberry Creek, Berkeley
Islais Creek, San Francisco
Maryland
Since the 1990s there have been several plans to daylight the Jones Falls along much of its route through downtown Baltimore.
Massachusetts
Part of Island End River flowing through Everett, Massachusetts was daylighted in 2021.
New York (State)
Yonkers, New York, the third-largest city in the state, broke ground on December 15, 2010, on a project to daylight a stretch of the Saw Mill River where it runs through its downtown, called Getty Square. The daylighting project is the cornerstone of a large redevelopment effort in the downtown. Two additional sections of the Saw Mill River are planned to be daylighted as well.
The first phase of the Yonkers daylighting was portrayed in the documentary Lost Rivers. The second phase, where the river runs under the Mill Street Courtyard, broke ground on March 19, 2014.
Salt Lake City, Utah
City Creek
A public-private partnership between Salt Lake City and the Church of Jesus Christ of Latter-day Saints exchanged ownership of a surface parking lot at 110 N State Street in Salt Lake City for development rights to an underground parking garage. In 1995, a donation by the church allowed Salt Lake City to daylight a creek channel through the newly created City Creek Park.
Three Creeks Confluence
Red Butte, Emigration, and Parleys Creeks flow into the Jordan River at 1300 South and 900 West in Salt Lake City, UT. The site was previously paved over with a dead-end segment of 1300 South. A dilapidated, vacant home existed to the north of 1300 South on the site. The area was in a neglected condition, impacted by noxious weeds, dumping, and encroachments from private property.
Approximately $3 million was secured for the construction of the Three Creeks Confluence, a partnership between Salt Lake City and the Seven Canyons Trust. Red Butte, Emigration, and Parleys Creeks were daylighted for 200 feet in a newly restored channel up to 900 West. The site includes a Jordan River Trail connection, a fishing bridge, and plaza space. In 2017, the project received an Achievement Award from the Utah Chapter of the American Planning Association for its innovative design and creative community engagement process.
Seattle, Washington
Pipers Creek
Pipers Creek in the central to north Greenwood area is joined by Venema and Mohlendorph Creeks in Carkeek Park on Puget Sound. Pipers is one of the four largest streams in urban Seattle, together with Longfellow, Taylor, and Thornton creeks. Pipers Creek drains its watershed into Puget Sound, from a residential upper plateau that makes up most of the watershed, through the steep ravines of Carkeek Park. The headwaters begin in the north Greenwood neighborhood.
As a result of project efforts, salmon were brought back to Pipers, Venema, and Mohlendorph Creeks in the mid-2000s after a fifty-year absence. The latter is named for the late Ted Mohlendorph, a biologist who spearheaded efforts to restore the watershed as salmon habitat. Though augmented by hatchery fish, anywhere from 200 to 600 chum salmon return each November, along with a few coho in the fall and even fewer occasional winter steelhead. Encouragingly, several hundred small resident coastal cutthroat trout live in the watershed, believed to be native fish that survived decades of urban assault. An environmental learning center and its programs are part of the comprehensive restoration. More than four miles (6 km) of trail are maintained by neighborhood volunteers, who put in 4,000 hours of work in 2003, for example. The creek waters are pretty in their impressively restored settings, but the watershed is the surrounding neighborhoods and streets, laced with petrochemicals, pesticides, fertilizers, wandering pets, and the like. Along with steeply high volume during storm runoff and the resulting turbidity, water quality is the remaining big issue in restoring salmon.
The north fork of Pipers Creek is the site of the 110th Cascades, a Street Edge Alternatives (SEA) street demonstration project (see above). The 110th Cascades are a creek-like cascade of stair-stepped natural, seasonal pools that intercept, infiltrate, slow, and filter the stormwater draining through the project. The cascades are part of a natural drainage systems project; together these united the community visually, environmentally, and socially, helping to integrate the neighborhood as a community.
Taylor Creek
Taylor Creek flows from Deadhorse Canyon (west of Rainier Avenue S at 68th Avenue S and northwest of Skyway Park), through Lakeridge Park to Lake Washington. With volunteer effort and some city matching grants, restoration has been underway since 1971. Volunteers have planted thousands of indigenous trees and plants, removed tons of garbage, removed invasive plants, and had city help removing fish-blocking culverts and improving trails. A deer has been spotted and sightings of raccoons, opossum and birds are common. By about 2050, the area will be looking like a young version of what it looked like before being disrupted. Taylor is one of the four largest streams in urban Seattle.
Fauntleroy Creek
Fauntleroy Creek in the Fauntleroy neighborhood of West Seattle flows about a mile (1.6 km) from as far east as 38th Avenue SW in the modest 33 acre (130,000 m2) Fauntleroy Park at SW Barton Street, through a fish ladder at its outlet near the Fauntleroy ferry terminal (the creek drops a moderately steep 300 ft (91 m) in that one mile). Coho salmon and cutthroat trout returned as soon as barriers were removed, after concerted effort and pressure by citizen groups of activist neighbors (1989–1998). A further culvert blocks fish passage to Kilbourne Park and so on up to the headwaters in Fauntleroy Park. The 98 acre (400,000 m2) watershed is about two-thirds residential development, from 1900s summer colony to post-World War II urban, with the rest natural space, primarily Fauntleroy Park.
Longfellow Creek
Longfellow Creek is one of the four largest streams in urban Seattle. It flows north from Roxhill Park for several miles along the valley of the Delridge neighborhood of West Seattle, turning east to reach the Duwamish Waterway via a 3,300 ft (1,000 m) pipe beneath the Bethlehem Steel plant (now Nucor). Salmon returned without intervention as soon as toxic input was ended and barriers were removed, after having been absent for 60 years. Construction of a fish ladder at the north end of the West Seattle Golf Course will allow spawning salmon up along the fairways. Farther upstream the city has been enlarging and building more storm-detention ponds, recreation areas, and an outdoor-education center at Camp Long. An area of open upland, wetland, and wooded space just east of Chief Sealth High School in Westwood is the first daylighted stretch of Longfellow Creek; it has been the location of plant and tree restoration since 1997. After more than a decade of preparation by hundreds of neighborhood volunteers, a restoration and 4.2-mile (6.7 km) legacy trail was completed in 2004. Further improvement through removal of invasive vegetation is ongoing as native species retake hold. Blue heron and coyote can be seen. The creek first emerges at the 10,000-year-old Roxhill Bog, south of the Westwood Village shopping center.
Madrona Creek
Citizens of Madrona neighborhoods initiated a daylighting project in 2001, extending from above 38th Avenue down into Lake Washington. Daylighting will return the creek to a new bed and replace the sloping lawn between Lake Washington Boulevard and Lake Washington with native plantings, with the mouth of the creek at a restored wetland cove on the lake. New culverts under 38th, the boulevard, and a permeable pedestrian path will allow fish passage. Native plantings will restore about 1.5 acres (6,100 m2), with plantings three to four feet in height at three key view corridors. Planning continued through 2004, followed by design (2005) and construction (2006). The completion celebration is scheduled for spring 2007. The $450,000 cost is funded by community-initiated grants and private donations.
Citizen stewards of the creek and woods are represented by the Friends of Madrona Woods (formed 1996). The urban forest encompasses about 9 acres (36,000 m2), largely in a couple of ravines. The park area was built in 1891–1893 and has officially not been maintained since the 1930s, with the demise of streetcars and pedestrian lifestyles. Persistent efforts began in 1995 with informal removal of ivy smothering trees, then of invasive species like holly, laurel, and blackberries, and the realization that effective restoration would require comprehensive stewardship.
With a Department of Neighborhoods grant, the neighborhood started a formal effort. Neighborhood groups, planning with naturalists and landscape architects, made rebuilding trails an effective early step, promoting access and building constituency. Further priorities were protection of habitat, restoration of stream beds, rehabilitation as a natural area using native plants, and use of the Madrona Woods as a setting for environmental education programs at local schools. A hired landscape architect became a team member, and experimental plots were set up to test different methods for revegetating with native plants. (Plants adapt to microclimates; experimentation is required to jumpstart the otherwise very long natural processes.)
Friends of Madrona Woods earned a much larger Department of Neighborhoods matching grant in 2000, funding the creation of a master action plan and major trail restoration work. The community match for the grant was nearly 2,500 hours of volunteer labor by community members and school children from St. Therese and Epiphany schools. After many decades of urban use without formal maintenance, substantial trail engineering was required. EarthCorps was contracted to do the actual construction, which included 86 steps, two landings, and a bridge.
In the process of clearing, volunteers found substantial erosion in the wetland hillside, leading to a grant from a Parks Department fund to stabilize it with a water cascade of natural materials. Neighbors did a little trail-building of their own with Volunteers for Outdoor Washington and an all-day trail building workshop (February 2000). Work parties continue monthly through much of the year.
Schmitz Creek
Schmitz Creek in the Alki neighborhood of West Seattle flows to the sound from Schmitz Park, SW 55th Avenue at SW Admiral Way. Apart from the paved entrance and a parking lot at the northwest corner, the park has remained essentially unchanged since its 53 acres (210,000 m2) were protected from complete logging in 1908–1912. Fragmentary old-growth forest remains. Daylighting and drainage rebuilding to handle seasonal and storm flow were done in 2001–2003.
United Kingdom
Porter Brook, Sheffield, Yorkshire
The Porter Brook rises to the west of Sheffield on the edge of the Peak District and flows into the River Sheaf at Sheaf Street near Sheffield railway station. The Porter Brook is one of Sheffield's five well-known rivers, along with the Don, Sheaf, Loxley and Rivelin. The Porter has been deculverted at Matilda Street near the BBC Radio Sheffield studios. A feasibility study for the scheme was undertaken for the South Yorkshire Forest Partnership by Sheffield City Council in 2013, with funding from the Environment Agency and the EU via the Interreg North Sea Region Programme. The project was completed by Sheffield City Council with funding from the Environment Agency in 2016.
The Porter Brook daylighting scheme featured in a 2016 BBC Radio 4 documentary entitled A River of Steel, produced by sound recordist Chris Watson, an ex-member of Cabaret Voltaire. It was also discussed in an article in The Guardian in 2017.
River Roch, Rochdale, Greater Manchester
The River Roch, which runs through the town of Rochdale, has recently been uncovered, revealing the medieval bridge still in place. The river was covered over in 1904 to accommodate a tram network that has since closed.
South Korea
In Seoul, which buried the Cheonggyecheon creek during the city's 1960s boom, an artificial waterway and adjoining parks have been built atop its course. Mayor Lee Myung-bak, formerly a construction magnate with the Hyundai chaebol that helped bury the river, ran for office promising to daylight it, and in 2005 achieved a greenspace in a city without very many parks or playgrounds. The new park is hugely popular, alleviating fears that opening the river would cause nearby businesses to lose customers.
See also
Stream restoration
Subterranean river
Water resources
Notes and references
Bibliography
Photo caption (Lee Jae-Won): "A CITY RUNS THROUGH IT: Residents waded into the newly restored Chonggyechon River earlier this month in downtown Seoul, South Korea."
"with additions by Sunny Walter and local Audubon chapters." See "Northeast Seattle" section, bullet points "Meadowbrook", "Paramount Park Open Space", "North Seattle Community College Wetlands", and "Sunny Walter -- Twin Ponds". Particularly useful.
Fiset referenced Warren W. Wing, To Seattle by Trolley (Edmonds, WA: Pacific Fast Mail), 1988; [No author, title], Portage, Winter/Spring 1984; Gail Lee Dubrow et al., Broadview/Bitter Lake Community History, (Seattle Department of Parks & Recreation), 1995; [No author, title], Today, August 4, 1976; [No author, title], The Seattle Times, May 22, 1930; [No author, title], Seattle Post-Intelligencer, August 19, 1953.
Referenced The Electric Trolley by Junius Rochester; Seattle 1900-1920 by Richard C. Berner; Seattle Now & Then by Paul Dorpat; The Lake Washington Story by Lucille McDonald; The Don Sherwood Files, Seattle Parks Department.
From the files of Don Sherwood, 1916–1981, Park Historian (Don Sherwood History Files).
Stormwater Strategies: Community Responses to Runoff Pollution, additional chapter 12, October 2001.
Thistle St. Longfellow Creek Greenspace
Further reading
Overview of the geography of metro Seattle watersheds, Map of the landscape carved by the Vashon Glacier some 14,000 years ago.
Homewaters Project, Thornton Creek Watershed
Longfellow Creek Home Page
City of Seattle Urban Creeks Legacy
What is in urban stormwater runoff
External links
https://uwaterloo.ca/stream-daylighting/interactive-map
https://web.archive.org/web/20071008041448/http://groundworkhudsonvalley.org/
http://www.SawMillRiverCoalition.org
https://web.archive.org/web/20121109013431/http://riverwiki.restorerivers.eu/
Water streams
Ecological restoration
Hydrology
Hydraulic engineering
Riparian zone
Habitat
Water and the environment
Subterranean rivers | Daylighting (streams) | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 5,543 | [
"Hydrology",
"Ecological restoration",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Riparian zone",
"Hydraulic engineering"
] |
5,453,536 | https://en.wikipedia.org/wiki/Zij-i%20Ilkhani | Zīj-i Īlkhānī () or Ilkhanic Tables (literal translation: "The Ilkhan Stars", after ilkhan Hulagu, who was the patron of the author at that time) is a Zij book with astronomical tables of planetary movements. It was compiled by the Muslim astronomer Nasir al-Din al-Tusi in collaboration with his research team of astronomers at the Maragha observatory. It was written in Persian and later translated into Arabic.
The book contains tables for calculating the positions of the planets and the names of the stars. It included data derived from observations made over the course of 12 years at the Maragha observatory, completed in 1272. The planetary positions of the Zij-i Ilkhani, derived from the zijs of Ibn al-A'lam and Ibn Yunus (10th/11th century AD), were so faulty that later astronomers, such as al-Wabkanawi and Rukn al-Din al-Amuli, criticized the work severely.
The Zīj-i Īlkhānī set the precession of the equinoxes at 51 arcseconds per year, which is very close to the modern value of 50.2 arcseconds. The book also describes a method of interpolation between the observed positions, which in modern terms may be described as a second-order interpolation scheme.
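For illustration, a generic second-order scheme fits a parabola through three tabulated positions and evaluates it between the tabulated arguments. The Python sketch below is a minimal modern rendering of that idea, not a reconstruction of al-Tusi's actual procedure (which is analysed in Zadeh 1985, cited below); the tabulated longitudes are hypothetical values.

```python
def quadratic_interpolate(x, xs, ys):
    """Second-order (quadratic) Lagrange interpolation through the
    three tabulated points (xs[i], ys[i]); returns the value at x."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2

# Hypothetical daily table of a planet's ecliptic longitude (degrees);
# interpolate the position half a day after the first entry.
days = (0.0, 1.0, 2.0)
longitudes = (120.00, 120.98, 121.92)
print(quadratic_interpolate(0.5, days, longitudes))  # ~120.495
```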
History
Hulagu Khan believed that many of his military successes were due to the advice of astronomers (who were also astrologers), especially al-Tusi. Therefore, when al-Tusi complained that his astronomical tables were 250 years old, Hulagu gave him permission to build a new observatory in a place of his choosing (he chose Maragheh). A number of other prominent astronomers worked with al-Tusi there, including Muhyi al-Din al-Maghribi, Qutb al-Din al-Shirazi, and Mu'ayyid al-Din al-'Urdi from Damascus. Furthermore, the influence of Chinese astronomy was brought by Fao Munji, whose astronomical experience led to improvements to the Ptolemaic system used by al-Tusi; traces of the Chinese system may be seen in the Zij-i Ilkhani. The tables were published during the reign of Abaqa Khan, Hulagu's son, and named after the patron of the observatory. They remained popular until the 15th century.
Some Islamic astronomical tables such as the Zij-i Al-`Ala'i of Abd-Al-Karim al-Fahhad and the Zij al-Sanjari of al-Khazini were translated into Byzantine Greek by Gregory Chioniades and studied in the Byzantine Empire. Chioniades himself had studied under Shams ad-Din al-Bukhari, who had worked at the famous Maragheh observatory after the death of al-Tusi.
See also
Zij
Astronomy in medieval Islam
Notes
References
Nasir al-Din al-Tusi (1272) Zij-i Ilkhani, British Museum, MS Or. 7464.
J. A. Boyle (1963) "The Longer Introduction to the Zij-i Ilkhani of Nasir ad-Din Tusi", Journal of Semitic Studies 8(2), pp. 244–254.
Edward Stewart Kennedy (1956) "A Survey of Islamic Astronomical Tables", Transactions of the American Philosophical Society 46(2), pp. 3, 39–40.
Javad H. Zadeh (1985) "A Second Order Interpolation Scheme Described in the Zij-i Ilkhani", Historia Mathematica 12, pp. 56–59.
External links
A manuscript copy held by the Wellcome Collection in London
1272 books
Astronomical tables
Astronomical works of the medieval Islamic world
13th century in science
1270s in the Mongol Empire
Ilkhanate
13th-century Persian books | Zij-i Ilkhani | [
"Astronomy"
] | 813 | [
"Astronomical tables",
"History of astronomy"
] |
5,453,553 | https://en.wikipedia.org/wiki/Acorn%20nut | An acorn nut, also referred to as crown hex nut, blind nut, cap nut, domed cap nut, or dome nut (UK), is a nut that has a domed end on one side. When used together with a threaded fastener with an external male thread, the domed end encloses the external thread, either to protect the thread or to protect nearby objects from contact with the thread. In addition, the dome gives a more finished appearance.
Acorn nuts are usually made of brass, steel, stainless steel (low carbon content) or nylon. They can also be chrome plated and given a mirror finish.
There are two types of acorn nut. One is the low, or standard, acorn nut. The other is the high acorn nut, which is wider and taller and will protect extra-long studs. There are also self-locking acorn nuts, which have distorted threads in the hex area that create a tight friction fit to keep the nut from vibrating loose.
There are standards governing the manufacture of acorn nuts. One is Society of Automotive Engineers (SAE) Standard J483, High and Low Crown (Blind, Acorn) Hex Nuts. Another is Deutsches Institut für Normung (DIN) 1587, Hexagon Domed Cap Nuts.
References
Nuts (hardware) | Acorn nut | [
"Engineering"
] | 274 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
5,453,739 | https://en.wikipedia.org/wiki/The%20Computer%20Wore%20Tennis%20Shoes | The Computer Wore Tennis Shoes is a 1969 American science fiction comedy film starring Kurt Russell, Cesar Romero, Joe Flynn and William Schallert. It was produced by Walt Disney Productions and distributed by Buena Vista Distribution Company.
It was one of several films made by Disney using the setting of Medfield College, first used in the 1961 Disney film The Absent-Minded Professor and its sequel Son of Flubber. The Computer Wore Tennis Shoes is the first film for the series Dexter Riley.
Plot
Dexter Reilly (Kurt Russell) and his friends attend Medfield College, a small private college that cannot afford to buy a computer. The students persuade wealthy businessman A. J. Arno (Cesar Romero) to donate an old Burroughs 205 computer to the college. Arno is secretly the head of a large illegal gambling ring, which had used the computer for its operations.
While installing a replacement computer part during a thunderstorm, Reilly receives an electric shock and becomes a human computer. He now has superhuman mathematical talent, can read and remember the contents of an encyclopedia volume in a few minutes, and can speak a language fluently after reading one textbook. His new abilities make him a worldwide celebrity and Medfield's best chance to win a televised quiz tournament with a $100,000 prize.
Reilly single-handedly leads Medfield's team in victories against other colleges. During the tournament, on live television, a trigger word ("applejack") causes him to unknowingly recite details of Arno's gambling ring. Arno's henchmen kidnap Reilly and plan to kill him, but his friends help him escape by locating the house in which he is being kept, posing as house painters to gain access, and sneaking him out in a large trunk. During the escape, he suffers a concussion which, during the tournament final against rival Springfield State, gradually returns his mental abilities to normal; however, one of his friends, Schuyler, is able to answer the final question ("A small Midwest city is located exactly on an area designated as the geographic center of the United States. For 10 points and $100,000, can you tell us the name of that city?" with the answer "Lebanon, Kansas"). Medfield wins the $100,000 prize. Arno and his henchmen are arrested when they attempt to escape the TV studio and crash head-on into a police car.
Cast
Kurt Russell as Dexter Reilly
Cesar Romero as A. J. Arno
Joe Flynn as Dean Eugene Higgins
William Schallert as Professor Quigley
Alan Hewitt as Dean Collingsgood
Richard Bakalyan as Chillie Walsh
Debbie Paine as Annie Hannah
Frank Webb as Pete
Michael McGreevey as Richard Schuyler
Jon Provost as Bradley
Frank Welker as Henry Fathington
W. Alex Clarke as Myles
Bing Russell as Angelo
Pat Harrington as Moderator
Fabian Dean as Little Mac
Fritz Feld as Sigmund van Dyke
Pete Ronoudet as Lt. Charles "Charlie" Hannah
Hillyard Anderson as J. Reedy
David Canary* as Walski
Robert Foulk* as Police desk sergeant
Ed Begley Jr.* as a Springfield State panelist
William Fawcett* as Regent Dietes
Byron Morrow* as Regent Leonard
Howard Culver* as the Moderator
Olan Soule* as a TV Reporter
Gregory Morton* as Dr. Rufus Schmidt
Gail Bonney* as Winnifrid
* Not credited on-screen.
Music
The film's theme song, The Computer Wore Tennis Shoes, was written by Robert F. Brunner and Bruce Belland.
Reception
A. H. Weiler of The New York Times wrote: "This 'Computer' isn't I.B.M.'s kind but it's homey, lovable, as exciting as porridge and as antiseptic and predictable as any homey, half-hour TV family show". Gene Siskel of the Chicago Tribune reported: "I rather enjoyed The Computer Wore Tennis Shoes and I suspect children under 14 will like it, too". Arthur D. Murphy of Variety praised the film as "above-average family entertainment, enhanced in great measure by zesty, but never show-off, direction by Robert Butler, in a debut swing to pix from telefilm". Kevin Thomas of the Los Angeles Times wrote that "Disney Productions latched on to a terrific premise for some sharp satire only to flatten it out by jamming it into its familiar 'wholesome' formula. Alas, the movie itself comes out looking like it had been made by a computer".
The film holds a score of 50% on Rotten Tomatoes based on six reviews.
Release
The Computer Wore Tennis Shoes was released by Walt Disney Home Video through VHS on October 19, 1985. It was later re-released by Walt Disney Studios Home Entertainment on Blu-ray disc on September 9, 2014 as a Disney Movie Club exclusive.
Legacy
Sequels
Now You See Him, Now You Don't (1972)
The Strongest Man in the World (1975)
Television films
This film was remade as the television film The Computer Wore Tennis Shoes in 1995 starring Kirk Cameron as Dexter Riley.
Other Disney Channel films carrying similar plot elements were the Not Quite Human film series, which aired in the late 1980s and early 1990s. The films were based on the series of novels with the same name.
Other
The animated title sequence, by future Academy Award-winning British visual effects artist Alan Maley, reproduced the look of contemporary computer graphics using stop motion photography of paper cutouts. It has been cited as an early example of "computational kitsch".
The title of the 2000 episode of The Simpsons, "The Computer Wore Menace Shoes", is a reference to the film, but the episode is otherwise unrelated to it, according to M. Keith Booker in his book Drawn to Television: Prime-Time Animation from The Flintstones to Family Guy.
See also
Dexter Riley (film series)
List of American films of 1969
References
Footnotes
Bibliography
External links
(archived)
1960s American films
1960s English-language films
1960s science fiction comedy films
1969 comedy films
1969 films
1969 children's films
American science fiction comedy films
English-language science fiction comedy films
Films about computing
Films directed by Robert Butler
Films set in universities and colleges
Medfield College films
Walt Disney Pictures films | The Computer Wore Tennis Shoes | [
"Technology"
] | 1,279 | [
"Works about computing",
"Films about computing"
] |
5,454,012 | https://en.wikipedia.org/wiki/11%20Brigade%20%28United%20Kingdom%29 | The 11th Brigade is a brigade of the British Army which is transitioning to the tactical recce-strike role. The brigade was formerly the 11th Security Force Assistance Brigade, providing training and guidance for foreign militaries.
Originally formed in the Second Boer War, the brigade was engaged during both World Wars, and deployed to Afghanistan.
History
Second Boer War
British Army brigades had traditionally been ad hoc formations known by the name of their commander or numbered as part of a division. However, units involved in the Second Boer War in 1899–1900 were organised into sequentially numbered brigades that were frequently reassigned between divisions. The Army Corps sent from Britain in 1899 comprised six brigades in three divisions while the troops already in South Africa were intended to constitute a fourth division. The rapid deterioration of the situation led the War Office to announce on 11 November 1899 that a 5th Division was to be formed and sent out. This consisted of the new 10th and 11th (Lancashire) Brigades and concentrated at Estcourt on 8 January 1900 for the campaign for the Relief of Ladysmith.
Order of Battle
The 11th (Lancashire) Brigade was constituted as follows:
2nd Battalion, King's Own (Royal Lancaster Regiment)
2nd Battalion, Lancashire Fusiliers
1st Battalion, South Lancashire Regiment
1st Battalion, York and Lancaster Regiment
Commanders
Major-General Edward Woodgate (mortally wounded at Spion Kop 23/24 January 1900)
Lieutenant-Colonel Malby Crofton (Royal Lancaster Regiment) (acting)
Colonel Arthur Wynne (wounded at Tugela Heights 22 February 1900)
Lieutenant-Colonel Malby Crofton (acting)
Brigadier-General Walter Kitchener (acting)
Maj-Gen Arthur Wynne (returned by May 1900)
As well as Spion Kop and Tugela Heights, the brigade served at Trichard's Drift, Tabanyama, Vaal Krantz, Wessel's Nek, Waschbank, Botha's Pass, Alleman's Nek, Volkrust, Wakkerstroom, and the advance on Standerton. However, after the defeat of the main Boer field armies and the development of guerrilla warfare, all the divisions and brigades were broken up to form ad hoc 'columns' and garrisons.
After the Boer War, 11th Brigade became a permanent formation in 1902, stationed at Portsmouth. By 1907 it was part of 6th Division in Eastern Command. In the Expeditionary Force established by the Haldane reforms, 11th Brigade at Colchester became part of 4th Division, and remained so until the outbreak of World War I.
First World War
When war broke out in August 1914 the 11th Infantry Brigade mobilised as part of the 4th Division. It was one of the British units sent overseas to France as part of the British Expeditionary Force and fought on the Western Front for the next four years.
Order of Battle
During World War I the brigade had the following composition:
1st Battalion, Somerset Light Infantry
1st Battalion, East Lancashire Regiment (left to join 103rd Brigade, 34th Division, 1 February 1918)
1st Battalion, Hampshire Regiment
1st Battalion, Rifle Brigade
2nd Battalion, Royal Irish Regiment (joined from 12th Brigade 26 July 1915, left to join 22nd Brigade, 7th Division, 22 May 1916)
11th Brigade Machine Gun Company (formed 23 December 1915; left to join No 4 Battalion, Machine Gun Corps, 26 February 1918)
11th Brigade Trench Mortar Battery (formed June 1916)
Service
During the war the brigade participated in the following actions:
1914
Retreat from Mons, 25 August–5 September
Battle of Le Cateau, 26 August
First Battle of the Marne, 6–9 September
Crossing of the Aisne, 12 September
First Battle of the Aisne, 13–20 September
Battle of Armentières, 3 October–2 November
Capture of Méteren, 13 October
1915
Second Battle of Ypres:
Battle of St Julien, 25 April–4 May
Battle of Frezenberg Ridge, 8–13 May
Battle of Bellewaarde Ridge, 24–25 May
1916
Battle of the Somme:
Battle of Albert, 1–2 July
Battle of the Transloy Ridges, 10–18 October
1917
Battle of Arras:
First Battle of the Scarpe, 9–14 April
Second Battle of the Scarpe, 3–4 May
Third Battle of Ypres:
Battle of Polygon Wood, 28 September–3 October
Battle of Broodseinde, 4 October
Battle of Poelcappelle, 9 October
First Battle of Passchendaele, 12 October
1918
German Spring Offensive:
Third Battle of Arras, 28 March
Battle of the Lys:
Battle of Hazebrouck, 13–15 April, including defence of Hinges Wood
Battle of Béthune, 18 April
Hundred Days Offensive:
Battle of the Scarpe, 29–30 August
Battle of the Drocourt-Quéant Line, 2–3 September
Battle of the Canal du Nord, 27 September–1 October
Battle of the Selle, 17–25 October
Battle of Valenciennes, 1–2 November
Commanders
During World War I the brigade was commanded by the following officers:
Brigadier-General Aylmer Hunter-Weston (February 1914)
Brigadier-General Julian Hasler (26 February 1915 – killed 27 April 1915)
Lieutenant-Colonel F. R. Hicks (27 April 1915 - acting)
Brigadier-General Charles Bertie Prowse (29 April 1915 – killed 1 July 1916)
Major W. A. T. B. Somerville (1 July 1916 - acting)
Brigadier-General H. C. Rees (3 July 1916)
Brigadier-General R. A. Berners (7 December 1916)
Lieutenant-Colonel F. A. W. Armitage (15 October 1917 - acting)
Brigadier-General T. S. H. Wade (21 October 1917)
Brigadier-General W. J. Webb-Bowen (19 September 1918)
Second World War
The 11th Infantry Brigade began the Second World War as part of the 4th Infantry Division, as in the First World War, serving with it during the Battle of France before being evacuated from Dunkirk in late May 1940. It remained with the division in the United Kingdom until 6 June 1942, when it was reassigned to the 78th Infantry Division (commanded by Vyvyan Evelegh, a previous commander of the brigade), which was being newly formed to take part in Operation Torch, the Allied landings in French North Africa, as part of the British First Army (commanded by Kenneth Anderson, also a previous commander of the brigade). The brigade landed at Algiers in November 1942 and fought with the 78th Division throughout the Tunisian campaign, which ended with the Axis surrender in May 1943. It then served with the 78th Division throughout the campaigns in Sicily and Italy.
Order of Battle
During World War II the brigade comprised the following units:
Headquarters, 11th Infantry Brigade & Signal Section
2nd Battalion, The Lancashire Fusiliers
1st Battalion, East Surrey Regiment
1st Battalion, Oxfordshire and Buckinghamshire Light Infantry (left to join 143rd Brigade 31 December 1940)
11th Infantry Brigade Anti-Tank Company (left to join 4th Battalion, Reconnaissance Corps, 1 January 1941)
5th (Huntingdon) Battalion, Northamptonshire Regiment (joined from 143rd Brigade 29 January 1941)
Commanders
During World War II the brigade was commanded by the following officers:
Brigadier Kenneth Anderson (1938–1940)
Lieutenant-Colonel Brian Horrocks (30 May 1940 – acting until 3 June)
Brigadier John Grover (14 June 1940)
Lieutenant-Colonel R.A. Boxshall (7 January 1941 – acting)
Brigadier Vyvyan Evelegh (11 January 1941)
Brigadier Guy Francis Gough (13 November 1941)
Brigadier Edward Cass (18 February 1942)
Brigadier Keith Arbuthnott (29 September 1943)
Lieutenant-Colonel John Alexander Mackenzie (10 October 1944 – acting)
Brigadier Gerald Ernest Thubron (23 November 1944 – 1945)
Post-war
In January 1946, following the end of the campaign in Europe, the brigade was dissolved and its units dispersed to other brigades and commands. In 1950, the brigade was reformed in West Germany.
The organisation of the brigade during the 1950s was as follows:
Brigade Headquarters, at Kingsley Barracks, Minden
9th Queen's Royal Lancers, at Lothian Barracks, Detmold (Armoured role, with Centurion main battle tanks)
1st Battalion, The Sherwood Foresters (Nottinghamshire & Derbyshire Regiment), at Dempsey Barracks, Sennelager
1st Battalion, The Manchester Regiment, at Clifton Barracks, Minden – merged with the King's Liverpool Regiment on 1 September 1958 to form the King's Regiment
1st Battalion, The Dorset Regiment, at Elizabeth Barracks, Minden – from April 1956, merged with the Devonshire Regiment in 1958 to form the Devonshire and Dorset Regiment
On 1 April 1956, the 4th Infantry Division was reformed in the BAOR with the 10th, 11th, and 12th Brigades; the 11th was reformed by conversion of the old 61st Lorried Infantry Brigade based in Minden. In 1958, following the 1957 Defence White Paper, the brigade was redesignated 11th Infantry Brigade Group; as a brigade group it picked up not just infantry but also supporting elements such as artillery, and it was shifted to the 2nd Division. In February 1961, the brigade groups were reorganised again, to comprise a signal squadron, an armoured regiment, three infantry battalions, a field artillery regiment, an engineer squadron, and an AAC reconnaissance flight. In 1964, the brigade was transferred to the 1st Division, sitting alongside the 7th Armoured Brigade Group.
The brigade's structure following its conversion to a brigade group was as follows:
Brigade Headquarters, at Kingsley Barracks, Minden
7th Royal Tank Regiment, at Haig Barracks, Hohne – merged with 4th Royal Tank Regiment on 3 April 1959
4th Royal Tank Regiment – from April 1959
1st Battalion, North Staffordshire Regiment (Prince of Wales's), at Clifton Barracks, Minden
1st Battalion, The South Wales Borderers – from June 1959
1st Battalion, The Highland Light Infantry (City of Glasgow Regiment), at Alma Barracks, Lüneburg
1st Battalion, Middlesex Regiment (Duke of Cambridge's Own) – from November 1958
1st Battalion, The Royal Lincolnshire Regiment, at Elizabeth Barracks, Minden – from June 1958
19th Field Regiment, Royal Artillery, at Saint George's Barracks, Minden (Field artillery; 18 x Ordnance QF 25-pdr howitzers)
25 Field Squadron, Royal Engineers, at Saint George's Barracks, Minden
In November 1965, the brigade groups became 'brigades' once again, dropping their support units. In October 1966, just after the publication of the 1966 Defence White Paper, the 7th Armoured and 11th Infantry brigades experimented with a new brigade organisation with two armoured regiments and two 'mechanised' battalions equipped with the new FV432 armoured personnel carrier. With the increasing availability of the new vehicle, all of the infantry battalions within the BAOR were to become mechanised.
The brigade's structure just before conversion was as follows:
Brigade Headquarters, at Kingsley Barracks, Minden
211 Signal Squadron (Infantry Brigade Group), Royal Corps of Signals, at Kingsley Barracks, Minden
The Royal Scots Greys (2nd Dragoons), at Wessex Barracks, Fallingbostel
1st Battalion, The Royal Warwickshire Fusiliers, at Gordon Barracks, Hameln
1st Battalion, The Duke of Edinburgh's Royal Regiment (Berkshire & Wiltshire) – from June 1966
16th/5th The Queen's Royal Lancers – in infantry role from June 1969
1st Battalion, The Royal Welch Fusiliers, at Saint George's Barracks, Minden
1st Battalion, The Gordon Highlanders – from April 1967
15th/19th The King's Royal Hussars – in infantry role from November 1969
1st Battalion, The Black Watch (Royal Highland Regiment), at Elizabeth Barracks, Minden
1st Battalion, The Sherwood Foresters (Nottinghamshire & Derbyshire Regiment) – from March 1968
As a result of the above defence white paper and experimentation, the BAOR was completely reorganised, with the 11th Infantry Brigade becoming an armoured formation at the end of 1970. It was reformed as the new 11th Armoured Brigade, thus ending the infantry lineage.
Twenty-first century
Afghanistan
On 15 October 2007, Helmand Task Force 11 formed its planning cell at Aldershot Garrison, expanding into the 11th Light Brigade in November 2007 for deployment to Afghanistan (Operation Herrick). The brigade was stood up alongside 52nd Infantry Brigade, thus providing the Army with two infantry brigades available for deployment to either Afghanistan (Operation Herrick) or Iraq (Operation Telic).
On 10 October 2009, the brigade deployed to Helmand Province, replacing 19th Light Brigade, and remained there until April 2010. The brigade's order of battle on deployment to Afghanistan was as follows, with each unit's parent formation shown in parentheses:
Brigade Headquarters
11th Light Brigade Headquarters & 261 Signal Squadron, Royal Corps of Signals (101st Logistic Brigade)
Household Cavalry Regiment (1st Mechanised Brigade)
1st Battalion, Grenadier Guards (London District)
2nd Battalion (The Green Howards), Yorkshire Regiment (19th Light Brigade)
1st Battalion (Royal Welch Fusiliers), The Royal Welsh (1st Mechanised Brigade)
3rd Battalion, The Rifles (52nd Infantry Brigade)
1st Regiment, Royal Horse Artillery (3rd (UK) Mechanised Division)
28th Engineer Regiment, Royal Engineers (1st (UK) Armoured Division)
10th (Queen's Own Gurkha) Logistic Regiment, Royal Logistic Corps (101st Logistic Brigade)
104th Force Support Battalion, Royal Electrical and Mechanical Engineers (Equipment Support, Theatre Troops)
33rd Field Hospital, Royal Army Medical Corps (2nd Medical Brigade)
160 Provost Company, Royal Military Police, Adjutant General's Corps (4th Regiment, Royal Military Police)
On the brigade's return in April 2010, a total of 650 soldiers from the 12 regiments of the brigade marched through Winchester in Hampshire accompanied by three bands to celebrate their return. Later in June, around 120 soldiers then marched past the Palace of Westminster (Parliament of the United Kingdom).
Just a few months after its return in 2010, the brigade was disbanded and its units returned to their peacetime headquarters.
Army 2020
In 2012, following the Strategic Defence and Security Review 2010, the Army 2020 programme was announced. As part of the mergers, the 2nd (South East) Infantry Brigade, which had regional responsibility for the south east counties (Kent, Surrey, and Sussex), and 145th (South) Brigade, which had regional responsibility for the south-central region (Thames Valley (Berkshire, Buckinghamshire, and Oxfordshire), Hampshire, and the Isle of Wight) were merged to form the new 11th Infantry Brigade and Headquarters South East.
The brigade's organisation was as follows by 2015:
Brigade Headquarters, at Taurus House, Aldershot Garrison
1st Battalion, Welsh Guards, at Elizabeth Barracks, Pirbright Camp (Light Mechanised Infantry with Foxhound armoured cars)
1st Battalion, Grenadier Guards, at Lille Barracks, Aldershot Garrison (Light Infantry)
1st Battalion, The Royal Gurkha Rifles, at Sir John Moore Barracks, Shorncliffe (Light Infantry)
The London Regiment (Army Reserve), HQ in Westminster (Light Infantry) – paired with the Grenadier Guards
3rd Battalion, The Royal Welsh (Army Reserve), HQ in Cardiff (Light Infantry) – paired with the Welsh Guards
Army 2020 Refine
In 2017, a supplement to the Army 2020 programme, entitled Army 2020 Refine, was announced; it reversed many of the unit-level changes. In addition, several of the regional brigades formed under the initial Army 2020 programme were disbanded or reduced to Colonel-level commands. In 2019, a Field Army reorganisation saw these brigades lose their units permanently, with the following changes to the brigade's former units: the Grenadier Guards and Welsh Guards transferred to London District (on rotation) and were replaced by the Coldstream Guards and Irish Guards respectively; the Royal Gurkha Rifles moved to 16th Air Assault Brigade; The London Regiment transferred to London District; and the 3rd Royal Welsh moved to the 12th Armoured Infantry Brigade.
Under the changes, the Coldstream and Irish Guards moved from London District, the 3rd Princess of Wales's Royal Regiment moved from 7th Infantry Brigade, and the 1st and 2nd Battalions, Royal Irish Regiment moved from 160th (Welsh) Brigade.
With the brigade completely reorganised, its structure by the end of 2021 was as follows:
Brigade Headquarters, at Taurus House, Aldershot Garrison
1st Battalion, Irish Guards, at Lille Barracks, Aldershot Garrison (Light Mechanised Infantry with Foxhound armoured cars)
1st Battalion, The Royal Irish Regiment, at Clive Barracks, Ternhill (Light Mechanised Infantry with Foxhound armoured cars)
1st Battalion, Coldstream Guards, at Victoria Barracks, Windsor (Light Infantry)
3rd Battalion, The Princess of Wales's Royal Regiment (Army Reserve), HQ at Leros Barracks, Canterbury
2nd Battalion, The Royal Irish Regiment (Army Reserve), HQ at Thiepval Barracks, Lisburn
11th Security Force Assistance Brigade
On 30 November 2021, the Future Soldier changes were announced, under which the brigade would transition from an infantry brigade into a security force assistance formation. In late 2021, the brigade was renamed 11th Security Force Assistance Brigade, dropping its regional commitments, and was to complete its reorganisation by 2022.
Under these changes the brigade headquarters would remain in Aldershot, and unit moves would be as follows: the Coldstream Guards to 4th Light Brigade Combat Team (BCT), formerly 4th Infantry Brigade & HQ North East; 2nd Royal Irish Regiment to 19th Reserve Brigade, a new formation; 3rd Princess of Wales's Royal Regiment to 20th Armoured BCT as mechanised infantry; and 1st Royal Irish Regiment to 16th Air Assault Brigade as 'light strike reconnaissance infantry', while the Irish Guards would remain part of the brigade. The following units would join the brigade: The Black Watch (3rd Battalion, Royal Regiment of Scotland) from 51st Infantry Brigade in 2022; 1st Royal Anglian Regiment from British Forces Cyprus, on its return from Cyprus in 2023; 3rd Battalion, The Rifles, from 51st Infantry Brigade in 2024; 4th Princess of Wales's Royal Regiment from 7th Infantry Brigade; and the Outreach and Cultural Support Group from 77th Brigade.
The brigade's planned structure by 2025 was therefore as follows:
Brigade Headquarters, at Taurus House, Aldershot Garrison
1st Battalion, Irish Guards, at Lille Barracks, Aldershot Garrison
The Black Watch, 3rd Battalion, The Royal Regiment of Scotland, at Fort George, Inverness – to move to Leuchars Station not before 2029
1st Battalion, The Royal Anglian Regiment, at Alexander Barracks, Dhekelia Cantonment, Cyprus – to move to Kendrew Barracks, Cottesmore in 2023 and join the brigade that same year
3rd Battalion, The Rifles, at Dreghorn Barracks, Edinburgh – to move to Weeton Barracks, Blackpool not before 2027 and join the brigade in 2024
4th Battalion, The Princess of Wales's Royal Regiment (Army Reserve), HQ in Redhill
Outreach and Cultural Support Group (77th Brigade), at Denison Barracks, Hermitage – to move to Alexander Barracks, Pirbright Camp not before 2027
The brigade led a programme to train members of the Armed Forces of Ukraine during the Russo-Ukrainian War as part of Operation Orbital (2015–2022) and Operation Interflex (2022).
Current formation
11th Brigade
In November 2024, the brigade resubordinated from the 1st (UK) Division to Field Army Troops. The brigade became 11th Brigade, dropping its Security Force Assistance responsibility and returning to a combat role as part of the Land Special Operations Force. The brigade will learn to fight as a tactical recce-strike force and will take part in training packages in Kenya and the Baltics in 2025.
The brigade's current structure is as follows:
Brigade Headquarters, at Taurus House, Aldershot Garrison
1st Battalion, Irish Guards, at Lille Barracks, Aldershot Garrison
The Black Watch, 3rd Battalion, The Royal Regiment of Scotland, at Fort George, Inverness – to move to Leuchars Station not before 2029
1st Battalion, The Royal Anglian Regiment, at Kendrew Barracks, Cottesmore
3rd Battalion, The Rifles, at Dreghorn Barracks, Edinburgh – to move to Weeton Barracks, Blackpool not before 2027
4th Battalion, The Princess of Wales's Royal Regiment (Army Reserve), RHQ in Redhill
Outreach and Cultural Support Group, at Denison Barracks, Hermitage – to move to Alexander Barracks, Pirbright Camp not before 2027
See also
Security Force Assistance Brigade
Footnotes
References
L.S. Amery (ed), The Times History of the War in South Africa 1899-1902, London: Sampson Low, Marston, 7 Vols 1900–09.
Maj A.F. Becke, History of the Great War: Order of Battle of Divisions, Part 1: The Regular British Divisions, London: HM Stationery Office, 1934/Uckfield: Naval & Military Press, 2007, ISBN 1-847347-38-X.
Col John K. Dunlop, The Development of the British Army 1899–1914, London: Methuen, 1938.
Lt-Col H.F. Joslen, Orders of Battle, United Kingdom and Colonial Formations and Units in the Second World War, 1939–1945, London: HM Stationery Office, 1960/London: London Stamp Exchange, 1990, ISBN 0-948130-03-2/Uckfield: Naval & Military Press, 2003, ISBN 1-843424-74-6.
External links
Official web page
Infantry brigades of the British Army in World War I
Infantry brigades of the British Army in World War II
Infantry brigades of the British Army
Military units and formations established in 2021
Future Soldier
Military advisory groups | 11 Brigade (United Kingdom) | [
"Engineering"
] | 4,439 | [
"Military projects",
"Future Soldier"
] |
5,454,081 | https://en.wikipedia.org/wiki/Android%20science | Android science is an interdisciplinary framework for studying human interaction and cognition based on the premise that a very humanlike robot (that is, an android) can elicit human-directed social responses in human beings. The android's ability to elicit human-directed social responses enables researchers to employ an android in experiments with human participants as an apparatus that can be controlled more precisely than a human actor.
While mechanical-looking robots may be able to elicit social responses to some extent, a robot that looks and acts like a human being is in a better position to stand in for a human actor in social, psychological, cognitive, or neuroscientific experiments. This gives experiments with androids a level of ecological validity with respect to human interaction found lacking in experiments with mechanical-looking robots.
An experimental setting for human-android interaction also provides a testing ground for models concerning how cognitive or neural processing influence human interaction, because models can be implemented in the android and tested in interaction with human participants. In android science, cognitive science and engineering are understood as enjoying a synergistic relationship in which the results from a deepening understanding of human interaction and the development of increasingly humanlike androids feed into each other.
Android science can be broadly construed to include all the effects of engineered human likeness, such as the impact of humanlike robots on society or the study of the relationship between anthropomorphism and human perception. The latter relates to an observation made by Masahiro Mori that human beings are more sensitive to deviations from humanlike behavior or appearance in near-human forms. Mori refers to this phenomenon as the uncanny valley.
See also
Affective computing
Android
Cognitive science
Human-robot interaction
Robot
Uncanny valley
References
External links
Android Science Center | Android science | [
"Engineering"
] | 355 | [
"Android (robot)",
"Human–machine interaction"
] |
5,454,132 | https://en.wikipedia.org/wiki/Problem%20of%20future%20contingents | Future contingent propositions (or simply, future contingents) are statements about states of affairs in the future that are contingent: neither necessarily true nor necessarily false.
The problem of future contingents seems to have been first discussed by Aristotle in chapter 9 of his On Interpretation (De Interpretatione), using the famous sea-battle example. Roughly a generation later, Diodorus Cronus from the Megarian school of philosophy stated a version of the problem in his notorious master argument. The problem was later discussed by Leibniz.
The problem can be expressed as follows. Suppose that a sea-battle will not be fought tomorrow. Then it was also true yesterday (and the week before, and last year) that it will not be fought, since any true statement about what will be the case in the future was also true in the past. But all past truths are now necessary truths; therefore it was already necessarily true, at every point in the past up to the statement "A sea battle will not be fought tomorrow", that the battle will not be fought, and thus the statement that it will be fought is necessarily false. Therefore, it is not possible that the battle will be fought. In general, if something will not be the case, it is not possible for it to be the case. "For a man may predict an event ten thousand years beforehand, and another may predict the reverse; that which was truly predicted at the moment in the past will of necessity take place in the fullness of time" (De Int. 18b35).
This conflicts with the idea of our own free choice: that we have the power to determine or control the course of events in the future, which seems impossible if what happens, or does not happen, is necessarily going to happen, or not happen. As Aristotle says, if so there would be no need "to deliberate or to take trouble, on the supposition that if we should adopt a certain course, a certain result would follow, while, if we did not, the result would not follow".
Aristotle's solution
Aristotle solved the problem by asserting that the principle of bivalence found its exception in this paradox of the sea battles: in this specific case, what is impossible is that both alternatives can be possible at the same time: either there will be a battle, or there won't. Both options can't be simultaneously taken. Today, they are neither true nor false; but if one is true, then the other becomes false. According to Aristotle, it is impossible to say today if the proposition is correct: we must wait for the contingent realization (or not) of the battle, logic realizes itself afterwards:
One of the two propositions in such instances must be true and the other false, but we cannot say determinately that this or that is false, but must leave the alternative undecided. One may indeed be more likely to be true than the other, but it cannot be either actually true or actually false. It is therefore plain that it is not necessary that of an affirmation and a denial, one should be true and the other false. For in the case of that which exists potentially, but not actually, the rule which applies to that which exists actually does not hold good. (§9)
For Diodorus, the future battle was either impossible or necessary. Aristotle added a third term, contingency, which saves logic while at the same time leaving room for indeterminacy in reality. What is necessary is not that there will or that there will not be a battle tomorrow, but the dichotomy itself is necessary:
A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow. (De Interpretatione, 9, 19 a 30.)
Islamic philosophy
What exactly al-Farabi posited on the question of future contingents is contentious. Nicholas Rescher argues that al-Farabi's position is that the truth value of future contingents is already distributed in an "indefinite way", whereas Fritz Zimmerman argues that al-Farabi endorsed Aristotle's solution that the truth value of future contingents has not been distributed yet. Peter Adamson claims they are both correct as al-Farabi endorses both perspectives at different points in his writing, depending on how far he is engaging with the question of divine foreknowledge.
Al-Farabi's argument about "indefinite" truth values centers around the idea that "from premises that are contingently true, a contingently true conclusion necessarily follows". This means that even though a future contingent will occur, it may not have done so according to present contingent facts; as such, the truth value of a proposition concerning that future contingent is true, but true in a contingent way. Al-Farabi uses the following example: if we argue truly that Zayd will take a trip tomorrow, then he will, but crucially:

There is in Zayd the possibility that he stays home.... if we grant that Zayd is capable of staying home or of making the trip, then these two antithetical outcomes are equally possible.

Al-Farabi's argument deals with the dilemma of future contingents by denying that the proposition P, "it is true now that Zayd will travel tomorrow", and the proposition Q, "it is true tomorrow that Zayd travels", would lead us to conclude that necessarily if P then necessarily Q.
He denies this by arguing that "the truth of the present statement about Zayd's journey does not exclude the possibility of Zayd’s staying at home: it just excludes that this possibility will be realized".
Leibniz
Leibniz gave another response to the paradox in §6 of Discourse on Metaphysics: "That God does nothing which is not orderly, and that it is not even possible to conceive of events which are not regular." Thus, even a miracle, the Event by excellence, does not break the regular order of things. What is seen as irregular is only a default of perspective, but does not appear so in relation to universal order, and thus possibility exceeds human logics. Leibniz encounters this paradox because according to him:
Thus the quality of king, which belonged to Alexander the Great, an abstraction from the subject, is not sufficiently determined to constitute an individual, and does not contain the other qualities of the same subject, nor everything which the idea of this prince includes. God, however, seeing the individual concept, or haecceity, of Alexander, sees there at the same time the basis and the reason of all the predicates which can be truly uttered regarding him; for instance that he will conquer Darius and Porus, even to the point of knowing a priori (and not by experience) whether he died a natural death or by poison,- facts which we can learn only through history. When we carefully consider the connection of things we see also the possibility of saying that there was always in the soul of Alexander marks of all that had happened to him and evidences of all that would happen to him and traces even of everything which occurs in the universe, although God alone could recognize them all. (§8)
If everything that happens to Alexander derives from the haecceity of Alexander, then fatalism threatens Leibniz's construction:
We have said that the concept of an individual substance includes once for all everything which can ever happen to it and that in considering this concept one will be able to see everything which can truly be said concerning the individual, just as we are able to see in the nature of a circle all the properties which can be derived from it. But does it not seem that in this way the difference between contingent and necessary truths will be destroyed, that there will be no place for human liberty, and that an absolute fatality will rule as well over all our actions as over all the rest of the events of the world? To this I reply that a distinction must be made between that which is certain and that which is necessary. (§13)
Against Aristotle's separation between the subject and the predicate, Leibniz states:
"Thus the content of the subject must always include that of the predicate in such a way that if one understands perfectly the concept of the subject, he will know that the predicate appertains to it also." (§8)
The predicate (what happens to Alexander) must be completely included in the subject (Alexander) "if one understands perfectly the concept of the subject". Leibniz henceforth distinguishes two types of necessity: necessary necessity and contingent necessity, or universal necessity vs singular necessity. Universal necessity concerns universal truths, while singular necessity concerns something necessary that could not be (it is thus a "contingent necessity"). Leibniz hereby uses the concept of compossible worlds. According to Leibniz, contingent acts such as "Caesar crossing the Rubicon" or "Adam eating the apple" are necessary: that is, they are singular necessities, contingents and accidentals, but which concerns the principle of sufficient reason. Furthermore, this leads Leibniz to conceive of the subject not as a universal, but as a singular: it is true that "Caesar crosses the Rubicon", but it is true only of this Caesar at this time, not of any dictator nor of Caesar at any time (§8, 9, 13). Thus Leibniz conceives of substance as plural: there is a plurality of singular substances, which he calls monads. Leibniz hence creates a concept of the individual as such, and attributes to it events. There is a universal necessity, which is universally applicable, and a singular necessity, which applies to each singular substance, or event. There is one proper noun for each singular event: Leibniz creates a logic of singularity, which Aristotle thought impossible (he considered that there could only be knowledge of generality).
20th century
One of the early motivations for the study of many-valued logics has been precisely this issue. In the early 20th century, the Polish formal logician Jan Łukasiewicz proposed three truth-values: the true, the false and the as-yet-undetermined. This approach was later developed by Arend Heyting and L. E. J. Brouwer; see Łukasiewicz logic.
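Łukasiewicz's three values can be made concrete with a short sketch. The following Python fragment is illustrative only: the encoding of the truth values as 0, 1/2 and 1 and the min/max definitions of the connectives follow the standard presentation of the three-valued logic Ł3, not any source cited in this article.

from fractions import Fraction

FALSE, UNDET, TRUE = Fraction(0), Fraction(1, 2), Fraction(1)

def neg(x):
    return 1 - x                          # Lukasiewicz negation

def impl(x, y):
    return min(Fraction(1), 1 - x + y)    # Lukasiewicz implication

def disj(x, y):
    return max(x, y)                      # disjunction

# "There will be a sea battle tomorrow" is as yet undetermined:
p = UNDET
print(disj(p, neg(p)))   # 1/2 -- excluded middle is not a tautology here
print(impl(p, p))        # 1   -- but p -> p remains valid

On this semantics both of Aristotle's statements about tomorrow's battle can be assigned the third value today, exactly as the sea-battle discussion above requires.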
Issues such as this have also been addressed in various temporal logics, where one can assert that "Eventually, either there will be a sea battle tomorrow, or there won't be." (Which is true if "tomorrow" eventually occurs.)
The modal fallacy
By asserting "A sea-fight must either take place tomorrow or not, but it is not necessary that it should take place tomorrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place tomorrow", Aristotle is simply claiming "necessarily (a or not-a)", which is correct.
However, if we then conclude: "If a is the case, then necessarily, a is the case", then this is known as the modal fallacy.
Expressed in another way:

(i) If a is the case, then necessarily, a is the case.
(ii) Either a is the case, or not-a is the case.
(iii) Therefore, either necessarily a is the case, or necessarily not-a is the case.

That is, there are no contingent propositions. Every proposition is either necessarily true or necessarily false.

The fallacy arises in the ambiguity of the first premise. If we interpret it close to the English, we get:

a → □a

However, if we recognize that the English expression (i) is potentially misleading, that it assigns a necessity to what is simply nothing more than a necessary condition, then we get instead as our premises:

□(a → a)
a ∨ ¬a

From these latter two premises, one cannot validly infer the conclusion:

□a ∨ □¬a
See also
Logical determinism
Free will
Principle of distributivity
Principle of plenitude
Truth-value link
In Borges' The Garden of Forking Paths, both alternatives happen, thus leading to what Deleuze calls "incompossible worlds"
Notes
Further reading
Several studies attempt to reconstruct both Aristotle's and Diodorus' arguments in propositional modal logic.
Dorothea Frede (1985), "The Sea Battle Reconsidered: A defense of the traditional interpretation", Oxford Studies in Ancient Philosophy 3, 31-87.
John MacFarlane (2003), Sea Battles, Futures Contingents, and Relative Truth, The Philosophical Quarterly 53, 321-36
Jules Vuillemin, Le chapitre IX du De Interpretatione d'Aristote - Vers une réhabilitation de l'opinion comme connaissance probable des choses contingentes, in Philosophiques, vol. X, n°1, April 1983
External links
Aristotle's De Interpretatione: Semantics and Philosophy of Language with an extensive bibliography of recent studies on the Future Sea Battle
Selected Bibliography on the Master Argument, Diodorus Chronus, Philo the Dialectician with a bibliography on Diodorus and the problem of future contingents
Modal logic
Philosophical logic
Paradoxes
Future Contingents
Future
Ancient Greek logic | Problem of future contingents | [
"Physics",
"Mathematics"
] | 2,694 | [
"Physical quantities",
"Time",
"Future",
"Mathematical logic",
"Spacetime",
"Modal logic"
] |
5,454,277 | https://en.wikipedia.org/wiki/Librascope | Librascope was a Glendale, California, division of General Precision, Inc. (GPI). It was founded in 1937 by Lewis W. Imm to build and operate theater equipment, and acquired by General Precision in 1941. During World War II it worked on improving aircraft load balancing.
Later, Librascope became a manufacturer of early digital computers sold in both the business and defense markets. It hired Stan Frankel, a Manhattan Project veteran and early ENIAC programmer, to design the LGP-30 desk computer in 1956.
In 1964, Librascope's Avionic Equipment Division at San Marcos was shifted to GPI's Aerospace Group as the West Coast facility of the Kearfott Division.
Librascope was eventually purchased by Singer Corporation and moved into the manufacture of marine systems and land-based C3 (Command, Control, Communication) systems for the international defense industry. The company specialized in fire control systems for torpedoes, though it continued to work on a variety of other smaller military contracts through the 1970s.
After Singer was taken over by corporate raider Paul Bilzerian, the company was sold to Loral Space & Communications in 1992. The division was later sold to Lockheed Martin, absorbed into Lockheed Martin Federal Systems, and is now called Lockheed Martin NE&SS—Undersea Systems.
Computers
LGP-30
LGP-21
Librascope AN/ASN-24 general purpose Airborne/Aerospace Computer Set (1958), after modification used in:
Centaur guidance computer (Librascope-3)
Lockheed C-141A Starlifter and C-130E Hercules - Digital Navigation Computer (System 605A), AN/ASN-24(V)
Atlas-Centaur Navigation Computer (GPK-33)
Digital Camera-Control System - aerial-reconnaissance camera system, AN/ASN-24(XY-1)
Librascope C141 airborne navigation computer
Librascope L90-I general purpose aerospace computer (1962)
Librascope L600 aircraft and missile guidance computer
Librascope L-2010 general purpose rugged computer (1962), portable
Librascope L3055 data processor for 473L system
References
External links
Librascope Memories, over 60 years of history, including 293 Librazette newsletters, photos, product literature, and company videos.
Air Force 473L global communications system
1937 establishments in California
1992 disestablishments in California
American companies disestablished in 1992
American companies established in 1937
Companies based in Glendale, California
Computer companies established in 1937
Computer companies disestablished in 1992
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct computer systems companies
Technology companies established in 1937
Technology companies disestablished in 1992 | Librascope | [
"Technology"
] | 557 | [
"Computing stubs",
"Computer company stubs"
] |
5,454,605 | https://en.wikipedia.org/wiki/Gordon%20Bell%20Prize | The Gordon Bell Prize is an award presented by the Association for Computing Machinery each year in conjunction with the SC Conference series (formerly known as the Supercomputing Conference). The prize recognizes outstanding achievement in high-performance computing applications. The main purpose is to track the progress over time of parallel computing, by acknowledging and rewarding innovation in applying high-performance computing to applications in science, engineering, and large-scale data analytics. The prize was established in 1987. A cash award of $10,000 (since 2011) accompanies the recognition, funded by Gordon Bell, a pioneer in high-performance and parallel computing.
The Prizes were preceded by a nominal prize ($100) established by Alan Karp, a numerical analyst (then of IBM) who challenged claims of MIMD performance improvements proposed in the Letters to the Editor section of the Communications of the ACM. Karp went on to be one of the first Gordon Bell Prize judges.
Individuals or teams may apply for the award by submitting a technical paper describing their work through the SC conference submissions process. Finalists present their work at that year's conference, and their submissions are included in the conference proceedings.
Prize criteria
The ACM Gordon Bell Prize is primarily intended to recognize performance achievements that demonstrate:
evidence of important algorithmic and/or implementation innovations
clear improvement over the previous state-of-the-art
solutions that don’t depend on one-of-a-kind architectures (systems that can only be used to address a narrow range of problems, or that can’t be replicated by others)
performance measurements that have been characterized in terms of scalability (strong as well as weak scaling), time to solution, efficiency (in using bottleneck resources, such as memory size or bandwidth, communications bandwidth, I/O), and/or peak performance
achievements that are generalizable, in the sense that other people can learn and benefit from the innovations
In earlier years, multiple prizes were sometimes awarded to reflect different types of achievements. According to current policies, the Prize can be awarded in one or more of the following categories, depending on the entries received in a given year:
Peak Performance: If the entry demonstrates outstanding performance in terms of floating point operations per second on an important science/engineering problem; the efficiency of the application in using bottleneck resources (such as memory size or bandwidth) is also taken into consideration.
Special Achievement in Scalability, Special Achievement in Time to Solution: If the entry demonstrates exceptional scalability, in terms of both strong and weak scaling, and/or total time to solve an important science/engineering problem; a small worked example of these scaling measures is sketched below.
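As an illustration of how such measurements are computed in practice, here is a minimal sketch in Python; the timings are hypothetical and not drawn from any actual submission.

def strong_scaling(t1, tp, p):
    """Fixed total problem size: speedup and parallel efficiency on p processors."""
    speedup = t1 / tp
    return speedup, speedup / p

def weak_scaling(t1, tp):
    """Problem size grown in proportion to p: efficiency is t1 / tp."""
    return t1 / tp

# Hypothetical run: 100 s on 1 processor, 1.8 s on 64 processors.
s, e = strong_scaling(100.0, 1.8, 64)
print(f"strong scaling: {s:.1f}x speedup, {e:.0%} efficiency")   # ~55.6x, ~87%

# Weak scaling: per-processor work held constant, 100 s on 1 vs 112 s on 64.
print(f"weak scaling efficiency: {weak_scaling(100.0, 112.0):.0%}")   # ~89%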
See also
List of computer science awards
References
External links
ACM Gordon Bell Prize Winners 2006-present
Gordon Bell Prize News - 2013-2022
ACM Gordon Bell Prize 1987–2015
Gordon Bell Prize description from SC13
ACM Gordon Bell Prize Winners 2006-2015
Earlier Prize Winners 1987–1999
Prize Winners 1987-2015
Gordon Bell Prize official page on ACM Website
The SC (formerly "Supercomputing") Conference Series
Awards of the Association for Computing Machinery
Computer science awards
Awards established in 1987 | Gordon Bell Prize | [
"Technology"
] | 631 | [
"Science and technology awards",
"Computer science",
"Computer science awards"
] |
5,455,183 | https://en.wikipedia.org/wiki/Custom%20Integrated%20Circuits%20Conference | IEEE Custom Integrated Circuits Conference (CICC) is an international conference devoted to IC development, showcasing original, first published technical work and circuit techniques that tackle practical problems. CICC is a forum for circuit, IC, and SoC designers, CAD developers, manufacturers and ASIC users to present and discuss new developments, future trends, innovative ideas and recent advancements. CICC is sponsored by the IEEE Solid-State Circuits Society and technically sponsored by the IEEE Electron Devices Society. The conference is held annually in either April or September in locations across the US.
See also
International Electron Devices Meeting (IEDM)
International Solid-State Circuits Conference (ISSCC)
Symposia on VLSI Technology and Circuits
References
External links
CICC proceedings
IEEE conferences
International conferences in the United States | Custom Integrated Circuits Conference | [
"Technology"
] | 157 | [
"Computing stubs",
"Computer hardware stubs"
] |
5,455,191 | https://en.wikipedia.org/wiki/...First%20Do%20No%20Harm | ...First Do No Harm is a 1997 American drama television film produced and directed by Jim Abrahams, written by Ann Beckett, and starring Meryl Streep, Fred Ward, and Seth Adkins. It is about a boy whose severe epilepsy, unresponsive to medications with terrible side effects, is controlled by the ketogenic diet. Aspects of the story mirror Abrahams' own experience with his son Charlie.
The film aired on ABC on February 16, 1997. Streep's performance was nominated for a Primetime Emmy Award for Outstanding Lead Actress in a Limited or Anthology Series or Movie, a Golden Globe Award for Best Actress – Miniseries or Television Film, and a Satellite Award for Best Actress – Miniseries or TV Film. Beckett was nominated for the Humanitas Prize (90 minute category). Adkins won a Young Artist Award for his performance.
Plot
The film tells a story in the life of a Midwestern family, the Reimullers. Lori is the mother of three children, and the wife of Dave, a truck driver. The family is presented as happy, normal, and comfortable financially: they have just bought a horse and are planning a holiday to Hawaii. Then the youngest son, Robbie, has a sudden unexplained fall at school. A short while later, he has another unprovoked fall while playing with his brother, and is seen having a convulsive seizure. Robbie is taken to the hospital where several procedures are performed: a CT scan, a lumbar puncture, an electroencephalogram (EEG) and blood tests. No cause is found but the two falls are regarded as epileptic seizures and the child is diagnosed with epilepsy.
Robbie is started on phenobarbital, an old anticonvulsant drug with well-known side effects, including cognitive impairment and behavior problems. The latter cause the child to run berserk through the house, leading to injury. Lori urgently phones the physician to request a change of medication. It is changed to phenytoin (Dilantin), but the dose of phenobarbital must be tapered slowly, causing frustration. Later, the drug carbamazepine (Tegretol) is added.
Meanwhile, the Reimullers discover that their health insurance is invalid and their treatment is transferred from private to county hospital. In an attempt to pay the medical bills, Dave takes on more dangerous truckloads and works long hours. Family tensions reach a head when the children realize the holiday is not going to happen and a foreclosure notice is posted on the house.
Robbie's epilepsy gets worse, and he develops a serious rash known as Stevens–Johnson syndrome as a side effect of the medication. He is admitted to the hospital where his padded cot is designed to prevent him escaping. The parents fear he may become a "vegetable" and are losing hope. At one point, Robbie goes into status epilepticus (a continuous convulsive seizure that must be stopped as a medical emergency). Increasing doses of diazepam (Valium) are given intravenously to no effect. Eventually, paraldehyde is given rectally. This drug is described as having possibly fatal side effects and is seen dramatically melting a plastic cup (a glass syringe is required).
The neurologist in charge of Robbie's care, Dr. Melanie Abbasac, has a poor bedside manner and paints a bleak picture. Abbasac wants the Reimullers to consider surgery and start the necessary investigative procedures to see if this is an option. These involve removing the top of the skull and inserting electrodes on the surface of the brain to achieve a more accurate location of any seizure focus than normal scalp EEG electrodes. The Reimullers see surgery as a dangerous last resort and want to know if anything else can be done.
Lori begins to research epilepsy at the library. After many hours, she comes across the ketogenic diet in a well-regarded textbook on epilepsy. However, their doctor dismisses the diet as having only anecdotal evidence of its effectiveness. After initially refusing to consider the diet, she appears to relent but sets impossible hurdles in the way: the Reimullers must find a way to transport their son to Johns Hopkins Hospital in Baltimore, Maryland with continual medical support—something they cannot afford.
That evening, Lori attempts to take her son out of the hospital and, despite the risk, fly with him to an appointment she has made with a doctor at Johns Hopkins. However, she is stopped by hospital security at the exit to the hospital. A sympathetic nurse warns Lori that she could lose custody of her son if a court decides she is putting his health at risk.
Dave makes contact with an old family friend who once practiced as a physician and is still licensed. This doctor and the sympathetic nurse agree to accompany Lori and Robbie on the trip to Baltimore. During the flight, Robbie has a prolonged convulsive seizure, which causes some concern to the pilot and crew.
When they arrive at Johns Hopkins, it becomes apparent that Lori has deceived her friends as her appointment (for the previous week) was not rescheduled and there are no places on the ketogenic diet program. After much pleading, Dr. Freeman agrees to take Robbie on as an outpatient. Lori and Robbie stay at a convent in Baltimore.
The diet is briefly explained by Millicent Kelly, a dietitian who has helped run the ketogenic diet program since the 1940s. Robbie's seizures begin to improve during the initial fast that is used to kick-start the diet. Despite the very high-fat nature of the diet, Robbie accepts the food and rapidly improves. His seizures are eliminated and his mental faculties are restored. The film ends with Robbie riding the family horse at a parade through town. Closing credits claim Robbie continued the diet for a couple of years and has remained seizure- and drug-free ever since.
Cast
Meryl Streep as Lori Reimuller
Fred Ward as Dave Reimuller
Seth Adkins as Robbie Reimuller
Allison Janney as Dr. Melanie Abbasac
Margo Martindale as Marjean
Leo Burmester as Bob Purdue
Tom Butler as Dr. Jim Peterson
Mairon Bennett as Lynne Reimuller
Michael Yarmush as Mark Reimuller
Millicent Kelly as herself
See also
Never event
Primum non nocere
References
Further reading
External links
1997 television films
1997 films
1997 drama films
1990s American films
1990s English-language films
American Broadcasting Company original films
American drama television films
American films based on actual events
Drama films based on actual events
English-language drama films
Films about families
Films directed by Jim Abrahams
Films scored by Hummie Mann
Films set in Baltimore
Johns Hopkins Hospital in fiction
Low-carbohydrate diets
Medical-themed films
Television films based on actual events
Works about epilepsy | ...First Do No Harm | [
"Chemistry"
] | 1,426 | [
"Carbohydrates",
"Low-carbohydrate diets"
] |
5,455,427 | https://en.wikipedia.org/wiki/Axilrod%E2%80%93Teller%20potential | The Axilrod–Teller potential in molecular physics, is a three-body potential that results from a third-order perturbation correction to the attractive London dispersion interactions (instantaneous induced dipole-induced dipole)
where is the distance between atoms and , and is the angle between the vectors
and . The coefficient is positive and of the order , where is the ionization energy and is the mean atomic polarizability; the exact value of depends on the magnitudes of the dipole matrix elements and on the energies of the orbitals.
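A minimal numerical sketch of the formula above, in Python with NumPy; the function and variable names are ad hoc, and the default coefficient of 1.0 is a placeholder rather than a physical value for any species.

import numpy as np

def axilrod_teller(r1, r2, r3, e0=1.0):
    """Three-body dispersion energy for atoms at Cartesian positions r1, r2, r3."""
    d12 = np.linalg.norm(r2 - r1)
    d23 = np.linalg.norm(r3 - r2)
    d31 = np.linalg.norm(r1 - r3)
    # Cosines of the interior angles at atoms 1, 2 and 3.
    c1 = np.dot(r2 - r1, r3 - r1) / (d12 * d31)
    c2 = np.dot(r1 - r2, r3 - r2) / (d12 * d23)
    c3 = np.dot(r1 - r3, r2 - r3) / (d31 * d23)
    return e0 * (1.0 + 3.0 * c1 * c2 * c3) / (d12 * d23 * d31) ** 3

# Equilateral triangle with unit sides: each interior angle is 60 degrees,
# so the energy is e0 * (1 + 3/8) = 1.375 -- repulsive for this geometry.
pts = [np.array(p) for p in ([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 3 ** 0.5 / 2, 0.0])]
print(axilrod_teller(*pts))   # ~1.375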
References
Chemical bonding
Quantum mechanical potentials | Axilrod–Teller potential | [
"Physics",
"Chemistry",
"Materials_science"
] | 121 | [
"Materials science stubs",
"Quantum mechanics",
"Quantum mechanical potentials",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Electromagnetism stubs",
"Quantum physics stubs"
] |
5,455,478 | https://en.wikipedia.org/wiki/Till%20roll | Till rolls are paper rolls for use in cash registers and Electronic Point of Sale printers. There are a number of different types available, including:
Thermal: One side of the paper has a special coating that is heat sensitive.
2 & 3 ply: These rolls require an impact printer and produce multiple copies.
Retailing equipment and supplies
Retail point of sale systems | Till roll | [
"Technology"
] | 122 | [
"Retail point of sale systems",
"Information systems"
] |
5,455,816 | https://en.wikipedia.org/wiki/Common%20Power%20Format | The Si2 Common Power Format, or CPF is a file format for specifying power-saving techniques early in the design process. In the design of integrated circuits, saving power is a primary goal, and designers are forced to use sophisticated techniques such as clock gating, multi-voltage logic, and turning off the power entirely to inactive blocks. These techniques require a consistent implementation in the design steps of logic design, implementation, and verification. For example, if multiple different power supplies are used, then logic synthesis must insert level shifters, place and route must deal with them correctly, and other tools such as static timing analysis and formal verification must understand these components. As power became an increasingly pressing concern, each tool independently added the features needed. Although this made it possible to build low power flows, it was difficult and error prone since the same information needed to be specified several times, in several formats, to many different tools. CPF was created as a common format that many tools can use to specify power-specific data, so that power intent only need be entered once and can be used consistently by all tools. The aim of CPF is to support an automated, power-aware design infrastructure.
Associated with CPF is the Power Forward Initiative (PFI), a group of companies that collaborate to drive low-power design methodology and have contributed to the development of the CPF v1.0 specification. PFI membership spans EDA, IP, library, foundry fabs, ASIC, IDM, and equipment companies. In March 2007, CPF v1.0 was contributed to the Silicon Integration Initiative (Si2) where it was ratified by Si2’s Low Power Coalition (LPC) as a Si2 standard. The LPC controls the ongoing evolution of the CPF v1.0 standard.
Contents
Constructs expressing power domains and their power supplies (an illustrative CPF snippet follows this list):
Logical design: hierarchical modules can be specified as belonging to specific power supply domains
Physical design: explicit power/ground nets and connectivity can be specified per cell or block.
Analysis: different timing library data for cases where the same cell is used in different power domains
Power control logic
Specification of level shifter logic - special cells needed when signals traverse between blocks of different supply voltage.
Specification of isolation logic - what special logic is needed for signals that traverse between blocks that can be powered up and down independently.
Specification of state-retention logic - when blocks are switched off entirely, how is the state retained?
Specification of switch logic and control signals - how are blocks switched on and off?
Definition and verification of power modes (standby, sleep, etc.)
Mode definitions
Mode transition expressions
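As a hedged illustration of what such power intent looks like in practice, a minimal CPF file might read roughly as follows. The command names (set_design, create_power_domain, create_isolation_rule, create_state_retention_rule, create_power_mode, end_design) come from the CPF 1.0 command set, but the specific options, instance names (chip_top, u_cpu, pmu_inst) and condition expressions are invented for this sketch, not taken from the specification.

# Illustrative CPF 1.0-style power intent; details are indicative only.
set_design chip_top

# An always-on default domain plus a switchable CPU domain.
create_power_domain -name PD_AO -default
create_power_domain -name PD_CPU -instances {u_cpu} \
    -shutoff_condition {!pmu_inst/cpu_pwr_on}

# Clamp signals leaving the switchable domain while it is powered down.
create_isolation_rule -name iso_cpu -from PD_CPU \
    -isolation_condition {pmu_inst/iso_en} -isolation_output low

# Retain CPU state across power-down so execution can resume.
create_state_retention_rule -name ret_cpu -domain PD_CPU \
    -restore_edge {pmu_inst/restore}

# Named chip-level power modes built from per-domain conditions.
create_power_mode -name PM_run -domain_conditions {PD_AO@on PD_CPU@on}
create_power_mode -name PM_sleep -domain_conditions {PD_AO@on PD_CPU@off}

end_design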
History and controversy
Cadence Design Systems designed the early versions of CPF, then contributed it to Si2. This was followed shortly by an alternative effort, the Unified Power Format or UPF, proposed as an IEEE standard as opposed to an Si2 standard. UPF has been driven mainly by Synopsys, Mentor Graphics and Magma. The technical differences between the two formats are relatively minor, but the political considerations are harder to overcome. Not surprisingly, the Cadence Low-Power Solution supported Si2's CPF very early on, as well as UPF as it emerged, whereas the Synopsys and Mentor Graphics offerings all support UPF. Magma supports both CPF and UPF.
An attempt at convergence is taking place in the Low Power Coalition at Si2.
References
External links
Download CPF specification.
Computer file formats
Power standards | Common Power Format | [
"Engineering"
] | 705 | [
"Electrical engineering",
"Power standards"
] |
5,455,932 | https://en.wikipedia.org/wiki/Thorium%28IV%29%20carbide | Thorium(IV) carbide (ThC) is an inorganic thorium compound and a carbide.
References
Carbides
Thorium(IV) compounds
Rock salt crystal structure | Thorium(IV) carbide | [
"Chemistry"
] | 40 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,455,963 | https://en.wikipedia.org/wiki/Thorium%28IV%29%20chloride | Thorium(IV) chloride describes a family of inorganic compounds with the formula ThCl4(H2O)n. Both the anhydrous and tetrahydrate (n = 4) forms are known. They are hygroscopic, water-soluble white salts.
Structures
The structure of thorium(IV) chloride features 8-coordinate Th centers with doubly bridging chloride ligands.
Synthesis
ThCl4 was an intermediate in the original isolation of thorium metal by Jöns Jacob Berzelius.
Thorium(IV) chloride can be produced in a variety of ways. One method is a carbothermic chlorination, carried out between 700 °C and 2600 °C, in which thorium oxide and carbon react in a stream of chlorine gas:
ThO2 + 2C + 2Cl2 → ThCl4 + 2CO
The chlorination reaction can be effected with carbon tetrachloride:
Th(C2O4)2 + CCl4 → ThCl4 + 2CO + 3CO2
In another two-step method, thorium metal reacts with ammonium chloride:
Th + 6NH4Cl → (NH4)2ThCl6 + 4NH3 + 2H2
The hexachloride salt is then heated at 350 °C under a high vacuum to produce ThCl4.
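The stoichiometry of the equations above can be checked mechanically by counting atoms on each side. A small sketch in Python; the parser is deliberately minimal and handles only simple formulas with at most one level of parentheses, which is all these equations require.

import re
from collections import Counter

def parse(formula, mult=1):
    """Count atoms in a formula such as '(NH4)2ThCl6'."""
    counts = Counter()
    for group, gnum, elem, enum in re.findall(
            r'\((\w+)\)(\d*)|([A-Z][a-z]?)(\d*)', formula):
        if group:
            counts += parse(group, mult * int(gnum or 1))
        else:
            counts[elem] += mult * int(enum or 1)
    return counts

def side(terms):
    """Total atom counts for a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in parse(formula).items():
            total[element] += coeff * n
    return total

# Th + 6 NH4Cl -> (NH4)2ThCl6 + 4 NH3 + 2 H2
lhs = side([(1, 'Th'), (6, 'NH4Cl')])
rhs = side([(1, '(NH4)2ThCl6'), (4, 'NH3'), (2, 'H2')])
print(lhs == rhs)   # True: the equation balances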
Reactions
Lewis base adducts
ThCl4 reacts with Lewis bases to give molecular adducts, such as ThCl4(DME)2 and ThCl4(TMEDA)2.
Reduction to Th metal
Thorium(IV) chloride is an intermediate in the purification of thorium, which can be effected by:
Reduction of ThCl4 with alkali metals.
Electrolysis of anhydrous thorium(IV) chloride in fused mixture of NaCl and KCl.
Ca reduction of a mixture of ThCl4 with anhydrous zinc chloride.
References
Chlorides
Actinide halides
Thorium(IV) compounds | Thorium(IV) chloride | [
"Chemistry"
] | 409 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
5,456,001 | https://en.wikipedia.org/wiki/Thorium%28IV%29%20fluoride | Thorium(IV) fluoride (ThF4) is an inorganic chemical compound. It is a white hygroscopic powder which can be produced by reacting thorium with fluorine gas. At temperatures above 500 °C, it reacts with atmospheric moisture to produce ThOF2.
Uses
Despite its (mild) radioactivity, thorium fluoride is used as an antireflection material in multilayered optical coatings. It has excellent optical transparency in the range 0.35–12 μm, and its radiation consists primarily of alpha particles, which can be easily stopped by a thin cover layer of another material. However, like all alpha emitters, thorium is chiefly hazardous if taken into the body, so handling precautions focus on preventing ingestion or inhalation. In addition to its radioactivity, thorium is also a chemically toxic heavy metal.
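The thickness of such coating layers follows from the quarter-wave condition d = λ/(4n). A short sketch in Python; the refractive index of roughly 1.5 used here is a representative assumption, since the actual value for ThF4 varies with wavelength and deposition conditions.

def quarter_wave_thickness(wavelength_nm, n):
    """Physical thickness of a quarter-wave optical layer: d = wavelength / (4 n)."""
    return wavelength_nm / (4.0 * n)

# Design wavelength 10600 nm (10.6 um, inside the 0.35-12 um transparency
# window quoted above), assumed refractive index ~1.5:
print(quarter_wave_thickness(10600.0, 1.5))   # ~1766.7 nm, about 1.8 um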
Thorium fluoride was used in making carbon arc lamps, which provided high-intensity illumination for movie projectors and search lights.
See also
Liquid fluoride thorium reactor
References
Fluorides
Actinide halides
Thorium(IV) compounds | Thorium(IV) fluoride | [
"Chemistry"
] | 232 | [
"Fluorides",
"Salts"
] |
5,456,056 | https://en.wikipedia.org/wiki/Thorium%28IV%29%20iodide | Thorium(IV) iodide (ThI4) is an inorganic chemical compound composed of thorium and iodine. It is one of three known thorium iodides, the others being ThI3 and ThI2.
References
Iodides
Actinide halides
Thorium(IV) compounds | Thorium(IV) iodide | [
"Chemistry"
] | 66 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,456,153 | https://en.wikipedia.org/wiki/Thorium%28IV%29%20orthosilicate | Thorium(IV) orthosilicate (ThSiO4) is an inorganic chemical compound. Thorite is a mineral that consists essentially of thorium othosilicate.
References
Silicates
Thorium(IV) compounds | Thorium(IV) orthosilicate | [
"Chemistry"
] | 49 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,456,164 | https://en.wikipedia.org/wiki/Whitney%20extension%20theorem | In mathematics, in particular in mathematical analysis, the Whitney extension theorem is a partial converse to Taylor's theorem. Roughly speaking, the theorem asserts that if A is a closed subset of a Euclidean space, then it is possible to extend a given function of A in such a way as to have prescribed derivatives at the points of A. It is a result of Hassler Whitney.
Statement
A precise statement of the theorem requires careful consideration of what it means to prescribe the derivative of a function on a closed set. One difficulty, for instance, is that closed subsets of Euclidean space in general lack a differentiable structure. The starting point, then, is an examination of the statement of Taylor's theorem.
Given a real-valued C^m function f(x) on R^n, Taylor's theorem asserts that for each a, x, y ∈ R^n, there is a function R_α(x, y) approaching 0 uniformly as x, y → a such that

f(x) = \sum_{|\alpha| \le m} \frac{D^{\alpha} f(y)}{\alpha!} (x - y)^{\alpha} + \sum_{|\alpha| = m} R_{\alpha}(x, y) \frac{(x - y)^{\alpha}}{\alpha!} \qquad (1)

where the sum is over multi-indices α.

Let f_α = D^α f for each multi-index α. Differentiating (1) with respect to x, and possibly replacing R as needed, yields

f_{\alpha}(x) = \sum_{|\beta| \le m - |\alpha|} \frac{f_{\alpha + \beta}(y)}{\beta!} (x - y)^{\beta} + R_{\alpha}(x, y) \qquad (2)

where R_α is o(|x − y|^{m−|α|}) uniformly as x, y → a.

Note that (2) may be regarded as purely a compatibility condition between the functions f_α which must be satisfied in order for these functions to be the coefficients of the Taylor series of the function f. It is this insight which facilitates the following statement:

Theorem. Suppose that f_α are a collection of functions on a closed subset A of R^n for all multi-indices α with |α| ≤ m satisfying the compatibility condition (2) at all points x, y, and a of A. Then there exists a function F(x) of class C^m such that:
F = f0 on A.
DαF = fα on A.
F is real-analytic at every point of Rn − A.
Proofs are given in Whitney's original paper and in a number of later treatments.
Extension in a half space
Seeley (1964) proved a sharpening of the Whitney extension theorem in the special case of a half space. A smooth function on a half space R^n,+ of points where x_n ≥ 0 is a smooth function f on the interior x_n > 0 for which the derivatives ∂^α f extend to continuous functions on the half space. On the boundary x_n = 0, f restricts to a smooth function. By Borel's lemma, f can be extended to a smooth function on the whole of R^n. Since Borel's lemma is local in nature, the same argument shows that if Ω is a (bounded or unbounded) domain in R^n with smooth boundary, then any smooth function on the closure of Ω can be extended to a smooth function on R^n.
Seeley's result for a half line gives a uniform extension map
which is linear, continuous (for the topology of uniform convergence of functions and their derivatives on compacta) and takes functions supported in [0,R] into functions supported in [−R,R]
To define set
where φ is a smooth function of compact support on R equal to 1 near 0 and the sequences (am), (bm) satisfy:
tends to ;
for with the sum absolutely convergent.
A solution to this system of equations can be obtained by taking and seeking an entire function
such that That such a function can be constructed follows from the Weierstrass theorem and Mittag-Leffler theorem.
It can be seen directly by setting

W(z) = Πj≥1 (1 − z/2j),

an entire function with simple zeros at 2j. The derivatives W′(2j) are bounded above and below. Similarly the function

M(z) = Σj≥1 (−1)j / (W′(2j)(z − 2j)),

meromorphic with simple poles and prescribed residues at 2j, can be constructed. By construction

g(z) = W(z)·M(z)

is an entire function with the required properties.
The extension map for a half space in Rn is obtained by applying the operator R in the last variable xn. Similarly, using a smooth partition of unity and a local change of variables, the result for a half space implies the existence of an analogous extension map

C∞(Ω̄) → C∞(Rn)

for any domain Ω in Rn with smooth boundary.
See also
The Kirszbraun theorem gives extensions of Lipschitz functions.
Notes
References
Theorems in analysis | Whitney extension theorem | [
"Mathematics"
] | 844 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical problems",
"Mathematical theorems"
] |
5,456,186 | https://en.wikipedia.org/wiki/Thorium%28IV%29%20sulfide | Thorium(IV) sulfide (ThS2) is an inorganic chemical compound composed of one thorium atom ionically bonded to two atoms of sulfur. This salt is dark brown and has a melting point of 1905 °C. ThS2 adopts the same orthorhombic lattice structure as PbCl2.
References
Sulfides
Thorium(IV) compounds
Dichalcogenides | Thorium(IV) sulfide | [
"Chemistry"
] | 82 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,456,269 | https://en.wikipedia.org/wiki/Titanium%28II%29%20oxide | Titanium(II) oxide (TiO) is an inorganic chemical compound of titanium and oxygen. It can be prepared from titanium dioxide and titanium metal at 1500 °C. It is non-stoichiometric over the range TiO0.7 to TiO1.3; this is caused by vacancies of either Ti or O in the defect rock salt structure. In pure TiO, 15% of both Ti and O sites are vacant, as the vacancies allow metal-metal bonding between adjacent Ti centres. Careful annealing can cause ordering of the vacancies, producing a monoclinic form which has 5 TiO units in the primitive cell and exhibits lower resistivity. A high temperature form with titanium atoms in trigonal prismatic coordination is also known. Acid solutions of TiO are stable for a short time, then decompose to give hydrogen:
2 Ti2+(aq) + 2 H+(aq) → 2 Ti3+(aq) + H2(g)
Gas-phase TiO shows strong bands in the optical spectra of cool (M-type) stars. In 2017, TiO was claimed to be detected in an exoplanet atmosphere for the first time, a result which is still debated in the literature. Additionally, evidence has been obtained for the presence of the diatomic molecule TiO in the interstellar medium.
References
Titanium(II) compounds
Non-stoichiometric compounds
Transition metal oxides
Rock salt crystal structure | Titanium(II) oxide | [
"Chemistry"
] | 305 | [
"Non-stoichiometric compounds"
] |
5,456,301 | https://en.wikipedia.org/wiki/Titanium%28II%29%20sulfide | Titanium(II) sulfide (TiS) is an inorganic chemical compound of titanium and sulfur.
The meteorite Yamato 691 contains tiny flecks of this compound, which has been recognized as a new mineral named wassonite.
References
Monosulfides
Titanium(II) compounds
Nickel arsenide structure type | Titanium(II) sulfide | [
"Chemistry"
] | 66 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,456,343 | https://en.wikipedia.org/wiki/Titanium%20tetrafluoride | Titanium(IV) fluoride is the inorganic compound with the formula TiF4. It is a white hygroscopic solid. In contrast to the other tetrahalides of titanium, it adopts a polymeric structure. In common with the other tetrahalides, TiF4 is a strong Lewis acid.
Preparation and structure
The traditional method involves treatment of titanium tetrachloride with excess hydrogen fluoride:
TiCl4 + 4 HF → TiF4 + 4 HCl
Purification is by sublimation, which involves reversible cracking of the polymeric structure.
X-ray crystallography reveals that the Ti centres are octahedral, but conjoined in an unusual columnar structure.
Reactions
TiF4 forms adducts with many ligands. One example is the complex cis-TiF4(CH3CN)2, which is formed by treatment with acetonitrile. It is also used as a reagent in the preparation of organofluorine compounds. With fluoride, the cluster [Ti4F18]2- forms. It has an adamantane-like Ti4F6 core.
Related to its Lewis acidity, TiF4 forms a variety of hexafluoride derivatives, also called hexafluorotitanates. Hexafluorotitanic acid has been used commercially to clean metal surfaces. These salts are stable at pH < 4 in the presence of hydrogen fluoride; otherwise they hydrolyze to give oxides.
References
Fluorides
Titanium halides
Titanium(IV) compounds
Adamantane-like molecules | Titanium tetrafluoride | [
"Chemistry"
] | 338 | [
"Fluorides",
"Salts"
] |
5,456,458 | https://en.wikipedia.org/wiki/Titanium%28III%29%20phosphide | Titanium(III) phosphide (TiP) is an inorganic chemical compound of titanium and phosphorus. Normally encountered as a grey powder, it is a metallic conductor with a high melting point. It is not attacked by common acids or water. Its physical properties stand in contrast to the group 1 and group 2 phosphides that contain the P3− anion (such as Na3P), which are not metallic and are readily hydrolysed. Titanium phosphide is classified as a "metal-rich phosphide", where extra valence electrons from the metal are delocalised.
Titanium phosphide can be prepared by the reaction of TiCl4 and PH3.
There are other titanium phosphide phases, including Ti3P, Ti2P, Ti7P4, Ti5P3, and Ti4P3.
Titanium phosphide should not be confused with titanium phosphate or titanium isopropoxide, both of which are sometimes known by the acronym TIP.
References
Phosphides
Titanium(III) compounds | Titanium(III) phosphide | [
"Chemistry"
] | 222 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
5,456,701 | https://en.wikipedia.org/wiki/Technoculture | Technoculture is a neologism that is not in standard dictionaries but that has some popularity in academia, popularized by editors Constance Penley and Andrew Ross in a book of essays bearing that title. It refers to the interactions between, and politics of, technology and culture.
Programs of study
"Technoculture" is used by a number of universities to describe subject areas or courses of study. UC Davis, for instance, has a program of technocultural studies. In 2012, the major merged with Film Studies to form Cinema and Techno-Cultural Studies (CaTS), but in 2013 is being reviewed to become Cinema and Technoculture (see below); the University of Western Ontario offers a degree in Media, Information and Technoculture (which they refer to as MIT, offering an "MIT BA"). UC Riverside is in the process of creating a program in technocultural studies beginning with the creation of a graduate certificate program in "Science Fiction and Technoculture Studies."
According to its description, the Georgetown University course English/CCT 691 titled Technoculture from Frankenstein to Cyberpunk, covers the "social reception and representation of technology in literature and popular culture from the Romantic era to the present" and includes "all media, including film, TV, and recent video animation and Web 'zines." The course focuses "mainly on American culture and the way in which machines, computers, and the body have been imagined."
The UC Davis Technocultural Studies department focuses on "transdisciplinary approaches to artistic, cultural and scholarly production in contemporary media and digital arts, community media, and mutual concerns of the arts with the scientific and technological disciplines. In contrast to programs which see technology as the primary driving force, we place questions of poetics, aesthetics, history, politics and the environment at the core of our mission. In other words, we emphasize the 'culture' in Technoculture."
The Technocultural Studies major program is an interdisciplinary integration of current research in cultural history and theory with innovative hands-on production in digital media and "low-tech". It focuses on the fine and performing arts, media arts, community media, literature and cultural studies as they relate to technology and science. Backed by critical perspectives and the latest forms of research and production skills, students enjoy the mobility to explore individual research and expression, project-based collaboration and community engagement.
Technocultural Studies is a fairly new major at UC Davis and is considered a division of Humanities, Arts and Cultural Studies.
Film Studies and Technocultural Studies majors at UC Davis have merged into Cinema and Technoculture. The new major is going through the review process; students who have already declared will be grandfathered into the existing programs to complete their majors, with the option of switching to the new major if they choose.
Journals
Technoculture: An Online Journal of Technology in Society is an independent, interdisciplinary, annual peer-reviewed journal that publishes critical and creative works exploring the ways in which technology impacts society. It uses a broad definition of technology. Founded by Keith Dorwick and Kevin Moberly, it is now edited by Keith Dorwick. Technoculture is a member of the Council of Editors of Learned Journals and is indexed by EBSCOhost and the Modern Language Association.
People
Marshall McLuhan is best known for his concept of the "global village". In his book Understanding Media he discusses how media affect society and culture. He also develops a theory of technology as an extension of the body. According to McLuhan, the alphabet gave rise to the idea that sight is more important than hearing, because in order to communicate one had to be able to see and understand the alphabet.
In her book Technoculture: The Key Concepts, Debra Benita Shaw "outlines the place of science and technology in today's culture" and "explores the power of scientific ideas, their impact on how we understand the natural world and how successive technological developments have influenced our attitudes to work, art, space, language and the human body."
Clay Shirky writes, teaches, and consults on the social and economic effects of the internet, and especially on places where our social and technological networks overlap. He is on the faculty of NYU's Interactive Telecommunications Program, and has consulted for Nokia, Procter and Gamble, News Corp., the BBC, the United States Navy and Lego. He is also a regular speaker at technology conferences.
In his book "The Work of Art in the Age of Mechanical Reproduction" Walter Benjamin attempts to analyze the changed experience of art in modern society. He believes that a reproduction of art lacks presence in time and space and therefore has no aura. Original works of art do have an aura. An aura includes authority, its place in space and time (when it was made), how the piece's physical condition suffered and how it's changed owners over time. An original work of art derives its authenticity from history and what has happened to it over time.
See also
Cyberculture
References
Technology in society
Philosophy of technology | Technoculture | [
"Technology"
] | 1,055 | [
"Philosophy of technology",
"Science and technology studies"
] |
5,456,797 | https://en.wikipedia.org/wiki/Method%20of%20moments%20%28probability%20theory%29 | In probability theory, the method of moments is a way of proving convergence in distribution by proving convergence of a sequence of moment sequences. Suppose X is a random variable and that all of the moments

E(Xk), k = 1, 2, 3, …

exist. Further suppose the probability distribution of X is completely determined by its moments, i.e., there is no other probability distribution with the same sequence of moments (cf. the problem of moments). If

E(Xnk) → E(Xk) as n → ∞

for all values of k, then the sequence {Xn} converges to X in distribution.
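As an informal numerical illustration (added here, not part of the original article), the sketch below estimates the first four moments of the normalized sum of uniform random variables and compares them with the standard normal moments 0, 1, 0, 3, which is the method-of-moments route to the central limit theorem. The class name, seed, and sample sizes are arbitrary choices for the demonstration.

```java
import java.util.Random;

// Monte Carlo check that the moments of Z = (S_n - n*mu) / (sigma*sqrt(n))
// approach the standard normal moments 0, 1, 0, 3 as n grows.
public class MomentsDemo {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1000;          // number of uniforms per sum
        int trials = 100_000;  // Monte Carlo sample size
        double mu = 0.5, sigma = Math.sqrt(1.0 / 12.0); // mean/sd of U(0,1)

        double[] moments = new double[5];
        for (int t = 0; t < trials; t++) {
            double sum = 0;
            for (int i = 0; i < n; i++) sum += rng.nextDouble();
            double z = (sum - n * mu) / (sigma * Math.sqrt(n));
            double p = 1;
            for (int k = 1; k <= 4; k++) { p *= z; moments[k] += p; }
        }
        for (int k = 1; k <= 4; k++)
            System.out.printf("E[Z^%d] ~ %.3f%n", k, moments[k] / trials);
        // Output should be close to 0, 1, 0, 3.
    }
}
```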
The method of moments was introduced by Pafnuty Chebyshev for proving the central limit theorem; Chebyshev cited earlier contributions by Irénée-Jules Bienaymé. More recently, it has been applied by Eugene Wigner to prove Wigner's semicircle law, and has since found numerous applications in the theory of random matrices.
Notes
Moment (mathematics) | Method of moments (probability theory) | [
"Physics",
"Mathematics"
] | 181 | [
"Mathematical analysis",
"Moments (mathematics)",
"Physical quantities",
"Moment (physics)"
] |
5,456,815 | https://en.wikipedia.org/wiki/Carter%20subgroup | In mathematics, especially in the field of group theory, a Carter subgroup of a finite group G is a self-normalizing subgroup of G that is nilpotent. These subgroups were introduced by Roger Carter, and marked the beginning of the post-1960 theory of solvable groups.
Carter proved that any finite solvable group has a Carter subgroup, and all its Carter subgroups are conjugate subgroups (and therefore isomorphic). If a group is not solvable it need not have any Carter subgroups: for example, the alternating group A5 of order 60 has no Carter subgroups. It was later shown that even if a finite group is not solvable, any two of its Carter subgroups are conjugate.
A Carter subgroup is a maximal nilpotent subgroup, because of the normalizer condition for nilpotent groups, but not all maximal nilpotent subgroups are Carter subgroups. For example, any non-identity proper subgroup of the nonabelian group of order six is a maximal nilpotent subgroup, but only those of order two are Carter subgroups. Every subgroup containing a Carter subgroup of a soluble group is also self-normalizing, and a soluble group is generated by any Carter subgroup and its nilpotent residual.
Gaschütz viewed the Carter subgroups as analogues of Sylow subgroups and Hall subgroups, and unified their treatment with the theory of formations. In the language of formations, a Sylow p-subgroup is a covering group for the formation of p-groups, a Hall π-subgroup is a covering group for the formation of π-groups, and a Carter subgroup is a covering group for the formation of nilpotent groups. Together with an important generalization, Schunck classes, and an important dualization, Fischer classes, formations formed the major research themes of the late 20th century in the theory of finite soluble groups.
A dual notion to Carter subgroups was introduced by Bernd Fischer. A Fischer subgroup of a group is a nilpotent subgroup containing every other nilpotent subgroup it normalizes. A Fischer subgroup is a maximal nilpotent subgroup, but not every maximal nilpotent subgroup is a Fischer subgroup: again the nonabelian group of order six provides an example, as every non-identity proper subgroup is a maximal nilpotent subgroup, but only the subgroup of order three is a Fischer subgroup.
See also
Cartan subalgebra
Cartan subgroup
References
Finite groups
Solvable groups
Subgroup properties | Carter subgroup | [
"Mathematics"
] | 545 | [
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
5,456,824 | https://en.wikipedia.org/wiki/Ergodicity | In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process. Ergodicity is a property of the system; it is a statement that the system cannot be reduced or factored into smaller components. Ergodic theory is the study of systems possessing ergodicity.
Ergodic systems occur in a broad range of settings in physics and in geometry. This can be roughly understood to be due to a common phenomenon: the motion of particles, that is, of geodesics on a hyperbolic manifold, is divergent; when that manifold is compact, that is, of finite size, those orbits return to the same general area, eventually filling the entire space.
Ergodic systems capture the common-sense, everyday notions of randomness: that smoke might come to fill all of a smoke-filled room, that a block of metal might eventually come to have the same temperature throughout, or that flips of a fair coin may come up heads and tails half the time. A stronger concept than ergodicity is that of mixing, which aims to mathematically describe the common-sense notions of mixing, such as mixing drinks or mixing cooking ingredients.
The proper mathematical formulation of ergodicity is founded on the formal definitions of measure theory and dynamical systems, and rather specifically on the notion of a measure-preserving dynamical system. The origins of ergodicity lie in statistical physics, where Ludwig Boltzmann formulated the ergodic hypothesis.
Informal explanation
Ergodicity occurs in broad settings in physics and mathematics. All of these settings are unified by a common mathematical description, that of the measure-preserving dynamical system. Equivalently, ergodicity can be understood in terms of stochastic processes. They are one and the same, despite using dramatically different notation and language.
Measure-preserving dynamical systems
The mathematical definition of ergodicity aims to capture ordinary everyday ideas about randomness. This includes ideas about systems that move in such a way as to (eventually) fill up all of space, such as diffusion and Brownian motion, as well as common-sense notions of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, the dust in Saturn's rings and so on. To provide a solid mathematical footing, descriptions of ergodic systems begin with the definition of a measure-preserving dynamical system. This is written as

(X, 𝒜, μ, T).

The set X is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure μ is understood to define the natural volume of the space X and of its subspaces. The collection of subspaces is denoted by 𝒜, and the size of any given subset A ⊆ X is μ(A); the size is its volume. Naively, one could imagine 𝒜 to be the power set of X; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach–Tarski paradox). Thus, conventionally, 𝒜 consists of the measurable subsets, the subsets that do have a volume. It is always taken to be the Borel set, the collection of subsets that can be constructed by taking intersections, unions and set complements of open sets; these can always be taken to be measurable.
The time evolution of the system is described by a map T : X → X. Given some subset A ⊆ X, its map T(A) will in general be a deformed version of A: it is squashed or stretched, folded or cut into pieces. Mathematical examples include the baker's map and the horseshoe map, both inspired by bread-making. The set T(A) must have the same volume as A; the squashing/stretching does not alter the volume of the space, only its distribution. Such a system is "measure-preserving" (area-preserving, volume-preserving).
A formal difficulty arises when one tries to reconcile the volume of sets with the need to preserve their size under a map. The problem arises because, in general, several different points in the domain of a function can map to the same point in its range; that is, there may be x ≠ y with T(x) = T(y). Worse, a single point x has no size. These difficulties can be avoided by working with the inverse map T−1 : 𝒜 → 𝒜; it will map any given subset A to the parts that were assembled to make it: these parts are T−1(A). It has the important property of not losing track of where things came from. More strongly, it has the important property that any (measure-preserving) map 𝒜 → 𝒜 is the inverse of some map X → X. The proper definition of a volume-preserving map is one for which μ(A) = μ(T−1(A)), because T−1(A) describes all the pieces-parts that A came from.
One is now interested in studying the time evolution of the system. If a set A eventually comes to fill all of X over a long period of time (that is, if Tn(A) approaches all of X for large n), the system is said to be ergodic. If every set A behaves in this way, the system is a conservative system, placed in contrast to a dissipative system, where some subsets A wander away, never to be returned to. An example would be water running downhill: once it's run down, it will never come back up again. The lake that forms at the bottom of this river can, however, become well-mixed. The ergodic decomposition theorem states that every ergodic system can be split into two parts: the conservative part, and the dissipative part.
Mixing is a stronger statement than ergodicity. Mixing asks for this ergodic property to hold between any two sets A, B, and not just between some set A and X. That is, given any two sets A, B ∈ 𝒜, a system is said to be (topologically) mixing if there is an integer N such that, for all A, B and n > N, one has that Tn(A) ∩ B ≠ ∅. Here, ∩ denotes set intersection and ∅ is the empty set. Other notions of mixing include strong and weak mixing, which describe the notion that the mixed substances intermingle everywhere, in equal proportion. This can be non-trivial, as practical experience of trying to mix sticky, gooey substances shows.
Ergodic processes
The above discussion appeals to a physical sense of a volume. The volume does not have to literally be some portion of 3D space; it can be some abstract volume. This is generally the case in statistical systems, where the volume (the measure) is given by the probability. The total volume corresponds to probability one. This correspondence works because the axioms of probability theory are identical to those of measure theory; these are the Kolmogorov axioms.
The idea of a volume can be very abstract. Consider, for example, the set of all possible coin-flips: the set of infinite sequences of heads and tails. Assigning the volume of 1 to this space, it is clear that half of all such sequences start with heads, and half start with tails. One can slice up this volume in other ways: one can say "I don't care about the first n − 1 coin-flips; but I want the n'th of them to be heads, and then I don't care about what comes after that". This can be written as the set (∗, …, ∗, h, ∗, …), where ∗ is "don't care" and h is "heads". The volume of this space is again one-half.
The above is enough to build up a measure-preserving dynamical system, in its entirety. The sets of h or t occurring in the n'th place are called cylinder sets. The set of all possible intersections, unions and complements of the cylinder sets then form the Borel set 𝒜 defined above. In formal terms, the cylinder sets form the base for a topology on the space X of all possible infinite-length coin-flips. The measure μ has all of the common-sense properties one might hope for: the measure of a cylinder set with h in the m'th position, and t in the k'th position is obviously 1/4, and so on. These common-sense properties persist for set-complement and set-union: everything except for h and t in locations m and k obviously has the volume of 3/4. All together, these form the axioms of a sigma-additive measure; measure-preserving dynamical systems always use sigma-additive measures. For coin flips, this measure is called the Bernoulli measure.
For the coin-flip process, the time-evolution operator T is the shift operator that says "throw away the first coin-flip, and keep the rest". Formally, if (x1, x2, …) is a sequence of coin-flips, then T(x1, x2, …) = (x2, x3, …). The measure is obviously shift-invariant: as long as we are talking about some set A where the first coin-flip x1 = ∗ is the "don't care" value, then the volume μ(A) does not change: μ(A) = μ(T(A)). In order to avoid talking about the first coin-flip, it is easier to define T−1 as inserting a "don't care" value into the first position: T−1(x1, x2, …) = (∗, x1, x2, …). With this definition, one obviously has that μ(T−1(A)) = μ(A) with no constraints on A. This is again an example of why T−1 is used in the formal definitions.
The above development takes a random process, the Bernoulli process, and converts it to a measure-preserving dynamical system (X, 𝒜, μ, T). The same conversion (equivalence, isomorphism) can be applied to any stochastic process. Thus, an informal definition of ergodicity is that a sequence is ergodic if it visits all of X; such sequences are "typical" for the process. Another is that its statistical properties can be deduced from a single, sufficiently long, random sample of the process (thus uniformly sampling all of X), or that any collection of random samples from a process must represent the average statistical properties of the entire process (that is, samples drawn uniformly from X are representative of X as a whole). In the present example, a sequence of coin flips, where half are heads, and half are tails, is a "typical" sequence.
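To make the time-average reading of ergodicity concrete, here is a small simulation (an illustration added here, not part of the original article): along a single random coin-flip sequence, the fraction of shifted sequences whose first symbol is heads approaches the cylinder-set volume 1/2. The class name and seed are ad hoc.

```java
import java.util.Random;

// Time average along one orbit of the Bernoulli shift: the fraction of
// shifts T^k(x) whose first symbol is "heads" approaches mu(heads) = 1/2.
public class BernoulliShiftDemo {
    public static void main(String[] args) {
        Random rng = new Random(7);
        int n = 1_000_000;
        boolean[] flips = new boolean[n];      // one "typical" sequence x
        for (int i = 0; i < n; i++) flips[i] = rng.nextBoolean();

        long heads = 0;
        for (int k = 0; k < n; k++)
            if (flips[k]) heads++;             // first symbol of T^k(x)
        System.out.printf("time average = %.4f (space average = 0.5)%n",
                (double) heads / n);
    }
}
```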
There are several important points to be made about the Bernoulli process. If one writes 0 for tails and 1 for heads, one gets the set of all infinite strings of binary digits. These correspond to the base-two expansion of real numbers. Explicitly, given a sequence (b1, b2, b3, …) of binary digits, the corresponding real number is

y = Σn≥1 bn/2n.

The statement that the Bernoulli process is ergodic is equivalent to the statement that the real numbers are uniformly distributed. The set of all such strings can be written in a variety of ways, for example as {h, t}∞ = {0, 1}∞ = 2ω. This set is the Cantor set, sometimes called the Cantor space to avoid confusion with the Cantor function.
In the end, these are all "the same thing".
The Cantor set plays key roles in many branches of mathematics. In recreational mathematics, it underpins the period-doubling fractals; in analysis, it appears in a vast variety of theorems. A key one for stochastic processes is the Wold decomposition, which states that any stationary process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.
The Ornstein isomorphism theorem states that every stationary stochastic process is equivalent to a Bernoulli scheme (a Bernoulli process with an N-sided (and possibly unfair) gaming die). Other results include that every non-dissipative ergodic system is equivalent to the Markov odometer, sometimes called an "adding machine" because it looks like elementary-school addition, that is, taking a base-N digit sequence, adding one, and propagating the carry bits. The proof of equivalence is very abstract; understanding the result is not: by adding one at each time step, every possible state of the odometer is visited, until it rolls over, and starts again. Likewise, ergodic systems visit each state, uniformly, moving on to the next, until they have all been visited.
Systems that generate (infinite) sequences of N letters are studied by means of symbolic dynamics. Important special cases include subshifts of finite type and sofic systems.
History and etymology
The term ergodic is commonly thought to derive from the Greek words ἔργον (ergon: "work") and ὁδός (hodos: "path", "way"), as chosen by Ludwig Boltzmann while he was working on a problem in statistical mechanics. At the same time it is also claimed to be a derivation of ergomonode, coined by Boltzmann in a relatively obscure paper from 1884. The etymology appears to be contested in other ways as well.
The idea of ergodicity was born in the field of thermodynamics, where it was necessary to relate the individual states of gas molecules to the temperature of a gas as a whole and its time evolution thereof. In order to do this, it was necessary to state what exactly it means for gases to mix well together, so that thermodynamic equilibrium could be defined with mathematical rigor. Once the theory was well developed in physics, it was rapidly formalized and extended, so that ergodic theory has long been an independent area of mathematics in itself. As part of that progression, more than one slightly different definition of ergodicity and multitudes of interpretations of the concept in different fields coexist.
For example, in classical physics the term implies that a system satisfies the ergodic hypothesis of thermodynamics, the relevant state space being position and momentum space.
In dynamical systems theory the state space is usually taken to be a more general phase space. On the other hand in coding theory the state space is often discrete in both time and state, with less concomitant structure. In all those fields the ideas of time average and ensemble average can also carry extra baggage as well—as is the case with the many possible thermodynamically relevant partition functions used to define ensemble averages in physics, back again. As such the measure theoretic formalization of the concept also serves as a unifying discipline. In 1913 Michel Plancherel proved the strict impossibility of ergodicity for a purely mechanical system.
Ergodicity in physics and geometry
A review of ergodicity in physics, and in geometry follows. In all cases, the notion of ergodicity is exactly the same as that for dynamical systems; there is no difference, except for outlook, notation, style of thinking and the journals where results are published.
Physical systems can be split into three categories: classical mechanics, which describes machines with a finite number of moving parts, quantum mechanics, which describes the structure of atoms, and statistical mechanics, which describes gases, liquids, solids; this includes condensed matter physics. These are presented below.
In statistical mechanics
This section reviews ergodicity in statistical mechanics. The above abstract definition of a volume is required as the appropriate setting for definitions of ergodicity in physics. Consider a container of liquid, or gas, or plasma, or other collection of atoms or particles. Each and every particle has a 3D position, and a 3D velocity, and is thus described by six numbers: a point in the six-dimensional space R6. If there are N of these particles in the system, a complete description requires 6N numbers. Any one system is just a single point in R6N. The physical system is not all of R6N, of course; if it's a box of width, height and length w × h × l, then a point is in (w × h × l × R3)N. Nor can velocities be infinite: they are scaled by some probability measure, for example the Boltzmann–Gibbs measure for a gas. Nonetheless, for N close to the Avogadro number, this is obviously a very large space. This space is called the canonical ensemble.
A physical system is said to be ergodic if any representative point of the system eventually comes to visit the entire volume of the system. For the above example, this implies that any given atom not only visits every part of the box with uniform probability, but it does so with every possible velocity, with probability given by the Boltzmann distribution for that velocity (so, uniform with respect to that measure). The ergodic hypothesis states that physical systems actually are ergodic. Multiple time scales are at work: gases and liquids appear to be ergodic over short time scales. Ergodicity in a solid can be viewed in terms of the vibrational modes or phonons, as obviously the atoms in a solid do not exchange locations. Glasses present a challenge to the ergodic hypothesis; time scales are assumed to be in the millions of years, but results are contentious. Spin glasses present particular difficulties.
Formal mathematical proofs of ergodicity in statistical physics are hard to come by; most high-dimensional many-body systems are assumed to be ergodic, without mathematical proof. Exceptions include the dynamical billiards, which model billiard ball-type collisions of atoms in an ideal gas or plasma. The first hard-sphere ergodicity theorem was for Sinai's billiards, which considers two balls, one of them taken as being stationary, at the origin. As the second ball collides, it moves away; applying periodic boundary conditions, it then returns to collide again. By appeal to homogeneity, this return of the "second" ball can instead be taken to be "just some other atom" that has come into range, and is moving to collide with the atom at the origin (which can be taken to be just "any other atom".) This is one of the few formal proofs that exist; there are no equivalent statements e.g. for atoms in a liquid, interacting via van der Waals forces, even if it would be common sense to believe that such systems are ergodic (and mixing). More precise physical arguments can be made, though.
Simple dynamical systems
The formal study of ergodicity can be approached by examining fairly simple dynamical systems. Some of the primary ones are listed here.
The irrational rotation of a circle is ergodic: the orbit of a point is such that eventually, every other point in the circle is visited. Such rotations are a special case of the interval exchange map. The beta expansions of a number are ergodic: beta expansions of a real number are done not in base-N, but in base-β for some β > 1. The reflected version of the beta expansion is the tent map; there are a variety of other ergodic maps of the unit interval. Moving to two dimensions, the arithmetic billiards with irrational angles are ergodic. One can also take a flat rectangle, squash it, cut it and reassemble it; this is the previously-mentioned baker's map. Its points can be described by the set of bi-infinite strings in two letters, that is, extending to both the left and right; as such, it looks like two copies of the Bernoulli process. If one deforms sideways during the squashing, one obtains Arnold's cat map. In most ways, the cat map is prototypical of any other similar transformation.
In classical mechanics and geometry
Ergodicity is a widespread phenomenon in the study of symplectic manifolds and Riemannian manifolds. Symplectic manifolds provide the generalized setting for classical mechanics, where the motion of a mechanical system is described by a geodesic. Riemannian manifolds are a special case: the cotangent bundle of a Riemannian manifold is always a symplectic manifold. In particular, the geodesics on a Riemannian manifold are given by the solution of the Hamilton–Jacobi equations.
The geodesic flow of a flat torus following any irrational direction is ergodic; informally this means that when drawing a straight line in a square starting at any point, and with an irrational angle with respect to the sides, if every time one meets a side one starts over on the opposite side with the same angle, the line will eventually meet every subset of positive measure. More generally on any flat surface there are many ergodic directions for the geodesic flow.
For non-flat surfaces, one has that the geodesic flow of any negatively curved compact Riemann surface is ergodic. A surface is "compact" in the sense that it has finite surface area. The geodesic flow is a generalization of the idea of moving in a "straight line" on a curved surface: such straight lines are geodesics. One of the earliest cases studied is Hadamard's billiards, which describes geodesics on the Bolza surface, topologically equivalent to a donut with two holes. Ergodicity can be demonstrated informally, if one has a sharpie and some reasonable example of a two-holed donut: starting anywhere, in any direction, one attempts to draw a straight line; rulers are useful for this. It doesn't take all that long to discover that one is not coming back to the starting point. (Of course, crooked drawing can also account for this; that's why we have proofs.)
These results extend to higher dimensions. The geodesic flow for negatively curved compact Riemannian manifolds is ergodic. A classic example for this is the Anosov flow, which is the horocycle flow on a hyperbolic manifold. This can be seen to be a kind of Hopf fibration. Such flows commonly occur in classical mechanics, which is the study in physics of finite-dimensional moving machinery, e.g. the double pendulum and so forth. Classical mechanics is constructed on symplectic manifolds. The flows on such systems can be deconstructed into stable and unstable manifolds; as a general rule, when this is possible, chaotic motion results. That this is generic can be seen by noting that the cotangent bundle of a Riemannian manifold is (always) a symplectic manifold; the geodesic flow is given by a solution to the Hamilton–Jacobi equations for this manifold. In terms of the canonical coordinates (q, p) on the cotangent manifold, the Hamiltonian or energy is given by

H = ½ Σij gij(q) pi pj,

with gij the (inverse of the) metric tensor and pi the momentum. The resemblance to the kinetic energy of a point particle is hardly accidental; this is the whole point of calling such things "energy". In this sense, chaotic behavior with ergodic orbits is a more-or-less generic phenomenon in large tracts of geometry.
Ergodicity results have been provided in translation surfaces, hyperbolic groups and systolic geometry. Techniques include the study of ergodic flows, the Hopf decomposition, and the Ambrose–Kakutani–Krengel–Kubo theorem. An important class of systems are the Axiom A systems.
A number of both classification and "anti-classification" results have been obtained. The Ornstein isomorphism theorem applies here as well; again, it states that most of these systems are isomorphic to some Bernoulli scheme. This rather neatly ties these systems back into the definition of ergodicity given for a stochastic process, in the previous section. The anti-classification results state that there are more than a countably infinite number of inequivalent ergodic measure-preserving dynamical systems. This is perhaps not entirely a surprise, as one can use points in the Cantor set to construct similar-but-different systems. See measure-preserving dynamical system for a brief survey of some of the anti-classification results.
In wave mechanics
All of the previous sections considered ergodicity either from the point of view of a measurable dynamical system, or from the dual notion of tracking the motion of individual particle trajectories. A closely related concept occurs in (non-linear) wave mechanics. There, the resonant interaction allows for the mixing of normal modes, often (but not always) leading to the eventual thermalization of the system. One of the earliest systems to be rigorously studied in this context is the Fermi–Pasta–Ulam–Tsingou problem, a string of weakly coupled oscillators.
A resonant interaction is possible whenever the dispersion relations for the wave media allow three or more normal modes to sum in such a way as to conserve both the total momentum and the total energy. This allows energy concentrated in one mode to bleed into other modes, eventually distributing that energy uniformly across all interacting modes.
Resonant interactions between waves help provide insight into the distinction between high-dimensional chaos (that is, turbulence) and thermalization. When normal modes can be combined so that energy and momentum are exactly conserved, then the theory of resonant interactions applies, and energy spreads into all of the interacting modes. When the dispersion relations only allow an approximate balance, turbulence or chaotic motion results. The turbulent modes can then transfer energy into modes that do mix, eventually leading to thermalization, but not before a preceding interval of chaotic motion.
In quantum mechanics
As for quantum mechanics, there is no universal quantum definition of ergodicity or even chaos (see quantum chaos). However, there is a quantum ergodicity theorem stating that the expectation value of an operator converges to the corresponding microcanonical classical average in the semiclassical limit ħ → 0. Nevertheless, the theorem does not imply that all eigenstates of the Hamiltonian whose classical counterpart is chaotic are featureless and random. For example, the quantum ergodicity theorem does not exclude the existence of non-ergodic states such as quantum scars. In addition to conventional scarring, there are two other types of quantum scarring, which further illustrate the weak-ergodicity breaking in quantum chaotic systems: perturbation-induced and many-body quantum scars.
Definition for discrete-time systems
Ergodic measures provide one of the cornerstones with which ergodicity is generally discussed. A formal definition follows.
Invariant measure
Let (X, 𝒜) be a measurable space. If T is a measurable function from X to itself and μ a probability measure on (X, 𝒜), then a measure-preserving dynamical system is defined as a dynamical system for which μ(T−1(A)) = μ(A) for all A ∈ 𝒜. Such a T is said to preserve μ; equivalently, μ is T-invariant.
Ergodic measure
A measurable function T : X → X is said to be μ-ergodic, or μ is said to be an ergodic measure for T, if T preserves μ and the following condition holds:

For any A ∈ 𝒜 such that T−1(A) = A, either μ(A) = 0 or μ(A) = 1.

In other words, there are no T-invariant subsets up to measure 0 (with respect to μ).
Some authors relax the requirement that T preserves μ to the requirement that T is a non-singular transformation with respect to μ, meaning that if A is a subset with zero measure, then so is T−1(A).
Examples
The simplest example is when X is a finite set and μ the counting measure. Then a self-map of X preserves μ if and only if it is a bijection, and it is ergodic if and only if T has only one orbit (that is, for every x, y ∈ X there exists n such that Tn(x) = y). For example, if X = {1, 2, …, n} then the cycle (1 2 … n) is ergodic, but the permutation (1 2)(3 4 … n) is not (it has the two invariant subsets {1, 2} and {3, 4, …, n}).
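In the finite case, ergodicity thus reduces to a single-orbit check, which is easy to test in code. The following added sketch (class and method names are ad hoc) counts the orbits of a permutation of {0, …, n − 1} by following its cycles:

```java
// Ergodicity of a permutation under counting measure is equivalent to
// having a single orbit (one cycle covering all of X).
public class PermutationErgodicity {
    static int orbitCount(int[] perm) {
        boolean[] seen = new boolean[perm.length];
        int orbits = 0;
        for (int i = 0; i < perm.length; i++) {
            if (seen[i]) continue;
            orbits++;
            for (int j = i; !seen[j]; j = perm[j]) seen[j] = true;
        }
        return orbits;
    }

    public static void main(String[] args) {
        int[] cycle = {1, 2, 3, 4, 0};          // the 5-cycle: ergodic
        int[] split = {1, 0, 3, 4, 2};          // (1 2)(3 4 5): not ergodic
        System.out.println(orbitCount(cycle));  // prints 1
        System.out.println(orbitCount(split));  // prints 2
    }
}
```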
Equivalent formulations
The definition given above admits the following immediate reformulations:
for every A ∈ 𝒜 with μ(T−1(A) △ A) = 0 we have μ(A) = 0 or μ(A) = 1 (where △ denotes the symmetric difference);
for every A ∈ 𝒜 with positive measure we have μ(⋃n≥1 T−n(A)) = 1;
for every two sets A, B ∈ 𝒜 of positive measure, there exists n > 0 such that μ(T−n(A) ∩ B) > 0;
every measurable function f on X with f ∘ T = f is constant on a subset of full measure.
Importantly for applications, the condition in the last characterisation can be restricted to square-integrable functions only:

If f ∈ L2(X, μ) and f ∘ T = f, then f is constant almost everywhere.
Further examples
Bernoulli shifts and subshifts
Let S be a finite set and X = SN, with the product measure μ (each factor being endowed with its normalised counting measure). Then the shift operator T defined by T((xk)k) = (xk+1)k preserves μ and is μ-ergodic.

There are many more ergodic measures for the shift map T on X. Periodic sequences give finitely supported measures. More interestingly, there are infinitely-supported ones which are supported on subshifts of finite type.
Irrational rotations
Let X be the unit circle {z ∈ C : |z| = 1}, with its Lebesgue measure μ. For any θ ∈ R, the rotation of X of angle θ is given by Tθ(z) = e2πiθ z. If θ is rational then Tθ is not ergodic for the Lebesgue measure, as it has infinitely many finite orbits. On the other hand, if θ is irrational then Tθ is ergodic.
Arnold's cat map
Let X be the 2-torus R2/Z2. Then any element g ∈ SL2(Z) defines a self-map of X since g(Z2) = Z2. When g = (2 1; 1 1) one obtains the so-called Arnold's cat map, which is ergodic for the Lebesgue measure on the torus.
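As an added empirical illustration (not from the original article), iterating the cat map from a generic starting point scatters the orbit over the torus. The sketch below tallies visits to the four quadrants of the unit square; each should receive roughly a quarter of the iterates. Floating-point rounding makes this only a heuristic, not a proof, and the starting point is an arbitrary choice.

```java
// Iterate Arnold's cat map (x, y) -> (2x + y, x + y) mod 1 and tally
// visits to the four quadrants of the torus.
public class CatMapDemo {
    public static void main(String[] args) {
        double x = 0.12345, y = 0.67891;  // arbitrary "generic" start
        long[] counts = new long[4];
        int iters = 1_000_000;
        for (int i = 0; i < iters; i++) {
            double nx = (2 * x + y) % 1.0, ny = (x + y) % 1.0;
            x = nx; y = ny;
            counts[(x < 0.5 ? 0 : 1) + (y < 0.5 ? 0 : 2)]++;
        }
        for (long c : counts)
            System.out.printf("%.4f ", (double) c / iters); // each ~0.25
        System.out.println();
    }
}
```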
Ergodic theorems
If μ is a probability measure on a space X which is ergodic for a transformation T, the pointwise ergodic theorem of G. Birkhoff states that for every measurable function f and for μ-almost every point x ∈ X, the time average on the orbit of x converges to the space average of f. Formally this means that

limn→∞ (1/n) Σk=0..n−1 f(Tk(x)) = ∫X f dμ.

The mean ergodic theorem of J. von Neumann is a similar, weaker statement about averaged translates of square-integrable functions.
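For a concrete (added, non-source) check of Birkhoff's theorem, take the irrational rotation from the examples above with the observable f = indicator of an arc; the time average along one orbit should converge to the arc's length. The angle and arc used here are arbitrary choices:

```java
// Birkhoff averages for the irrational rotation x -> x + theta (mod 1):
// the fraction of orbit points landing in [0, 0.3) tends to 0.3.
public class BirkhoffDemo {
    public static void main(String[] args) {
        double theta = Math.sqrt(2) - 1;   // irrational rotation angle
        double x = 0.0;                    // starting point on the circle
        long hits = 0;
        int n = 1_000_000;
        for (int k = 0; k < n; k++) {
            if (x < 0.3) hits++;           // f = indicator of [0, 0.3)
            x = (x + theta) % 1.0;
        }
        System.out.printf("time average = %.4f (space average = 0.3)%n",
                (double) hits / n);
    }
}
```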
Related properties
Dense orbits
An immediate consequence of the definition of ergodicity is that, on a topological space X with 𝒜 the σ-algebra of Borel sets, if T is μ-ergodic then μ-almost every orbit of T is dense in the support of μ.

This is not an equivalence, since for a transformation which is not uniquely ergodic, but for which there is an ergodic measure μ1 with full support, for any other ergodic measure μ2 the measure (μ1 + μ2)/2 is not ergodic for T, but its orbits are dense in the support. Explicit examples can be constructed with shift-invariant measures.
Mixing
A transformation T of a probability measure space (X, μ) is said to be mixing for the measure μ if for any measurable sets A, B ∈ 𝒜 the following holds:

limn→∞ μ(A ∩ T−n(B)) = μ(A)·μ(B).

It is immediate that a mixing transformation is also ergodic (taking A to be a T-stable subset and B its complement). The converse is not true: for example, a rotation with irrational angle on the circle (which is ergodic per the examples above) is not mixing (for a sufficiently small interval A, its successive images will not intersect A most of the time). Bernoulli shifts are mixing, and so is Arnold's cat map.

This notion of mixing is sometimes called strong mixing, as opposed to weak mixing, which means that

limn→∞ (1/n) Σk=0..n−1 |μ(A ∩ T−k(B)) − μ(A)·μ(B)| = 0.
Proper ergodicity
The transformation T is said to be properly ergodic if it does not have an orbit of full measure. In the discrete case this means that the measure μ is not supported on a finite orbit of T.
Definition for continuous-time dynamical systems
The definition is essentially the same for continuous-time dynamical systems as for a single transformation. Let (X, 𝒜) be a measurable space. A continuous-time dynamical system on it is given by a family (Tt)t∈R of measurable functions from X to itself, so that for any t, s ∈ R the relation Tt+s = Tt ∘ Ts holds (usually it is also asked that the orbit map R × X → X is also measurable). If μ is a probability measure on (X, 𝒜), then we say that the system is μ-ergodic, or μ is an ergodic measure for it, if each Tt preserves μ and the following condition holds:

For any A ∈ 𝒜, if for all t ∈ R we have Tt−1(A) = A, then either μ(A) = 0 or μ(A) = 1.
Examples
As in the discrete case, the simplest example is that of a transitive action; for instance, the action of R on the circle given by (t, z) ↦ e2πit z is ergodic for Lebesgue measure.

An example with infinitely many orbits is given by the flow along an irrational slope on the torus: let X = S1 × S1 and α ∈ R. Let Tt(z1, z2) = (e2πit z1, e2πiαt z2); then if α is irrational, this is ergodic for the Lebesgue measure.
Ergodic flows
Further examples of ergodic flows are:
Billiards in convex Euclidean domains;
the geodesic flow of a negatively curved Riemannian manifold of finite volume is ergodic (for the normalised volume measure);
the horocycle flow on a hyperbolic manifold of finite volume is ergodic (for the normalised volume measure)
Ergodicity in compact metric spaces
If X is a compact metric space, it is naturally endowed with the σ-algebra of Borel sets. The additional structure coming from the topology then allows a much more detailed theory for ergodic transformations and measures on X.
Functional analysis interpretation
A very powerful alternate definition of ergodic measures can be given using the theory of Banach spaces. Radon measures on X form a Banach space of which the set P(X) of probability measures on X is a convex subset. Given a continuous transformation T of X, the subset PT(X) of T-invariant measures is a closed convex subset, and a measure is ergodic for T if and only if it is an extreme point of this convex set.
Existence of ergodic measures
In the setting above, it follows from the Banach–Alaoglu theorem that there always exist extreme points in PT(X). Hence a transformation of a compact metric space always admits ergodic measures.
Ergodic decomposition
In general an invariant measure need not be ergodic, but as a consequence of Choquet theory it can always be expressed as the barycenter of a probability measure on the set of ergodic measures. This is referred to as the ergodic decomposition of the measure.
Example
In the case of X = {1, …, n} and the permutation (1 2)(3 4 … n) considered above, the normalised counting measure is not ergodic. The ergodic measures for T are the uniform measures ν1, ν2 supported on the subsets {1, 2} and {3, 4, …, n}, and every T-invariant probability measure can be written in the form t·ν1 + (1 − t)·ν2 for some t ∈ [0, 1]. In particular, (2/n)·ν1 + ((n − 2)/n)·ν2 is the ergodic decomposition of the normalised counting measure.
Continuous systems
Everything in this section transfers verbatim to continuous actions of Z or R on compact metric spaces.
Unique ergodicity
The transformation T is said to be uniquely ergodic if there is a unique Borel probability measure μ on X which is ergodic for T.
In the examples considered above, irrational rotations of the circle are uniquely ergodic; shift maps are not.
Probabilistic interpretation: ergodic processes
If (Xn)n≥1 is a discrete-time stochastic process on a space Ω, it is said to be ergodic if the joint distribution of the variables on ΩN is invariant under the shift map (xn)n≥1 ↦ (xn+1)n≥1. This is a particular case of the notions discussed above.
The simplest case is that of an independent and identically distributed process which corresponds to the shift map described above. Another important case is that of a Markov chain which is discussed in detail below.
A similar interpretation holds for continuous-time stochastic processes though the construction of the measurable structure of the action is more complicated.
Ergodicity of Markov chains
The dynamical system associated with a Markov chain
Let S be a finite set. A Markov chain on S is defined by a matrix P ∈ [0, 1]S×S, where P(s1, s2) is the transition probability from s1 to s2, so for every s ∈ S we have Σs′∈S P(s, s′) = 1. A stationary measure for P is a probability measure ν on S such that νP = ν; that is, Σs∈S ν(s)·P(s, s′) = ν(s′) for all s′ ∈ S.

Using this data we can define a probability measure μ on the set X = SN with its product σ-algebra by giving the measures of the cylinders as follows:

μ({(xn)n : x1 = s1, …, xk = sk}) = ν(s1)·P(s1, s2)⋯P(sk−1, sk).

Stationarity of ν then means that the measure μ is invariant under the shift map T((xn)n) = (xn+1)n.
Criterion for ergodicity
The measure μ is always ergodic for the shift map if the associated Markov chain is irreducible (any state can be reached with positive probability from any other state in a finite number of steps).

The hypotheses above imply that there is a unique stationary measure for the Markov chain. In terms of the matrix P, a sufficient condition for this is that 1 be a simple eigenvalue of the matrix P and all other eigenvalues of P (in C) are of modulus < 1 (a small numerical sketch appears at the end of this subsection).
Note that in probability theory the Markov chain is called ergodic if in addition each state is aperiodic (the times where the return probability is positive are not multiples of a single integer >1). This is not necessary for the invariant measure to be ergodic; hence the notions of "ergodicity" for a Markov chain and the associated shift-invariant measure are different (the one for the chain is strictly stronger).
Moreover the criterion is an "if and only if" if all communicating classes in the chain are recurrent and we consider all stationary measures.
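Returning to the uniqueness statement above, the following added sketch finds the stationary measure of a small irreducible, aperiodic chain by power iteration (repeatedly applying ν ↦ νP), which converges precisely because of the eigenvalue condition. The two-state matrix is an arbitrary example.

```java
import java.util.Arrays;

// Power iteration nu <- nu P for a two-state irreducible, aperiodic chain;
// nu converges to the unique stationary measure.
public class StationaryMeasure {
    public static void main(String[] args) {
        double[][] P = {{0.9, 0.1},
                        {0.4, 0.6}};       // row-stochastic transition matrix
        double[] nu = {1.0, 0.0};          // arbitrary starting distribution
        for (int step = 0; step < 100; step++) {
            double[] next = new double[2];
            for (int i = 0; i < 2; i++)
                for (int j = 0; j < 2; j++)
                    next[j] += nu[i] * P[i][j];
            nu = next;
        }
        System.out.println(Arrays.toString(nu)); // ~[0.8, 0.2]
    }
}
```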
Examples
Counting measure
If P(s1, s2) = 1/|S| for all s1, s2 ∈ S, then the stationary measure is the normalised counting measure and the measure μ is the corresponding product measure. The Markov chain is ergodic, so the shift example from above is a special case of the criterion.
Non-ergodic Markov chains
Markov chains with recurrent communicating classes which are not irreducible are not ergodic, and this can be seen immediately as follows. If S1, S2 ⊊ S are two distinct recurrent communicating classes, there are nonzero stationary measures ν1, ν2 supported on S1, S2 respectively, and the subsets S1N and S2N are both shift-invariant and of measure 1/2 for the invariant probability measure (ν1 + ν2)/2. A very simple example of that is the chain on S = {1, 2} given by the identity matrix (both states are absorbing).
A periodic chain
The Markov chain on S = {1, 2} given by the matrix

P = (0 1; 1 0)

is irreducible but periodic. Thus it is not ergodic in the sense of Markov chains, though the associated measure μ on X = {1, 2}N is ergodic for the shift map. However the shift is not mixing for this measure: for the sets

A = {(xn)n : x1 = 1}

and

B = {(xn)n : x1 = 2}

we have μ(A) = μ(B) = 1/2, but μ(A ∩ T−n(B)) equals 0 for n even and 1/2 for n odd, so it does not converge to μ(A)·μ(B) = 1/4.
Generalisations
The definition of ergodicity also makes sense for group actions. The classical theory (for invertible transformations) corresponds to actions of Z or R.
For non-abelian groups there might not be invariant measures even on compact metric spaces. However the definition of ergodicity carries over unchanged if one replaces invariant measures by quasi-invariant measures.
Important examples are the action of a semisimple Lie group (or a lattice therein) on its Furstenberg boundary.
A measurable equivalence relation is said to be ergodic if all saturated subsets are either null or conull.
Notes
References
External links
Karma Dajani and Sjoerd Dirksin, "A Simple Introduction to Ergodic Theory"
Ergodic theory | Ergodicity | [
"Mathematics"
] | 7,779 | [
"Ergodic theory",
"Dynamical systems"
] |
5,457,138 | https://en.wikipedia.org/wiki/List%20of%20Java%20APIs | There are two types of Java programming language application programming interfaces (APIs):
The official core Java API, contained in the Android (Google), SE (OpenJDK and Oracle), and MicroEJ platforms. These packages (java.* packages) are the core Java language packages, meaning that programmers using the Java language must use them in order to make any worthwhile use of the Java language.
Optional APIs that can be downloaded separately. The specifications of these APIs are defined by many different organizations in the world (Alljoyn, OSGi, Eclipse, JCP, E-S-R, etc.).
The following is a partial list of application programming interfaces (APIs) for Java.
APIs
Following is a very incomplete list, as the number of APIs available for the Java platform is overwhelming.
Rich client platforms
Eclipse Rich Client Platform (RCP)
NetBeans Platform
Office-compliant libraries
Apache POI
JXL - for Microsoft Excel
JExcel - for Microsoft Excel
Compression
LZMA SDK, the Java implementation of the SDK used by the popular 7-Zip file archive software
JSON
Jackson (API)
Game engines
Slick
jMonkey Engine
JPCT Engine
LWJGL
Real-time libraries
Real time Java is a catch-all term for a combination of technologies that allows programmers to write programs that meet the demands of real-time systems in the Java programming language.
Java's sophisticated memory management, native support for threading and concurrency, type safety, and relative simplicity have created a demand for its use in many domains. Its capabilities have been enhanced to support real time computational needs:
Java supports a strict priority-based threading model.
Because Java threads support priorities, Java locking mechanisms support priority inversion avoidance techniques, such as priority inheritance or the priority ceiling protocol.
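As a brief added illustration of the priority-based threading model (standard java.lang API, not specific to the RTSJ), a thread's priority can be set before it is started; how strictly priorities are honoured is ultimately up to the platform scheduler:

```java
// Setting thread priorities with the standard java.lang.Thread API.
// The JVM maps these to OS scheduling hints; strictness is platform-dependent.
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(
                Thread.currentThread().getName() + " priority "
                + Thread.currentThread().getPriority());

        Thread low = new Thread(task, "low");
        Thread high = new Thread(task, "high");
        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10

        high.start();
        low.start();
        high.join();
        low.join();
    }
}
```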
To overcome typical real time difficulties, the Java Community introduced a specification for real-time Java, JSR001. A number of implementations of the resulting Real-Time Specification for Java (RTSJ) have emerged, including a reference implementation from Timesys, IBM's WebSphere Real Time, Sun Microsystems's Java SE Real-Time Systems, Aonix PERC, and JamaicaVM from aicas.
The RTSJ addressed the critical issues by mandating a minimum (only two) specification for the threading model (and allowing other models to be plugged into the VM) and by providing for areas of memory that are not subject to garbage collection, along with threads that are not preemptible by the garbage collector. These areas are instead managed using region-based memory management.
Real-Time Specification for Java
The Real-Time Specification for Java (RTSJ) is a set of interfaces and behavioral refinements that enable real-time computer programming in the Java programming language. RTSJ 1.0 was developed as JSR 1 under the Java Community Process, which approved the new standard in November 2001. RTSJ 2.0 is being developed under JSR 282. A draft version is available at the JSR 282 JCP page. More information can be found at RTSJ 2.0.
Javolution
Windowing libraries
Standard Widget Toolkit (SWT)
Physics libraries
JBox2D
JBullet
dyn4j
See also
Java Platform
Java ConcurrentMap
List of Java frameworks
External links
APISonar - Search Java API examples
Java APIs | List of Java APIs | [
"Technology"
] | 705 | [
"Computing-related lists",
"Lists of software"
] |
5,457,188 | https://en.wikipedia.org/wiki/Parasitoid%20wasp | Parasitoid wasps are a large group of hymenopteran superfamilies, with all but the wood wasps (Orussoidea) being in the wasp-waisted Apocrita. As parasitoids, they lay their eggs on or in the bodies of other arthropods, sooner or later causing the death of these hosts. Different species specialise in hosts from different insect orders, most often Lepidoptera, though some select beetles, flies, or bugs; the spider wasps (Pompilidae) exclusively attack spiders.
Parasitoid wasp species differ in which host life-stage they attack: eggs, larvae, pupae, or adults. They mainly follow one of two major strategies within parasitism: either they are endoparasitic, developing inside the host, and koinobiont, allowing the host to continue to feed, develop, and moult; or they are ectoparasitic, developing outside the host, and idiobiont, paralysing the host immediately. Some endoparasitic wasps of the superfamily Ichneumonoidea have a mutualistic relationship with polydnaviruses, the viruses suppressing the host's immune defenses.
Parasitoidism evolved only once in the Hymenoptera, during the Permian, leading to a single clade called Euhymenoptera, but the parasitic lifestyle has secondarily been lost several times including among the ants, bees, and vespid wasps. As a result, the order Hymenoptera contains many families of parasitoids, intermixed with non-parasitoid groups. The parasitoid wasps include some very large groups, some estimates giving the Chalcidoidea as many as 500,000 species, the Ichneumonidae 100,000 species, and the Braconidae up to 50,000 species.
Host insects have evolved a range of defences against parasitoid wasps, including hiding, wriggling, and camouflage markings.
Many parasitoid wasps are considered beneficial to humans because they naturally control agricultural pests. Some are applied commercially in biological pest control, starting in the 1920s with Encarsia formosa to control whitefly in greenhouses. Historically, parasitoidism in wasps influenced the thinking of Charles Darwin.
Parasitoidism
Parasitoid wasps range from some of the smallest species of insects to wasps about an inch long. Most females have a long, sharp ovipositor at the tip of the abdomen, sometimes lacking venom glands, and almost never modified into a sting.
Parasitoids can be classified in a variety of ways. They can live within their host's body as endoparasitoids, or feed on it from outside as ectoparasitoids: both strategies are found among the wasps. Parasitoids can also be divided according to their effect on their hosts. Idiobionts prevent further development of the host after initially immobilizing it, while koinobionts allow the host to continue its development while they are feeding upon it; and again, both types are seen in parasitoidal wasps. Most ectoparasitoid wasps are idiobiont, as the host could damage or dislodge the external parasitoid if allowed to move or moult. Most endoparasitoid wasps are koinobionts, giving them the advantage of a host that continues to grow larger and remains able to avoid predators.
Hosts
Many parasitoid wasps use larval Lepidoptera as hosts, but some groups parasitize different host life stages (egg, larva or nymph, pupa, adult) of nearly all other orders of insects, especially Coleoptera, Diptera, Hemiptera and other Hymenoptera. Some attack arthropods other than insects: for instance, the Pompilidae specialise in catching spiders: these are quick and dangerous prey, often as large as the wasp itself, but the spider wasp is quicker, swiftly stinging her prey to immobilise it. Adult female wasps of most species oviposit into their hosts' bodies or eggs.
More rarely, parasitoid wasps may use plant seeds as hosts, such as Torymus druparum.
Some also inject a mix of secretory products that paralyse the host or protect the egg from the host's immune system; these include polydnaviruses, ovarian proteins, and venom. If a polydnavirus is included, it infects the nuclei of host hemocytes and other cells, causing symptoms that benefit the parasite.
Host size is important for the development of the parasitoid, as the host is its entire food supply until it emerges as an adult; small hosts often produce smaller parasitoids. Some species preferentially lay female eggs in larger hosts and male eggs in smaller hosts, as the reproductive capabilities of males are limited less severely by smaller adult body size.
Some parasitoid wasps mark the host with chemical signals to show that an egg has been laid there. This may both deter rivals from ovipositing, and signal to itself that no further egg is needed in that host, effectively reducing the chances that offspring will have to compete for food and increasing the offspring's survival.
Life cycle
On or inside the host the parasitoid egg hatches into a larva, or into two or more larvae (polyembryony). Endoparasitoid eggs can absorb fluids from the host body and grow to several times their original size before hatching. The first instar larvae are often highly mobile and may have strong mandibles or other structures to compete with other parasitoid larvae. The following instars are generally more grub-like. Parasitoid larvae have incomplete digestive systems with no rear opening; this prevents the hosts from being contaminated by their wastes. The larva feeds on the host's tissues until ready to pupate; by then the host is generally either dead or almost so. A meconium, or the accumulated wastes from the larva, is cast out as the larva transitions to a prepupa. Depending on its species, the parasitoid then may eat its way out of the host or remain in the more or less empty skin. In either case it then generally spins a cocoon and pupates. As adults, parasitoid wasps feed primarily on nectar from flowers. Females of some species also drink hemolymph from hosts to gain additional nutrients for egg production.
Mutualism with polydnavirus
Polydnaviruses are a unique group of insect viruses that have a mutualistic relationship with some parasitic wasps. The polydnavirus replicates in the oviducts of an adult female parasitoid wasp. The wasp benefits from this relationship because the virus provides protection for the parasitic larvae inside the host: (i) by weakening the host's immune system and (ii) by altering the host's cells to be more beneficial to the parasite. The relationship between these viruses and the wasp is obligatory in the sense that all individuals are infected with the viruses; the virus has been incorporated into the wasp's genome and is inherited.
Host defenses
The hosts of parasitoids have developed several levels of defence. Many hosts try to hide from the parasitoids in inaccessible habitats. They may also get rid of their frass (body wastes) and avoid plants that they have chewed on as both can signal their presence to parasitoids hunting for hosts. The egg shells and cuticles of the potential hosts are thickened to prevent the parasitoid from penetrating them. Hosts may use behavioral evasion when they encounter an egg laying female parasitoid, like dropping off the plant they are on, twisting and thrashing so as to dislodge or kill the female and even regurgitating onto the wasp to entangle it. The wriggling can sometimes help by causing the wasp to "miss" laying the egg on the host and instead place it nearby. Wriggling of pupae can cause the wasp to lose its grip on the smooth hard pupa or get trapped in the silk strands. Some caterpillars even bite the female wasps that approach them. Some insects secrete poisonous compounds that kill or drive away the parasitoid. Ants that are in a symbiotic relationship with caterpillars, aphids or scale insects may protect them from attack by wasps.
Parasitoid wasps are themselves vulnerable to hyperparasitoid wasps. Some parasitoid wasps change the behavior of the infected host, causing it to spin a silk web around the wasp pupae after they emerge from its body, protecting them from hyperparasitoids.
Hosts can kill endoparasitoids by sticking haemocytes to the egg or larva in a process called encapsulation. In aphids, the presence of a particular species of γ-3 Pseudomonadota makes the aphid relatively immune to their parasitoid wasps by killing many of the eggs. As the parasitoid's survival depends on its ability to evade the host's immune response, some parasitoid wasps have developed the counterstrategy of laying more eggs in aphids that have the endosymbiont, so that at least one of them may hatch and parasitize the aphid.
Certain caterpillars eat plants that are toxic to both themselves and the parasite to cure themselves. Drosophila melanogaster larvae also self-medicate with ethanol to treat parasitism. D. melanogaster females lay their eggs in food containing toxic amounts of alcohol if they detect parasitoid wasps nearby. The alcohol protects them from the wasps, at the cost of retarding their own growth.
Evolution and taxonomy
Evolution
Based on genetic and fossil analysis, parasitoidism has evolved only once in the Hymenoptera, during the Permian, leading to a single clade. All parasitoid wasps are descended from this lineage. The narrow-waisted Apocrita emerged during the Jurassic. The Aculeata, which includes bees, ants, and parasitoid spider wasps, evolved from within the Apocrita; it contains many families of parasitoids, though not the Ichneumonoidea, Cynipoidea, and Chalcidoidea. The Hymenoptera, Apocrita, and Aculeata are all clades, but since each of these contains non-parasitic species, the parasitoid wasps, formerly known as the Parasitica, do not form a clade on their own. The common ancestor in which parasitoidism evolved lived approximately 247 million years ago and was previously believed to be an ectoparasitoid wood wasp that fed on wood-boring beetle larvae. Species similar in lifestyle and morphology to this ancestor still exist in the Ichneumonoidea. However, recent molecular and morphological analysis suggests this ancestor was endophagous, meaning it fed from within its host. A significant radiation of species in the Hymenoptera occurred shortly after the evolution of parasitoidy in the order and is thought to have been a result of it. The evolution of a wasp waist, a constriction in the abdomen of the Apocrita, contributed to rapid diversification as it increased maneuverability of the ovipositor, the egg-laying organ at the rear of the abdomen.
The phylogenetic tree gives a condensed overview of the positions of parasitoidal groups (boldface), amongst groups (italics) like the Vespidae which have secondarily abandoned the parasitoid habit. The approximate numbers of species estimated to be in these groups, often much larger than the number so far described, is shown in parentheses, with estimates for the most populous also shown in boldface, like "(150,000)". Not all species in these groups are parasitoidal: for example, some Cynipoidea are phytophagous.
Taxonomy
The parasitoid wasps are paraphyletic since the ants, bees, and non-parasitic wasps such as the Vespidae are not included, and there are many members of mainly parasitoidal families which are not themselves parasitic. Listed are Hymenopteran families where most members have a parasitoid lifestyle.
Symphyta:
Orussidae
Apocrita:
Scolebythidae
Bethylidae
Chrysididae
Sclerogibbidae
Dryinidae
Embolemidae
Tiphiidae
Thynnidae
Sapygidae
Mutillidae
Bradynobaenidae
Chyphotidae
Sierolomorphidae
Braconidae
Ichneumonidae
Pompilidae
Rhopalosomatidae
Aulacidae
Evaniidae
Gasteruptiidae
Stephanidae
Megalyridae
Trigonalidae
Ibaliidae
Liopteridae
Figitidae
Austroniidae
Diapriidae
Heloridae
Monomachidae
Pelecinidae
Peradeniidae
Proctotrupidae
Roproniidae
Vanhorniidae
Platygastridae
Scelionidae
Megaspilidae
Ceraphronidae
Mymarommatidae
Chalcidoidea (19 families)
Ampulicidae
Interactions with humans
Biological pest control
Parasitoid wasps are considered beneficial as they naturally control the population of many pest insects. They are widely used commercially (alongside other parasitoids such as tachinid flies) for biological pest control, for which the most important groups are the ichneumonid wasps, which prey mainly on caterpillars of butterflies and moths; braconid wasps, which attack caterpillars and a wide range of other insects including greenfly; and chalcidoid wasps, which parasitise eggs and larvae of greenfly, whitefly, cabbage caterpillars, and scale insects.
One of the first parasitoid wasps to enter commercial use was Encarsia formosa, an endoparasitic aphelinid. It has been used to control whitefly in greenhouses since the 1920s. Its use fell almost to nothing by the 1940s, displaced by chemical pesticides, but has revived since the 1970s in Europe and Russia. In some countries, such as New Zealand, it is the primary biological control agent used against greenhouse whiteflies, particularly on crops such as tomato, a plant on which predators find it especially difficult to establish.
Commercially, there are two types of rearing systems: short-term seasonal daily output with high production of parasitoids per day, and long-term year-round low daily output with a range in production of 4–1000 million female parasitoids per week, to meet demand for suitable parasitoids for different crops.
In culture
Parasitoid wasps influenced the thinking of Charles Darwin. In an 1860 letter to the American naturalist Asa Gray, Darwin wrote: "I cannot persuade myself that a beneficent and omnipotent God would have designedly created parasitic wasps with the express intention of their feeding within the living bodies of Caterpillars." The palaeontologist Donald Prothero notes that religiously-minded people of the Victorian era, including Darwin, were horrified by this instance of evident cruelty in nature, particularly noticeable in the Ichneumonidae.
Notes
References
Parasitic insects
Articles containing video clips
Parasitism | Parasitoid wasp | [
"Biology"
] | 3,158 | [
"Parasitism",
"Symbiosis"
] |
5,457,193 | https://en.wikipedia.org/wiki/Parasitica | Parasitica (the parasitic wasps) is an obsolete, paraphyletic infraorder of Apocrita containing the parasitoid wasps. It includes all Apocrita except for the Aculeata. As a group, Parasitica contains more species than the Symphyta and the Aculeata combined.
Parasitica also contains groups of phytophagous hymenopterans such as the Cynipoidea (gall wasps).
References
External links
Parasitica at bugguide
Insect infraorders
Paraphyletic groups | Parasitica | [
"Biology"
] | 117 | [
"Phylogenetics",
"Paraphyletic groups"
] |
5,457,285 | https://en.wikipedia.org/wiki/DSSAM%20Model | The DSSAM Model (Dynamic Stream Simulation and Assessment Model) is a computer simulation developed for the Truckee River to analyze water quality impacts from land use and wastewater management decisions in the Truckee River Basin. This area includes the cities of Reno and Sparks, Nevada as well as the Lake Tahoe Basin. The model is historically and alternatively called the Earth Metrics Truckee River Model. Since original development in 1984-1986 under contract to the U.S. Environmental Protection Agency (EPA), the model has been refined and successive versions have been dubbed DSSAM II and DSSAM III. This hydrology transport model is based upon a pollutant loading metric called Total maximum daily load (TMDL). The success of this flagship model contributed to the Agency's broadened commitment to the use of the underlying TMDL protocol in its national policy for management of most river systems in the United States.
The Truckee River drains an area of approximately 3,120 square miles, not counting the extent of its Lake Tahoe sub-basin. The DSSAM model establishes numerous stations along the entire river extent as well as a considerable number of monitoring points inside the Great Basin's Pyramid Lake, the receiving waters of this closed hydrological system. Although the region is sparsely populated, it is important because Lake Tahoe is visited by 20 million persons per annum and Truckee River water quality affects at least two endangered species: the Cui-ui sucker fish and the Lahontan cutthroat trout.
Development history
Impetus to derive a quantitative prediction model arose from a trend of historically decreasing river flow rates, coupled with jurisdictional and tribal conflicts over water rights as well as concern for river biota. When expansion of the Reno-Sparks Wastewater Treatment Plant was proposed, the EPA decided to fund a large-scale research effort to create simulation software and a parallel program to collect field data in the Truckee River and Pyramid Lake. For river stations, water quality measurements were made in the benthic zone as well as the photic zone; in the case of Pyramid Lake, boats were used to collect grab samples at varying depths and locations. Earth Metrics conducted the software development for the first-generation computer model and collected field data on water quality and flow rates in the Truckee River. After model calibration, runs were made to evaluate impacts of alternative land use controls and discharge parameters for treated effluent.
The DSSAM Model is constructed to allow dynamic decay of most pollutants; for example, total nitrogen and phosphorus are allowed to be consumed by benthic algae in each time step, and the algal communities are given a separate population dynamic in each river reach (e.g. metabolic rate based upon river temperature). Sources throughout the watershed include non-point agricultural and urban stormwater as well as a multiplicity of point source discharges of treated municipal wastewater effluent.
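As an illustrative sketch only (the coefficients, units, and functional forms below are assumptions for exposition, not the actual DSSAM formulation), a single reach can be stepped forward in time with first-order nutrient decay, algal uptake of nutrients, and temperature-dependent algal growth:

public class ReachStep {
    // Advance one river reach by one explicit-Euler time step.
    // nutrient: dissolved nutrient concentration (mg/L); algae: benthic
    // algal biomass (g/m^2); tempC: water temperature. All rate
    // constants are illustrative placeholders, not calibrated values.
    static double[] step(double nutrient, double algae, double tempC, double dtDays) {
        double kDecay = 0.10;   // first-order decay rate, 1/day
        double uptake = 0.02;   // nutrient uptake per unit algal biomass per day
        double halfSat = 0.05;  // Monod half-saturation constant, mg/L
        double growth = 0.5 * Math.pow(1.066, tempC - 20.0); // temperature-scaled growth, 1/day

        double dN = (-kDecay * nutrient - uptake * algae) * dtDays;
        double dA = growth * algae * (nutrient / (nutrient + halfSat)) * dtDays;
        return new double[] { Math.max(0.0, nutrient + dN), algae + dA };
    }

    public static void main(String[] args) {
        double nutrient = 1.2, algae = 10.0;
        for (int day = 0; day < 30; day++) {
            double[] next = step(nutrient, algae, 18.0, 1.0);
            nutrient = next[0];
            algae = next[1];
        }
        System.out.printf("after 30 days: nutrient=%.3f mg/L, algae=%.1f g/m^2%n", nutrient, algae);
    }
}

Real water-quality models add advection between reaches, multiple coupled pollutants, and calibrated rate constants, but the time-stepped structure is the same.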
Subsequent to the first generation of DSSAM model development, calibration and application, later refinements were made. These augmentations to model functionality focused on increased flexibility in modeling the diel cycle, and also allowed particulate nitrogen and phosphorus to be included in the analysis. In developing DSSAM III, several changes were made to the model's operation and scope.
Applications
Numerous different uses of the model have been made, including (a) analysis of public policies for urban stormwater runoff, (b) research into agricultural methods for minimizing surface runoff, (c) innovative solutions for non-point source control, and (d) engineering aspects of treated wastewater discharge. Regarding stormwater runoff in Washoe County, the specific elements within a new xeriscape ordinance were analyzed for efficacy using the model. For the varied agricultural uses in the watershed, the model was run to understand the principal sources of adverse impact, and management practices were developed to reduce in-river pollution. Use of the model has specifically been conducted to analyze survival of two endangered species found in the Truckee River and Pyramid Lake: the Cui-ui sucker fish (endangered 1967) and the Lahontan cutthroat trout (threatened 1970). When the model is used for surface runoff reaching a stream, this pollutant input can be viewed as a line source (i.e., a continuous linear source of pollution entering the waterway).
See also
Nonpoint source pollution
SWAT model
Stochastic Empirical Loading and Dilution Model
Storm Water Management Model
References
External links
U.S. Environmental Protection Agency TMDL program for the Truckee River
Final TMDL waste loads for the Truckee Basin derived from the DSSAM Model
Computer-aided engineering software
United States Environmental Protection Agency
Hydrology
Water pollution | DSSAM Model | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 938 | [
"Environmental engineering",
"Hydrology",
"Water pollution"
] |
5,457,518 | https://en.wikipedia.org/wiki/Noryl | The NORYL family of modified resins consists of amorphous blends of polyphenylene oxides (PPO) or polyphenylene ether (PPE) resins with polystyrene. They combine the inherent benefits of PPE resin (affordable high heat resistance, good electrical insulation properties, excellent hydrolytic stability and the ability to use non-halogen fire retardant packages), with excellent dimensional stability, good processability and low density.
They were originally developed in 1966 by General Electric Plastics (now owned by SABIC). NORYL is a registered trademark of SABIC Innovative Plastics IP B.V.
NORYL resins are a rare example of a homogeneous mixture of two polymers. Most polymers are incompatible with one another, and so tend to separate into distinct phases when mixed. The two polymers' compatibility in NORYL resins is due to the presence of a benzene ring in the repeat units of both chains.
Properties
The addition of PPE to polystyrene raises the blend's glass transition temperature above 100 °C, owing to the high Tg of PPE, so NORYL resin is stable in boiling water. The precise value of the transition depends on the exact composition of the grade used: there is a smooth linear relation between the weight content of polystyrene and the Tg of the blend. Due to its good electrical resistance, it is widely used in switch boxes. However, product design is important in maximising the strength of the product, especially in eliminating sharp corners and other stress concentrations. Injection molding must ensure that moldings are stress-free.
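Expressed as a formula, a simple linear mixing rule consistent with the statement above (offered as an illustration, not a sourced model for NORYL specifically) would be

T_g^{\text{blend}} \approx w_{\text{PS}}\, T_g^{\text{PS}} + (1 - w_{\text{PS}})\, T_g^{\text{PPE}}

where w_PS is the weight fraction of polystyrene (Tg roughly 100 °C) and T_g^{PPE} is the much higher glass transition temperature of PPE (roughly 210 °C); increasing the polystyrene content thus moves the blend's Tg linearly down from the PPE value toward 100 °C.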
Like most other amorphous thermoplastics, Noryl is sensitive to environmental stress cracking when in contact with many organic liquids. Compounds such as gasoline, kerosene, and methylene chloride may initiate brittle cracks resulting in product failure.
Applications
NORYL resins offer a good balance of mechanical and chemical properties, and may be suitable for a wide variety of applications such as in electronics, electrical equipment, coating, machinery, etc.
One of the most famous applications of NORYL was the molded case of the original Apple II computer. At that point, the product was referred to internally at Apple (1978) as "GE NORYL". A famous picture of an Apple II was made after a fire almost completely melted the NORYL case, but the motherboard, when removed from the case, was found to still operate.
NORYL resins have possible applications in the production of hydrogen, where they could serve as cost-effective electrodes in an electrolyzer, replacing expensive rare elements. The resin is highly resistant to alkaline potassium hydroxide. For conductivity, the plastic is sprayed with a nickel-based catalyst.
NORYL resins are being investigated as a possible replacement for polycarbonate used in the manufacturing of Blu-ray Discs.
It is also used in certain construction products, like water pumps for swimming pools.
See also
Hydrogen economy
Thermoplastic
References
External links
MIT Technology Review - Hydrogen on the Cheap
Polymers
Polyethers
SABIC | Noryl | [
"Chemistry",
"Materials_science"
] | 622 | [
"Polymers",
"Polymer chemistry"
] |
5,457,540 | https://en.wikipedia.org/wiki/Ka/Ks%20ratio |
In genetics, the Ka/Ks ratio, also known as ω or dN/dS ratio, is used to estimate the balance between neutral mutations, purifying selection and beneficial mutations acting on a set of homologous protein-coding genes. It is calculated as the ratio of the number of nonsynonymous substitutions per nonsynonymous site (Ka), in a given period of time, to the number of synonymous substitutions per synonymous site (Ks), in the same period. The latter are assumed to be neutral, so that the ratio indicates the net balance between deleterious and beneficial mutations. Values of Ka/Ks significantly above 1 are unlikely to occur without at least some of the mutations being advantageous. If beneficial mutations are assumed to make little contribution, then Ka/Ks estimates the degree of evolutionary constraint.
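In symbols, writing N_d for the number of nonsynonymous substitutions observed over N nonsynonymous sites, and S_d for the number of synonymous substitutions over S synonymous sites (a generic formulation; individual methods differ in how these four quantities are estimated):

\omega = \frac{K_a}{K_s} = \frac{N_d / N}{S_d / S}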
Context
Selection acts on variation in phenotypes, which are often the result of mutations in protein-coding genes. The genetic code is written in DNA sequences as codons, groups of three nucleotides. Each codon represents a single amino acid in a protein chain. However, there are more codons (64) than amino acids found in proteins (20), so many codons are effectively synonyms. For example, the DNA codons TTT and TTC both code for the amino acid phenylalanine, so a change from the third T to C makes no difference to the resulting protein. On the other hand, the codon GAG codes for glutamic acid while the codon GTG codes for valine, so a change from the middle A to T does change the resulting protein, for better or (more likely) worse, so the change is not a synonym. These two cases are summarised below:

Codon change   Amino acid change                Class
TTT → TTC      phenylalanine → phenylalanine    synonymous
GAG → GTG      glutamic acid → valine           nonsynonymous
The Ka/Ks ratio measures the relative rates of synonymous and nonsynonymous substitutions at a particular site.
Methods
Methods for estimating Ka and Ks use a sequence alignment of two or more nucleotide sequences of homologous genes that code for proteins (rather than genetic switches controlling development or the rate of activity of other genes). Methods can be classified into three groups: approximate methods, maximum-likelihood methods, and counting methods. However, unless the sequences to be compared are distantly related (in which case maximum-likelihood methods prevail), the class of method used has minimal impact on the results obtained; more important are the assumptions implicit in the chosen method.
Approximate methods
Approximate methods involve three basic steps: (1) counting the number of synonymous and nonsynonymous sites in the two sequences, or estimating this number by multiplying the sequence length by the proportion of each class of substitution; (2) counting the number of synonymous and nonsynonymous substitutions; and (3) correcting for multiple substitutions. These steps, particularly the latter, require simplistic assumptions if they are to be achieved computationally; for reasons discussed later, it is impossible to determine the number of multiple substitutions exactly.
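A hedged sketch of steps (2) and (3) follows, assuming the site counts from step (1) are already in hand and using the Jukes-Cantor formula d = -(3/4) ln(1 - 4p/3) as the multiple-substitution correction (one common simple choice; published approximate methods such as Nei & Gojobori's differ in detail):

public class KaKs {
    // Jukes-Cantor correction: converts an observed proportion p of
    // differing sites into an estimated number of substitutions per
    // site, partially accounting for unseen multiple substitutions.
    static double jukesCantor(double p) {
        return -0.75 * Math.log(1.0 - (4.0 / 3.0) * p);
    }

    // Ka/Ks from raw counts: nd nonsynonymous and sd synonymous
    // differences, over n nonsynonymous and s synonymous sites
    // (the sites having been counted in step 1).
    static double ratio(double nd, double n, double sd, double s) {
        double ka = jukesCantor(nd / n);
        double ks = jukesCantor(sd / s);
        return ka / ks;
    }

    public static void main(String[] args) {
        // Illustrative counts only; prints roughly 0.12, which would
        // be read as purifying selection (Ka/Ks well below 1).
        System.out.println(ratio(12, 600, 30, 200));
    }
}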
Maximum-likelihood methods
The maximum-likelihood approach uses probability theory to complete all three steps simultaneously. It estimates critical parameters, including the divergence between sequences and the transition/transversion ratio, by deducing the most likely values to produce the input data.
Counting methods
In order to quantify the number of substitutions, one may reconstruct the ancestral sequence and record the inferred changes at sites (straight counting, likely to provide an underestimate); fit the substitution rates at sites into predetermined categories (a Bayesian approach, poor for small data sets); or generate an individual substitution rate for each codon (computationally expensive). Given enough data, all three of these approaches will tend to the same result.
Interpreting results
The Ka/Ks ratio is used to infer the direction and magnitude of natural selection acting on protein coding genes. A ratio greater than 1 implies positive or Darwinian selection (driving change); less than 1 implies purifying or stabilizing selection (acting against change); and a ratio of exactly 1 indicates neutral (i.e. no) selection. However, a combination of positive and purifying selection at different points within the gene or at different times along its evolution may cancel each other out. The resulting averaged value can mask the presence of one of the selections and lower the seeming magnitude of another selection.
Of course, it is necessary to perform a statistical analysis to determine whether a result is significantly different from 1, or whether any apparent difference may occur as a result of a limited data set. The appropriate statistical test for an approximate method involves approximating the distribution of dN − dS as normal, and determining whether 0 falls within the central region of that distribution. More sophisticated likelihood techniques can be used to analyse the results of a maximum-likelihood analysis, by performing a chi-squared test to distinguish between a null model (Ka/Ks = 1) and the observed results.
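Concretely, the likelihood-ratio statistic used for such a comparison (standard statistical practice rather than anything specific to Ka/Ks) is

D = 2\,(\ln L_1 - \ln L_0) \sim \chi^2_k

where L_1 is the maximized likelihood of the model allowing Ka/Ks to vary, L_0 that of the null model fixing Ka/Ks = 1, and k the difference in the number of free parameters between the two models.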
Utility
The Ka/Ks ratio is a more powerful test of the neutral model of evolution than many others available in population genetics as it requires fewer assumptions.
Complications
There is often a systematic bias in the frequency at which various nucleotides are swapped, as certain mutations are more probable than others. For instance, some lineages may swap C to T more frequently than they swap C to A. In the case of the amino acid asparagine, which is coded by the codons AAT or AAC, a high C→T exchange rate will increase the proportion of synonymous substitutions at this codon, whereas a high C→A exchange rate will increase the rate of non-synonymous substitutions. Because it is rather common for transitions (T↔C & A↔G) to be favoured over transversions (other changes), models must account for the possibility of non-homogeneous rates of exchange. Some simpler approximate methods, such as those of Miyata & Yasunaga and Nei & Gojobori, neglect to take these into account, which generates a faster computational time at the expense of accuracy; these methods will systematically overestimate N and underestimate S.
Further, there may be a bias in which certain codons are preferred in a gene, as a certain combination of codons may improve translational efficiency. A 2022 study reported that synonymous mutations in representative yeast genes are mostly strongly non-neutral, which calls into question the assumptions underlying use of the Ka/Ks ratio.
In addition, as time progresses, it is possible for a site to undergo multiple modifications. For instance, a codon may switch from AAA→AAC→AAT→AAA. There is no way of detecting multiple substitutions at a single site, thus the estimate of the number of substitutions is always an underestimate. In addition, in the example above two non-synonymous and one synonymous substitution occurred at the third site; however, because substitutions restored the original sequence, there is no evidence of any substitution. As the divergence time between two sequences increases, so too does the amount of multiple substitutions. Thus "long branches" in a dN/dS analysis can lead to underestimates of both dN and dS, and the longer the branch, the harder it is to correct for the introduced noise. Of course, the ancestral sequence is usually unknown, and two lineages being compared will have been evolving in parallel since their last common ancestor. This effect can be mitigated by constructing the ancestral sequence; the accuracy of this sequence is enhanced by having a large number of sequences descended from that common ancestor to constrain its sequence by phylogenetic methods.
Methods that account for biases in codon usage and transition/transversion rates are substantially more reliable than those that do not.
Limitations
Although the Ka/Ks ratio is a good indicator of selective pressure at the sequence level, evolutionary change can often take place in the regulatory region of a gene which affects the level, timing or location of gene expression. Ka/Ks analysis will not detect such change. It will only calculate selective pressure within protein coding regions. In addition, selection that does not cause differences at an amino acid level—for instance, balancing selection—cannot be detected by these techniques.
Another issue is that heterogeneity within a gene can make a result hard to interpret. For example, if Ka/Ks = 1, it could be due to relaxed selection, or to a chimera of positive and purifying selection at the locus. A solution to this limitation would be to apply Ka/Ks analysis across many species at individual codons.
The Ka/Ks method requires a rather strong signal in order to detect selection.
In order to detect selection between lineages, then the selection, averaged over all sites in the sequence, must produce a Ka/Ks greater than one—quite a feat if regions of the gene are strongly conserved.
In order to detect selection at specific sites, then the Ka/Ks ratio must be greater than one when averaged over all included lineages at that site—implying that the site must be under selective pressure in all sampled lineages.
This limitation can be moderated by allowing the Ka/Ks rate to take multiple values across sites and across lineages; the inclusion of more lineages also increases the power of a sites-based approach.
Further, the method lacks the capability to distinguish between positive and negative nonsynonymous substitutions. Some amino acids are chemically similar to one another, whereas other substitutions may place an amino acid with wildly different properties to its precursor. In most situations, a smaller chemical change is more likely to allow the protein to continue to function, and a large chemical change is likely to disrupt the chemical structure and cause the protein to malfunction. However, incorporating this into a model is not straightforward as the relationship between a nucleotide substitution and the effects of the modified chemical properties is very difficult to determine.
An additional concern is that the effects of time must be incorporated into an analysis, if the lineages being compared are closely related; this is because it can take a number of generations for natural selection to "weed out" deleterious mutations from a population, especially if their effect on fitness is weak. This limits the usefulness of the Ka/Ks ratio for comparing closely related populations.
Individual codon approach
Additional information can be gleaned by determining the Ka/Ks ratio at specific codons within a gene sequence. For instance, the frequency-tuning region of an opsin may be under enhanced selective pressure when a species colonises and adapts to a new environment, whereas the region responsible for initializing a nerve signal may be under purifying selection. In order to detect such effects, one would ideally calculate the Ka/Ks ratio at each site. However, this is computationally expensive, and in practice a number of Ka/Ks classes are established, and each site is assigned to the best-fitting class.
The first step in identifying whether positive selection acts on sites is to compare a test where the Ka/Ks ratio is constrained to be < 1 in all sites to one where it may take any value, and see if permitting Ka/Ks to exceed 1 in some sites improves the fit of the model. If this is the case, then sites fitting into the class where Ka/Ks > 1 are candidates to be experiencing positive selection. This form of test can either identify sites that further laboratory research can examine to determine possible selective pressure; or, sites believed to have functional significance can be assigned into different Ka/Ks classes before the model is run.
Notes
References
Further reading
External links
KaKs_Calculator
Free online server tool that calculates KaKs ratios among multiple sequences
SeqinR: A free and open biological sequence analysis package for the R language that includes KaKs calculation
Molecular evolution
Genetics
Statistical ratios | Ka/Ks ratio | [
"Chemistry",
"Biology"
] | 2,434 | [
"Evolutionary processes",
"Genetics",
"Molecular evolution",
"Molecular biology"
] |
5,458,288 | https://en.wikipedia.org/wiki/Horse%20Tamers | The colossal pair of marble "Horse Tamers"—often identified as Castor and Pollux—have stood since antiquity near the site of the Baths of Constantine on the Quirinal Hill, Rome. Napoleon's agents wanted to include them among the classical booty removed from Rome after the 1797 Treaty of Tolentino, but they were too large to be buried or to be moved very far. They are fourth-century Roman copies of Greek originals. They gave the Quirinal its medieval name, Monte Cavallo ("horse hill"), which lingered into the nineteenth century. Their coarseness has been noted, while their vigor—notably that of the horses—has been admired. The Colossi of the Quirinal are the original exponents of this theme of dominating power, which has appealed to powerful patrons since the seventeenth century, from Marly-le-Roi to Saint Petersburg.
The huge sculptures were noted in the medieval guidebook for pilgrims, Mirabilia Urbis Romae. Their ruinous bases still bore inscriptions OPUS FIDIÆ and OPUS PRAXITELIS, hopeful attributions that must have dated from Late Antiquity (Haskell and Penny 1981, p 136). The Mirabilia confidently reported that these were "the names of two seers who had arrived in Rome under Tiberius, naked, to tell the 'bare truth' that the princes of the world were like horses which had not yet been mounted by a true king."
Between 1589 and 1591, Sixtus V had them restored and set on new pedestals flanking a fountain, another engineering triumph for Domenico Fontana, who had moved and re-erected the obelisk in Piazza San Pietro. In 1783-86 they were re-set at an angle, and an obelisk, which had recently been found at the Mausoleum of Augustus, was re-erected between them. (The present granite basin, which had served for watering cattle in the Roman Forum, was set between them in 1818.)
An interpretation of their subject as Alexander and Bucephalus was proposed in 1558 by Onofrio Panvinio, who suggested that Constantine had removed them from Alexandria, where they would have referred to the familiar legend of the city's founder. This became a popular alternative to their identification as the Dioscuri. According to a story long repeated by popular guides, they were created by Phidias and Praxiteles competing for fame, despite these two long preceding Alexander.
Other works
About 1560 a second pair of colossal marble figures accompanied by horses were unearthed and set up on either side of the entrance to the Campidoglio.
The fame of the Horse Tamers recommended them for other situations where the ruling of base natures by higher nature was iconographically desirable. The Marly Horses made by Guillaume Coustou the Elder for Louis XV at Marly-le-Roi were re-set triumphantly in Paris at the time of the French Revolution, flanking the entrance to the Champs-Elysées. In the 1640s, bronze replicas were to flank the entrance to the Louvre: moulds were taken for the purpose, but the project foundered. Paolo Triscornia carved what seem to have been the first full-scale replicas of the groups for the entrance of the Manège (the riding school of the royal guards) in St. Petersburg (Haskell and Penny 1981, p 139). The standing of the heroic nudes had risen with the new approach to Antiquity of Neoclassicism: Sir Richard Westmacott was commissioned to cast a full-scale bronze of the "Phidias" figure, supplied with a shield and sword, as a tribute to the Duke of Wellington; it was erected at Hyde Park Corner opposite the Iron Duke's London residence Apsley House, where some French affected to think it was the Duke himself, stark naked. Christian Friedrich Tieck placed copies of the figures, in cast iron, atop Karl Friedrich Schinkel's Altes Museum, Berlin. In St Petersburg, the Anichkov Bridge has four colossal bronze Horse Tamer sculptures by Baron Peter Klodt von Jürgensburg. In Brooklyn's Prospect Park, at the Ocean Parkway ("Park Circle") entrance, stands a pair of bronze Horse Tamers sculptures (1899) by Frederick MacMonnies, installed as the newly combined City of New York was spreading across the Long Island landscape.
Notes
References
See also
4th-century Roman sculptures
Roman copies of Greek sculptures
Horses in art
Outdoor sculptures in Rome
Castor and Pollux
Pope Sixtus V
Alexander the Great in art | Horse Tamers | [
"Astronomy"
] | 933 | [
"Castor and Pollux",
"Astronomical myths"
] |
5,458,586 | https://en.wikipedia.org/wiki/List%20of%20psychedelic%20drugs | The following is a list of psychedelic drugs of various chemical classes, including both naturally occurring and synthetic compounds. Serotonergic psychedelics are usually considered the "classical" psychedelics, whereas the other classes are often seen as having only secondary psychedelic properties; nonetheless all of the compounds listed here are considered psychoactive and hallucinogenic in humans to some degree.
Some of these compounds may be classified differently or under more than one category due to a unique structural classification, multiple mechanisms of action, or the fact that the precise pharmacodynamic actions of the compound are not yet completely understood. Because of the vast amount of possible substitutions and chemical analogs of most psychedelic compounds, the total diversity of chemical compounds which produce psychedelic effects in humans is not fully reflected within this list, leaving room for many that have not yet been sufficiently investigated and others that have not yet been discovered.
Naturally occurring compounds are marked with a †.
Serotonergic psychedelics (serotonin 5-HT2A receptor agonists)
Indoles
Tryptamines (more specifically alkylated tryptamines)
Psilocin†, also known as '4-HO-DMT'; another active constituent of the Psilocybe genus of mushrooms; also a metabolite of psilocybin and psilacetin
Psilocybin†, also known as '4-PO-DMT'; the primary active constituent of the Psilocybe genus of mushrooms; its effects are partially attributed to psilocin, to which it is a prodrug via dephosphorylation
Bufotenin†, also known as '5-HO-DMT' and dimethylserotonin; another constituent of the skin and venom of psychoactive toads, its psychedelic activity is disputed; also a metabolite of 5-MeO-DMT
Baeocystin†, also known as '4-PO-NMT'; another active constituent of the Psilocybe genus of mushrooms; its psychedelic activity is disputed
Aeruginascin†, also known as '4-PO-N-TMT', an active constituent of the mushroom Inocybe aeruginascens
5-MeO-DMT†, the primary active constituent of the skin and venom of psychoactive toads, a prodrug to bufotenin via demethylation
N,N-Dimethyltryptamine†, also known as 'DMT'; the primary active constituent of the Amerindian brew ayahuasca; endogenously present in various plants and animals, including humans, possibly a trace amine neurotransmitter. NB-DMT might act as a prodrug of N,N-DMT
5-Bromo-DMT†, found in the marine invertebrates Smenospongia aurea and Smenospongia echina, as well as in Verongula rigida
N-Methyl-N-ethyltryptamine, also known as 'MET'
N-Methyl-N-isopropyltryptamine, also known as 'MiPT'
N-Methyl-N-propyltryptamine, also known as 'MPT'
N,N-Diethyltryptamine, also known as 'DET'
N-Ethyl-N-isopropyltryptamine, also known as 'EiPT'
N-Methyl-N-butyltryptamine, also known as 'MBT'
N-Propyl-N-isopropyltryptamine, also known as 'PiPT'
N,N-Dipropyltryptamine, also known as 'DPT'
N,N-Diisopropyltryptamine, also known as 'DiPT'
N,N-Diallyltryptamine, also known as 'DALT'
N,N-Dibutyltryptamine, also known as 'DBT'
N-Ethyltryptamine, also known as 'NET'
N-Methyltryptamine†, also known as 'NMT'; its psychedelic activity is disputed
Trimethyltryptamine, also known as 'TMT' (2,N,N-TMT, 5,N,N-TMT, and 7,N,N-TMT)
α-Methyltryptamine, also known as 'αMT' and 'AMT'; also has entactogenic properties
α-Ethyltryptamine, also known as 'αET' and 'AET'; also has entactogenic properties
α,N-DMT
α,N,N-Trimethyltryptamine, also known as 'α-TMT'
Ethocybin, also known as '4-PO-DET', 'CEY-19', and 'CEY-39'
4-HO-MET, also known as 'Metocin', 'Methylcybin', and 'Colour'
4-HO-DET, also known as 'Ethocin' and 'CZ-74'
4-HO-MPT, also known as 'Meprocin'
4-HO-MiPT, also known as 'Miprocin'
4-HO-MALT
4-HO-DPT, also known as 'Deprocin'
4-HO-DiPT, also known as 'Iprocin'
4-HO-DALT, also known as 'Daltocin'
4-HO-DBT
4-HO-DSBT
4-HO-αMT
4-HO-MPMI, also known as 'Lucigenol'
4-HO-TMT
4-HO-1,N,N-TMT, also known as '1-Me-4-HO-DMT' and '1-methylpsilocin'
4-HO-5-MeO-DMT, also known as 'Psilomethoxin'
4-AcO-DMT, also known as 'psiloacetin'; its effects are partially attributed to psilocin, to which it is a prodrug via deacetylation
4-AcO-MET, also known as 'Metacetin'
4-AcO-MiPT
4-AcO-MALT
4-AcO-DET, also known as 'Ethacetin'
4-AcO-EiPT, also known as 'Ethipracetin'
4-AcO-DPT, also known as 'Depracetin'
4-AcO-DiPT, also known as 'Ipracetin'
4-AcO-DALT, also known as 'Daltacetin'
4-MeO-DMT
4-MeO-MiPT
5-MeO-NMT†
5-MeO-MET
5-MeO-MPT
5-MeO-MiPT, also known as 'Moxy'; also has entactogenic properties
5-MeO-MALT
5-MeO-DET
5-MeO-EiPT
5-MeO-EPT
5-MeO-PiPT
5-MeO-DPT
5-MeO-DiPT, also known as 'Foxy Methoxy'
5-MeO-DALT
5-MeO-αMT, also has entactogenic properties
5-MeO-αET, also has entactogenic properties
5-MeO-MPMI
5-MeO-2,N,N-TMT, also known as 'Indapex'
5-MeO-7,N,N-TMT
5-MeO-a,N-DMT, also known as 'α,N,O-TMS'
4-F-5-MeO-DMT
5-MeS-DMT
5-Me-MiPT, its psychedelic activity is disputed
5-HO-DiPT
2-α-DMT
2-Me-DET
4-Me-αMT
4-Me-αET, also has entactogenic properties
7-Me-αET, also has entactogenic properties
4,5-DHP-AMT, also known as 'AL-37350A'
4,5-DHP-DMT
4,5-MDO-DMT
4,5-MDO-DiPT
5,6-MDO-DiPT
5,6-MDO-MiPT
5-Fluoro-αMT, also has entactogenic properties
6-Fluoro-αMT
6-Fluoro-DMT
N,N-Tetramethylenetryptamine, also known as 'Pyr-T'
4-HO-pyr-T
5-MeO-pyr-T
RU-28306, also known as '4,a-Methylene-N,N-DMT'
O-4310, also known as '6-Fluoro-1-Isopropyl-4-HO-DMT'
CP-132,484, also known as '4,5-DHP-1-Methyltryptamine'
Benzofuran derivatives (technically not tryptamines)
Dimemebfe, also known as '5-MeO-BFE'
5-MeO-DiBF
Ibogoids (can be classified as complex tryptamines)
Ibogaine†, the primary active constituent of iboga rootbark; also has dissociative properties
Voacangine†, another active constituent of iboga rootbark
Ergolines (more specifically lysergamides, which can be classified as complex tryptamines; also contain a phenethylamine backbone)
Lysergic acid diethylamide, also known as 'LSD' and 'acid'
Lysergic acid amide†, also known as 'LSA' and 'ergine'; the primary active constituent of morning glory and Hawaiian baby woodrose seeds
N1-Methyl-lysergic acid diethylamide, also known as 'MLD-41'
N-Acetyl-lysergic acid diethylamide, also known as 'ALD-52'
1-Propionyl-lysergic acid diethylamide, also known as '1P-LSD'; its effects are partially attributed to LSD, to which it is a prodrug via hydrolyzation
1‐cyclopropanoyl‐d‐lysergic acid diethylamide, also known as '1cP-LSD'
1-valeryl-D-lysergic acid diethylamide, also known as '1V-LSD'
6-Allyl-6-nor-lysergic acid diethylamide, also known as 'AL-LAD'
6-Butyl-6-nor-lysergic acid diethylamide, also known as 'BU-LAD'
6-Ethyl-6-nor-lysergic acid diethylamide, also known as 'ETH-LAD'
1-Propionyl-6-Ethyl-6-nor-lysergic acid diethylamide, also known as '1P-ETH-LAD'
6-Propyl-6-nor-lysergic acid diethylamide, also known as 'PRO-LAD'
6-Cyclopropyl-6-nor-lysergic acid diethylamide, also known as 'CYP-LAD'
6-nor-Lysergic acid diethylamide, also known as 'PARGY-LAD'
Lysergic acid ethylamide, also known as 'LAE-32'
Lysergic acid α-hydroxyethylamide†, also known as 'LSH' and 'LAH'; another active constituent of morning glory seeds; an active constituent of some species of fungi
Lysergic acid 2-butyl amide, also known as 'LSB'
Lysergic acid 3-pentyl amide, also known as 'LSP'
Lysergic acid methyl ester, also known as 'LSME'
Lysergic acid 2,4-dimethylazetidide, also known as 'LSZ' and 'LA-SS-Az'
Lysergic acid piperidine, also known as 'LSD-Pip'; its psychedelic activity is disputed
N,N-Dimethyl-lysergamide, also known as 'DAM-57'
Methylisopropyllysergamide, also known as 'MIPLA'
N,N-Diallyllysergamide, also known as 'DAL'
N-Pyrrolidyllysergamide, also known as 'LPD-824'
N-Morpholinyllysergamide, also known as 'LSM-775'
1-methyl-lysergic acid butanolamide, also known as 'Methysergide'; the active constituent of Sansert and Deseril; a prodrug which has to be metabolized to methylergometrine to become psychoactive
Lysergic acid β-propanolamide†, also known as 'Ergonovine' and 'Ergometrine'; another active constituent of morning glory seeds, and an active constituent of ergot fungi
Lysergic acid 1-butanolamide†, also known as 'Methylergonovine', 'Methergine', and 'Methylergometrine'; another active constituent of morning glory seeds and of ergot fungi
Phenethylamines (more specifically alkoxylated phenethylamines)
Substituted phenethylamines
Mescaline†, the primary active constituent of certain cacti, such as peyote and San Pedro cactus
Lophophine†, also known as 'MMDPEA'; another active constituent of certain cacti, such as peyote and the San Pedro cactus (Trichocereus macrogonus var. pachanoi, syn. Echinopsis pachanoi); also has entactogenic properties
Isomescaline
Cyclopropylmescaline
Thioisomescaline (2-TIM, 3-TIM, and 4-TIM)
4-Desoxymescaline
Jimscaline
Escaline
Metaescaline
Thiometaescaline (3-TME, 4-TME, and 5-TME)
Trisescaline
Thiotrisescaline (3-T-TRIS and 4-T-TRIS)
Symbescaline
Asymbescaline
Thiosymbescaline (3-TSB and 4-TSB)
Phenescaline
Allylescaline, also known as 'AL'
Methallylescaline
Proscaline
Isoproscaline
Metaproscaline
Thioproscaline
Buscaline
Thiobuscaline
α-ethylmescaline, also known as 'AEM'
Ariadne, also known as 'α-Et-DOM', '4C-D', and 'Dimoxamine'
Macromerine
MEPEA
TOM (2-TOM and 5-TOM)
Bis-TOM
TOMSO, also known as '2-methoxy-4-methyl-5-methylsulfinylamphetamine'
TOET (2-TOET and 5-TOET)
BOH
BOM, also known as 'β-Methoxy-mescaline'
β-D
4-D
DME
F-2
F-22
FLEA, also known as 'MDHMA'
MDPH
MDMP
Propynyl
2C family (2,5-dimethoxy, 4-substituted phenethylamines)
βk-2C-B
2C-B
2CB-2EtO
2CB-5EtO
2CB-diEtO
2C-B-FLY
2C-B-BUTTERFLY
2C-C
2C-D
2CD-2EtO
2CD-diEtO
2CD-5EtO
2C-E
2C-EF
2C-F
2C-G (2C-G-1, 2C-G-2, 2C-G-3, 2C-G-4, 2C-G-5, 2C-G-6, and 2C-G-N)
2C-H
2C-I
2CI-2EtO
2C-iP
2C-N
2C-O
2C-O-4
2C-P
2C-SE
2C-T
2CT-5EtO
2C-T-2
2CT-2-2EtO
2CT-2-5EtO
2CT-2-diEtO
2C-T-4 (2C-T-4 and Ψ-2C-T-4)
2CT-4-2EtO
2C-T-7
2CT-7-2EtO
2C-T-8
2C-T-9
2C-T-13
2C-T-15
2C-T-16
2C-T-17
2C-T-19,
2C-T-21
2C-TFM
2C-YN
BOB, also known as 'β-Methoxy-2C-B'
BOD, also known as 'β-Methoxy-2C-D'
BOHD, also known as 'β-Hydroxy-2C-D'
HOT-2
HOT-7
HOT-17
Indane derivatives (technically not phenethylamines)
2CB-Ind
Benzocyclobutene derivatives (technically not phenethylamines)
2C-BCB, also known as 'TCB-2'
NBOMe derivatives
NBOMe-mescaline
2C-H-NBOMe, also known as '25H-NBOMe'
2C-C-NBOMe, also known as '25C-NBOMe'
2CBCB-NBOMe, also known as 'NBOMe-TCB-2'
2CBFly-NBOMe, also known as 'Cimbi-31'
2C-B-NBOMe, also known as '25B-NBOMe', 'M25B-NBOMe', 'BOM 2-CB', 'Cimbi-36', 'Nova', or 'New Nexus'
2C-I-NBOMe, also known as '25I-NBOMe', 'Cimbi-5', "Solaris", or "N-Bomb"
2C-TFM-NBOMe, also known as '25TFM-NBOMe'
2C-D-NBOMe, also known as '25D-NBOMe'
2C-G-NBOMe, also known as '25G-NBOMe'
2C-E-NBOMe, also known as '25E-NBOMe'
2C-P-NBOMe, also known as '25P-NBOMe'
2C-iP-NBOMe, also known as '25iP-NBOMe'
2C-CN-NBOMe, also known as '25CN-NBOMe'
2C-N-NBOMe, also known as '25N-NBOMe'
2C-T-NBOMe, also known as '25T2-NBOMe'
2C-T-4-NBOMe, also known as '25T4-NBOMe'
2C-T-7-NBOMe, also known as '25T7-NBOMe'
DMBMPP, 2-Benzylpiperidine analogue of 25B-NBOMe
NBOH derivatives
2C-C-NBOH, also known as '25C-NBOH' and 'NBOH-2CC'
2C-B-NBOH, also known as '25B-NBOH'
2C-I-NBOH, also known as '25I-NBOH'
2C-CN-NBOH, also known as '25CN-NBOH' and 'NBOH-2C-CN'
NBMD derivatives
2C-I-NBMD, also known as '25I-NBMD'
NBF derivatives
2C-C-NBF, also known as '25C-NBF'
2C-B-NBF, also known as '25B-NBF'
2C-I-NBF, also known as '25I-NBF'
Substituted amphetamines (alpha-methyl-phenethylamines)
3C family (3,5-dimethoxy, 4-substituted amphetamines)
3C-E
3C-P
3C-DFE
3C-BZ
DOx family (2,5-dimethoxy, 4-substituted amphetamines)
DOAM
DOB
Meta-DOB
Methyl-DOB
DOBU
DOC
DOEF
DOET, also known as 'DOE'
DOI
DOM, also known as 'STP'
Ψ-DOM
DON
DOPR
DOiPR
DOT, also known as 'Aleph' (Aleph-2, Aleph-4, Aleph-6, and Aleph-7)
Meta-DOT
Ortho-DOT
DOTFM
Phenylcyclopropylamine derivatives (technically not amphetamines)
DMCPA
DMMDA
DMMDA-2
2,5-dimethoxy-3,4-dimethylamphetamine, also known as 'Ganesha'; (G-3, G-4, G-5, and G-N)
4-methyl-2,5-dimethoxymethamphetamine, also known as 'Beatrice', 'MDO-D', and 'MDOM'
2,N-dimethyl-4,5-methylenedioxyamphetamine, also known as 'Madam-6'
Dimethoxyamphetamine (2,4-DMA, 2,5-DMA, and 3,4-DMA)
Trimethoxyamphetamine (TMA-2, TMA-6)
Tetramethoxyamphetamine
Br-DragonFLY
TFMFly
2-Bromo-4,5-methylenedioxyamphetamine, also known as '6-Bromo-MDA'
4-Bromo-3,5-dimethoxyamphetamine
EEE
EEM
EME
EMM
EDMA
EIDA
Ethyl-J, also known as 'EBDB'
Methyl-J, also known as 'MDMB'
Ethyl-K, also known as 'EBDP'
Methyl-K, also known as 'MBDP' and 'UWA-91'
IDNNA
Iris
MDAI
MDMAI
MDAT
MDMAT
MDAL
MDBU
MDBZ
MDDM
MDIP
MDMEOET
MDMEO
MDOH, also known as 'MDH'
MDHOET
MDPL
MDCPM
MDPR
MEDA
MEM
Methyl-DMA
MMDA, also known as '3-methoxy-MDA' (2T-MMDA-3a and 4T-MMDA-2)
MMDA-2
5-Methyl-MDA
MEE
MME
MPM
DiFMDA
5-APB
6-APB, also known as 'Benzofury'
5-APDB
6-APDB
5-MAPB
5-MAPDB
6-MAPDB, its psychedelic activity is disputed
6-MAPB
6-EAPB
5-EAPB
Para-Methoxyamphetamine, also known as 'PMA' and '4-MA'
Paramethoxymethamphetamine, also known as 'PMMA', 'Methyl-MA', and '4-MMA'
4-Ethylamphetamine, also known as '4-EA'
3-Methoxy-4-methylamphetamine, also known as 'MMA'
4-Methylmethamphetamine, also known as '4-MMA'
4-Methylthioamphetamine, also known as '4-MTA'
4-Fluoroamphetamine, also known as '4-FA', 'PAL-303', 'Flux', 'Flits', 'R2D2', and 'Miley'
Norfenfluramine, also known as '3-TFMA'
Para-Iodoamphetamine, also known as 'PIA', '4-iodoamphetamine', and '4-IA'
Para-Chloroamphetamine, also known as 'PCA', '4-chloroamphetamine', and '4-CA'
Benzoxazines (more specifically cyclopropylethynylated benzoxazines)
Substituted benzoxazines
Efavirenz, the active constituent of Sustiva, Stocrin, and Efavir
Empathogens/entactogens (serotonin (5-HT) releasing agents)
Substituted methylenedioxy-phenethylamines (MDxx)
MDMA, also known as 'Molly', and 'Mandy'
MDA, also known as 'Sass'
2,3-MDA, also known as 'ORTHO-MDA'
5-Methyl-MDA
MMDA, also known as '3-methoxy-MDA'
MDEA, also known as 'MDE'
MBDB
MDAL
MDBU
MDBZ
MDDM
MDIP
MDMEOET
MDMEO
MDOH, also known as 'MDH'
MDHOET
MDPL
MDCPM
MDPR
BDB, also known as 'MDB' and 'J'
MMDA-2
DiFMDA
EIDA
Ethyl-K, also known as 'EBDP'
Lophophine†, also known as 'MMDPEA'; an active constituent of certain cacti, such as peyote and Trichocereus macrogonus var. pachanoi (San Pedro cactus)
Substituted amphetamines (exclusively; most of the substituted methylenedioxy-phenethylamines also overlap this category)
EDMA
Para-Methoxyamphetamine, also known as 'PMA'
Paramethoxymethamphetamine, also known as 'PMMA' and 'Methyl-MA'
4-Ethylamphetamine, also known as '4-EA'
3-Methoxy-4-methylamphetamine, also known as 'MMA'
4-Methylmethamphetamine, also known as '4-MMA'
4-Methylthioamphetamine, also known as '4-MTA'
4-Fluoroamphetamine, also known as '4-FA', 'PAL-303', 'Flux', 'Flits', 'R2D2', and 'Miley'
Norfenfluramine, also known as '3-TFMA'
Para-Iodoamphetamine, also known as 'PIA', '4-iodoamphetamine', and '4-IA'
Para-Chloroamphetamine, also known as 'PCA', '4-chloroamphetamine', and '4-CA'
Substituted cathinones
Methylone, also known as 'bk-MDMA' and 'MDMC'
Ethylone, also known as 'bk-MDEA' and 'MDEC'
Eutylone, also known as 'bk-EBDB'
Butylone, also known as 'bk-MBDB'
Pentylone, also known as 'bk-Methyl-K' and 'bk-MBDP'
4-Ethylmethcathinone, also known as '4-EMC'
3-Methylmethcathinone, also known as '3-MMC'
Substituted benzofurans
5-APB
6-APB
5-APDB
6-APDB
5-MAPB
5-MAPDB
6-MAPDB, its psychedelic activity is disputed
6-MAPB
5-EAPB
6-EAPB
5-MBPB
Substituted tetralins
MDAT
MDMAT
6-CAT
Tetralinylaminopropane, also known as 'TAP' and '6-APT'
Substituted indanes
Trifluoromethylaminoindane, also known as 'TAI'
Ethyltrifluoromethylaminoindane, also known as 'ETAI'
5-Iodo-2-aminoindane, also known as '5-IAI'
MMAI
MDAI
MDMAI
Indanylaminopropane, also known as '5-APDI' and 'IAP'
Substituted naphthalenes
Naphthylaminopropane, also known as 'NAP' and 'PAL-287'
Substituted phenylisobutylamines (alpha-ethyl-phenethylamines)
4-chlorophenylisobutylamine, also known as '4-chloro-α-ethylphenethylamine', '4-CAB', and 'AEPCA'
4-Methylphenylisobutylamine, also known as '4-MAB'
Ariadne, also known as 'α-Et-DOM', '4C-D', and 'Dimoxamine'
Alpha-substituted (-alkylated) tryptamines
α-methyltryptamine, also known as 'αMT' and 'AMT'
5-MeO-αMT
α-ethyltryptamine, also known as 'αET' and 'AET'
4-Me-αET
7-Me-αET
5-MeO-αET
5-MeO-MiPT
Cannabinoids (CB-1 cannabinoid receptor ligands)
Phytocannabinoids
Δ9-THC†, agonist; the primary active constituent of cannabis
11-hydroxy-Δ9-THC, agonist; an active metabolite of orally administered Δ9-THC; not technically a phytocannabinoid
CBD†, negative allosteric modulator, another major active constituent of cannabis
CBN†, a minor active constituent of cannabis, also a metabolite of THC and a product of its degradation
THCV†, a minor active constituent of cannabis
Synthetic cannabinoids
(C6)-CP 47,497
(C9)-CP 47,497
1-Butyl-3-(2-methoxybenzoyl)indole
1-Butyl-3-(4-methoxybenzoyl)indole
1-Pentyl-3-(2-methoxybenzoyl)indole
2-Isopropyl-5-methyl-1-(2,6-dihydroxy-4-nonylphenyl)cyclohex-1-ene
4-HTMPIPO
4-Nonylphenylboronic acid
5Br-UR-144
5Cl-APINACA
5Cl-UR-144
5F-3-pyridinoylindole
5F-AB-FUPPYCA
5F-ADB-PINACA
5F-ADBICA
5F-ADB
5F-AMB
5F-APINACA
5F-CUMYL-PINACA
5F-EMB-PINACA
5F-NNE1
5F-PB-22
5F-PCN
5F-PY-PICA
5F-PY-PINACA
5F-SDB-006
HHC
A-796,260
A-834,735
A-836,339
A-955,840
A-40174
A-41988
A-42574
AB-001
AB-CHFUPYCA
AB-CHMFUPPYCA
AB-CHMINACA
AB-FUBICA
AB-FUBINACA 2-fluorobenzyl isomer
AB-FUBINACA
AB-PICA
AB-PINACA
Abnormal cannabidiol
ADAMANTYL-THPINACA
ADB-CHMINACA
ADB-FUBICA
ADB-FUBINACA
ADB-PINACA
ADBICA
ADSB-FUB-187
Ajulemic acid
AM-087
AM-411
AM-630
AM-679
AM-694
AM-855
AM-883
AM-905
AM-906
AM-919
AM-926
AM-938
AM-1220
AM-1221
AM-1235
AM-1241
AM-1248
AM-1346
AM-1387
AM-1714
AM-2201
AM-2232
AM-2233
AM-2389
AM-4030
AM-4113
AM-6527
AM-6545
AM-251
AM-281
AM-404
AMB-CHMINACA
AMB-FUBINACA
AMG-1
AMG-3
AMG-36
AMG-41
APICA
APINACA, also known as 'AKB48'
APP-FUBINACA
Arachidonoyl serotonin
ACEA
ACPA
Arvanil
AZ-11713908
BAY 38-7271
BAY 59-3074
BIM-018
Biochanin A
BML-190
Nabidrox (Canbisol)
Cannabicyclohexanol
Cannabipiperidiethanone
CAY-10401
CAY-10429
CAY-10508
CB-13
CB-25
CB-52
CB-86
CBS-0550
CP 47,497
CP 55,244
CP 55,940
CUMYL-5F-PICA
CUMYL-BICA
CUMYL-PICA
CUMYL-PINACA
CUMYL-THPINACA
Dexanabinol, also known as 'HU-211'
Dimethylheptylpyran, also known as 'DMHP'
Drinabant, also known as 'AVE1625'
Dronabinol
EAM-2201
EMB-FUBINACA
FAB-144
FDU-NNE1
FDU-PB-22
FUB-144
FUB-APINACA
FUB-JWH-018
FUB-PB-22
FUBIMINA
Genistein
GW-405,833, also known as 'L-768,242'
GW-842,166X
Hemopressin
Hexahydrocannabinol
HU-210
HU-243
HU-308
HU-320
HU-331
HU-336
HU-345
HU-910
Ibipinabant, also known as 'SLV319'
IDFP
JNJ 1661010
JTE-907
JTE 7-31
JWH-007
JWH-015
JWH-018
JWH-019
JWH-030
JWH-051
JWH-073
JWH-081
JWH-098
JWH-116
JWH-122
JWH-133
JWH-139
JWH-147
JWH-149
JWH-161
JWH-164
JWH-167
JWH-175
JWH-176
JWH-182
JWH-184
JWH-185
JWH-192
JWH-193
JWH-194
JWH-195
JWH-196
JWH-197
JWH-198
JWH-199
JWH-200
JWH-203
JWH-210
JWH-229
JWH-249
JWH-250
JWH-251
JWH-302
JWH-307
JWH-359
JWH-369
JWH-370
JWH-398
JWH-424
JZL184
JZL195
Kaempferol
KM-233
L-759,633
L-759,656
LASSBio-881
LBP-1
Leelamine
Levonantradol, also known as 'CP 50,5561'
LH-21
LY-320,135
LY-2183240
MAM-2201
MDA-7
MDA-19
MDA-77
MDMB-CHMICA
MDMB-CHMINACA
MDMB-FUBINACA
Menabitan
MEPIRAPIM
Methanandamide, also known as 'AM-356'
MJ-15
MK-9470
MMB-2201
MN-18
MN-25, also known as 'UR-12'
Nabazenil
Nabilone
Nabitan
Naboctate
NESS-0327
NESS-040C5
NIDA-41020
NM-2201
NMP-7
NNE1
Nonabine
O-224
O-581
O-585
O-606
O-689
O-774
O-806
O-823
O-889
O-1057
O-1125
O-1184
O-1191
O-1238
O-1248
O-1269
O-1270
O-1376
O-1399
O-1422
O-1601
O-1602
O-1624
O-1656
O-1657
O-1660
O-1812
O-1860
O-1861
O-1871
O-1918
O-2048
O-2050
O-2093
O-2113
O-2220
O-2365
O-2372
O-2373
O-2383
O-2426
O-2484
O-2545
O-2654
O-2694
O-2715
O-2716
O-3223
O-3226
Oleoylethanolamide, also known as 'OEA'
Olvanil
Org 27569
Org 27759
Org 28312
Org 28611
Org 29647
Otenabant, also known as 'CP-945,598'
Palmitoylethanolamide, also known as 'PEA'
Parahexyl
PF-03550096
PF-04457845
PF-622
PF-750
PF-3845
PF-514273
PHOP
PipISB
Pirnabine
Pravadoline
Pregnenolone
PSB-SB-487
PSB-SB-1202
PTI-1
PTI-2
PX-1
PX-2
PX-3
QUCHIC, also known as 'BB-22'
QUPIC, also known as 'PB-22'
RCS-4
RCS-8
Rimonabant, also known as 'SR141716'
Rosonabant, also known as 'E-6776'
RTI-371
S-444,823
SDB-006
SER-601
SPA-229
SR-144,528
STS-135
Surinabant, also known as 'SR147778'
Taranabant, also known as 'MK-0364'
Tedalinab
THC-O-acetate
THC-O-phosphate
THJ-018
THJ-2201
Tinabinol
TM-38837
UR-144
URB-447
URB-597
URB-602
URB-754
VCHSR
VDM-11
VSN-16
WIN 54,461
WIN 55,212-2
WIN 56,098
XLR-11
Yangonin
Other
Harmaline†, harmala alkaloids†, and other beta-carbolines, active constituents of ayahuasca; powerful MAOIs (can be classified as indoles)
Glaucine, an Aporphine alkaloid that acts as a positive allosteric modulator of 5-HT2 receptors
Salvinorin A†, an opioid (κ-opioid receptor agonist), the active constituent of the sage Salvia divinorum
Salvinorin B methoxymethyl ether†, a semi-synthetic analogue of the natural product salvinorin A with longer duration and increased affinity and potency at the κ-opioid receptor
Salvinorin B ethoxymethyl ether†, a semi-synthetic analogue of the natural product salvinorin A with longer duration and increased affinity and potency at the κ-opioid receptor
Piperazines, such as pFPP and TFMPP, usually classified as stimulants
Myristicin† and elemicin†, the active constituents of nutmeg
Cryogenine (Vertine)†, the active constituent of certain Heimia species
Atropine†, scopolamine†, and hyoscyamine†, the active constituents of certain Solanaceae species
Ibotenic acid† and muscimol†, the active constituents of Amanita muscaria mushrooms
See also
List of entheogens
List of designer drugs
Psychedelic plants
PiHKAL
TiHKAL
References
Psychedelic drugs | List of psychedelic drugs | [
"Chemistry"
] | 8,703 | [
"Drug-related lists"
] |
5,458,832 | https://en.wikipedia.org/wiki/Texperts | Texperts was a UK-based SMS question-and-answer service. In December 2008, Texperts was bought by the United States–based Knowledge Generation Bureau, operator of the 118 118 services, a directory services company which had also entered the SMS question-and-answer market. The service was later renamed Kgb Answers and is now defunct.
History
Re5ult Ltd initially launched with a subscription-based service called 'Re5ult' on a normal mobile number. However, in August 2004 the service was relaunched on a premium text code and rebranded as 82ASK. In 2007, Re5ult switched phone numbers and brands again, relaunching the service as Texperts on short code 66000.
Performance
Posing the question "How many people [are] alive compared with all who had ever lived?", a Guardian journalist testing the service received an answer within five minutes; the closest rival answered in 22 minutes. A similar test asked for "a hotel in Ireland within half an hour of Rosslare en route to Westmeath" and received an answer within 15 minutes; a rival service answered a few minutes sooner and also provided prices, which Texperts's response lacked.
Publications
In 2006, Texperts released the book Do Sheep Shrink in the Rain?, subtitled "Over 500 of the most outrageous questions ever asked", published by Virgin Books. The 256-page paperback is a compilation of questions received and answered by the service.
See also
AskMeNow
References
Companies based in Cambridge
Technology companies established in 2003
SMS-based question answering services
2008 mergers and acquisitions
2003 establishments in England
Technology companies disestablished in 2009
2009 disestablishments in England
British companies established in 2003
British companies disestablished in 2009 | Texperts | [
"Technology"
] | 358 | [
"SMS-based question answering services"
] |
5,458,969 | https://en.wikipedia.org/wiki/Fraunhofer%20Institute%20for%20Applied%20Optics%20and%20Precision%20Engineering | The Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), also referred to as the Fraunhofer IOF, is an institute of the Fraunhofer Society for the Advancement of Applied Research (FHG). The institute is based in Jena. It conducts applied research and development in the natural sciences, in the fields of optics and precision engineering. The institute was founded in 1992.
Research and development
Building upon the experience of the Jena region in the field of surface and thin film technologies for optics, the Fraunhofer IOF conducts research and development in the area of optical systems aiming at enhancing the control of light – from its generation and manipulation to its actual use. The combination of competences in the areas of optics and precision engineering is particularly important.
These focus areas are also reflected in the department structure:
Opto-mechanical System Design
Micro and Nano-structured Optics
Opto-mechatronical Components and Systems
Precision Optical Components and Systems
Functional Optical Surfaces and Layers
Laser- and Fiber Technology
Imaging and Sensing
Emerging Technologies
see also: thin film technology, surface physics, microstructure technology, nanotechnology, micro-optics, measurement technology, quantum technology
CMN-Optics
In July 2006, the Fraunhofer IOF opened the Center for Advanced Micro- and Nano-Optics (CMN-Optics). The core of the facility is the SB350-OS electron beam lithography system. This device, also known as an "electron beam recorder", achieves minimum structure sizes of around 50 nm with high accuracy on substrates up to 300 mm. The center is operated jointly with the Institute for Applied Physics (IAP) of the Friedrich Schiller University of Jena. The facility is also used by the Institute for Photonic Technologies (IPHT), Jena. The facility cost twelve million euros and was financed by the European Union, the Free State of Thuringia and the Fraunhofer Society.
Cooperations
In 2003, the Fraunhofer Society concluded a cooperation agreement with the Friedrich Schiller University of Jena. It is the basis for collaboration between Fraunhofer IOF staff and the staff of the Institute of Applied Physics at the University of Jena. The aim of the cooperation is to provide practical training for the students, to improve the implementation of research results into practice and to share the high-quality equipment and infrastructures of both institutions.
Infrastructure
At the end of 2020, Fraunhofer IOF had almost 330 employees, most of them scientists and technicians.
The Fraunhofer IOF's operating budget was EUR 51.5 million in the 2020 financial year.
The Fraunhofer IOF has been headed by Andreas Tünnermann since 2003, who is also Director of the Institute of Applied Physics at the Friedrich Schiller University in Jena.
The institute has well-equipped laboratories covering 3830 m², plus 1115 m² of clean rooms (ISO classes 1 to 7), a precision mechanics workshop, and a test field for extensive testing and demonstration purposes.
Expansion buildings were added on the Beutenberg Campus in Jena in 2002 and 2013.
In 2017 a fiber technology center was inaugurated at Fraunhofer IOF, which includes new special laboratories for the production of active and passive micro- and nanostructured optical fibers and one of the world's most powerful fiber drawing towers.
Awards
German Future Prize 2007
In cooperation with the semiconductor manufacturer Osram Opto Semiconductors of Regensburg, researchers from the Fraunhofer Institute in Jena, headed by Andreas Bräuer, received the German Future Prize, worth EUR 250,000, on December 6, 2007. Their innovation consisted of improved chips, packages and a special optical system that enable more powerful light-emitting diodes.
German Future Prize 2013
Stefan Nolte of the Fraunhofer Institute for Applied Optics and Precision Engineering (IOF) and Friedrich Schiller University Jena, together with Jens König (Robert Bosch GmbH) and Dirk Sutter (Trumpf Lasers), was awarded the prize for their work with ultra-short pulse lasers on December 4, 2013.
German Future Prize 2020
For their project "EUV Lithography - New Light for the Digital Age", the team of experts led by Sergiy Yulin (Fraunhofer IOF), Peter Kürz (ZEISS Semiconductor Manufacturing Technology (SMT) division), and Michael Kösters (TRUMPF Lasersystems for Semiconductor Manufacturing) was awarded the prize for technology and innovation.
References
External links
Fraunhofer Institute for Applied Optics and Precision Engineering (IOF) - Official website
1992 establishments in Germany
Engineering research institutes
Fraunhofer Society
Laboratories in Germany
Organizations established in 1992
Robotics organizations | Fraunhofer Institute for Applied Optics and Precision Engineering | [
"Engineering"
] | 975 | [
"Engineering research institutes"
] |
5,459,336 | https://en.wikipedia.org/wiki/Eternity%20%28novel%29 | Eternity is a science fiction novel by American author Greg Bear published by Warner Books in 1988. It is the second book in his The Way series, dealing largely with the aftermath of the decision to split Axis City and abandon the Way in the preceding book, Eon.
Plot summary
In Eon, Axis City split into two: a segment of Naderites and some Geshels took their portion of the city out of the Way and through Thistledown into orbit around the Earth. They spend the next thirty years helping the surviving population of Earth heal and rebuild from the devastating effects of the Death, which strains both their resources and those of the Hexamon government. As time passes, sentiment grows to have Konrad Korzenowski reopen the Way: first, to learn what has happened to the Geshels' long-sundered brethren (who took their portion of Axis City down the Way at relativistic near-light speed), and second, to benefit from the commercial advantages of the Way (despite the risk that the Jarts will be waiting on the other side).
In a parallel Earth, known as Gaia, mathematician Patricia Vasquez (the primary protagonist of Eon) dies of old age; she never found her own Earth, where the Death did not happen and her loved ones were still alive, but remained on the one she discovered (in which Alexander the Great did not die young and his empire did not fragment after his death). She passes her otherworldly artifacts of technology to her granddaughter, Rhita, who appears to have inherited her gifts. Rhita leaves the "Hypateion" (a reference to Hypatia), the academic institute Patricia founded, and that world's version of Alexandria. Patricia's clavicle claims that a test gate has been opened onto this world of Gaia, and that it could be expanded further.
Ser Olmy is concerned by the prospects of the Way being re-opened, with the attendant consequences, and by the revelation from an old friend that one of the deepest secrets of the Hexamon was a captured Jart whose body died in the process but whose mind was uploaded. Its mentality was alien and powerful enough that it took over or killed many of the researchers who attempted to connect to and study it, so it was hidden away deep in the Stone. As he studies the Jart, Olmy comes to believe that the Jart allowed itself to be captured and is a Trojan horse. The Jart reveals tidbits about the Jart civilization: in essence, they are a hierarchical meta-civilization that ruthlessly modifies itself, attempting to absorb all useful intelligences and ways of thinking that it encounters, in the service of the Jarts' ultimate goal - to transmit all the data they can possibly gather to "descendant command". Olmy investigates further and discovers that descendant command is the Jart name for what they know as the "Final Mind" - a Teilhardian (or Tiplerian) conception of an ultimate intelligence which will be created at the end of the universe when all intelligences merge themselves into a single transcendent intellect which will effectively be a god. Olmy underestimates the Jart, and it begins to slowly take over his body and mind. Its original mission, assigned to it hundreds of years ago, was to engage in sabotage and transmit its freshly acquired understanding of humanity back to present command, but the return of Pavel Mirsky changes everything. The Jart defers to Mirsky, describing him as 'coming from Descendant Command'.
Pavel Mirsky had elected to go with the Geshels down the Way more than thirty years ago, after which the Way had been sealed off. It should have been impossible for him to return, but one day he quietly re-appears on Earth to deliver an urgent message. He had traveled down the Way when it was sealed off with that portion of Axis City, and he and its citizens had voyaged hundreds of years and billions of kilometers; they advanced and changed radically on the way. At the end of the Way was a finite but unbounded cauldron of space and energy - a small proto-universe. They transformed themselves into ineffable beings of energy in order to survive the transition. They became as gods to this place, and for a time their creating went well. But it began to corrode and collapse without conflict and contrast between the creators, threatening to take the would-be gods with it. But they were rescued by the Final Mind of this universe, which took pity on them and freed them from the Way. Mirsky had been reconstituted from what he had become and sent back in time to try to persuade the Hexamon to order the re-opening of the Way - and its destruction.
On Gaia, Rhita persuades the aging queen to support her like the queen had supported Patricia. Their expedition leaves for the location of the test gate somewhere in the barbarous hinterlands of Central Asia in the nick of time, as the queen is deposed during their trip. Rhita's clavicle succeeds in expanding the test gate to a usable size, but it warns her that whoever opened the gate in the first place was not human. That night, the Jarts arrive on Gaia en masse. They begin the task of storing and digitizing all the data and life forms on Gaia to transmit down to descendant command. Rhita's consciousness is of special interest to the Jarts, particularly what she knows of Patricia.
In the meantime on Earth proper, consensus has been reached to re-open the Way but not to destroy it. Mirsky disappears. Another entity who should not be there, Ry Oyu, the former gate opener for the Gate Guild, appears. He prods the president of the Hexamon into covertly ordering Korzenowski to destroy the Way regardless of the decision of the citizens. The backlash destroys the Stone. Ry Oyu, Korzenowski, Ser Olmy (who connived at the destruction), and the Jart controlling Olmy outrun the Way's destruction and arrive at a Jart defense station located over Gaia. The Jarts respect the wishes of Ry Oyu as a representative of descendant command, and before the Way dies, transmit their accumulated data in a single immensely long fluctuation along the singularity/flaw of the Way to the Final Mind.
Korzenowski has himself digitized and sent with the transmission. Olmy is dropped off on the homeworld of the Frants, a communal mind civilization whom he likes. Ry Oyu has Rhita's mind freed; her consciousness gives Ry Oyu the last piece of data needed to reconstitute Patricia Vasquez. Ry Oyu intends to make up for his failure to instruct Patricia properly when she was trying to open a gate back home in Eon; he correctly opens the gate, and bare moments before the Way completely disintegrates around him, finally sends her back home to an Earth where the Death did not happen. Rhita is also returned to Gaia, a Gaia where she never opened a test gate and where the Jarts did not invade. And Pavel Mirsky, still unsatisfied, returns to the beginning of the universe to witness all interesting events between then and the Final Mind, when he will return and report back to it.
Reception
Publishers Weekly stated in its review of the novel: "This slow, visionary tale is less than compelling, but its portrait of the different responses of intricate, interlocking cultures is striking." Kirkus Reviews noted that the novel was "exceedingly hard to follow if you haven't read the original or don't recall precisely what occurred therein...Imaginative but overcomplicated, bogged down in purposeless information and irrelevant subplots, largely devoid of narrative tension. Another disappointment: the ideas are there, the discipline isn't." John Clute in The Washington Post commented that "The plotting is as joyfully contrived as good pulp plotting always tries to be...In short alternating chapters, Bear builds several segments of space-operatic plot towards his central revelations, and does so with competence and wit; but at the heart of Eternity lies a message that cuts deep into any presumption that telling a good story is the same thing as understanding the world. The universe, Bear says in this large, brave, courteous book, is greater than the tale."
Palaeobiology
It has been suggested by David Langford that the physical form of the Jart specimen as described in the novel is a tribute to the real-life bizarre fossil species Hallucigenia.
See also
Simulated reality
References
1988 American novels
1988 science fiction novels
Novels by Greg Bear
American science fiction novels
Fiction about consciousness transfer
Fiction about immortality
Fiction about nanotechnology
Novels about artificial intelligence
Novels about genetic engineering
Transhumanism in fiction | Eternity (novel) | [
"Materials_science"
] | 1,839 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |
5,459,698 | https://en.wikipedia.org/wiki/Bits%20and%20Bytes | Bits and Bytes was the name of two Canadian educational television series produced by TVOntario that taught the basics of how to use a personal computer.
The first series, made in 1983, starred Luba Goy as the Instructor and Billy Van as the Student. Bits and Bytes 2 was produced in 1991 and starred Billy Van as the Instructor and Victoria Stokle as the Student. The Writer-Producers of both Bits and Bytes and Bits and Bytes 2 were Denise Boiteau & David Stansfield.
Title sequence
The intro sequence featured a montage of common computer terms such as "ERROR", "LOGO" and "ROM", as well as various snippets of simple computer graphics and video effects, accompanied by a theme song that borrows very heavily from the 1978 song "Neon Lights" by Kraftwerk.
Series format
The first series featured an unusual presentation format whereby Luba Goy, as the instructor, would address Billy Van through a remote video link. The video link appeared on a projection screen in front of Luba, who was seated in an office. She was then able to direct Billy, who appeared on a soundstage with various desktop computer setups of the era. Systems emphasized included the Atari 800, PET, TRS-80, and Apple II. Each episode also included short animated vignettes to explain key concepts, as well as videotaped segments on various developments in computing.
In 1983, TVOntario included the show's episodes as part of a correspondence course. The original broadcasts on TVOntario also had a companion series, The Academy, scheduled immediately afterward, in which Bits and Bytes technology consultant Jim Butterfield appeared as co-host to elaborate further on the concepts introduced in the main series.
Bits and Bytes 2
In the second Bits and Bytes series, produced almost a decade later, Billy Van assumed the role of instructor and taught a new female student. The new series focused primarily on IBM PC compatibles (i.e. Intel-based 286 or 386 computers) running DOS and early versions of Windows, as well as the newer and updated technologies of that era. For that series, a selection of the original's animated spots was re-aired to illustrate fundamental computer technology principles, along with a number of new spots to cover newly emerged concepts of computer technology such as advances in computer graphics and data management.
Although the possibility of a Bits and Bytes 3 was suggested at the end of the second series, TVOntario eventually elected instead to rebroadcast the Knowledge Network computer series, Dotto's Data Cafe, as a more economical and extensive production on the same subject.
Episodes (1983-84)
Program 1: Getting Started
Program 2: Ready-Made Programs
Program 3: How Programs Work?
Program 4: File & Data Management
Program 5: Communication Between Computers
Program 6: Computer Languages
Program 7: Computer-Assisted Instruction
Program 8: Games & Simulations
Program 9: Computer Graphics
Program 10: Computer Music
Program 11: Computers at Work
Program 12: What Next?
Episodes (1991)
Program 1: Basics
Program 2: Words
Program 3: Numbers
Program 4: Files
Program 5: Messages
Program 6: Pictures
Crew
Original Music - Harry Forbes, George Axon
Animation Voice - Fred Napoli
Animation - Grafilm Productions Inc.
Consultants - Jim Butterfield, David Humphreys, Mike H. Stein, Jo Ann Wilton
Unit Manager - Rodger Lawson
Production Editors - Michael Kushner, Paul Spencer, Brian Elston, Doug Beavan
Production Assistant - George Pyron
Executive Producer - Mike McManus
Director - Stu Beecroft
Written & Produced by - Denise Boiteau and David Stansfield
References
External links
TVOntario's official (but incomplete) archive of the original series via the Internet Archive's Wayback Machine
Complete archive of the original series on YouTube, including episodes and standalone clips of all of the animations and interviews
Fansite with more information about the show
1983 Canadian television series debuts
1991 Canadian television series debuts
TVO original programming
Television shows filmed in Toronto
Computer television series
1980s Canadian children's television series
1990s Canadian children's television series | Bits and Bytes | [
"Technology"
] | 831 | [
"Works about computing",
"Computer television series"
] |
5,459,710 | https://en.wikipedia.org/wiki/George%20Rekers | George Alan Rekers (born July 11, 1948) is an American psychologist and ordained Southern Baptist minister. He is emeritus professor of Neuropsychiatry and Behavioral Science at the University of South Carolina School of Medicine. Rekers has a PhD from University of California, Los Angeles and has been a research fellow at Harvard University, a professor and psychologist for UCLA and the University of Florida, and department head at Kansas State University.
In 1983 Rekers was on the founding board of the Family Research Council, a non-profit Christian lobbying organization, and he is a former officer and scientific advisor of the National Association for Research & Therapy of Homosexuality (NARTH), an organization offering conversion therapy, a pseudoscientific practice intended to convert homosexuals to heterosexuality. Rekers has testified in court that homosexuality is destructive, and against parenthood by gay and lesbian people in a number of court cases involving organizations and state agencies working with children.
In May 2010, Rekers employed a male prostitute as a travel companion for a two-week vacation in Europe. Rekers denied any inappropriate conduct and suggestions that he was gay. The male escort told CNN he had given Rekers "sexual massages" while traveling together in Europe. Rekers subsequently resigned from the board of NARTH.
Personal life, education and academic career
Rekers is married with children.
Rekers received his B.A. in psychology from Westmont College in 1969. He later received his M.A. and PhD in psychology from University of California, Los Angeles (UCLA) in 1971 and 1972, respectively. As part of his doctoral studies at UCLA, Rekers led an experimental study which used behavioral treatment to discourage "deviant sex-role behaviors in a male child". In 2011, Anderson Cooper 360° featured a story about the fate of Kirk Murphy, a child whom Rekers claimed in many of his books to have cured. Murphy's siblings and mother stated that the therapy caused lasting damage and led to him growing up to be a man who grappled constantly with his homosexuality before committing suicide in 2003 at the age of 38.
Scholarly work
His work has been criticized by other scholars for reinforcing sex-role stereotypes and for reliance on dubious rationales for therapeutic intervention (e.g. parents' worries that their children might become homosexuals).
Rekers refers in his academic work to "the positive therapeutic effects of religious conversion for curing transsexualism" and "the positive therapeutic effect of a church ministry to repentant homosexuals." Judith Butler describes this work as "intensely polemical", giving "highly conservative political reasons for strengthening the diagnosis [of "gender identity disorder"] so that the structures that support normalcy can be strengthened."
Growing Up Straight (1982)
Growing Up Straight: What Every Family Should Know About Homosexuality is a 1982 guide for parents on how to prevent their children from becoming homosexual. The book was influential. However, the viewpoints espoused in the book have been controversial. Professor Michael R. Schiavi wrote in a 2001 Modern Language Studies journal article that the work was a "horror show written for parents anxious to re-direct sissy sons to sexual righteousness". The journalist Frank Rich questioned the book's status as scholarship in The New York Times, writing that "many of the footnotes cite his own previous writings." The psychologist and sexologist Kenneth Zucker reviewed Growing Up Straight and Shaping Your Child's Sexual Identity, another work by Rekers, in a 1984 issue of Archives of Sexual Behavior. He described both works as "examples of the passionate response that can be engendered by the study of human sexuality. In this instance, religious rhetoric is used to defend the author's views on the subject. What is perhaps most disappointing about these two books is the idyllic view of family life and human conduct for which the author longs." Zucker went on to write, "Ultimately, one has to wonder how Rekers will feel toward his child patients, should they grow up not to be straight. He might well benefit by recalling the words of Harry Stack Sullivan: 'We are all much more simply human than otherwise.'"
Shaping Your Child's Sexual Identity (1982)
In Shaping Your Child's Sexual Identity, Rekers provides advice to parents which he claims can help them prevent their children from becoming homosexual.
The book received a negative review from the psychologist Kenneth Zucker in Archives of Sexual Behavior. Zucker wrote that, as in some of his other work, such as Growing Up Straight (1982), Rekers ignored, dismissed, or distorted scientific data to prevent it from conflicting with his religious views. He noted that some of the material was controversial, such as Rekers's discussion of his behavior as an expert witness in a child custody case, in which he testified against a lesbian mother seeking to regain custody of her daughters because her lesbianism "placed her children at risk for deviant sex-role development". Zucker criticized Rekers for ignoring scholarly literature suggesting that there is little evidence for such claims. He concluded that Shaping Your Child's Sexual Identity is an example of "the passionate rhetoric that can be engendered by the study of human sexuality."
The neuroscientist Simon LeVay commented that Shaping Your Child's Sexual Identity reveals his "virulent antipathy towards homosexuality." Jackie M. Blount found Rekers's "language and logic reminiscent of works from earlier decades", comparing Shaping Your Child's Sexual Identity to Peter and Barbara Wyden's Growing Up Straight (1968). She wrote that the book was influential. Ellen K. Feder criticized Rekers's work for lacking scientific credibility, describing Rekers's Growing Up Straight and Shaping Your Child's Sexual Identity as "manuals for parents designed to assist them in deterring their children from pursuing a 'deviant' lifestyle." The journalist Robyn E. Blumner of the Tampa Bay Times criticized Shaping Your Child's Sexual Identity for Rekers's "gay-bashing" rhetoric, such as his claim that gay activists secretly want to legalize pedophilia.
Views on the family
Rekers' views on family life were the focus of a major controversy in Florida in 2002 when then-governor Jeb Bush appointed Jerry Regier to the post of head of the Florida Department of Children and Families with responsibility for child welfare. Shortly after the announcement of Regier's appointment, it was disclosed that in 1989 the California-based Coalition on Revival had published a fundamentalist tract titled The Christian World View of the Family under the names of Regier and Rekers, which condemned working mothers as being in "bondage" and argued that the government should have no right to place children in protective custody except in cases of extreme abuse or neglect. The tract's authors also "affirm that Biblical spanking may cause temporary and superficial bruises or welts that do not constitute child abuse" and "deny that the Bible countenances any other definition of the family, such as the sharing of a household by homosexual partners, and that society's laws should be modified in any way to broaden the definition of family." The tract was condemned by Democrats; Bush told the media that Regier "doesn't share those extreme views." Regier survived the controversy and served as DCF head from 2002 to the end of Jeb Bush's term in 2007.
Rekers is a practicing Southern Baptist, and credits the work of C.S. Lewis, particularly his writings on gender relations, with influencing his religious and social views.
Views on homosexuality
Rekers has attracted attention for his views on homosexuality, which have been promoted in a number of forums and court cases. He asserts that homosexuality is a "gender disturbance" that can be corrected through 18 to 22 months of weekly therapy during childhood and adolescence. Mark Pietrzyk, of the gay Republican organization Log Cabin Republicans, has stated that Rekers' method uses aversion therapy – a practice opposed by the American Psychiatric Association (APA) – that punishes "nonconforming" behavior such as swaggering in girls or limp wrists in boys and rewards "conforming" behavior such as girls playing with dolls and boys playing basketball.
A number of authorities working in the relevant fields reject Rekers' basic premise outright; a publication from the APA states "The idea that homosexuality is a mental disorder or that the emergence of same-sex attraction and orientation among some adolescents is in any way abnormal or mentally unhealthy has no support among any mainstream health and mental health professional organizations." (Printing and distribution of that publication was supported by the American Counseling Association, the Interfaith Alliance, and the National Education Association.)
According to Rekers himself, he spends much of his time with boys whose peers regard them as "sissy" and "effeminate" with the goal of reversing those traits and "help[ing] these children to become better adapted to themselves and to their environment." The APA's opposition to his methods led to him resigning from the organization.
Rekers has appeared in court in several cases as an expert witness testifying on matters concerning homosexuality. His testimony has been strongly criticized by a number of parties, including trial judges; the American Civil Liberties Union has asserted that his personal beliefs regarding homosexuality interfere with his ability to give an unbiased professional opinion on lesbian, gay, bisexual and transgender (LGBT) topics, including gay adoption. Legal experts have discussed whether his involvement with a male prostitute in 2010 could render his testimony unreliable, possibly affecting the outcome of pending cases in Florida and California.
Boy Scouts of America hearing, 1998
Rekers testified before the Washington, D.C. Human Rights Commission on behalf of the Boy Scouts of America in 1998 in defense of the group's policy on excluding homosexuals, arguing that it was justified because admission of homosexuals "would legitimize the value of homosexual behavior in the eyes of many of the Boy Scouts ... There would be more homosexual conduct or behavior by the boys in such troops." He has acknowledged that his views are heavily influenced by religious concerns; as a member of the Southern Baptists, he believes that the city of Sodom was destroyed by God as a punishment for allowing homosexuality and that active homosexuals face "eternal separation from God", i.e., perpetuity in hell.
Arkansas gay adoption case, 2004
Rekers was an expert witness in a 2004 case involving gay adoption in Arkansas, which had banned LGBT people from adopting in 1999. He argued that "it would be in the best interest of foster children to be placed in a heterosexual home" because the majority of people in the country disapproved of homosexual behaviour, putting further stress on children who were already likely to suffer from psychological disorders. According to Rekers, "That disapproval filters down to children [who] will express disapproval in more cruel, insensitive ways" toward a child being parented by a gay person. In cross-examination, Rekers acknowledged that he believed that homosexuality is sinful and that the Bible is the infallible word of God. His testimony was rebutted by the psychologist Michael Lamb, who stated that there was no scientific evidence for the assertion that homosexuals were worse parents than heterosexuals.
The trial judge, Pulaski County Circuit Court judge Timothy Fox, ruled against the state of Arkansas in December 2004. He was strongly critical of Rekers' testimony, describing it as "extremely suspect" and said that Rekers "was there primarily to promote his own personal ideology." Rekers responded by denouncing the trial as "utterly corrupt."
Following the case, Rekers billed the Arkansas Department of Health and Human Services a sum of $165,000 for his testimony, an amount that far exceeded what the state had anticipated. He later increased the bill to $200,000 with the addition of late fees and other charges for preparing paperwork. The unpaid bill led to two years of legal wrangling that was finally settled out of court with a $60,000 payment.
Florida gay adoption case, 2008
In 2008, Rekers was an expert witness in In re: Gill, a case defending Florida's gay adoption ban. He presented testimony asserting that homosexuals are more likely to suffer from depression, substance abuse, and emotional problems. Citing what he called "God's moral laws," he asserted that individual homosexuals are "manipulated by leaders of the homosexual revolt" to the detriment of those suffering this "sexual perversion." He also asserted that Native Americans would make unsuitable foster parents, asserting that they suffered from a high risk of alcohol abuse and psychiatric disorders.
Miami-Dade Circuit Court Judge Cindy Lederman ruled against the state. In her decision, she said "Dr. Rekers' testimony was far from a neutral and unbiased recitation of the relevant scientific evidence. Dr. Rekers' beliefs are motivated by his strong ideological and theological convictions that are not consistent with the science. Based on his testimony and demeanor at trial, the court cannot consider his testimony to be credible nor worthy of forming the basis of public policy." It later emerged that Rekers had been paid nearly $120,000 for his testimony on behalf of the state, which had been solicited specifically by Florida Attorney General Bill McCollum. The attorney general wrote in 2007: "Our attorneys handling this case have searched long and hard for other expert witnesses with comparable expertise to Dr. Rekers and have been unable to identify any who would be available for this case." However, his choice of witness was criticized by Nadine Smith of the gay-rights organization Equality Florida: "Rekers is part of a small cadre of bogus pseudo scientists that charge these exorbitant fees to peddle information they know has been discredited time and time again. And people like McCollum will pay top dollar for it. There's a reason why he can't find credible sources. Because credible people don't believe this ban should exist."
The Third District Court of Appeal of the State of Florida stated in its decision: "Dr. Cochran (Professor of Epidemiology and Statistics at the University of California in Los Angeles) also testified about errors in scientific methodology and reporting in Dr. Rekers' study, stating that Dr. Rekers had failed to present an objective review of the evidence on those subjects. Cochran concluded that Dr. Rekers' work did not meet established standards in the field. Another expert, Dr. Peplau (Professor of Psychology at the University of California in Los Angeles), testified that Dr. Rekers had omitted in his review of the scientific literature 'other published, widely cited studies on the stability of actual relationships over time.'"
Organizational affiliations
Family Research Council
In 1983 Rekers was on the founding board of the Family Research Council, a non-profit Christian lobbying organization, along with James Dobson and Armand Nicholi Jr.
NARTH
Until May 11, 2010, Rekers was listed as an advisor and officer with the National Association for Research & Therapy of Homosexuality but resigned following the exposure of the rent-boy scandal. NARTH is an association which promotes the acceptance of conversion therapy intended to change homosexuals into heterosexuals, contrary to the advice of mainstream professional associations such as the American Psychiatric Association and American Psychological Association.
"Rent boy" allegations
The Miami New Times reported on May 4, 2010, that three weeks previously, Rekers had been photographed at Miami International Airport with a twenty-year-old "rent boy" who was using the pseudonym "Lucien" (later identified as Jo-Vanni Roman). Roman was available for hire through the Rentboy.com website. Rekers acknowledged hiring Roman for the 10-day European vacation as a "travel assistant" and denied any impropriety. He said that Roman was there to help carry his luggage since he had recently had surgery and was unable to carry it himself, although the photograph clearly showed Rekers lifting his own luggage. Rekers was quoted as commenting, "If you talk with my travel assistant ... you will find I spent a great deal of time sharing scientific information on the desirability of abandoning homosexual intercourse, and I shared the Gospel of Jesus Christ with him in great detail." The incident was covered by media outlets and TV shows worldwide.
In subsequent interviews, Roman said Rekers had paid him to provide nude massages daily: "'Jo-vanni' in news reports, has told various media outlets that he gave Rekers daily massages in the nude during the trip, which included genital touching." He also talked about how he believed that Rekers was, in fact, homosexual: "It's a situation," Roman said, "where he's going against homosexuality when he is a homosexual." According to the New Times, Roman "made it clear they met through Rentboy.com", and denied that he had been hired to carry luggage; The Times reported that Rekers "hired a companion from a website called Rentboy.com that offers clients a wide range of choices, from 'rentboy' and 'sugar daddy' to 'masseur'." On the May 6, 2010 episode of The Daily Show, Jon Stewart pointed out that Roman is looking on in the photograph, while Rekers is seen handling his own luggage.
Following the first report about Rekers, on May 8, 2010, New York magazine reported that another individual said that Rekers had hired him in a similar capacity in 1992.
On May 12, Christianity Today reported that Rekers stated on his personal website that he had interviewed several people for the role of travel assistant, and was not aware of his assistant's internet advertisements. He e-mailed them saying "I confessed to the Lord and to my family that I was unwise and wrong to hire this travel assistant after knowing him only one month before the trip", saying he was unaware that his "travel assistant" was "more than a person raised in a Christian home". Rekers explained his regrets for the harm caused by his "unwise decision", and that he was being advised by "an experienced pastor and counselor from my church, so I can more fully understand my weaknesses and prevent this kind of unwise decision-making in the future". On his resignation from NARTH he said "I am not gay and never have been." The scandal became popular fodder for media commentators and comics. Frank Rich of the New York Times wrote: "Thanks to Rekers's clownish public exposure, we now know that his professional judgments are windows into his cracked psyche, not gay people's. But...his excursions into public policy have had real and damaging consequences on a large swath of Americans."
Newsweek's June 7, 2010 issue's Back Story listed Rekers, among others, as a prominent conservative activist who has a record of supporting anti-gay legislation and was later caught in a gay sex scandal.
Experiments on gender-variant children
Rekers co-authored four papers with Ole Ivar Lovaas, a psychology professor at the same university, on children with atypical gender behaviors. The subject of the first of these studies, a "feminine" young boy who was four-and-a-half years old at the inception of treatment, committed suicide as an adult; his family attributes the suicide to this treatment.
After his suicide, the man's sister told the news that she had read his journal, which described how he feared disclosing his sexual orientation because, when receiving the behavior modification treatment as a young boy, his father would beat him severely if he was given a particular color of "poker chip" as punishment for feminine behavior such as playing with dolls.
Publications
Growing Up Straight: What Every Family Should Know About Homosexuality (1982)
Shaping Your Child's Sexual Identity (1982)
Making Up the Difference: Help for Single Parents with Teenagers (1984)
Family Building: Six Qualities of a Strong Family (1985)
Counseling Families (1988)
The Christian World View of the Family (1989)
Susan Smith: Victim or Murderer? (1996)
See also
References
Further reading
External links
ProfessorGeorge.com, personal website
Profile at SourceWatch
1948 births
Living people
20th-century American psychologists
20th-century Baptist ministers from the United States
21st-century American psychologists
21st-century Baptist ministers from the United States
Adultery in evangelical Christianity
American evangelicals
Anti-LGBTQ evangelical Christian activists in the United States
Behaviourist psychologists
Columbia International University alumni
Conversion therapy practitioners
Harvard University people
Kansas State University faculty
Place of birth missing (living people)
Sexual orientation change efforts
Southern Baptist ministers
University of California, Los Angeles alumni
University of California, Los Angeles faculty
University of Florida faculty
University of South Africa alumni
University of South Carolina faculty
Westmont College alumni | George Rekers | [
"Biology"
] | 4,278 | [
"Behaviourist psychologists",
"Behavior",
"Behaviorism"
] |
5,459,797 | https://en.wikipedia.org/wiki/Thai%20Institute%20of%20Chemical%20Engineering%20and%20Applied%20Chemistry | The Thai Institute of Chemical Engineering and Applied Chemistry (TIChE) () is a professional organization for chemical engineers. TIChE was established in 1996 to distinguish chemical engineers as a profession independent of chemists and mechanical engineers.
History
TIChE was established to separate the chemical engineering professional certificate from that of industrial engineering. A conference in 1990 was the first effort to establish the organization, through the cooperation of the Department of Chemical Engineering and the Department of Chemical Technology at Chulalongkorn University and the Department of Chemical Engineering at King Mongkut's University of Technology Thonburi. At the 4th conference, held at Khon Kaen University in 1994, TIChE was formally established, and it was recognized by law on November 15, 1996. TIChE now comprises 18 university members.
The Objectives of TIChE
To promote and support the chemical engineering and chemical technology profession.
To promote and support educational standards in chemical engineering and chemical technology.
To encourage cooperation and industrial development, including research and the exchange of knowledge.
To disseminate knowledge and provide consulting in chemical engineering and chemical technology.
To represent the chemical engineering and chemical technology profession in cooperation with other organizations.
University Members
(sorted alphabetically)
Burapha University
Department of Chemical Engineering
Chiang Mai University
Department of Industrial Chemistry
Chulalongkorn University
Department of Chemical Engineering
Department of Chemical Technology
The Petroleum and Petrochemical College
Kasetsart University
Department of Chemical Engineering
Khon Kaen University
Department of Chemical Engineering
King Mongkut's Institute of Technology Ladkrabang
Department of Chemical Engineering
King Mongkut's University of Technology North Bangkok
Department of Chemical Engineering
Department of Industrial Chemistry
King Mongkut's University of Technology Thonburi
Department of Chemical Engineering
Mahanakorn University of Technology
Department of Chemical Engineering
Mahidol University
Department of Chemical Engineering
Prince of Songkla University
Department of Chemical Engineering
Rajamangala University of Technology Thanyaburi (Klong 6)
Department of Chemical Engineering
Rangsit University
Department of Chemical and Material Engineering
Silpakorn University
Department of Chemical Engineering
Srinakharinwirot University
Department of Chemical Engineering
Suranaree University of Technology
School of Chemical Engineering
Thammasat University
Department of Chemical Engineering
Ubon Ratchathani University
Department of Chemical Engineering
List of Conference Meetings
7th International Thai Institute of Chemical Engineering and Applied Chemistry Conference (ITIChE 2017) and The 27th National Thai Institute of Chemical Engineering and Applied Chemistry Conference (TIChE 2017)
18–20 October 2017 Shangri-La Hotel, Bangkok
The 18th Thailand Chemical Engineering and Applied Chemistry Conference.
20–21 October 2008 at Jomtien Palm Beach Resort, Cholburi.
Host: Department of Chemical Engineering, Mahidol University, Nakhon Pathom.
The 17th Thailand Chemical Engineering and Applied Chemistry Conference.
29–30 October 2007 at The Empress Hotel, Chiang Mai.
Host: Department of Industrial Chemistry, Chiang Mai University, Chiang Mai.
The 16th Thailand Chemical Engineering and Applied Chemistry Conference.
26–27 October 2006 at Rama Garden Hotel, Bangkok.
Host: Department of Chemical Engineering, Kasetsart University, Bangkok.
The 15th Thailand Chemical Engineering and Applied Chemistry Conference.
27–28 October 2005 at Jomtien Palm Beach Resort, Cholburi.
Host: Department of Chemical Engineering, Burapha University, Cholburi.
The 14th Thailand Chemical Engineering and Applied Chemistry Conference.
Host: Department of Chemical Engineering, King Mongkut's Institute of Technology North Bangkok, Bangkok.
The 13th Thailand Chemical Engineering and Applied Chemistry Conference.
30–31 October 2003 at Royal Hill Resort and Golf Course, Nakhon Nayok.
Host: Department of Chemical Engineering, Srinakharinwirot University, Nakhon Nayok.
Website
References
Chemical engineering organizations
Professional associations based in Thailand
Thai Institute of Chemical Engineering
Research institutes established in 1994
Scientific organizations established in 1994
1994 establishments in Thailand | Thai Institute of Chemical Engineering and Applied Chemistry | [
"Chemistry",
"Engineering"
] | 769 | [
"Chemical engineering",
"Chemical engineering organizations"
] |
5,460,248 | https://en.wikipedia.org/wiki/MAVID | MAVID is a multiple sequence alignment program suitable for the alignment of large numbers of DNA sequences. The sequences can be small mitochondrial genomes or large genomic regions up to megabases long. The latest version is 2.0.4.
The program can be used through the MAVID web server or as a standalone program installed from source code.
Input/Output
The program accepts input sequences in FASTA format.
Supported output formats include FASTA, Clustal, and PHYLIP.
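FASTA is a plain-text format in which each record begins with a ">" header line followed by one or more lines of sequence. As a rough illustration of the structure such tools consume, here is a minimal sketch of a FASTA reader in Python; the function name and file handling are illustrative and are not part of MAVID itself:

```python
# Minimal sketch of a FASTA reader (illustrative; not MAVID code).
# Each record starts with a ">" header line; subsequent lines hold
# the sequence, possibly wrapped across several lines.
def read_fasta(path):
    records = {}
    name = None
    with open(path) as handle:
        for raw in handle:
            line = raw.strip()
            if not line:
                continue  # skip blank lines
            if line.startswith(">"):
                # Use the first whitespace-separated token as the name.
                name = line[1:].split()[0]
                records[name] = []
            elif name is not None:
                records[name].append(line)
    # Join per-record fragments into single sequence strings.
    return {n: "".join(parts) for n, parts in records.items()}
```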
References
External links
MAVID web server
Phylogenetics software | MAVID | [
"Chemistry",
"Biology"
] | 116 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Bioinformatics",
"Bioinformatics stubs"
] |
5,460,636 | https://en.wikipedia.org/wiki/Enterocytozoon%20bieneusi | Enterocytozoon bieneusi is a species of the order Chytridiopsida (in the division Microsporidia) which infects the intestinal epithelial cells. It is an obligate intracellular parasite.
Microbiology
Enterocytozoon bieneusi, a microsporidian, is a unicellular, obligate intracellular eukaryote. Its life cycle includes a proliferative merogonic stage, followed by a sporogonic stage resulting in small, environmentally resistant, infective spores, which are the mode of transmission. The spores contain a long, coiled polar tube, which distinguishes them from all other organisms and plays a crucial role in host cell invasion. E. bieneusi was first found in an AIDS patient in France in 1985 and was later detected in fecal samples from swine in 1996. It causes diarrhea, so infected pigs excrete more spores, spreading the disease. As this pathogen is very prevalent throughout the world, E. bieneusi is found in a wide variety of hosts, including pigs, humans, and other mammals. E. bieneusi can be studied using TEM, light microscopy, PCR, and immunofluorescence, and can be cultured short-term. It is not yet known whether the pathogen itself can be infected by other diseases. Infection by this pathogen appears to have widespread economic implications for the swine industry. Several treatments, including fumagillin and albendazole, have shown promise in treating infection (Mathis et al. 2005).
Discovering the disease
The earliest reference to the order Microsporidia dates to the mid-20th century. E. bieneusi was first found in an AIDS patient in France in 1985. Electron microscope studies revealed the presence of developmental stages of a parasite resembling microsporidia, and the investigators named it E. bieneusi (Desportes et al. 1985). The presence of E. bieneusi in swine was first detected in fecal samples of pigs in Zurich, Switzerland in 1996 (Deplazes et al. 1996).
Culturing
Short-term culturing of E. bieneusi was achieved by inoculating duodenal aspirate and biopsy specimens into E6 and HLF monolayers. The short-term cultures lasted up to 6 months. After several weeks of culture, Gram-positive spore-like structures measuring 1 to 1.2 µm long were observed, as were mature spores and sporoblasts with double rows of polar tubule coils (Visvesvara 2002). Long-term culturing has so far been unsuccessful.
Study and detection methods
Light microscopy of stained clinical smears, especially of fecal samples, is used to diagnose microsporidia infections. Transmission electron microscopy is required to differentiate between species of microsporidia, but it is time consuming and expensive. Immunofluorescence assays using monoclonal and polyclonal antibodies are used, and PCR has recently been employed for E. bieneusi (CDC).
Life cycle
The infective form of E. bieneusi is the resistant spore, which can survive for a long time in the environment.
The spore extends its polar tubule and infects the host cell.
The spore injects the infective sporoplasm into the eukaryotic host cell through the polar tubule.
Inside the cell, the sporoplasm undergoes extensive multiplication either by merogony (binary fission) or schizogony (multiple fission).
This development occurs in direct contact with the host cell cytoplasm. In the cytoplasm, microsporidia develop by sporogony to mature spores.
During sporogony, a thick wall is formed around the spore, which provides resistance to adverse environmental conditions. When the spores increase in number and completely fill the host cell cytoplasm, the cell membrane is disrupted and releases the spores to the surroundings. These free mature spores can infect new cells thus continuing the cycle (Desportes 1985).
Transmission mode
Enterocytozoon bieneusi is transmitted through environmentally resistant spores.
Common environmental sources of E. bieneusi include ditch water and other surface waters, and several species of microsporidia can be isolated from such sources, indicating that the disease may be waterborne.
Possible modes of transmission include the fecal-oral or oral-oral route, inhalation of aerosols, and ingestion of food contaminated with fecal material (Mathis et al. 2005). Furthermore, there seems to be a close relationship between E. bieneusi strains from humans and pigs, suggesting the absence of a transmission barrier between pigs and humans for this parasite (Rinder et al. 2000).
Animals, particularly pigs, may act as a zoonotic reservoir, transmitting the disease to other organisms (Abreu-Acosta et al. 2005; Lores et al. 2002). Both vertical and horizontal transmission are possible.
Hosts
Hosts include pigs, fish, birds, cattle, humans (including H. neanderthalensis), and other mammals, such as apes.
Effects on hosts
Enterocytozoon bieneusi is a common parasite in pigs, causing diarrhea ranging from self-limited to severe forms, although pigs experimentally infected with E. bieneusi lacked intestinal lesions (Mathis et al. 2005). Pigs infected with this disease excreted more spores.
Treatment
Inhibitors of chitin synthase enzymes seem to be effective against this pathogen. Fumagillin and albendazole treatments seem promising in swine (Mathis et al. 2005).
Prevalence
It is very common in pigs and seems to be a naturally occurring pathogen in them (Lores et al. 2002). In some communities of pigs, the prevalence of E. bieneusi reached 37% (Mathis et al. 2005). No large epidemics have been recorded yet. PCR analysis in the Czech Republic revealed E. bieneusi in 94% of the samples, indicating its large presence in swine and suggesting that it may be naturally occurring (Sak et al. 2008).
Economic impact
Since this is a relatively new finding in pigs, its economic impact has not been studied yet. Pig farming in the US has an annual revenue of $18 billion, and the US has about 75,000 pig farms. Infection in even a few pigs can be devastating, as the disease is easily spread. Moreover, these pigs can serve as zoonotic reservoirs for E. bieneusi, making transmission to other animals and humans possible. Since transmission from swine to humans and other animals has not been studied yet, this may have a major impact on the health of this country. Moreover, in other parts of the world, such as China, where the pig industry is a major economic component and where humans and pigs live in crowded conditions, the disease can spread very easily and can have a potentially major impact on the economy.
References
Abreu-Acosta, N., Lorenzo-Morales, J., Leal-Guio, Y., Coronado-Alvarez, N., Foronda, P., Alcoba-Florez, J., Izquierdo, F., Batista-Diaz, N., Del Aguila, C., & Valladares, B. (2005). Enterocytozoon bieneusi (microsporidia) in clinical samples from immunocompetent individuals in tenerife, canary islands, Spain. Transactions of the Royal Society of Tropical Medicine and Hygiene, 99(11), 848-855.
Breitenmoser, A. C., Mathis, A., Burgi, E., Weber, R., & Deplazes, P. (1999). High prevalence of enterocytozoon bieneusi in swine with four genotypes that differ from those identified in humans. Parasitology, 118 ( Pt 5)(Pt 5), 447-453.
Deplazes, P., Mathis, A., Muller, C., & Weber, R. (1996). Molecular epidemiology of encephalitozoon cuniculi and first detection of enterocytozoon bieneusi in faecal samples of pigs. The Journal of Eukaryotic Microbiology, 43(5), 93S.
Desportes, I., Le Charpentier, Y., Galian, A., Bernard, F., Cochand-Priollet, B., Lavergne, A., Ravisse, P., & Modigliani, R. (1985). Occurrence of a new microsporidan: Enterocytozoon bieneusi n.g., n. sp., in the enterocytes of a human patient with AIDS. The Journal of Protozoology, 32(2), 250-254.
Keeling, P. J., & Fast, N. M. (2002). Microsporidia: Biology and evolution of highly reduced intracellular parasites. Annual Review of Microbiology, 56, 93-116.
Lores, B., del Aguila, C., & Arias, C. (2002). Enterocytozoon bieneusi (microsporidia) in faecal samples from domestic animals from Galicia, Spain. Memórias do Instituto Oswaldo Cruz, 97(7), 941-945.
Mathis, A., Weber, R., & Deplazes, P. (2005). Zoonotic potential of the microsporidia. Clinical Microbiology Reviews, 18(3), 423-445.
Pagornrat, W., Leelayoova, S., Rangsin, R., Tan-Ariya, P., Naaglor, T., & Mungthin, M. (2009). Carriage rate of enterocytozoon bieneusi in an orphanage in Bangkok, Thailand. Journal of Clinical Microbiology, 47(11), 3739-3741.
Rinder, H., Thomschke, A., Dengjel, B., Gothe, R., Loscher, T., & Zahler, M. (2000). Close genotypic relationship between enterocytozoon bieneusi from humans and pigs and first detection in cattle. The Journal of Parasitology, 86(1), 185-188.
Sak, B., Kucerova, Z., Kvac, M., Kvetonova, D., Rost, M., & Secor, E. W. (2010). Seropositivity for enterocytozoon bieneusi, Czech Republic. Emerging Infectious Diseases, 16(2), 335-337.
Sak, B., Kvac, M., Hanzlikova, D., & Cama, V. (2008). First report of enterocytozoon bieneusi infection on a pig farm in the Czech Republic. Veterinary Parasitology, 153(3-4), 220-224.
Visvesvara, G. S. (2002). In vitro cultivation of microsporidia of clinical importance. Clinical Microbiology Reviews, 15(3), 401-413.
Notes
Parasitic fungi
Microsporidia
Fungus species | Enterocytozoon bieneusi | [
"Biology"
] | 2,442 | [
"Fungi",
"Fungus species"
] |
5,460,644 | https://en.wikipedia.org/wiki/Soil%20thermal%20properties | The thermal properties of soil are a component of soil physics that has found important uses in engineering, climatology and agriculture. These properties influence how energy is partitioned in the soil profile. While related to soil temperature, it is more accurately associated with the transfer of energy (mostly in the form of heat) throughout the soil, by radiation, conduction and convection.
The main soil thermal properties are
Volumetric heat capacity, SI units: J·m⁻³·K⁻¹
Thermal conductivity, SI units: W·m⁻¹·K⁻¹
Thermal diffusivity, SI units: m²·s⁻¹
Measurement
It is hard to generalize about the soil thermal properties at a given location because they are in a constant state of flux from diurnal and seasonal variations. Apart from the basic soil composition, which is constant at one location, soil thermal properties are strongly influenced by the soil volumetric water content, the volume fraction of solids and the volume fraction of air. Air is a poor thermal conductor and reduces the effectiveness of the solid and liquid phases in conducting heat. While the solid phase has the highest conductivity, it is the variability of soil moisture that largely determines thermal conductivity. As such, soil moisture properties and soil thermal properties are very closely linked and are often measured and reported together. Temperature variations are most extreme at the surface of the soil, and these variations are transferred to subsurface layers at reduced rates as depth increases. Additionally, there is a time delay before maximum and minimum temperatures are reached at increasing soil depth (sometimes referred to as thermal lag).
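The damping and delay described above follow from the one-dimensional heat conduction equation: a sinusoidal surface temperature wave of angular frequency ω is attenuated by a factor e^(−z/d) at depth z, and its peak is delayed by (z/d)/ω, where d = √(2α/ω) is the damping depth for thermal diffusivity α. A minimal sketch of these relations (the diffusivity value is an illustrative assumption, not taken from the text):

```python
import math

def damping_depth(alpha, period_s):
    """Damping depth d = sqrt(2*alpha/omega) for a sinusoidal surface wave."""
    omega = 2 * math.pi / period_s
    return math.sqrt(2 * alpha / omega)

def thermal_lag(z, alpha, period_s):
    """Time delay (s) of the temperature peak at depth z (thermal lag)."""
    omega = 2 * math.pi / period_s
    return (z / damping_depth(alpha, period_s)) / omega

day = 24 * 3600
alpha = 5e-7  # assumed diffusivity of a moist soil, m^2/s
d = damping_depth(alpha, day)                  # ~0.12 m for the diurnal cycle
lag_hours = thermal_lag(0.10, alpha, day) / 3600
print(f"damping depth: {d:.3f} m, peak delay at 0.10 m depth: {lag_hours:.1f} h")
```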
One possible way of assessing soil thermal properties is the analysis of soil temperature variations versus depth, using Fourier's law:

Q = −λ · dT/dz

where Q is the heat flux, or rate of heat transfer per unit area (J·m⁻²·s⁻¹ or W·m⁻²); λ is the thermal conductivity (W·m⁻¹·K⁻¹); and dT/dz is the temperature gradient (change in temperature with depth, K·m⁻¹).
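As a minimal illustration (not part of the original text; the sensor depths and the conductivity value are assumed), Fourier's law can be applied to a pair of temperature sensors as a finite difference:

```python
def soil_heat_flux(t_upper, t_lower, z_upper, z_lower, conductivity):
    """Approximate Q = -lambda * dT/dz from two sensors.

    Depths are in metres, positive downward; temperatures in K or degrees C;
    conductivity in W/(m*K). Returns flux in W/m^2, positive when heat
    flows downward.
    """
    gradient = (t_lower - t_upper) / (z_lower - z_upper)  # dT/dz, K/m
    return -conductivity * gradient

# Example: 25 degC at 2 cm depth, 21 degC at 10 cm, assumed lambda = 1.2 W/(m*K):
q = soil_heat_flux(25.0, 21.0, 0.02, 0.10, 1.2)
print(f"heat flux: {q:.0f} W/m^2")  # 60 W/m^2, heat moving downward
```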
The most commonly applied method for measuring soil thermal properties is to perform in-situ measurements using non-steady-state probe systems, or heat probes.
Single and dual heat probes
The single probe method employs a heat source inserted into the soil whereby heat energy is applied continuously at a given rate. The thermal properties of the soil can be determined by analysing the temperature response adjacent to the heat source via a thermal sensor. This method reflects the rate at which heat is conducted away from the probe. The limitation of this device is that it measures thermal conductivity only. Applicable standards are: IEEE Guide for Soil Thermal Resistivity Measurements (IEEE Standard 442-1981) as well as with ASTM D 5334-08 Standard Test Method for Determination of Thermal Conductivity of Soil and Soft Rock by Thermal Needle Probe Procedure.
After further research the dual-probe heat-pulse technique was developed. It consists of two parallel needle probes separated by a distance (r). One probe contains a heater and the other a temperature sensor. The dual probe device is inserted into the soil and a heat pulse is applied and the temperature sensor records the response as a function of time. That is, a heat pulse is sent from the probe across the soil (r) to the sensor. The great benefit of this device is that it measures both thermal diffusivity and volumetric heat capacity. From this, thermal conductivity can be calculated meaning the dual probe can determine all the main soil thermal properties. Potential drawbacks of the heat-pulse technique have been noted. This includes the small measuring volume of soil as well as measurements being sensitive to probe-to-soil contact and sensor-to-heater spacing.
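As a hedged sketch of the analysis (the numbers are illustrative, and the instantaneous line-source idealization is a textbook simplification rather than a claim about any particular instrument), the time and height of the temperature peak at the sensor yield diffusivity and volumetric heat capacity, with conductivity following as their product:

```python
import math

def dual_probe_properties(q, r, t_max, dT_max):
    """Instantaneous line-source estimates from a dual-probe heat pulse.

    q:      heat input per unit heater length, J/m
    r:      heater-to-sensor spacing, m
    t_max:  time of the peak temperature rise, s
    dT_max: peak temperature rise, K
    Returns (diffusivity m^2/s, volumetric heat capacity J/(m^3*K),
             thermal conductivity W/(m*K)).
    """
    alpha = r**2 / (4.0 * t_max)                  # from t_max = r^2 / (4*alpha)
    heat_capacity = q / (math.e * math.pi * r**2 * dT_max)
    conductivity = alpha * heat_capacity          # lambda = alpha * C
    return alpha, heat_capacity, conductivity

# Illustrative reading: 6 mm spacing, 700 J/m pulse, peak rise of 0.8 K at 20 s.
a, c, k = dual_probe_properties(700.0, 0.006, 20.0, 0.8)
print(f"alpha={a:.2e} m^2/s  C={c:.2e} J/(m^3 K)  lambda={k:.2f} W/(m K)")
```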
Remote sensing
Remote sensing from satellites and aircraft has greatly enhanced how the variation in soil thermal properties can be identified and utilized to benefit many aspects of human endeavor. While remote sensing of reflected light from surfaces does indicate the thermal response of the topmost layers of soil (a few molecular layers thick), it is the thermal infrared (TIR) wavelengths, whose energy variations extend to shallow depths below the ground surface, that are of most interest. A thermal sensor can detect variations in heat transfer into and out of near-surface layers caused by external heating, through the thermal processes of conduction, convection, and radiation. Microwave remote sensing from satellites has also proven useful, as it has the advantage over TIR of not being affected by cloud cover.
The various methods of measuring soil thermal properties have been utilized to assist in diverse fields such as: the expansion and contraction of construction materials, especially in freezing soils; the longevity and efficiency of gas pipes or electrical cables buried in the ground; energy conservation schemes; timing of planting in agriculture to ensure optimum seedling emergence and crop growth; and measuring greenhouse gas emissions, as heat affects the liberation of carbon dioxide from soil. Soil thermal properties are also becoming important in areas of environmental science such as determining water movement in radioactive waste and locating buried land mines.
Uses
The thermal effusivity of soil enables the ground to be used for underground thermal energy storage. Solar energy can be recycled from summer to winter by using the ground as a long term store of heat energy before being retrieved by ground source heat pumps in winter.
Changes in the amount of dissolved organic carbon and soil organic carbon within soil can affect its ability to respire, either increasing or decreasing the soil's carbon uptake.
Furthermore, MCS design criteria for shallow loop ground source heat pumps require an accurate in situ thermal conductivity reading. This can be done by using the above-mentioned thermal heat probe to accurately determine soil thermal conductivity across the site.
References
Thermal properties, soil
Thermodynamic properties | Soil thermal properties | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,122 | [
"Thermodynamic properties",
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Soil physics",
"Thermodynamics"
] |
5,460,692 | https://en.wikipedia.org/wiki/Teva%20Pharmaceuticals | Teva Pharmaceutical Industries Ltd. (also known as Teva Pharmaceuticals) is an Israeli multinational pharmaceutical company. Teva specializes primarily in generic drugs, but other business interests include branded-drugs, active pharmaceutical ingredients (APIs) and, to a lesser extent, contract manufacturing services and an out-licensing platform.
Teva's primary branded products include Austedo (deutetrabenazine) which is used for the treatment of chorea associated with Huntington's disease and tardive dyskinesia; and Ajovy (fremanezumab), used for the preventive treatment of migraine in adults. Additional branded drugs sold by Teva include Copaxone, Bendeka and Treanda, all of which are primarily sold in the United States.
Teva is listed on the Tel Aviv Stock Exchange and the New York Stock Exchange. Its manufacturing facilities are located in Israel, North America, Europe, Australia, and South America. The company is a member of the Pharmaceutical Research and Manufacturers of America (PhRMA).
Teva Pharmaceuticals is the largest generic drug manufacturer in the world. Overall, Teva is the 18th largest pharmaceutical company in the world. Teva has a history of legal trouble in relation to collusion and price-fixing to inflate prices for drugs. In 2023, Teva paid the largest fine to date for a domestic antitrust cartel in relation to a criminal investigation by the US Department of Justice into the company's price-fixing.
History
Salomon, Levin, and Elstein
Teva's earliest predecessor was SLE, Ltd., a wholesale drug business founded in 1901 in Ottoman Empire's Mutasarrifate of Jerusalem. SLE Ltd. took its name from the initials of its three cofounders: Chaim Salomon, Moshe Levin and Yitschak Elstein, and used camels to make deliveries. During the 1930s, new immigrants from Europe founded several pharmaceutical companies including Teva and Zori. In the 1930s, Salomon, Levin, and Elstein Ltd. also founded Assia, a pharmaceutical company.
Teva Pharmaceutical Industries
1935–1949
Teva Pharmaceutical Industries took its present form through the efforts of Günther Friedländer and his aunt Else Kober on May 1, 1935. The original registration was under the name Teva Middle East Pharmaceutical & Chemical Works Co. Ltd. in Jerusalem, then part of Mandatory Palestine. Friedländer was a German pharmacist, botanist and pharmacognosist, who immigrated to Mandatory Palestine in 1934.
The company was built with an investment of £4,900, which came from the family's own capital and partly from loans from other German immigrants. Capital shortage led to the joining of the banker Alfred Feuchtwanger as a partner in Teva, who received 33% of the shares in return for his investment.
Friedländer's business philosophy held that the pharmaceutical industry has a reliable basis in difficult economic times, since "A Jewish mother will always buy medicine for her children". During the Second World War, the company provided medicine to the Allied forces, in particular the British army present in the Middle East. After the war, Sir Alan Gordon Cunningham, the last of the High Commissioners for Palestine and Transjordan, visited Teva on behalf of the Secretary of State for the Colonies. His visit promoted Teva's reputation in the pharmaceutical market and created momentum for Teva's development.
During the Mandate period, Teva exported its medical products to Arab countries. In 1941, Friedländer presented Teva products at an exhibition held in Cairo, Egypt. The exhibition was sponsored by the general agent and sole distributor of medicine in Egypt and Sudan, Syria and Lebanon. Later on, Teva exported its products to the US, the Soviet Union (USSR), health institutes in Denmark, Czechoslovakia, Persia and Burma.
1950–1999
In 1951, Feuchtwanger initiated an initial public offering to raise capital through the newly founded Tel-Aviv Stock Exchange and Teva became a public company. In 1954, Teva received a Certificate Excellence in Training award from the Israeli President, Yitzhak Ben Zvi. In 1964, Teva partnered with Sintex, a company from Mexico, and Schering Plough.
In 1964, Assia and Zori merged and in 1968 acquired a controlling stake in Teva. In 1976, the three companies merged into the modern-day Teva Pharmaceutical Industries Ltd. In 1980, Teva acquired Ikapharm, then Israel's second largest drug manufacturer.
In 1980, Teva acquired Plantex.
In 1982, Teva was granted approval by the U.S. Food and Drug Administration (FDA) for its Kfar Saba manufacturing plant.
In 1995, Teva acquired Biogal Gyógyszergyár Rt. (Debrecen, Hungary) and acquired ICI (Italy).
2000s
In 2000, Teva acquired Canada-based Novopharm.
In October 2003, Teva announced its intention to acquire Sicor Inc. for $3.4 billion. The acquisition was completed on January 22, 2004, marking Teva's entry into the biosimilars market.
In 2005, Teva opened a new, state-of-the-art pharmaceutical manufacturing plant in Har Hotzvim, a technology park in Jerusalem. The plant received FDA approval in early 2007. Teva entered the Japanese market in 2005 and in 2008 established a generics joint venture with Kowa.
In January 2006, Teva acquired its U.S. rival Ivax Corporation for $7.4 billion. In 2008, sales totalled $11.08 billion, $13.9 billion in 2009, and in 2010 total sales rose to $16.1 billion, of which a major portion was in Europe and North America. On December 23, 2008, Teva acquired Barr Pharmaceuticals for $7.5 billion, making Barr and Pliva (which Barr bought earlier) part of Teva.
2010s
On March 18, 2010, Teva said it planned to acquire German generics company Ratiopharm for US$5 billion. The deal was completed in August 2010, significantly expanding Teva's European coverage. In May 2011, Teva bought Cephalon for US$6.8 billion. The same month, Teva announced the ¥40 billion purchase of a majority stake in Japanese generic drug company Taiyo Pharmaceutical Industry, a move to secure a Japan-local production facility; it completed the $934 million acquisition in July 2011. In June 2013, Teva acquired US firm MicroDose Therapeutx for $40 million, with as much as $125 million to be paid in regulatory and developmental milestones. In 2010, Teva announced that it would build its main distribution center for the Americas in Philadelphia, PA, and was considering opening its US headquarters in the area. In 2010, it had 39,660 employees; in Israel, the workforce rose 7.5% to 6,774. In October 2010, Teva entered a licensing agreement with BioTime to develop and market BioTime's OpRegen for the treatment of age-related macular degeneration, an effort that in 2013 received $1.5 billion in funding from Israel's Office of the Chief Scientist.
In January 2014, Teva acquired NuPathe, after outbidding Endo, for $144 million. In June 2014, Teva acquired Labrys Biologics for up to $825 million, the aim being to strengthen the company's migraine pipeline through addition of LBR-101, an anti-CGRP monoclonal antibody therapeutic.
In March 2015, Teva acquired Auspex Pharmaceuticals for $3.5 billion growing its CNS portfolio. In April, Teva offered to acquire Mylan for $40 billion, only a fortnight after Mylan offered to buy Perrigo for $29 billion. Teva's offer for Mylan was contingent on Mylan abandoning its pursuit of Perrigo. Mylan stated in June 2015 that Teva's disclosure that it had a 1.35 percent stake in Mylan violated US antitrust rules. In October, the company acquired Mexico-based Representaciones e Investigaciones Medicas (Rimsa) for around $2.3 billion. In the same month Teva acquired Gecko Health Innovations. In November 2015, the company announced it would collaborate with Heptares Therapeutics with its work on small-molecule calcitonin gene-related peptide antagonists for migraine treatment, with the deal generating up to $410 million.
Teva Active Pharmaceutical Ingredients (TAPI) operates within Teva as a stand-alone business unit. In 2009, TAPI's sales to third parties totaled $565 million, and in 2010 sales rose by 13% to a total of $641 million.
In July 2017, it was reported that Pascal Soriot, CEO of AstraZeneca since 2012, would become the next CEO of Teva, succeeding Erez Vigodman; however, this was soon refuted. As of August 2017, the company had struggled to attract a new CEO, leading to mounting questions for the board. In August 2017, the board of directors announced a 75% cut in the dividend, reflecting declining profitability, and the share price fell by almost half in the days following. As of September 11, 2017, Teva remained the "world's biggest seller of generic medicines." On September 11, 2017, it was reported that the company had selected Kåre Schultz as the new Teva CEO. A day later the company announced it would sell its Paragard contraceptive brand to Cooper Cos for $1.1 billion, with the funds being used to pay down debt. Days later the company announced further divestments: the sale of contraception, fertility, menopause and osteoporosis products to CVC Capital Partners Fund VI for $703 million, and its emergency contraception brands for $675 million to Foundation Consumer Healthcare. By December, the company had announced a drastic 25 percent workforce reduction (more than 14,000 employees) as part of a two-year cost-reduction strategy. Following considerable lobbying by the Israeli Government, from which Teva had received considerable tax breaks, and from Israel's labor federation, the Histadrut, Teva agreed to delay some of the layoffs in Israel.
In October 2019, Teva faced criticism for making a "business decision to discontinue" Vincristine, a drug essential for the treatment of most childhood cancers according to the Food and Drug Administration.
Actavis Generics
In July 2015, Allergan agreed to sell its generic drug business (Actavis Generics) to Teva for $40.5 billion ($33.75 billion in cash and $6.75 billion worth of shares). As a result, Teva dropped its pursuit of Mylan. In order for the deal to gain regulatory approval, Teva sold off a number of assets, including a portfolio of five generic drugs to Sagent Pharmaceuticals for $40 million, as well as a further eight medicines to Dr. Reddy's in a $350 million deal. Teva also sold a further 15 marketed generics, as well as three others that were close to market, for $586 million to Impax Laboratories. In July, Teva sold off a further 42 products to the Australian generics company Mayne Pharma for $652 million; the deal moved Mayne up 50 spots, into the top 25 US generic companies. As part of the deal, Teva sought to raise $20 to $25 billion through a bond sale.
After completing the $39 billion acquisition of Actavis Generics, Teva announced another, smaller, deal with Allergan, agreeing to acquire its generic distribution business Anda for $500 million.
2010s divestments
On October 21, 2011, Par Pharmaceutical sealed a deal to acquire three products from Teva Pharmaceutical Industries, which the Israeli firm was required by the US Federal Trade Commission to divest before it could acquire US biotech Cephalon for $6.8 billion.
Following the acquisition of Allergan plc's generics division, some assets were sold off to meet US antitrust requirements. In June 2016, Indian pharmaceutical company Dr. Reddy's Laboratories Ltd bought eight Abbreviated New Drug Applications (ANDAs) for $350 million in cash. Also in June 2016, Teva sold two ANDAs to Indian pharmaceutical company Zydus Cadila, strengthening its US portfolio. On June 21, 2016, American pharmaceutical company Impax Laboratories (later known as Amneal Pharmaceuticals) bought a portfolio of generic drugs from Teva for about $586 million. On July 29, Cipla, an Indian pharmaceutical company, bought three products from Teva. Indian pharmaceutical company Aurobindo was also in the race to buy some Teva assets. In October 2016, Teva sold off part of its UK and Ireland generics business to Indian pharmaceutical company Intas Pharma for 600 million pounds (5,083 crore Indian rupees) in cash. In August 2016, Australian pharmaceutical company Mayne Pharma bought a portfolio of drugs from Teva for 845 million Australian dollars. In February 2018, Teva completed the sale of a portfolio of products within its global women's health business, covering contraception, fertility, menopause and osteoporosis, for $703 million in cash to CVC Capital Partners Fund VI. Teva also agreed to sell Plan B One-Step and its other brands of emergency contraception to Foundation Consumer Healthcare for $675 million in cash; combined annual net sales of these products were $140 million in the prior year. It also sold Paragard to a unit of Cooper Companies Inc (COO.N) for $1.1 billion.
Acquisition history
The following is an illustration of the company's major mergers and acquisitions and historical predecessors (this is not a comprehensive list):
Corporate governance
Research and development
Teva holds patents on multiple drugs, including Copaxone, a specialty drug for the treatment of multiple sclerosis that became the world's best-selling MS drug, and Azilect (sold as Agilect in some countries) for the treatment of Parkinson's disease. By July 2015, Copaxone held a "31.2 percent share of total MS prescriptions in the United States." Teva's newer 40 mg version of Copaxone, taken three times a week, "accounted for 68.5 percent of total Copaxone prescriptions in the United States." Copaxone accounts for about fifty percent of "Teva's profit and 20 percent of revenue." The competing Glatopa, a generic 20 mg version of Copaxone, is taken once a day.
In June 2006, Teva received from the FDA a 180-day exclusivity period to sell simvastatin (Zocor) in the U.S. as a generic drug in all strengths except 80 mg. Teva presently competes with the maker of brand-name Zocor, Merck & Co.; Ranbaxy Laboratories, which has 180-day exclusivity for the 80 mg strength; and Dr. Reddy's Laboratories, whose authorized generic version (licensed by Merck) is exempt from exclusivity.
In June 2010, the company announced it would discontinue its production of propofol, a major sedative estimated to be used in 75% of all US anesthetic procedures.
In March 2015, Teva sold four anti-cancer compounds to Ignyta Inc. for $41.6 million. As part of the deal Teva sold the following compounds which were then renamed:
CEP-32496 (renamed RXDX-105) a small molecule inhibitor of BRAF, EGFR and RET, now in Phase I/II trials
CEP-40783 (renamed RXDX-106) a small molecule inhibitor of AXL and c-Met in preclinical development
CEP-40125 (renamed RXDX-107) a nanoformulation of a modified bendamustine with potential activity in solid tumours. Bendamustine Rapid Infusion as therapy for CLL and NHL is part of Teva's specialty drugs pipeline.
TEV-44229 (renamed RXDX-108) a potent inhibitor of the kinase PKCiota
In July 2019, the company stopped production of Vincristine, a critical drug used to treat the most common forms of childhood cancer, and was criticized by media for creating a worldwide shortage of the drug. On 28 January 2020, the company announced that the Food and Drug Administration (FDA) had approved an autoinjector device for Ajovy (fremanezumab-vfrm) injection.
Controversies
Following Russia's invasion of Ukraine in 2022, Teva Pharmaceuticals faced criticism for continuing its operations in Russia despite international sanctions. While the company committed to stopping new investments, its focus on ensuring an uninterrupted supply of medicines has raised ethical concerns, with critics arguing that Teva's presence weakens the impact of sanctions aimed at pressuring Russia economically.
Legal issues
On June 25, 2010, Bayer sued Teva for falsely claiming that Teva's Gianvi was stabilized by betadex as a clathrate and could consequently be advertised as a generic of Yaz. The settlement resulted in Teva changing its product marketing to remove the claim that it used the same ingredients as Yaz. Bayer's method specifically prevents oxidative degradation of the estrogen, while Teva's does not.
In January 2015, the Supreme Court of the United States decided for Teva on the Copaxone patent in Teva Pharmaceuticals USA, Inc. v. Sandoz, Inc.
In December 2016, the attorneys general of 20 states filed a civil complaint accusing Teva of a coordinated scheme to artificially maintain high prices for a generic antibiotic and diabetes drug. The complaint alleged price collusion schemes between six pharmaceutical firms including informal gatherings, telephone calls, and text messages.
In January 2019, the Supreme Court of the United States decided for Teva in Helsinn Healthcare S.A. v. Teva Pharmaceuticals USA Inc.
On May 11, 2019, Teva Pharmaceuticals USA was one of 19 drug companies sued for price fixing in the United States by 44 states for inflating its prices, sometimes up to 1000%, in an illegal agreement among it and its competitors. In June 2021, Teva agreed to pay $925M to settle allegations it had engaged in price fixing in Mississippi. In January 2022, Teva agreed to pay $425M to settle allegations that it had concealed price fixing practices from shareholders.
In May 2019, Teva Pharmaceuticals USA agreed to pay $85 million to the U.S. state of Oklahoma to settle allegations that it had been overprescribing opioids, marketing them as safe, and downplaying their addictive qualities.
In July 2019, Teva paid $69 million to settle pay-for-delay claims.
In January 2020, Teva Pharmaceuticals agreed to pay $54 million to settle allegations under the False Claims Act that it violated the Anti-Kickback Statute by funding improper speaking programs to boost prescriptions.
In 2021, New York Attorney General Letitia James filed a lawsuit against Teva and several other opioid manufacturers for their alleged contribution to the opioid epidemic in New York.
In February 2022, Teva agreed to a $225 million settlement with the state of Texas to end claims that it fueled an opioid epidemic in the state by improperly marketing pain medicine.
In August 2023, Teva admitted to price-fixing charges related to the generic cholesterol drug Pravastatin, and agreed to pay a $225 million fine, after a criminal investigation by the US Department of Justice. The US Department of Justice stated that the settlement was "the largest to date for a domestic antitrust cartel."
Also in August 2023, Teva agreed to a legal settlement with US hospitals over its marketing of opioid products that ultimately raised costs for health providers and contributed to the Opioid epidemic in the United States. The lawsuit consisted of roughly 500 hospitals and health providers, resulting in a payment from Teva of $126 million over 18 years.
On October 31, 2024, the European Commission fined Teva €462.6 million "over misuse of the patent system and disparagement to delay [a] rival multiple sclerosis medicine", namely Copaxone (glatiramer acetate). Teva was accused of attempting to obstruct other producers of glatiramer acetate both by abusing the patent system and by setting up a misinformation campaign targeting other glatiramer producers.
Pharmaceutical products
A full list of products destined for the US market is available from tevagenerics.com. Teva also manufactures generic medications for use outside of the US, with Cyproterone acetate and Moclobemide being two examples.
Abacavir
Acetazolamide
Adderall (generic and branded)
Adrucil
Ajovy
Alprazolam
Amikacin Sulfate
Amitriptyline
Amoxicillin
Apri
Aripiprazole (generic)
Atazanavir (generic)
Atomoxetine
Atorvastatin
Augmentin (generic)
Austedo
Aviane
Azathioprine
Azithromycin
Baclofen
Balziva
Bisoprolol Fumarate
Bleomycin
Budeprion
Budesonide
Buspirone
Busulfan
Calcitriol
Camrese
Carboplatin
Cefdinir
Cephalexin
Ciclosporin
Ciprofloxacin
Citalopram
Cetirizine
Claravis
Clarithromycin
Clonazepam
Clozapine
Codeine
Copaxone
Cryselle
Cyclosporine
Daunorubicin
Dexmethylphenidate
Dextroamphetamine
Diazepam
Dihydrocodeine
Doxorubicin HCl
Emtricitabine/tenofovir
Epirubicin HCl
Eplerenone
Epoprostenol Sodium
Errin
Escitalopram
Estazolam
Estradiol
Etodolac
Famciclovir
Fentanyl
Filgrastim
Finasteride
Fiorinal
Flunitrazepam
Fluocinonide
Fluconazole
Fluoxetine
Fluvoxamine
Fremanezumab
Gabapentin
Galzin
Haloperidol
Haloperidol Decanoate
Hydroxychloroquine
Hydroxyzine Hydrochloride
Ibuprofen Max
Idarubicin HCl
Ifosfamide
Irinotecan
Gianvi
Irbesartan
Junel
Kariva
Kelnor
Lamotrigine
Lansoprazole
Laquinimod
Letrozole
Leucovorin Calcium
Levonorgestrel branded as Enpresse
Loperamide
Losartan
Lorazepam
Methotrexate
Methylphenidate
Minocycline
Mirtazapine
Mitoxantrone
Montelukast (generic)
Naltrexone
Naloxone
Naproxen
Norepinephrine
Norethisterone
Nortrel
Nortriptyline
Nuvigil
Nystatin
Ocella
Olanzapine
Omeprazole
Optalgin
Oxycodone
Oxymorphone
Pantoprazole
Phentermine
Portia
Pravastatin
Pregabalin
Prednisolone
ProAir
Provigil
Quetiapine
QVAR
QNASL
Ramipril
Rasagiline
Rituximab
Salbutamol (Albuterol)
Sildenafil
Sertraline
Simvastatin
Sprintec
Sulfamethoxazole Trimethoprim
Sumatriptan
Tadalafil
Tamoxifen
Temazepam
Temozolomide
Tenofovir
Terbinafine
Testosterone
Tiagabine
Topiramate
Tramadol
Trazodone
Triethylenetetramine
Tri-Sprintec
Ursodiol
Valsartan
Verapamil
Velivet
Venlafaxine
Warfarin
Zolpidem
Zonisamide
Zopiclone
Recalls
In 2018-2019, Teva Pharmaceuticals USA recalled antihypertensive tablets containing valsartan and losartan due to the detection, beyond acceptable limits, of N-nitrosodiethylamine (NDEA) and N-nitroso-N-methyl-4-aminobutyric acid (NMBA), respectively, which are probable human carcinogens.
See also
BioTime
Economy of Israel
Health care in Israel
Science and technology in Israel
Teva Active Pharmaceutical Ingredients (TAPI)
AION Labs
References
External links
Official website
Companies listed on the Tel Aviv Stock Exchange
Companies in the TA-35 Index
Pharmaceutical companies of Israel
Pharmaceutical companies established in 1901
Generic drug manufacturers
Israeli brands
Life sciences industry
Manufacturing companies based in Tel Aviv
Multinational companies headquartered in Israel
Pharmaceutical companies based in New Jersey
1901 establishments in the Ottoman Empire | Teva Pharmaceuticals | [
"Biology"
] | 5,229 | [
"Life sciences industry"
] |
5,460,807 | https://en.wikipedia.org/wiki/ODIN%20Technologies | ODIN provides RFID (Radio-Frequency Identification) software for the Aerospace, Government, Healthcare, Financial Services and Social Media markets. ODIN's world headquarters is located in San Diego, CA. ODIN was acquired by Quake Global in December 2012 and continues to focus on healthcare and asset tracking.
History
ODIN won a $14.6M contract from the United States Department of Defense's Defense Logistics Agency (DLA).
Acquisition activity
In December 2010 ODIN announced the acquisition of Reva Systems. Terms of the deal were not publicly disclosed. Reva raised around $35 million in venture funding from Charles River Ventures, Northbridge Partners and Cisco.
ODIN was purchased by Quake Global in 2012 for an undisclosed sum and continues to focus on healthcare and asset tracking.
See also
RFID
Information Technology
Symbol Technologies
Intermec
Omni-ID
References
"Airbus taps ODIN for RFID roll-out" RFID Update, p.
DoD Wants Your RFID Shipments: Five Steps To Compliance, RFID Solutions Online, by Scott Decker and Bret Kinsella, ODIN technologies
Pharma’s Flirtation With RFID — Will There Be A Second Date, RFID Solutions Online, by Bret Kinsella, ODIN technologies
The 5 Elements of a Successful RFID Implementation by Bret Kinsella, ODIN technologies
ODIN Benchmarks Globe-Trotting Tags RFID Journal
ODIN Benchmarks RFID Readers RFID Journal
FCC Grants ODIN Experimental License RFID Journal
UHF vs HF for Pharma RFID: And the Winner Is . . . Pharmaceutical Commerce
United States Department of Commerce Awards ODIN technologies the first Excellence in Innovation Award
Odin Technologies Aims to Be the Chief of RFID The Washington Post
NI+C of Japan teams with Odin Technologies for scientific RFID testing Test and Measurement World
ODIN Technologies Establishes RFID Joint Venture in Europe, ODIN Budapest SecureID News
Nine ODIN technologies Engineers Earn CompTIA RFID+ Certification
"Financial Services Technology Consortium hires ODIN technologies to set numbering and performance standard"
Automatic identification and data capture
Companies based in Dulles, Virginia
Defunct electronics companies of the United States
Radio-frequency identification
Radio-frequency identification companies | ODIN Technologies | [
"Technology",
"Engineering"
] | 424 | [
"Radio-frequency identification",
"Radio electronics",
"Data",
"Automatic identification and data capture"
] |
5,460,997 | https://en.wikipedia.org/wiki/Clyssus | In the pre-modern chemistry of Paracelsus, a clyssus, or clissus, was one of the effects, or productions of that art; consisting of the most efficacious principles of any body, extracted, purified, and then remixed.
Or, a clyssus is when the several constituents of a body are prepared and purified separately, and then combined again. Thus, the five principles, reassembled into one body, by long digestion, make a clyssus. So, clyssus of antimony is produced by distillation from antimony, nitre, and sulfur mixed together. There is also clyssus of vitriol, which is a spirit drawn by distillation from vitriol dissolved in vinegar; this was used by pre-modern physicians in treating various diseases, and to extract the tinctures of several vegetables.
Clyssus is used among some authors for a kind of sapa, or extract, made with eight parts of the juice of a plant, and one of sugar, seethed together into the consistency of honey.
References
History of pharmacy
Alchemical substances | Clyssus | [
"Chemistry"
] | 235 | [
"Alchemical substances"
] |
5,461,026 | https://en.wikipedia.org/wiki/Polycarpic | Polycarpic plants are those that flower and set seeds many times before dying. A term of identical meaning is pleonanthic and iteroparous. Polycarpic plants are able to reproduce multiple times due to at least some portion of its meristems being able to maintain a vegetative state in some fashion so that it may reproduce again. This type of reproduction seems to be best suited for plants who have a fair amount of security in their environment as they do continuously reproduce.
Generally, in reference to life-history theory, plants sacrifice capacity in one regard to improve themselves in another, so polycarpic plants that strive towards continued reproduction might invest less in growth. However, these traits are not necessarily directly correlated, and some plants, notably invasive species, do not follow this general trend, showing fairly long lifespans alongside frequent reproduction. To an extent, a balance between the two traits does seem important: one study noted that neither plants with a very short lifespan nor plants with a very long lifespan and little reproductive success were found among the nearly 400 plants included in the study.
Because of their reduced development, polycarpic plants have been noted to have less energy for reproduction throughout their lifetimes than monocarpic plants. In addition, as its lifespan increases, a plant is subject to more burdens of age and may direct more resources towards coping with them, leaving less energy for reproduction. One trend noticed across some studies is that shorter-lived plants generally ramp up their reproductive expenditure more quickly. However, the specific structure of polycarpic strategies depends on the particular plant, and polycarpic plants do not seem to share a uniform pattern of how energy is spent on reproduction. These strategies are not fixed and are subject to random environmental factors and to other functions of the plant itself.
The threat of competition may also influence how polycarpic plants reproduce. Some studies show that even when competition itself is not directly harmful, plants can still be endangered by associated pressures such as disease. Faced with competition, a polycarpic plant may respond in several ways: it may invest more in growth than reproduction in the hope of eventually overcoming its competitors and reproducing successfully, or, if the threat to the species is severe enough, it may invest more heavily in reproduction, although this ultimately hampers development and diminishes both growth and future reproduction. One study reports that, when pressured, polycarpic plants generally favor reproduction, which may help them against competition by keeping them from becoming overwhelmed.
Generally, herbaceous plants respond by focusing on reproduction, while woody plants tend to endure the competition, as woody plants can usually withstand more and live longer than herbaceous plants, which generally have shorter lifespans.
See also
Monocarpic
Perennial
References
Plant morphology | Polycarpic | [
"Biology"
] | 643 | [
"Plant morphology",
"Plants"
] |
2,155,690 | https://en.wikipedia.org/wiki/List%20of%20hash%20functions | This is a list of hash functions, including cyclic redundancy checks, checksum functions, and cryptographic hash functions.
Cyclic redundancy checks
Adler-32 is often mistaken for a CRC, but it is not: it is a checksum.
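To make the distinction concrete, Python's zlib module exposes both functions; the sketch below (an illustration, not part of the list itself) also recomputes Adler-32 from its definition as two running sums modulo 65521, rather than the polynomial division a CRC performs:

```python
import zlib

data = b"hello world"
print(f"Adler-32: {zlib.adler32(data):#010x}")  # checksum
print(f"CRC-32:   {zlib.crc32(data):#010x}")    # cyclic redundancy check

# Adler-32 by definition: A = 1 + sum of bytes, B = sum of running A values,
# both modulo 65521; the result packs B in the high 16 bits and A in the low.
MOD = 65521
a, b = 1, 0
for byte in data:
    a = (a + byte) % MOD
    b = (b + a) % MOD
assert (b << 16) | a == zlib.adler32(data)
```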
Checksums
Universal hash function families
Non-cryptographic hash functions
Keyed cryptographic hash functions
Unkeyed cryptographic hash functions
See also
Hash function security summary
Secure Hash Algorithms
NIST hash function competition
Key derivation functions (category)
References
List
Checksum algorithms
Cryptography lists and comparisons | List of hash functions | [
"Technology"
] | 109 | [
"Computing-related lists",
"Cryptography lists and comparisons"
] |
2,155,701 | https://en.wikipedia.org/wiki/HD%20149026 | HD 149026, also named Ogma , is a yellow subgiant star approximately 250 light-years from the Sun in the constellation of Hercules. An extrasolar planet (designated HD 149026 b, later named Smertrios) is believed to orbit the star.
Nomenclature
HD 149026 is the star's identifier in the Henry Draper Catalogue. Following its discovery in 2005, the planet was designated HD 149026 b.
In July 2014 the International Astronomical Union (IAU) launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning names were Ogma for this star and Smertrios for its planet.
The winning names were based on those submitted by the Club d'Astronomie de Toussaint of France, namely 'Ogmios' and 'Smertrios'. Ogmios was a Gallo-Roman deity and Smertrios was a Gallic deity of war. Because 'Ogmios' is already the name of an asteroid (189011 Ogmios), the IAU substituted the name Ogma, a deity of eloquence, writing, and great physical strength in the Celtic mythologies of Ireland and Scotland, who may be related to Ogmios.
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. In its first bulletin of July 2016, the WGSN explicitly recognized the names of exoplanets and their host stars approved by the Executive Committee Working Group Public Naming of Planets and Planetary Satellites, including the names of stars adopted during the 2015 NameExoWorlds campaign. This star is now so entered in the IAU Catalog of Star Names.
Properties
The star is thought to be much more massive, larger and brighter than the Sun. The higher mass means that despite its considerably younger age (2.0 Ga) it is already much more evolved than the Sun: the fusion of hydrogen in the star's core is coming to an end, and it is beginning to evolve towards the red giant stage. At a distance of 250 light-years, the star is not visible to the unaided eye, but it should be easily seen in binoculars or a small telescope.
The star is over twice as enriched in chemical elements heavier than hydrogen and helium as the Sun. Because of this, and because the star is relatively bright, a group of astronomers in the N2K Consortium began to study it. The star's anomalous composition as measured may be surface pollution only, from the intake of heavy-element planetesimals.
Planetary system
In 2005, an unusual extrasolar planet was discovered orbiting the star. Designated HD 149026 b, it was detected transiting the star, allowing its diameter to be measured. It was found to be smaller than other known transiting planets, meaning it is unusually dense for a closely orbiting giant planet. The temperature of the giant planet is calculated to be , generating so much infrared heat that it glows. Scientists believe the planet absorbs nearly all the sunlight reaching it and radiates it back into space as heat.
See also
51 Pegasi
Lists of exoplanets
References
External links
149026
Hercules (constellation)
080838
Planetary transit variables
Planetary systems with one confirmed planet
G-type subgiants
Durchmusterung objects
Stars with proper names | HD 149026 | [
"Astronomy"
] | 724 | [
"Hercules (constellation)",
"Constellations"
] |
2,155,746 | https://en.wikipedia.org/wiki/Mobile%20phone%20feature | A mobile phone feature is a capability, service, or application that a mobile phone offers to its users. Mobile phones are often referred to as feature phones, and offer basic telephony. Handsets with more advanced computing ability through the use of native code try to differentiate their own products by implementing additional functions to make them more attractive to consumers. This has led to great innovation in mobile phone development over the past 20 years.
The common components found on all phones are:
A number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips.
A battery (typically a lithium-ion battery), providing the power source for the phone functions.
An input mechanism to allow the user to interact with the phone. The most common input mechanism is a keypad, but touch screens are also found in smartphones.
Basic mobile phone services to allow users to make calls and send text messages.
All GSM phones use a SIM card to allow an account to be swapped among devices. Some CDMA devices also have a similar card called a R-UIM.
Individual GSM, WCDMA, IDEN and some satellite phone devices are uniquely identified by an International Mobile Equipment Identity (IMEI) number, whose final digit is a Luhn check digit (see the sketch after this list).
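A minimal sketch of the IMEI check (the example value is a commonly used illustrative IMEI, not any real device's identifier): the 15th digit of an IMEI is a Luhn check digit, so a received number can be validated as follows:

```python
def luhn_valid(number: str) -> bool:
    """Validate a numeric string (e.g. a 15-digit IMEI) with the Luhn algorithm."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:       # equivalent to summing the two digits of the product
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("490154203237518"))  # True: the final check digit is consistent
```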
All mobile phones are designed to work on cellular networks and contain a standard set of services that allow phones of different types and in different countries to communicate with each other. However, they can also support other features added by various manufacturers over the years:
roaming which permits the same phone to be used in multiple countries, providing that the operators of both countries have a roaming agreement.
send and receive data and faxes (if a computer is attached), access WAP services, and provide full Internet access using technologies such as GPRS.
applications like a clock, alarm, calendar, contacts, and calculator and a few games.
Sending and receiving pictures and videos (without internet) through MMS, and for short distances with e.g. Bluetooth.
In multimedia phones, Bluetooth is a common and important feature.
GPS receivers integrated into or connected (e.g. using Bluetooth) to cell phones, primarily to aid in dispatching emergency responders and road tow truck services. This feature is generally referred to as E911.
Push to Talk over Cellular, available on some mobile phones, is a feature that allows the user to be heard only while the talk button is held, similar to a walkie-talkie.
A hardware notification LED on some phones.
MOS integrated circuit chips
A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, which in turn contain billions of tiny MOS field-effect transistors (MOSFETs). A typical smartphone contains the following MOS IC chips.
Application processor (CMOS system-on-a-chip)
Flash memory (floating-gate MOS memory)
Cellular modem (baseband RF CMOS)
RF transceiver (RF CMOS)
Phone camera image sensor (CMOS image sensor)
Power management integrated circuit (power MOSFETs)
Display driver (LCD or LED driver)
Wireless communication chips (Wi-Fi, Bluetooth, GPS receiver)
Sound chip (audio codec and power amplifier)
Gyroscope
Capacitive touchscreen controller (ASIC and DSP)
RF power amplifier (LDMOS)
User interface
Besides the number keypad and buttons for accepting and declining calls (typically from left to right and coloured green and red respectively), button mobile phones commonly feature two option keys, one to the left and one to the right, and a four-directional D-pad which may feature a center button which acts in resemblance to an "Enter" and "OK" button.
A pushable scroll wheel has been implemented in the 1990s on the Nokia 7110.
Software, applications and services
In the early stages, every mobile phone company had its own user interface, which can be considered a "closed" operating system, since configurability was minimal. A limited variety of basic applications (usually games, accessories like a calculator or conversion tool, and so on) was usually included with the phone, and those were not available otherwise. Early mobile phones included a basic web browser for reading basic WAP pages. Handhelds (personal digital assistants like the Palm, running Palm OS) were more sophisticated and also included a more advanced browser and a touch screen (for use with a stylus), but these were not broadly used compared to standard phones. Other capabilities like pulling and pushing email or working with a calendar were also made more accessible, but usually required physical (not wireless) syncing. The BlackBerry 850, an email pager released January 19, 1999, was the first device to integrate email.
A major step towards a more "open" mobile OS was the Symbian S60 OS, which could be expanded by downloading software (written in C++, Java or Python) and whose appearance was more configurable. In July 2008, Apple introduced its App Store, which made downloading mobile applications more accessible. In October 2008, the HTC Dream was the first commercially released device to use the Linux-based Android OS, which was purchased and further developed by Google and the Open Handset Alliance to create an open competitor to the other major smartphone platforms of the time (mainly the Symbian operating system, BlackBerry OS, and iOS). The operating system offered a customizable graphical user interface and a notification system showing a list of recent messages pushed from apps.
The most commonly used data application on mobile phones is SMS text messaging. The first SMS text message was sent from a computer to a mobile phone in 1992 in the UK, while the first person-to-person SMS from phone to phone was sent in Finland in 1993.
The first mobile news service, delivered via SMS, was launched in Finland in 2000. Mobile news services are expanding with many organizations providing "on-demand" news services by SMS. Some also provide "instant" news pushed out by SMS.
Mobile payments were first trialled in Finland in 1998 when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually, the idea spread and in 1999 the Philippines launched the first commercial mobile payments systems, on the mobile operators Globe and Smart. Today, mobile payments ranging from mobile banking to mobile credit cards to mobile commerce are very widely used in Asia and Africa, and in selected European markets. Usually, the SMS services utilize short code.
Some network operators have utilized USSD for information, entertainment or finance services (e.g. M-Pesa).
Other non-SMS data services used on mobile phones include mobile music, downloadable logos and pictures, gaming, gambling, adult entertainment and advertising. The first downloadable mobile content was sold to a mobile phone in Finland in 1998, when Radiolinja (now Elisa) introduced the downloadable ringtone service. In 1999, Japanese mobile operator NTT DoCoMo introduced its mobile Internet service, i-Mode, which today is the world's largest mobile Internet service.
Even after the appearance of smartphones, network operators have continued to offer information services, although in some places, those services have become less common.
Power supply
Mobile phones generally obtain power from rechargeable batteries. There are a variety of ways used to charge cell phones, including USB, portable batteries, mains power (using an AC adapter), cigarette lighters (using an adapter), or a dynamo. In 2009, the first wireless charger was released for consumer use. Some manufacturers have been experimenting with alternative power sources, including solar cells.
Various initiatives, such as the EU Common External Power Supply have been announced to standardize the interface to the charger, and to promote energy efficiency of mains-operated chargers. A star rating system is promoted by some manufacturers, where the most efficient chargers consume less than 0.03 watts and obtain a five-star rating.
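For scale (a worked example, not a figure from the text): a charger drawing 0.03 watts continuously consumes about 0.03 W × 8,760 h ≈ 0.26 kWh over a full year, which is why such standby losses earn the top efficiency rating.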
Battery
Most modern mobile phones use a lithium-ion battery. A popular early mobile phone battery was the nickel metal-hydride (NiMH) type, due to its relatively small size and low weight. Lithium-ion batteries later became commonly used, as they are lighter and do not have the voltage depression due to long-term over-charging that nickel metal-hydride batteries do. Many mobile phone manufacturers use lithium–polymer batteries as opposed to the older lithium-ion, the main advantages being even lower weight and the possibility to make the battery a shape other than strict cuboid.
SIM card
GSM mobile phones require a small microchip called a Subscriber Identity Module or SIM card, to function. The SIM card is approximately the size of a small postage stamp and is usually placed underneath the battery in the rear of the unit. The SIM securely stores the service-subscriber key (IMSI) used to identify a subscriber on mobile telephony devices (such as mobile phones and computers). The SIM card allows users to change phones by simply removing the SIM card from one mobile phone and inserting it into another mobile phone or broadband telephony device.
A SIM card contains its unique serial number, internationally unique number of the mobile user (IMSI), security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to and two passwords (PIN for usual use and PUK for unlocking).
SIM cards are available in three standard sizes. The first is the size of a credit card (85.60 mm × 53.98 mm × 0.76 mm, defined by ISO/IEC 7810 as ID-1). The newer, most popular miniature version has the same thickness but a length of 25 mm and a width of 15 mm (ISO/IEC 7810 ID-000), and has one of its corners truncated (chamfered) to prevent misinsertion. The newest incarnation, known as the 3FF or micro-SIM, has dimensions of 15 mm × 12 mm. Most cards of the two smaller sizes are supplied as a full-sized card with the smaller card held in place by a few plastic links; it can easily be broken off to be used in a device that uses the smaller SIM.
The first SIM card was made in 1991 by Munich smart card maker Giesecke & Devrient for the Finnish wireless network operator Radiolinja. Giesecke & Devrient sold the first 300 SIM cards to Elisa (ex. Radiolinja).
Those cell phones that do not use a SIM card have the data programmed into their memory. This data is accessed by using a special digit sequence to enter the "NAM" (Number Assignment Module) programming menu. From there, information can be added, including a new number for the phone, new Service Provider numbers, new emergency numbers, a new Authentication Key or A-Key code, and a Preferred Roaming List or PRL. However, to prevent the phone being accidentally disabled or removed from the network, the Service Provider typically locks this data with a Master Subsidy Lock (MSL). The MSL also locks the device to a particular carrier when it is sold as a loss leader.
The MSL applies only to the SIM, so once the contract has expired, the MSL still applies to the SIM. The phone, however, is also initially locked by the manufacturer into the Service Provider's MSL. This lock may be disabled so that the phone can use other Service Providers' SIM cards. Most phones purchased outside the U.S. are unlocked phones because there are numerous Service Providers that are close to one another or have overlapping coverage. The cost to unlock a phone varies but is usually very cheap and is sometimes provided by independent phone vendors.
A similar module called a Removable User Identity Module or RUIM card is present in some CDMA networks, notably in China and Indonesia.
Multi-card hybrid phones
A hybrid mobile phone can take more than one SIM card, even of different types. The SIM and RUIM cards can be mixed together, and some phones also support three or four SIMs.
From 2010 onwards they became popular in India and Indonesia and other emerging markets, attributed to the desire to obtain the lowest on-net calling rate. In Q3 2011, Nokia shipped 18 million of its low cost dual SIM phone range in an attempt to make up lost ground in the higher end smartphone market.
Display
Mobile phones have a display device, some of which are also touch screens. The screen size varies greatly by model and is usually specified either as width and height in pixels or the diagonal measured in inches.
Some phones have more than one display, for example the Kyocera Echo, an Android smartphone with a dual 3.5 inch screen. The screens can also be combined into a single 4.7 inch tablet style computer.
Central processing unit
Mobile phones have central processing units (CPUs), similar to those in computers, but optimised to operate in low power environments. In smartphones, the CPU is typically integrated in a system-on-a-chip (SoC) application processor.
Mobile CPU performance depends not only on the clock rate (generally given in multiples of hertz) but also on the memory hierarchy, which greatly affects overall performance. Because of this, the performance of mobile phone CPUs is often more appropriately given by scores derived from various standardized tests measuring the real effective performance in commonly used applications.
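As a rough illustration of the memory-hierarchy point (a desktop Python sketch, not mobile code; the array size is an arbitrary choice larger than typical CPU caches), summing the same data with scattered accesses is far slower per element than a sequential scan, even at the same clock rate:

```python
import time
import numpy as np

a = np.arange(8_000_000, dtype=np.int64)  # ~64 MB, well beyond cache sizes

def timed_sum(view):
    start = time.perf_counter()
    total = view.sum()
    return total, time.perf_counter() - start

_, t_contig = timed_sum(a)         # sequential scan: cache- and prefetch-friendly
_, t_strided = timed_sum(a[::16])  # 1/16 of the elements, but scattered accesses

# Despite touching 16x fewer elements, the strided sum is typically nowhere
# near 16x faster: every access still pays for a full cache-line fetch.
print(f"contiguous: {t_contig*1e3:.1f} ms  strided: {t_strided*1e3:.1f} ms")
```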
Miscellaneous features
Other features that may be found on mobile phones include GPS navigation, music (MP3) and video (MP4) playback, RDS radio receiver, built-in projector, vibration and other "silent" ring options, alarms, memo recording, personal digital assistant functions, ability to watch streaming video, video download, video calling, built-in cameras (1.0+ Mpx) and camcorders (video recording), with autofocus and flash, ringtones, games, PTT, memory card reader (SD), USB (2.0), dual line support, infrared, Bluetooth (2.0) and WiFi connectivity, NFC, instant messaging, Internet e-mail and browsing and serving as a wireless modem.
The first smartphone was the Nokia 9000 Communicator in 1996, which added PDA functionality to the basic mobile phone of the time. As miniaturization and the increased processing power of microchips have enabled ever more features to be added to phones, the concept of the smartphone has evolved, and what was a high-end smartphone five years ago is a standard phone today.
Several phone series have been introduced to address a given market segment, such as the RIM BlackBerry focusing on enterprise/corporate customers' email needs; the Sony Ericsson Walkman series of music phones and Cybershot series of camera phones; the Nokia Nseries of multimedia phones; the Palm Pre; the HTC Dream; and the Apple iPhone.
Nokia and the University of Cambridge demonstrated a bendable cell phone called the Morph. Some phones have an electromechanical transducer on the back which changes the electrical voice signal into mechanical vibrations. The vibrations flow through the cheek bones or forehead, allowing the user to hear the conversation. This is useful in noisy situations or if the user is hard of hearing.
As of 2018, there are smartphones that offer reverse wireless charging.
Multi-mode and multi-band mobile phones
Most mobile phone networks are digital and use the GSM, CDMA or iDEN standards, which operate at various radio frequencies. Phones tied to one network can only be used with a service plan from that company; for example, a Verizon phone cannot be used with T-Mobile service, and vice versa.
A multi-mode phone operates across different standards, whereas a multi-band phone (known more specifically as dual-, tri- or quad-band) is designed to work on more than one radio frequency. Some multi-mode phones can operate on analog networks as well (for example, dual-band, tri-mode: AMPS 800 / CDMA 800 / CDMA 1900).
For a GSM phone, dual-band usually means 850 / 1900 MHz in the United States and Canada, 900 / 1800 MHz in Europe and most other countries. Tri-band means 850 / 1800 / 1900 MHz or 900 / 1800 / 1900 MHz. Quad-band means 850 / 900 / 1800 / 1900 MHz, also called a world phone, since it can work on any GSM network.
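As a minimal sketch of the naming convention just described, the mapping below classifies a phone's supported GSM bands; the frequency sets come directly from this section, and the function name is illustrative only.

```python
# Frequencies (in MHz) from the band combinations described above.
BAND_NAMES = {
    frozenset({850, 1900}): "dual-band (United States and Canada)",
    frozenset({900, 1800}): "dual-band (Europe and most other countries)",
    frozenset({850, 1800, 1900}): "tri-band",
    frozenset({900, 1800, 1900}): "tri-band",
    frozenset({850, 900, 1800, 1900}): "quad-band (world phone)",
}

def classify_gsm_bands(bands_mhz) -> str:
    """Return the conventional label for a GSM phone's supported bands."""
    return BAND_NAMES.get(frozenset(bands_mhz), "non-standard combination")

print(classify_gsm_bands({850, 900, 1800, 1900}))  # quad-band (world phone)
```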
Multi-band phones have been valuable for enabling roaming, whereas multi-mode phones helped to introduce WCDMA features without customers having to give up the wide coverage of GSM. Almost every true 3G phone sold is actually a WCDMA/GSM dual-mode device. This is also true of 2.75G phones such as those based on CDMA-2000 or EDGE.
Challenges in producing multi-mode phones
The special challenge in producing a multi-mode mobile is finding ways to share components between the different standards. The keypad and display should be shared; otherwise it would be hard to treat the device as one phone. Beyond that, though, there are challenges at each level of integration, and how difficult they are depends on the differences between systems. In IS-95/GSM multi-mode phones, for example, or AMPS/IS-95 phones, the baseband processing is very different from system to system. This leads to real difficulties in component integration and, consequently, to larger phones.
An interesting special case of multi-mode phones is the WCDMA/GSM phone. The radio interfaces are very different from each other, but mobile to core network messaging has strong similarities, meaning that software sharing is quite easy. Probably more importantly, the WCDMA air interface has been designed with GSM compatibility in mind. It has a special mode of operation, known as punctured mode, in which, instead of transmitting continuously, the mobile is able to stop sending for a short period and try searching for GSM carriers in the area. This mode allows for safe inter-frequency handovers with channel measurements which can only be approximated using "pilot signals" in other CDMA based systems.
A final interesting case is that of mobiles covering the DS-WCDMA and MC-CDMA 3G variants of the IMT-2000 family. Initially, the chip rates of these systems were incompatible. As part of negotiations related to patents, it was agreed to use compatible chip rates. This should mean that, despite the fact that the air and system interfaces are quite different, even on a philosophical level, much of the hardware for each system inside a phone should be common, with differences mostly confined to software.
Data communications
Mobile phones are now heavily used for data communications, such as SMS messages, browsing mobile web sites, and even streaming audio and video files. The main limiting factors are the size of the screen, the lack of a keyboard, processing power and connection speed. Most cellphones that support data communications can be used as wireless modems (via cable or Bluetooth) to connect a computer to the Internet. Such access is slow and expensive, but it can be available in very remote areas.
With newer smartphones, screen resolution and processing power have improved; some phone CPUs run at over 1 GHz. Many complex programs are now available for the various smartphone operating systems, such as Symbian and Windows Phone.
Connection speed depends on network support. Originally, data transfers over GSM networks were possible only over CSD (circuit-switched data), which has a bandwidth of 9,600 bit/s and is usually billed by connection time (from the network's point of view, it does not differ much from a voice call). Later, an improved version, HSCSD (high-speed CSD), was introduced; it can use multiple time slots for the downlink, improving speed to a maximum of about 42 kbit/s, and is also billed by time. GPRS (general packet radio service) followed, operating on a completely different principle: it can also use multiple time slots for transfer, but it does not tie up radio resources when not transferring data (as opposed to CSD and the like). GPRS is usually given lower priority than voice and CSD traffic, so latencies are large and variable. GPRS was later upgraded to EDGE, which differs mainly in its radio modulation, squeezing more data capacity into the same radio bandwidth. GPRS and EDGE are usually billed by data traffic volume. Some phones also feature full QWERTY keyboards, such as the LG enV.
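To make these speed differences concrete, here is a small sketch comparing how long a 1 MB download would take at each technology's nominal rate. The CSD and HSCSD figures come from the paragraph above; the GPRS and EDGE rates are assumed typical values, not figures from this article.

```python
# Nominal downlink rates in bit/s. The CSD and HSCSD figures come from the
# text above; the GPRS and EDGE values are assumed typical rates, added
# here only for illustration.
RATES_BPS = {
    "CSD": 9_600,
    "HSCSD": 42_000,
    "GPRS (assumed)": 40_000,
    "EDGE (assumed)": 200_000,
}

FILE_BITS = 1_000_000 * 8  # a 1 MB file

for tech, rate in RATES_BPS.items():
    minutes = FILE_BITS / rate / 60
    print(f"{tech:>16}: {minutes:5.1f} minutes")
```

At 9,600 bit/s the download takes nearly 14 minutes, versus roughly 3 minutes over HSCSD, which is why packet-switched, volume-billed services like GPRS and EDGE were a significant improvement.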
As of April 2006, several models, such as the Nokia 6680, support 3G communications. Such phones have access to the Web via a free download of the Opera web browser. Verizon Wireless models come with Internet Explorer pre-loaded onto the phone.
Vulnerability to viruses
As more complex features are added to phones, they become more vulnerable to viruses which exploit weaknesses in these features. Even text messages can be used in attacks by worms and viruses. Advanced phones capable of e-mail can be susceptible to viruses that multiply by sending messages through a phone's address book. In some phone models, USSD codes were exploited to induce a factory reset, clearing the data and resetting the user settings.
A virus may allow unauthorized users to access a phone to find passwords or corporate data stored on the device. Moreover, they can be used to commandeer the phone to make calls or send messages at the owner's expense.
Mobile phones used to have proprietary operating systems unique to each manufacturer, which had the beneficial effect of making it harder to design a mass attack. However, the rise of software platforms and operating systems shared by many manufacturers, such as Java, Microsoft operating systems, Linux, or Symbian OS, may increase the spread of viruses in the future.
Bluetooth is a feature now found in many higher-end phones, and the virus Caribe (also known as Cabir) hijacked this function, making Bluetooth phones infect other Bluetooth phones running the Symbian OS. In early November 2004, several web sites began offering a specific piece of software promising ringtones and screensavers for certain phones. Those who downloaded the software found that it turned each icon on the phone's screen into a skull-and-crossbones and disabled their phones, so they could no longer send or receive text messages or access contact lists or calendars. The virus has since been dubbed "Skulls" by security experts. The Commwarrior-A virus was identified in March 2005; it attempts to replicate itself through MMS to others on the phone's contact list. Like Cabir, Commwarrior-A also tries to communicate via Bluetooth wireless connections with other devices, which can eventually drain the battery. However, the virus requires user intervention to propagate.
Bluetooth phones are also subject to bluejacking, which although not a virus, does allow for the transmission of unwanted messages from anonymous Bluetooth users.
Cameras
Most current phones also have a built-in digital camera (see camera phone), with resolutions as high as 108 megapixels.
This gives rise to some concern about privacy, in view of possible voyeurism, for example in swimming pools. South Korea has ordered manufacturers to ensure that all new handsets emit a beep whenever a picture is taken.
Sound recording and video recording is often also possible. Most people do not walk around with a video camera, but do carry a phone. The arrival of video camera phones is transforming the availability of video to consumers, and helps fuel citizen journalism.
See also
Mobile game
Ringtone
Smartphone
Mobile phone form factor
Wallpaper
Mobile phone
Mobile phones | Mobile phone feature | [
"Technology"
] | 5,023 | [
"Software features"
] |
2,155,752 | https://en.wikipedia.org/wiki/Citizen%20science | Citizen science (similar to community science, crowd science, crowd-sourced science, civic science, participatory monitoring, or volunteer monitoring) is research conducted with participation from the general public, or amateur/nonprofessional researchers or participants for science, social science and many other disciplines. There are variations in the exact definition of citizen science, with different individuals and organizations having their own specific interpretations of what citizen science encompasses. Citizen science is used in a wide range of areas of study including ecology, biology and conservation, health and medical research, astronomy, media and communications and information science.
There are different applications and functions of citizen science in research projects. Citizen science can be used as a methodology where public volunteers help in collecting and classifying data, improving the scientific community's capacity. Citizen science can also involve more direct involvement from the public, with communities initiating projects researching environmental and health hazards in their own communities. Participation in citizen science projects also educates the public about the scientific process and increases awareness of different topics. Some schools have students participate in citizen science projects for this purpose as part of their teaching curricula.
Background
The first use of the term "citizen science" can be found in a January 1989 issue of MIT Technology Review, which featured three community-based labs studying environmental issues. In the 21st century, the number of citizen science projects, publications, and funding opportunities has increased. Citizen science has been used more over time, a trend helped by technological advancements. Digital citizen science platforms, such as Zooniverse, store large amounts of data for many projects and are a place where volunteers can learn how to contribute to projects. For some projects, participants are instructed to collect and enter data, such as what species they observed, into large digital global databases. For other projects, participants help classify data on digital platforms. Citizen science data is also being used to develop machine learning algorithms. An example is using volunteer-classified images to train machine learning algorithms to identify species. While global participation and global databases are found on online platforms, not all locations always have the same amount of data from contributors. Concerns over potential data quality issues, such as measurement errors and biases, in citizen science projects are recognized in the scientific community and there are statistical solutions and best practices available which can help.
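The preceding paragraph mentions volunteer-classified images being used to train machine learning algorithms to identify species. As a toy illustration of that pipeline (assuming the scikit-learn library; the feature names, values, and labels are all invented), the sketch below fits a classifier to volunteer-labelled feature vectors:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vectors extracted from wildlife-camera images
# (e.g. body size, stripe contrast), with species labels supplied by
# volunteer classifiers rather than professional scientists.
features = [[1.2, 0.90], [1.1, 0.80], [3.5, 0.10], [3.3, 0.20], [1.3, 0.85]]
volunteer_labels = ["zebra", "zebra", "elephant", "elephant", "zebra"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(features, volunteer_labels)

# The trained model can then identify species in new, unlabelled images.
print(model.predict([[1.25, 0.88], [3.4, 0.15]]))  # ['zebra' 'elephant']
```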
Definition
The term "citizen science" has multiple origins, as well as differing concepts. "Citizen" is used in the general sense, as meaning in "citizen of the world", or the general public, rather than the legal term citizen of sovereign countries. It was first defined independently in the mid-1990s by Rick Bonney in the United States and Alan Irwin in the United Kingdom. Alan Irwin, a British sociologist, defines citizen science as "developing concepts of scientific citizenship which foregrounds the necessity of opening up science and science policy processes to the public". Irwin sought to reclaim two dimensions of the relationship between citizens and science: 1) that science should be responsive to citizens' concerns and needs; and 2) that citizens themselves could produce reliable scientific knowledge. The American ornithologist Rick Bonney, unaware of Irwin's work, defined citizen science as projects in which nonscientists, such as amateur birdwatchers, voluntarily contributed scientific data. This describes a more limited role for citizens in scientific research than Irwin's conception of the term.
The terms citizen science and citizen scientists entered the Oxford English Dictionary (OED) in June 2014. "Citizen science" is defined as "scientific work undertaken by members of the general public, often in collaboration with or under the direction of professional scientists and scientific institutions". "Citizen scientist" is defined as: (a) "a scientist whose work is characterized by a sense of responsibility to serve the best interests of the wider community (now rare)"; or (b) "a member of the general public who engages in scientific work, often in collaboration with or under the direction of professional scientists and scientific institutions; an amateur scientist". The first use of the term "citizen scientist" can be found in the magazine New Scientist in an article about ufology from October 1979.
Muki Haklay cites, from a policy report for the Wilson Center entitled "Citizen Science and Policy: A European Perspective", an alternate first use of the term "citizen science" by R. Kerson in the magazine MIT Technology Review from January 1989. Quoting from the Wilson Center report: "The new form of engagement in science received the name 'citizen science'. The first recorded example of the use of the term is from 1989, describing how 225 volunteers across the US collected rain samples to assist the Audubon Society in an acid-rain awareness raising campaign."
A Green Paper on Citizen Science was published in 2013 by the European Commission's Digital Science Unit and Socientize.eu, which included a definition for citizen science, referring to "the general public engagement in scientific research activities when citizens actively contribute to science either with their intellectual effort or surrounding knowledge or with their tools and resources. Participants provide experimental data and facilities for researchers, raise new questions and co-create a new scientific culture."
Citizen science may be performed by individuals, teams, or networks of volunteers. Citizen scientists often partner with professional scientists to achieve common goals. Large volunteer networks often allow scientists to accomplish tasks that would be too expensive or time-consuming to accomplish through other means.
Many citizen-science projects serve education and outreach goals. These projects may be designed for a formal classroom environment or an informal education environment such as museums.
Citizen science has evolved over the past four decades. Recent projects place more emphasis on scientifically sound practices and measurable goals for public education. Modern citizen science differs from its historical forms primarily in the access for, and subsequent scale of, public participation; technology is credited as one of the main drivers of the recent explosion of citizen science activity.
In March 2015, the Office of Science and Technology Policy published a factsheet entitled "Empowering Students and Others through Citizen Science and Crowdsourcing". Quoting: "Citizen science and crowdsourcing projects are powerful tools for providing students with skills needed to excel in science, technology, engineering, and math (STEM). Volunteers in citizen science, for example, gain hands-on experience doing real science, and in many cases take that learning outside of the traditional classroom setting". The National Academies of Science cites SciStarter as a platform offering access to more than 2,700 citizen science projects and events, as well as helping interested parties access tools that facilitate project participation.
In May 2016, a new open-access journal was started by the Citizen Science Association along with Ubiquity Press called Citizen Science: Theory and Practice (CS:T&P). Quoting from the editorial article titled "The Theory and Practice of Citizen Science: Launching a New Journal", "CS:T&P provides the space to enhance the quality and impact of citizen science efforts by deeply exploring the citizen science concept in all its forms and across disciplines. By examining, critiquing, and sharing findings across a variety of citizen science endeavors, we can dig into the underpinnings and assumptions of citizen science and critically analyze its practice and outcomes."
In February 2020, Timber Press, an imprint of Workman Publishing Company, published The Field Guide to Citizen Science as a practical guide for anyone interested in getting started with citizen science.
Alternative definitions
Other definitions for citizen science have also been proposed. For example, Bruce Lewenstein of Cornell University's Communication and S&TS departments describes three possible definitions:
The participation of nonscientists in the process of gathering data according to specific scientific protocols and in the process of using and interpreting that data.
The engagement of nonscientists in true decision-making about policy issues that have technical or scientific components.
The engagement of research scientists in the democratic and policy process.
Scientists and scholars who have used other definitions include Frank N. von Hippel, Stephen Schneider, Neal Lane and Jon Beckwith. Other alternative terminologies proposed are "civic science" and "civic scientist".
Further, Muki Haklay offers an overview of the typologies of the level of citizen participation in citizen science, which range from "crowdsourcing" (level 1), where the citizen acts as a sensor, to "distributed intelligence" (level 2), where the citizen acts as a basic interpreter, to "participatory science", where citizens contribute to problem definition and data collection (level 3), to "extreme citizen science", which involves collaboration between the citizen and scientists in problem definition, collection and data analysis.
A 2014 Mashable article defines a citizen scientist as: "Anybody who voluntarily contributes his or her time and resources toward scientific research in partnership with professional scientists."
In 2016, the Australian Citizen Science Association released their definition, which states "Citizen science involves public participation and collaboration in scientific research with the aim to increase scientific knowledge."
In 2020, a group of birders in the Pacific Northwest of North America, eBird Northwest, has sought to rename "citizen science" to the use of "community science", "largely to avoid using the word 'citizen' when we want to be inclusive and welcoming to any birder or person who wants to learn more about bird watching, regardless of their citizen status."
Related fields
In the smart-city era, citizen science relies on various web-based tools, such as WebGIS, and becomes "cyber citizen science". Some projects, such as SETI@home, use the Internet to take advantage of distributed computing. These projects are generally passive: computation tasks are performed by volunteers' computers and require little involvement beyond initial setup. There is disagreement as to whether such projects should be classified as citizen science.
The astrophysicist and Galaxy Zoo co-founder Kevin Schawinski stated: "We prefer to call this [Galaxy Zoo] citizen science because it's a better description of what you're doing; you're a regular citizen but you're doing science. Crowd sourcing sounds a bit like, well, you're just a member of the crowd and you're not; you're our collaborator. You're pro-actively involved in the process of science by participating."
Compared to SETI@home, "Galaxy Zoo volunteers do real work. They're not just passively running something on their computer and hoping that they'll be the first person to find aliens. They have a stake in science that comes out of it, which means that they are now interested in what we do with it, and what we find."
Citizen policy may be another result of citizen science initiatives. Bethany Brookshire (pen name SciCurious) writes: "If citizens are going to live with the benefits or potential consequences of science (as the vast majority of them will), it's incredibly important to make sure that they are not only well informed about changes and advances in science and technology, but that they also ... are able to ... influence the science policy decisions that could impact their lives." In "The Rightful Place of Science: Citizen Science", editors Darlene Cavalier and Eric Kennedy highlight emerging connections between citizen science, civic science, and participatory technology assessment.
Benefits and limitations
The general public's involvement in scientific projects has become a means of encouraging curiosity and greater understanding of science while providing an unprecedented engagement between professional scientists and the general public. In a research report published by the U.S. National Park Service in 2008, Brett Amy Thelen and Rachel K. Thiet mention the following concerns, previously reported in the literature, about the validity of volunteer-generated data:
Some projects may not be suitable for volunteers, for instance, when they use complex research methods or require a great deal of (often repetitive) work.
If volunteers lack proper training in research and monitoring protocols, the data they collect might introduce bias into the dataset.
The question of data accuracy, in particular, remains open. John Losey, who created the Lost Ladybug citizen science project, has argued that the cost-effectiveness of citizen science data can outweigh data quality issues, if properly managed.
In December 2016, authors M. Kosmala, A. Wiggins, A. Swanson and B. Simmons published a study in the journal Frontiers in Ecology and the Environment called "Assessing Data Quality in Citizen Science". The abstract describes how ecological and environmental citizen science projects have enormous potential to advance science. Citizen science projects can influence policy and guide resource management by producing datasets that are otherwise not feasible to generate. The section "In a Nutshell" (p. 3) states four condensed conclusions.
The authors conclude that, as citizen science continues to grow and mature, they expect a growing awareness of data quality to become a key metric of project success. They also conclude that citizen science will emerge as a general tool helping "to collect otherwise unobtainable high-quality data in support of policy and resource management, conservation monitoring, and basic science."
A study of Canadian lepidoptera datasets published in 2018 compared the use of a professionally curated dataset of butterfly specimen records with four years of data from a citizen science program, eButterfly. The eButterfly dataset was used as it was determined to be of high quality because of the expert vetting process used on site, and there already existed a dataset covering the same geographic area consisting of specimen data, much of it institutional. The authors note that, in this case, citizen science data provides both novel and complementary information to the specimen data. Five new species were reported from the citizen science data, and geographic distribution information was improved for over 80% of species in the combined dataset when citizen science data was included.
Several recent studies have begun to explore the accuracy of citizen science projects and how to predict accuracy from variables such as practitioners' expertise. One example is a 2021 study by Edgar Santos-Fernandez and Kerrie Mengersen, published by the British Ecological Society, which used a case study built with the R and Stan statistical software to rate the accuracy of species identifications performed by citizen scientists in Serengeti National Park, Tanzania. This provided insight into possible problems with such processes, including "discriminatory power and guessing behaviour". The researchers determined that rating the citizen scientists themselves by skill level and expertise might make the studies they contribute to easier to analyze.
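A minimal sketch of a simpler consensus approach than the Bayesian model the study fitted in Stan: compare each volunteer's answers against the majority vote to estimate a per-volunteer agreement rating. All volunteer names and labels below are hypothetical.

```python
from collections import Counter

# Hypothetical species labels from three volunteers for five photos.
answers = {
    "vol_1": ["zebra", "gazelle", "zebra", "lion", "gazelle"],
    "vol_2": ["zebra", "gazelle", "warthog", "lion", "gazelle"],
    "vol_3": ["zebra", "wildebeest", "zebra", "lion", "zebra"],
}

# Consensus label per photo: the most common answer across volunteers.
n_photos = len(next(iter(answers.values())))
consensus = [
    Counter(v[i] for v in answers.values()).most_common(1)[0][0]
    for i in range(n_photos)
]

# Rate each volunteer by agreement with the consensus.
for vol, labels in answers.items():
    accuracy = sum(a == c for a, c in zip(labels, consensus)) / n_photos
    print(f"{vol}: {accuracy:.0%} agreement with consensus")
```

Here vol_1 agrees with the consensus on all five photos, vol_2 on four, and vol_3 on three, so their answers on future photos might be weighted accordingly.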
Studies that are simple in execution are where citizen science excels, particularly in the field of conservation biology and ecology. For example, in 2019, Sumner et al. compared the data of vespid wasp distributions collected by citizen scientists with the 4-decade, long-term dataset established by the BWARS. They set up the Big Wasp Survey from 26 August to 10 September 2017, inviting citizen scientists to trap wasps and send them for identification by experts where data was recorded. The results of this study showed that the campaign garnered over 2,000 citizen scientists participating in data collection, identifying over 6,600 wasps. This experiment provides strong evidence that citizen science can generate potentially high-quality data comparable to that of expert data collection, within a shorter time frame. Although the experiment was to originally test the strength of citizen science, the team also learned more about Vespidae biology and species distribution in the United Kingdom. With this study, the simple procedure enabled citizen science to be executed in a successful manner. A study by J. Cohn describes that volunteers can be trained to use equipment and process data, especially considering that a large proportion of citizen scientists are individuals who are already well-versed in the field of science.
The demographics of participants in citizen science projects are overwhelmingly White adults of above-average income with a university degree. Other groups of volunteers include conservationists, outdoor enthusiasts, and amateur scientists. As such, citizen scientists are generally individuals with a pre-existing understanding of the scientific method and of how to conduct sound scientific analysis.
Ethics
Various studies have been published that explore the ethics of citizen science, including issues such as intellectual property and project design. The Citizen Science Association (CSA), based at the Cornell Lab of Ornithology, and the European Citizen Science Association (ECSA), based at the Museum für Naturkunde in Berlin, have working groups on ethics and principles.
In September 2015, ECSA published its Ten Principles of Citizen Science, which have been developed by the "Sharing best practice and building capacity" working group of ECSA, led by the Natural History Museum, London with input from many members of the association.
The medical ethics of internet crowdsourcing has been questioned by Graber & Graber in the Journal of Medical Ethics. In particular, they analyse the effect of games and the crowdsourcing project Foldit. They conclude: "games can have possible adverse effects, and that they manipulate the user into participation".
In March 2019, the online journal Citizen Science: Theory and Practice launched a collection of articles on the theme of Ethical Issues in Citizen Science. The articles are introduced with (quoting): "Citizen science can challenge existing ethical norms because it falls outside of customary methods of ensuring that research is conducted ethically. What ethical issues arise when engaging the public in research? How have these issues been addressed, and how should they be addressed in the future?"
In June 2019, East Asian Science, Technology and Society: An International Journal (EASTS) published an issue titled "Citizen Science: Practices and Problems" which contains 15 articles/studies on citizen science, including many relevant subjects of which ethics is one. Quoting from the introduction "Citizen, Science, and Citizen Science": "The term citizen science has become very popular among scholars as well as the general public, and, given its growing presence in East Asia, it is perhaps not a moment too soon to have a special issue of EASTS on the topic."
The use of citizen science volunteers as de facto unpaid laborers by some commercial ventures has been criticized as exploitative.
Ethics in citizen science in the health and welfare field has been discussed in terms of protection versus participation. Public involvement researcher Kristin Liabo writes that health researchers might, in light of their ethics training, be inclined to exclude vulnerable individuals from participation in order to protect them from harm. However, she argues that these groups are already likely to be excluded from participation in other arenas, and that participation can be empowering and an opportunity to gain life skills that these individuals need. Whether or not to become involved should be a decision made together with these individuals, not by the researcher alone.
Economic worth
In the research paper "Can citizen science enhance public understanding of science?" by Bonney et al. (2016), statistics analysing the economic worth of citizen science are drawn from two papers: (i) Sauermann and Franzoni (2015) and (ii) Theobald et al. (2015). In "Crowd science user contribution patterns and their implications", Sauermann and Franzoni (2015) use seven projects from the Zooniverse web portal to estimate the monetary value of the citizen science that had taken place: Solar Stormwatch, Galaxy Zoo Supernovae, Galaxy Zoo Hubble, Moon Zoo, Old Weather, The Milky Way Project and Planet Hunters. Using data from 180 days in 2010, they find that a total of 100,386 users participated, contributing 129,540 hours of unpaid work. Valued at $12 an hour (an undergraduate research assistant's basic wage), the total contribution amounts to $1,554,474, an average of $222,068 per project; the range over the seven projects was from $22,717 to $654,130.
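The arithmetic behind these headline figures is simple to reproduce; a minimal sketch using the numbers quoted above:

```python
# Figures quoted above from Sauermann and Franzoni (2015). Note that the
# rounded inputs give $1,554,480; the paper's $1,554,474 was presumably
# computed from unrounded hours.
hours_contributed = 129_540
hourly_rate_usd = 12      # an undergraduate research assistant's basic wage
n_projects = 7

total_value = hours_contributed * hourly_rate_usd
print(f"Total in-kind value: ${total_value:,}")                  # $1,554,480
print(f"Average per project: ${total_value / n_projects:,.0f}")  # ~$222,069
```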
In "Global change and local solutions: Tapping the unrealized potential of citizen science for biodiversity research" by Theobald et al. 2015, the authors surveyed 388 unique biodiversity-based projects. Quoting: "We estimate that between 1.36 million and 2.28 million people volunteer annually in the 388 projects we surveyed, though variation is great" and that "the range of in-kind contribution of the volunteerism in our 388 citizen science projects as between $667 million to $2.5 billion annually."
Worldwide participation in citizen science continues to grow. A list of the top five citizen science communities compiled by Marc Kuchner and Kristen Erickson in July 2018 shows a total of 3.75 million participants, although there is likely substantial overlap between the communities.
Relations with education and academia
Studies have been published which examine the place of citizen science within education, and teaching aids can include books and activity or lesson plans. Some examples of studies are:
From the Second International Handbook of Science Education, a chapter entitled: "Citizen Science, Ecojustice, and Science Education: Rethinking an Education from Nowhere", by Mueller and Tippins (2011), acknowledges in the abstract that: "There is an emerging emphasis in science education on engaging youth in citizen science." The authors also ask: "whether citizen science goes further with respect to citizen development." The abstract ends by stating that the "chapter takes account of the ways educators will collaborate with members of the community to effectively guide decisions, which offers promise for sharing a responsibility for democratizing science with others."
From the journal Democracy and Education, an article entitled "Lessons Learned from Citizen Science in the Classroom" by Gray, Nicosia and Jordan (GNJ; 2012) responds to a study by Mueller, Tippins and Bryan (MTB) called "The Future of Citizen Science". GNJ begin by stating in the abstract that "The Future of Citizen Science" "provides an important theoretical perspective about the future of democratized science and K12 education." However, GNJ state that "the authors (MTB) fail to adequately address the existing barriers and constraints to moving community-based science into the classroom." They end the abstract by arguing "that the resource constraints of scientists, teachers, and students likely pose problems to moving true democratized science into the classroom."
In 2014, a study was published called "Citizen Science and Lifelong Learning" by R. Edwards in the journal Studies in the Education of Adults. Edwards begins by writing in the abstract that citizen science projects have expanded over recent years and engaged citizen scientists and professionals in diverse ways. He continues: "Yet there has been little educational exploration of such projects to date." He describes that "there has been limited exploration of the educational backgrounds of adult contributors to citizen science". Edwards explains that citizen science contributors are referred to as volunteers, citizens or as amateurs. He ends the abstract: "The article will explore the nature and significance of these different characterisations and also suggest possibilities for further research."
In the journal Microbiology and Biology Education a study was published by Shah and Martinez (2015) called "Current Approaches in Implementing Citizen Science in the Classroom". They begin by writing in the abstract that citizen science is a partnership between inexperienced amateurs and trained scientists. The authors continue: "With recent studies showing a weakening in scientific competency of American students, incorporating citizen science initiatives in the curriculum provides a means to address deficiencies". They argue that combining traditional and innovative methods can help provide a practical experience of science. The abstract ends: "Citizen science can be used to emphasize the recognition and use of systematic approaches to solve problems affecting the community."
In November 2017, authors Mitchell, Triska and Liberatore published a study in PLOS One titled "Benefits and Challenges of Incorporating Citizen Science into University Education". The authors begin by stating in the abstract that citizen scientists contribute data with the expectation that it will be used. It reports that citizen science has been used for first year university students as a means to experience research. They continue: "Surveys of more than 1500 students showed that their environmental engagement increased significantly after participating in data collection and data analysis." However, only a third of students agreed that data collected by citizen scientists was reliable. A positive outcome of this was that the students were more careful of their own research. The abstract ends: "If true for citizen scientists in general, enabling participants as well as scientists to analyse data could enhance data quality, and so address a key constraint of broad-scale citizen science programs."
Citizen science has also been described as challenging the "traditional hierarchies and structures of knowledge creation".
History
While citizen science developed at the end of the 20th century, characteristics of citizen science are not new. Prior to the 20th century, science was often the pursuit of gentleman scientists, amateur or self-funded researchers such as Sir Isaac Newton, Benjamin Franklin, and Charles Darwin. Women citizen scientists from before the 20th century include Florence Nightingale who "perhaps better embodies the radical spirit of citizen science". Before the professionalization of science by the end of the 19th century, most pursued scientific projects as an activity rather than a profession itself, an example being amateur naturalists in the 18th and 19th centuries.
During the British colonization of North America, American Colonists recorded the weather, offering much of the information now used to estimate climate data and climate change during this time period. These people included John Campanius Holm, who recorded storms in the mid-1600s, as well as George Washington, Thomas Jefferson, and Benjamin Franklin who tracked weather patterns during America's founding. Their work focused on identifying patterns by amassing their data and that of their peers and predecessors, rather than specific professional knowledge in scientific fields. Some consider these individuals to be the first citizen scientists, some consider figures such as Leonardo da Vinci and Charles Darwin to be citizen scientists, while others feel that citizen science is a distinct movement that developed later on, building on the preceding history of science.
By the mid-20th century, however, science was dominated by researchers employed by universities and government research laboratories. By the 1970s, this transformation was being called into question. Philosopher Paul Feyerabend called for a "democratization of science". Biochemist Erwin Chargaff advocated a return to science by nature-loving amateurs in the tradition of Descartes, Newton, Leibniz, Buffon, and Darwin—science dominated by "amateurship instead of money-biased technical bureaucrats".
A study from 2016 indicates that the largest impact of citizen science is in research on biology, conservation and ecology, and is utilized mainly as a methodology of collecting and classifying data.
Amateur astronomy
Astronomy has long been a field where amateurs have contributed, continuing through to the present day.
Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with equipment that they build themselves. Common targets of amateur astronomers include the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae. Observations of comets and stars are also used to measure the local level of artificial skyglow. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events that interest them.
The American Association of Variable Star Observers has gathered data on variable stars for educational and professional analysis since 1911 and promotes participation beyond its membership on its Citizen Sky website.
Project PoSSUM is a relatively new organization, started in March 2012, which trains citizen scientists of many ages to go on polar suborbital missions. On these missions, they study noctilucent clouds with remote sensing, which reveals interesting clues about changes in the upper atmosphere and the ozone due to climate change. This is a form of citizen science which trains younger generations to be ambitious, participating in intriguing astronomy and climate change science projects even without a professional degree.
Butterfly counts
Butterfly counts have a long tradition of involving individuals in the study of butterflies' range and their relative abundance. Two long-running programs are the UK Butterfly Monitoring Scheme (started in 1976) and the North American Butterfly Association's Butterfly Count Program (started in 1975). There are various protocols for monitoring butterflies and different organizations support one or more of transects, counts and/or opportunistic sightings. eButterfly is an example of a program designed to capture any of the three types of counts for observers in North America. Species-specific programs also exist, with monarchs the prominent example. Two examples of this involve the counting of monarch butterflies during the fall migration to overwintering sites in Mexico: (1) Monarch Watch is a continent-wide project, while (2) the Cape May Monarch Monitoring Project is an example of a local project. The Austrian project Viel-Falter investigated if and how trained and supervised pupils are able to systematically collect data about the occurrence of diurnal butterflies, and how this data could contribute to a permanent butterfly monitoring system. Despite substantial identification uncertainties for some species or species groups, the data collected by pupils was successfully used to predict the general habitat quality for butterflies.
Ornithology
Citizen science projects have become increasingly focused on providing benefits to scientific research. The North American Bird Phenology Program (historically called the Bird Migration and Distribution records) may have been the earliest collective effort of citizens collecting ornithological information in the U.S. The program, dating back to 1883, was started by Wells Woodbridge Cooke. Cooke established a network of observers around North America to collect bird migration records. The Audubon Society's Christmas Bird Count, which began in 1900, is another example of a long-standing tradition of citizen science which has persisted to the present day, now containing a collection of six million handwritten migration observer cards that date back to the 19th century. Participants input this data into an online database for analysis. Citizen scientists help gather data that will be analyzed by professional researchers, and can be used to produce bird population and biodiversity indicators.
Raptor migration research relies on the data collected by the hawkwatching community. This mostly volunteer group counts migrating accipiters, buteos, falcons, harriers, kites, eagles, osprey, vultures and other raptors at hawk sites throughout North America during the spring and fall seasons. The daily data is uploaded to hawkcount.org where it can be viewed by professional scientists and the public.
Other programs in North America include Project FeederWatch, which is affiliated with the Cornell Lab of Ornithology.
Such indices can be useful tools to inform management, resource allocation, policy and planning. For example, European breeding bird survey data provide input for the Farmland Bird Index, adopted by the European Union as a structural indicator of sustainable development. This provides a cost-effective alternative to government monitoring.
Similarly, data collected by citizen scientists as part of a BirdLife Australia program has been analysed to produce the first-ever Australian Terrestrial Bird Indices.
In the UK, the Royal Society for the Protection of Birds collaborated with a children’s TV show to create a national birdwatching day in 1979; the campaign has continued for over 40 years and in 2024, over 600,000 people counted almost 10 million birds during the Big Garden Birdwatch weekend.
Most recently, more programs have sprung up worldwide, including NestWatch, a bird species monitoring program which tracks data on reproduction. This can include studies of when and how often nesting occurs, counts of eggs laid and how many hatch successfully, and what proportion of hatchlings survive infancy. The program is easy for the general public to join: using the NestWatch app, available on almost all devices, anyone can observe their local species and record results every three to four days. This forms a continually growing database which researchers can view and use to understand trends within specific bird populations.
Citizen oceanography
The concept of citizen science has been extended to the ocean environment for characterizing ocean dynamics and tracking marine debris. For example, the mobile app Marine Debris Tracker is a joint partnership of the National Oceanic and Atmospheric Administration and the University of Georgia. In long-term sampling efforts, the continuous plankton recorder has been fitted on ships of opportunity since 1931. Plankton collection by sailors and subsequent genetic analysis was pioneered in 2013 by Indigo V Expeditions as a way to better understand marine microbial structure and function.
Coral reefs
Citizen science in coral reef studies developed in the 21st century.
Underwater photography has become more popular since the development of moderately priced digital cameras with waterproof housings in the early 2000s, resulting in millions of pictures posted every year on various websites and social media. This mass of documentation has great scientific potential, as millions of tourists provide far greater coverage than professional scientists, who cannot spend as much time in the field.
As a consequence, several participatory science programs have been developed, supported by geotagging and identification websites such as iNaturalist. The Monitoring through many eyes project collates thousands of underwater images of the Great Barrier Reef and provides an interface for eliciting reef health indicators.
The National Oceanic and Atmospheric Administration (NOAA) also offers opportunities for volunteer participation. By taking measurements in The United States' National Marine Sanctuaries, citizens contribute data to marine biology projects. In 2016, NOAA benefited from 137,000 hours of research.
Protocols also exist for self-organization and self-teaching, aimed at snorkelers interested in biodiversity, so that they can turn their observations into sound scientific data available for research. This approach has been used successfully on Réunion Island, yielding tens of new records and even new species.
Freshwater fish
Aquarium hobbyists and their respective organizations are very passionate about fish conservation and often more knowledgeable about specific fish species and groups than scientific researchers. They have played an important role in the conservation of freshwater fishes by discovering new species, maintaining extensive databases with ecological information on thousands of species (such as for catfish, Mexican freshwater fishes, killifishes, cichlids), and successfully keeping and providing endangered and extinct-in-the-wild species for conservation projects. The CARES (Conservation, Awareness, Recognition, Encouragement, and Support) preservation program is the largest hobbyist organization containing over 30 aquarium societies and international organizations, and encourages serious aquarium hobbyists to devote tank space to the most threatened or extinct-in-the-wild species to ensure their survival for future generations.
Amphibians
Citizen scientists also work to monitor and conserve amphibian populations. One recent project is FrogWatch USA, organized by the Association of Zoos and Aquariums. Participants are invited to educate themselves on their local wetlands and help to save amphibian populations by reporting the data on the calls of local frogs and toads. The project already has over 150,000 observations from more than 5000 contributors. Participants are trained by program coordinators to identify calls and utilize this training to report data they find between February and August of each "monitoring season". Data is used to monitor diversity, invasion, and long-term shifts in population health within these frog and toad communities.
Rocky reefs
Reef Life Survey is a marine life monitoring programme based in Hobart, Tasmania. The project uses recreational divers who have been trained to make fish and invertebrate counts along an approximately 50 m constant-depth transect of tropical and temperate reefs, which may include coral reefs. Reef Life Survey is international in its scope, but the data collectors are predominantly from Australia. The database is available to marine ecology researchers and is used by several marine protected area managements in Australia, New Zealand, American Samoa and the eastern Pacific. Its results have also been included in the Australian Ocean Data Network.
Agriculture
Farmer participation in experiments has a long tradition in agricultural science. There are many opportunities for citizen engagement in different parts of food systems. Citizen science is actively used for crop variety selection for climate adaptation, involving thousands of farmers. Citizen science has also played a role in furthering sustainable agriculture.
Art history
Citizen science has a long tradition in natural science. Today, citizen science projects can also be found in various fields of science like art history. For example, the Zooniverse project AnnoTate is a transcription tool developed to enable volunteers to read and transcribe the personal papers of British-born and émigré artists. The papers are drawn from the Tate Archive. Another example of citizen science in art history is ARTigo. ARTigo collects semantic data on artworks from the footprints left by players of games featuring artwork images. From these footprints, ARTigo automatically builds a semantic search engine for artworks.
Biodiversity
Citizen science has made significant contributions to the analysis of biodiversity across the world. A majority of the data collected has focused primarily on species occurrence, abundance and phenology, with birds being the most popular group observed. There are growing efforts to expand the use of citizen science across other fields. Past data on biodiversity has been limited in quantity, making it difficult to draw meaningful broad conclusions about losses in biodiversity; recruiting citizens already out in the field opens up a tremendous amount of new data. For example, thousands of farmers reporting the changes in biodiversity on their farms over many years has provided a large amount of relevant data concerning the effect of different farming methods on biodiversity. Another example is WomSAT, a citizen science project that collects data on wombat roadkill and sarcoptic mange incidence and distribution, to support conservation efforts for the species.
Citizen science can be used to great effect alongside the usual scientific methods in biodiversity monitoring. The typical active method of species detection can collect data on the broad biodiversity of areas, while citizen science approaches have been shown to be more effective at identifying invasive species. In combination, this provides an effective strategy for monitoring changes in the biodiversity of ecosystems.
Health and welfare
In the research fields of health and welfare, citizen science is often discussed in other terms, such as "public involvement", "user engagement", or "community member involvement". However, the meaning is similar to citizen science, with the exception that citizens are less often involved in collecting data and more often involved in prioritising research ideas and improving methodology, e.g. survey questions. In recent decades, researchers and funders have become more aware of the benefits of involving citizens in research work, but involving citizens in a meaningful way is not yet common practice. There is an ongoing discussion on how to evaluate citizen science in health and welfare research.
One aspect of citizen science in health and welfare that stands out compared with other academic fields is whom to involve. When research concerns human experiences, representation of a group becomes important. While it is commonly acknowledged that the people involved need to have lived experience of the topic concerned, representation is still an issue, and researchers are debating whether it is a useful concept in citizen science.
Modern technology
Newer technologies have increased the options for citizen science. Citizen scientists can build and operate their own instruments to gather data for their own experiments or as part of a larger project. Examples include amateur radio, amateur astronomy, Six Sigma projects, and Maker activities. Scientist Joshua Pearce has advocated the creation of open-source-hardware-based scientific equipment that both citizen scientists and professional scientists can use, and that can be replicated by digital manufacturing techniques such as 3D printing. Multiple studies have shown this approach radically reduces scientific equipment costs. Examples of this approach include water testing, nitrate and other environmental testing, basic biology and optics. Groups such as Public Lab, a community where citizen scientists can learn how to investigate environmental concerns using inexpensive DIY techniques, embody this approach.
Video technology is widely used in scientific research. The Citizen Science Center in the Nature Research Center wing of the North Carolina Museum of Natural Sciences has exhibits on how to get involved in scientific research and become a citizen scientist. For example, visitors can observe birdfeeders at the Prairie Ridge Ecostation satellite facility via live video feed and record which species they see.
Since 2005, the Genographic Project has used the latest genetic technology to expand our knowledge of the human story, and its pioneering use of DNA testing to engage and involve the public in the research effort has helped to create a new breed of "citizen scientist". Geno 2.0 expands the scope for citizen science, harnessing the power of the crowd to discover new details of human population history. This includes supporting, organization and dissemination of personal DNA testing. Like amateur astronomy, citizen scientists encouraged by volunteer organizations like the International Society of Genetic Genealogy have provided valuable information and research to the professional scientific community.
Unmanned aerial vehicles enable further citizen science. One example is the ESA's AstroDrone smartphone app for gathering robotic data with the Parrot AR.Drone.
Citizens in Space (CIS), a project of the United States Rocket Academy, seeks to combine citizen science with citizen space exploration. CIS is training citizen astronauts to fly as payload operators on suborbital reusable spacecraft that are now in development. CIS will also be developing, and encouraging others to develop, citizen-science payloads to fly on suborbital vehicles. CIS has already acquired a contract for 10 flights on the Lynx suborbital vehicle, being developed by XCOR Aerospace, and plans to acquire additional flights on XCOR Lynx and other suborbital vehicles in the future.
CIS believes that "The development of low-cost reusable suborbital spacecraft will be the next great enabler, allowing citizens to participate in space exploration and space science."
The website CitizenScience.gov was started by the U.S. government to "accelerate the use of crowdsourcing and citizen science" in the United States. Following the rapid growth of citizen science projects on the internet, this site is one of the most prominent resource banks for citizen scientists and government supporters alike. It features three sections: a catalog of existing, federally supported citizen science projects; a toolkit to help federal officials develop and maintain their future projects; and several other resources and projects. It was created as the result of a mandate within the Crowdsourcing and Citizen Science Act of 2016 (15 USC 3724).
Internet
The Internet has been a boon to citizen science, particularly through gamification. One of the first Internet-based citizen science experiments was NASA's Clickworkers, which enabled the general public to assist in the classification of images, greatly reducing the time to analyze large data sets. Another was the Citizen Science Toolbox, launched in 2003, of the Australian Coastal Collaborative Research Centre. Mozak is a game in which players create 3D reconstructions from images of actual human and mouse neurons, helping to advance understanding of the brain. One of the largest citizen science games is Eyewire, a brain-mapping puzzle game developed at the Massachusetts Institute of Technology that now has over 200,000 players. Another example is Quantum Moves, a game developed by the Center for Community Driven Research at Aarhus University, which uses online community efforts to solve quantum physics problems. The solutions found by players can then be used in the lab to feed computational algorithms used in building a scalable quantum computer.
More generally, Amazon's Mechanical Turk is frequently used in the creation, collection, and processing of data by paid citizens. There is controversy as to whether the data collected through such services is reliable, since it is subject to participants' desire for compensation. However, Mechanical Turk tends to produce participant pools with more diverse backgrounds quickly, along with data of accuracy comparable to traditional collection methods.
The internet has also enabled citizen scientists to gather data to be analyzed by professional researchers. Citizen science networks are often involved in the observation of cyclic events of nature (phenology), such as effects of global warming on plant and animal life in different geographic areas, and in monitoring programs for natural-resource management. On BugGuide.Net, an online community of naturalists who share observations of arthropods, amateurs and professional researchers contribute to the analysis. By October 2022, BugGuide had amassed 1,886,513 images from 47,732 contributors.
Not counting iNaturalist and eBird, the Zooniverse is home to the internet's largest, most popular and most successful citizen science projects. The Zooniverse and the suite of projects it contains is produced, maintained and developed by the Citizen Science Alliance (CSA). The member institutions of the CSA work with many academic and other partners around the world to produce projects that use the efforts and ability of volunteers to help scientists and researchers deal with the flood of data that confronts them. On 29 June 2015, the Zooniverse released a new software version with a project-building tool allowing any registered user to create a project. Project owners may optionally complete an approval process to have their projects listed on the Zooniverse site and promoted to the Zooniverse community.
The website CosmoQuest has as its goal "To create a community of people bent on together advancing our understanding of the universe; a community of people who are participating in doing science, who can explain why what they do matters, and what questions they are helping to answer."
CrowdCrafting enables its participants to create and run projects where volunteers help with image classification, transcription, geocoding and more. The platform is powered by PyBossa software, a free and open-source framework for crowdsourcing.
Project Soothe is a citizen science research project based at the University of Edinburgh. The aim of this research is to create a bank of soothing images, submitted by members of the public, which can be used to help others through psychotherapy and research in the future. Since 2015, Project Soothe has received over 600 soothing photographs from people in 23 countries. Anyone aged 12 years or over is eligible to participate in this research in two ways: (1) By submitting soothing photos that they have taken with a description of why the images make them feel soothed (2) By rating the photos that have been submitted by people worldwide for their soothability.
The internet has allowed many individuals to share and upload massive amounts of data. Using the internet, citizen observatories have been designed as platforms both to increase citizen participation and to build citizens' knowledge of their surrounding environment by collecting whatever data the programme focuses on. The idea is to make it easier and more exciting for citizens to get and stay involved in local data collection.
The advent of social media has helped provide massive amounts of information from the public for citizen science programs. A case study by Andrea Liberatore, Erin Bowkett, Catriona J. MacLeod, Eric Spurr, and Nancy Longnecker examined the New Zealand Garden Bird Survey as one such project, studying the influence of a Facebook group used to collect data from citizen scientists as the researchers worked on the project over the span of a year. The authors report that this use of social media greatly helped the efficiency of the study and made the atmosphere feel more communal.
Smartphone
The bandwidth and ubiquity afforded by smartphones have vastly expanded the opportunities for citizen science. Examples include iNaturalist, the San Francisco project, the WildLab, Project Noah, and Aurorasaurus. Ubiquitous tools such as Twitter, Facebook, and smartphones have also been useful for citizen scientists, enabling them to discover and propagate a new type of aurora dubbed "STEVE" in 2016.
There are also apps for monitoring birds, marine wildlife and other organisms, and the "Loss of the Night".
"The Crowd and the Cloud" is a four-part series broadcast during April 2017, which examines citizen science. It shows how smartphones, computers and mobile technology enable regular citizens to become part of a 21st-century way of doing science. The programs also demonstrate how citizen scientists help professional scientists to advance knowledge, which helps speed up new discoveries and innovations. The Crowd & The Cloud is based upon work supported by the U.S. National Science Foundation.
Seismology
Since 1975, in order to improve earthquake detection and collect useful information, the European-Mediterranean Seismological Centre has monitored the visits of earthquake eyewitnesses to its website and relied on Facebook and Twitter. More recently, it developed the LastQuake mobile application, which notifies users about earthquakes occurring around the world, alerts people when earthquakes hit near them, and gathers citizen seismologists' testimonies to estimate the felt ground shaking and possible damage.
Hydrology
Citizen science has been used to provide valuable data in hydrology (catchment science), notably flood risk, water quality, and water resource management. A growth in internet use and smartphone ownership has allowed users to collect and share real-time flood-risk information using, for example, social media and web-based forms. Although traditional data collection methods are well-established, citizen science is being used to fill the data gaps on a local level, and is therefore meaningful to individual communities. Data collected from citizen science can also compare well to professionally collected data. It has been demonstrated that citizen science is particularly advantageous during a flash flood because the public are more likely to witness these rarer hydrological events than scientists.
Plastics and pollution
Citizen science includes projects that help monitor plastics and their associated pollution. These include The Ocean Cleanup, #OneLess, The Big Microplastic Survey, EXXpedition and Alliance to End Plastic Waste. Ellipsis seeks to map the distribution of litter using aerial data mapping by unmanned aerial vehicles and machine learning software. A Zooniverse project called The Plastic Tide (now finished) helped train an algorithm used by Ellipsis.
Examples of relevant articles (by date):
Citizen Science Promotes Environmental Engagement: (quote) "Citizen science projects are rapidly gaining popularity among the public, in which volunteers help gather data on species that can be used by scientists in research. And it's not just adults who are involved in these projects – even kids have collected high-quality data in the US."
Tackling Microplastics on Our Own: (quote) "Plastics, ranging from the circles of soda can rings to microbeads the size of pinheads, are starting to replace images of sewage for a leading cause of pollution – especially in the ocean". Further, "With recent backing from the Crowdsourcing and Citizen Science Act, citizen science is increasingly embraced as a tool by US Federal agencies."
Citizen Scientists Are Tracking Plastic Pollution Worldwide: (quote) "Scientists who are monitoring the spread of tiny pieces of plastic throughout the environment are getting help from a small army of citizen volunteers – and they're finding bits of polymer in some of the most remote parts of North America."
Artificial intelligence and citizen scientists: Powering the clean-up of Asia Pacific's beaches: (quote) "The main objective is to support citizen scientists cleaning up New Zealand beaches and get a better understanding of why litter is turning up, so preventive and proactive action can be taken."
Citizen science could help address Canada's plastic pollution problem: (quote) "But citizen engagement and participation in science goes beyond beach cleanups, and can be used as a tool to bridge gaps between communities and scientists. These partnerships between scientists and citizen scientists have produced real world data that have influenced policy changes."
Examples of relevant scientific studies or books include (by date):
Distribution and abundance of small plastic debris on beaches in the SE Pacific (Chile): a study supported by a citizen science project: (quote) "The citizen science project 'National Sampling of Small Plastic Debris' was supported by schoolchildren from all over Chile who documented the distribution and abundance of small plastic debris on Chilean beaches. Thirty-nine schools and nearly 1,000 students from continental Chile and Easter Island participated in the activity."
Incorporating citizen science to study plastics in the environment: (quote) "Taking advantage of public interest in the impact of plastic on the marine environment, successful Citizen Science (CS) programs incorporate members of the public to provide repeated sampling for time series as well as synoptic collections over wide geographic regions."
Marine anthropogenic litter on British beaches: A 10-year nationwide assessment using citizen science data: (quote) "Citizen science projects, whereby members of the public gather information, offer a low-cost method of collecting large volumes of data with considerable temporal and spatial coverage. Furthermore, such projects raise awareness of environmental issues and can lead to positive changes in behaviours and attitudes."
Determining Global Distribution of Microplastics by Combining Citizen Science and In-Depth Case Studies: (quote) "Our first project involves the general public through citizen science. Participants collect sand samples from beaches using a basic protocol, and we subsequently extract and quantify microplastics in a central laboratory using the standard operating procedure."
Risk Perception of Plastic Pollution: Importance of Stakeholder Involvement and Citizen Science: (quote) "The chapter finally discusses how risk perception can be improved by greater stakeholder involvement and utilization of citizen science and thereby improve the foundation for timely and efficient societal measures."
Assessing the citizen science approach as tool to increase awareness on the marine litter problem: (quote) "This paper provides a quantitative assessment of students' attitude and behaviors towards marine litter before and after their participation to SEACleaner, an educational and citizen science project devoted to monitor macro- and micro-litter in an Area belonging to Pelagos Sanctuary."
Spatial trends and drivers of marine debris accumulation on shorelines in South Eleuthera, The Bahamas using citizen science: (quote) "This study measured spatial distribution of marine debris stranded on beaches in South Eleuthera, The Bahamas. Citizen science, fetch modeling, relative exposure index and predictive mapping were used to determine marine debris source and abundance."
Making citizen science count: Best practices and challenges of citizen science projects on plastics in aquatic environments: (quote) "Citizen science is a cost-effective way to gather data over a large geographical range while simultaneously raising public awareness on the problem".
White and wonderful? Microplastics prevail in snow from the Alps to the Arctic: (quote) "In March 2018, five samples were taken at different locations on Svalbard (Fig. 1A and Table 1) by citizen scientists embarking on a land expedition by ski-doo (Aemalire project). The citizens were instructed on contamination prevention and equipped with protocol forms, prerinsed 2-liter stainless steel containers (Ecotanca), a porcelain mug, a steel spoon, and a soup ladle for sampling."
Citizen sensing
Citizen sensing can be a form of citizen science: (quote) "The work of citizen sensing, as a form of citizen science, then further transforms Stengers's notion of the work of science by moving the experimental facts and collectives where scientific work is undertaken out of the laboratory of experts and into the world of citizens." Similar sensing activities include Crowdsensing and participatory monitoring. While the idea of using mobile technology to aid this sensing is not new, creating devices and systems that can be used to aid regulation has not been straightforward. Some examples of projects that include citizen sensing are:
Citizen Sense (2013–2018): (quote) "Practices of monitoring and sensing environments have migrated to everyday participatory applications, where users of smart phones and networked devices are able to engage with modes of environmental observation and data collection."
Breathe Project: (quote) "We use the best available science and technology to better understand the quality of the air we breathe and provide opportunities for citizens to engage and take action."
The Bristol Approach to Citizen Sensing: (quote) "Citizen Sensing is about empowering people and places to understand and use smart tech and data from sensors to tackle the issues they care about, connect with other people who can help, and take positive, practical action."
Luftdaten.info: (quote) "You and thousands of others around the world install self-built sensors on the outside of their homes. Luftdaten.info generates a continuously updated particulate matter map from the transmitted data."
CitiSense: (quote) "CitiSense aims to co-develop a participatory risk management system (PRMS) with citizens, local authorities and organizations which enables them to contribute to advanced climate services and enhanced urban climate resilience as well as receive recommendations that support their security."
A group of citizen scientists in a community-led project targeting toxic smoke from wood burners in Bristol, has recorded 11 breaches of World Health Organization daily guidelines for ultra-fine particulate pollution over a period of six months.
In a £7M programme funded by water regulator Ofwat, citizen scientists are being trained to test for pollution and over-abstraction in 10 river catchment areas in the UK. Sensors will be used and the information gathered will be available in a central visualisation platform. The project is led by The Rivers Trust and United Utilities and includes volunteers such as anglers testing the rivers they use. The Angling Trust provides the pollution sensors, with Kristian Kent from the Trust saying: "Citizen science is a reality of the world in the future, so they’re not going to be able to just sweep it under the carpet."
River water quality in the U.K. has been tested by a combined total of over 7,000 volunteers in so-called "blitzes" run over two weekends in 2024. The research by the NGO Earthwatch Europe gathered data from 4,000 freshwater sites and used standardised testing equipment provided by the NGO and Imperial College. The second blitz in October 2024 included testing for chemical pollutants, such as antibiotics, agricultural chemicals and pesticides. Results from 4,531 volunteers showed that over 61% of the freshwater sites "were in a poor state because of high levels of the nutrients phosphate and nitrate, the main source of which is sewage effluent and agricultural runoff". The data gathered through robust volunteer testing is analysed and compiled into a report, helping to provide the Environment Agency with information it does not otherwise have.
COVID-19 pandemic
Resources for computer science and scientific crowdsourcing projects concerning COVID-19 can be found on the internet or as apps. Some such projects are listed below:
The distributed computing project Folding@home launched a program in March 2020 to assist researchers around the world who were working on finding a cure and learning more about the coronavirus pandemic. The initial wave of projects was meant to simulate potentially druggable protein targets from SARS-CoV-2 (and also its predecessor and close relation SARS-CoV, about which significantly more data is available). In 2024, the project was extended to look at other health issues, including Alzheimer's and cancer. The project asks volunteers to download the app and donate computing power for simulations.
The distributed computing project Rosetta@home also joined the effort in March 2020. The project uses volunteers' computers to model SARS-CoV-2 virus proteins to discover possible drug targets or create new proteins to neutralize the virus. Researchers revealed that with the help of Rosetta@home, they had been able to "accurately predict the atomic-scale structure of an important coronavirus protein weeks before it could be measured in the lab." In 2022, the project, which runs on the BOINC platform, thanked contributors for donating their computer power and helping work on de novo protein design, including vaccine development.
The OpenPandemics – COVID-19 project is a partnership between Scripps Research and IBM's World Community Grid for a distributed computing project that "will automatically run a simulated experiment in the background [of connected home PCs] which will help predict the effectiveness of a particular chemical compound as a possible treatment for COVID-19". The project asked volunteers to donate unused computing power. In 2024, the project was looking at targeting the DNA polymerase of the cytomegalovirus to identify binders.
The Eterna OpenVaccine project enables video game players to "design an mRNA encoding a potential vaccine against the novel coronavirus." In mid-2021, it was noted that the project had helped create a library of potential vaccine molecules to be tested at Stanford University; SU researchers also noted the importance of volunteers discussing the games and exchanging ideas.
In March 2020, the EU-Citizen.Science project had "a selection of resources related to the current COVID19 pandemic. It contains links to citizen science and crowdsourcing projects".
The COVID-19 Citizen Science project was "a new initiative by University of California, San Francisco physician-scientists" that "will allow anyone in the world age 18 or over to become a citizen scientist advancing understanding of the disease." By 2024, the Eureka platform had over 100,000 participants.
The CoronaReport digital journalism project was "a citizen science project which democratizes the reporting on the Coronavirus, and makes these reports accessible to other citizens." It was developed by the University of Edinburgh and asked people affected by Covid to share the social effects of the pandemic.
The COVID Symptom Tracker was a crowdsourced study of the symptoms of the virus. It was created in the UK by King’s College London and Guy’s and St Thomas’ Hospitals. It had two million downloads by April 2020. Within three months, information from the app had helped identify six variations of Covid. Government funding ended in early 2022, but due to the large number of volunteers, Zoe decided to continue the work to study general health. By February 2023, over 75,000 people had downloaded the renamed Zoe Habit Tracker.
The Covid Near You epidemiology tool "uses crowdsourced data to visualize maps to help citizens and public health agencies identify current and potential hotspots for the recent pandemic coronavirus, COVID-19." The site was launched in Boston in March 2020; at the end of 2020 it was rebranded to Outbreaks Near Me and tracked both Covid and flu.
The We-Care project is a novel initiative by University of California, Davis researchers that uses anonymity and crowdsourced information to alert infected users and slow the spread of COVID-19.
COVID Radar was an app in the Netherlands, active between April 2020 and February 2022, with which users anonymously answered a short daily questionnaire asking about their symptoms, behavior, coronavirus test results, and vaccination status. Symptoms and behavior were visualized on a map and users received feedback on their individual risk and behaviors relative to the national mean. The app had over 250,000 users, who filled out the questionnaire over 8.5 million times. Research from this app continued to be used in 2024.
For coronavirus studies and information that can help enable citizen science, many online resources are available through open access and open science websites, including an intensive care medicine e-book chapter hosted by EMCrit and portals run by the Cambridge University Press, the Europe branch of the Scholarly Publishing and Academic Resources Coalition, The Lancet, John Wiley and Sons, and Springer Nature.
There have been suggestions that the pandemic and subsequent lockdowns boosted the public's awareness of and interest in citizen science, with more people around the world having the motivation and the time to become involved in helping to investigate the illness and potentially move on to other areas of research.
Around the world
The Citizen Science Global Partnership was created in 2022; the partnership brings together networks from Australia, Africa, Asia, Europe, South America and the USA.
Africa
In South Africa (SA), citizen science projects include: the Stream Assessment Scoring System (miniSASS) which "encourages enhanced catchment management for water security in a climate stressed society."
The South African National Biodiversity Institute is partnered with iNaturalist as a platform for biodiversity observations using digital photography and geolocation technology to monitor biodiversity. Such partnerships can reduce duplication of effort, help standardise procedures and make the data more accessible.
Also in SA, "Members of the public, or 'citizen scientists' are helping researchers from the University of Pretoria to identify Phytophthora species present in the fynbos."
In June 2016, citizen science experts from across East Africa gathered in Nairobi, Kenya, for a symposium organised by the Tropical Biology Association (TBA) in partnership with the Centre for Ecology & Hydrology (CEH). The aim was "to harness the growing interest and expertise in East Africa to stimulate new ideas and collaborations in citizen science." Rosie Trevelyan of the TBA said: "We need to enhance our knowledge about the status of Africa's species and the threats facing them. And scientists can't do it all on their own. At the same time, citizen science is an extremely effective way of connecting people more closely to nature and enrolling more people in conservation action".
The website Zooniverse hosts several African citizen science projects, including: Snapshot Serengeti, Wildcam Gorongosa and Jungle Rhythms.
Nigeria has the Ibadan Bird Club, whose aim is to "exchange ideas and share knowledge about birds, and get actively involved in the conservation of birds and biodiversity."
In Namibia, Giraffe Spotter.org is a "project that will provide people with an online citizen science platform for giraffes".
Within the Republic of the Congo, the territories of an indigenous people have been mapped so that "the Mbendjele tribe can protect treasured trees from being cut down by logging companies". An Android open-source app called Sapelli was used by the Mbendjele which helped them map "their tribal lands and highlighted trees that were important to them, usually for medicinal reasons or religious significance. Congolaise Industrielle des Bois then verified the trees that the tribe documented as valuable and removed them from its cutting schedule. The tribe also documented illegal logging and poaching activities."
In West Africa, the eradication of the recent outbreak of Ebola virus disease was partly helped by citizen science. "Communities learnt how to assess the risks posed by the disease independently of prior cultural assumptions, and local empiricism allowed cultural rules to be reviewed, suspended or changed as epidemiological facts emerged." "Citizen science is alive and well in all three Ebola-affected countries. And if only a fraction of the international aid directed at rebuilding health systems were to be redirected towards support for citizen science, that might be a fitting memorial to those who died in the epidemic."
The CitSci Africa Association held its International Conference in February 2024 in Nairobi.
Asia
The Hong Kong Birdwatching Society was established in 1957 and is the only local civil society organisation dedicated to appreciating and conserving Hong Kong birds and their natural environment. Its bird surveys go back to 1958, and it carries out a number of citizen science events such as its yearly sparrow census.
The Bird Count India partnership consists of a large number of organizations and groups involved in birdwatching and bird surveys. They coordinate a number of citizen science projects, such as the Kerala Bird Atlas and the Mysore City Bird Atlas, that map the distribution and abundance of birds across entire Indian states and cities.
RAD@home Collaboratory is an Indian citizen science research programme in astronomy and astrophysics. Launched on 15 April 2013, the programme uses a hybrid model, social media platforms, and in-person training of interested participants. In 2022, the Collaboratory reported the discovery of an active galactic nucleus, a radio galaxy named RAD12, spewing a large unipolar radio bubble onto its merging companion galaxy.
The Taiwan Roadkill Observation Network was founded in 2011 and had more than 16,000 members as of 2019. It is a citizen science project in which roadkill across Taiwan is photographed and sent to the Endemic Species Research Institute for study. Its primary goal has been to set up an eco-friendly path to mitigate roadkill challenges and popularize a national discourse on environmental issues and civil participation in scientific research. Members of the network volunteer to observe animal corpses caused by roadkill or other factors, and can upload pictures and geographic locations of the roadkill to an internet database or send the corpses to the Endemic Species Research Institute as specimens.

Because members come from different areas of the island, the collected data serves as an animal distribution map of Taiwan. Using the geographical data and pictures of corpses collected by members, the community and its sponsor, the Endemic Species Research Institute, can identify hotspots and the reasons for the animals' deaths. In one of the most renowned cases, the community detected rabies thanks to its large collection of data: corpses of Melogale moschata, thought to be carriers of rabies, had accumulated for years, and the government authority, alarmed by this, took action to prevent the spread of rabies in Taiwan. In another case, in 2014, citizen scientists discovered birds that had died from unknown causes near an agricultural area. The Taiwan Roadkill Observation Network cooperated with National Pingtung University of Science and Technology and engaged citizen scientists to collect bird corpses. The volunteers collected 250 bird corpses for laboratory tests, which confirmed that the deaths were attributable to pesticides used on crops. This prompted the Taiwanese government to restrict pesticides, and an amendment to the Bill of Pesticide Management passed its third reading in the Legislative Yuan, establishing a pesticide control system.

These results indicated that the Taiwan Roadkill Observation Network had developed a set of shared working methods and could jointly complete concrete actions. The community has also achieved real changes to road design to avoid roadkill and improved the management of pesticide use and epidemic prevention, among other examples. By mid-2024, volunteers had observed over 293,000 animals. The network, the largest citizen science project in Taiwan, noted that more than half of the roadkill were amphibians (e.g., frogs), while one third were reptiles and birds.
The AirBox Project was launched in Taiwan to create a participatory ecosystem focused on PM2.5 monitoring through AirBox devices. By the end of 2014, the public had begun to pay more attention to PM2.5 levels because the air pollution problem had worsened, especially in central and southern Taiwan. High PM2.5 levels are harmful to health, causing respiratory problems among other effects. These pollution levels aroused public concern and led to an intensive debate about air pollution sources: some experts suggested that air quality was affected by pollutants from mainland China, while some environmentalists believed it was the result of industrialization, for example exhaust fumes from local power plants or factories. No one knew the answer, however, because of insufficient data.

Dr. Ling-Jyh Chen, a researcher at the Institute of Information Science, Academia Sinica, launched the AirBox Project. His original idea was inspired by a popular Taiwanese slogan, "Save Your Environment by Yourself". As an expert in participatory sensing systems, he decided to take this ground-up approach to collecting PM2.5 data, using open data and data analysis to reach a better understanding of the possible air pollution sources. Through this ecosystem, huge amounts of data were collected from AirBox devices and made instantly available online, informing people of PM2.5 levels so that they could take appropriate action, such as wearing a mask or staying at home rather than going out into the polluted environment. The data can also be analyzed to understand the possible sources of pollution and to provide recommendations for improving the situation.

The project proceeded in three main steps. i) Developing the AirBox device: building a device that could correctly collect PM2.5 data was time-consuming, and it took more than three years to develop an AirBox that is easy to use yet has both high accuracy and low cost. ii) Installing AirBoxes widely: in the beginning, very few people were willing to install one at home because of concerns about possible harm to their health, power consumption and maintenance, so AirBoxes were installed in only a relatively small area; but with help from Taiwan's LASS (Location Aware Sensing System) community, AirBoxes appeared in all parts of Taiwan, and as of February 2017 more than 1,600 AirBoxes had been installed in more than 27 countries. iii) Open source and data analysis: all measurement results are released and visualized in real time to the public through different media, and the data can be analyzed to trace pollution sources. By December 2019, there were over 4,000 AirBoxes installed across the country.
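The open data and analysis step can be pictured with a small sketch of the kind of processing applied to crowdsourced readings. The numbers below are invented for illustration, and the snippet does not reflect the actual AirBox data format or feed:

from statistics import mean, median

# Hypothetical hourly PM2.5 readings (micrograms per cubic metre) from
# several volunteer-hosted sensors in one district; 55.0 mimics the kind
# of outlier a faulty or badly sited home sensor can produce.
readings = [12.4, 15.1, 13.8, 55.0, 14.2, 13.5]

# The median resists such outliers, which matters when the hardware is
# installed and maintained by volunteers rather than technicians.
print(f"mean PM2.5:   {mean(readings):.1f} ug/m3")
print(f"median PM2.5: {median(readings):.1f} ug/m3")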
Japan has a long history of citizen science involvement, the 1,200-year-old tradition of collecting records on cherry blossom flowering probably being the world's longest-running citizen science project. One of the most influential citizen science projects has also come out of Japan: Safecast. Dedicated to open citizen science for the environment, Safecast was established in the wake of the Fukushima nuclear disaster, and produces open hardware sensors for radiation and air-pollution mapping, presenting this data via a global open data network and maps.
As technology and public interest grew, the CitizenScience.Asia group was set up in 2022; it grew from an initial hackathon in Hong Kong which worked on the 2016 Zika scare. The network is part of Citizen Science Global Partnership.
Europe
The English naturalist Charles Darwin (1809–1882) is widely regarded as one of the earliest citizen science contributors in Europe. A century later, during the 1980s, adolescents in Italy experienced citizen science through work on urban energy usage and air pollution.
In his book "Citizen Science", Alan Irwin considers the role that scientific expertise can play in bringing the public and science together and building a more scientifically active citizenry, empowering individuals to contribute to scientific development. Since then a citizen science green paper was published in 2013, and European Commission policy directives have included citizen science as one of five strategic areas with funding allocated to support initiatives through the 'Science With and For Society (SwafS)', a strand of the Horizon 2020 programme. This includes significant awards such as the EU Citizen Science Project, which is creating a hub for knowledge sharing, coordination, and action. The European Citizen Science Association (ECSA) was set up in 2014 to encourage the growth of citizen science across Europe, to increase public participation in scientific processes, mainly by initiating and supporting citizen science projects as well as conducting research. ECSA has a membership of over 250 individual and organisational members from over 30 countries across the European Union and beyond.
Examples of citizen science organisations and associations based in Europe include the Biosphere Expeditions (Ireland), Bürger schaffen Wissen (Germany), Citizen Science Lab at Leiden University (Netherlands), Ibercivis (See External Links), Österreich forscht (Austria). Other organisations can be found here: EU Citizen Science.
In 2023, the European Union Prize for Citizen Science was established. Bestowed through Ars Electronica, the prize was designed to honor, present and support "outstanding projects whose social and political impact advances the further development of a pluralistic, inclusive and sustainable society in Europe".
Latin America
In 2015, the Asháninka people from Apiwtxa, whose territory crosses the border between Brazil and Peru, began using the Android app Sapelli to monitor their land. The Ashaninka have "faced historical pressures of disease, exploitation and displacement, and today still face the illegal invasion of their lands by loggers and hunters. This monitoring project shows how the Apiwtxa Ashaninka from the Kampa do Rio Amônia Indigenous Territory, Brazil, are beginning to use smartphones and technological tools to monitor these illegal activities more effectively."
In Argentina, two Android smartphone applications are available for citizen science. i) AppEAR was developed at the Institute of Limnology and launched in May 2016 by researcher Joaquín Cochero, who built an "application that appeals to the collaboration of users of mobile devices in collecting data that allow the study of aquatic ecosystems" (translation). Cochero stated: "There is not much citizen science in Argentina, just a few cases, mostly oriented to astronomy. Ours is the first of its kind, and I have volunteers from different parts of the country who are interested in joining together to centralize data. That's great, because these types of things require many people participating actively and voluntarily" (translation). ii) eBird was launched in 2013 and has so far identified 965 species of birds. eBird in Argentina is "developed and managed by the Cornell Lab of Ornithology at Cornell University, one of the most important ornithological institutions in the world, and locally presented recently with the support of the Ministry of Science, Technology and Productive Innovation of the Nation (MINCyT)" (translation).
Projects in Brazil include: i) The platform and mobile app 'Missions' was developed by IBM in its São Paulo research lab with Brazil's Ministry for Environment and Innovation (BMEI). Sergio Borger, an IBM team lead in São Paulo, devised the crowdsourced approach when BMEI approached the company in 2010 looking for a way to create a central repository for rainforest data. Users can upload photos of a plant species and its components, enter its characteristics (such as color and size), compare it against a catalog photo and classify it. The classification results are juried by crowdsourced ratings. ii) Exoss Citizen Science is a member of Astronomers Without Borders and seeks to explore the southern sky for new meteors and radiants. Users can report meteor fireballs by uploading pictures to a webpage or by linking to YouTube. iii) The Information System on Brazilian Biodiversity (SiBBr) was launched in 2014 "aiming to encourage and facilitate the publication, integration, access and use of information about the biodiversity of the country." Its initial goal "was to gather 2.5 million occurrence records of species from biological collections in Brazil and abroad up to the end of 2016. It is now expected that SiBBr will reach nine million records in 2016." Andrea Portela said: "In 2016, we will begin with the citizen science. They are tools that enable anyone, without any technical knowledge, to participate. With this we will achieve greater engagement with society. People will be able to have more interaction with the platform, contribute and comment on what Brazil has." iv) The Brazilian Marine Megafauna Project (Iniciativa Pro Mar) is working with the European CSA towards its main goal, the "sensibilization of society for marine life issues" and concerns about pollution and the over-exploitation of natural resources. Having started as a project monitoring manta rays, it now extends to whale sharks and to educating schools and divers within the Santos area. Its social media activities include live streaming of a citizen science course to help divers identify marine megafauna. v) A smartphone app called Plantix, developed by the Leibniz Centre for Agricultural Landscape Research (ZALF), helps Brazilian farmers discover crop diseases more quickly and fight them more efficiently. Brazil is a very large agricultural exporter, but between 10 and 30% of its crops fail because of disease. "The database currently includes 175 frequently occurring crop diseases and pests as well as 40,000 photos. The identification algorithm of the app improves with every image which records a success rate of over 90 per cent as of approximately 500 photos per crop disease." vi) In an Atlantic Ocean forest region in Brazil, an effort to map the genetic riches of soil is under way. The Drugs From Dirt initiative, based at the Rockefeller University, seeks to turn up bacteria that yield new types of antibiotics, the Brazilian region being particularly rich in potentially useful bacterial genes. Approximately a quarter of the 185 soil samples have been taken by citizen scientists, without whom the project could not run.
In Chile citizen science projects include (some websites in Spanish): i) Testing new cancer therapies with scientists from the Science Foundation for Life. ii) Monitoring the population of the Chilean bumblebee. iii) Monitoring the invasive ladybird Chinita arlequín. iv) Collecting rain water data. v) Monitoring various pollinating fly populations. vi) Providing information and field data on the abundance and distribution of various species of rockfish. vii) Investigating the environmental pollution by plastic litter.
Projects in Colombia include (some websites in Spanish): i) The Communications Project of the Humboldt Institute, along with the Organization for Education and Environmental Protection, initiated projects in the Bogotá wetlands of Cordoba and El Burro, which hold a lot of biodiversity. ii) The Model Forest of Risaralda, Colombia, promotes citizen participation in research related to how the local environment is adapting to climate change; the first meeting took place in the Flora and Fauna Sanctuary Otún Quimbaya. iii) The Citizen Network Environmental Monitoring (CLUSTER), based in the city of Bucaramanga, seeks to engage younger students in data science; they are trained in building weather stations with open repositories based on free software and open hardware. iv) The Symposium on Biodiversity has adapted the citizen science tool iNaturalist for use in Colombia. v) The Sinchi Amazonic Institute of Scientific Research seeks to encourage the development and diffusion of knowledge, values and technologies on the management of natural resources for ethnic groups in the Amazon; this research should further the use of participatory action research schemes and promote community participation.
Since 2010, the Pacific Biodiversity Institute (PBI) has sought "volunteers to help identify, describe and protect wildland complexes and roadless areas in South America". The PBI is "engaged in an ambitious project with our Latin American conservation partners to map all the wildlands in South America, to evaluate their contribution to global biodiversity and to share and disseminate this information."
In Mexico, a citizen science project has monitored rainfall data that is linked to a hydrologic payment for ecosystem services project.
Conferences
The first Conference on Public Participation in Scientific Research was held in Portland, Oregon, in August 2012. Citizen science is now often a theme at large conferences, such as the annual meeting of the American Geophysical Union.
In 2010, 2012 and 2014 there were three Citizen Cyberscience summits, organised by the Citizen Cyberscience Centre in Geneva and University College London. The 2014 summit was hosted in London and attracted over 300 participants.
In November 2015, the ETH Zürich and University of Zürich hosted an international meeting on the "Challenges and Opportunities in Citizen Science".
The first citizen science conference hosted by the Citizen Science Association was in San Jose, California, in February 2015 in partnership with the AAAS conference. The Citizen Science Association conference, CitSci 2017, was held in Saint Paul, Minnesota, United States, between 17 and 20 May 2017. The conference had more than 600 attendees. The next CitSci was in March 2019 in Raleigh, North Carolina.
The platform "Österreich forscht" hosts the annual Austrian citizen science conference since 2015.
In popular culture
Barbara Kingsolver’s 2012 novel Flight Behaviour looks at the effects of citizen science on a housewife in Appalachia, when her interest in butterflies brings her into contact with scientists and academics.
See also
List of citizen science projects
List of crowdsourcing projects
List of volunteer computing projects
(produced by artists that are non-institutionalized, in the same way as citizen scientists)
References
Further reading
"The Mozzie Monitors program marks the first time formal mosquito trapping has been combined with citizen science." (Australian project)
Dick Kasperowski (interviewed by Ulrich Herb): Citizen Science as democratization of science? In: telepolis, 2016, 27 August
Ridley, Matt. (8 February 2012) "Following the Crowd to Citizen Science". The Wall Street Journal
Young, Jeffrey R. (28 May 2010). "Crowd Science Reaches New Heights", The Chronicle of Higher Education
External links
"Controversy over the term 'citizen science'". CBC News. 13 August 2021. Retrieved 15 April 2023.
Crowdsourcing
Human-based computation
Open science | Citizen science | ["Technology"] | 17,249 | ["Information systems", "Human-based computation"] |
2,155,853 | https://en.wikipedia.org/wiki/Zeta%20Geminorum | Zeta Geminorum (ζ Geminorum, abbreviated Zeta Gem, ζ Gem) is a bright star with cluster components, distant optical components and a likely spectroscopic partner in the zodiac constellation of Gemini, lying in the south of the constellation on the left 'leg' of the twin Pollux. It is a classical Cepheid variable star, of which over 800 have been found in our galaxy. Its regular pulsation, its luminosity (shown in this class to correspond to the pulsation period), and its relative proximity make the star a useful calibrator in computing the cosmic distance ladder. Based on parallax measurements, it is approximately 1,200 light-years from the Sun.
Zeta Geminorum is the primary or 'A' component of a multiple star system designated WDS J07041+2034. It bears the traditional name Mekbuda.
Nomenclature
ζ Geminorum (Latinised to Zeta Geminorum) is the star's Bayer designation. WDS J07041+2034 A is its designation in the Washington Double Star Catalog. The designations of the two components as WDS J07041+2034 Aa and Ab derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Zeta Geminorum bore the traditional name Mekbuda, from an Arabic phrase meaning "the lion's folded paw" (Zeta and Epsilon Geminorum (Mebsuta) were the paws of a lion). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Mekbuda for the component WDS J07041+2034 Aa on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese, 井宿, meaning Well (asterism), consists of eight stars in Gemini: Zeta, Mu, Gamma, Nu, Xi, Epsilon, 36 and Lambda. Zeta itself is 井宿七, the Seventh Star of Well.
Observation history
In 1844, German astronomer Julius Schmidt discovered that Zeta Geminorum varies in brightness with a period of about 10 days, although it had been suspected of variability as early as 1790. It was recognised as being related to the Cepheid class of variable stars, although it was often treated as the prototype of its own class, the Geminids, because of its symmetrical light curve.
In 1899, American astronomer W. W. Campbell announced that the star has a variable radial velocity. (This variation was independently discovered by Russian astronomer Aristarkh Belopolsky, published in 1901.) Based on his observations, Campbell later published orbital elements for the binary. However, he found that the curve departed from a Keplerian orbit, and he even suggested that it was a triple star system to explain the irregularities. The periodic variation in radial velocity of the Cepheid variables was later explained as being due to pulsations in the atmosphere of the star.
The periodicity of the star is itself variable, a trend first noted by German astronomer Paul Guthnick in 1920, who suspected that the period change was the result of an orbiting companion. In 1930, Danish astronomer Axel Nielsen suggested that the change was instead the result in a steady decrease of about 3.6 seconds per year in the period.
Companions
Zeta Geminorum has three visible companions known since the 19th century and listed in the Washington Double Star Catalog as B, C, and D. More recently, a possible spectroscopic companion has been listed, further faint stars close by have been catalogued, and a diffuse cluster has been identified as including Zeta Geminorum.
The brightest nearby star, WDS J07041+2034 C, is the magnitude 7.6 HD 268518, 91.9" away when discovered in 1779 and 101.3" distant in 2008. It is a foreground object, only a tenth the distance of Zeta Geminorum and a high proper motion star moving rapidly compared to the more distant stars. It is a G1 main sequence star very similar to the sun.
The closest visible companion is WDS J07041+2034 D, a 12th magnitude star measured to be 67.8" away in 2008. It was 80" distant when first measured in 1905. It appears on the sky between Zeta Geminorum and component C, but is a more distant object than either.
WDS J07041+2034 B is an 11th magnitude star, 76.0" distant in 1831 and 87.4" in 2008. It is itself a spectroscopic binary, although little is known about the two components. The combined spectrum is of an F4 main sequence star. It is thought to be physically associated with the supergiant primary and a member of a loose cluster of stars around Zeta Geminorum.
A combination of photometry, spectroscopy, and astrometry has identified 26 stars approximately 355 parsecs away, which are likely to be members of the birth cluster of Zeta Geminorum. The brightest are late B and early A giant stars such as the 7th magnitude stars HD 49381 and HD 50634, while the faintest detected cluster members are 12th magnitude class F main sequence stars including WDS J07041+2034 B.
Properties
Zeta Geminorum has been reported to be a spectroscopic binary on the basis of lunar occultation observations, but this has not been confirmed by other methods.
Zeta Geminorum's primary (WDS J07041+2034 Aa) is a Classical Cepheid variable that undergoes regular, periodic variation in brightness because of radial pulsations. In the V band, the apparent magnitude varies between a high of 3.68 and a low of 4.16 (with a mean of 3.93) over a period of 10.148 days. This period of variation is decreasing at the rate of 3.1 seconds per year, or 0.085 seconds per cycle. The spectral classification varies between F7Ib and G3Ib over the course of a pulsation cycle. Likewise the effective temperature of the outer envelope varies between 5,780 K and 5,260 K, while the radius varies from 61 to 69 times the Sun's radius. On average, it is radiating about 2,900 times the luminosity of the Sun.
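As a consistency check, the quoted radius, temperature and luminosity can be compared through the Stefan–Boltzmann relation; the rounded mid-cycle values below are read off the figures above, so this is only an order-of-magnitude check:
\[
\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4} \approx 65^{2}\times\left(\frac{5500}{5772}\right)^{4} \approx 3.5\times 10^{3},
\]
the same order as the quoted mean of about 2,900 solar luminosities (the exact value depends on the pulsation phase at which radius and temperature are combined).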
Membership of a cluster provides independent validation of distances determined using recent Hubble Space Telescope and Hipparcos parallaxes, and strongly constrains the star's distance. Zeta Geminorum is thus an important calibrator for the Cepheid period-luminosity relation used for establishing the cosmic distance ladder. The Gaia Data Release 2 parallax suggests a distance towards the top end of the range allowed by these measurements, with a comparable margin of error.
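The way such a calibration works can be illustrated with a rough worked example. The Leavitt-law coefficients below are approximate published values for Galactic Cepheids, used here only for illustration, and interstellar extinction is ignored:
\[
M_V \approx -2.43\,\bigl(\log_{10} P_{\mathrm{days}} - 1\bigr) - 4.05 .
\]
For $P \approx 10.15$ days this gives $M_V \approx -4.1$; combined with the mean apparent magnitude $m_V \approx 3.93$, the distance modulus $m_V - M_V \approx 8.0$ implies
\[
d = 10^{(m_V - M_V + 5)/5}\;\mathrm{pc} \approx 4.0 \times 10^{2}\;\mathrm{pc},
\]
in reasonable agreement with the cluster-based distance of roughly 355 parsecs (correcting for extinction would bring the estimate down slightly).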
References
External links
Zeta Geminorum (Variable Star of the Month)
Geminorum, Zeta
Gemini (constellation)
Classical Cepheid variables
F-type supergiants
Mekbuda
2650
Geminorum, 43
052973
052973
Durchmusterung objects | Zeta Geminorum | ["Astronomy"] | 1,497 | ["Gemini (constellation)", "Constellations"] |
2,156,007 | https://en.wikipedia.org/wiki/NetAid | NetAid was an anti-poverty initiative. It started as a joint venture between the United Nations Development Programme and Cisco Systems. It became an independent nonprofit organization in 2001. In 2007, NetAid became a part of Mercy Corps.
Launch concerts
NetAid began with a concert event on 9 October 1999 with simultaneous activities meant to harness the Internet to raise money and awareness for the Jubilee 2000 campaign. Concerts took place at Wembley Stadium in London, Giants Stadium in New Jersey and the Palais des Nations in Geneva. The Wembley show was at capacity; the U.S. show suffered from very poor ticket sales.
Performers at Wembley Stadium included:
Eurythmics, The Corrs, Catatonia, Bush, Bryan Adams, George Michael, David Bowie, Stereophonics and Robbie Williams.
Performers at Giants Stadium included:
Sheryl Crow, Jimmy Page, Busta Rhymes, Counting Crows, Bono, Puff Daddy, The Black Crowes, Wyclef Jean, Jewel, Mary J. Blige, Cheb Mami, Sting, Slash, Lil' Kim, Lil' Cease, and Zucchero.
Performers in Geneva included: Bryan Ferry, Texas, Des'ree and Ladysmith Black Mambazo.
The NetAid website, originally at www.netaid.org, received over 2.4 million hits and raised $830,000 from 80 countries. Cisco sponsored the concerts and the web site. Along with Kofi Annan, Keyur Patel, MD of KPMG Consulting, spearheaded the technology architecture development of the web site, and Anaal Udaybabu (Gigabaud Studios, San Francisco) designed the user experience.
Wyclef Jean released a charity single featuring Bono entitled "New Day" coinciding with NetAid. The song also has an accompanying music video that premiered on MTV's Total Request Live (USA) on September 21, 1999, although the video never charted.
Programs & Administration
Robert Piper of UNDP served as a manager of the NetAid initiative for its launch in 1999.
Following the concerts, NetAid was spun out of Cisco as an independent entity and tried various approaches to raising awareness of extreme poverty and raising money for anti-poverty projects undertaken by other organizations, through a variety of different NetAid campaigns.
In 2000, NetAid launched an online volunteering matching service on its website, in partnership with the United Nations Volunteers programme, then under the direction of Sharon Capeling-Alakija. The volunteering section of the web site, managed by UNV staff, allowed non-governmental organizations (NGOs) and UN-affiliated projects serving the developing world to recruit and involve online volunteers in various projects. UNV took ownership of the online volunteering portion of the service in 2004, moving it to its own URL at onlinevolunteering.org.
In February 2001, Time magazine and NetAid announced a pioneering initiative aimed at collecting donations through Palm VII handheld computers, allowing volunteers to collect credit card data from friends and input the information into the NetAid web site via these newly wireless devices. The experiment "pushes the envelope for Web-based charities, according to analysts, who said the bid to turn handhelds into virtual wallets faces some significant hurdles--for example, guaranteeing the privacy and security of contributors."
In response to criticisms regarding its finances, NetAid published a web page in November 2001 citing its record of donations to anti-poverty initiatives to date, such as granting "$1.4 million to 16 poverty alleviation projects in Kosovo and Africa — well over the $1m that had been raised from the public to that point... the remaining $10.6 million was dedicated to creating an innovative institution that will generate new support for reducing global poverty over the long term. Since January 2000, NetAid has used approximately $2 million to catalyze new support and partnerships for fighting global poverty."
As an incubator for civic technology, NetAid explored the use of videogames for social change, co-founding the Games for Change movement in 2004. NetAid's work with games was initially offline, beginning with the "NetAid World Class" board game, which piloted in California, Massachusetts and New York in 2003. In 2004, NetAid co-produced a game with Cisco Systems called "Peter Packet," which addresses how the Internet can help fight poverty, focusing on issues of basic education, clean drinking water, and HIV-AIDS.
By 2006, NetAid had narrowed its focus to raising awareness among high school students in the USA regarding poverty in developing countries.
The different campaigns of NetAid are chronicled through archived versions of its web site, www.netaid.org, available at Wayback Machine.
MercyCorps
In 2007, NetAid became a part of Mercy Corps.
See also
Live Aid
Live 8
NetDay
Swedish Metal Aid
Virtual volunteering
References
Benefit concerts
Benefit concerts in the United States
History of the Internet
Information and communication technologies for development
International volunteer organizations
Non-profit technology
Poverty-related organizations based in the United States
1999 in music
October 1999 | NetAid | ["Technology"] | 1,031 | ["Information and communications technology", "Information and communication technologies for development", "Information technology", "Non-profit technology"] |
2,156,081 | https://en.wikipedia.org/wiki/One%20Laptop%20per%20Child | One Laptop per Child (OLPC) was a non-profit initiative that operated from 2005 to 2014 with the goal of transforming education for children around the world by creating and distributing educational devices for the developing world, and by creating software and content for those devices.
When the program launched, the typical retail price for a laptop was considerably in excess of $1,000 (US), so achieving this objective required bringing a low-cost machine to production. This became the OLPC XO Laptop, a low-cost and low-power laptop computer designed by Yves Béhar with Continuum, now EPAM Continuum. The project was originally funded by member organizations such as AMD, eBay, Google, Marvell Technology Group, News Corporation, and Nortel. Chi Mei Corporation, Red Hat, and Quanta provided in-kind support. After disappointing sales, the hardware design part of the organization shut down in 2014.
The OLPC project was praised for pioneering low-cost, low-power laptops and inspiring later variants such as Eee PCs and Chromebooks; for assuring consensus at ministerial level in many countries that computer literacy is a mainstream part of education; for creating interfaces that worked without literacy in any language, and particularly without literacy in English.
It was criticized for its US-centric focus ignoring bigger problems, high total costs, low focus on maintainability and training and its limited success. The OLPC project is critically reviewed in a 2019 MIT Press book titled The Charisma Machine: The Life, Death, and Legacy of One Laptop per Child.
OLPC, Inc., a descendant of the original organization, continues to operate, but the design and creation of laptops is no longer part of its mission.
History
The OLPC program has its roots in the pedagogy of Seymour Papert, an approach known as constructionism, which espoused providing computers for children at early ages to enable full digital literacy. Papert, along with Nicholas Negroponte, was at the MIT Media Lab from its inception. Papert compared the old practice of putting computers in a computer lab to books chained to the walls in old libraries. Negroponte likened shared computers to shared pencils. However, this pattern seemed to be inevitable, given the then-high prices of computers (over $1,500 apiece for a typical laptop or small desktop by 2004).
In 2005, Negroponte spoke at the World Economic Forum, in Davos. In this talk he urged industry to solve the problem, to enable a $100 laptop, which would enable constructionist learning, would revolutionize education, and would bring the world's knowledge to all children. He brought a mock-up and was described as prowling the halls and corridors of Davos to whip up support. Despite the reported skepticism of Bill Gates and others, Negroponte left Davos with committed interest from AMD, News Corp, and with strong indications of support from many other firms. From the outset, it was clear that Negroponte thought that the key to reducing the cost of the laptop was to reduce the cost of the display. Thus, when, upon return from Davos, he met Mary Lou Jepsen, the display pioneer who was in early 2005 joining the MIT Media Lab faculty, the discussions turned quickly to display innovation to enable a low-cost laptop. Convinced that the project was now possible, Negroponte led the creation of the first corporation for this: the Hundred Dollar Laptop Corp.
At the 2006 Wikimania, Jimmy Wales announced that the One Laptop Per Child Project would be including Wikipedia as the first element in their content repository. Wales explained, "I think it is in my rational self interest to care about what happens to kids in Africa," elaborating on the point in his fundraising appeal.
At the 2006 World Economic Forum in Davos, Switzerland, the United Nations Development Program (UNDP) announced it would back the laptop. UNDP released a statement saying they would work with OLPC to deliver "technology and resources to targeted schools in the least developed countries".
Starting in 2007, the Association managed development and logistics, and the Foundation managed fundraising such as the Give One Get One campaign ("G1G1").
Intel was a member of the association for a brief period in 2007. Shortly after OLPC's founder, Nicholas Negroponte, accused Intel of trying to destroy the non-profit, Intel joined the board with a mutual non-disparagement agreement between them and OLPC. Intel resigned its membership on January 3, 2008, citing disagreements with requests from Negroponte for Intel to stop dumping their Classmate PCs.
In 2008, Negroponte showed some doubt about the exclusive use of open-source software for the project, and made suggestions supporting a move towards adding Windows XP, which Microsoft was in the process of porting over to the XO hardware. Microsoft's Windows XP, however, was not seen by some as a sustainable operating system. Microsoft announced that they would sell them Windows XP for $3 per XO. It would be offered as an option on XO-1 laptops and possibly be able to dual boot alongside Linux. In response, Walter Bender, who was the former President of Software and Content for the OLPC project, left OLPC and founded Sugar Labs to continue development of the open source Sugar software which had been developed within OLPC. No significant deployments elected to purchase Windows licenses.
Charles Kane became the new President and Chief Operating Officer of the OLPC Association on May 2, 2008. In late 2008, the NYC Department of Education purchased some XO computers for use by New York schoolchildren.
Advertisements for OLPC began streaming on the video streaming website Hulu and others in 2008. One such ad has John Lennon advertising for OLPC, with an unknown voice actor redubbing over Lennon's voice.
In 2008, OLPC lost significant funding. Their annual budget was slashed from $12 million to $5 million which resulted in a restructuring on January 7, 2009. Development of the Sugar operating environment was moved entirely into the community, the Latin America support organization was spun out and staff reductions, including Jim Gettys, affected approximately 50% of the paid employees. The remaining 32 staff members also saw salary reductions. Despite the downsizing, OLPC continued development of the XO-1.5 laptops.
In 2010, OLPC moved its headquarters to Miami. The Miami office oversaw sales and support for the XO-1.5 laptop and its successors, including the XO Laptop version 4.0 and the OLPC Laptop. Funding from Marvell, finalized in May 2010, revitalized the foundation and enabled the 1Q 2012 completion of the ARM-based XO-1.75 laptops and initial prototypes of the XO-3 tablets. OLPC took orders for mass production of the XO 4.0, and shipped over 3 million XO Laptops to children around the world.
Criticism
At the World Summit on the Information Society held by the United Nations in Tunisia from November 16–18, 2005, several African representatives, most notably Marthe Dansokho (a missionary of the United Methodist Church), criticized the motives of the OLPC project and claimed that it addressed misplaced priorities, stating that African women would not have enough time to research new crops to grow. She added that clean water and schools were more important. Mohammed Diop specifically criticized the project as an attempt to exploit the governments of poor nations by making them pay for hundreds of millions of machines and for the further investment needed in internet infrastructure. Others have similarly criticized laptop deployments in very low income countries, regarding them as cost-ineffective compared to far simpler measures such as deworming and other expenditures on basic child health.
Lee Felsenstein, a computer engineer who played a central role in the development of the personal computer, criticized the centralized, top-down design and distribution of the OLPC.
In September 2009, Alanna Shaikh offered a eulogy for the project at UN Dispatch, stating "It's time to call a spade a spade. OLPC was a failure."
Cost
The project originally aimed for a price of 100 US dollars. In May 2006, Negroponte told Red Hat's annual user summit: "It is a floating price. We are a nonprofit organization. We have a target of $100 by 2008, but probably it will be $135, maybe $140." A BBC news article in April 2010 indicated the price still remained above $200.
In April 2011, the price remained above $209. In 2013, more than 10% of the world population lived on less than US$2 per day. This income segment would have to spend more than a quarter of its annual income to purchase a single laptop, while the global average of information and communications technology (ICT) spending is 3% of income. Empirical studies show that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the "magical number" of US$10 per person per month, or US$120 per year.
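As a rough check of that quarter-of-income figure: at US$2 per day, annual income comes to about 2 × 365 = US$730, and a $209 laptop represents 209 / 730 ≈ 29% of it, slightly over a quarter.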
John Wood, founder of Room to Read (a non-profit which builds schools and libraries), emphasizes affordability and scalability over high-tech solutions. While in favor of the One Laptop per Child initiative for providing education to children in the developing world at a cheaper rate, he has pointed out that a $2,000 library can serve 400 children, costing just $5 a child to bring access to a wide range of books in the local languages (such as Khmer or Nepali) and English; also, a $10,000 school can serve 400–500 children ($20–25 a child). According to Wood, these are more appropriate solutions for education in the dense forests of Vietnam or rural Cambodia.
The Scandinavian aid organization FAIR proposed setting up computer labs with recycled second-hand computers as a cheaper initial investment. Negroponte argued against this proposition, citing the high running costs of conventional laptops. Computer Aid International doubted the OLPC sales strategy would succeed, citing the "untested" nature of its technology. CAI refurbishes computers and printers and sells them to developing countries for £42 apiece (compared to £50 apiece for the OLPC laptops).
Teacher training and ongoing support
The OLPC project has been criticized for allegedly adopting a "one-shot" deployment approach with little or no technical support or teacher training, and for neglecting pilot programs and formal assessment of outcomes in favor of quick deployment. Some authors attribute this unconventional approach to the promoters' alleged focus on constructivist education and digital utopianism. Mark Warschauer, a professor at the University of California, Irvine, and Morgan Ames, at the time of writing a PhD candidate at Stanford University, pointed out that the laptop by itself does not completely fill the needs of students in underprivileged countries. The "children's machines", as they have been called, were deployed in several places, for example Uruguay, Peru, and, in the US, Alabama, but after a relatively short time their usage declined considerably, sometimes because of hardware problems or breakage (in some cases affecting 27–59% of machines within the first two years), and sometimes due to a lack of knowledge on the part of users about how to take full advantage of the machine.
However, another factor has since been acknowledged: a lack of a direct relation to the pedagogy needed in the local context to be truly effective. Uruguay reports that only 21.5% of teachers use the laptop in the classroom on a daily basis, and 25% report using it less than once a week. In Alabama, 80.3% of students say they never or seldom use the computer for class work, and in Peru, teachers report that in the first few months 68.9% use the laptop three times per week, but after two months only 40% report such usage. Students of low socio-economic status tend not to be able to use the laptop effectively for educational purposes on their own, but with scaffolding and mentoring from teachers the machine can become more useful. According to former OLPC executive Walter Bender, the approach needs to be more holistic, combining technology with a prolonged community effort, teacher training and local educational efforts and insights.
The organization has been accused of simply giving underprivileged children laptops and "walking away". Some critics claim this "drive-by" implementation model was the official strategy of the project. While the organization has learning teams dedicated to supporting and working with teachers, Negroponte has said in response to this criticism that "You actually can" give children a connected laptop and walk away, noting experiences with self-guided learning.
Other explanations of failure included a high minimum order, low reliability and maintainability, unsuitability to local conditions and culture, and encouragement of children to learn new ways of thinking instead of remaining loyal to old ways.
Technology
The XO, previously known as the "$100 Laptop" or "Children's Machine", is an inexpensive laptop computer designed to be distributed to children in developing countries around the world, to provide them with access to knowledge, and opportunities to "explore, experiment and express themselves" (constructionist learning). The laptop was designed by Yves Béhar with Design Continuum, and manufactured by the Taiwanese computer company Quanta Computer.
The rugged, low-power computers use flash memory instead of a hard drive, run a Fedora-based operating system and use the SugarLabs Sugar user interface. Mobile ad hoc networking based on the 802.11s wireless mesh network protocol allows students to collaborate on activities and to share Internet access from one connection. The wireless networking has much greater range than typical consumer laptops. The XO-1 was designed for lower cost and much longer life than typical laptops.
In 2009, OLPC announced an updated XO (dubbed XO-1.5) to take advantage of the latest component technologies. The XO-1.5 includes a new VIA C7-M processor and a new chipset providing a 3D graphics engine and an HD video decoder. It has 1 GB of RAM and built-in storage of 4 GB, with an option for 8 GB. The XO-1.5 uses the same display, and a wireless network interface with half the power dissipation.
Early prototype versions of the hardware became available in June 2009 and were distributed free of charge for software development and testing through a developer's program.
An XO-1.75 model was developed that used a Marvell ARM processor, targeting a price below $150 and a 2011 release date.
The XO-2's two-sheet design concept was canceled in favor of the one-sheet XO-3.
The XO-3 concept resembled a tablet computer and was planned to have the inner workings of the XO-1.75. The price goal was below $100, with a 2012 target date.
As of May 2010, OLPC was working with Marvell on other unspecified future tablet designs. In October 2010, both OLPC and Marvell signed an agreement granting OLPC $5.6 million to fund development of its XO-3 next generation tablet computer. The tablet was to use an ARM chip from Marvell.
At CES 2012, OLPC showcased the XO-3 model, which featured a touchscreen and a modified form of SugarLabs "Sugar". In early December 2012, however, it was announced that the XO-3 would not be seeing actual production, and focus had shifted to the XO-4.
The XO-4 was launched at International CES 2013 in Las Vegas. The XO Laptop version 4 is available in two models: XO 4 and XO 4 Touch, with the latter providing multi-touch input on the display. The XO Laptop version 4 uses an ARM processor to provide high performance with low power consumption, while keeping the industrial design of the traditional XO Laptop.
Software
The laptops include an anti-theft system which can, optionally, require each laptop to periodically make contact with a server to renew its cryptographic lease token. If the cryptographic lease expires before the server is contacted, the laptop will be locked until a new token is provided. The contact may be to a country-specific server over a network or to a local, school-level server that has been manually loaded with cryptographic "lease" tokens that enable a laptop to run for days or even months between contacts. Cryptographic lease tokens can be supplied on a USB flash drive for non-networked schools. The mass production laptops are also tivoized, disallowing installation of additional software or replacement of the operating system. Users interested in development need to obtain the unlocking key separately (most developer laptops for Western users already come unlocked). It is claimed that locking prevents unintentional bricking and is part of the anti-theft system.
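The lease-renewal flow just described can be pictured with a brief sketch. The following Python fragment is a hypothetical illustration only: the function name, token format, and signature check are assumptions made for this example, not details of OLPC's actual anti-theft code.

```python
import time

def check_lease(lease, now=None, verify_signature=None):
    """Decide whether the laptop may keep running.

    `lease` is assumed to look like {"expiry": <unix time>, "sig": ...},
    obtained from a country server, a school server, or a USB stick.
    """
    now = time.time() if now is None else now
    # Reject forged or corrupted tokens if a verifier is supplied.
    if verify_signature is not None and not verify_signature(lease):
        return False
    # An expired lease locks the laptop until a fresh token arrives.
    return now < lease["expiry"]

# Example: a token loaded from a school server, valid for 30 more days.
lease = {"expiry": time.time() + 30 * 24 * 3600, "sig": b"..."}
print(check_lease(lease))  # True -> laptop keeps running
```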
In 2006, the OLPC project was heavily criticised over Red Hat's non-disclosure agreement (NDA) with Marvell concerning the wireless device in the OLPC laptop, especially in light of the project being positioned as an open-source friendly initiative. An open letter calling for documentation was signed by Theo de Raadt (a recipient of the 2004 Award for the Advancement of Free Software), and the push for open documentation was supported by Richard Stallman, the president of the Free Software Foundation. De Raadt later clarified that his objection was to OLPC shipping proprietary firmware files that third-party operating systems like OpenBSD are not allowed to redistribute independently (even in binary form), and to the lack of documentation needed to write drivers for the device. De Raadt pointed out that the OpenBSD project requires no firmware source code and no low-level documentation of the firmware internals; it asks only for binary distribution rights and documentation for interfacing with the firmware, which runs outside the main CPU. This comparatively simple request is generally honoured by many other wireless device vendors, such as Ralink. Stallman fully agreed with de Raadt's request to open up the documentation, though Stallman holds a stronger and more idealistic position: he requires that even firmware running outside the main CPU be provided in source form, something de Raadt does not require. De Raadt later noted that this more idealistic and less realistic position had been misattributed to OpenBSD's more practical approach in order to make it look unreasonable, and stated on record that OpenBSD's request was much easier to satisfy; it nonetheless remained unresolved.
OLPC's dedication to "free and open source" was questioned with its May 15, 2008, announcement that large-scale purchasers would be offered the choice of adding, at extra cost, a special version of the proprietary Windows XP OS developed by Microsoft alongside the regular, free and open Linux-based operating system with the SugarLabs "Sugar OS" GUI. Microsoft developed a modified version of Windows XP and announced in May 2008 that it would be available for an additional cost of 10 dollars per laptop. James Utzschneider, from Microsoft, said that initially only one operating system could be chosen. OLPC, however, said that future OLPC work would enable XO-1 laptops to dual boot either the free and open Linux/Sugar OS or the proprietary Microsoft Windows XP. Negroponte further said that "OLPC will sell Linux-only and dual-boot, and will not sell Windows-only [XO-1 laptops]". OLPC released the first test firmware enabling XO-1 dual-boot on July 3, 2008. This option did not prove popular. As of 2011, a few pilots had received a few thousand dual-boot machines in total, and the new ARM-based machines do not support Windows XP. No significant deployment purchased Windows licenses. Negroponte stated that the dispute had "become a distraction" for the project, and that its end goal was enabling children to learn, while constructionism and the open source ethos were more of a means to that end. Charles Kane concurred, stating that anything which detracted from the ultimate goal of widespread distribution and use was counterproductive.
Bugs
Jeff Patzer, who interned for One Laptop Per Child in Peru, said that teachers there are told to handle problems in one of two ways: if the problem is a software issue, they are to flash the computer, and if it is a hardware problem, they are to report it. He said that this blackboxing approach caused users to feel disconnected from and confused by the laptop, and often resulted in the laptops eventually going unused. Several defects in OLPC XO-1 hardware have emerged in the field, and laptop repair is often neglected by students or their families (who are responsible for maintenance) due to the relatively high cost of some components (such as displays).
On the software side, the Bitfrost security system has been known to deactivate improperly, rendering the laptop unusable until it is unlocked by support technicians with the proper keys (this is a time-consuming process, and the problem often affects large numbers of laptops at the same time). The Sugar interface has been difficult for teachers to learn, and the mesh networking feature in the OLPC XO-1 was buggy and went mostly unused in the field.
The OLPC XO-1 hardware lacks connectivity to external monitors or projectors, and teachers are not provided with software for remote assessment. As a result, students are unable to present their work to the whole class, and teachers must also assess students' work from the individual laptops. Teachers often find it difficult to use the keyboard and screen, which were designed with student use in mind.
Environmental impact
In 2005 and prior to the final design of the XO-1 hardware, OLPC received criticism because of concerns over the environmental and health impacts of hazardous materials found in most computers. The OLPC asserted that it aimed to use as many environmentally friendly materials as it could; that the laptop and all OLPC-supplied accessories would be fully compliant with the EU's Restriction of Hazardous Substances Directive (RoHS); and that the laptop would use an order of magnitude less power than the typical consumer netbooks available as of 2007, thus minimizing the environmental burden of power generation.
The XO-1 delivered (starting in 2007) uses environmentally friendly materials, complies with the EU's RoHS and uses between 0.25 and 6.5 watts in operation. According to the Green Electronics Council's Electronic Product Environmental Assessment Tool, whose sole purpose is assessing and measuring the environmental impact of laptops, the XO is not only non-toxic and fully recyclable, but also lasts longer, costs less, and is more energy efficient. The XO-1 is the first laptop to have been awarded an EPEAT Gold level rating.
Anonymity
Other discussions question whether OLPC laptops should be designed to promote anonymity or to facilitate government tracking of stolen laptops. A June 2008 New Scientist article critiqued Bitfrost's P_THEFT security option, which allows each laptop to be configured to transmit an individualized, non-repudiable digital signature to a central server at most once each day to remain functioning.
Distribution
The laptops are sold to governments, to be distributed through ministries of education with the goal of distributing "one laptop per child". The laptops are given to students, much as school uniforms are, and ultimately remain the property of the child. The operating system and software are localized to the languages of the participating countries.
OLPC later worked directly with program sponsors from the public and private sectors to implement its educational program in entire schools and communities. As a non-profit organization, OLPC required a source of funding for its program so that the laptops could be given to students at no cost to the child or their family.
Early distributions
Approximately 500 developer boards (Alpha-1) were distributed in mid-2006; 875 working prototypes (Beta 1) were delivered in late 2006; 2400 "Beta 2" machines were distributed at the end of February 2007; full-scale production started November 6, 2007. Around one million units were manufactured in 2008.
Give 1 Get 1 program
OLPC initially stated that no consumer version of the XO laptop was planned. The project, however, later established the laptopgiving.org website to accept direct donations and ran a "Give 1 Get 1" (G1G1) offer starting on November 12, 2007. The offer was initially scheduled to run for only two weeks, but was extended until December 31, 2007, to meet demand. With a donation of $399 (plus US$25 shipping cost) to the OLPC "Give 1 Get 1" program, donors received an XO-1 laptop of their own and OLPC sent another on their behalf to a child in a developing country. Shipments of "Get 1" laptops sent to donors were restricted to addresses within the United States, its territories, and Canada.
Some 83,500 people participated in the program. Delivery of all of the G1G1 laptops was completed by April 19, 2008. Delays were blamed on order fulfillment and shipment issues both within OLPC and with the outside contractors hired to manage those aspects of the G1G1 program.
Between November 17 and December 31, 2008, a second G1G1 program was run through Amazon.com and Amazon.co.uk. This partnership was chosen specifically to solve the distribution issues of the G1G1 2007 program. The price to consumers was the same as in 2007, at US$399.
The program aimed to be available worldwide. Laptops could be delivered in the US, in Canada and in more than 30 European countries, as well as in some Central and South American countries (Colombia, Haiti, Peru, Uruguay, Paraguay), African countries (Ethiopia, Ghana, Nigeria, Madagascar, Rwanda) and Asian countries (Afghanistan, Georgia, Kazakhstan, Mongolia, Nepal). Despite this, the program sold only about 12,500 laptops and generated a mere $2.5 million, a 93 percent decline from the year before.
Laptop shipments
OLPC reported that more than 3 million laptops had been shipped.
Regional responses
Uruguay
In October 2007, Uruguay placed an order for 100,000 laptops, making Uruguay the first country to purchase a full order of laptops. The first real, non-pilot deployment of the OLPC technology happened in Uruguay in December 2007. Since then, 200,000 more laptops have been ordered to cover all public school children between 6 and 12 years old.
President Tabaré Vázquez of Uruguay presented the final laptop at a school in Montevideo on October 13, 2009. Over the preceding two years, 362,000 pupils and 18,000 teachers had been involved, at a cost to the state of $260 (£159) per child, including maintenance costs, equipment repairs, training for the teachers and internet connection. The annual cost of maintaining the programme, including an information portal for pupils and teachers, was put at US$21 (£13) per child.
The country reportedly became the first in the world where every primary school child received a free laptop on October 13, 2009 as part of the Plan Ceibal (Education Connect).
Even though roughly 35% of all OLPC computers went to Uruguay, a 2013 study of the Ceibal plan by the Economics Institute (University of the Republic, Uruguay) concluded that use of the laptops did not improve literacy and that their use was mostly recreational, with only 4.1% of the laptops being used "all" or "most" days in 2012. The main conclusion was that the results showed no impact of the OLPC program on test scores in reading and math. Still, more recent studies give an opposite view of the project's results, regarding it as a success, as in the 2020 publication by the Broadband Commission for Sustainable Development.
Artsakh
On January 26, 2012, prime minister Ara Harutyunyan and entrepreneur Eduardo Eurnekian signed a memorandum of understanding launching an OLPC program in Artsakh. The program is geared towards elementary schools throughout Artsakh. Eurnekian hopes to narrow the gap by giving the war-torn region an opportunity to engage in a more solid education. The New York-based nonprofit Armenian General Benevolent Union is helping to undertake the responsibility by providing on-the-ground support. The government of Artsakh is enthusiastic and is working with OLPC to bring the program to fruition.
Nigeria
Lagos Analysis Corp. (Lancor), a Nigerian-owned company based in Lagos and the United States, sued OLPC at the end of 2007 for $20 million, claiming that the computer's keyboard design was stolen from a Lancor-patented device. OLPC responded by claiming that it had not sold any multi-lingual keyboards in the design claimed by Lancor, and that Lancor had misrepresented and concealed material facts before the court. In January 2008, the Nigerian Federal Court rejected OLPC's motion to dismiss Lancor's lawsuit and extended its injunction against OLPC distributing its XO laptops in Nigeria. OLPC appealed the court's decision; the appeal remained pending in the Nigerian Federal Court of Appeals. In March 2008, OLPC filed a lawsuit in Massachusetts to stop Lancor from suing it in the United States. In October 2008, MIT News magazine erroneously reported that the Middlesex Superior Court had granted OLPC's motions to dismiss all of Lancor's claims against OLPC, Nicholas Negroponte, and Quanta. On October 22, 2010, OLPC voluntarily moved the Massachusetts court to dismiss its own lawsuit against Lancor.
In 2007, XO laptops belonging to children participating in the OLPC program in Nigeria were reported to contain pornographic material. In response, OLPC Nigeria announced it would start equipping the machines with filters.
India
India's Ministry of Human Resource Development rejected the initiative in June 2006, saying "it would be impossible to justify an expenditure of this scale on a debatable scheme when public funds continue to be in inadequate supply for well-established needs listed in different policy documents". The Ministry later stated plans to make laptops at $10 each for schoolchildren. Two designs submitted to the Ministry in May 2007, by a final-year engineering student of Vellore Institute of Technology and a researcher from the Indian Institute of Science, Bangalore, reportedly described a laptop that could be produced for "$47 per laptop" even in small volumes. The Ministry announced in July 2008 that the cost of its proposed "$10 laptop" would in fact be $100 by the time the laptop became available. In 2010, a related $35 Sakshat Tablet was unveiled in India, released the next year as the "Aakash". In 2011, the Aakash was sold for approximately $44 by an Indian company, DataWind. DataWind planned to launch similar projects in Brazil, Egypt, Panama, Thailand and Turkey.
OLPC later expressed support for the initiative.
In 2009, a number of states announced plans to order OLPCs. However, as of 2010, only the state of Manipur had deployed 1000 laptops.
See also
Child computer
Computer literacy
Digital divide
Digital textbook
Dynabook
Educational technology in sub-Saharan Africa
Simputer
Universal access to education
Web (2013 film)
World Computer Exchange
References
Further reading
External links
501(c)(4) nonprofit organizations
Appropriate technology organizations
Articles containing video clips
Digital divide
Information and communication technologies for development
MIT Media Lab
Organizations based in Cambridge, Massachusetts
Defunct computer companies based in Massachusetts
Defunct software companies of the United States
2005 establishments in Massachusetts
Organizations established in 2005
Defunct computer companies of the United States
Defunct computer hardware companies
2014 disestablishments in Massachusetts
Organizations disestablished in 2014 | One Laptop per Child | [
"Technology"
] | 6,584 | [
"Information and communications technology",
"Information and communication technologies for development"
] |
2,156,133 | https://en.wikipedia.org/wiki/Hyperdescent | Hyperdescent is the practice of classifying a child of mixed race ancestry in the more socially dominant of the parents' races.
Hyperdescent is the opposite of hypodescent (the practice of classifying a child of mixed race ancestry in the more socially subordinate parental race). Both hyperdescent and hypodescent vary from, and may not be mutually exclusive with, other methods of determining lineage, such as patrilineality and matrilineality.
Examples
Australia
Until well into the 20th century, Australian state and federal governments engaged in a program of forcibly separating Aboriginal children with white ancestry from their Aboriginal families, and raising them in institutions that were intended to prepare them for white foster homes, jobs under white employers and/or marriage to whites.
This occurred according to theories of hyperdescent that were popular among white people. These ideas were not usually shared by Aboriginal people. White politicians and officials utilised pseudo-scientific theories that Aboriginal people were genetically and culturally inferior to whites, and were becoming extinct. These authorities believed that it was therefore improper for part-white children to live as Aboriginal people.
It was also widely believed that if Aboriginal people and their descendants had children with whites over several generations, successive generations would be less and less distinguishable from whites.
In Australia, while there were many racist laws intended to keep Aboriginal people in a socially inferior position, there were no anti-miscegenation laws and hence no barriers to marriages between Aboriginal and white partners.
Latin America
Brazil
Brazil is an example of a country with a history of European slavery of black Africans somewhat analogous to that of the United States of America. However, in the United States, hypodescent was applied, gradually classifying anyone with African American ancestry as black, specifically in one-drop rule laws passed in Virginia and other states in the 20th century. In Brazil, by contrast, people of mixed race who were fair-skinned or were educated and of higher economic classes were accepted into the elite. Thomas E. Skidmore, in Black into White: Race and Nationality in Brazilian Thought explains that many of the Brazilian elite encouraged a national process of "whitening" through miscegenation. Skidmore writes (p. 55): In fact, miscegenation did not arouse the instinctive opposition of the white elite in Brazil. On the contrary, it was a well-recognized (and tacitly condoned) process by which a few mixed bloods (almost invariably light mulattoes) had risen to the top of the social and political hierarchy.
Hispanic America
Hyperdescent is the rule in the rest of Latin America as well. The mestizo populations of Latin America usually consider themselves to be of European culture rather than American Indian. This is also apparent in the United States, where the practice of hypodescent is the rule among the non-Hispanic population contrasting with hyperdescent among Hispanics. Nearly half of U.S. Hispanics called themselves "white" in the 2000 Census, along with 80% of the population of Puerto Rico. Non-Hispanics, on the other hand, if they are of mixed race, will usually call themselves white only if they are a small fraction (1/8 or 1/16) American Indian, but otherwise will claim being of mixed race or even of the minority race. In the 2000 Census, of 35,305,818 Hispanics, only 407,073 (or just over 1%) called themselves American Indian, and only 2,224,082 (just over 6%) claimed to be of mixed race, even though these Hispanic groups (such as Mexicans) are majority mestizo in their home countries.
About 41.2% of U.S. Hispanics identify as "Some other race" as of 2006, but government agencies which do not recognize "Some other race" (such as the FBI, the CDC, and the NCHS) include this group, and therefore over 90% of Hispanics, within the white population. In such cases, such as with the NCHS, separate statistics are often kept for "White" (which includes whites and over 90% of Hispanics) and "non-Hispanic white".
Iceland
Another example of hyperdescent is in Iceland, which was initially populated by Norsemen, who took with them slaves from Ireland, Scotland and England, including some women.
Modern Icelanders are considered to be a Scandinavian people ethnically, although many of the founding generations were Irish, Scottish, or English women.
See also
Quadroon
One-drop theory
Racial segregation
Racialism
Racial purity
References
Further reading
Christine B. Hickman, "The Devil and the One Drop Rule: Racial Categories, African Americans, and the U.S. Census," Michigan Law Review, Vol. 95, March 1997, pp. 1175–1176.
Ian F. Haney Lopez, White by Law: The Legal Construction of Race (New York: New York University Press, 1996).
Thomas E. Skidmore, Black into White: Race and Nationality in Brazilian Thought (Durham: Duke University Press, 1993).
Multiracial affairs
Kinship and descent | Hyperdescent | [
"Biology"
] | 1,050 | [
"Behavior",
"Human behavior",
"Kinship and descent"
] |
2,156,176 | https://en.wikipedia.org/wiki/Delay-insensitive%20minterm%20synthesis | Within digital electronics, the DIMS (delay-insensitive minterm synthesis) system is an asynchronous design methodology making the least possible timing assumptions. Assuming only the quasi-delay-insensitive delay model the generated designs need little if any timing hazard testing. The basis for DIMS is the use of two wires to represent each bit of data. This is known as a dual-rail data encoding. Parts of the system communicate using the early four-phase asynchronous protocol.
The construction of DIMS logic gates comprises generating every possible minterm using a row of C-elements and then gathering the outputs of these using OR gates, which generate the true and false output signals. With two dual-rail inputs, the gate would be composed of four two-input C-elements. A three-input gate uses eight three-input C-elements.
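As a concrete illustration of the minterm construction, here is a minimal behavioural sketch in Python of a DIMS dual-rail AND gate: four two-input C-elements generate the minterms, and OR logic gathers them into the true and false output rails. The class and signal names are invented for this example rather than taken from any published DIMS implementation.

```python
class CElement:
    """Muller C-element: the output rises when all inputs are 1,
    falls when all inputs are 0, and otherwise holds its state."""
    def __init__(self):
        self.state = 0

    def update(self, *inputs):
        if all(v == 1 for v in inputs):
            self.state = 1
        elif all(v == 0 for v in inputs):
            self.state = 0
        return self.state  # hold the previous value otherwise


class DimsAnd:
    """Dual-rail AND: inputs a=(a_t, a_f), b=(b_t, b_f); (0, 0) is 'empty'."""
    def __init__(self):
        self.m = [CElement() for _ in range(4)]

    def update(self, a, b):
        a_t, a_f = a
        b_t, b_f = b
        m_ff = self.m[0].update(a_f, b_f)  # a=0, b=0 -> result is false
        m_ft = self.m[1].update(a_f, b_t)  # a=0, b=1 -> result is false
        m_tf = self.m[2].update(a_t, b_f)  # a=1, b=0 -> result is false
        m_tt = self.m[3].update(a_t, b_t)  # a=1, b=1 -> result is true
        # OR gates gather the minterms into the two output rails.
        return (m_tt, m_ff | m_ft | m_tf)


gate = DimsAnd()
print(gate.update((1, 0), (0, 1)))  # a=1, b=0 -> (0, 1), i.e. false
print(gate.update((0, 0), (0, 0)))  # all-empty spacer -> (0, 0)
print(gate.update((1, 0), (1, 0)))  # a=1, b=1 -> (1, 0), i.e. true
```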
Latches are constructed using two C-elements to store the data and an OR gate, whose inputs are the data output wires, to acknowledge the input once the data has been latched. The acknowledge from the forward stage is inverted and passed to the C-elements to allow them to reset once the computation has completed. This latch design is known as the 'half latch'. Other asynchronous latches provide a higher data capacity and greater levels of decoupling.
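The half latch described above can be sketched in the same style, reusing the CElement class from the previous fragment; again, the names are illustrative assumptions rather than a reference implementation.

```python
class HalfLatch:
    """DIMS half latch: two C-elements store the dual-rail bit, and an OR
    of their outputs acknowledges the input once the data is latched."""
    def __init__(self):
        self.c_t = CElement()  # stores the 'true' rail
        self.c_f = CElement()  # stores the 'false' rail

    def update(self, in_t, in_f, ack_from_next):
        enable = 1 - ack_from_next             # inverted acknowledge
        out_t = self.c_t.update(in_t, enable)
        out_f = self.c_f.update(in_f, enable)
        ack_to_prev = out_t | out_f            # OR gate: data has been latched
        return out_t, out_f, ack_to_prev


latch = HalfLatch()
print(latch.update(1, 0, 0))  # latch a 'true' token -> (1, 0, 1)
print(latch.update(0, 0, 1))  # next stage acknowledged -> resets to (0, 0, 0)
```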
DIMS designs are large and slow but they have the advantage of being very robust.
Further reading
Jens Sparsø, Steve Furber: "Principles of Asynchronous Circuit Design"; Kluwer, Dordrecht (2001); chapter 5.5.1.
Digital electronics | Delay-insensitive minterm synthesis | [
"Engineering"
] | 337 | [
"Electronic engineering",
"Digital electronics"
] |
2,156,522 | https://en.wikipedia.org/wiki/Institution%20%28computer%20science%29 | The notion of institution was created by Joseph Goguen and Rod Burstall in the late 1970s, in order to deal with the "population explosion among the logical systems used in computer science". The notion attempts to "formalize the informal" concept of logical system.
The use of institutions makes it possible to develop concepts of specification languages (like structuring of specifications, parameterization, implementation, refinement, and development), proof calculi, and even tools in a way completely independent of the underlying logical system. There are also morphisms that make it possible to relate and translate logical systems. Important applications of this are re-use of logical structure (also called borrowing), and heterogeneous specification and combination of logics.
The spread of institutional model theory has generalized various notions and results of model theory, and institutions themselves have impacted the progress of universal logic.
Definition
The theory of institutions does not assume anything about the nature of the logical system. That is, models and sentences may be arbitrary objects; the only assumption is that there is a satisfaction relation between models and sentences, telling whether a sentence holds in a model or not. Satisfaction is inspired by Tarski's truth definition, but can in fact be any binary relation.
A crucial feature of institutions is that models, sentences, and their satisfaction are always considered to live in some vocabulary or context (called a signature) that defines the (non-logic) symbols that may be used in sentences and that need to be interpreted in models. Moreover, signature morphisms make it possible to extend signatures, change notation, and so on. Nothing is assumed about signatures and signature morphisms except that signature morphisms can be composed; this amounts to having a category of signatures and morphisms. Finally, it is assumed that signature morphisms lead to translations of sentences and models in a way that preserves satisfaction. While sentences are translated along with signature morphisms (think of symbols being replaced along the morphism), models are translated (or better: reduced) against signature morphisms. For example, in the case of a signature extension, a model of the (larger) target signature may be reduced to a model of the (smaller) source signature by just forgetting some components of the model.
Let $\mathbf{Cat}^{\mathrm{op}}$ denote the opposite of the category of small categories. An institution formally consists of
a category $\mathrm{Sign}$ of signatures,
a functor $\mathrm{Sen}\colon \mathrm{Sign} \to \mathbf{Set}$ giving, for each signature $\Sigma$, the set of sentences $\mathrm{Sen}(\Sigma)$, and for each signature morphism $\sigma\colon \Sigma \to \Sigma'$, the sentence translation map $\mathrm{Sen}(\sigma)\colon \mathrm{Sen}(\Sigma) \to \mathrm{Sen}(\Sigma')$, where often $\mathrm{Sen}(\sigma)(\varphi)$ is written as $\sigma(\varphi)$,
a functor $\mathrm{Mod}\colon \mathrm{Sign} \to \mathbf{Cat}^{\mathrm{op}}$ giving, for each signature $\Sigma$, the category of models $\mathrm{Mod}(\Sigma)$, and for each signature morphism $\sigma\colon \Sigma \to \Sigma'$, the reduct functor $\mathrm{Mod}(\sigma)\colon \mathrm{Mod}(\Sigma') \to \mathrm{Mod}(\Sigma)$, where often $\mathrm{Mod}(\sigma)(M')$ is written as $M'|_{\sigma}$,
a satisfaction relation ${\models_{\Sigma}} \subseteq |\mathrm{Mod}(\Sigma)| \times \mathrm{Sen}(\Sigma)$ for each $\Sigma \in \mathrm{Sign}$,
such that for each $\sigma\colon \Sigma \to \Sigma'$ in $\mathrm{Sign}$, the following satisfaction condition holds:
$M' \models_{\Sigma'} \sigma(\varphi)$ if and only if $M'|_{\sigma} \models_{\Sigma} \varphi$
for each $M' \in \mathrm{Mod}(\Sigma')$ and $\varphi \in \mathrm{Sen}(\Sigma)$.
The satisfaction condition expresses that truth is invariant under change of notation (and also under enlargement or quotienting of context).
Strictly speaking, the model functor ends in the "category" of all large categories.
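To make the abstract definition concrete, the following Python sketch presents classical propositional logic as an institution and checks the satisfaction condition by brute force. The encoding of sentences as nested tuples and all helper names are assumptions made for this illustration, not taken from the cited literature.

```python
from itertools import product

def translate(sentence, sigma):
    """Sen(sigma): rename the variables of a sentence along a signature morphism."""
    op = sentence[0]
    if op == "var":
        return ("var", sigma[sentence[1]])
    return (op,) + tuple(translate(s, sigma) for s in sentence[1:])

def reduct(model, sigma):
    """Mod(sigma): reduce a Sigma'-model to a Sigma-model *against* sigma."""
    return {v: model[sigma[v]] for v in sigma}

def satisfies(model, sentence):
    """The satisfaction relation: evaluate a sentence in a truth assignment."""
    op = sentence[0]
    if op == "var":
        return model[sentence[1]]
    if op == "not":
        return not satisfies(model, sentence[1])
    if op == "and":
        return satisfies(model, sentence[1]) and satisfies(model, sentence[2])

# A signature morphism sigma: {p, q} -> {x, y, z} with p |-> x and q |-> z.
sigma = {"p": "x", "q": "z"}
phi = ("and", ("var", "p"), ("not", ("var", "q")))

# Satisfaction condition: M' |= sigma(phi)  iff  M'|_sigma |= phi,
# checked over all eight models of the target signature.
for bits in product([False, True], repeat=3):
    m_prime = dict(zip(["x", "y", "z"], bits))
    assert satisfies(m_prime, translate(phi, sigma)) == \
           satisfies(reduct(m_prime, sigma), phi)
print("satisfaction condition holds for all models")
```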
Examples of institutions
Common logic
Common Algebraic Specification Language (CASL)
First-order logic
Higher-order logic
Intuitionistic logic
Modal logic
Propositional logic
Temporal logic
Web Ontology Language (OWL)
See also
Abstract model theory
Institutional model theory
Universal logic
References
Further reading
This was the first publication on institution theory and the preliminary version of Goguen and Burstall (1992).
External links
Formalism, Logic, Institution - Relating, Translating and Structuring. Includes large bibliography.
Contains recent work on institutional model theory.
Theoretical computer science
Model theory | Institution (computer science) | [
"Mathematics"
] | 730 | [
"Theoretical computer science",
"Applied mathematics",
"Mathematical logic",
"Model theory"
] |
2,156,625 | https://en.wikipedia.org/wiki/Hypodescent | In societies that regard some races or ethnic groups of people as dominant or superior and others as subordinate or inferior, hypodescent refers to the automatic assignment of children of a mixed union to the subordinate group. The opposite practice is hyperdescent, in which children are assigned to the race that is considered dominant or superior.
Parallel practices include patrilineality, matrilineality, and cognatic descent, which assign race, ethnicity, or religion according to the father, mother, or some combination, without regard to the race of the other parent. These systems determine group membership based on the gender of the parent rather than the social dominance of the group, and thus can be hypodescent or hyperdescent depending on the genders of the parents and the views of the culture in which they live (i.e. patriarchal vs matriarchal societies).
Attempts to limit (or eliminate) mixed-race populations by legal means are defined in anti-miscegenation laws, such as passed by various states in the United States.
History
While customs, practices and systems of belief emphasizing the value of purity of descent are arguably as old as mankind, few societies systematically codified them, or legally enforced their outcome. Practices were enforced in communities. Such was the case in classical Greece – which made clear distinctions between Greeks and Barbarians. The Roman Republic had a different pattern. While it was expansionist, and militarily and culturally aggressive, it actively encouraged the Romanisation of client kingdoms, which included intermarriage of Romans with their elite citizens and making this class citizens of Rome as reward and as exemplars.
Hypo/hyperdescent in Colonial North America
The North American practice of applying a rule of hypodescent began during the colonial era, when indentured servants and transported convicts working at the direction of European colonists and colonial authorities were joined by enslaved Africans, who from the 16th century onwards were transported to the Americas via the Atlantic slave trade. Initially, some of these captives, particularly those who were Christians, were classified as indentured workers rather than slaves.
Virginia formally enacted a slave code in 1705. There is documentary evidence from the 1650s that some Africans in Virginia were serving lifelong terms of indenture. In the 1660s, the Assembly stated that "any English servant that shall run away in company with any Negroes who are incapable of making satisfaction by addition of time shall serve for the time of the said Negroes absence", indicating that at least some Africans could not "make satisfaction" by serving longer if recaptured (presumably because they were already indentured for life). This device gave legal status to the practice of lifetime enslavement of people of African descent; in subsequent statutes the legislature defined conditions of lifetime servitude.
In 1655, Elizabeth Key Grinstead, a mixed-race woman, fought and won the first freedom suit in Virginia. Her English father had acknowledged her as his daughter, had her baptized as Christian, and, falling ill, established a legal guardian to care for her after his death, arranging a limited-term indenture for her as a girl. But the guardian sold her indenture and left the colony, and the next master did not free her. When he died, his estate claimed her and her son as slave property.
However, following Key's victory, Virginia established the principle in law in 1662 of partus sequitur ventrem, from Roman law; that is, children born in the colonies would take the social status of their mothers. This meant that all children born to enslaved women would be born into slavery, regardless of their paternity and race. This was in contrast to English common law, by which the status of children of English subjects was determined by the father.
As slavery became a racial caste system, people of only partial African ancestry and majority European ancestry were born into slavery. African descent became associated with slavery. By hypodescent, people of even partial African ancestry were classified socially below whites. By the late 18th century, there were numerous families of majority-white slaves, such as the mixed-race children born to the slave Sally Hemings and her master Thomas Jefferson. She was three-quarters white and the half-sister of his late wife; their children, born into slavery, were seven-eighths white. Jefferson gave the four surviving children their freedom as adults; three assimilated into white society.
The Southern author Mary Chesnut wrote in her famous A Diary from Dixie, of the Civil War-era, that "any lady is ready to tell you who is the father of all mulatto children in everybody’s household but her own. Those, she seems to think, drop from the clouds."
Fanny Kemble, the British actress who married an American slaveholder, wrote about her observations of slavery as well, including the way white men sexually abused slave women and left their mixed-race children enslaved.
Sometimes the white fathers freed the children and/or their mothers, or provided education or apprenticeship, or settled property on them in a significant transfer of social capital. Notable antebellum period examples of fathers who provided for their mixed-race children were the fathers of Charles Henry Langston and John Mercer Langston and the father of the Healy family of Georgia. Each had a common-law marriage with a woman of partial African descent. Other mixed-race children were left enslaved; some were sold away by their fathers.
Research by historians and genealogists has shown that unlike the above examples, most free African Americans listed in the first two US censuses in the Upper South were descended from relationships or marriages in colonial Virginia between white women, indentured servant or free, and African or African-American men, indentured servant, free or slave. Their unions reflected the fluid nature of relationships among the working classes before slave caste was hardened, as well as the small households and farms within which many people worked. The children of white mothers were born free. If they were illegitimate and mixed race, they were apprenticed in order to avoid the community being burdened with upkeep, but such people gained a step in freedom.
By the turn of the nineteenth century, many of these families of free African Americans, along with European-American neighbors, migrated to frontier areas of Virginia, North Carolina, and then further west. Such families sometimes settled in insular groups. Mixed-race people of African-European descent are believed to have been the origin of some isolated settlements, which have long claimed or were said to be of American Indian or Portuguese ancestry. As an example, a 21st-century DNA study of a group of Melungeon families in Tennessee and Kentucky, long rumored to be descendants of Turks or Native Americans, showed they were overwhelmingly of African and European ancestry.
Hypo/hyperdescent in Reconstruction, late 19th century and 20th-century United States
By the late 1870s, conservative white Democrats regained power in state legislatures across the South, even in areas where there were black majorities, largely by a process of violence and intimidation of black Republicans. The Democrats gradually imposed white supremacy in law and practice. From 1890 to 1908, beginning with Mississippi, the state legislatures passed new constitutions and laws that created barriers to voter registration by such means as the poll tax, literacy tests, record requirements and others. The number of voters on the rolls fell drastically and most blacks and many poor whites were disenfranchised for decades. The whites also passed Jim Crow laws, such as racial segregation of public facilities.
African Americans and whites established the National Association for the Advancement of Colored People in 1909 to fight against legal discrimination and disenfranchisement. Each time they won a court case, for instance, against the use of white primaries, white-dominated legislatures would pass new laws to exclude blacks from the political system.
In the 20th century, under the influence of eugenics and racial discrimination, states enacted laws classifying as black any person who had any traceable evidence (or perception) of African ancestry. Under Virginia's Racial Integrity Act of 1924, the 'one-drop' rule defined as black a person with any known African ancestry, regardless of the number of intervening generations.
The same Act established a binary classification system for vital records, classifying people as 'white' or 'black' (Negro at the time). The latter was effectively a 'catch all' term for all people of color. Native Americans were classified as colored, a clear indication of the then-prevalent local attitude to all races other than white.
In its most extreme form in the United States, hypodescent was the basis of the "one drop rule", meaning that if an individual had any black ancestry, the person was classified as black. Laws were passed in southern and other states in the early 20th century, long after the end of slavery, to define white and black under associated laws for segregation: Tennessee adopted such a "one-drop" statute in 1910, and Louisiana soon followed; Texas and Arkansas in 1911; Mississippi in 1917; North Carolina in 1923; Virginia in 1924; Alabama and Georgia in 1927; and Oklahoma in 1931.
During this same period, Florida, Indiana, Kentucky, Maryland, Missouri, Nebraska, North Dakota, and Utah retained their old "blood fraction" statutes de jure, but amended these fractions (one-sixteenth, one-thirty-second) to be equivalent to one drop de facto.
By 1924 many "white" people in Virginia would have had some African and/or Native American ancestry, given the mixing over the centuries. At the same time that Virginia was trying to harden racial caste, African Americans were organizing to overturn segregation and regain civil rights, that had been lost to Jim Crow laws and disfranchisement of the majority of the black community.
Anti-miscegenation marriage laws
By the early 1940s, of the thirty U.S. states that had anti-miscegenation laws, seven states (Alabama, Arizona, Georgia, Montana, Oklahoma, Texas, and Virginia) had adopted the one-drop theory for rules prohibiting interracial marriages. This was part of a continuing social hardening of racial lines after the turn of the century, when southern states imposed legal segregation and disfranchised African Americans.
Other states applied the hypodescent rule without carrying it to the "one-drop" extreme, using instead a blood quantum standard. For example, Utah's anti-miscegenation law prohibited marriage between a white and anyone considered a negro, mulatto, quadroon (one-fourth black), octoroon (one-eighth black), Mongolian, or member of "the Malay race" (here referring to Filipinos). No restrictions were placed on marriages between people who were not "white people". The law was repealed in 1963.
Other examples of application
In the United States, hypodescent has often defined children of mixed-race couples as black when one parent is classified as "black", or either is thought to have African descent.
Since the 1960s particularly and the rise of the Black Power movement, many members of the African-American community have emphasized that mixed-race individuals of African descent should identify as black in order to maximize their political power as a group in the United States. Leaders say they were historically discriminated against as black by white people, so should identify as black to assert their power in numbers.
President of the United States Barack Obama is often referred to as the first black or African-American President. He has said as a youth that he chose to identify as black and worked in community organizing in a black community. His mother and her parents were of European descent; his father and his family are sub-Saharan African from Kenya. But, in a case exemplifying the complex racial history of the United States, Obama is believed to be descended through his maternal line from John Punch, the first African documented historically as a slave in Virginia. The genealogical company Ancestry.com sponsored a study of his family history and documented this connection. Punch's descendants increasingly married white and are believed to have been accepted as white by the early 18th century.
In the history of the US, people have applied hypodescent less consistently in intermarriage between white people and people of other racial groups, such as Native Americans and Asians. There was certainly discrimination against people of mixed European and Native American, and of mixed European and Asian, ancestry, however.
Hypodescent is not only practiced by people of European ancestry. In Omaha, Nebraska, white people have celebrated Logan Fontenelle, a mixed-race man of the late 19th century who served as interpreter for a major treaty between the Omaha Nation and the United States that ceded most of their land before they moved to a reservation. White people referred to Fontenelle as chief of the Omaha, and he was one of the signatories of the treaty along with Omaha chiefs, perhaps because he spoke English. Various places in the city of Omaha were named after Fontenelle. But among the Omaha, Fontenelle was considered a white man because his father was white, and he was never a recognized chief. As the Omaha had a patrilineal kinship society, hereditary chieftainship and descent were passed through the male line. A person whose father was white was not considered Omaha unless he was formally adopted by a male Omaha member.
References in culture
Both African-American and white authors have explored issues related to mixed race and hypodescent in fiction and non-fiction.
In the novel Pudd'nhead Wilson, by Mark Twain, the character of the enslaved woman Roxy is described as "Negro", although she has considerable white ancestry and could pass for white. Her son is born into slavery and is 1/32 part black. He is mistakenly switched in infancy with the white son of the master's household, and each grows up to fulfill his social role.
The US late-19th century author Charles Chesnutt, who grew up free in Ohio and was of mixed African-European ancestry, wrote numerous stories set in the post-Civil War South. He explored the issues encountered by people of mixed race, in some cases relating what became known as the tragic mulatto genre.
Passing is a 1929 novel by Nella Larsen, dealing with mixed-race African-American women who choose alternate paths for marriage and identity.
In the musical Show Boat (1927), a white man is married to a mixed-race woman passing for white. He is accused by the sheriff of violating the state's anti-miscegenation laws. The white man pricks his wife's finger with a knife, swallows a drop of blood, then tells the sheriff "I'm no white man – I've got negro blood in me." The sheriff lets him off.
Sinclair Lewis's novel Kingsblood Royal uses hypodescent and the "one drop" principle as principal plot elements.
Numerous memoirs have been published by African Americans who explore growing up as mixed race with a white parent, such as The Color of Water: A Black Man's Tribute to His White Mother by James McBride. Bliss Broyard, in One Drop: My Father's Hidden Life, wrote about her father Anatole Broyard's decision to live and work as a writer, rather than a black writer, largely separating from his mixed-race, Louisiana Creole family. He married a white woman of Swedish descent and their children appear white.
See also
Racialism
Racism
References
Further reading
Multiracial affairs
Kinship and descent | Hypodescent | [
"Biology"
] | 3,174 | [
"Behavior",
"Human behavior",
"Kinship and descent"
] |
2,156,733 | https://en.wikipedia.org/wiki/Neonatal%20line | The neonatal line is a particular band of incremental growth lines seen in histologic sections of both enamel and dentin of primary teeth. It belongs to a series of growth lines in tooth enamel known as the striae of Retzius, denoting the prolonged rest period of enamel formation that occurs at the time of birth. The neonatal line is darker and larger than the rest of the striae of Retzius. The neonatal line is the demarcation between enamel formed before birth and after birth, i.e., prenatal and postnatal enamel respectively. It is caused by the different physiologic changes at birth and is used to identify enamel formation before and after birth. The position of the neonatal line differs from tooth to tooth.
Formation
The formation of the neonatal line is caused by changes in the direction and degree of tooth mineralization resulting from the biological stress of passing into extrauterine life. The specific factors underlying its formation and width remain unclear.
Forensic Dentistry
In forensic dentistry, the neonatal line can be used to determine matters such as whether a child died before or after birth and approximately how long a child lived after birth. The neonatal line can serve as a marker for the exact period of survival of an infant, through measurement of the amount of postnatal hard tissue formation and examination of the thickness of the neonatal line.
References
Histology
Dental enamel
Tooth development | Neonatal line | [
"Chemistry"
] | 294 | [
"Histology",
"Microscopy"
] |
2,156,756 | https://en.wikipedia.org/wiki/Tan%20line | A tan line is a visually clear division on the human skin between an area of pronounced comparative paleness relative to other areas that have been suntanned by exposure to ultraviolet (UV) radiation or by sunless tanning. The source of the radiation may be the sun or artificial UV sources such as tanning lamps. Tan lines are usually an unintentional result of a work environment or recreational activities, but are sometimes intentional. Many people seek to avoid tan lines that will be visible when regular clothes are worn.
Occupation-related tan lines
Farmer's tan
A "farmer's tan" (also called "golfer's tan", "sailor's tan", "twat tan" or "tennis tan") refers to the typical tan lines developed by regular outdoor activity when wearing a short sleeve shirt. The farmer's tan usually starts with a suntan covering the exposed parts of the arms and neck. It is distinct in that the shoulders, chest, and back remain unaffected by the sun. Tennis and golf additionally cause recognizable tans on the middle section of the legs due to the wearing of shorts and socks for prolonged hours in the sun.
The "Texas tan" is similar, with the exception that the shoulders are also affected by the sun, caused by working outdoors while wearing a sleeveless shirt.
Some of the common tan lines associated with a farmer's tan include:
elbows (from a short-sleeve shirt)
neck (from the shirt collar; the derogatory word redneck comes from farmers getting this tan line)
thighs (if shorts are worn)
ankles (from socks, only if exposed)
forehead (if a hat is worn)
wrist (if a watch is worn)
eyes (if sunglasses are worn)
Driver's tan
A "driver's tan" (or similar terms such as "trucker's tan" or "taxi driver's arm") is a tanning pattern where one arm from the sleeve downward is tanned significantly more than the other arm due to extensive driving of a motor vehicle with the window down.
Sandal tan
A "sandal tan" is a set of distinctive tan lines on the feet, resulting from the straps of sandals worn throughout the summer by such different professions as lifeguards and monks.
Recreation-related tan lines
Bikini tan
Wearing a bikini in the sun results in the uncovered skin becoming suntanned and creates a "bikini tan". These tan lines separate pale breasts, crotch, and buttocks from otherwise tanned skin. "Racing stripes" may refer to the portion of a bikini tan line exposed when wearing one-piece swimwear.
Biker's tan
A "biker's tan" is a tan line three-quarters up each leg, where Lycra bike shorts would generally begin to cover. Depending on the activity, the inner side of the arms may be paler than the outer side. Unless the biker uses cycling gloves made to allow tanning, the area on the back of each hand will usually not be tanned.
Goggle tan
Raccoon-like tan lines can emerge around the eyes after wearing goggles, common among industrial workers (wearing safety glasses), skiers, snowboarders, and swimmers.
Golfer's tan
A golfer's tan is typically a tan on the back of a shaved or bald head that forms when a baseball cap is worn; a semicircle-shaped tan line forms around the strap that adjusts the hat's size. With between 3 and 5 hours spent out on the course in direct sunlight, sunburn, poor tan lines, and heat exhaustion are regular occurrences for the unprepared golfer.
Other recreation-related tan lines
"Soccer tan" — a stripe of tan from the lower thighs to the bottom of the knees common to soccer players; the upper thighs and lower legs are covered by shorts and shin guards/socks.
"Football tan" — a stripe of tan from below the knees to just above the ankle; the thighs, knees, and ankles are covered by uniform pants and ankle socks. The arms are often tanned like a farmer's tan, but the face is untanned due to wearing a helmet. This tanning effect can be particularly pronounced since many football teams run two-a-day practices during the summer.
"Tiger tan" — two tan stripes on the arm of a lacrosse player, resulting from the gaps between gloves, elbow pads, and shoulder pads.
Intentional tan lines
One of the common uses for tanning beds is the option of tanning entirely nude to reduce the appearance of tan lines. In contrast, some people prefer to have tan lines and will wear undergarments or swimwear with the deliberate purpose of creating a sharply defined tan line.
Additionally, "tanning stickers" that attach to the skin while tanning can be purchased. Common designs are a heart, Playboy bunny, and dolphins, but many designs exist. These are typically sold on a roll of 500 to 1000 as single-use, disposable stickers. People can place the sticker on the same area each time they tan (indoors or outdoors), leaving the covered area pale while the rest of their skin tans normally. This allows individuals to see their tanning progress and others to see if the "tan tattoo" is in an exposed area.
It is also possible to use sunscreen to create intentional tan lines that form patterns or words, to make a statement, or to create a design.
Avoiding tan lines
Wearing clothes while tanning results in the creation of tan lines, which many people regard as unaesthetic. Many people want to avoid tan lines on those body parts that will be visible when they are fully clothed. Some people try to achieve an all-over tan or to maximize their tan coverage. To achieve an all-over tan, tanners need to dispense with clothing entirely; to maximize coverage, they need to minimize the amount of clothing they wear while tanning. Women who cannot dispense with a swimsuit sometimes tan with the back strap undone while lying on the front, or remove the shoulder straps, besides wearing swimsuits covering less area than their normal clothing. Any exposure is subject to local community standards and personal choice. Some people tan in the privacy of their backyard, where they can at times tan without clothes, and some countries have set aside clothing-optional swimming areas (popularly known as nude beaches) where people can tan and swim clothes-free. The naturist movement provides completely nude, clothes-free sunbathing opportunities in most countries. Some people tan topless, and others wear very brief swimwear, such as a microkini or thong.
A 1969 innovation is tan-through swimwear, which uses fabric perforated with thousands of micro holes that are nearly invisible to the naked eye but transmit enough sunlight to approach an all-over tan, especially if the fabric is stretched taut. Tan-through swimwear typically allows more than one-third of UV rays to pass through (equivalent to SPF 3 or less), so an application of sunscreen even to the covered area is recommended, although some tan-through fabrics have built-in SPF and require no sunscreen underneath.
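Since the sun protection factor is, to a first approximation, the reciprocal of the fraction of sunburn-causing UV transmitted, the SPF 3 equivalence quoted above is simple arithmetic; the sketch below is illustrative only.

```python
# SPF is approximately the reciprocal of the fraction of sunburn-causing UV
# transmitted, so "one-third transmitted" corresponds to about SPF 3 (a sketch).
def spf_from_transmission(fraction_transmitted: float) -> float:
    return 1.0 / fraction_transmitted

print(spf_from_transmission(1 / 3))   # 3.0
```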
References
Sun tanning | Tan line | [
"Chemistry"
] | 1,476 | [
"Sun tanning",
"Ultraviolet radiation"
] |
2,156,774 | https://en.wikipedia.org/wiki/HD%20149026%20b | HD 149026 b, formally named Smertrios, is an extrasolar planet and hot Jupiter approximately 250 light-years from the Sun in the constellation of Hercules. Its host star, HD 149026, is formally named Ogma.
The planet, which has a 2.8766-day orbital period, closely orbits the yellow subgiant star HD 149026 and is notable first as a transiting planet, and second for a small measured radius (relative to its mass and incoming heat) that suggests an exceptionally large planetary core.
Name
Following its discovery in 2005 the planet was designated HD 149026 b. In July 2014, the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Smertrios for this planet. The winning name was submitted by the Club d'Astronomie de Toussaint of France. Smertrios was a Gallic deity of war.
Discovery
The planet was discovered by the N2K Consortium in 2005, which searches stars for closely orbiting giant planets similar to 51 Pegasi b using the highly successful radial velocity method. The star's spectrum was studied with the Keck and Subaru Telescopes. After the planet was first detected from the Doppler effect it caused in the light of the host star, it was studied for transits at the Fairborn Observatory. A tiny decrease of light (0.003 magnitudes) was detected every time the planet transited the star, thus confirming its existence.
Although the change of brightness caused by the transiting planet is tiny, it is detectable by amateur astronomers, providing an opportunity for amateurs to make important astronomical contributions. Indeed, one amateur astronomer, Ron Bissinger, actually detected a partial transit a day before the discovery was published.
Orbit
The planet's orbit is probably circular (within one standard deviation of error).
Careful radial velocity measurements have made it possible to detect the Rossiter–McLaughlin effect, the shifting in photospheric spectral lines caused by the planet occulting a part of the rotating stellar surface. This effect allows the measurement of the angle between the planet's orbital plane and the equatorial plane of the star. In the case of HD 149026 b, the alignment was measured to be +11°. This in turn suggests that the formation of the planet was peaceful and probably involved interactions with the protoplanetary disc. A much larger angle would have suggested a violent interplay with other protoplanets. A study in 2012 refined the spin-orbit angle to 12°.
Physical characteristics
The planet orbits the star in a so-called "torch orbit": one revolution around the star takes only a little less than three Earth days to complete. The planet is less massive than Jupiter (0.36 times Jupiter's mass, or 114 times Earth's mass) but more massive than Saturn. Assuming a Bond albedo of 0.3, the planet's temperature was initially estimated to be even higher than the predicted temperature of HD 209458 b, which had inaugurated the category of Chthonian "hell planet". Its day-side brightness temperature was subsequently directly measured as 2,300 ± 200 K by comparing the combined emissions of star and planet at 8 μm wavelength before and during a transit event. This is well above the melting point of iron.
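A minimal sketch of the albedo-based temperature estimate discussed above; the stellar effective temperature, stellar radius, and orbital distance used are approximate literature values assumed here for illustration, not figures given in this article.

```python
import math

T_EFF = 6150.0              # stellar effective temperature, K (assumed)
R_STAR = 1.5 * 6.957e8      # stellar radius, m (assumed, ~1.5 solar radii)
A_ORB = 0.043 * 1.496e11    # orbital semi-major axis, m (assumed)

def t_eq(bond_albedo: float) -> float:
    """Equilibrium temperature with heat redistributed over the whole planet."""
    return T_EFF * math.sqrt(R_STAR / (2 * A_ORB)) * (1 - bond_albedo) ** 0.25

print(f"A = 0.3: {t_eq(0.3):.0f} K")    # ~1,600 K, the early estimate regime
print(f"A = 0.0: {t_eq(0.0):.0f} K")    # ~1,750 K
print(f"A = 0.0, day side only: {t_eq(0.0) * 2 ** 0.25:.0f} K")
# ~2,080 K, approaching the measured 2,300 K day-side brightness temperature
```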
This planet's geometric albedo has not been measured directly, while its Bond albedo was measured at 0.53 in 2017. The initial estimate of 0.3 had come from averaging Sudarsky's theoretical classes IV and V. The planet's extremely high temperature has forced astronomers to abandon that estimate; in 2007, they predicted that the planet must absorb most of the starlight that falls on it — that is, near zero albedo like HD 209458 b. Much of the absorption takes place at the top of its atmosphere.
Between that and the hot, high-pressure gas surrounding the core, a stratosphere of cooler gas was once predicted but has not been observed. The atmosphere is likely high in carbon monoxide and carbon dioxide.
The outer shell of dark, opaque, hot clouds is usually thought to consist of vanadium oxide and titanium oxide ("pM planets"), but spectral measurement in 2021 revealed neutral titanium and iron instead, implying the planet may be oxygen-poor and carbon-rich.
The planet-star radius ratio is 0.05158 ± 0.00077. Currently what limits more precision on HD 149026 b's radius "is the uncertainty in the stellar radius", and measurement of the stellar radius is distorted by pollution on the star's surface.
Even allowing for uncertainty, the radius of HD 149026 b is only about three quarters that of Jupiter (or 83% that of Saturn). HD 149026 b was the first of its kind: its low volume means that the planet is far too dense for a Saturn-like gas giant of its mass and temperature.
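A back-of-the-envelope density check using the mass and radius ratios quoted above (Jupiter and Saturn values are standard constants) shows why the low volume points to a dense interior:

```python
import math

M_JUP, R_JUP = 1.898e27, 7.149e7      # Jupiter mass [kg] and radius [m]
mass = 0.36 * M_JUP                   # ~114 Earth masses
radius = 0.75 * R_JUP                 # about three quarters of Jupiter's radius
rho = mass / ((4 / 3) * math.pi * radius ** 3)
print(f"{rho:.0f} kg/m^3")            # ~1,060 kg/m^3
# Saturn (0.30 Jupiter masses) averages only ~690 kg/m^3; a strongly irradiated
# gas giant of this mass should be puffier, not denser -- hence the inferred
# massive heavy-element core.
```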
It may have an exceptionally large core composed of "metals", or elements heavier than hydrogen and helium: the initial theoretical models gave the core a mass of 70 times Earth's mass; further refinements suggest 80–110 Earth masses. As a result, the planet has been described as a "super-Neptune", in analogy to the core-dominated outer ice giants of the Solar System, though whether the core of HD 149026 b is mainly icy or rocky is not currently known. Robert Naeye in Sky & Telescope claimed "it contains as much or more metals than all the planets and asteroids in our solar system combined". In addition to uncertainties of radius, the planet's tidal heating over its history needs to be taken into account; if its current orbit is circular but evolved from a more eccentric one, the extra heat increases the expected radius in the models and thereby the inferred core size.
Naeye further speculated that the gravity could be as high as ten g (ten times gravity on Earth's surface) on the surface of the core.
Theoretical consequences
The discovery was advocated as a piece of evidence for the popular solar nebula accretion model, where planets are formed from the accretion of smaller objects. In this model, giant planet embryos grow large enough to acquire large envelopes of hydrogen and helium. However, opponents of this model emphasize that only one example of such a dense planet is not proof. In fact, such a huge core is difficult to explain even by the core accretion model.
One possibility is that because the planet orbits so close to its star, it is — unlike Jupiter — ineffective in cleansing the planetary system of rocky bodies. Instead, a heavy rain of heavier elements on the planet may have helped create the large core.
See also
HAT-P-3b
HD 209458 b
HD 179949 b
Tau Boötis b
References
External links
Hercules (constellation)
Hot Jupiters
Transiting exoplanets
Exoplanets discovered in 2005
Giant planets
Exoplanets detected by radial velocity
Exoplanets with proper names | HD 149026 b | [
"Astronomy"
] | 1,464 | [
"Hercules (constellation)",
"Constellations"
] |
2,156,814 | https://en.wikipedia.org/wiki/Radiological%20information%20system | A radiological information system (RIS) is the core system for the electronic management of medical imaging departments. The major functions of the RIS can include patient scheduling, resource management, examination performance tracking, reporting, results distribution, and procedure billing. RIS complements HIS (hospital information systems) and PACS (picture archiving and communication systems), and is critical to efficient workflow in radiology practices.
Basic features
Radiological information systems commonly support the following features:
Patient registration and scheduling
Patient list management
Modality interface using worklists
Workflow management within a radiology department
Request and document scanning
Result entry
Digital reporting (usually using Voice Recognition (VR))
Printables such as patient letters and printed reports
Result transmission via HL7 integration or e-mailing of clinical reports
Patient tracking
Interactive documents
Creation of technical files
Modality and material management
Consent management
Additional features
In addition, a RIS often supports the following:
Appointment scheduling
Voice Recognition (VR)
PACS workflow
Custom report creation
HL7 interfaces with a PACS; HL7 also enables communication between HIS and RIS, in addition to RIS and PACS (see the example message after this list)
Critical findings notification
Billing
Rule engines
Cross-site workflow
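As an illustration of the HL7 traffic referred to above, the following is a simplified sketch of the general shape of an HL7 v2.x order (ORM^O01) message passing between a HIS/RIS and downstream systems. Every identifier, name, and field value here is invented, and real-world interfaces vary by site, vendor, and HL7 version.

```
MSH|^~\&|RIS|HOSPITAL|PACS|HOSPITAL|20240101120000||ORM^O01|MSG00001|P|2.3
PID|1||123456^^^HOSPITAL^MR||DOE^JANE||19700101|F
ORC|NW|A1001
OBR|1|A1001||71020^CHEST X-RAY, 2 VIEWS^C4|||20240101115500
```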
See also
DICOM
Electronic health record (EHR)
Health Level 7
Medical imaging
Medical software
References
Computing in medical imaging
Electronic health records
Medical databases
Radiology
Information systems | Radiological information system | [
"Technology"
] | 275 | [
"Information systems",
"Information technology",
"Electronic health records"
] |
2,157,145 | https://en.wikipedia.org/wiki/Rollin%20film | A Rollin film, named after Bernard V. Rollin, is a very thin liquid film of helium in the helium II state. It exhibits a "creeping" effect in response to surfaces extending past the film's level (wave propagation). Helium II can escape from any non-closed container by creeping up the walls toward capillaries, from which it eventually evaporates.
Rollin films are involved in the fountain effect where superfluid helium leaks out of a container in a fountain-like manner. They have high thermal conductivity.
The ability of superfluid liquids to cross obstacles that lie at a higher level is often referred to as the Onnes effect, named after Heike Kamerlingh Onnes. The Onnes effect is enabled by the capillary forces dominating gravity and viscous forces.
Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. The film suffers a change in chemical potential when the thickness varies. These waves are known as third sound.
Thickness of the film
The thickness of the film can be calculated from an energy balance. Consider a small fluid volume element located at a height H above the free surface. The gravitational potential energy per unit volume of the fluid element is ρgH, where ρ is the total density and g is the gravitational acceleration. The quantum kinetic energy per particle is ħ²/2md², where d is the thickness of the film and m is the mass of the particle, so the net kinetic energy per unit volume is n₀(ρ/m)·ħ²/2md², where n₀ is the fraction of atoms in the Bose–Einstein condensate. Balancing the two energy densities provides the value of the thickness, written out below.
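Spelled out, the balance just described (a compact sketch equating the gravitational and quantum kinetic energy densities) reads:

```latex
\rho g H = n_0\,\frac{\rho}{m}\,\frac{\hbar^{2}}{2 m d^{2}}
\qquad\Longrightarrow\qquad
d = \sqrt{\frac{n_0\,\hbar^{2}}{2\,m^{2}\,g\,H}} \;\propto\; H^{-1/2}.
```

Note that this simple balance fixes only the scaling and order of magnitude; more detailed treatments, which set the van der Waals attraction to the substrate against gravity instead of the kinetic term, give the commonly quoted d ∝ H^(-1/3) thinning of the film with height.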
See also
Zero sound
Second sound
References
External links
Video of the property in action
Video: Liquid Helium, Superfluid: demonstrating Lambda point transition/viscosity paradox/two fluid model/fountain effect/Rollin film/second sound (Alfred Leitner, 1963, 38 min.)
Helium
Bose–Einstein condensates
Fluid mechanics
Superfluidity
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 430 | [
"Bose–Einstein condensates",
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Superfluidity",
"Civil engineering",
"Condensed matter physics",
"Exotic matter",
"Fluid mechanics",
"Matter",
"Fluid dynamics"
] |
2,157,178 | https://en.wikipedia.org/wiki/Delta%E2%80%93Mendota%20Canal | The Delta–Mendota Canal is an aqueduct in central California, United States. The canal was designed and completed in 1951 by the U.S. Bureau of Reclamation as part of the Central Valley Project. It carries freshwater to replace San Joaquin River water that is diverted into the Madera Canal and Friant-Kern Canal at Friant Dam.
The canal begins at the C.W. Bill Jones Pumping Plant (formerly the Tracy Pumping Plant). Water is lifted from the Sacramento-San Joaquin Delta at the Clifton Court Forebay. The canal runs southward along the western edge of the San Joaquin Valley, parallel to the California Aqueduct, and diverges to the east after passing the San Luis Reservoir, receiving more water and eventually emptying into the San Joaquin River near the city of Mendota. The canal travels through five California counties: Alameda, San Joaquin, Stanislaus, Merced, and Fresno.
History
After years of drought, the state of California recognized the importance of a large-scale water project and created the California State Water Plan; the plan was eventually taken over by the U.S. Bureau of Reclamation in 1931 due to the Great Depression. In 1937 the Central Valley Project was approved by Congress to deliver freshwater throughout the San Joaquin Valley. The Friant-Kern Canal east of Fresno was built to distribute water through the eastern parts of the Central Valley; however, it altered the natural flows of the San Joaquin River between the Friant Dam and the confluence of the Merced River. The Delta–Mendota Canal was approved for the exchange of water rights in the downstream portion of the San Joaquin River. With the use of the Tracy Pumping Plant, water from the Sacramento River would be diverted into the Delta–Mendota Canal. The United States Bureau of Reclamation and the San Luis Delta Mendota Water Authority are responsible for maintaining the water quality that is discharged at the south end of the canal. The Delta–Mendota Canal is also a key feature of the Delta Division Project, managed by the Bureau of Reclamation to minimize salt intrusion from the San Francisco Bay.
Due to the length of the canal, several contracts were required to complete the construction. The first contract was awarded by the Bureau of Reclamation on June 14, 1946, to Hubert H. Everist for stations 686+00-1365+00; during this work, workers went on strike against the subcontractor Fred J. Maurer and Son. The next series of contracts was awarded on October 24, 1946, to the Morrison Knudsen Company, Inc., and the M.H. Hasler Construction Company, who worked on stations 185+00-231+00 and 243+00-774+00. Other contracts, including those for the Columbia and Mowry pumping plants, involved the United Concrete Pipe Corporation, Western Contracting Corporation, and A. Teichert & Sons, Inc. Work generally ran in three daily shifts, twenty-one hours a day, six days a week.
The pumping plants in the Delta Division have serious impacts on the flow of the Delta and the San Joaquin River Basin. During relatively dry years with a high exportation rate, the flow of the San Joaquin River has been reversed. This flow reversal confuses migratory fish and draws in saline water.
Water movement
An important feature of the Central Valley Project, with regard to directing water southward through the Delta–Mendota Canal, is the C.W. Bill Jones Pumping Plant, formerly known as the Tracy Pumping Plant (TPP). The pumping station is 60 miles (96 km) southeast of the city of San Francisco, in the rural community of Byron, California, near the city of Tracy.
Water is extracted from the southern portion of the San Joaquin Delta and pumped to contractors in the San Joaquin Valley and San Benito and Santa Clara counties to meet urban and agricultural demands. With the use of two 15-foot-diameter pipes and six 22,500-horsepower motors, roughly 8,500 acre-feet (AF) of water from the Delta can be transported southward daily, after being lifted nearly 200 feet.
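A rough consistency check (a sketch; only the figures quoted above and standard unit conversions are used) shows that six 22,500 hp motors are consistent with lifting roughly 8,500 acre-feet per day by nearly 200 feet:

```python
AF_TO_M3, FT_TO_M, HP_TO_W = 1233.48, 0.3048, 745.7
RHO_G = 1000.0 * 9.81                     # weight density of water, N/m^3

q = 8500 * AF_TO_M3 / 86400               # flow: ~121 m^3/s
head = 200 * FT_TO_M                      # lift: ~61 m
p_hydraulic = RHO_G * q * head            # power delivered to the water
p_installed = 6 * 22500 * HP_TO_W         # installed motor power

print(f"{p_hydraulic / 1e6:.0f} MW hydraulic vs {p_installed / 1e6:.0f} MW installed")
# ~73 MW vs ~101 MW: plausible once pump and motor efficiencies are included.
```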
The plant is named in honor of C.W. "Bill" Jones, a pioneer in water management for the San Joaquin Valley who was president of the San Luis and Delta-Mendota Water Association for two decades.
Use
The water is pumped from the canal into O'Neill Forebay, and then is pumped into San Luis Reservoir by the Gianelli Pumping-Generating Plant. Occasionally, water from O'Neill Forebay is released back into the canal. The Delta–Mendota Canal ends at Mendota Pool, on the San Joaquin River near the city of Mendota, west of Fresno. The canal's capacity gradually decreases toward its terminus.
The Intertie Project
In order to improve water delivery in the State of California, an intertie (a connection between two or more existing utilities) was constructed between the Delta–Mendota Canal and the California Aqueduct. The Intertie was built in the rural agricultural region of the southwestern portion of the San Joaquin Valley in Alameda County, near the city of Tracy, California. A series of two 108-inch-diameter pipes, 500 feet in length, connects the state-managed California Aqueduct and the federally managed Delta–Mendota Canal. The pipes have the capacity to pump 467 cubic feet of water per second from the California Aqueduct to the Delta–Mendota Canal. This amount of water restores 35,000 acre-feet of water annually to the Central Valley Project. The pumping units are vertical pumps, which are more efficient than horizontal pumps, as the design removes the need for a gearbox and requires less maintenance and space. The four pumping units are from the Cascade Pump Company of Santa Fe Springs, California (Pump Model 48MF, with a 48-inch-diameter discharge), each capable of pumping 55,125 gallons per minute. Because of the structural deficiencies of the C.W. Bill Jones Pumping Plant, the Intertie improves the overall water delivery system and allows maintenance and emergencies to be addressed more easily. The project was completed in April 2012. The cost of construction was an estimated $29 million, which will be repaid by the contractors who purchase the water.
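As a quick unit-conversion check of the pump figures quoted above (a sketch; only the standard gpm/cfs conversion is assumed):

```python
GPM_PER_CFS = 448.831                     # US gallons per minute in 1 ft^3/s

pump_gpm = 55_125                         # rated flow of one 48MF unit
total_cfs = 4 * pump_gpm / GPM_PER_CFS
print(f"{total_cfs:.0f} cfs")             # ~491 cfs, comfortably above the
                                          # Intertie's 467 cfs transfer capacity
```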
Protecting fish species
Another key feature is the Tracy Fish Collection Facility. In order to protect threatened and endangered species, a series of sloughs, channels, and tanks helps capture fish and safely reintroduce them into the Delta waterways. Constructed in the 1950s, its objective is to protect aquatic fauna from being injured or killed by the pumps that move water to the south of the state. The facility is roughly 1 km east of the Pumping Plant and became operational in 1957. The fish species most commonly moved safely through the series of louvers are American shad, splittail, white catfish, Delta smelt, Chinook salmon, and striped bass. Historical trends show a decrease in fish-diversion efficiency, believed to be due to poor water quality, increased water pollution, changes in water operations, and changing demands of the various water users. For those reasons, physical improvements and changes in procedure are being made.
Land subsidence
Land subsidence is prevalent throughout the San Joaquin Valley but went unrecognized prior to the construction of the canal. After construction, discrepancies in elevation were at first believed to be caused by earthquakes. In the years after construction, subsidence along the southern 30–40 miles of the canal exceeded 6 feet. Nearly two-thirds of the canal has been affected by subsidence, including over 20 miles of the concrete-lined canal and all of the earth-lined portion. By 1966, a 35-mile stretch showed a drop of 1 foot, a 15-mile stretch a drop of 3 feet, a 5-mile portion a decrease of 5 feet, and a roughly 2-mile portion a drop of 6 feet. Land subsidence in the region is due to constant over-drafting of underground water. Another key factor is the San Joaquin Valley's geomorphological structure: young, unconsolidated continental sandy-silty-clayey soils resting on old unconsolidated marine beds.
Impacts of the land subsidence include a decrease in clearance between bridges and the water surface. In regions of greater subsidence, portions of bridges, pipes, and cattle guards have been inundated. Land subsidence has also led to a decrease in the total water conveyance capacity available within the canal. No major negative impacts to the structural integrity of the canal have been noted, though minor issues requiring additional maintenance have been reported.
Recirculation feasibility study
The U.S. Department of the Interior, Bureau of Reclamation evaluated the ability to improve water quality, energy consumption and production, productive fisheries, and flow to the San Joaquin River with the use of recirculation strategies using the Central Valley Project facilities. The Recirculation Feasibility Study Project was authorized by the CALFED (California Federal Bay-Delta Program) Bay-Delta Authorization Act of 2004 (118 Stat. 1681–1702; Public Law 108-361). History has shown that low-precipitation patterns can lead to pumping stations reversing the flow in the San Joaquin River, which negatively impacts fish migration patterns.
Recreational activities
Fishing access is provided at Canal Site 2A in Stanislaus County and Canal Site 5 in Fresno County, both providing parking and restrooms. Many use the gravel road adjacent to the canal for biking and walking. No water-contact activities aside from fishing are allowed.
United States Bureau of Reclamation
USGS annual flow data
References
Aqueducts in California
Central Valley Project
Irrigation in the United States
Agriculture in California
Geography of the San Joaquin Valley
Sacramento–San Joaquin River Delta
United States Bureau of Reclamation
1951 establishments in California
Transport infrastructure completed in 1951 | Delta–Mendota Canal | [
"Engineering"
] | 1,999 | [
"Irrigation projects",
"Central Valley Project"
] |
2,157,818 | https://en.wikipedia.org/wiki/Granite%20Railway | The Granite Railway was one of the first railroads in the United States, built to carry granite from Quincy, Massachusetts, to a dock on the Neponset River in Milton. From there boats carried the heavy stone to Charlestown for construction of the Bunker Hill Monument. The Granite Railway is popularly termed the first commercial railroad in the United States, as it was the first chartered railway to evolve into a common carrier without an intervening closure. The last active quarry closed in 1963; in 1985, the Metropolitan District Commission purchased the site, including Granite Railway Quarry, as the Quincy Quarries Reservation.
History
In 1825, after an exhaustive search throughout New England, Solomon Willard selected the Quincy site as the source of stone for the proposed Bunker Hill Monument. After many delays and much obstruction, the railway itself was granted a charter on March 4, 1826, with right of eminent domain to establish its right-of-way. Businessman and state legislator Thomas Handasyd Perkins organized the financing of the new Granite Railway Company, owning a majority of its shares, and he was designated its president. The railroad was designed and built by railway pioneer Gridley Bryant and began operations on October 7, 1826. Bryant used developments that had already been in use on the railroads in England, but he modified his design to allow for heavier, more concentrated loads and a frost line.
The railway ran from the quarries to the Neponset River. Its wagons had wheels six and a half feet in diameter and were pulled by horses, although steam locomotives had by then been in operation in England for 13 years. The wooden rails were plated with iron and were laid five feet apart, on stone crossties spaced at eight-foot intervals. By 1837, these wooden rails had been replaced by granite rails, once again capped with iron.
In 1830, a new section of the railway, called the "Incline", was added to haul granite from the Pine Ledge Quarry down to the railway level below. Wagons moved up and down the incline attached to an endless chain, like a conveyor belt. The incline continued in operation until the 1940s.
The railway introduced several important inventions, including railway switches or frogs, the turntable, and double-truck railroad cars. Gridley Bryant never patented his inventions, believing they should be for the benefit of all.
The novelty of the new railroad attracted tourists who journeyed out from Boston to witness the revolutionary technology in person. Notable visitors such as statesman Daniel Webster and English actress Fanny Kemble were early witnesses to the new railway. Miss Kemble described her 1833 visit in her journal.
On July 25, 1832, the Granite Railway was the site of one of the first fatal railway accidents in the United States, when a wagon carrying Thomas B. Achuas of Cuba and three other tourists derailed during a tour. The accident occurred while the wagon, empty of stone but carrying the four passengers, was ascending the Incline on its return trip and a cable broke. The occupants of the car were thrown over a cliff; Achuas was killed and the three other passengers were badly injured.
In 1871, the Old Colony and Newport Railway took over the original right-of-way of the Granite Railway, replacing its track with contemporary construction, and steam trains then took granite from the quarries directly to Boston without need of barges from the Neponset River. This portion of the Old Colony Railroad through Quincy and Milton was later absorbed into the New York, New Haven and Hartford Railroad. During the early 20th century, metal channels were laid over the old granite rails on the Incline, and motor trucks were hauled up and down on a cable. Passenger service on the Granite Branch (West Quincy Branch) ended on September 30, 1940; freight service was abandoned in stages from 1941 to 1973.
Most of the right of way of the railway was eventually incorporated into much of the Southeast Expressway in Milton and Quincy.
Gridley Bryant's recollection
In an 1859 letter to Charles B. Stuart, Bryant wrote:
The Quincy Railway was commenced under the following circumstances: The 'Bunker Hill Monument Association' had been formed, and funds enough collected to commence the foundation of the monument in the spring of eighteen hundred and twenty-five. I aided the architect in preparing the foundation, and on the seventeenth day of June following, the corner-stone was laid by General de La Fayette, and I had the honor to assist as master builder at the ceremony. I had, previous to this, purchased a stone quarry (the funds being furnished by Dr. John C. Warren) for the express purpose of procuring the granite for constructing this monument. This quarry was in Quincy, nearly four miles from water-carriage. This suggested to me the idea of a railroad (the Manchester and Liverpool Railroad being in contemplation at that time, but was not begun until the spring following); accordingly, in the fall of eighteen hundred and twenty-five, I consulted Thomas H. Perkins, William Sullivan, Amos Lawrence, Isaac T. Davis, and David Moody, all of Boston, in reference to it. These gentlemen thought the project visionary and chimerical, but, being anxious to aid the Bunker Hill Monument, consented that I might see what could be done. I awaited the meeting of our Legislature in the winter of eighteen hundred and twenty-five and six, and after every delay and obstruction that could be thrown in the way, I finally obtained a charter, although there was great opposition in the House. The questions were asked: 'What do we know about rail-roads? Who ever heard of such a thing? Is it right to take people's land for a project that no one knows anything about? We have corporations enough already.' Such and similar objections were made, and onerous restrictions were imposed, but it finally passed by a small majority only. Unfavorable as the charter was, it was admitted that it was obtained by my exertions; but it was owing to the munificence and public spirit of Colonel T. H. Perkins that we were indebted for the whole enterprise. None of the first named gentlemen ever paid any assessments, and the whole stock finally fell into the hands of Colonel Perkins.
The Quincy Railroad is four miles long, including the branches. I surveyed several routes from the quarry purchased (called the Bunker Hill Quarry), to the nearest tide-water; and finally the present location was decided upon. I commenced the work on the first day of April, eighteen hundred and twenty-six, and on the seventh day of October following the first train of cars passed over the whole length of the road.
The deepest cutting was fifteen feet, and the highest elevation above the surface of the ground was twelve feet. The several grades were as follows: The first, commencing at the wharf or landing, was twenty-six feet to the mile, the second thirteen feet, and the third sixty-six feet. This brought us to the foot of the table-lands that ran around the main quarry; here an elevation of eighty-four feet vertical was to be overcome. This was done by an inclined plane, three hundred and fifteen feet long, at an angle of about fifteen degrees. It had an endless chain, to which the cars were attached in ascending or descending; at the head of this inclined plane I constructed a swing platform to receive the loaded cars as they came from the quarry. This platform was balanced by weights, and had gearing attached to it in such a manner that it would always return (after having dumped) to a horizontal position, being firmly supported on the periphery of an eccentric cam. When the cars were out on the platform there was danger of their running entirely over, and I constructed a self-acting guard, that would rise above the surface of the rail upon the platform as it rose from its connection with the inclined plain, or receded out of the way when the loaded car passed on to the track; the weight of the car depressing the platform as it was lowered down.
I also constructed a turn-table at the foot of the quarry, which is still in use as originally constructed. The railroad was continued at different grades around the quarry, the highest part of which was ninety-three feet above the general level; on the top of this was erected an obelisk or monument forty-five feet high.
The road was constructed in the following manner: Stone sleepers were laid across the track eight feet apart. Upon these, wooden rails, six inches thick and twelve inches high, were placed. Upon the top of these rails, iron plates, three inches wide and one-fourth of an inch thick, were fastened with spikes; but at all the crossings of public roads and drift-ways stone rails were used instead of wood. On the top of these were placed iron plates four inches wide and half an inch thick, being firmly bolted to the stone. The inclined plane was built in the same permanent manner and had a double track.
The first cost of the road was fifty thousand dollars, and that of the first car six hundred dollars. This car had high wheels, six and one-half feet in diameter, the load being suspended on a platform by chains under the axles. This platform was let down at any convenient place and loaded; the car was then run over the load, and the chains attached to it by being inserted in eye-bolts in the platform, and raised a little above the track by machinery on the top of the car. The loads averaged about six tons each. The next car was made with low wheels, with a strong massive frame. The gauge of the road being five feet, the axles were placed that distance apart, this being the true principle on which to construct railroad trucks, and has been adopted generally in this country.
When stones of eight or ten tons weight were to be transported, I took two of these trucks and attached them together by a platform and king bolts. This made an eight-wheeled car; and when larger stones were to be carried, I increased the number of trucks, and this made a sixteen-wheeled car....
Preservation
The railway's Incline was added to the National Register of Historic Places on June 19, 1973, and a surviving portion of the railroad bed, just off the end of Bunker Hill Lane, was added on October 15, 1973. The Granite Railway was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1976.
A centennial historic plaque from 1926, an original switch frog, a piece of train track, and a section of superstructure from the Granite Railway are in the gardens on top of the Southeast Expressway (Interstate 93) as it passes under East Milton Square. The frog had been displayed at the World's Columbian Exposition at Chicago in 1893. The commemorative display is at the approximate site of the railroad's right-of-way as it went through Milton on its way to the Neponset River.
In Quincy, visitors can walk along several parkland trails that reveal vestiges of the original railway trestle and the Incline. These trails connect to the quarries, most of which are now filled for safety purposes with dirt from the massive Big Dig highway project in Boston. In years past, many persons were injured – and some killed – while diving into the flooded abandoned quarries from great heights.
The Department of Conservation and Recreation maintains the Quincy Quarries Reservation, which has facilities for rock climbing, and trails connecting the remains of the Granite Railway.
Gallery
See also
Iron rails
Mine railway
National Register of Historic Places listings in Quincy, Massachusetts
Oldest railroads in North America
Quincy Quarries Reservation
References
Friends of the Blue Hills
Journal of Fanny Kemble
A History of the Origin and Development of the Granite Railway at Quincy, Massachusetts privately printed for The Granite Railway Company, 1926.
Scholes, Robert E. (1968), The Granite Railway and its Associated Enterprises.
Historic American Buildings Survey – Granite Railway, Pine Hill Quarry to Neponset River, Quincy, Norfolk County, MA
Website for Quincy Historical Society and information on the Granite Railway
The Massachusetts state government Department of Conservation and Recreation for the Quincy Quarries Reservation
Granite Railway Drawings
Granite Railway Photographs
Dutton, E.P. Chart of Boston Harbor and Massachusetts Bay with Map of Adjacent Country. Published 1867. A good map of roads and rail lines around Quincy and Milton including the Granite Railroad.
Old USGS maps of Milton at UNH.
Granite Railroad Massachusetts Bay Railroad Enthusiasts
Granite Railway Timeline
Defunct Massachusetts railroads
Quincy, Massachusetts
Horse-drawn railways
Mining railways in the United States
Railway inclines in the United States
5 ft gauge railways in the United States
Companies affiliated with the Old Colony Railroad
Predecessors of the New York, New Haven and Hartford Railroad
Historic Civil Engineering Landmarks
National Register of Historic Places in Quincy, Massachusetts
Rail infrastructure on the National Register of Historic Places in Massachusetts
Railway lines opened in 1826
Railway accidents in 1832
Railway companies established in 1826
Railway companies disestablished in 1870
1826 establishments in Massachusetts
1870 disestablishments in Massachusetts
American companies established in 1826
Granite | Granite Railway | [
"Engineering"
] | 2,609 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
2,157,874 | https://en.wikipedia.org/wiki/Waterwolf | Waterwolf, or water-wolf, is a Dutch term referring to the tendency of lakes in low-lying peaty land, often previously worn down by men digging peat for fuel, to enlarge or expand by flooding, eroding the lake shores and potentially causing harm to infrastructure or loss of life. The term waterwolf is an example of zoomorphism, in which a non-living thing is given traits or characteristics of an animal (whereas a non-living thing given human traits or characteristics is an example of personification). The traits of a wolf most commonly ascribed to lakes include "something to be feared", "quick and relentless", and "an enemy of man".
The Netherlands, meaning "low countries", is a nation where 18% of the land is below sea level and half of the land lies under one meter above sea level, and it is prone to flooding. Before modern flood control, severe storms could cause flooding that wiped out whole villages in the area of the waterwolf. Much of the land in the Netherlands consists of peat bogs. Peat is organic matter consisting of roughly 10% carbon and 90% water, usually found in colder climates where plant growth and decay are slow. Peat is considered a form of carbon sequestration, and when dried it can be burned as a fuel. Historically, peat was the primary source of fuel in the Netherlands, and farmers would mine the peat to burn or sell, thus contributing to the erosion of the landscape.
The first great step toward reclaiming land taken by the waterwolf was the creation of windmills that could pump water out of the surrounding area, allowing for the creation of polders: areas inhabited below sea level with an artificially managed water table. Modern flood control in the Netherlands consists of maintaining polders and levees, and is highlighted by the world's largest dam project, the Delta Works. While modern flood control has conquered the waterwolf, new threats such as rising sea levels from climate change could once again bring it back.
References
For reference, see the Dutch Wikipedia: :nl:Waterwolf (animalisering). (animalisering means 'personification as an animal'.)
De Waterwolf getemd. Over geschiedenis en volksleven van Haarlemmermeer, edited by TJ. W.R. de Haan, with contributions by W. Jappe Alberts, Tj. W. R. de Haan, Historical Committee Haarlemmermeer, Polder Board of the Haarlemmermeer, and the Haarlemmermer town archives, Kruseman's Uitgeversmaatschappij, The Hague, 1970, OCLC ocm37651338
Lakes of the Netherlands
Lakes
Coastal engineering | Waterwolf | [
"Engineering",
"Environmental_science"
] | 589 | [
"Hydrology",
"Coastal engineering",
"Hydrology stubs",
"Civil engineering",
"Lakes"
] |
2,158,328 | https://en.wikipedia.org/wiki/Precision%20approach%20radar | Precision approach radar or PAR is a type of radar guidance system designed to provide lateral and vertical guidance to an aircraft pilot for landing, until the landing threshold is reached. Controllers monitoring the PAR displays observe each aircraft's position and issue instructions to the pilot that keep the aircraft on course and glidepath during final approach. After the aircraft reaches the decision height (DH) or decision altitude (DA), further guidance is advisory only. The overall concept is known as ground-controlled approach (GCA), and this name was also used to refer to the radar systems in the early days of its development.
PAR radars use a unique type of radar display with two separate "traces", separated vertically. The upper trace shows the elevation of a selected aircraft compared to a line displaying the ideal glideslope, while the lower shows the aircraft's horizontal position relative to the runway midline. GCA approaches normally start with the controller relaying instructions to bring the aircraft onto the glidepath, then issuing any corrections needed to bring it onto the centerline.
Precision approach radars are most frequently used at military air traffic control facilities. Many of these facilities use the AN/FPN-63, AN/MPN, or AN/TPN-22. These radars can provide precision guidance from a distance of 10 to 20 miles down to the runway threshold. PAR is mostly used by the Navy, as it does not broadcast directional signals which might be used by an enemy to locate an aircraft carrier.
Non-traditional PAR using SSR transponder reply
There are systems that provide PAR functionality without using primary radar. These non-traditional PAR systems use transponder multilateration, triangulation and/or trilateration.
One such system, the Transponder Landing System (TLS), precisely tracks aircraft using the mode 3/A transponder response received by antenna arrays located near the runway. These antennas are part of a measurement subsystem that precisely determines the aircraft's 3-dimensional position using time-of-arrival (TOA), differential time-of-arrival (DTOA), and angle-of-arrival (AOA) measurement techniques. The aircraft position is then displayed on a high-resolution color graphics terminal that also shows the approach centerline and the glide path. A GCA controller is then able to use this screen for reference to issue GCA instructions to the pilot.
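The position solution can be illustrated with a small least-squares sketch. The code below is not the TLS vendor's algorithm: it assumes idealized time-of-arrival (slant-range) measurements, invented antenna coordinates, and Gaussian noise, and solves the geometry by Gauss-Newton iteration.

```python
import numpy as np

def solve_position(antennas, ranges, x0, iters=20):
    """Estimate a 3-D position from measured slant ranges to known antennas."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        diffs = x - antennas                     # (N, 3)
        pred = np.linalg.norm(diffs, axis=1)     # predicted ranges
        J = diffs / pred[:, None]                # Jacobian of range w.r.t. x
        dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

rng = np.random.default_rng(0)
ants = np.array([[0.0, -50.0, 5.0], [0.0, 50.0, 5.0],
                 [300.0, 0.0, 30.0], [-300.0, 0.0, 10.0]])  # near the runway
truth = np.array([5000.0, 20.0, 260.0])          # aircraft on final approach
meas = np.linalg.norm(truth - ants, axis=1) + rng.normal(0.0, 1.0, 4)
print(solve_position(ants, meas, x0=np.array([4000.0, 0.0, 300.0])))
# Horizontal accuracy is good; vertical accuracy depends strongly on the
# antenna geometry, which is one reason real systems add angle-of-arrival data.
```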
The signal strength of the secondary surveillance radar subsystem of a non-traditional PAR is not attenuated by rain, since the frequency is within the L-band, which is used for long-range applications. Therefore, a non-traditional PAR does not experience noticeable rain fade, and in the case of the TLS it has an operational range of 60 nm.
This system is cooperative-dependent, meaning that in the case of transponder failure no aircraft detection will be provided.
Flight inspection of the PAR
A traditional PAR flight inspection procedure is performed without a navigation signal available to compare directly to a truth reference. A traditional PAR is flight inspected by comparing written notes between two observers, one taking notes at a truth reference system such as a theodolite and the other observer taking notes while observing the radar console; see ICAO Document 8071. The Transponder Landing System (TLS) non-traditional PAR can transmit an ILS signal that corresponds to the aircraft position relative to the precision approach. Therefore, the graphical depiction can be directly verified using Instrument Landing System (ILS) flight inspection techniques. This direct measurement removes some ambiguity from the PAR flight inspection process.
See also
Index of aviation articles
Acronyms and abbreviations in avionics
Instrument approach
TLS - Transponder Landing System
Ground-controlled approach
AN/MPN
Electronics Technician
References
External links
C. Wolff, Radartutorial Precision Approach Radar
Aeronautical navigation systems
Aircraft landing systems
Air traffic control
Ground radars
Types of final approach (aviation) | Precision approach radar | [
"Technology"
] | 771 | [
"Aircraft instruments",
"Aircraft landing systems"
] |
2,158,403 | https://en.wikipedia.org/wiki/Quantum%20weirdness | Quantum weirdness encompasses the aspects of quantum mechanics that challenge and defy human physical intuition.
Human physical intuition is based on macroscopic physical phenomena as are experienced in everyday life, which can mostly be adequately described by the Newtonian mechanics of classical physics. Early 20th-century models of atomic physics, such as the Rutherford–Bohr model, represented subatomic particles as little balls occupying well-defined spatial positions, but it was soon found that the physics needed at a subatomic scale, which became known as "quantum mechanics", implies many aspects for which the models of classical physics are inadequate. These aspects include:
quantum entanglement (see the numerical sketch after this list);
quantum nonlocality, referred to by Einstein as "spooky action at a distance"; see also EPR paradox;
quantum superposition, presented in dramatic form in the thought experiment known as Schrödinger's cat;
the uncertainty principle;
wave–particle duality;
the probabilistic nature of wave function collapse, decried by Einstein, saying, "God does not play dice".
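As referenced above, a small numerical illustration of superposition and entanglement, written in plain NumPy rather than any quantum-computing library; the probabilities follow directly from the Born rule.

```python
# The Bell state (|00> + |11>)/sqrt(2) is a superposition in which neither
# qubit has a definite value alone, yet the two are perfectly correlated.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

for amp, outcome in zip(bell, ["00", "01", "10", "11"]):
    print(outcome, abs(amp) ** 2)   # Born rule: probability = |amplitude|^2
# 00 and 11 each occur with probability 0.5; 01 and 10 never occur.
```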
See also
Bell's theorem
Interpretations of quantum mechanics
Quantum tunneling
Renninger negative-result experiment
Wheeler's delayed-choice experiment
References
Further reading
Book reviews
Articles
Quantum mechanics | Quantum weirdness | [
"Physics"
] | 252 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
2,158,485 | https://en.wikipedia.org/wiki/Smart%20bookmark | Smart bookmarks are an extended kind of Internet bookmark used in web browsers. By accepting an argument, they directly give access to functions of web sites, as opposed to filling web forms at the respective web site for accessing these functions. Smart bookmarks can be used for web searches, or access to data on web sites with uniformly structured web addresses (e.g., user profiles in a web forum).
History
Smart bookmarks first were introduced in OmniWeb on the NEXTSTEP platform in 1997/1998, where they were called shortcuts. The feature was subsequently taken up by Opera, Galeon and Internet Explorer for Mac, so they can now be used in many web browsers, most of which are Mozilla based, like Kazehakase and Mozilla Firefox.
In GNOME Web (Epiphany), smart bookmarks appear in a dropdown menu when entering text in the address bar. By selecting a smart bookmark, the respective web site is accessed using the text as argument. Smart bookmarks can also be added to the toolbar, together with their own textbox. The same applies to Galeon, which also allows the user to collapse and expand the textboxes within the toolbar. Smart bookmarks can also be shared, and there is a collection of them at the web site of the Galeon project.
Usage
There are two ways to employ smart bookmarks: with or without assigned keywords. Mozilla derivatives and Konqueror, for example, require assigning keywords, which can then be typed directly into the address bar followed by the search term. Epiphany does not allow assigning keywords; instead, the term is typed directly into the address bar, and all smart bookmarks then appear in a drop-down list below the address bar, from which one can be selected.
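A minimal sketch of the keyword-expansion behaviour described above; the keyword table and URLs are illustrative examples, not any browser's actual defaults (Mozilla-style templates use %s as the argument placeholder).

```python
from urllib.parse import quote_plus

BOOKMARKS = {
    "g": "https://www.google.com/search?q=%s",
    "wiki": "https://en.wikipedia.org/wiki/Special:Search?search=%s",
}

def expand(address_bar_input: str) -> str:
    keyword, _, term = address_bar_input.partition(" ")
    template = BOOKMARKS.get(keyword)
    if template is None:
        return address_bar_input        # not a smart bookmark; treat as a URL
    return template.replace("%s", quote_plus(term))

print(expand("wiki Rollin film"))
# https://en.wikipedia.org/wiki/Special:Search?search=Rollin+film
```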
See also
Bookmarklets, making it possible to use javascript with smart bookmarks
iMacros for Firefox, embeds web browser macros in bookmarks or links
References
External links
Smart Bookmarks at the Galeon site
Smart Bookmarks And Bookmarklets
Web browsers
Smart devices | Smart bookmark | [
"Technology"
] | 439 | [
"Home automation",
"Smart devices"
] |
2,158,574 | https://en.wikipedia.org/wiki/Rotating%20bolt | Rotating bolt is a method of locking the breech (or rear barrel) of a firearm closed for firing. Johann Nicolaus von Dreyse developed the first rotating bolt firearm, the "Dreyse needle gun", in 1836. The Dreyse locked using the bolt handle rather than lugs on the bolt head like the Mauser M 98 or M16. The first rotating bolt rifle with two lugs on the bolt head was the Lebel Model 1886 rifle. The concept has been implemented on most firearms chambered for high-powered cartridges since the 20th century.
Design
Ferdinand Ritter von Mannlicher, who had earlier developed a non-rotating bolt straight-pull rifle, developed the Steyr-Mannlicher M1895, a straight-pull rifle with a rotating bolt, which was issued to the Austro-Hungarian Army. Mannlicher then developed the M1893 auto rifle which had a screw delayed bolt and later the Mannlicher M1900 operated by a gas piston. This was an inspiration for later gas operated, semi-automatic and selective fire firearms (such as the M1, M14, M16, the L85A1/A2 and the AK-47/74) in which the bolt, upon contact with the breech, rotates and locks into place, the lugs on the bolt locking into the breech or barrel extension.
Upon closing, the bolt goes forward into the barrel extension or locking recesses in the receiver, and then rotates; at this point it is locked in place. The bolt remains locked until the action is cycled, either manually by the operator, or mechanically by delayed blowback, recoil operation, or gas operation, which rotates the bolt and unlocks it from the breech so that it can be withdrawn in order to extract and eject the spent casing, and the next round can be chambered. In gas operation, the gas port, which meters a portion of the combustion gases into the action in order to cycle the weapon, is typically located either midway down the barrel or near the muzzle of the weapon. In this way it functions as a delay, ensuring that the bolt remains locked until chamber pressure has subsided to a safe level.
Rotating bolts are found in delayed blowback, gas-operated, recoil-operated, bolt-action, lever-action, and pump-action weapon designs. In some forms of delayed blowback, the rotating bolt is used as the delay mechanism: the bolt head rotates as the firing pin strikes, locking the chamber until the gas pressure reaches a safe level for extraction. As the firing pin retracts, the bolt head turns anti-clockwise, unlocking the breech.
Examples
M1 Garand, gas-operated semi-automatic rifle
M25 Sniper Weapon System, semi-automatic rifle, gas operated, air cooled
Lebel Model 1886 rifle, bolt-action rifle
Steyr-Mannlicher M1895, a straight-pull rifle
M16, a gas-operated rifle
AK-47, a gas-operated rifle
Oerlikon KBA 25mm, gas-operated autocannon
Imbel IA2, a gas-operated assault rifle
IWI Tavor TAR-21, a gas-operated assault rifle
Ruger Mini-14, a gas-operated semi-automatic rifle
IMI Desert Eagle, one of the few handguns with such a mechanism
Remington Model 8, a recoil-operated rifle
Chauchat, a recoil-operated light machine gun
Remington Model 7600, a pump-action rifle
Winchester Model 1200, 1300, and Super-X (SXP) pump-action shotguns
SIG-Sauer MPX, a gas-operated submachine gun
CMMG MkG, the first AR15 derivative to use radial delayed blowback operation.
Browning BLR, a lever action rifle with a rotating bolt
L85A2, a variant of the L85A1
S&T Motiv K16, a gas-operated general purpose machine gun
See also
Roller locked
References
External links
Computer Animation demonstrating rotating bolt in the M16
Primer actuated M16 using rotating bolt
Firearm components
Firearm actions | Rotating bolt | [
"Technology"
] | 839 | [
"Firearm components",
"Components"
] |
2,158,646 | https://en.wikipedia.org/wiki/Double%20electron%20capture | Double electron capture is a decay mode of an atomic nucleus. For a nuclide (A, Z) with a number of nucleons A and atomic number Z, double electron capture is only possible if the mass of the nuclide (A, Z−2) is lower.
In this mode of decay, two of the orbital electrons are captured via the weak interaction by two protons in the nucleus, forming two neutrons; two neutrinos are emitted in the process. Since the protons are changed to neutrons, the number of neutrons increases by two, while the number of protons Z decreases by two, and the atomic mass number A remains unchanged. By reducing the atomic number by two, double electron capture transforms the nuclide into a different element.
Example:
(A, Z) + 2 e− → (A, Z−2) + 2 νe
Rarity
In most cases this decay mode is masked by other, more probable modes involving fewer particles, such as single electron capture. When all other modes are “forbidden” (strongly suppressed), double electron capture becomes the main mode of decay. There exist 34 naturally occurring nuclides that are believed to undergo double electron capture, but the process has been confirmed by observation in the decay of only three of them: 78Kr, 130Ba, and 124Xe.
One reason is that the probability of double electron capture is extremely small; the half-lives for this mode lie well above 10²⁰ years. A second reason is that the only detectable particles created in this process are X-rays and Auger electrons that are emitted by the excited atomic shell. In the range of their energies (~1–10 keV), the background is usually high. Thus, the experimental detection of double electron capture is more difficult than that of double beta decay.
Double electron capture can be accompanied by the excitation of the daughter nucleus. Its de-excitation, in turn, is accompanied by an emission of photons with energies of hundreds of keV.
Modes with positron emission
If the mass difference between the mother and daughter atoms is more than two masses of an electron (1.022 MeV), the energy released in the process is enough to allow another mode of decay, called electron capture with positron emission. It occurs along with double electron capture, their branching ratio depending on nuclear properties.
When the mass difference is more than four electron masses (2.044 MeV), the third mode, called double positron decay, is allowed. Only six naturally occurring nuclides (78Kr, 96Ru, 106Cd, 124Xe, 130Ba, and 136Ce) plus the non-primordial 148Gd and 154Dy are energetically allowed to decay via these three modes simultaneously.
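The thresholds above follow directly from the electron rest mass. Below is a minimal sketch, in Python, of which modes a given mother/daughter atomic mass difference permits; the sample mass differences are made-up illustrations, not measured values.

```python
# Which decay modes a given atomic mass difference permits, per the
# thresholds above. Atomic (not nuclear) masses are assumed; the sample
# mass differences are made-up illustrations, not measured values.
M_E = 0.510999  # electron rest-mass energy, MeV

def allowed_modes(delta_m_mev):
    """Modes open for M(A, Z) - M(A, Z-2) = delta_m_mev (in MeV)."""
    modes = []
    if delta_m_mev > 0:
        modes.append("double electron capture")
    if delta_m_mev > 2 * M_E:          # > 1.022 MeV
        modes.append("electron capture with positron emission")
    if delta_m_mev > 4 * M_E:          # > 2.044 MeV
        modes.append("double positron decay")
    return modes

print(allowed_modes(2.5))   # all three modes are energetically open
print(allowed_modes(1.5))   # double EC and EC + positron emission only
```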
Neutrinoless double electron capture
The above-described process with the capture of two electrons and the emission of two neutrinos (two-neutrino double electron capture) is allowed by the Standard Model of particle physics: no conservation laws (including lepton number conservation) are violated. However, if lepton number is not conserved, or equivalently the neutrino is its own antiparticle, another kind of process can occur: the so-called neutrinoless double electron capture. In this case, two electrons are captured by the nucleus, but no neutrinos are emitted. The energy released in this process is carried away by an internal bremsstrahlung gamma quantum.
Example:
(A, Z) + 2 e− → (A, Z−2) + γ
This mode of decay has never been observed experimentally, and would contradict the Standard Model if it were observed.
See also
Double beta decay
Neutrinoless double beta decay
Beta decay
Neutrino
Particle radiation
Radioactive isotope
References
External links
Nuclear physics
Radioactivity | Double electron capture | [
"Physics",
"Chemistry"
] | 826 | [
"Radioactivity",
"Nuclear physics"
] |
2,158,784 | https://en.wikipedia.org/wiki/Anthony%20Quinton | Anthony Meredith Quinton, Baron Quinton, FBA (25 March 192519 June 2010) was a British political and moral philosopher, metaphysician, and materialist philosopher of mind. He served as President of Trinity College, Oxford from 1978 to 1987; and as chairman of the board of the British Library from 1985 to 1990. He is also remembered as a presenter of the BBC Radio programme Round Britain Quiz.
Life
'Tony' Quinton (as he was called by all who knew him) was born at 5 Seaton Road, Gillingham, Kent. He was the only son of Surgeon Captain Richard Frith Quinton, Royal Navy (1889–1935) and his wife (Gwenllyan) Letitia (née Jones).
He was educated at Stowe School then went on a scholarship to Christ Church, Oxford in 1943. He read modern history for two terms before joining the RAF as a flying officer and navigator. He returned in 1946, obtaining a first-class honours degree in Philosophy, Politics and Economics in 1949. An Examination Fellow of All Souls from 1949, he became a Fellow and tutor of New College, Oxford, in 1955. He was President of Trinity College, Oxford, from 1978 to 1987.
Quinton was president of the Aristotelian Society from 1975 to 1976. He was chairman of the board of the British Library from 1985 to 1990. And he was President of the Royal Institute of Philosophy from 1991 until he stepped down in 2004.
On 7 February 1983, he was created a life peer as Baron Quinton, of Holywell in the City of Oxford and County of Oxfordshire. An admirer of Margaret Thatcher, he sat in the Lords as a Conservative.
To BBC Radio audiences, Quinton became well known as the presenter of the long-running Round Britain Quiz from 1974 to 1985.
Having been the guest for the introductory discussion that opened Bryan Magee's 1970–71 BBC Radio 3 series Conversations with Philosophers and the accompanying book Modern British Philosophy (1971), he went on to participate in Magee's BBC Television series Men of Ideas (1978) and The Great Philosophers (1987), and their companion books.
City of Benares tragedy
With the situation for civilians having worsened over the first year of World War II, Quinton's Canadian mother was persuaded by her own mother's forceful urgings to return home with her son until the end of the war. In September 1940, Letitia Quinton therefore booked passage for them both aboard the City of Benares, due shortly to sail from Liverpool to Montreal. Departure was, however, delayed by two days on account of the need to clear German mines that had been dropped on the Mersey, so when the ship did leave on 13 September it had to do so without naval escort.
At 10:03 pm on 17 September, the ship was torpedoed by the German submarine U-48 and began to sink. The Quintons were in the ship's lounge when the alarm bells rang. They went to their cabin to put on their life-jackets, collected their valuables, and returned to the lounge, which was their muster station. Eventually, Colonel James Baldwin-Webb, a British parliamentarian, decided they had waited long enough and took them to the lifeboats. The Quintons boarded Lifeboat 6, which, with roughly 65 people, was already overfull. As it was lowered, the falls and cables on one end snapped, sending the boat lurching forward and tossing the majority of the passengers into the sea. Quinton was trapped by a heavyset woman, Mrs Anne Fleetwood-Hesketh: he clung to her, hoping her weight would keep them both from falling, but both fell into the sea. Quinton resurfaced and his mother pulled him back into the lifeboat. The boat now contained 23 people, two of whom had been rescued from another lifeboat, so that only 21 of the original estimated 65 passengers survived.
Through the night more passengers, including four children, died. By morning, only eight people remained alive: five men, two women (including Mrs Quinton), and one child (Quinton himself). Other lifeboats had suffered equally. HMS Hurricane rescued 105 survivors from the water, including Quinton and his mother. One lifeboat was adrift at sea for eight days before being rescued by another ship, bringing the total number of survivors to 148. Of the 406 people on board, 258 died (including 81 children). Quinton was one of 19 children to survive.
Metaphysics
In the debate about philosophical universals, Quinton defended a variety of nominalism that identifies properties with a set of "natural" classes. David Malet Armstrong has been strongly critical of natural class nominalism. Armstrong believes that Quinton's "natural" classes avoid a fairly fundamental flaw of more primitive class nominalisms, namely the assumption that every class one can construct must have an associated property. The problem for the class nominalist, according to Armstrong, is that one must come up with some criterion for distinguishing the classes that back properties from those that merely contain a heterogeneous collection of objects.
Quinton's version of class nominalism asserts that determining which are the natural property classes is simply a basic fact that is not open to any further philosophical scrutiny. Armstrong argues that whatever it is which picks out the natural classes is not derived from the membership of that class, but from some fact about the particular itself.
While Quinton's theory states that no further analysis of the classes is possible, he also says that some classes may be more or less natural—that is, more or less unified than another class. Armstrong illustrates this intuitive difference Quinton is appealing to by pointing to the difference between the class of coloured objects and the class of crimson objects: the crimson object class is more unified in some intuitive sense (how is not specified) than the class of coloured objects.
In Quinton's 1957 paper, he sees his theory as a less extreme version of nominalism than that of Willard van Orman Quine, Nelson Goodman and Stuart Hampshire.
Metaphilosophy
His "shortest definition of philosophy"
His longer definition
Works
Books authored
The Nature of Things (London, 1973)
The Politics of Imperfection: The Religious and Secular Traditions of Conservative Thought in England from Hooker to Oakeshott (1978)
Utilitarian Ethics (1973)
Francis Bacon (Oxford, 1980)
Thoughts and Thinkers (1982)
Hume: The Great Philosophers (1997)
From Wodehouse To Wittgenstein (1998)
with Marcelle Quinton, Before We Met (2008)
Of Men and Manners: Essays Historical and Philosophical (2011) Kenny, Anthony (ed.)
Books edited
Select papers/book chapters
"Spaces and Times" (1962) Philosophy. 37 (140): 130–147, reprinted in: (eds.) Le Poidevin, R., & MacBeath, M. The Philosophy of Time (1993)
“The ‘A Priori’ and the Analytic.” Proceedings of the Aristotelian Society, vol. 64, 1963, pp. 31–54, reprinted in: (ed.) Strawson, P. F., Philosophical Logic (1967)
"Absolute Idealism" Proceedings of the British Academy 57, 1971 (1973)
"Persistence of intellectual nationalism," in: Perspectives on culture and society, vol. 1 (1988), 1–22
"Ayer's place in the history of philosophy" in: Griffiths, A. Phillips. (ed.) A.J. Ayer: Memorial Essays (1992)
"Morals and politics" (1993) In: (ed) Griffiths, A. Phillips. .Ethics. Royal Institute of Philosophy Supplement 35,
"Political Philosophy" (1994) in: Kenny, Anthony (ed.) The Oxford history of Western philosophy,
Popular writings
"Springtime for Hegel," New York Review, 21 June 2001, review of Hegel: A Biography by Terry Pinkard
Arms
References
External links
"The Two Philosophies of Wittgenstein" (1978) video of Quinton in discussion with Bryan Magee
"The Philosophy of Spinoza & Leibniz" (1987) video of Quinton in discussion with Bryan Magee
"Mind and Brain" (1973), video of Quinton discussing the mind-body problem with Charles Taylor for the Open University, (transcript for same)
1925 births
2010 deaths
20th-century British philosophers
20th-century British essayists
21st-century British philosophers
21st-century British essayists
Analytic philosophers
Aristotelian philosophers
British ethicists
British male essayists
British radio presenters
Conservative Party (UK) life peers
British epistemologists
Fellows of All Souls College, Oxford
Fellows of New College, Oxford
Fellows of the British Academy
Fellows of Trinity College, Oxford
Materialists
Metaphilosophers
British metaphysicians
Ontologists
People educated at Stowe School
People from Gillingham, Kent
British philosophers of culture
British philosophers of education
Philosophers of history
British philosophers of language
British philosophers of mind
British philosophers of religion
Philosophers of social science
Philosophy writers
British political philosophers
Presidents of the Aristotelian Society
Presidents of Trinity College, Oxford
Set theorists
Life peers created by Elizabeth II | Anthony Quinton | [
"Physics"
] | 1,881 | [] |
2,158,869 | https://en.wikipedia.org/wiki/Ian%20Jackson | Ian Jackson is a longtime free software author and Debian developer. Jackson wrote dpkg (replacing a more primitive Perl tool with the same name), SAUCE (Software Against Unsolicited Commercial Email), userv and debbugs. He used to maintain the Linux FAQ. He runs chiark.greenend.org.uk, a Linux system which is home to PuTTY among other things.
Jackson has a PhD in Computer Science from Cambridge University. As of October 2021, he works for the Tor Project. He has previously worked for Citrix, Canonical Ltd., and nCipher Corporation.
Jackson became Debian Project Leader in January 1998, before Wichert Akkerman took his place in 1999. Debian GNU/Linux 2.0 (hamm) was released during his term. During that time he was also a vice-president and then president of Software in the Public Interest in 1998 and 1999.
Jackson was a member of the Debian Technical Committee until November 2014 when he resigned as a result of controversies around the proposed use of systemd in Debian.
Additional works
authbind (1998), an Open-source system utility
References
External links
Ian's home page
Year of birth missing (living people)
Living people
Alumni of Churchill College, Cambridge
Computer programmers
Debian Project leaders
GNU people | Ian Jackson | [
"Technology"
] | 274 | [
"Computing stubs",
"Computer specialist stubs"
] |
2,158,886 | https://en.wikipedia.org/wiki/Single-event%20upset | A single-event upset (SEU), also known as a single-event error (SEE), is a change of state caused by one single ionizing particle (e.g. ions, electrons, photons) striking a sensitive node in a live micro-electronic device, such as in a microprocessor, semiconductor memory, or power transistors. The state change is a result of the free charge created by ionization in or close to an important node of a logic element (e.g. memory "bit"). The error in device output or operation caused as a result of the strike is called an SEU or a soft error.
The SEU itself is not considered permanently damaging to the transistors' or circuits' functionality, unlike the case of single-event latch-up (SEL), single-event gate rupture (SEGR), or single-event burnout (SEB). These are all examples of a general class of radiation effects in electronic devices called single-event effects (SEEs).
History
Single-event upsets were first described during above-ground nuclear testing, from 1954 to 1957, when many anomalies were observed in electronic monitoring equipment. Further problems were observed in space electronics during the 1960s, although it was difficult to separate soft failures from other forms of interference. In 1972, a Hughes satellite experienced an upset where the communication with the satellite was lost for 96 seconds and then recaptured. Scientists Dr. Edward C. Smith, Al Holman, and Dr. Dan Binder explained the anomaly as a single-event upset (SEU) and published the first SEU paper in the IEEE Transactions on Nuclear Science journal in 1975. In 1978, the first evidence of soft errors from alpha particles in packaging materials was described by Timothy C. May and M.H. Woods. In 1979, James Ziegler of IBM, along with W. Lanford of Yale, first described the mechanism whereby a sea-level cosmic ray could cause a single-event upset in electronics. 1979 also saw the world's first heavy ion "single-event effects" test at a particle accelerator facility, conducted at Lawrence Berkeley National Laboratory's 88-Inch Cyclotron and Bevatron.
Cause
Terrestrial SEUs arise due to cosmic particles colliding with atoms in the atmosphere, creating cascades or showers of neutrons and protons, which in turn may interact with electronic circuits. At deep sub-micron geometries, this affects semiconductor devices in the atmosphere.
In space, high-energy ionizing particles exist as part of the natural background, referred to as galactic cosmic rays (GCRs). Solar particle events and high-energy protons trapped in the Earth's magnetosphere (Van Allen radiation belts) exacerbate this problem. The high energies associated with the phenomenon in the space particle environment generally render increased spacecraft shielding useless in terms of eliminating SEUs and catastrophic single-event phenomena (e.g. destructive latch-up). Secondary atmospheric neutrons generated by cosmic rays can also have sufficiently high energy for producing SEUs in electronics on aircraft flights over the poles or at high altitudes. Trace amounts of radioactive elements in chip packages also lead to SEUs.
Testing for SEU sensitivity
The sensitivity of a device to SEU can be empirically estimated by placing a test device in a particle stream at a cyclotron or other particle accelerator facility. This particular test methodology is especially useful for predicting the SER (soft error rate) in known space environments but can be problematic for estimating terrestrial SER from neutrons. In this case, a large number of parts must be evaluated, possibly at different altitudes, to find the actual rate of upset.
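In outline, such a beam test reduces to counting upsets against delivered fluence. A minimal sketch in Python, with every number (upset count, fluence, memory size, and the assumed environment flux) chosen purely for illustration:

```python
# Counting upsets against delivered fluence gives an upset cross-section,
# which can then be folded with an environment flux to estimate the SER.
# Every number below is illustrative, including the assumed neutron flux.
upsets = 120                       # bit flips observed during the run
fluence = 1.0e11                   # particles/cm^2 delivered to the part
bits = 16 * 2**20                  # device size: 16 Mibit of SRAM

sigma_device = upsets / fluence            # cm^2 per device
sigma_bit = sigma_device / bits            # cm^2 per bit

flux = 13.0                                # assumed neutrons/cm^2/h
fit = sigma_device * flux * 1e9            # failures per 1e9 device-hours
print(f"sigma/bit = {sigma_bit:.2e} cm^2, estimated SER = {fit:.0f} FIT")
```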
Another way to empirically estimate SEU tolerance is to use a chamber shielded from radiation, with a known radiation source, such as Caesium-137.
When testing microprocessors for SEU, the software used to exercise the device must also be evaluated to determine which sections of the device were activated when SEUs occurred.
SEUs and circuit design
By definition, SEUs do not destroy the circuits involved, but they can cause errors. In space-based microprocessors, one of the most vulnerable portions is often the 1st and 2nd-level cache memories, because these must be very small and have very high speed, which means that they do not hold much charge. Often these caches are disabled if terrestrial designs are being configured to survive SEUs. Another point of vulnerability is the state machine in the microprocessor control, because of the risk of entering "dead" states (with no exits), however, these circuits must drive the entire processor, so they have relatively large transistors to provide relatively large electric currents and are not as vulnerable as one might think. Another vulnerable processor component is RAM, and more specifically static RAM (SRAM) used in cache memories. SRAM memories are usually designed with transistor sizes close to the minimum allowed by technology to allocate the maximum number of bits per unit area. Small transistor sizes and high bit density make memories one of the most susceptible components to SEUs. To ensure resilience to SEUs, often an error correcting memory is used, together with circuitry to periodically read (leading to correction) or scrub (if reading does not lead to correction) the memory of errors, before the errors overwhelm the error-correcting circuitry.
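As an illustration of the error-correcting-and-scrubbing idea, the sketch below implements a Hamming(7,4) code in Python. Real ECC memories typically use wider SECDED codes over whole memory words, so this is only a toy model of the principle.

```python
# Minimal sketch of single-error correction plus scrubbing: a
# Hamming(7,4) code protects 4 data bits with 3 parity bits, and a
# scrub pass rereads, corrects, and rewrites each stored word.

def encode(d):                       # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def correct(c):                      # c: list of 7 code bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3       # 0 means "no error detected"
    if pos:
        c[pos - 1] ^= 1              # flip the faulty bit back
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate an SEU flipping one bit
assert correct(word) == encode([1, 0, 1, 1])   # scrub restores the word
```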
In digital and analog circuits, a single event may cause one or more voltage pulses (i.e. glitches) to propagate through the circuit, in which case it is referred to as a single-event transient (SET). Since the propagating pulse is not technically a change of "state" as in a memory SEU, one should differentiate between SET and SEU. If a SET propagates through digital circuitry and results in an incorrect value being latched in a sequential logic unit, it is then considered an SEU.
Hardware problems can also occur for related reasons. Under certain circumstances (of circuit design, process design, and particle properties) a "parasitic" thyristor inherent to CMOS designs can be activated, effectively causing an apparent short-circuit from power to ground. This condition is referred to as latch-up, and in absence of constructional countermeasures, often destroys the device due to thermal runaway. Most manufacturers design to prevent latch-up and test their products to ensure that latch-up does not occur from atmospheric particle strikes. In order to prevent latch-up in space, epitaxial substrates, silicon on insulator (SOI) or silicon on sapphire (SOS) are often used to further reduce or eliminate the susceptibility.
Notable SEU
In the 2003 elections in the Brussels municipality of Schaerbeek (Belgium), an anomalous recorded number of votes triggered an investigation that concluded an SEU was responsible for giving a candidate named Maria Vindevoghel 4,096 extra votes. The possibility of a single-event upset is suggested by the difference in votes being exactly a power of two: 4,096 = 2¹².
On October 7, 2008, while Qantas Flight 72 was cruising at 37,000 feet, one of the plane's three air data inertial reference units failed, sending incorrect data to the plane's flight control systems. The resulting uncommanded pitch-downs caused severe injuries to crew and passengers. All potential causes were found to be "unlikely" or "very unlikely", except for an SEU, whose likelihood could not be estimated.
See also
Radiation hardening
Cosmic rays
Hamming distance
Parity bit
Gray code
Johnson counter
Soft error
References
Further reading
General SEU
T.C. May and M.H. Woods, IEEE Trans Electron Devices ED-26, 2 (1979)
www.seutest.com - Soft-error testing resources to support the JEDEC JESD89A test protocol.
J. F. Ziegler and W. A. Lanford, "Effect of Cosmic Rays on Computer Memories", Science, 206, 776 (1979)
Ziegler, et al. IBM Journal of Research and Development. Vol. 40, 1 (1996).
NASA Introduction to SEU from Goddard Space Flight Center Radiation Effects Facility
NASA/Smithsonian abstract search.
"Estimating Rates of Single-Event Upsets", J. Zoutendyk, NASA Tech Brief, Vol. 12, No. 10, item #152, Nov. 1988.
Boeing Radiation Effects Laboratory, focussed on Avionics
A Memory Soft Error Measurement on Production Systems, 2007 USENIX Annual Technical Conference, pp. 275–280
A Highly Reliable SEU Hardened Latch and High-Performance SEU Hardened Flip-Flop, International Symposium on Quality Electronic Design (ISQED), California, USA, March 19–21, 2012
SEU in programmable logic devices
"Single-Event Upsets: Should I Worry?" Xilinx Corp.
"Virtex-4: Soft Errors Reduced by Nearly Half!" A. Lesea, Xilinx TecXclusive, 6 May 2005.
Single Event Upsets Altera Corp.
Evaluation of LSI Soft Errors Induced by Terrestrial Cosmic rays and Alpha Particles - H. Kobayashi, K. Shiraishi, H. Tsuchiya, H. Usuki (all of Sony), and Y. Nagai, K. Takahisa (Osaka University), 2001.
SEU-Induced Persistent Error Propagation in FPGAs K. Morgan (Brigham Young University), Aug. 2006.
Microsemi neutron immune FPGA technology.
SEU in microprocessors
Elder, J.H.; Osborn, J.; Kolasinski, W. A.; "A method for characterizing a microprocessor's vulnerability to SEU", IEEE Transactions on Nuclear Science, Dec 1988 v 35 n 6.
SEU Characterization of Digital Circuits Using Weighted Test Programs
Analysis of Application Behavior During Fault Injection
Flight Linux Project
SEU related masters theses and doctoral dissertations
Digital electronics | Single-event upset | [
"Engineering"
] | 2,078 | [
"Electronic engineering",
"Digital electronics"
] |
2,159,159 | https://en.wikipedia.org/wiki/Shishira%20%28season%29 | Shishira () is the season of winter in the Hindu calendar. It comprises the months of Pausha and Magha or mid-January to mid-March in the Gregorian calendar.
References
Sources
Selby, Martha Ann (translator). The Circle of Six Seasons, Penguin, New Delhi, 2003
Raghavan, V. Ṛtu in Sanskrit literature, Shri Lal Bahadur Shastri Kendriya Sanskrit Vidyapeetha, Delhi, 1972.
Hindu calendar
Seasons | Shishira (season) | [
"Physics"
] | 97 | [
"Physical phenomena",
"Earth phenomena",
"Seasons"
] |
2,159,451 | https://en.wikipedia.org/wiki/Loop%20antenna | A loop antenna is a radio antenna consisting of a loop or coil of wire, tubing, or other electrical conductor, that for transmitting is usually fed by a balanced power source or for receiving feeds a balanced load. Within this physical description there are two (possibly three) distinct types:
Large loop antennas Large loops are also called self-resonant loop antennas or full-wave loops; they have a perimeter close to one or more whole wavelengths at the operating frequency, which makes them self-resonant at that frequency. They are the most efficient, by an order of magnitude, of all antenna designs of similar size, for both transmission and reception. Large loop antennas have a two-lobe radiation pattern at their first, full-wave resonance, peaking in both directions perpendicular to the plane of the loop.
Halo antennas Halos are often explained as shortened dipoles that have been bent into a circular loop, with the ends not quite touching. Some writers prefer to exclude them from loop antennas, since they can be well understood as bent dipoles; others make halos an intermediate category between large and small loops, or the extreme upper size limit for small transmitting loops. In shape and performance halo antennas are very similar to small loops, distinguished only by being self-resonant and having much higher radiation resistance. (See discussion below)
Small loop antennas Small loops are also called magnetic loops or tuned loops; they have a perimeter smaller than half the operating wavelength (typically no more than to wave). They are used mainly as receiving antennas, but are sometimes used for transmission despite their reduced efficiency; loops with a circumference smaller than about become so inefficient they are rarely used for transmission. A common example of small loop is the ferrite (loopstick) antenna used in most AM broadcast radios. The radiation pattern of small loop antennas is maximum at directions within the plane of the loop, so perpendicular to the maxima of large loops.
Small loops divide into two sub-types, depending on the purpose they are optimized for:
Receiving Small receiving loops are compact antennas optimized to capture radio waves much longer than their size, where full-sized antennas would be either infeasible or impossible. If their perimeters are kept shorter than they have exceptionally precise "null" directions (where the signal vanishes) which gives a tiny antenna for exceedingly accurate direction-finding, better than most moderately large antennas, and as good as many huge antennas.
Transmitting Small transmitting loops are optimized for compact antennas that are the "least-worst" signal radiators. Small antennas of any kind are inefficient, but when a full-sized antenna is not practical, making a small loop with a perimeter as close to as possible (although usually no more than ) makes the small loop better for transmitting, although it sacrifices or outright loses the precise "null" direction of smaller small loops.
Large, self-resonant loop antennas
For the description of large loops in this section, the radio's operating frequency is assumed to be tuned to the loop antenna's first resonance. At that frequency, one whole free-space wavelength is slightly smaller than the perimeter of the loop, which is the smallest that a "large" loop can be.
Self-resonant loop antennas for so-called "short" wave frequencies are relatively large, with a perimeter just greater than the intended wavelength of operation, hence for circular loops diameters between roughly at the largest, around 1.8 MHz. At higher frequencies their sizes become smaller, falling to a diameter of about at 30 MHz.
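Since the perimeter of a circular full-wave loop is roughly one wavelength, its diameter is roughly the wavelength divided by pi. A back-of-envelope sketch in Python (resonance actually occurs at a perimeter slightly larger than one wavelength, so real loops are marginally bigger):

```python
# Back-of-envelope size of a circular self-resonant ("full-wave") loop:
# perimeter ~ one wavelength, so diameter ~ wavelength / pi.
import math

C = 299_792_458.0                       # speed of light, m/s

def loop_diameter_m(freq_hz: float) -> float:
    wavelength = C / freq_hz
    return wavelength / math.pi         # from perimeter = pi * diameter

for f_mhz in (1.8, 30.0):
    d = loop_diameter_m(f_mhz * 1e6)
    print(f"{f_mhz:g} MHz -> diameter ~ {d:.1f} m")
# 1.8 MHz -> ~53 m across; 30 MHz -> ~3.2 m across
```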
Large loop antennas can be thought of as folded dipoles whose parallel wires have been split apart and opened out into some oval or polygonal shape. The loop's shape can be a circle, triangle, square, rectangle, or in fact any closed polygon, but for resonance, the loop perimeter must be slightly larger than a wavelength.
Shape
Loop antennas may be in the shape of a circle, a square, or any other closed geometric shape that allows the total perimeter to be slightly more than one wavelength. The most popular shape in amateur radio is the quad antenna or "quad", a self-resonant loop in a square shape so that it can be constructed of wire strung across a supporting -shaped frame. There may be one or more additional loops stacked parallel to the first as "parasitic" director or reflector element(s), creating an antenna array which is unidirectional with gain that increases with each additional parasitic element. This design can also be turned 45 degrees to a diamond shape supported on a -shaped frame. Triangular loops (-shaped) have also been used for vertical loops, since they can be supported from a single mast. A rectangle twice as high as its width obtains slightly increased gain and also matches 50 Ω directly if used as a single element.
Unlike a dipole antenna, the polarization of a resonant loop antenna is not obvious from the orientation of the loop itself, but depends on the placement of its feedpoint. If a vertically oriented loop is fed at the bottom, then its radiation will be horizontally polarized; feeding it from the side will make it vertically polarized.
Radiation pattern
The radiation pattern of a first-resonance loop antenna peaks at right angles to the plane of the loop. As the frequency progresses to the second and third resonances, the perpendicular radiation fades and strong lobes near the plane of the loop arise.
At the lower shortwave frequencies, a full loop is physically quite large, and its only practical installation is "lying flat", with the plane of the loop horizontal to the ground and the antenna wire supported at the same relatively low height by masts along its perimeter. This results in horizontally-polarized radiation, which peaks toward the vertical near the lowest harmonic; that pattern is good for regional NVIS communication, but unfortunately is not generally useful for making continental-scale contacts.
Above about 10 MHz, the loop is approximately 10 meters in diameter, and it becomes more practical for the loop to be mounted "standing up" – that is, with the plane of the loop vertical – in order to direct its main beam towards the horizon. If the frequency is high enough, then the loop might be small enough to attach to an antenna rotator, in order to rotate that direction as desired. Compared to a dipole or folded dipole, a vertical large loop wastes less power radiating toward the sky or ground, resulting in about 1.5 dB higher gain in the two favored horizontal directions.
Additional gain (and a uni-directional radiation pattern) is usually obtained with an array of such elements either as a driven endfire array or in a Yagi configuration – with only one of the loops being driven by the feedline and all the remaining loops being "parasitic" reflectors and directors. The latter is widely used in amateur radio in the "quad" configuration (see photo).
Low-frequency one-wavelength loops "lying down" are sometimes used for local NVIS communication. This is sometimes called a lazy quad. Its radiation pattern consists of a single lobe straight up (radiation toward the ground which is not absorbed is reflected back upward). The radiation pattern and especially the input impedance is affected by its proximity to the ground.
If fed with higher frequencies, then the antenna input impedance will generally include a reactive part and a different resistive component, requiring use of an antenna tuner. As the frequency increases above the first harmonic, the radiation pattern breaks up into multiple lobes which peak at lower angles relative to the horizon, which is an improvement for long-distance communication for frequencies well above the loop's second harmonic.
Halo antennas
A halo antenna is often described as a half-wave dipole antenna that has been bent into a circle. Although it could be categorized as a bent dipole, it has an omnidirectional radiation pattern very nearly the same as a small loop's. The halo is more efficient than a small loop, since it is a larger antenna (one half-wavelength in circumference) with a disproportionately larger radiation resistance. Because of that much greater radiation resistance, a halo presents a good impedance match to 50-Ohm coaxial cable, and its construction is less demanding than a small loop's, since the maker is not compelled to take such extreme care to avoid losses from mediocre conductors and contact resistance.
At one half-wavelength in circumference, the halo antenna is near or at the extreme upper limit of the size range for "small" loops, but unlike most oversized small loops, it can be analyzed with simple techniques by treating it as a bent dipole.
Practical use
On the VHF bands and above, the physical diameter of a halo is small enough to be effectively used as a mobile antenna.
The horizontal radiation pattern of a horizontal halo is nearly omnidirectional – to within 3 dB or less – and that can be evened out by making the loop slightly smaller and adding more capacitance between the element tips. Not only will that even out the gain, it will reduce upward radiation, which for VHF is typically wasted by radiating into space.
Halos pick up less nearby electrical spark interference than monopoles and dipoles, such as ignition noise from vehicles.
Electrical analysis
Although it has a superficially different appearance, the halo antenna can conveniently be analyzed as a dipole (which also has a half-wave radiating part with a high voltage and zero current at its ends) that has been bent into a circle. Simply using dipole results greatly simplifies the calculations, and for most properties the results are the same as for a halo. Halo performance can also be modeled with techniques used for similar, moderate-sized "small" transmitting loops, but for brevity, that complicated analysis is often skipped in introductory articles on loop antennas (unfortunately, this typical omission leaves otherwise well-read persons unaware of the properties of "large" small loops).
The halo's gap
Some writers mistakenly consider the gap in the halo antenna's loop to distinguish it from a small loop antenna, since there is no DC connection between the two ends. But that distinction is lost at RF; the close-bent high-voltage ends are capacitively coupled, and the RF current crosses the gap as displacement current. The gap in the halo is electrically equivalent to the tuning capacitor on a small loop, although the incidental capacitance involved is not nearly as large.
Small loops
Small loops are "small" in comparison to their operating wavelength. Contrary to the pattern of large loop antennas, the reception and radiation strength of small loops peaks inside the plane of the loop, rather than broadside (perpendicular) to it.
As with all antennas that are physically much smaller than the operating wavelength, small loop antennas have a small radiation resistance which is dwarfed by ohmic losses, resulting in a poor antenna efficiency. They are thus mainly used as receiving antennas at lower frequencies (wavelengths of tens to hundreds of meters). Like a short dipole antenna, the radiation resistance is small. The radiation resistance is proportional to the square of the area:

Rr ≈ 320 π⁴ (N A / λ²)² ≈ 31,171 (N A / λ²)² ohms

where A is the area enclosed by the loop, λ is the wavelength, and N is the number of turns of the conductor around the loop.
Because of the higher exponent than linear antennas (loop area squared ≈ perimeter to the 4th power, vs. dipole and monopole length squared = 2nd power), the fall in radiation resistance with reduced size is more extreme. The ability to increase the radiation resistance by using multiple turns is analogous to making a dipole out of two or more parallel lines for each dipole arm ("folded dipole").
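Evaluating the formula above for a concrete case shows how small this resistance gets; the single-turn, 1 m loop at 7 MHz below is an illustrative example, not a reference design:

```python
# Evaluates the small-loop radiation-resistance formula given above:
# Rr = 320 * pi^4 * (N * A / lambda^2)^2.
import math

def radiation_resistance(n_turns, area_m2, freq_hz):
    wavelength = 299_792_458.0 / freq_hz
    return 320 * math.pi**4 * (n_turns * area_m2 / wavelength**2) ** 2

area = math.pi * 0.5**2                  # 1 m diameter circular loop
r = radiation_resistance(1, area, 7e6)
print(f"Rr = {r * 1000:.1f} milliohms")  # ~5.7 mOhm, dwarfed by ohmic loss
```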
Small loops have advantages as receiving antennas at frequencies below 10 MHz. Although a small loop's losses can be high, the same loss applies to both the signal and the noise, so the receiving signal-to-noise ratio of a small loop may not suffer at these lower frequencies, where received noise is dominated by atmospheric noise and static rather than receiver-internal noise. The ability to more manageably rotate a smaller antenna may help to maximize the signal and reject interference. Several construction techniques are used to ensure that small receiving loops' null directions are "sharp", including adding broken shielding of the loop arms and keeping the perimeter around wavelength (or wave at most). Small transmitting loops' perimeters are instead made as large as feasibly possible, up to wave (or even if possible), in order to make the best of their generally poor efficiency, although doing so sacrifices sharp nulls.
The small loop antenna is also known as a magnetic loop since the response of an electrically small receiving loop is proportional to the rate of change of magnetic flux through the loop. At higher frequencies (or shorter wavelengths), when the antenna is no longer electrically small, the current distribution through the loop may no longer be uniform and the relationship between its response and the incident fields becomes more complicated. In the case of transmission, the fields produced by an electrically small loop are the same as an "infinitesimal magnetic dipole" whose axis is perpendicular to the plane of the loop.
Because of their meager radiation resistance, the properties of small loops tend to be more intensively optimized than those of full-size antennas, and the properties optimized for transmitting are not quite the same as those for receiving. With full-size antennas, the reciprocity between transmitting and receiving usually makes the distinctions unimportant, but since a few RF properties important for receiving differ from those for transmitting – particularly below about 10~20 MHz – small loops intended for receiving differ slightly from small transmitting loops. They are discussed separately in the following two subsections, although many of the comments apply to both.
Small receiving loops
If the perimeter of a loop antenna is much smaller than the intended operating wavelengths – say to of a wavelength – then the antenna is called a small receiving loop, since loop antennas that small are only practical for receiving. Several performance factors, including received power, scale in proportion to the loop's area. For a given loop area, the length of the conductor (and thus its net loss resistance) is minimized if the perimeter is circular, making a circle the optimal shape for small loops. Small receiving loops are typically used below 14 MHz, where human-made and natural atmospheric noise dominate. Thus the signal-to-noise ratio of the received signal will not be adversely affected by low efficiency as long as the loop is not excessively small.
A typical diameter of receiving loops with "air centers" is between . To increase the magnetic field in the loop and thus its efficiency, while greatly reducing size, the coil of wire is often wound around a ferrite rod magnetic core; this is called a ferrite loop antenna. Such ferrite loop antennas are used in almost all AM broadcast receivers with the notable exception of car radios, since the antenna for the AM band needs to be outside the obstructing metal car chassis.
Small loop antennas are also popular for radio direction finding, in part due to their exceedingly sharp, clear "null" along the loop axis: When the loop axis is aimed directly at the transmitter, the target signal abruptly vanishes.
The radiation resistance of a small loop is generally much smaller than the loss resistance due to the conductors composing the loop, leading to a poor antenna efficiency. Consequently, most of the power delivered to a small loop antenna will be converted to heat by the loss resistance, rather than doing useful work pushing out radio waves or gathering them in.
Wasted power is undesirable for a transmitting antenna, however for a receiving antenna, the inefficiency is not important at frequencies below about 15 MHz. At these lower frequencies, due to atmospheric noise (static) and man-made noise (interference), even a weak signal from an inefficient antenna is far stronger than the internal thermal or Johnson noise generated in the radio receiver's own circuitry, so the weak signal from a loop antenna can be amplified without degrading the signal-to-noise ratio, since both are magnified by the same amplification factor.
For example, at 1 MHz, the man-made noise might be 55 dB above the thermal noise floor. If a small loop antenna's loss is 50 dB (as if the antenna included a 50 dB attenuator), then the electrical inefficiency of that antenna will have little influence on the receiving system's signal-to-noise ratio. In contrast, at quieter frequencies at about 20 MHz and above, an antenna with a 50 dB loss could degrade the received signal-to-noise ratio by up to 50 dB, resulting in terrible performance.
However, as frequency rises, there is no need to suffer bad performance: at the higher, quieter frequencies, the wavelengths become short enough that a halo antenna becomes feasible (at 20 MHz it is a little less than in diameter, and it shrinks proportionally as the frequency increases). So as the noise background quiets with rising frequency, it becomes more convenient to replace a small receiving loop with a larger, but still relatively compact, halo. The halo is mostly a direct substitute for a small receiving loop, but with superior signal reception.
Radiation pattern and polarization
Surprisingly, the radiation and receiving pattern of a small loop is perpendicular to that of a large self resonant loop (whose perimeter is close to one wavelength). Since the loop is much smaller than a wavelength, the current at any one moment is nearly constant round the circumference. By symmetry it can be seen that the voltages induced in the loop windings on opposite sides of the loop will cancel each other when a perpendicular signal arrives on the loop axis. Therefore, there is a null in that direction. Instead, the radiation pattern peaks in directions lying in the plane of the loop, because signals received from sources in that plane do not quite cancel owing to the phase difference between the arrival of the wave at the near and far sides of the loop. Increasing that phase difference by increasing the size of the loop causes a disproportionately large increase in the radiation resistance and the resulting antenna efficiency.
Another way of looking at a small loop as an antenna is to consider it simply as an inductive coil coupling to the magnetic field in the direction perpendicular to plane of the coil, according to Ampère's law. Then consider a propagating radio wave also perpendicular to that plane. Since the magnetic (and electric) fields of an electromagnetic wave in free space are transverse (no component in the direction of propagation), it can be seen that this magnetic field and that of a small loop antenna will be at right angles, and thus not coupled. For the same reason, an electromagnetic wave propagating within the plane of the loop, with its magnetic field perpendicular to that plane, is coupled to the magnetic field of the coil. Since the transverse magnetic and electric fields of a propagating electromagnetic wave are at right angles, the electric field of such a wave is also in the plane of the loop, and thus the antenna's polarization (which is always specified as being the orientation of the electric, not the magnetic field) is said to be in that plane.
Thus, mounting the loop in a horizontal plane will produce an omnidirectional antenna which is horizontally polarized; mounting the loop vertically yields a vertically polarized, weakly directional antenna, but with exceptionally sharp nulls along the axis of the loop. Size criteria that favor loops with a perimeter of or smaller ensure the sharpness of the loop's receiving null. Small loops intended for transmitting (see below) are designed as large as feasible to improve the marginal radiation resistance, sacrificing the sharp null by using perimeters as large as to
Receiver input tuning
Since a small-loop antenna is essentially a coil, its electrical impedance is inductive, with an inductive reactance much greater than its radiation resistance. In order to couple to a transmitter or receiver, the inductive reactance is normally canceled with a parallel capacitance. Since a good loop antenna will have a high Q factor (narrow bandwidth), the capacitor must be variable and is adjusted to match the receiver's tuning.
Small-loop receiving antennas are also almost always resonated using a parallel-plate capacitor, which makes their reception narrow-band, sensitive only to a very specific frequency. This allows the antenna, in conjunction with a (variable) tuning capacitor, to act as a tuned input stage to the receiver's front-end, in lieu of a preselector.
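The parallel-resonance relation f = 1/(2π√(LC)) fixes the capacitance needed at each frequency. A small sketch in Python; the 240 µH loopstick inductance is an assumed typical value, used only to make the numbers concrete:

```python
# Resonating a small loop with a parallel capacitor: f = 1/(2*pi*sqrt(L*C)),
# so the required capacitance is C = 1 / ((2*pi*f)^2 * L).
import math

L = 240e-6                                   # henries (assumed typical)

def tuning_capacitance_pf(freq_hz):
    return 1.0 / ((2 * math.pi * freq_hz) ** 2 * L) * 1e12

for f in (530e3, 1710e3):                    # AM broadcast band edges
    print(f"{f / 1e3:.0f} kHz -> C = {tuning_capacitance_pf(f):.0f} pF")
# 530 kHz -> ~376 pF ; 1710 kHz -> ~36 pF, so a classic 365 pF variable
# capacitor spans nearly the whole band.
```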
Direction finding with small loops
As long as the loop perimeter is kept below about wave, the directional response of small loop antennas includes a sharp null in the direction normal to the plane of the loop, so small loops are favored as compact radio direction finding antennas for long wavelengths.
The procedure is to rotate the loop antenna to find the direction where the signal vanishes – the "null" direction. Since the null occurs at two opposite directions along the axis of the loop, other means must be employed to determine which side of the antenna the nulled signal is on. One method is to rely on a second loop antenna located at a second location, or to move the receiver to that other location, thus relying on triangulation.
Instead of triangulation, a second dipole or vertical antenna can be electrically combined with a loop or a loopstick antenna. Called a sense antenna, connecting and matching the second antenna changes the combined radiation pattern to a cardioid, with a null in only one (less precise) direction. The general direction of the transmitter can be determined using the sense antenna, and then disconnecting the sense antenna returns the sharp nulls in the loop antenna pattern, allowing a precise bearing to be determined.
AM broadcast receiving antennas
Small-loop antennas are lossy and inefficient for transmitting, but they can be practical receiving antennas in the mediumwave (520–1710 kHz) broadcast band and below, where wavelength-sized antennas are infeasibly large, and the antenna inefficiency is irrelevant, due to large amounts of atmospheric noise.
AM broadcast receivers (and other low frequency radios for the consumer market) typically use small-loop antennas, even when a telescoping antenna may be attached for FM reception. A variable capacitor connected across the loop forms a resonant circuit that also tunes the receiver's input stage as that capacitor tracks the main tuning. A multiband receiver may contain tap points along the loop winding in order to tune the loop antenna at widely different frequencies.
In AM radios built prior to the invention of ferrite in the mid-20th century, the antenna might consist of dozens of turns of wire mounted on the back wall of the radio – a planar helical antenna – or a separate, rotatable, furniture-sized rack looped with wire – a frame antenna.
Ferrite
Ferrite loop antennas are made by winding fine wire around a ferrite rod. They are almost universally used in AM broadcast receivers. Other names for this type of antenna are loopstick, ferrite rod antenna or aerial, ferroceptor, or ferrod antenna. Often, at mediumwave and lower shortwave frequencies, Litz wire is used for the winding to reduce skin-effect losses. Elaborate "basket weave" patterns are used at all frequencies to reduce inter-winding capacitance in the coil, ensuring that the loop self-resonance is well above the operating frequency, so that it acts as an electrical inductor that can be resonated with a tuning capacitor, with a consequent improvement of the loop Q factor.
Inclusion of a magnetically permeable core increases the radiation resistance of a small loop, mitigating the inefficiency due to ohmic losses. Like all small antennas, such antennas are tiny compared to their effective area. A typical AM broadcast radio loop antenna wound on ferrite may have a cross sectional area of only at a frequency at which an ideal (lossless) antenna would have an effective area some hundred million times larger. Even accounting for the resistive losses in a ferrite rod antenna, its effective receiving area may exceed the loop's physical area by a factor of 100.
Small transmitting loops
Small transmitting loops are "small" in comparison to a full wavelength, but considerably larger than a "small" receive-only loop. They are typically used on frequencies between 14–30 MHz. Unlike receiving loops, small transmitting loops' sizes must be scaled-up for longer wavelengths, in order to keep radiation resistance from falling to unusably low levels; their larger sizes blur or erase the otherwise especially sharp nulls that small receiving loops provide.
Size, shape, efficiency, and pattern
Transmitting loops usually consist of a single turn of large-diameter conductor; they are typically round or octagonal to maximize the enclosed area for a given perimeter, hence maximizing radiation resistance. The smaller of these loops are much less efficient than the extraordinary performance of full-sized, self-resonant loops, or the moderate efficiency of monopoles, dipoles, and halos, but where space for a full wave loop or a half-wave dipole is not available, small loops can provide adequate communications with low-but-tolerable efficiency.
A small transmitting loop antenna with a perimeter of 10% or less of the wavelength will have a relatively constant current distribution along the conductor, and the main lobe will be in the plane of the loop, so it will show the strong null familiar in the radiation pattern of small receiving loops. Loops of any size between 10% and 30% of a wavelength in perimeter, up to almost exactly 50% in circumference, can be built and tuned with series capacitors to resonance, but their non-uniform current will reduce or eliminate the small loops' pattern null. A capacitor is required for a circumference less than a half wave, and an inductor is required for loops more than a half wave and less than a full wave.
Loops in the small transmitting loops' size range may have neither the uniform current of very small loops, nor the sinusoidal current of large loops, and thus cannot be analyzed using the assumptions useful for small receiving loops or full-wave loop antennas. Performance is most conveniently determined using NEC analysis. Antennas within this size range include the halo (see above) and the G0CWT (Edginton) loop. For brevity, introductory articles on small loop antennas sometimes confine discussion to loops smaller in circumference than about 1/10 of a wavelength, since for larger loops the simplifying assumption of uniform current around the entire loop becomes untenably inaccurate. Since the larger halo also has a simple analysis, moderate-sized small-loop antennas and their complicated analysis are often omitted, leaving many otherwise well-informed antenna builders in the dark regarding the performance obtainable with moderately small loops.
Use for land-mobile radio
Vertically aligned small loops are used in military land-mobile radio, at frequencies of 3–7 MHz, because of their ability to direct energy upwards, unlike a conventional whip antenna. This enables near vertical incidence skywave (NVIS) communication up to in mountainous regions. For NVIS, a typical radiation efficiency of around 1% is acceptable, because signal paths can be established with 1 W of radiated power or less – feasible when a 100 W transmitter is used.
In military use, the antenna may be built using a one- or two-conductor in diameter. The loop itself is typically in diameter.
Power limits and RF safety
One practical issue with small loops as transmitting antennas is that a small transmitting loop has not only a very large current going through it, but also a very high voltage across the capacitor – typically thousands of volts – even when fed with only a few watts of transmitter power. The smaller the loop (in wavelengths), the higher the voltage. This requires a rather expensive and physically large resonating capacitor with a large breakdown voltage, in addition to having minimal dielectric loss (normally requiring an air-gap capacitor or even a vacuum variable capacitor).
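The scale of the voltage follows from the series-resonant circuit model: the drive power sets a circulating current through the loop's total series resistance, and that current flows through the capacitor's reactance. A sketch in Python, with all component values assumed for illustration only:

```python
# Series-resonant model behind the high-voltage warning above. At
# resonance, power P drives current I = sqrt(P/R) through the total
# series resistance R, and I flows through the capacitor reactance X.
import math

P = 10.0     # transmitter power, watts
R = 0.2      # total series resistance (radiation + loss), ohms (assumed)
X = 250.0    # capacitor reactance at resonance, ohms (assumed)

i_loop = math.sqrt(P / R)            # ~7.1 A circulating current
v_rms = i_loop * X                   # ~1.8 kV across the capacitor
v_peak = v_rms * math.sqrt(2)        # ~2.5 kV peak
print(f"I = {i_loop:.1f} A, Vc = {v_rms:.0f} V rms ({v_peak:.0f} V peak)")
```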
Making the loop larger in diameter will lower the gap voltage, as well as improving efficiency; however, all other efficiency improvements will tend to increase the gap voltage: efficiency may be increased by making the loop from a thicker conductor; other measures to lower the conductor's loss resistance include welding or brazing the connections, rather than soldering. But because reducing loss resistance increases the antenna's Q, the consequence of better efficiency is even greater voltage across the capacitor at the loop's gap. For a given frequency, a smaller small loop is more dangerous than a larger small loop, and perversely, a comparatively efficient small transmitting loop is more dangerous than an inefficient one.
The RF burn and shock problems raised by capacitive loading of small loops is more serious than for inductive loading of short whips or dipole antennas. The high antenna voltage is generally troublesome only on the upper end of a whip's loading coil, since it is spread across the extended coil length, whereas high voltages on a loop's capacitor plates are (ideally) at maximum over all of the plate surfaces. Further, the high-voltage tips of monopoles and dipoles typically are mounted high up and far out of reach, which limits opportunities for radio-frequency burns. In contrast, small-loop / "magnetic" antennas better tolerate being mounted close to the ground, so all parts of loop antennas, including the high-voltage parts, are more often within easy reach.
In summary: the high voltages from high Q pose a greater threat in small loops than in most other small antennas, and demand greater caution, even at very low transmit power.
Feeder loops
In addition to other common impedance matching techniques such as a gamma match, small receiving and transmitting loops are sometimes impedance-matched by connecting the feedline to an even smaller feeder loop inside the area surrounded by the main loop. Although it may still be connected through the ground system, this leaves the main loop with no other DC connection to the transmitter. The feeder loop and the main loop are effectively the primary and secondary coils of a transformer, with power in the near-field inductively coupled from the feed loop into the main loop, which itself is connected to the resonating capacitor and radiates most of the signal power.
If both the main and the feeder loops are single-turn, then the impedance transformation ratio of the nested loops is almost exactly the ratio of the areas of the two loops separately, or the square of the ratio of their diameters (assuming they have the same shape). Typical feeder loops are to the size of the antenna's main loop, which gives transform ratios of 64:1 to 25:1, respectively. Adjusting the proximity and angle of the feeder loop to the main loop, and distorting the feeder's shape, both make small-to-moderate changes to the transform ratio, and allows for fine adjustment of the feedpoint impedance. For main loops with multiple turns, more often used for mediumwave frequencies, the feeder loop can be one or two turns on the same frame as the main loop's turns, in which case the impedance transform ratio is very nearly the square of the ratio of the number of turns on each loop.
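For single-turn loops, the stated area-ratio rule makes the impedance arithmetic simple. A small sketch in Python, with illustrative diameters:

```python
# Impedance step of a single-turn feeder loop inside a single-turn main
# loop, using the area-ratio rule described above. Diameters are
# illustrative; same-shape loops are assumed, so the area ratio equals
# the diameter ratio squared.
main_d = 1.0                                  # main-loop diameter, m
for feed_d in (main_d / 8, main_d / 5):       # typical 1/8 to 1/5 sizes
    ratio = (main_d / feed_d) ** 2            # impedance transform ratio
    print(f"feeder {feed_d:.3f} m -> {ratio:.0f}:1 transform; "
          f"a 50 ohm feed matches a ~{50 / ratio:.1f} ohm main loop")
```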
Antenna-like non-antenna loops
Some so-called "antennas" look very much like genuine loop antennas, but are designed to couple with the inductive near-field, over distances of , rather than to transmit or receive long-distance electromagnetic waves in the radiative far-field. Because of this difference, the near-field "antennas" are not radio antennas at all (when correctly functioning for the purpose they are designed for).
Likewise, coupling coils used for inductive charging systems, regardless of whether they are used at low or high radio frequencies, are excluded from this article, since they are not (or ideally, should not be) radio antennas.
RFID coils and induction heating
Inductive heating systems, induction cooking stovetops, and RFID tags and readers all interact by near-field magnetic induction rather than far-field transmitted waves. So strictly speaking, they are not radio antennas.
Although they are not radio antennas, these systems do operate at radio frequencies, and they involve the use of small magnetic coils, which are called "antennas" in the trade. However, they are more usefully thought of as analogs to the windings in loosely coupled transformers. Although the magnetic coils in these inductive systems sometimes seem indistinguishable from the small loop antennas discussed above, such devices can only operate over short distances, and are specifically designed to avoid transmitting or receiving radio waves. Because inductive heating systems and RFID readers only use near-field alternating magnetic fields, their performance criteria are dissimilar to the far-field radio antennas discussed in this article.
Footnotes
References
External links
— Online calculator that solves the "Basic equations for a small loop" using formulas from The ARRL Antenna Book, 15th ed.
— Extensive Paper by Leigh Turner VK5KLT (SK) on HF Magnetic Loop Antennas.
— Interactive Magnetic Loop Calculator by Jose Vaca VK3CPU.
Radio frequency antenna types
Wireless tuning and filtering | Loop antenna | [
"Engineering"
] | 6,872 | [
"Radio electronics",
"Wireless tuning and filtering"
] |
2,159,645 | https://en.wikipedia.org/wiki/Tresigallo | Tresigallo (Ferrarese: ) is an Italian municipality in the province of Ferrara, which is in the region of Emilia-Romagna. It has about 4,700 inhabitants.
Despite its medieval origins, to which only a 16th-century palace (Palazzo Pio) of the House of Este bears witness today, it was transformed by the Fascist Minister of Agriculture Edmondo Rossoni, who was born in Tresigallo in 1884. From his ministry in Rome, he developed and supervised the new village map, completely rebuilding it as a utopian city from 1927 to 1934. Two axes were drawn across the town in order to link the main aspects of everyday life: on the horizontal axis there was the Church (spirituality) and the Balilla House, a youth center, renamed Casa della G.I.L (Gioventù Italiana del Littorio); on the vertical axis there was the civic centre (everyday life) and the cemetery (memory).
References
Cities and towns in Emilia-Romagna
Architecture related to utopias | Tresigallo | [
"Engineering"
] | 212 | [
"Architecture related to utopias",
"Architecture"
] |
2,159,669 | https://en.wikipedia.org/wiki/Stuffing%20box | A stuffing box or gland package is an assembly which is used to house a gland seal. It is used to prevent leakage of fluid, such as water or steam, between sliding or turning parts of machine elements.
Components
A stuffing box on a sailing boat will have a stern tube that is slightly bigger than the propeller shaft, along with packing nut threads or a gland nut. The packing sits inside the gland nut and wraps around the shaft; tightening the nut onto the stern tube compresses the packing, creating a seal against the shaft. Proper plunger alignment is critical for correct flow and a long wear life. Stuffing box components are made of stainless steel, brass, or other application-specific materials. Compression packing is rigorously tested to ensure effective sealing in valves, pumps, agitators, and other rotary equipment.
Gland
A gland is a general type of stuffing box, used to seal a rotating or reciprocating shaft against a fluid. The most common example is in the head of a tap (faucet) where the gland is usually packed with string which has been soaked in tallow or similar grease. The gland nut allows the packing material to be compressed to form a watertight seal and prevent water leaking up the shaft when the tap is turned on. The gland at the rotating shaft of a centrifugal pump may be packed in a similar way and graphite grease used to accommodate continuous operation. The linear seal around the piston rod of a double acting steam piston is also known as a gland, particularly in marine applications. Likewise the shaft of a handpump or wind pump is sealed with a gland where the shaft exits the borehole.
Other types of sealed connections without moving parts are also sometimes called glands; for example, a cable gland or fitting that connects a flexible electrical conduit to an enclosure, machine or bulkhead facilitates assembly and prevents liquid or gas ingress.
Applications
Boats
On a boat having an inboard motor that turns a shaft attached to an external propeller, the shaft passes through a stuffing box, also called a "packing box" or "stern gland" in this application. The stuffing box prevents water from entering the boat's hull. In many small fiberglass boats, for example, the stuffing box is mounted inboard near the point the shaft exits the hull. The "box" is a cylindrical assembly, typically of bronze, comprising a sleeve threaded on one end to accept adjusting and locking nuts. A special purpose heavy-duty rubber hose attaches the stuffing box to a stern tube, also called a shaft log, that projects inward from the hull. Marine-duty hose clamps secure the hose to the stern tube and the aft portion of the stuffing box sleeve. A sound stuffing box installation is critical to safety because failure can admit a catastrophic volume of water into the boat.
In a common type of stuffing box, rings of braided fiber, known as shaft packing or gland packing, form a seal between the shaft and the stuffing box. A traditional variety of shaft packing comprises a square cross-section rope made of flax or hemp impregnated with wax and lubricants. A turn of the adjusting nut compresses the shaft packing. Ideally, the compression is just enough to make the seal both watertight when the shaft is stationary and drip slightly when the shaft is turning. The drip rate must be at once sufficient to lubricate and cool the shaft and packing, but not so much as could sink an unattended boat.
There are improved shaft packing materials that aim to be drip-less whether the shaft is turning or stationary, as well as packing-less sealing systems that employ engineered materials such as carbon composites and PTFE (e.g. Teflon).
Steam engines
In a steam engine, where the piston rod reciprocates through the cylinder cover, a stuffing box provided in the cylinder cover prevents the leakage of steam from the cylinder.
See also
Axlebox
Bilge
Compression seal fitting
Journal bearing
Journal box
Labyrinth seal
References
Further reading
Calder, Nigel (2005). Boatowner's mechanical and electrical manual. Camden, Maine: International Marine/McGraw-Hill.
External links
Servicing Your Stuffing Box, by Don Casey
Step-by-Step Instructions For Servicing Your Stuffing Box, by Capt. Vincent Daniello, August 16, 2012
Pistons
Marine propulsion
Seals (mechanical)
Steam engines
Piston engines | Stuffing box | [
"Physics",
"Technology",
"Engineering"
] | 905 | [
"Seals (mechanical)",
"Engines",
"Piston engines",
"Materials",
"Marine engineering",
"Marine propulsion",
"Matter"
] |
2,159,690 | https://en.wikipedia.org/wiki/Butterflies%20in%20the%20stomach | Butterflies in the stomach is the physical sensation in humans of a "fluttery" feeling in the stomach, caused by a reduction of blood flow to the organ. This is as a result of the release of adrenaline and norepinephrine in the fight-or-flight response, which causes increased heart rate and blood pressure, consequently sending more blood to the muscles.
Butterflies in the stomach are usually linked in culture and language to the sentiment of love and romantic passion to a desired other. One may feel butterflies in the stomach prior to meeting or confronting a love interest due to high levels of emotion and anxiety, as adrenaline and serotonin may be released when a love interest is concerned.
It can also be a symptom of social anxiety disorder. The sensation is usually experienced just before attempting to take part in something important or stressful.
See also
Anxiety
References
Emotion
Metaphors referring to body parts | Butterflies in the stomach | [
"Biology"
] | 184 | [
"Emotion",
"Behavior",
"Human behavior"
] |
2,159,714 | https://en.wikipedia.org/wiki/Saturation%20%28magnetic%29 | Seen in some magnetic materials, saturation is the state reached when an increase in applied external magnetic field H cannot increase the magnetization of the material further, so the total magnetic flux density B more or less levels off. (Though, magnetization continues to increase very slowly with the field due to paramagnetism.) Saturation is a characteristic of ferromagnetic and ferrimagnetic materials, such as iron, nickel, cobalt and their alloys. Different ferromagnetic materials have different saturation levels.
Description
Saturation is most clearly seen in the magnetization curve (also called BH curve or hysteresis curve) of a substance, as a bending to the right of the curve (see graph at right). As the H field increases, the B field approaches a maximum value asymptotically, the saturation level for the substance. Technically, above saturation, the B field continues increasing, but at the paramagnetic rate, which is several orders of magnitude smaller than the ferromagnetic rate seen below saturation.
The relation between the magnetizing field H and the magnetic field B can also be expressed as the magnetic permeability $\mu = B/H$ or the relative permeability $\mu_r = \mu/\mu_0$, where $\mu_0$ is the vacuum permeability. The permeability of ferromagnetic materials is not constant, but depends on H. In saturable materials the relative permeability increases with H to a maximum, then as it approaches saturation inverts and decreases toward one.
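As a rough illustration, the following sketch uses a toy tanh-shaped curve (an assumption for illustration, not a model of any particular material): B levels off at a chosen saturation value while the relative permeability falls toward one at large H. The initial rise of permeability seen in real materials comes from domain physics and is not captured by this simple form.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def b_field(h, b_sat=2.0, chi=5000.0):
    """Toy magnetization curve: a saturating tanh term plus the small
    linear (paramagnetic-like) term that keeps B creeping up above
    saturation."""
    return b_sat * math.tanh(chi * MU0 * h / b_sat) + MU0 * h

for h in (10, 100, 1_000, 10_000, 1_000_000):
    b = b_field(h)
    print(f"H = {h:9,} A/m   B = {b:5.3f} T   mu_r = {b / (MU0 * h):7.1f}")
```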
Different materials have different saturation levels. For example, high-permeability iron alloys used in transformers reach magnetic saturation at 1.6–2.2 teslas (T), whereas ferrites saturate at 0.2–0.5 T. Some amorphous alloys saturate at 1.2–1.3 T. Mu-metal saturates at around 0.8 T.
Explanation
Ferromagnetic materials (like iron) are composed of microscopic regions called magnetic domains, that act like tiny permanent magnets that can change their direction of magnetization. Before an external magnetic field is applied to the material, the domains' magnetic fields are oriented in random directions, effectively cancelling each other out, so the net external magnetic field is negligibly small. When an external magnetizing field H is applied to the material, it penetrates the material and aligns the domains, causing their tiny magnetic fields to turn and align parallel to the external field, adding together to create a large magnetic field B which extends out from the material. This is called magnetization. The stronger the external magnetic field H, the more the domains align, yielding a higher magnetic flux density B. Eventually, at a certain external magnetic field, the domain walls have moved as far as they can, and the domains are as aligned as the crystal structure allows them to be, so there is negligible change in the domain structure on increasing the external magnetic field above this. The magnetization remains nearly constant, and is said to have saturated. The domain structure at saturation depends on the temperature.
Effects and uses
Saturation puts a practical limit on the maximum magnetic fields achievable in ferromagnetic-core electromagnets and transformers of around 2 T, which puts a limit on the minimum size of their cores. This is one reason why high power motors, generators, and utility transformers are physically large; to conduct the large amounts of magnetic flux necessary for high power production, they must have large magnetic cores. In applications in which the weight of magnetic cores must be kept to a minimum, such as transformers and electric motors in aircraft, a high saturation alloy such as Permendur is often used.
In electronic circuits, transformers and inductors with ferromagnetic cores operate nonlinearly when the current through them is large enough to drive their core materials into saturation. This means that their inductance and other properties vary with changes in drive current. In linear circuits this is usually considered an unwanted departure from ideal behavior. When AC signals are applied, this nonlinearity can cause the generation of harmonics and intermodulation distortion. To prevent this, the level of signals applied to iron core inductors must be limited so they don't saturate. To lower its effects, an air gap is created in some kinds of transformer cores. The saturation current, the current through the winding required to saturate the magnetic core, is given by manufacturers in the specifications for many inductors and transformers.
On the other hand, saturation is exploited in some electronic devices. Saturation is employed to limit current in saturable-core transformers, used in arc welding, and ferroresonant transformers which serve as voltage regulators. When the primary current exceeds a certain value, the core is pushed into its saturation region, limiting further increases in secondary current. In a more sophisticated application, saturable core inductors and magnetic amplifiers use a DC current through a separate winding to control an inductor's impedance. Varying the current in the control winding moves the operating point up and down on the saturation curve, controlling the alternating current through the inductor. These are used in variable fluorescent light ballasts, and power control systems.
Saturation is also exploited in fluxgate magnetometers and fluxgate compasses.
In some audio applications, saturable transformers or inductors are deliberately used to introduce distortion into an audio signal. Magnetic saturation generates odd-order harmonics, typically introducing third and fifth harmonic distortion to the lower and mid frequency range.
See also
Magnetic reluctance
Permendur/Hiperco
References
Magnetic hysteresis
Audio effects | Saturation (magnetic) | [
"Physics",
"Materials_science"
] | 1,170 | [
"Physical phenomena",
"Hysteresis",
"Magnetic hysteresis"
] |
2,159,805 | https://en.wikipedia.org/wiki/Transformer%20oil | Transformer oil or insulating oil is an oil that is stable at high temperatures and has excellent electrical insulating properties. It is used in oil-filled wet transformers, some types of high-voltage capacitors, fluorescent lamp ballasts, and some types of high-voltage switches and circuit breakers. Its functions are to insulate, suppress corona discharge and arcing, and to serve as a coolant.
Transformer oil is most often based on mineral oil, but alternative formulations with different engineering or environmental properties are growing in popularity.
Function and properties
Transformer oil's primary functions are to insulate and cool a transformer. It must therefore have high dielectric strength, thermal conductivity, and chemical stability, and must keep these properties when held at high temperatures for extended periods. Typical specifications include a high flash point, a low pour point, and a dielectric breakdown voltage greater than 28 kV (RMS). To improve cooling of large power transformers, the oil-filled tank may have external radiators through which the oil circulates by natural convection. Power transformers with capacities of thousands of kilovolt-amperes may also have cooling fans, oil pumps, and even oil-to-water heat exchangers.
Power transformers undergo prolonged drying processes, using electrical self-heating, the application of a vacuum, or both to ensure that the transformer is completely free of water vapor before the insulating oil is introduced. This helps prevent corona formation and subsequent electrical breakdown under load.
Oil filled transformers with a conservator oil reservoir may have a gas detector relay like a Buchholz relay. These safety devices detect the buildup of gas inside the transformer due to corona discharge, overheating, or an internal electric arc. On a slow accumulation of gas, or rapid pressure rise, these devices can trip a protective circuit breaker to remove power from the transformer. Transformers without conservators are usually equipped with sudden pressure relays, which perform a similar function as the Buchholz relay.
Mineral oil alternatives
Mineral oil is generally effective as a transformer oil, but it has some disadvantages, one of which is its relatively low flashpoint versus some alternatives. If a transformer leaks mineral oil, it can potentially start a fire. Fire codes often require that transformers inside buildings use a less flammable liquid, or the use of dry-type transformers with no liquid at all. Mineral oil is also an environmental contaminant, and its insulating properties are rapidly degraded by even small amounts of water. Transformers are well equipped to keep water outside the oil for this reason.
Pentaerythritol tetra fatty acid synthetic and natural esters have emerged as an increasingly common mineral oil alternative, especially in high-fire-risk applications such as indoor installations, because of their high fire points. They are biodegradable, but are more expensive than mineral oil. Natural esters have lower oxidation stability, lasting approximately 48 hours in the 120 °C oxygen-saturated test compared to about 500 hours for mineral oils, and are therefore used in sealed (closed) transformers.
Hermetic seals are important for larger transformers due to thermal expansion and contraction. Mid-size and large power transformers will typically have a conservator and employ a rubber bag with the use of natural ester to reduce oxygen ingress and prevent the natural ester from experiencing a faster oxidation than utilities are accustomed to with mineral oils. Silicone or fluorocarbon-based oils, which are even less flammable, are also used, but they are more expensive than esters, and are not biodegradable.
There are over 3 million transformers in service with vegetable-based formulations, using soy or rapeseed based formulations in up to 500 kV transformers so far. However, coconut oil-based formulations are unsuitable for use in cold climates or for voltages over 230 kV. Researchers are also investigating nanofluids for transformer use; these would be used as additives to improve the stability and thermal and electrical properties of the oil.
Polychlorinated biphenyls (PCBs)
Polychlorinated biphenyls (PCB) are synthetic dielectrics first made over a century ago and found to have desirable properties that led to their widespread use. Polychlorinated biphenyls were formerly used as transformer oil, since they have high dielectric strength and are not flammable. Unfortunately, they are also toxic, bioaccumulative, not at all biodegradable, and difficult to dispose of safely. When burned, they form even more toxic products, such as chlorinated dioxins and chlorinated dibenzofurans.
Beginning in the 1970s, production and new uses of PCBs were banned in many countries, due to concerns about the accumulation of PCBs and toxicity of their byproducts. For instance, in the USA, production of PCBs was banned in 1979 under the Toxic Substances Control Act. In many countries significant programs are in place to reclaim and safely destroy PCB contaminated equipment. One method that can be used to reclaim PCB contaminated transformer oil is the application of a PCB removal system, also called a PCB dechlorination system.
PCB removal systems use an alkali dispersion to strip the chlorine atoms from the other molecules in a chemical reaction. This forms PCB-free transformer oil and a PCB-free sludge. The two can then be separated via a centrifuge. The sludge can be disposed as regular non-PCB industrial waste. The treated transformer oil is fully restored, meeting the required standards, without any detectable PCB content. It can, thus, be used as the insulating fluid in transformers again.
PCBs and mineral oil are miscible in all proportions, and sometimes the same equipment (drums, pumps, hoses, and so on) was used for either type of liquid, so PCB contamination of transformer oil continues to be a concern. For instance, under present regulations, concentrations of PCBs exceeding 5 parts per million can cause an oil to be classified as hazardous waste in California.
Testing and oil quality
Transformer oils are subject to electrical and mechanical stresses while a transformer is in operation. In addition there is contamination caused by chemical interactions with windings and other solid insulation, catalyzed by high operating temperature. The original chemical properties of transformer oil change gradually, rendering it ineffective for its intended purpose after many years. Oil in large transformers and electrical apparatus is periodically tested for its electrical and chemical properties, to make sure it is suitable for further use. Sometimes oil condition can be improved by filtration and treatment. Tests can be divided into:
Dissolved gas analysis
Furan analysis
PCB analysis
General electrical & physical tests:
Color & Appearance
Breakdown Voltage
Water Content
Acidity (Neutralization Value)
Dielectric Dissipation Factor
Resistivity
Sediments & Sludge
Flash Point
Pour Point
Density
Kinematic Viscosity
The details of conducting these tests are available in standards released by the International Electrotechnical Commission, ASTM International, and national bodies such as British Standards, and testing can be done by any of these methods. The Furan and DGA tests are specifically not for determining the quality of transformer oil, but for detecting abnormalities in the internal windings or the paper insulation of the transformer, which cannot otherwise be found without a complete overhaul of the transformer. Suggested intervals for these tests are:
General and physical tests - bi-yearly
Dissolved gas analysis - yearly
Furan testing - once every 2 years, provided the transformer has been in operation for a minimum of 5 years.
On-site testing
Some transformer oil tests can be carried out in the field, using portable test apparatus. Other tests, such as dissolved gas, normally require a sample to be sent to a laboratory. Electronic on-line dissolved gas detectors can be connected to important or distressed transformers to continually monitor gas generation trends.
To determine the insulating property of the dielectric oil, an oil sample is taken from the device under test, and its breakdown voltage is measured on-site according to the following test sequence:
In the vessel, two standard-compliant test electrodes with a typical clearance of 2.5 mm are surrounded by the insulating oil.
During the test, a test voltage is applied to the electrodes. The test voltage is continuously increased up to the breakdown voltage with a constant slew rate of e.g. 2 kV/s.
Breakdown occurs in an electric arc, leading to a collapse of the test voltage.
Immediately after ignition of the arc, the test voltage is switched off automatically.
Ultra-fast switch-off is crucial: the energy delivered into the oil during breakdown burns it, and must be limited to keep additional contamination by carbonisation as low as possible.
The root mean square value of the test voltage is measured at the very instant of the breakdown and is reported as the breakdown voltage.
After the test is completed, the insulating oil is stirred automatically and the test sequence is performed repeatedly.
The resulting breakdown voltage is calculated as the mean value of the individual measurements, as in the sketch below.
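A minimal simulation of this ramp-and-average sequence (the breakdown level, its spread, and the six repetitions are illustrative assumptions, not values from any standard):

```python
import random
import statistics

def measure_breakdown(true_kv=45.0, spread_kv=4.0, slew_kv_per_s=2.0):
    """One ramp: raise the test voltage at a constant slew rate until a
    (randomly varying) breakdown threshold is crossed, then switch off."""
    threshold = random.gauss(true_kv, spread_kv)
    v = 0.0
    while v < threshold:
        v += slew_kv_per_s * 0.01   # simulate in 10 ms steps
    return v                        # RMS voltage at the instant of breakdown

# Stir, repeat, and report the mean of the individual measurements.
samples = [measure_breakdown() for _ in range(6)]
print(f"mean breakdown voltage: {statistics.mean(samples):.1f} kV")
```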
See also
Heat-transfer oil
References
Less and nonflammable liquid-insulated transformers, approval standard class Number 3990, Factory Mutual Research Corporation, 1997.
McShane C.P. (2001) Relative properties of the new combustion-resistant vegetable oil-based dielectric coolants for distribution and power transformers. IEEE Trans. on Industry Applications, Vol.37, No.4, July/August 2001, pp. 1132–1139, No. 0093-9994/01, 2001 IEEE.
"The Environmental technology verification program", U.S. Environmental Protection Agency, Washington, DC, VS-R-02-02, June 2002.
IEEE Guide for loading mineral-oil-immersed transformers, IEEE Standard C57.91-1995, 1996.
External links
Oils
Electric transformers
Liquid dielectrics | Transformer oil | [
"Chemistry"
] | 2,051 | [
"Oils",
"Carbohydrates"
] |
2,159,838 | https://en.wikipedia.org/wiki/Oxford%20Electric%20Bell | The Oxford Electric Bell or Clarendon Dry Pile is an experimental electric bell, in particular a type of bell that uses the electrostatic clock principle that was set up in 1840 and which has run nearly continuously ever since. It was one of the first pieces purchased for a collection of apparatus by clergyman and physicist Robert Walker. It is located in a corridor adjacent to the foyer of the Clarendon Laboratory at the University of Oxford, England, and is still ringing, albeit inaudibly due to being behind two layers of glass.
Design
The experiment consists of two brass bells, each positioned beneath a dry pile (a form of battery), the pair of piles connected in series, giving the bells opposite electric charges. The clapper is a small metal sphere suspended between the piles, which rings the bells alternately due to electrostatic forces. When the clapper touches one bell, it is charged by that pile. It is then repelled from that bell due to having the same charge and attracted to the other bell, which has the opposite charge. The clapper then touches the other bell and the process reverses, leading to oscillation. The use of electrostatic forces means that while high voltage is required to create motion, only a tiny amount of charge is carried from one bell to the other. As a result, the batteries drain very slowly, which is why the piles have been able to last since the apparatus was set up in 1840. Its oscillation frequency is 2 hertz.
The exact composition of the dry piles is unknown, but it is known that they have been coated with molten sulfur for insulation and it is thought that they may be Zamboni piles.
At one point this sort of device played an important role in distinguishing between two different theories of electrical action: the theory of contact tension (an obsolete scientific theory based on then-prevailing electrostatic principles) and the theory of chemical action.
The Oxford Electric Bell does not demonstrate perpetual motion. The bell will eventually stop when the dry piles have distributed their charges equally if the clapper does not wear out first. The Bell has produced approximately 10 billion rings since 1840 and holds the Guinness World Record as "the world's most durable battery [delivering] ceaseless tintinnabulation".
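The quoted ring count is consistent with simple arithmetic, reading the 2 Hz figure as roughly two strikes per second:

```python
# Order-of-magnitude check of the ring count quoted above.
seconds_per_year = 365.25 * 24 * 3600          # ~3.16e7 s
years_running = 2024 - 1840                    # illustrative cutoff year
rings = 2 * seconds_per_year * years_running   # ~2 strikes per second
print(f"~{rings:.1e} rings")                   # ~1.2e10, about 10 billion
```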
Operation
Apart from occasional short interruptions caused by high humidity, the bell has rung continuously since 1840. The bell may have been constructed in 1825.
See also
Long-term experiment
Franklin bells
Beverly Clock (1864)
Pitch drop experiment (1927)
Clock of the Long Now
References
Further reading
Willem Hackmann, "The Enigma of Volta's "Contact Tension" and the Development of the "Dry Pile"", appearing in Nuova Voltiana: Studies on Volta and His Times, nb Volume 3 (Fabio Bevilacqua; Lucio Frenonese (Editors)), 2000, pp. 103–119.
External links
Oxford Electric Bell, YouTube video with David Glover-Aoki (18 October 2011)
1840 works
1840 establishments in England
1840 in science
Individual bells
History of physics
Physics experiments
Electric bell | Oxford Electric Bell | [
"Physics"
] | 621 | [
"Experimental physics",
"Physics experiments"
] |
2,160,112 | https://en.wikipedia.org/wiki/FUJIC | FUJIC was the first electronic digital computer in operation in Japan. It was finished in March 1956, the project having been effectively started in 1949, and was built almost entirely by Dr. Okazaki Bunji. Originally designed to perform calculations for lens design by Fuji, the ultimate goal of FUJIC's construction was to achieve a speed 1,000 times that of human calculation for the same purpose – the actual performance achieved was double that number.
Employing approximately 1,700 vacuum tubes, the computer's word length was 33 bits. It had an ultrasonic mercury delay-line memory of 255 words, with an average access time of 500 microseconds. An addition or subtraction was clocked at 100 microseconds, multiplication at 1,600 microseconds, and division at 2,100 microseconds.
Used extensively for two years at the Fuji factory in Odawara, it was given later to Waseda University before taking up residence in the National Science Museum of Japan in Tokyo.
See also
MUSASINO-1
List of vacuum-tube computers
References
References and external links
FUJIC at the IPSJ Computer Museum
Dr. Okazaki Bunji at the IPSJ Computer Museum
Raúl Rojas and Ulf Hashagen, ed. The First Computers: History and Architectures. 2000, MIT Press, .
One-of-a-kind computers
Vacuum tube computers
Fujifilm | FUJIC | [
"Technology"
] | 290 | [
"Computing stubs"
] |
2,160,183 | https://en.wikipedia.org/wiki/Data%20recovery | In computing, data recovery is a process of retrieving deleted, inaccessible, lost, corrupted, damaged, or formatted data from secondary storage, removable media or files, when the data stored in them cannot be accessed in a usual way. The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS).
Logical failures occur when the hard drive devices are functional but the user or automated-OS cannot retrieve or access data stored on them. Logical failures can occur due to corruption of the engineering chip, lost partitions, firmware failure, or failures during formatting/re-installation.
Data recovery can range from a simple procedure to a major technical challenge, which is why there are software companies that specialize in this field.
About
The most common data recovery scenarios involve an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. (typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be accomplished using a Live CD, or DVD by booting directly from a ROM or a USB drive instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.
Another scenario involves a drive-level failure, such as a compromised file system or drive partition, or a hard disk drive failure. In any of these cases, the data is not easily read from the media devices. Depending on the situation, solutions involve repairing the logical file system, partition table, or master boot record, or updating the firmware or drive recovery techniques ranging from software-based recovery of corrupted data, to hardware- and software-based recovery of damaged service areas (also known as the hard disk drive's "firmware"), to hardware replacement on a physically damaged drive which allows for the extraction of data to a new drive. If a drive recovery is necessary, the drive itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.
In a third scenario, files have been accidentally "deleted" from a storage medium by the users. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and the space they occupy is then made available for later overwriting. From the end user's point of view, deleted files are no longer discoverable through a standard file manager, but the deleted data still technically exists on the physical drive. In the meantime, the original file contents remain, often in several disconnected fragments, and may be recoverable if not overwritten by other data files.
The term "data recovery" is also used in the context of forensic applications or espionage, where data which have been encrypted, hidden, or deleted, rather than damaged, are recovered. Sometimes data present in the computer gets encrypted or hidden due to reasons like virus attacks which can only be recovered by some computer forensic experts.
Physical damage
A wide variety of failures can cause physical damage to storage media, which may result from human errors and natural disasters. CD-ROMs can have their metallic substrate or dye layer scratched off; hard disks can suffer from a multitude of mechanical failures, such as head crashes, PCB failure, and failed motors; tapes can simply break.
Physical damage to a hard drive, even in cases where a head crash has occurred, does not necessarily mean there will be a permanent loss of data. The techniques employed by many professional data recovery companies can typically salvage most, if not all, of the data that had been lost when the failure occurred.
Of course, there are exceptions to this, such as cases where severe damage to the hard drive platters may have occurred. However, if the hard drive can be repaired and a full image or clone created, then the logical file structure can be rebuilt in most instances.
Most physical damage cannot be repaired by end users. For example, opening a hard disk drive in a normal environment can allow airborne dust to settle on the platter and become caught between the platter and the read/write head. During normal operation, read/write heads float 3 to 6 nanometers above the platter surface, and the average dust particles found in a normal environment are typically around 30,000 nanometers in diameter. When these dust particles get caught between the read/write heads and the platter, they can cause new head crashes that further damage the platter and thus compromise the recovery process. Furthermore, end users generally do not have the hardware or technical expertise required to make these repairs. Consequently, data recovery companies are often employed to salvage important data with the more reputable ones using class 100 dust- and static-free cleanrooms.
Recovery techniques
Recovering data from physically damaged hardware can involve multiple techniques. Some damage can be repaired by replacing parts in the hard disk. This alone may make the disk usable, but there may still be logical damage. A specialized disk-imaging procedure is used to recover every readable bit from the surface. Once this image is acquired and saved on a reliable medium, the image can be safely analyzed for logical damage and will possibly allow much of the original file system to be reconstructed.
Hardware repair
A common misconception is that a damaged printed circuit board (PCB) may be simply replaced during recovery procedures by an identical PCB from a healthy drive. While this may work in rare circumstances on hard disk drives manufactured before 2003, it will not work on newer drives. Electronics boards of modern drives usually contain drive-specific adaptation data (generally a map of bad sectors and tuning parameters) and other information required to properly access data on the drive. Replacement boards often need this information to effectively recover all of the data. The replacement board may need to be reprogrammed. Some manufacturers (Seagate, for example) store this information on a serial EEPROM chip, which can be removed and transferred to the replacement board.
Each hard disk drive has what is called a system area or service area; this portion of the drive, which is not directly accessible to the end user, usually contains drive's firmware and adaptive data that helps the drive operate within normal parameters. One function of the system area is to log defective sectors within the drive; essentially telling the drive where it can and cannot write data.
The sector lists are also stored on various chips attached to the PCB, and they are unique to each hard disk drive. If the data on the PCB do not match what is stored on the platter, then the drive will not calibrate properly. In most cases the drive heads will click because they are unable to find the data matching what is stored on the PCB.
Logical damage
The term "logical damage" refers to situations in which the error is not a problem in the hardware and requires software-level solutions.
Corrupt partitions and file systems, media errors
In some cases, data on a hard disk drive can be unreadable due to damage to the partition table or file system, or to (intermittent) media errors. In the majority of these cases, at least a portion of the original data can be recovered by repairing the damaged partition table or file system using specialized data recovery software such as TestDisk; software like ddrescue can image media despite intermittent errors, and image raw data when there is partition table or file system damage. This type of data recovery can be performed by people without expertise in drive hardware as it requires no special physical equipment or access to platters.
Sometimes data can be recovered using relatively simple methods and tools; more serious cases can require expert intervention, particularly if parts of files are irrecoverable. Data carving is the recovery of parts of damaged files using knowledge of their structure.
Overwritten data
After data has been physically overwritten on a hard disk drive, it is generally assumed that the previous data are no longer possible to recover. In 1996, Peter Gutmann, a computer scientist, presented a paper that suggested overwritten data could be recovered through the use of magnetic force microscopy. In 2001, he presented another paper on a similar topic. To guard against this type of data recovery, Gutmann and Colin Plumb designed a method of irreversibly scrubbing data, known as the Gutmann method and used by several disk-scrubbing software packages.
Substantial criticism has followed, primarily dealing with the lack of any concrete examples of significant amounts of overwritten data being recovered. Gutmann's article contains a number of errors and inaccuracies, particularly regarding information about how data is encoded and processed on hard drives. Although Gutmann's theory may be correct, there is no practical evidence that overwritten data can be recovered, and published research supports the conclusion that it cannot.
Solid-state drives (SSD) overwrite data differently from hard disk drives (HDD) which makes at least some of their data easier to recover. Most SSDs use flash memory to store data in pages and blocks, referenced by logical block addresses (LBA) which are managed by the flash translation layer (FTL). When the FTL modifies a sector it writes the new data to another location and updates the map so the new data appear at the target LBA. This leaves the pre-modification data in place, with possibly many generations, and recoverable by data recovery software.
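A toy sketch of that remap-on-write behavior (hypothetical class and method names, not any real FTL implementation):

```python
class ToyFTL:
    """Minimal flash-translation-layer sketch: overwriting a logical block
    writes the new data to a fresh physical page and only updates the map,
    so earlier generations stay physically present until garbage collection."""

    def __init__(self):
        self.pages = []    # physical pages, append-only
        self.lba_map = {}  # logical block address -> physical page index

    def write(self, lba, data):
        self.pages.append(data)
        self.lba_map[lba] = len(self.pages) - 1

    def read(self, lba):
        return self.pages[self.lba_map[lba]]

ftl = ToyFTL()
ftl.write(7, b"old secret")
ftl.write(7, b"new data")
print(ftl.read(7))    # b'new data'   -- what the OS sees at that LBA
print(ftl.pages[0])   # b'old secret' -- still physically present
```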
Lost, deleted, and formatted data
Sometimes, data present in physical drives (internal/external hard disks, pen drives, etc.) gets lost, deleted, or formatted due to circumstances such as a virus attack, accidental deletion, or accidental use of SHIFT+DELETE. In these cases, data recovery software is used to recover/restore the data files.
Logical bad sector
Among the logical failures of hard disks, a logical bad sector is the most common fault that leaves data unreadable. Sometimes it is possible to sidestep error detection even in software, and perhaps, with repeated reading and statistical analysis, recover at least some of the underlying stored data. Sometimes prior knowledge of the data stored and of the error detection and correction codes can be used to recover even erroneous data. However, if the underlying physical drive is degraded badly enough, at least the hardware surrounding the data must be replaced, or it might even be necessary to apply laboratory techniques to the physical recording medium. Each of these approaches is progressively more expensive, and as such progressively more rarely sought.
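One crude form of the repeated-reading approach is a bytewise majority vote across several read attempts. A hedged sketch, with a simulated flaky sector standing in for real hardware:

```python
import random
from collections import Counter

def majority_read(read_once, attempts=9):
    """Re-read an unreliable sector several times and take a bytewise
    majority vote across the attempts."""
    reads = [read_once() for _ in range(attempts)]
    voted = bytearray()
    for column in zip(*reads):  # same byte position in each read
        voted.append(Counter(column).most_common(1)[0][0])
    return bytes(voted)

TRUE_DATA = b"HELLO"

def flaky_sector():
    """Simulated sector that corrupts one byte on 30% of reads."""
    data = bytearray(TRUE_DATA)
    if random.random() < 0.3:
        data[random.randrange(len(data))] ^= 0xFF
    return bytes(data)

print(majority_read(flaky_sector))  # almost always b'HELLO'
```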
Eventually, if the final, physical storage medium has indeed been disturbed badly enough, recovery will not be possible using any means; the information has irreversibly been lost.
Remote data recovery
Recovery experts do not always need to have physical access to the damaged hardware. When the lost data can be recovered by software techniques, they can often perform the recovery using remote access software over the Internet, LAN or other connection to the physical location of the damaged media. The process is essentially no different from what the end user could perform by themselves.
Remote recovery requires a stable connection with an adequate bandwidth. However, it is not applicable where access to the hardware is required, as in cases of physical damage.
Four phases of data recovery
Usually, there are four phases when it comes to successful data recovery, though that can vary depending on the type of data corruption and recovery required.
Phase 1 Repair the hard disk drive
The hard drive is repaired in order to get it running in some form, or at least in a state suitable for reading the data from it. For example, if heads are bad they need to be changed; if the PCB is faulty then it needs to be fixed or replaced; if the spindle motor is bad the platters and heads should be moved to a new drive.
Phase 2 Image the drive to a new drive or a disk image file
When a hard disk drive fails, the importance of getting the data off the drive is the top priority. The longer a faulty drive is used, the more likely further data loss is to occur. Creating an image of the drive will ensure that there is a secondary copy of the data on another device, on which it is safe to perform testing and recovery procedures without harming the source.
Phase 3 Logical recovery of files, partition, MBR and filesystem structures
After the drive has been cloned to a new drive, it is suitable to attempt the retrieval of lost data. If the drive has failed logically, there are a number of reasons for that. Using the clone it may be possible to repair the partition table or master boot record (MBR) in order to read the file system's data structure and retrieve stored data.
Phase 4 Repair damaged files that were retrieved
Data damage can be caused when, for example, a file is written to a sector on the drive that has been damaged. This is the most common cause in a failing drive, meaning that data needs to be reconstructed to become readable. Corrupted documents can be recovered by several software methods or by manually reconstructing the document using a hex editor.
Restore disk
The Windows operating system can be reinstalled on a computer that is already licensed for it. The reinstallation can be done by downloading the operating system or by using a "restore disk" provided by the computer manufacturer. Eric Lundgren was fined and sentenced to U.S. federal prison in April 2018 for producing 28,000 restore disks and intending to distribute them for about 25 cents each as a convenience to computer repair shops.
List of data recovery software
Bootable
Data recovery cannot always be done on a running system. As a result, a boot disk, live CD, live USB, or any other type of live distro contains a minimal operating system.
BartPE: a lightweight variant of Microsoft Windows XP or Windows Server 2003 32-bit operating systems, similar to a Windows Preinstallation Environment, which can be run from a live CD or live USB drive. Discontinued.
Finnix: a Debian-based Live CD with a focus on being small and fast, useful for computer and data rescue
Disk Drill: capable of creating bootable macOS USB drives for data recovery
Knoppix: contains utilities for data recovery under Linux
SystemRescue: an Arch Linux-based live CD, useful for repairing unbootable computer systems and retrieving data after a system crash
Windows Preinstallation Environment (WinPE): A customizable Windows Boot DVD (made by Microsoft and distributed for free). Can be modified to boot to any of the programs listed.
Consistency checkers
CHKDSK: a consistency checker for DOS and Windows systems
Disk First Aid: a consistency checker for the classic Mac OS
Disk Utility: a consistency checker for macOS
fsck: a consistency checker for UNIX
GParted: a GUI for GNU Parted, the GNU partition editor, capable of calling fsck
File recovery
CDRoller: recovers data from optical disc
Disk Drill: data recovery application for Mac OS X and Windows
DMDE: multi-platform data recovery and disk editing tool
dvdisaster: generates error-correction data for optical discs
GetDataBack: a Windows recovery program
Hetman Partition Recovery: data drive recovery solution
IsoBuster: recovers data from optical discs, USB sticks, flash drives and hard drives
Mac Data Recovery Guru: Mac OS X data recovery program which works on USB sticks, optical media, and hard drives
MiniTool Partition Wizard: for Windows 7 and later; includes data recovery
Norton Utilities: a suite of utilities that has a file recovery component
PhotoRec: advanced multi-platform program with text-based user interface used to recover files
Recover My Files: proprietary software for Windows 2000 and later—FAT, NTFS and HFS
Recovery Toolbox: freeware and shareware tools plus online services for various Windows 2000 and later programs
Recuva: proprietary software for Windows 2000 and later—FAT and NTFS
Stellar Data Recovery: data recovery utility for Windows and macOS
TestDisk: free, open source, multi-platform. recover files and lost partitions
AVG TuneUp: a suite of utilities that has a file recovery component for Windows XP and later
Windows File Recovery: a command-line utility from Microsoft to recover deleted files for Windows 10 version 2004 and later
Forensics
Foremost: an open-source command-line file recovery program, originally developed by the Air Force Office of Special Investigations and NPS Center for Information Systems Security Studies and Research
Forensic Toolkit: by AccessData, used by law enforcement
Open Computer Forensics Architecture: An open-source program for Linux
The Coroner's Toolkit: a suite of utilities for assisting in forensic analysis of a UNIX system after a break-in
The Sleuth Kit: also known as TSK, a suite of forensic analysis tools developed by Brian Carrier for UNIX, Linux and Windows systems. TSK includes the Autopsy forensic browser.
Imaging tools
Clonezilla: a free disk cloning, disk imaging, data recovery, and deployment boot disk
dd: common byte-to-byte cloning tool found on Unix-like systems
ddrescue: an open-source tool similar to dd but with the ability to skip over and subsequently retry bad blocks on failing storage devices
Team Win Recovery Project: a free and open-source recovery system for Android devices
See also
Backup
Cleanroom
Comparison of file systems
Computer forensics
Continuous data protection
Crypto-shredding
Data archaeology
Data curation
Data preservation
Data loss
Error detection and correction
File carving
Hidden file and hidden directory
Undeletion
List of data-erasing software
References
Further reading
Tanenbaum, A. & Woodhull, A. S. (1997). Operating Systems: Design And Implementation, 2nd ed. New York: Prentice Hall.
Computer data
Data management
Transaction processing
Recovery | Data recovery | [
"Technology",
"Engineering"
] | 3,767 | [
"Cybersecurity engineering",
"Reliability engineering",
"Computer data",
"Backup",
"Data management",
"Data",
"Computer forensics"
] |
2,160,193 | https://en.wikipedia.org/wiki/Neopentane | Neopentane, also called 2,2-dimethylpropane, is a double-branched-chain alkane with five carbon atoms. Neopentane is a flammable gas at room temperature and pressure which can condense into a highly volatile liquid on a cold day, in an ice bath, or when compressed to a higher pressure.
Neopentane is the simplest alkane with a quaternary carbon, and has achiral tetrahedral symmetry. It is one of the three structural isomers with the molecular formula C5H12 (pentanes), the other two being n-pentane and isopentane. Out of these three, it is the only one to be a gas at standard conditions; the others are liquids.
It was first synthesized by a Russian chemist in 1870.
Nomenclature
The traditional name neopentane, coined by William Odling in 1876, was still retained in the 1993 IUPAC recommendations, but is no longer recommended according to the 2013 recommendations. The preferred IUPAC name is the systematic name 2,2-dimethylpropane, but the substituent numbers are superfluous because it is the only possible “dimethylpropane”.
A neopentyl substituent, often symbolized by "Np", has the structure Me3C–CH2– for instance neopentyl alcohol (Me3CCH2OH or NpOH). As Np also symbolises the element neptunium (atomic number 93) one should use this abbreviation with care.
The obsolete name tetramethylmethane is also used, especially in older sources.
Physical properties
Boiling and melting points
The boiling point of neopentane is only 9.5 °C, significantly lower than those of isopentane (27.7 °C) and normal pentane (36.0 °C). Therefore, neopentane is a gas at room temperature and atmospheric pressure, while the other two isomers are (barely) liquids.
The melting point of neopentane (−16.6 °C), on the other hand, is 140 degrees higher than that of isopentane (−159.9 °C) and 110 degrees higher than that of n-pentane (−129.8 °C). This anomaly has been attributed to the better solid-state packing assumed to be possible with the tetrahedral neopentane molecule; but this explanation has been challenged on account of it having a lower density than the other two isomers. Moreover, its enthalpy of fusion is lower than the enthalpies of fusion of both n-pentane and isopentane, thus indicating that its high melting point is due to an entropy effect resulting from higher molecular symmetry. Indeed, the entropy of fusion of neopentane is about four times lower than that of n-pentane and isopentane.
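The entropy argument can be made explicit. At the melting point the Gibbs energies of solid and liquid are equal, which fixes the melting temperature as the ratio of the fusion enthalpy to the fusion entropy:

```latex
\Delta G_{\mathrm{fus}} = \Delta H_{\mathrm{fus}} - T_m\,\Delta S_{\mathrm{fus}} = 0
\quad\Longrightarrow\quad
T_m = \frac{\Delta H_{\mathrm{fus}}}{\Delta S_{\mathrm{fus}}}
```

So an entropy of fusion roughly four times smaller raises $T_m$ substantially, even though neopentane's enthalpy of fusion is itself lower than those of the other two isomers.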
1H NMR spectrum
Because of neopentane's full tetrahedral symmetry, all protons are chemically equivalent, leading to a single NMR chemical shift δ = 0.902 when dissolved in carbon tetrachloride. In this respect, neopentane is similar to its silane analog, tetramethylsilane, whose single chemical shift is zero by convention.
The symmetry of the neopentane molecule can be broken if some hydrogen atoms are replaced by deuterium atoms. In particular, if each methyl group has a different number of substituted atoms (0, 1, 2, and 3), one obtains a chiral molecule. The chirality in this case arises solely by the mass distribution of its nuclei, while the electron distribution is still essentially achiral.
Derivatives
The alcohol pentaerythritol can be described as the result of replacing one hydrogen in each of the four methyl groups by a hydroxyl (–OH) group.
A linear polymer with alternating neopentane and orthocarbonate groups, which can be described as an ester (pentaerythritol orthocarbonate), was synthesized in 2002.
References
External links
IUPAC Nomenclature of Organic Chemistry (online version of the "Blue Book")
Alkanes | Neopentane | [
"Chemistry"
] | 885 | [
"Organic compounds",
"Alkanes"
] |
2,160,196 | https://en.wikipedia.org/wiki/Isopentane | Isopentane, also called methylbutane or 2-methylbutane, is a branched-chain saturated hydrocarbon (an alkane) with five carbon atoms, with formula or .
Isopentane is a volatile and flammable liquid. It is one of three structural isomers with the molecular formula C5H12, the others being pentane (n-pentane) and neopentane (2,2-dimethylpropane).
Isopentane is commonly used in conjunction with liquid nitrogen to achieve a liquid bath temperature of −160 °C. Natural gas typically contains 1% or less isopentane, but it is a significant component of natural gasoline.
History
The mixture of pentanes was first isolated from the destructive distillation (pyrolysis) products of boghead coal by Charles Greville Williams in 1862. In 1864–1865 two chemists tried to extract the same hydrocarbons from Pennsylvanian oil. Carl Schorlemmer noted "that a mere trace of the liquid boiled below 30°C", but the first to properly separate the isomers (and thus discover isopentane) was the American chemist Cyrus Warren (1824–1891) slightly later, who measured the boiling point of the more volatile one at 30°C.
Nomenclature
The traditional name isopentane, attested in English as early as 1875, was still retained in the 1993 IUPAC recommendations, but is no longer recommended according to the 2013 recommendations. The preferred IUPAC name is the systematic name 2-methylbutane. An isopentyl group is a subset of the generic pentyl group. It has the chemical structure –CH2CH2CH(CH3)2.
Uses
Isopentane is used in a closed loop in geothermal power production to drive turbines.
Isopentane is used, in conjunction with dry ice or liquid nitrogen, to freeze tissues for cryosectioning in histology.
Isopentane is a major component (sometimes 30% or more) of natural gasoline, an analog of common petroleum-derived gasoline that is condensed from natural gas. Its share in commercial car fuel is highly variable: 19–45% in 1990s Sweden, 4–31% in 1990s US and 3.6–11% in the US in 2011. It has a substantially higher octane rating (RON 93.7) than n-pentane (61.7), and therefore there is interest in conversion from the latter.
References
External links
International Chemical Safety Card 1153
IUPAC Nomenclature of Organic Chemistry (online version of the "Blue Book")
Alkanes | Isopentane | [
"Chemistry"
] | 550 | [
"Organic compounds",
"Alkanes"
] |
2,160,219 | https://en.wikipedia.org/wiki/Native%20resolution | The native resolution of a liquid crystal display (LCD), liquid crystal on silicon (LCoS) or other flat panel display refers to its single fixed resolution. As an LCD consists of a fixed raster, it cannot change resolution to match the signal being displayed as a cathode-ray tube (CRT) monitor can, meaning that optimal display quality can be reached only when the signal input matches the native resolution. An image where the number of pixels is the same as in the image source and where the pixels are perfectly aligned to the pixels in the source is said to be pixel perfect.
While CRT monitors can usually display images at various resolutions, an LCD monitor has to rely on interpolation (scaling of the image), which causes a loss of image quality. An LCD has to scale up a smaller image to fit into the area of the native resolution. This is the same principle as taking a smaller image in an image editing program and enlarging it; the smaller image loses its sharpness when it is expanded. This is especially problematic as most resolutions are in a 4:3 aspect ratio (640×480, 800×600, 1024×768, 1280×960, 1600×1200) but there are odd resolutions that are not, notably 1280×1024. If a user were to map 1024×768 to a 1280×1024 screen there would be distortion as well as some image errors, as there is not a one-to-one mapping with regard to pixels. This results in noticeable quality loss and the image is much less sharp.
In theory, some resolutions could work well, if they are exact multiples of smaller image sizes. For example, a 1600×1200 LCD could display an 800×600 image well, as each of the pixels in the image could be represented by a block of four on the larger display, without interpolation. Since 800×600 is an integer factor of 1600×1200, scaling should not adversely affect the image. But in practice, most monitors apply a smoothing algorithm to all smaller resolutions, so the quality still suffers for these "half" modes.
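A minimal sketch of such integer-factor (nearest-neighbour) scaling, in which every source pixel simply becomes a factor-by-factor block and no interpolation is involved:

```python
def upscale_integer(image, factor):
    """Nearest-neighbour upscale by an integer factor: each source pixel
    is repeated to fill a factor x factor block of output pixels."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in image
        for _ in range(factor)
    ]

# A stand-in for an 800x600 frame doubled onto a 1600x1200 panel:
src = [[1, 2],
       [3, 4]]
for out_row in upscale_integer(src, 2):
    print(out_row)  # [1,1,2,2] / [1,1,2,2] / [3,3,4,4] / [3,3,4,4]
```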
Most LCD monitors are able to inform the PC of their native resolution using Extended display identification data (EDID); however, some LCD TVs, especially those with 1366x768 pixels, fail to provide their native resolution and only provide a set of lower resolutions, resulting in a less than pixel perfect output.
Some widescreen LCD monitors can optionally display lower resolutions without scaling or stretching the image, so that the image always retains full sharpness, although it does not occupy the full screen. This is easy to recognize on close inspection, as black borders are visible around the displayed image.
See also
1:1 pixel mapping
Resolution independence
References
Gaming issues with TFT LCD Displays
(Wayback Machine copy)
Display technology | Native resolution | [
"Engineering"
] | 601 | [
"Electronic engineering",
"Display technology"
] |
2,160,429 | https://en.wikipedia.org/wiki/Dynamical%20billiards | A dynamical billiard is a dynamical system in which a particle alternates between free motion (typically as a straight line) and specular reflections from a boundary. When the particle hits the boundary it reflects from it without loss of speed (i.e. elastic collisions). Billiards are Hamiltonian idealizations of the game of billiards, but where the region contained by the boundary can have shapes other than rectangular and even be multidimensional. Dynamical billiards may also be studied on non-Euclidean geometries; indeed, the first studies of billiards established their ergodic motion on surfaces of constant negative curvature. The study of billiards which are kept out of a region, rather than being kept in a region, is known as outer billiard theory.
The motion of the particle in the billiard is a straight line, with constant energy, between reflections with the boundary (a geodesic if the Riemannian metric of the billiard table is not flat). All reflections are specular: the angle of incidence just before the collision is equal to the angle of reflection just after the collision. The sequence of reflections is described by the billiard map that completely characterizes the motion of the particle.
Billiards capture all the complexity of Hamiltonian systems, from integrability to chaotic motion, without the difficulties of integrating the equations of motion to determine its Poincaré map. Birkhoff showed that a billiard system with an elliptic table is integrable.
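The circular table is the simplest concrete illustration of integrability: the angle between the trajectory and the tangent is conserved, so the billiard map is just a rigid rotation of the bounce position around the boundary. A minimal sketch:

```python
# Billiard map on the unit circle (integrable case): the tangent angle
# psi is conserved, and successive bounce points advance by 2*psi,
# because a chord meeting the tangent at angle psi subtends a central
# angle of 2*psi.
import math

def circle_billiard_orbit(phi0, psi, n):
    """Return n successive bounce positions (polar angles)."""
    return [(phi0 + 2 * psi * k) % (2 * math.pi) for k in range(n)]

# psi = pi/4 gives a period-4 orbit: the inscribed square.
for phi in circle_billiard_orbit(0.0, math.pi / 4, 5):
    print(round(phi, 3))
```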
Equations of motion
The Hamiltonian for a particle of mass m moving freely without friction on a surface is:

H(p, q) = \frac{p^2}{2m} + V(q)

where V(q) is a potential designed to be zero inside the region Ω in which the particle can move, and infinite otherwise:

V(q) = \begin{cases} 0 & q \in \Omega \\ \infty & q \notin \Omega \end{cases}

This form of the potential guarantees a specular reflection on the boundary. The kinetic term guarantees that the particle moves in a straight line, without any change in energy. If the particle is to move on a non-Euclidean manifold, then the Hamiltonian is replaced by:

H(p, q) = \frac{1}{2m} p^{i} p^{j} g_{ij}(q) + V(q)

where g_{ij}(q) is the metric tensor at the point q ∈ Ω. Because of the very simple structure of this Hamiltonian, the equations of motion for the particle, the Hamilton–Jacobi equations, are nothing other than the geodesic equations on the manifold: the particle moves along geodesics.
Notable billiards and billiard classes
Hadamard's billiards
Hadamard's billiards concern the motion of a free point particle on a surface of constant negative curvature, in particular, the simplest compact Riemann surface with negative curvature, a surface of genus 2 (a two-holed donut). The model is exactly solvable, and is given by the geodesic flow on the surface. It is the earliest example of deterministic chaos ever studied, having been introduced by Jacques Hadamard in 1898.
Artin's billiard
Artin's billiard considers the free motion of a point particle on a surface of constant negative curvature, in particular, the simplest non-compact Riemann surface, a surface with one cusp. It is notable for being exactly solvable, and yet not only ergodic but also strongly mixing. It is an example of an Anosov system. This system was first studied by Emil Artin in 1924.
Dispersing and semi-dispersing billiards
Let M be a complete smooth Riemannian manifold without boundary, whose maximal sectional curvature is not greater than K and whose injectivity radius is ρ > 0. Consider a collection of n geodesically convex subsets (walls) B_i ⊂ M, i = 1, …, n, such that their boundaries are smooth submanifolds of codimension one. Let

B = M \ ⋃_{i=1}^{n} Int B_i, where Int B_i denotes the interior of the set B_i. The set B will be called the billiard table.

Consider now a particle that moves inside the set B with unit speed along a geodesic until
it reaches one of the sets B_i (such an event is called a collision), where it reflects according to the law "the angle of incidence is equal to the angle of reflection" (if it reaches one of the intersections B_i ∩ B_j, i ≠ j, the trajectory is not defined after that moment). Such a dynamical system is called a semi-dispersing billiard. If the walls are strictly convex, the billiard is called dispersing. The naming is motivated by the observation that a locally parallel beam of trajectories disperses after a collision with a strictly convex part of a wall, but remains locally parallel after a collision with a flat section of a wall.
Dispersing boundary plays the same role for billiards as negative curvature does for geodesic flows, causing the exponential instability of the dynamics. It is precisely this dispersing mechanism that gives dispersing billiards their strongest chaotic properties, as was established by Yakov G. Sinai. Namely, such billiards are ergodic, mixing and Bernoulli, having positive Kolmogorov–Sinai entropy and exponential decay of correlations.
Chaotic properties of general semi-dispersing billiards are not understood as well; however, those of one important type of semi-dispersing billiard, the hard-ball gas, have been studied in some detail since 1975 (see the next section).

General results of Dmitri Burago and Serge Ferleger on the uniform estimation of the number of collisions in non-degenerate semi-dispersing billiards make it possible to establish the finiteness of their topological entropy and no more than exponential growth of periodic trajectories. In contrast, degenerate semi-dispersing billiards may have infinite topological entropy.
Lorentz gas, a.k.a. Sinai billiard
The table of the Lorentz gas (also known as Sinai billiard) is a square with a disk removed from its center; the table is flat, having no curvature. The billiard arises from studying the behavior of two interacting disks bouncing inside a square, reflecting off the boundaries of the square and off each other. By eliminating the center of mass as a configuration variable, the dynamics of two interacting disks reduces to the dynamics in the Sinai billiard.
The billiard was introduced by Yakov G. Sinai as an example of an interacting Hamiltonian system that displays physical thermodynamic properties: almost all (up to a measure zero) of its possible trajectories are ergodic and it has a positive Lyapunov exponent.
Sinai's great achievement with this model was to show that the classical Boltzmann–Gibbs ensemble for an ideal gas is essentially the maximally chaotic Hadamard billiards.
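A direct simulation of this table is straightforward. The sketch below traces a unit-speed particle in the square [-1, 1]² with a reflecting central disk; the geometry parameters and starting condition are arbitrary choices, not taken from any reference implementation:

```python
# Sinai billiard: free flight in a square with specular reflection off
# the walls and off a central disk of radius R. Assumes unit speed
# (|v| = 1), which the reflections preserve.
import math

R, EPS = 0.25, 1e-12

def next_collision(p, v):
    """Earliest collision (time, kind): 'x' wall, 'y' wall or 'disk'."""
    times = []
    if v[0] > 0: times.append(((1 - p[0]) / v[0], 'x'))
    if v[0] < 0: times.append(((-1 - p[0]) / v[0], 'x'))
    if v[1] > 0: times.append(((1 - p[1]) / v[1], 'y'))
    if v[1] < 0: times.append(((-1 - p[1]) / v[1], 'y'))
    # Disk: solve |p + t v|^2 = R^2 with |v| = 1.
    b = p[0] * v[0] + p[1] * v[1]
    c = p[0] ** 2 + p[1] ** 2 - R ** 2
    disc = b * b - c
    if disc > 0:
        t = -b - math.sqrt(disc)           # smaller root = entry point
        if t > EPS:
            times.append((t, 'disk'))
    return min(t for t in times if t[0] > EPS)

def bounce(p, v, n_bounces):
    """Iterate the billiard map, yielding each collision point."""
    for _ in range(n_bounces):
        t, kind = next_collision(p, v)
        p = (p[0] + t * v[0], p[1] + t * v[1])
        if kind == 'x':
            v = (-v[0], v[1])
        elif kind == 'y':
            v = (v[0], -v[1])
        else:                              # specular reflection off disk
            nx, ny = p[0] / R, p[1] / R    # outward unit normal
            dot = v[0] * nx + v[1] * ny
            v = (v[0] - 2 * dot * nx, v[1] - 2 * dot * ny)
        yield p

angle = 0.3
p0, v0 = (0.5, 0.1), (math.cos(angle), math.sin(angle))
for point in bounce(p0, v0, 5):
    print(tuple(round(c, 3) for c in point))
```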
Bouncing ball billiard
A particle is subject to a constant force (e.g. the gravity of the Earth) and scatters inelastically on a periodically corrugated, vibrating floor. When the floor consists of arcs of circles, one can give semi-analytic estimates of the rate of exponential separation of trajectories within a certain interval of frequencies.
Bunimovich stadium
The table called the Bunimovich stadium is a rectangle capped by semicircles, a shape called a stadium. Until it was introduced by Leonid Bunimovich, billiards with positive Lyapunov exponents were thought to need convex scatterers, such as the disk in the Sinai billiard, to produce the exponential divergence of orbits. Bunimovich showed that by considering the orbits beyond the focusing point of a concave region it was possible to obtain exponential divergence.
Magnetic billiards
Magnetic billiards are billiards in which a charged particle propagates in the presence of a perpendicular magnetic field. As a result, the particle trajectory changes from a straight line into an arc of a circle whose radius is inversely proportional to the magnetic field strength. Such billiards have been useful in real-world applications, typically in modelling nanodevices (see Applications).
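The radius referred to above is the standard cyclotron (Larmor) radius; for a particle of charge q and mass m moving with speed v perpendicular to a field of magnitude B (a textbook formula, stated here for convenience):

```latex
r_c = \frac{m v}{|q| B} \propto \frac{1}{B}
```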
Generalized billiards
Generalized billiards (GB) describe the motion of a mass point (a particle) inside a closed domain Π ⊂ ℝⁿ with a piecewise smooth boundary Γ. On the boundary Γ the velocity of the point is transformed as if the particle underwent the action of a generalized billiard law. GB were introduced by Lev D. Pustyl'nikov in the general case and, in the case when Π is a parallelepiped, in connection with the justification of the second law of thermodynamics. From the physical point of view, GB describe a gas consisting of finitely many particles moving in a vessel while the walls of the vessel heat up or cool down. The essence of the generalization is the following. As the particle hits the boundary Γ, its velocity transforms with the help of a given function f(γ, t), defined on the direct product Γ × ℝ (where ℝ is the real line, γ ∈ Γ is a point of the boundary and t is time), according to the following law. Suppose that the trajectory of the particle, which moves with the velocity v, intersects Γ at the point γ at time t*. Then at time t* the particle acquires the velocity v*, as if it underwent an elastic push from the infinitely heavy plane Γ*, which is tangent to Γ at the point γ and which at time t* moves along the normal to Γ at γ with the velocity ∂f/∂t(γ, t*). We emphasize that the position of the boundary itself is fixed, while its action upon the particle is defined through the function f.
We take the positive direction of motion of the plane Γ* to be towards the interior of Π. Thus, if the derivative ∂f/∂t(γ, t) > 0, the particle accelerates after the impact.

If the velocity v*, acquired by the particle as the result of the above reflection law, is directed into the interior of the domain Π, the particle leaves the boundary and continues moving in Π until the next collision with Γ. If the velocity v* is directed towards the outside of Π, the particle remains on Γ at the point γ until at some later time the interaction with the boundary forces it to leave.

If the function f(γ, t) does not depend on time t, i.e. ∂f/∂t = 0, the generalized billiard coincides with the classical one.
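The verbal law above can be written compactly in the Newtonian case. Treating the push as an elastic reflection off an infinitely heavy wall moving with normal velocity ∂f/∂t yields the standard moving-wall formula (a reconstruction from the description, not a quotation from the references):

```latex
v^{*} = v - 2\left( \langle v, n \rangle - \frac{\partial f}{\partial t}(\gamma, t^{*}) \right) n
```

where n is the unit normal to Γ at γ pointing into Π; setting ∂f/∂t = 0 recovers classical specular reflection.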
This generalized reflection law is very natural. First, it reflects the obvious fact that the walls of the vessel are motionless. Second, the action of the wall on the particle is still the classical elastic push. In essence, one considers infinitesimally moving boundaries with given velocities.

The reflection from the boundary is considered both within the framework of classical mechanics (the Newtonian case) and within the theory of relativity (the relativistic case).

Main results: in the Newtonian case the energy of the particle is bounded and the Gibbs entropy is constant (see Notes), while in the relativistic case the energy of the particle, the Gibbs entropy and the entropy with respect to the phase volume all grow to infinity (see Notes and the references on generalized billiards).
Quantum chaos
The quantum version of the billiards is readily studied in several ways. The classical Hamiltonian for the billiards, given above, is replaced by the stationary-state Schrödinger equation Hψ = Eψ or, more precisely,

-\frac{\hbar^{2}}{2m} \nabla^{2} \psi_{n}(q) = E_{n} \psi_{n}(q)

where \nabla^{2} is the Laplacian. The potential that is infinite outside the region Ω but zero inside it translates to the Dirichlet boundary conditions:

\psi_{n}(q) = 0 \quad \text{for} \quad q \notin \Omega

As usual, the wavefunctions are taken to be orthonormal:

\int_{\Omega} \bar{\psi}_{m}(q)\, \psi_{n}(q)\, dq = \delta_{mn}

Curiously, the free-field Schrödinger equation is the same as the Helmholtz equation,

\left( \nabla^{2} + k^{2} \right) \psi(q) = 0

with

k^{2} = \frac{2 m E_{n}}{\hbar^{2}}
This implies that two- and three-dimensional quantum billiards can be modelled by the classical resonance modes of a radar cavity of a given shape, thus opening a door to experimental verification. (The study of radar cavity modes must be limited to the transverse magnetic (TM) modes, as these are the ones obeying the Dirichlet boundary conditions.)

The semi-classical limit corresponds to \hbar \to 0, which can be seen to be equivalent to m \to \infty, the mass increasing so that the particle behaves classically.
As a general statement, one may say that whenever the classical equations of motion are integrable (e.g. rectangular or circular billiard tables), then the quantum-mechanical version of the billiards is completely solvable. When the classical system is chaotic, then the quantum system is generally not exactly solvable, and presents numerous difficulties in its quantization and evaluation. The general study of chaotic quantum systems is known as quantum chaos.
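For instance, for a rectangular table with sides a and b, the Dirichlet problem separates and the spectrum is known in closed form (a standard result, quoted here for illustration; m₀ denotes the mass, to avoid a clash with the quantum number m):

```latex
\psi_{nm}(x, y) = \frac{2}{\sqrt{ab}}\,\sin\frac{n\pi x}{a}\,\sin\frac{m\pi y}{b},
\qquad
E_{nm} = \frac{\hbar^{2}\pi^{2}}{2 m_{0}} \left( \frac{n^{2}}{a^{2}} + \frac{m^{2}}{b^{2}} \right),
\qquad n, m = 1, 2, \ldots
```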
A particularly striking example of scarring on an elliptical table is given by the observation of the so-called quantum mirage.
Applications
Billiards, both quantum and classical, have been applied in several areas of physics to model quite diverse real-world systems. Examples include ray optics, lasers, acoustics, optical fibers (e.g. double-clad fibers), and quantum-classical correspondence. One of their most frequent applications is to model particles moving inside nanodevices, for example quantum dots, p-n junctions and antidot superlattices, among others. The reason for this widespread effectiveness of billiards as physical models is that in situations with a small amount of disorder or noise, the movement of, e.g., particles like electrons or light rays closely resembles the movement of the point particles in billiards. In addition, the energy-conserving nature of the particle collisions is a direct reflection of the energy conservation of Hamiltonian mechanics.
Software
Open source software to simulate billiards exists for various programming languages. From most recent to oldest, the existing packages are: DynamicalBilliards.jl (Julia), Bill2D (C++) and Billiard Simulator (Matlab). The animations present on this page were made with DynamicalBilliards.jl.
See also
Fermi–Ulam model (billiards with oscillating walls)
Lubachevsky–Stillinger algorithm of compression simulates hard spheres colliding not only with the boundaries but also among themselves while growing in size
Arithmetic billiards
Illumination problem
Notes
References
Sinai's billiards
Ya. G. Sinai (1963); English translation: Sov. Math. Dokl. 4 (1963), pp. 1818–1822.
Ya. G. Sinai, "Dynamical Systems with Elastic Reflections", Russian Mathematical Surveys, 25, (1970) pp. 137–191.
V. I. Arnold and A. Avez, Théorie ergodique des systèmes dynamiques, (1967), Gauthier-Villars, Paris. (English edition: Benjamin-Cummings, Reading, Mass. 1968). (Provides discussion and references for Sinai's billiards.)
D. Heitmann, J.P. Kotthaus, "The Spectroscopy of Quantum Dot Arrays", Physics Today (1993) pp. 56–63. (Provides a review of experimental tests of quantum versions of Sinai's billiards realized as nano-scale (mesoscopic) structures on silicon wafers.)
S. Sridhar and W. T. Lu, "Sinai Billiards, Ruelle Zeta-functions and Ruelle Resonances: Microwave Experiments", (2002) Journal of Statistical Physics, Vol. 108 Nos. 5/6, pp. 755–766.
Linas Vepstas, Sinai's Billiards, (2001). (Provides ray-traced images of Sinai's billiards in three-dimensional space. These images provide a graphic, intuitive demonstration of the strong ergodicity of the system.)
N. Chernov and R. Markarian, "Chaotic Billiards", 2006, Mathematical Surveys and Monographs no. 127, AMS.
Strange billiards
T. Schürmann and I. Hoffmann, "The entropy of strange billiards inside n-simplexes", J. Phys. A 28, pp. 5033ff, 1995.
Bunimovich stadium
Flash animation illustrating the chaotic Bunimovich Stadium
Generalized billiards
M. V. Deryabin and L. D. Pustyl'nikov, "Generalized relativistic billiards", Reg. and Chaotic Dyn. 8(3), pp. 283–296 (2003).
M. V. Deryabin and L. D. Pustyl'nikov, "On Generalized Relativistic Billiards in External Force Fields", Letters in Mathematical Physics, 63(3), pp. 195–207 (2003).
M. V. Deryabin and L. D. Pustyl'nikov, "Exponential attractors in generalized relativistic billiards", Comm. Math. Phys. 248(3), pp. 527–552 (2004).
External links
Scholarpedia entry on Dynamical Billiards (Leonid Bunimovich)
Introduction to dynamical systems using billiards, Max Planck Institute for the Physics of Complex Systems
Dynamical systems
Cue sports | Dynamical billiards | [
"Physics",
"Mathematics"
] | 3,429 | [
"Mechanics",
"Dynamical systems"
] |