| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
54,593,972 | https://en.wikipedia.org/wiki/Steroid%20reductase | Steroid reductases are reductase enzymes that are involved in steroid biosynthesis and metabolism. They include:
5α-Reductase
5β-Reductase
See also
Steroidogenic enzyme
References
Oxidoreductases | Steroid reductase | Chemistry | 54 |
31,643,950 | https://en.wikipedia.org/wiki/FIND%20Technology | FIND® technology is a directed evolution technology that uses DNA recombination to improve the properties of proteins. It eliminates unimportant and deleterious mutations while maintaining and combining beneficial mutations that enhance protein function.
Procedures
For the relevant gene, a library of single-stranded oligonucleotides is acquired and then subjected to random mutagenesis. The newly mutated library is then subjected to exonuclease activity, creating both sense and anti-sense fragments. Partially overlapping fragments are then recombined and extended using a PCR-like method. These double-stranded mutants are then screened for the desired optimized function using a relevant assay. The best mutants are chosen for further exonuclease treatment. The process (exonuclease digestion, PCR-like recombination, and mutant screening) is repeated, usually about 10–12 times, in order to obtain the best possible mutants carrying only beneficial mutations.
Example
CHIPS
CHIPS is a protein that inhibits immune cell activation normally associated with inflammation. CHIPS has potential as an anti-inflammatory agent, but the native protein has been associated with immune activation and interaction with antibodies. FIND® technology was used to create a truncated yet functional mutant of this protein with reduced antibody interaction.
Intellectual property
The company Alligator Bioscience has the intellectual rights to the FIND technology and uses it both for contract work optimizing proteins for the pharmaceutical industry and to develop their own protein drugs.
References
Evolutionary biology
Biotechnology | FIND Technology | Biology | 304 |
6,111,458 | https://en.wikipedia.org/wiki/Dublin%20Molecular%20Medicine%20Centre | Dublin Molecular Medicine Centre (DMMC) was a charity set up in 2002 to create critical mass in molecular medicine research in Dublin, Ireland. Funding was provided by the Higher Education Authority.
Resources
The academic resources supporting the teaching hospitals include:
UCD Conway Institute of Biomolecular & Biomedical Research, which is organised into three interactive multi-disciplinary centres: synthesis and chemical biology; integrative biology; and molecular medicine.
RCSI Research Institute, whose portfolio included cellular neuroscience, molecular research, advanced drug delivery, proteomics and pharmacy.
TCD Institute of Molecular Medicine, which focuses on cancer (prostate, haematological, esophageal, cervical, thoracic); infection and immunity (tuberculosis); and genomic research into inflammatory disease, molecular histopathology, cell signalling, neuropsychiatric genetics and nutrigenomics.
New Clinical Research Centre
DMMC secured funding from the Wellcome Trust for a major clinical research centre to be led by Professor Dermot P. Kelleher for Dublin comprising two elements:
A new research centre at St. James's Hospital, Dublin.
A network of new clinical research facilities linking the proposed new centre to existing centres at Beaumont Hospital, Dublin, St. Vincent’s University Hospital Dublin and Mater Misericordiae University Hospital
Successor
Dublin Molecular Medicine Centre evolved to become Molecular Medicine Ireland which was established in 2008.
References
External links
DMMC 2005 Annual Report
Medical and health organisations based in the Republic of Ireland
Economy of Dublin (city)
Pharmaceutical industry
2002 establishments in Ireland | Dublin Molecular Medicine Centre | Chemistry,Biology | 313 |
447,288 | https://en.wikipedia.org/wiki/Project%20Athena | Project Athena was a joint project of MIT, Digital Equipment Corporation, and IBM to produce a campus-wide distributed computing environment for educational use. It was launched in 1983, and research and development ran until June 30, 1991. Athena is still in production use at MIT. It works as software (currently a set of Debian packages) that turns a machine into a thin client, which downloads educational applications from the MIT servers on demand.
Project Athena was important in the early history of desktop and distributed computing. It created the X Window System, Kerberos, and Zephyr Notification Service. It influenced the development of thin computing, LDAP, Active Directory, and instant messaging.
Description
Leaders of the $50 million, five-year project at MIT included Michael Dertouzos, director of the Laboratory for Computer Science; Jerry Wilson, dean of the School of Engineering; and Joel Moses, head of the Electrical Engineering and Computer Science department. DEC agreed to contribute more than 300 terminals, 1600 microcomputers, 63 minicomputers, and five employees. IBM agreed to contribute 500 microcomputers, 500 workstations, software, five employees, and grant funding.
History
In 1979 Dertouzos proposed to university president Jerome Wiesner that the university network mainframe computers for student use. At that time MIT used computers throughout its research, but undergraduates did not use computers except in Course VI (computer science) classes. With no interest from the rest of the university, the School of Engineering in 1982 approached DEC for equipment for itself. President Paul E. Gray and the MIT Corporation wanted the project to benefit the rest of the university, and IBM agreed to donate equipment to MIT except to the engineering school.
Project Athena began in May 1983. Its initial goals were to:
Develop computer-based learning tools that are usable in multiple educational environments
Establish a base of knowledge for future decisions about educational computing
Create a computational environment supporting multiple hardware types
Encourage the sharing of ideas, code, data, and experience across MIT
The project intended to extend computer power into fields of study outside computer science and engineering, such as foreign languages, economics, and political science. To implement these goals, MIT decided to build a Unix-based distributed computing system. Unlike students at Carnegie Mellon University, which also received the IBM and DEC grants, MIT students did not have to own their own computers; MIT built computer labs for their users, although the goal was to put networked computers into each dormitory. Students were required to learn FORTRAN and Lisp, and would have access to "3M" computers, capable of 1 million instructions per second and with 1 megabyte of RAM and a 1-megapixel display.
Although IBM and DEC computers were hardware-incompatible, Athena's designers intended that software would run similarly on both. MIT did not want to be dependent on one vendor at the end of Athena. Sixty-three DEC VAX-11/750 servers were the first timesharing clusters. "Phase II" began in September 1987, with hundreds of IBM RT PC workstations replacing the VAXes, which became fileservers for the workstations. The DEC-IBM division between departments no longer existed. Upon logging into a workstation, students would have immediate access to a universal set of files and programs via central services. Because the workstation used a thin client model, the user interface would be consistent despite the use of different hardware vendors for different workstations. A small staff could maintain hundreds of clients.
The project spawned many technologies that are widely used today, such as the X Window System and Kerberos. Among the other technologies developed for Project Athena were the Zephyr Notification Service and the Hesiod name and directory service.
MIT had 722 workstations in 33 private and public clusters on and off campus, including student living groups and fraternities. A survey found that 92% of undergraduates had used the Athena workstations at least once, and 25% used them every day. The project received an extension of three years in January 1988. Developers who had focused on creating the operating system and courseware for various educational subjects now worked to improve Athena's stability and make it more user friendly. When Project Athena ended in June 1991, MIT's IT department took it over and extended it into the university's research and administrative divisions.
In 1993, the IBM RT PC workstations were retired, being replaced by Sun SPARCclassic, IBM RS/6000 POWERstation 220, and Personal DECstation 5000 Model 25 systems. The MIT campus had more than 1,300 Athena workstations, and more than 6,000 Athena users logged into the system daily. Athena is still used by many in the MIT community through the computer labs scattered around the campus. It is also now available for installation on personal computers, including laptops.
Educational computing environment
Athena continues in use, providing a ubiquitous computing platform for education at MIT; plans are to continue its use indefinitely.
Athena was designed to minimize the use of labor in its operation, in part through the use of (what is now called ) "thin client" architecture and standard desktop configurations. This not only reduces labor content in operations but also minimizes the amount of training for deployment, software upgrade, and trouble-shooting. These features continue to be of considerable benefit today.
In keeping with its original intent, access to the Athena system has been greatly enlarged in the last several years. Whereas in 1991 much of the access was in public "clusters" (computer labs) in academic buildings, access has been extended to dormitories, fraternities and sororities, and independent living groups. All dormitories have officially supported Athena clusters. In addition, most dormitories have "quick login" kiosks: standup workstations with a timer that limits access to ten minutes. The dormitories have "one port per pillow" Internet access.
Originally, the Athena release used Berkeley Software Distribution (BSD) as the base operating system for all hardware platforms. Public clusters consisted of Sun SPARC and SGI Indy workstations. SGI hardware was dropped in anticipation of the end of IRIX production in 2006. Linux-Athena was introduced in version 9, with the Red Hat Enterprise Linux operating system running on cheaper x86 or x86-64 hardware. Athena 9 also replaced the internally developed "DASH" menu system and Motif Window Manager (mwm) with a more modern GNOME desktop. Athena 10 is based on Ubuntu Linux (derived from Debian) only. Support for Solaris is expected to be dropped almost entirely.
Educational software
"I felt that, we would know Athena was successful, if we were surprised by some of the applications, it turned out that our surprises were largely in the humanities" — Joel Moses
The original concept of Project Athena was that there would be course-specific software developed to use in conjunction with teaching. Today, computers are most frequently used for "horizontal" applications such as e-mail, word processing, communications, and graphics.
The big impact of Athena on education has been the integration of third-party applications into courses. Maple and, especially, MATLAB are integrated into large numbers of science and engineering classes. Faculty expect that their students have access to, and know how to use, these applications for projects and homework assignments, and some have used the MATLAB platform to rebuild the courseware that they had originally built using the X Window System.
More specialized third-party software is used on Athena for more discipline-specific work. Rendering software for architecture and computer graphics classes, molecular modeling software for chemistry, chemical engineering, and materials science courses, and professional software used by chemical engineers in industry are important components of a number of MIT classes in various departments.
Contributing to the development of distributed systems
Athena was not a research project, and the development of new models of computing was not a primary objective of the project. Indeed, quite the opposite was true. MIT wanted a high-quality computing environment for education. The only apparent way to obtain one was to build it internally, using existing components where available, and augmenting those components with software to create the desired distributed system. However, the fact that this was a leading edge development in an area of intense interest to the computing industry worked strongly to the favor of MIT by attracting large amounts of funding from industrial sources.
Long experience has shown that advanced development directed at solving important problems tends to be much more successful than advanced development promoting technology that must look for a problem to solve. Athena is an excellent example of advanced development undertaken to meet a need that was both immediate and important. The need to solve a "real" problem kept Athena on track to focus on important issues and solve them, and to avoid getting side-tracked into academically interesting but relatively unimportant problems. Consequently, Athena made very significant contributions to the technology of distributed computing, but as a side-effect to solving an educational problem.
The leading edge system architecture and design features pioneered by Athena, using current terminology, include:
Client–server model of distributed computing using three-tier architecture (see Multitier architecture)
Thin client (stateless) desktops
System-wide security system (Kerberos encrypted authentication and authorization)
Naming service (Hesiod)
X Window System, widely used within the Unix community
X tool kit for easy construction of human interfaces
Instant messaging (Zephyr real time notification service)
System-wide use of a directory system
Integrated system-wide maintenance system (Moira Service Management System)
On-Line Help system (OLH)
Public bulletin board system (Discuss)
Many of the design concepts developed in the "on-line consultant" now appear in popular help desk software packages.
Because the functional and system management benefits provided by the Athena system were not available in any other system, its use extended beyond the MIT campus. In keeping with the established policy of MIT, the software was made available at no cost to all interested parties. Digital Equipment Corporation, having implemented Athena at various beta-test sites, "productized" the software as DECAthena to make it more portable, and offered it along with support services to the market. A number of academic and industrial organizations installed the Athena software. As of early 1992, 20 universities worldwide were using DECathena, with a reported 30 commercial organisations evaluating the product.
The architecture of the system also found use beyond MIT. The architecture of the Distributed Computing Environment (DCE) software from the Open Software Foundation was based on concepts pioneered by Athena. Subsequently, the Windows NT network operating system from Microsoft incorporates Kerberos and several other basic architecture design features first implemented by Athena.
Use outside MIT
Pixar Animation Studios, the computer graphics and animation company (then the Lucasfilm Computer Graphics Project, now owned by Walt Disney Pictures), used ten VAX-11/750 superminicomputers at Project Athena, for some of the rendering of The Adventures of André and Wally B.
Iowa State University runs an implementation of Athena named "Project Vincent", named after John Vincent Atanasoff, the inventor of the Atanasoff–Berry Computer.
North Carolina State University also runs a variation of Athena named "Eos/Unity".
Carnegie Mellon University began a similar system a year earlier than MIT called Project Andrew which spawned AFS, Athena's current filesystem.
University of Maryland, College Park also ran a variation of Athena on the WAM (Workstations at Maryland) and Glue systems, now renamed "TerpConnect".
See also
tkWWW, a defunct web browser developed for the project by Joseph Wang
References
Sources
External links
Athena at MIT
TerpConnect (formerly Project Glue) at UMD College Park
Guide to the Ellen McDaniel Collection of Project Athena and Project Vincent Manuals and Other Materials 1986-1993
Computer-related introductions in 1983
1983 establishments in Massachusetts
Massachusetts Institute of Technology
Software projects
Athena, Project | Project Athena | Technology,Engineering | 2,428 |
11,422,170 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z17 | In molecular biology, snoRNA Z17 is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA Z17 is a member of the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. snoRNA Z17B is predicted to guide the 2'-O-ribose methylation of 18S rRNA at position U121. Two forms of this snoRNA are found in the intron of the ribosomal protein L23a gene.
References
External links
snoRNA Z17B in snoRNABase
Small nuclear RNA | Small nucleolar RNA Z17 | Chemistry | 244 |
50,362,921 | https://en.wikipedia.org/wiki/Seele%20GmbH | Seele GmbH (stylized as seele) is involved in the design and construction of facades and complex building envelopes made from glass, steel, aluminium, membranes and other materials. It was founded in 1984 by glazier Gerhard Seele and steelwork engineer Siegfried Gossner. About 1,000 employees work at Seele's 12 locations around the world.
The company produced facade panes for the Apple Park as well as many Apple Stores.
History
Seele was founded in 1984 and is based in Gersthofen, near Augsburg in Bavaria, Germany. Gersthofen is the location of the company's central production plant for unitised façades and of an engineering design office with more than 150 staff. Consulting, logistics, site supervision and general project management are among Seele's services.
References
1984 establishments in West Germany
Companies based in Augsburg
Manufacturing companies established in 1984
Structural steel | Seele GmbH | Engineering | 187 |
11,244,549 | https://en.wikipedia.org/wiki/Michael%20Reiss | Michael J. Reiss (born 1960) is a British bioethicist, educator, and journalist. He is also an Anglican priest. Reiss is professor of science education at the Institute of Education, University College London, where he is assistant director, research and development.
Family
Reiss's father was an obstetrician; his mother, a midwife. His father was Jewish; his mother, an agnostic. Reiss had a secular upbringing in north London.
Career
He began his career as a schoolteacher at Hills Road Sixth Form College, Cambridge in 1983. In 1989, he became a lecturer and tutor in the Department of Education at the University of Cambridge. At the age of 29, Reiss began training for ministry in the Church of England with the East Anglian Ministerial Training Course: he was ordained in the Church of England as a deacon in 1990 and as a priest in 1991. For many years, he led the Sunday service in his local village near Cambridge. He was a senior lecturer at Cambridge until 1998, then reader in education and bioethics until 2000. From 2003, he was chief executive of the Science Learning Centre in London.
From 2006 to 2008, he was director of education at the Royal Society, a position he resigned on 16 September 2008, following protests about his views on tackling creationism when teaching evolution in schools, which the Royal Society said were "open to misinterpretation".
Reiss works in the fields of science education, bioethics, and sex education. He has a special interest in the ethical implications of genetic engineering. He was formerly head of the School of Mathematics, Science, and Technology at the Institute of Education, University College London. In science education, he currently directs projects funded by the Department for Children, Schools and Families, including a longitudinal, ethnographic study of pupils' learning, currently in its eleventh year.
Reiss is a frequent consultant to the Royal Society, the Qualifications and Curriculum Authority, the Training and Development Agency for Schools (formerly known as the Teacher Training Agency or the TTA) and other organisations. He serves on the editorial board of the International Journal of Science Education. He was a specialist adviser to the House of Lords Select Committee on Animals in Scientific Procedures, 2001–02, and is a member of the Farm Animal Welfare Council.
As early as November 2006, Reiss suggested that, rather than dismissing creationism as a "misconception," teachers should take the time to explain why creationism had no scientific basis. In September 2008, his views were presented in some media reports as lending support to teaching creationism as a legitimate point of view; however both he and the Royal Society later stated that this was a misrepresentation. Reiss stressed that the topic should not be taught as science, but rather should be construed as a cultural "Worldview." Reiss argued that it was more effective to engage with pupils' ideas about creationism, rather than to obstruct discussion with those who do not accept the scientific version of the evolution of species.
In July 2009, he led a number of the UK's most senior scientists in writing to the Schools Secretary Ed Balls to complain that Ofsted's proposed new curriculum for primary schools did not mention evolution.
In 2010 Reiss debated Michael Behe on the topic of Intelligent Design.
In 2022, he was elected a member of the Academia Europaea.
References
External links
https://web.archive.org/web/20170523121822/http://reiss.tc/
Institute of Education web page
Prof Michael Reiss at IRIS UCL
1960 births
Living people
20th-century British biologists
21st-century British biologists
English Jews
20th-century English Anglican priests
21st-century English Anglican priests
Schoolteachers from Cambridgeshire
Bioethicists
People educated at Westminster School, London
Converts to Anglicanism from atheism or agnosticism
Academics of the UCL Institute of Education
Fellows of the Royal Society of Biology
Members of Academia Europaea
British male journalists
Academics of the University of Cambridge
Theistic evolutionists
Science activists | Michael Reiss | Biology | 837 |
52,942,170 | https://en.wikipedia.org/wiki/Tris%28trimethylsilyl%29phosphine | Tris(trimethylsilyl)phosphine is the organophosphorus compound with the formula P(SiMe3)3 (Me = methyl). It is a colorless liquid that ignites in air and hydrolyses readily.
Synthesis
Tris(trimethylsilyl)phosphine is prepared by treating white phosphorus with trimethylsilyl chloride and sodium–potassium alloy:
1/4 P4 + 3 Me3SiCl + 3 K → P(SiMe3)3 + 3 KCl
Several other methods exist.
Reactions
The compound hydrolyzes to give phosphine:
P(SiMe3)3 + 3 H2O → PH3 + 3 HOSiMe3
Treatment of certain acyl chlorides with tris(trimethylsilyl)phosphine gives phosphaalkynes, one example being tert-butylphosphaacetylene.
Reaction with potassium tert-butoxide cleaves one P-Si bond, giving the phosphide salt:
P(SiMe3)3 + KO-t-Bu → KP(SiMe3)2 + Me3SiO-t-Bu
It is a reagent in the preparation of metal phosphido clusters by reaction with metal halides or carboxylates. In such reactions the silyl halide or silyl carboxylate is liberated as illustrated in this idealized reaction:
P(SiMe3)3 + 3 CuCl → Cu3P + 3 ClSiMe3
Safety
Tris(trimethylsilyl)phosphine spontaneously ignites in air, thus it is handled using air-free techniques.
References
Trimethylsilyl compounds
Phosphines | Tris(trimethylsilyl)phosphine | Chemistry | 366 |
5,667,061 | https://en.wikipedia.org/wiki/Gain%20%28projection%20screens%29 | Gain is a property of a projection screen and is one of the specifications quoted by projection screen manufacturers.
Interpretation
The measured number is called the peak gain at zero degrees viewing axis. It represents the gain value for a viewer seated along a line perpendicular to the screen's viewing surface. The gain value represents the screen's brightness ratio relative to a set standard (in this case, a sheet of magnesium carbonate). Screens with a higher brightness than this standard are rated with a gain higher than 1.0, while screens with lower brightness are rated from 0.0 to 1.0. Since a projection screen is designed to scatter the impinging light back to the viewers, the scattering can be highly diffuse or highly concentrated. Highly concentrated scatter results in a higher screen gain (a brighter image) at the cost of a more limited viewing angle (as measured by the half-gain viewing angle), whereas highly diffuse scattering results in lower screen gain (a dimmer image) with the benefit of a wider viewing angle.
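One compact way to express the relationships described above (a sketch; the luminance symbols are chosen here for illustration and are not from a standard):

```latex
% Peak gain: on-axis luminance of the screen relative to a magnesium
% carbonate reference viewed under the same illumination.
G = \frac{L_{\text{screen}}(\theta = 0^{\circ})}{L_{\text{ref}}(\theta = 0^{\circ})}

% Half-gain viewing angle: the off-axis angle at which the screen's
% luminance (and hence its gain) falls to half the peak value.
L_{\text{screen}}(\theta_{1/2}) = \tfrac{1}{2}\, L_{\text{screen}}(0^{\circ})
```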
Sources
Display technology | Gain (projection screens) | Engineering | 211 |
51,944 | https://en.wikipedia.org/wiki/Dichroism | In optics, a dichroic material is either one which causes visible light to be split up into distinct beams of different wavelengths (colours) (not to be confused with dispersion), or one in which light rays having different polarizations are absorbed by different amounts.
In beam splitters
The original meaning of dichroic, from the Greek dikhroos, two-coloured, refers to any optical device which can split a beam of light into two beams with differing wavelengths. Such devices include mirrors and filters, usually treated with optical coatings, which are designed to reflect light over a certain range of wavelengths and transmit light which is outside that range. An example is the dichroic prism, used in some camcorders, which uses several coatings to split light into red, green and blue components for recording on separate CCD arrays; however, it is now more common to use a Bayer filter to filter individual pixels on a single CCD array. This kind of dichroic device does not usually depend on the polarization of the light. The term dichromatic is also used in this sense.
With polarized light
The second meaning of dichroic refers to the property of a material, in which light in different polarization states traveling through it experiences a different absorption coefficient; this is also known as diattenuation. When the polarization states in question are right and left-handed circular polarization, it is then known as circular dichroism (CD). Most materials exhibiting CD are chiral, although non-chiral materials showing CD have been recently observed. Since the left- and right-handed circular polarizations represent two spin angular momentum (SAM) states, in this case for a photon, this dichroism can also be thought of as spin angular momentum dichroism and could be modelled using quantum mechanics.
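One common way to quantify diattenuation is sketched below; the symbols are chosen here for illustration, with T_max and T_min denoting the intensity transmittances of the least- and most-attenuated orthogonal polarization states:

```latex
% Diattenuation ranges from 0 (no polarization dependence of the absorption)
% to 1 (one polarization state is completely absorbed).
D = \frac{T_{\max} - T_{\min}}{T_{\max} + T_{\min}}
```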
In some crystals, such as tourmaline, the strength of the dichroic effect varies strongly with the wavelength of the light, making them appear to have different colours when viewed with light having differing polarizations. This is more generally referred to as pleochroism, and the technique can be used in mineralogy to identify minerals. In some materials, such as herapathite (iodoquinine sulfate) or Polaroid sheets, the effect is not strongly dependent on wavelength.
In liquid crystals
Dichroism, in the second meaning above, occurs in liquid crystals due to either the optical anisotropy of the molecular structure or the presence of impurities or the presence of dichroic dyes. The latter is also called a guest–host effect.
See also
Birefringence
Dichromatism
Lycurgus Cup
Pleochroism
References
Polarization (waves) | Dichroism | Physics | 573 |
37,104,757 | https://en.wikipedia.org/wiki/ConventionCamp | Convention Camp is a conference on the digital future, social media and web culture. It has taken place annually since 2008 at the Hanover fairground and is the largest BarCamp in Germany.
History
The Convention Camp was launched in 2008 by the publisher yeebase media and the web agency w3design. The first event was held on October 2, 2008, at the University of Hanover. It initially focused on knowledge management and social media software for companies.
The second event was held on November 26, 2009. Since 2010, the Institute for Marketing and Management of the University of Hannover has supported the ConventionCamp.
On November 8, 2011, the fourth ConventionCamp took place. Against the background of Stuttgart 21 and Occupy Wall Street, the program focused on the relationship between society and the Internet. A total of 1,500 visitors attended the conference. In the evening, the t3n Web Awards were presented for the best German websites in several categories.
In 2012, Julian Assange took part by videoconference. In other sessions, speakers such as Markus Franz discussed the future of Wikipedia.
Notability
The ConventionCamp is a mix of a classic conference with a fixed program and an unconference in which early visitors can submit their own session proposals. With 1,500 participants, the most recent ConventionCamp was (as of September 2012) the largest BarCamp in Germany and, after re:publica, the second-largest conference on Internet issues in the country. The topics discussed at the event receive wide coverage in the trade press, to which the participation of prominent speakers contributes.
In March 2011, the ConventionCamp and its initiators received the LIDA Award. The abbreviation LIDA stands for Leader in the Digital Age; the patron of the award was the Lower Saxony Minister of Economics, Jörg Bode.
References
External links
Website of the ConventionCamp
Conferences | ConventionCamp | Technology | 372 |
528,808 | https://en.wikipedia.org/wiki/Chess%20symbols%20in%20Unicode | Unicode has text representations of chess pieces. These make it possible to produce the symbols in plain text without the need for a graphical interface. The inclusion of the chess symbols enables the use of figurine algebraic notation, which replaces the letter that stands for a piece with its symbol, e.g. ♘c6 instead of Nc6. This also allows chess games to be played in text-only environments, such as the terminal.
Unicode blocks
Unicode 15.1 specifies a total of 110 chess-related symbols spread across two blocks. The standard set of chess pieces—king, queen, rook, bishop, knight, and pawn, each with white and black variants—was included in the block Miscellaneous Symbols. In Unicode 12.0, the Chess Symbols block (U+1FA00–U+1FA6F) was allocated for the inclusion of extra chess piece representations. This includes fairy chess pieces, such as rotated pieces, neutral (neither white nor black) pieces, knighted pieces, and equihoppers, as well as xiangqi pieces.
In 2024, four shatranj pieces were provisionally assigned for a future version in the range U+1FA54–U+1FA57.
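Below is a minimal illustrative sketch (not from the article) showing how the standard piece code points can be printed from a program; it assumes a UTF-8 terminal and covers only the six white pieces of the Miscellaneous Symbols block, U+2654–U+2659:

```cpp
#include <iostream>

int main() {
    // Letters used in English algebraic notation and the corresponding
    // white figurine symbols from the Miscellaneous Symbols block.
    const char letters[] = {'K', 'Q', 'R', 'B', 'N', 'P'};
    const char* figurines[] = {"\u2654", "\u2655", "\u2656",
                               "\u2657", "\u2658", "\u2659"};
    for (int i = 0; i < 6; ++i)
        std::cout << letters[i] << " -> " << figurines[i] << '\n';

    // Figurine algebraic notation: the knight move Nc6 rendered with a symbol.
    std::cout << "\u2658" << "c6 instead of Nc6\n";
    return 0;
}
```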
Emoji
In Unicode 11.0, an emojified representation of the black chess pawn (U+265F) was added. As of Unicode 15.1, only this character has an emoji version. In 2024, a proposal was submitted to include emoji versions of the other standard chess symbols.
References
Chess notation
Lists of symbols
Unicode | Chess symbols in Unicode | Mathematics | 303 |
31,341,243 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20of%20Georgia | The Ministry of Energy of Georgia (, sakartvelos energetikis saministro) was a governmental agency within the Cabinet of Georgia in charge of regulating the activities in the energy sector of Georgia from 1991 to 2017.
Structure
The ministry was headed by a minister appointed by the President of Georgia. Five deputy ministers reported directly to the minister. The main functions of the ministry were increasing capabilities for maximum exploitation of the available energy resources in the country and diversification of the energy supply imported from other countries; improving and modernizing electricity supply by enhancing the hydropower capacity of Georgia; renovation of existing and construction of new power stations and natural gas transportation infrastructure; development of alternative energy sources; and improvement of infrastructure for making the country a reliable transit point for regional energy projects.
Due to improvements in recent years, Georgia has become a major exporter of electricity in the region, exporting 1.3 billion kWh in 2010. Hydropower stations produce 80–85% of the electricity utilized within the country; the remaining 15–20% is produced by thermal power stations. According to the authorities, so far Georgia has been exploiting only 18% of its hydropower potential.
Ministers after 2000
David Mirtskhulava, 2000-2003
Nika Gilauri, February 2004–September 2007
Alexander Khetaguri, September 2007–August 2012
Vakhtang Balavadze, August 2012–October 2012
Kakha Kaladze, October 2012–July 2017
Elia Eloshvili, July 2017–December 2017
See also
Cabinet of Georgia
Economy of Georgia
References
Energy
Georgia
Ministries disestablished in 2017
2017 disestablishments in Georgia (country)
Ministries established in 1991 | Ministry of Energy of Georgia | Engineering | 342 |
937,766 | https://en.wikipedia.org/wiki/Gliese%20570 | Gliese 570 (or 33 G. Librae) is a quaternary star system approximately 19 light-years away. The primary star is an orange dwarf star (much dimmer and smaller than the Sun). The secondary components form a binary system of two red dwarfs that orbits the primary star. A brown dwarf has been confirmed to be orbiting in the system. In 1998, an extrasolar planet was thought to orbit the primary star, but the claim was discounted in 2000.
Distance and visibility
In the night sky, the Gliese 570 system lies in the southwestern part of Libra. The system is southwest of Alpha Librae and northwest of Sigma Librae. In the early 1990s, the European Hipparcos mission measured the parallax of components B and C, suggesting that the system was at a distance of 24.4 light-years from the Sun. This measurement, however, carried a relatively large error: Earth-based parallax and orbital observations show that the two stars are actually part of a system with Gliese 570 A and must lie at the same distance as the primary.
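For reference, the standard relation used to convert a measured annual parallax into a distance (a general formula, not specific to this system):

```latex
% Distance in parsecs from the parallax angle p measured in arcseconds;
% one parsec is approximately 3.26 light-years.
d\,[\text{pc}] = \frac{1}{p\,[\text{arcsec}]}
```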
Star system
The primary star of the system (component A) is an orange dwarf star that may have just over three-fourths the mass of the Sun, about 77 percent of its radius, and only 15.6 percent of its visual luminosity. It is separated by 190 astronomical units from the binary components B and C, moving in an eccentric orbit that takes at least 2,130 years to complete. Gliese 570 A is of spectral type K4V and emits X-rays. Radial velocities of the primary obtained in the course of an extrasolar planet search at Lick Observatory show a linear trend, probably due to the orbital motion of the Gliese 570 BC pair around the primary.
A binary system in their own right, components B and C are both rather dim red dwarf stars that have less mass, radius, and luminosity than the Sun. Component B is spectral type M1V, component C is spectral type M3V, and both emit X-rays.
On January 15, 2000, astronomers announced that they had found one of the coolest brown dwarfs then known. Catalogued as Gliese 570 D, it was observed at a wide separation of more than 1,500 astronomical units from the triple star system. It has an estimated mass of 50 times that of Jupiter.
The status of Gliese 570 D as a brown dwarf was confirmed by Doppler spectroscopy at the Cerro Tololo Interamerican Observatory in Chile. The surface temperature of this substellar object was found to be a relatively cool 500 degrees Celsius, making it cooler and less luminous than any other then-known brown dwarf (including the prototype "T" dwarf), and classifying the object as a T7-8V brown dwarf. No X-rays have been reported from this brown dwarf.
Search for planets
In 1998, an extrasolar planet was announced to orbit the primary star within the Gliese 570 system. The planet, identified as "Gliese 570 Ab", was considered doubtful and the claim was retracted in 2000. No extrasolar planets have been confirmed to exist in this multiple star system thus far.
See also
Epsilon Indi
Gliese 229
HD 188753
Iota Horologii
Phi2 Pavonis
Notes
References
External links
Libra (constellation)
K-type main-sequence stars
M-type main-sequence stars
T-type brown dwarfs
Triple star systems
Librae, 33
0570
131976
5568
073182 4
Durchmusterung objects | Gliese 570 | Astronomy | 745 |
3,111,791 | https://en.wikipedia.org/wiki/Beta%20Canum%20Venaticorum | Beta Canum Venaticorum (β Canum Venaticorum, abbreviated Beta CVn, β CVn), also named Chara, is a G-type main-sequence star in the northern constellation of Canes Venatici. At an apparent visual magnitude of 4.25, it is the second-brightest star in the constellation. The star's distance from the Sun has been determined from measurements of its annual parallax shift.
Along with the brighter star Cor Caroli, the pair form the "southern dog" in this constellation that represents hunting dogs.
Nomenclature
β Canum Venaticorum (Latinised to Beta Canum Venaticorum) is the star's Bayer designation.
The traditional name Chara was originally applied to the "southern dog", but it later became used specifically to refer to Beta Canum Venaticorum. Chara (χαρά) means 'joy' in Greek. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Chara for this star.
In Chinese astronomy, Beta Canum Venaticorum belongs to an asterism known as the Imperial Guards, which consists of Beta Canum Venaticorum, Alpha Canum Venaticorum, 10 Canum Venaticorum, 6 Canum Venaticorum, 2 Canum Venaticorum, and 67 Ursae Majoris; the star's Chinese name derives from its membership in this asterism.
Characteristics
Beta CVn has a stellar classification of G0 V, and so is a G-type main-sequence star. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. The spectrum shows a very weak emission line of singly ionized calcium (Ca II) from the chromosphere, making it a useful reference spectrum for comparison with other stars in a similar spectral category. (The Ca II emission lines are readily accessible and can be used to measure the level of activity in a star's chromosphere.)
Beta CVn is considered to be slightly metal-poor, which means it has a somewhat lower portion of elements heavier than helium when compared to the Sun. In terms of mass, age and evolutionary status, however, this star is very similar to the Sun. As a result, it has been called a solar analog. It is about 3% less massive than the Sun, with a radius 3% larger than the Sun's and 25% greater luminosity.
The components of this star's space velocity have been measured. In the past it was suggested that it may be a spectroscopic binary. However, further analysis of the data does not seem to bear that out. In addition, a 2005 search for a brown dwarf in orbit around this star failed to discover any such companion, at least down to the sensitivity limit of the instrument used.
Habitability
In 2006, astronomer Margaret Turnbull labeled Beta CVn as the top stellar system candidate to search for extraterrestrial life forms. Because of its solar-type properties, astrobiologists have listed it among the most astrobiologically interesting stars within 10 parsecs of the Sun. However, as of 2009, this star is not known to host planets.
See also
List of star systems within 25–30 light-years
References
External links
Canes Venatici
Canum Venaticorum, Beta
G-type main-sequence stars
Canum Venaticorum, Beta
Chara
Canum_Venaticorum, 08
061317
109358
4785
Durchmusterung objects
TIC objects | Beta Canum Venaticorum | Astronomy | 789 |
7,974,227 | https://en.wikipedia.org/wiki/Lehmer%20matrix | In mathematics, particularly matrix theory, the n×n Lehmer matrix (named after Derrick Henry Lehmer) is the constant symmetric matrix with entries A_ij = min(i, j) / max(i, j).
Alternatively, this may be written as A_ij = i/j for j ≥ i and A_ij = j/i for j < i.
Properties
As can be seen in the examples section, if A is an n×n Lehmer matrix and B is an m×m Lehmer matrix, then A is a submatrix of B whenever m>n. The values of elements diminish toward zero away from the diagonal, where all elements have value 1.
The inverse of a Lehmer matrix is a tridiagonal matrix, where the superdiagonal and subdiagonal have strictly negative entries. Consider again the n×n A and m×m B Lehmer matrices, where m > n. A rather peculiar property of their inverses is that A⁻¹ is nearly a submatrix of B⁻¹, except for the (n, n) element of A⁻¹, which is not equal to the (n, n) element of B⁻¹.
A Lehmer matrix of order n has trace n.
Examples
The 2×2, 3×3 and 4×4 Lehmer matrices and their inverses are shown below.
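The example matrices themselves are not reproduced here; the following is a minimal illustrative sketch (not from the article) that generates them from the definition above. Checking the tridiagonal structure of the inverses is left to a linear algebra library.

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>
#include <vector>

// Build the n-by-n Lehmer matrix with entries a(i, j) = min(i, j) / max(i, j),
// using 1-based indices i, j as in the definition above.
std::vector<std::vector<double>> lehmer(int n) {
    std::vector<std::vector<double>> a(n, std::vector<double>(n));
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j)
            a[i - 1][j - 1] = static_cast<double>(std::min(i, j)) / std::max(i, j);
    return a;
}

int main() {
    for (int n : {2, 3, 4}) {
        std::printf("Lehmer matrix of order %d:\n", n);
        const auto a = lehmer(n);
        double trace = 0.0;
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) std::printf("%7.4f", a[i][j]);
            std::printf("\n");
            trace += a[i][i];  // diagonal entries are all 1
        }
        std::printf("trace = %.0f (equals n, as stated above)\n\n", trace);
    }
    return 0;
}
```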
See also
Derrick Henry Lehmer
Hilbert matrix
References
Matrices | Lehmer matrix | Mathematics | 250 |
24,467,949 | https://en.wikipedia.org/wiki/Translational%20regulation | Translational regulation refers to the control of the levels of protein synthesized from its mRNA. This regulation is critically important to the cellular response to stressors, growth cues, and differentiation. In comparison to transcriptional regulation, it results in much more immediate cellular adjustment through direct regulation of protein concentration. The corresponding mechanisms primarily target the control of ribosome recruitment at the initiation codon, but can also involve modulation of peptide elongation, termination of protein synthesis, or ribosome biogenesis. While these general concepts are widely conserved, some of the finer details of this sort of regulation have been shown to differ between prokaryotic and eukaryotic organisms.
In prokaryotes
Initiation
Initiation of translation is regulated by the accessibility of ribosomes to the Shine-Dalgarno sequence. This stretch of four to nine purine residues is located upstream of the initiation codon and hybridizes to a pyrimidine-rich sequence near the 3' end of the 16S rRNA within the 30S bacterial ribosomal subunit. Polymorphism in this particular sequence has both positive and negative effects on the efficiency of base-pairing and subsequent protein expression. Initiation is also regulated by proteins known as initiation factors, which provide kinetic assistance to the binding between the initiation codon and tRNAfMet, which supplies the 3'-UAC-5' anticodon. IF1 binds the 30S subunit first, instigating a conformational change that allows for the additional binding of IF2 and IF3. IF2 ensures that tRNAfMet remains in the correct position, while IF3 proofreads initiation codon base-pairing to prevent non-canonical initiation at codons such as AUU and AUC. Generally, these initiation factors are expressed in equal proportion to ribosomes; however, experiments using cold-shock conditions have been shown to create stoichiometric imbalances among these components of the translational machinery. In this case, two- to three-fold changes in the expression of initiation factors coincide with increased favorability towards translation of specific cold-shock mRNAs.
Elongation
Because translation elongation is an irreversible process, there are few known mechanisms of its regulation. However, it has been shown that translational efficiency is reduced by diminished tRNA pools, which are required for the elongation of polypeptides. In fact, the richness of these tRNA pools is susceptible to change with the cellular oxygen supply.
Termination
The termination of translation requires coordination between release factor proteins, the mRNA sequence, and ribosomes. Once a termination codon is read, release factors RF-1, RF-2, and RF-3 contribute to the hydrolysis of the growing polypeptide, which terminates the chain. Bases downstream of the stop codon affect the activity of these release factors. In fact, some bases proximal to the stop codon suppress the efficiency of translation termination by reducing the enzymatic activity of the release factors. For instance, the termination efficiency of a UAAU stop signal is near 80%, while the efficiency of UGAC as a termination signal is only 7%.
In eukaryotes
Initiation
When comparing initiation in eukaryotes to prokaryotes, perhaps one of the first noticeable differences is the use of a larger 80S ribosome. Regulation of this process begins with the supply of methionine by a tRNA anticodon that base-pairs with AUG. This base pairing comes about through the scanning mechanism that ensues once the small 40S ribosomal subunit binds the 5' untranslated region (UTR) of mRNA. A consequence of this scanning mechanism, in contrast to the Shine-Dalgarno sequence used in prokaryotes, is the ability to regulate translation through upstream RNA secondary structures. This inhibition of initiation through complex RNA structures may be circumvented in some cases by way of internal ribosomal entry sites (IRESs) that localize pre-initiation complexes (PIC) to the start site. In addition, the guidance of the PIC to the 5' UTR is coordinated by subunits of the PIC known as eukaryotic initiation factors (eIFs). When some of these proteins are down-regulated through stresses, translation initiation is reduced by inhibiting cap-dependent initiation, the activation of translation by binding of eIF4E to the 5' 7-methylguanylate cap. eIF2 is responsible for coordinating the interaction between the Met-tRNAiMet and the P-site of the ribosome. Regulation by phosphorylation of eIF2 is largely associated with the termination of translation initiation. The serine kinases GCN2, PERK, PKR, and HRI are examples of detection mechanisms for differing cellular stresses that respond by slowing translation through eIF2 phosphorylation.
Elongation
The hallmark difference of elongation in eukaryotes in comparison to prokaryotes is its separation from transcription. While prokaryotes are able to undergo both cellular processes simultaneously, the spatial separation provided by the nuclear membrane prevents this coupling in eukaryotes. Eukaryotic elongation factor 2 (eEF2) is a regulatable GTP-dependent translocase that moves nascent polypeptide chains from the A-site to the P-site in the ribosome. Phosphorylation of threonine 56 inhibits the binding of eEF2 to the ribosome. Cellular stressors, such as anoxia, have been shown to induce translational inhibition through this biochemical interaction.
Termination
Mechanistically, eukaryotic translation termination matches its prokaryotic counterpart. In this case, termination of the polypeptide chain is achieved through the hydrolytic action of a heterodimer consisting of the release factors eRF1 and eRF3. Translation termination is said to be leaky in some cases, as near-cognate tRNAs may compete with release factors for binding to stop codons. This is possible due to the matching of 2 out of 3 bases within the stop codon by tRNAs that may occasionally outcompete release factor binding. An example of regulation at the level of termination is functional translational readthrough of the lactate dehydrogenase gene LDHB. This readthrough provides a peroxisomal targeting signal that localizes the distinct LDHBx protein to the peroxisome.
In plants
Translation in plants is tightly regulated as in animals, however, it is not as well understood as transcriptional regulation. There are several levels of regulation including translation initiation, mRNA turnover and ribosome loading. Recent studies have shown that translation is also under the control of the circadian clock. Like transcription, the translation state of numerous mRNAs changes over the diel cycle (day night period).
References
Gene expression
RNA | Translational regulation | Chemistry,Biology | 1,423 |
20,720,027 | https://en.wikipedia.org/wiki/Tube%20tester | A tube tester is an electronic instrument designed to test certain characteristics of vacuum tubes (thermionic valves). Tube testers evolved along with the vacuum tube to satisfy the demands of the time, and their evolution ended with the tube era. The first tube testers were simple units designed for specific tubes to be used in the battlefields of World War I by radio operators, so they could easily test the tubes of their communication equipment.
Types of tube testers
Modern testers
The most modern testers perform many of the tests described below and are fully automated. Examples of modern testers include the Amplitrex AT1000, the Space-Tech Lab AudioTubeTester, the Maxi pre-amp tester and the maxi-matcher (power tubes only) by maxi test, and the newer, somewhat more primitive DIVO VT1000 by Orange Amplification. While the AT1000, AudioTubeTester and the Maxi-test brand testers offer precise measurements of transconductance (Gm) and emission (iP) at full or near-full voltages, the Orange tester offers a very simple numerical quality scale. The AudioTubeTester has a unique feature of quick tube matching with a +/- percentage display.
Filament continuity tester
The simplest tester is the filament continuity tester, usually with a neon lamp connected in series with the filament/heater and a current limiting resistance fed directly by the mains. There is therefore no need to select the appropriate filament voltage for the particular tube under test, but this equipment will not identify tubes that may be faulty in other (more likely) ways, nor indicate any degree of wear. The same checks can be made with a cheap multimeter's resistance test.
Tube checker
The tube checker is the second-simplest of all tube testers after the filament continuity tester. The tube is used as a low-power rectifier, with all elements other than the filament connections connected together as the anode, at a fraction of its normal emission. It is sometimes mistakenly referred to as an emission tester, because it gives a crude measure of emission in directly heated types (but a measure of unwanted heater–cathode leakage in indirectly heated types). Switches are needed to select the correct filament voltage and pins.
Emission tester
Next in complexity is the emission tester, which basically treats any tube as a diode by carefully connecting the cathode to ground and all the grids and the plate to B+ voltage, feeding the filament with the correct voltage, and placing an ammeter in series with either the plate or the cathode. This effectively measures emission, the current which the cathode is capable of emitting for the given plate voltage, which can usually be controlled by a variable load resistor. Switches are needed to select the correct filament voltage plus which pins belong to the filament and cathode(s).
Older testers may be labelled "Plate Conductance" if the ammeter is in series with the plate, or "Cathode Conductance" if the meter is in series with the cathode.
The problems of emission testers are:
they do not measure key characteristics of tubes, like transconductance
they do not perform the tests at real load, voltages and currents
they test the tube under static conditions, which are not even near the dynamic conditions the tube would work with in a real electronic device
tubes with grids might not even show the real emission because of hot spots in the cathode, hidden by the grids under normal conditions
grids will be forward biased to some extent - some fine control grid wires are limited in their ability to withstand this
the amount of current that should be considered "100%" has to be known and documented for each tube type (and will be different for different emission test circuit details)
The advantage of an emission tester is that, of all types of tube testers, it provides the most reliable warning of tube wear-out. If emission is at 70%, transconductance can still be at 90%, and gain at 100%. The best and most popular version used by the German army was the Funke W19.
The disadvantage of an emission tester is that it can test a good tube as bad, and a bad tube as good, because it ignores other properties of the tube. A tube with low emission will work perfectly fine in most circuits, and need not be replaced on that indication alone, unless it measures much lower than specified or if it indicates a short.
A variation on the emission tester is the dynamic conductance tester, a type of tester developed by the Jackson Electrical Company of Dayton, Ohio. The main difference is the use of ‘proportional AC voltages’ in place of applying the current directly to the grids and plate.
Short circuit test
Usually, emission testers also have a short circuit test, which is just a variation of the continuity tester with a neon lamp, and which makes it possible to identify whether there is a short circuit between any pair of electrodes.
Parametric tester
A tester of this type applies DC voltage to the tube being tested, and datasheet values are verified under real conditions. Some parametric testers apply AC voltage to the tube being tested, with verification under conditions which simulate DC operation. Examples include the AVO line of tube testers, along with the Funke W20 and the Neuberger RPG375.
Mutual conductance tester
The mutual conductance tester tests the tube dynamically by applying bias and an AC voltage to the control grid, and measuring the current obtained on the plate, while maintaining the correct DC voltages on the plate and screen grid. This setup measures the transconductance of the tube, indicated in micromhos.
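The quantity measured by such a tester can be summarized as follows (a standard definition; the symbols are chosen here for illustration):

```latex
% Transconductance: the change in plate (anode) current per unit change in
% control grid voltage, with the plate and screen voltages held constant.
g_m = \left.\frac{\partial I_{\text{plate}}}{\partial V_{\text{grid}}}\right|_{V_{\text{plate}}=\text{const}}
      \approx \frac{\Delta I_{\text{plate}}}{\Delta V_{\text{grid}}}
% Units are siemens (mhos); tube data sheets usually quote micromhos.
```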
Oscilloscope tube curve tracer plug-in
A full set of characteristic curves for vacuum tubes, and later for semiconductor devices, could be displayed on an oscilloscope screen by use of a plug-in adapter, or on a dedicated curve tracer. An example is the Tektronix 570.
Self-service tube testers
From the late 1920s until the early 1970s, many department stores, drug stores and grocery stores in the U.S. had self-service tube-vending displays. They typically consisted of a tube tester atop a locked cabinet of tubes, with a flip chart of instructions. One would remove the tubes from a malfunctioning device, such as a radio or television, bring them to the store, and test them all, looking up the instructions from the part number on the tube and the flip chart. If a tube was defective, store personnel would sell a replacement from the cabinet.
At that time, tubes in consumer devices were installed in sockets and were easily replaceable, except for the CRT in televisions. Devices typically had a removable back with a diagram showing where to replace each tube. There were only a few types of tube socket; a radio or television set would have multiple identical sockets, so it was easy to mistakenly exchange tubes with different functions, but similar bases, between two different sockets. If testing showed all tubes to be working, the next step was a repair shop. As transistorized devices took over the market, the grocery-store tube-tester vanished.
See also
Transistor tester
Tube socket
Bogey value
References
External links
"The idiot's guide to tube testers"
"Does Your Process Need Tube Testing Machines?"
Electronic test equipment
Vacuum tubes | Tube tester | Physics,Technology,Engineering | 1,550 |
22,666,143 | https://en.wikipedia.org/wiki/European%20Master%20on%20Software%20Engineering | The European Master on Software Engineering, or European Masters Programme in Software Engineering (its new name since 2015) (EMSE), is a two-year joint Master of Science (MSc) program coordinated by four European universities (Free University of Bozen-Bolzano, Technical University of Madrid, Kaiserslautern University of Technology, University of Oulu), funded by the Erasmus+ Programme of the European Union.
Programme Overview
The discipline of software engineering has traditionally been designed to bridge industry and research needs. The European Masters Programme in Software Engineering (EMSE) is part of the Erasmus Mundus Programme of the European Commission and focuses on the area of software engineering. The programme offers a wide spectrum of courses based on the scientific expertise of each partner university in both theoretical and applied research. Students are guided along a path that includes foundation courses - such as advanced statistics, requirements engineering, verification and validation, software quality and management - and elective courses in which this basic knowledge is applied, such as distributed systems, information management, computer networks, cluster technologies, software product lines, system engineering, system security, internet technologies, usability and others.
The courses are project oriented. This means that students have a large offering of internships and projects to develop at the university or in a company. Courses are taught in English. Nevertheless, students can take advantage of the university language centre to learn local languages at zero costs.
With EMSE, students will become familiar with the software engineering discipline both through theoretical and practical experience. This program aims at forming high qualified professionals in software engineering with a strong theoretical base and practical competence that can be spent both in industry and for a further education plan. Namely, motivated students are prepared for a future PhD, taking also advantage of the excellent connections of the EMSE consortium with international research centres and European consortia.
Partner universities
Free University of Bozen-Bolzano, Italy (coordinating university)
Universidad Politécnica de Madrid, Spain
Technische Universität Kaiserslautern, Germany
University of Oulu, Finland
References
External links
Programme's website at Libera Università di Bolzano
2nd Workshop on the European Masters in Software Engineering (WEMSE)
College and university associations and consortia in Europe
Computer science education
Erasmus Mundus Programmes | European Master on Software Engineering | Technology | 469 |
5,481,447 | https://en.wikipedia.org/wiki/C%2B%2B11 | C++11 is a version of a joint technical standard, ISO/IEC 14882, by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), for the C++ programming language. C++11 replaced the prior version of the C++ standard, named C++03, and was later replaced by C++14. The name follows the tradition of naming language versions by the publication year of the specification, though it was formerly named C++0x because it was expected to be published before 2010.
Although one of the design goals was to prefer changes to the libraries over changes to the core language, C++11 does make several additions to the core language. Areas of the core language that were significantly improved include multithreading support, generic programming support, uniform initialization, and performance. Significant changes were also made to the C++ Standard Library, incorporating most of the C++ Technical Report 1 (TR1) libraries, except the library of mathematical special functions.
C++11 was published as ISO/IEC 14882:2011 in September 2011 and is available for a fee. The working draft most similar to the published C++11 standard is N3337, dated 16 January 2012; it has only editorial corrections from the C++11 standard.
C++11 is fully supported by Clang 3.3 and later. C++11 is fully supported by GNU Compiler Collection (GCC) 4.8.1 and later.
Design goals
The design committee attempted to stick to a number of goals in designing C++11:
Maintain stability and compatibility with older code
Prefer introducing new features via the standard library, rather than extending the core language
Improve C++ to facilitate systems and library design, rather than introduce new features useful only to specific applications
Increase type safety by providing safer alternatives to earlier unsafe techniques
Increase performance and the ability to work directly with hardware
Provide proper solutions for real-world problems
Make C++ easy to teach and to learn without removing any utility needed by expert programmers
Attention to beginners is considered important, because most computer programmers will always be beginners to some degree, and because many beginners never widen their knowledge, limiting themselves to the aspects of the language in which they specialize.
Extensions to the C++ core language
One function of the C++ committee is the development of the language core. Areas of the core language that were significantly improved include multithreading support, generic programming support, uniform initialization, and performance.
Core language runtime performance enhancements
These language features primarily exist to provide some kind of runtime performance benefit, either of memory or of computing speed.
Rvalue references and move constructors
In C++03 (and before), temporaries (termed "rvalues", as they often lie on the right side of an assignment) were intended to never be modifiable — just as in C — and were considered to be indistinguishable from const T& types; nevertheless, in some cases, temporaries could have been modified, a behavior that was even considered to be a useful loophole. C++11 adds a new non-const reference type called an rvalue reference, identified by T&&. This refers to temporaries that are permitted to be modified after they are initialized, for the purpose of allowing "move semantics".
A chronic performance problem with C++03 is the costly and unneeded deep copies that can happen implicitly when objects are passed by value. To illustrate the issue, consider that an std::vector<T> is, internally, a wrapper around a C-style array with a defined size. If an std::vector<T> temporary is created or returned from a function, it can be stored only by creating a new std::vector<T> and copying all the rvalue's data into it. Then the temporary and all its memory is destroyed. (For simplicity, this discussion neglects the return value optimization.)
In C++11, a move constructor of std::vector<T> that takes an rvalue reference to an std::vector<T> can copy the pointer to the internal C-style array out of the rvalue into the new std::vector<T>, then set the pointer inside the rvalue to null. Since the temporary will never again be used, no code will try to access the null pointer, and because the pointer is null, its memory is not deleted when it goes out of scope. Hence, the operation not only forgoes the expense of a deep copy, but is safe and invisible.
Rvalue references can provide performance benefits to existing code without needing to make any changes outside the standard library. The type of the returned value of a function returning an std::vector<T> temporary does not need to be changed explicitly to std::vector<T> && to invoke the move constructor, as temporaries are considered rvalues automatically. (However, if std::vector<T> is a C++03 version without a move constructor, then the copy constructor will be invoked with a const std::vector<T>&, incurring a significant memory allocation.)
For safety reasons, some restrictions are imposed. A named variable will never be considered to be an rvalue even if it is declared as such. To get an rvalue, the function template std::move() should be used. Rvalue references can also be modified only under certain circumstances, being intended to be used primarily with move constructors.
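As a minimal sketch of how a move constructor avoids the deep copy (the Buffer class below is invented for illustration and is not the actual std::vector implementation):

#include <cstddef>
#include <utility> // std::move

class Buffer
{
public:
    explicit Buffer(std::size_t n) : data_(new int[n]), size_(n) {}

    // Copy constructor: expensive deep copy of the owned array.
    Buffer(const Buffer& other) : data_(new int[other.size_]), size_(other.size_)
    {
        for (std::size_t i = 0; i < size_; ++i)
            data_[i] = other.data_[i];
    }

    // Move constructor: steal the pointer and leave the source empty.
    Buffer(Buffer&& other) : data_(other.data_), size_(other.size_)
    {
        other.data_ = nullptr;
        other.size_ = 0;
    }

    ~Buffer() { delete[] data_; } // deleting a null pointer is harmless

private:
    int* data_;
    std::size_t size_;
};

Buffer make_buffer() { return Buffer(1024); } // the returned temporary is an rvalue

int main()
{
    Buffer a = make_buffer();  // the move constructor may be used here (or the move elided)
    Buffer b(std::move(a));    // std::move turns the named lvalue 'a' into an rvalue
}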
Due to the nature of the wording of rvalue references, and to some modification to the wording for lvalue references (regular references), rvalue references allow developers to provide perfect function forwarding. When combined with variadic templates, this ability allows for function templates that can perfectly forward arguments to another function that takes those particular arguments. This is most useful for forwarding constructor parameters, to create factory functions that will automatically call the correct constructor for those particular arguments. This is seen in the emplace_back set of the C++ standard library methods.
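To illustrate perfect forwarding (the make function template and Widget type below are hypothetical, not part of the standard library), a factory can hand its arguments to a constructor while preserving whether each argument was an lvalue or an rvalue:

#include <string>
#include <utility> // std::forward

struct Widget
{
    Widget(int id, const std::string& name) : id_(id), name_(name) {}
    int id_;
    std::string name_;
};

// Forwards its arguments unchanged to T's constructor.
template<typename T, typename... Args>
T make(Args&&... args)
{
    return T(std::forward<Args>(args)...);
}

int main()
{
    std::string n = "gear";
    Widget w1 = make<Widget>(1, n);                   // n is forwarded as an lvalue
    Widget w2 = make<Widget>(2, std::string("cog"));  // the temporary is forwarded as an rvalue
}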
constexpr – Generalized constant expressions
C++ has always had the concept of constant expressions. These are expressions such as 3+4 that will always yield the same results, at compile time and at runtime. Constant expressions are optimization opportunities for compilers, and compilers frequently execute them at compile time and hardcode the results in the program. Also, in several places, the C++ specification requires using constant expressions. Defining an array requires a constant expression, and enumerator values must be constant expressions.
However, a constant expression has never been allowed to contain a function call or object constructor. So a piece of code as simple as this is invalid:
int get_five() {return 5;}
int some_value[get_five() + 7]; // Create an array of 12 integers. Ill-formed C++
This was not valid in C++03, because get_five() + 7 is not a constant expression. A C++03 compiler has no way of knowing if get_five() actually is constant at runtime. In theory, this function could affect a global variable, call other non-runtime constant functions, etc.
C++11 introduced the keyword constexpr, which allows the user to guarantee that a function or object constructor is a compile-time constant. The above example can be rewritten as follows:
constexpr int get_five() {return 5;}
int some_value[get_five() + 7]; // Create an array of 12 integers. Valid C++11
This allows the compiler to understand, and verify, that get_five() is a compile-time constant.
Using constexpr on a function imposes some limits on what that function can do. First, the function must have a non-void return type. Second, the function body cannot declare variables or define new types. Third, the body may contain only declarations, null statements and a single return statement. There must exist argument values such that, after argument substitution, the expression in the return statement produces a constant expression.
Before C++11, the values of variables could be used in constant expressions only if the variables are declared const, have an initializer which is a constant expression, and are of integral or enumeration type. C++11 removes the restriction that the variables must be of integral or enumeration type if they are defined with the constexpr keyword:
constexpr double earth_gravitational_acceleration = 9.8;
constexpr double moon_gravitational_acceleration = earth_gravitational_acceleration / 6.0;
Such data variables are implicitly const, and must have an initializer which must be a constant expression.
To construct constant expression data values from user-defined types, constructors can also be declared with constexpr. A constexpr constructor's function body can contain only declarations and null statements, and cannot declare variables or define types, as with a constexpr function. There must exist argument values such that, after argument substitution, it initializes the class's members with constant expressions. The destructors for such types must be trivial.
The copy constructor for a type with any constexpr constructors should usually also be defined as a constexpr constructor, to allow objects of the type to be returned by value from a constexpr function. Any member function of a class, such as copy constructors, operator overloads, etc., can be declared as constexpr, so long as they meet the requirements for constexpr functions. This allows the compiler to copy objects at compile time, perform operations on them, etc.
If a constexpr function or constructor is called with arguments which aren't constant expressions, the call behaves as if the function were not constexpr, and the resulting value is not a constant expression. Likewise, if the expression in the return statement of a constexpr function does not evaluate to a constant expression for a given invocation, the result is not a constant expression.
constexpr differs from consteval, introduced in C++20, in that the latter must always produce a compile time constant, while constexpr does not have this restriction.
Modification to the definition of plain old data
In C++03, a class or struct must follow a number of rules for it to be considered a plain old data (POD) type. Types that fit this definition produce object layouts that are compatible with C, and they could also be initialized statically. The C++03 standard has restrictions on what types are compatible with C or can be statically initialized despite there being no technical reason a compiler couldn't accept the program; if someone were to create a C++03 POD type and add a non-virtual member function, this type would no longer be a POD type, could not be statically initialized, and would be incompatible with C despite no change to the memory layout.
C++11 relaxed several of the POD rules, by dividing the POD concept into two separate concepts: trivial and standard-layout.
A type that is trivial can be statically initialized. It also means that it is valid to copy data around via memcpy, rather than having to use a copy constructor. The lifetime of a trivial type begins when its storage is defined, not when a constructor completes.
A trivial class or struct is defined as one that:
Has a trivial default constructor. This may use the default constructor syntax (SomeConstructor() = default;).
Has trivial copy and move constructors, which may use the default syntax.
Has trivial copy and move assignment operators, which may use the default syntax.
Has a trivial destructor, which must not be virtual.
Constructors are trivial only if there are no virtual member functions of the class and no virtual base classes. Copy/move operations also require all non-static data members to be trivial.
A type that is standard-layout means that it orders and packs its members in a way that is compatible with C. A class or struct is standard-layout, by definition, provided:
It has no virtual functions
It has no virtual base classes
All its non-static data members have the same access control (public, private, protected)
All its non-static data members, including any in its base classes, are in the same one class in the hierarchy
The above rules also apply to all the base classes and to all non-static data members in the class hierarchy
It has no base classes of the same type as the first defined non-static data member
A class/struct/union is considered POD if it is trivial, standard-layout, and all of its non-static data members and base classes are PODs.
By separating these concepts, it becomes possible to give up one without losing the other. A class with complex move and copy constructors may not be trivial, but it could be standard-layout and thus interoperate with C. Similarly, a class with public and private non-static data members would not be standard-layout, but it could be trivial and thus memcpy-able.
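Both properties can be queried with the C++11 type traits std::is_trivial and std::is_standard_layout; the two classes below are contrived purely to show that each property can hold without the other:

#include <type_traits>

struct TrivialOnly
{
    TrivialOnly() = default; // trivial default constructor
private:
    int a;                   // mixed access control among data members...
public:
    int b;                   // ...prevents standard-layout
};

struct StandardLayoutOnly
{
    StandardLayoutOnly(int v) : v(v) {} // user-provided constructor prevents triviality
    int v;
};

static_assert(std::is_trivial<TrivialOnly>::value, "TrivialOnly should be trivial");
static_assert(!std::is_standard_layout<TrivialOnly>::value, "TrivialOnly is not standard-layout");
static_assert(!std::is_trivial<StandardLayoutOnly>::value, "StandardLayoutOnly is not trivial");
static_assert(std::is_standard_layout<StandardLayoutOnly>::value, "StandardLayoutOnly should be standard-layout");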
Core language build-time performance enhancements
Extern template
In C++03, the compiler must instantiate a template whenever a fully specified template is encountered in a translation unit. If the template is instantiated with the same types in many translation units, this can dramatically increase compile times. There is no way to prevent this in C++03, so C++11 introduced extern template declarations, analogous to extern data declarations.
C++03 has this syntax to oblige the compiler to instantiate a template:
template class std::vector<MyClass>;
C++11 now provides this syntax:
extern template class std::vector<MyClass>;
which tells the compiler not to instantiate the template in this translation unit.
Core language usability enhancements
These features exist for the primary purpose of making the language easier to use. These can improve type safety, minimize code repetition, make erroneous code less likely, etc.
Initializer lists
C++03 inherited the initializer-list feature from C. A struct or array is given a list of arguments in braces, in the order of the members' definitions in the struct. These initializer-lists are recursive, so an array of structs or struct containing other structs can use them.
struct Object
{
float first;
int second;
};
Object scalar = {0.43f, 10}; //One Object, with first=0.43f and second=10
Object anArray[] = {{13.4f, 3}, {43.28f, 29}, {5.934f, 17}}; //An array of three Objects
This is very useful for static lists, or initializing a struct to some value. C++ also provides constructors to initialize an object, but they are often not as convenient as the initializer list. However, C++03 allows initializer-lists only on structs and classes that conform to the Plain Old Data (POD) definition; C++11 extends initializer-lists, so they can be used for all classes including standard containers like std::vector.
C++11 binds the concept to a template, called std::initializer_list. This allows constructors and other functions to take initializer-lists as parameters. For example:
class SequenceClass
{
public:
SequenceClass(std::initializer_list<int> list);
};
This allows SequenceClass to be constructed from a sequence of integers, such as:
SequenceClass some_var = {1, 4, 5, 6};
This constructor is a special kind of constructor, called an initializer-list-constructor. Classes with such a constructor are treated specially during uniform initialization (see below).
The template class std::initializer_list<> is a first-class C++11 standard library type. They can be constructed statically by the C++11 compiler via use of the {} syntax without a type name in contexts where such braces will deduce to an std::initializer_list, or by explicitly specifying the type like std::initializer_list<SomeType>{args} (and so on for other varieties of construction syntax).
The list can be copied once constructed, which is cheap and will act as a copy-by-reference (the class is typically implemented as a pair of begin/end pointers). An std::initializer_list is constant: its members cannot be changed once it is created, nor can the data in those members be changed (which rules out moving from them, requiring copies into class members, etc.).
Although its construction is specially treated by the compiler, an std::initializer_list is a real type, and so it can be used in other places besides class constructors. Regular functions can take typed std::initializer_lists as arguments. For example:
void function_name(std::initializer_list<float> list); // Copying is cheap; see above
function_name({1.0f, -3.45f, -0.4f});
Examples of this in the standard library include the std::min() and std::max() templates taking std::initializer_lists of numeric type.
Standard containers can also be initialized in these ways:
std::vector<std::string> v = { "xyzzy", "plugh", "abracadabra" };
std::vector<std::string> v({ "xyzzy", "plugh", "abracadabra" });
std::vector<std::string> v{ "xyzzy", "plugh", "abracadabra" }; // see "Uniform initialization" below
Uniform initialization
C++03 has a number of problems with initializing types. Several ways to do this exist, and some produce different results when interchanged. The traditional constructor syntax, for example, can look like a function declaration, and steps must be taken to ensure that the compiler's most vexing parse rule will not mistake it for such. Only aggregates and POD types can be initialized with aggregate initializers (using SomeType var = {/*stuff*/};).
C++11 provides a syntax that allows for fully uniform type initialization that works on any object. It expands on the initializer list syntax:
struct BasicStruct
{
int x;
double y;
};
struct AltStruct
{
AltStruct(int x, double y)
: x_{x}
, y_{y}
{}
private:
int x_;
double y_;
};
BasicStruct var1{5, 3.2};
AltStruct var2{2, 4.3};
The initialization of var1 behaves exactly as though it were aggregate-initialization. That is, each data member of an object, in turn, will be copy-initialized with the corresponding value from the initializer-list. Implicit type conversion will be used where needed. If no conversion exists, or only a narrowing conversion exists, the program is ill-formed. The initialization of var2 invokes the constructor.
One can also do this:
struct IdString
{
std::string name;
int identifier;
};
IdString get_string()
{
return {"foo", 42}; //Note the lack of explicit type.
}
Uniform initialization does not replace constructor syntax, which is still needed at times. If a class has an initializer list constructor (TypeName(initializer_list<SomeType>);), then it takes priority over other forms of construction, provided that the initializer list conforms to the sequence constructor's type. The C++11 version of std::vector has an initializer list constructor for its template type. Thus this code:
std::vector<int> the_vec{4};
will call the initializer list constructor, not the constructor of std::vector that takes a single size parameter and creates the vector with that size. To access the latter constructor, the user will need to use the standard constructor syntax directly.
Type inference
In C++03 (and C), to use a variable, its type must be specified explicitly. However, with the advent of template types and template metaprogramming techniques, the type of something, particularly the well-defined return value of a function, may not be easily expressed. Thus, storing intermediates in variables is difficult, possibly needing knowledge of the internals of a given metaprogramming library.
C++11 allows this to be mitigated in two ways. First, the definition of a variable with an explicit initialization can use the auto keyword. This creates a variable of the specific type of the initializer:
auto some_strange_callable_type = std::bind(&some_function, _2, _1, some_object);
auto other_variable = 5;
The type of some_strange_callable_type is simply whatever the particular template function override of std::bind returns for those particular arguments. This type is easily determined procedurally by the compiler as part of its semantic analysis duties, but is not easy for the user to determine upon inspection.
The type of other_variable is also well-defined, but it is easier for the user to determine. It is an int, which is the same type as the integer literal.
This use of the keyword auto in C++ re-purposes the semantics of this keyword, which was originally used in the typeless predecessor language B in a related role of denoting an untyped automatic variable definition.
Further, the keyword decltype can be used to determine the type of expression at compile-time. For example:
int some_int;
decltype(some_int) other_integer_variable = 5;
This is more useful in conjunction with auto, since the type of auto variable is known only to the compiler. However, decltype can also be very useful for expressions in code that makes heavy use of operator overloading and specialized types.
auto is also useful for reducing the verbosity of the code. For instance, instead of writing
for (std::vector<int>::const_iterator itr = myvec.cbegin(); itr != myvec.cend(); ++itr)
the programmer can use the shorter
for (auto itr = myvec.cbegin(); itr != myvec.cend(); ++itr)
which can be further compacted since "myvec" implements begin/end iterators:
for (const auto& x : myvec)
This difference grows as the programmer begins to nest containers, though in such cases typedefs are a good way to decrease the amount of code.
The type denoted by decltype can be different from the type deduced by auto.
#include <vector>
int main()
{
const std::vector<int> v(1);
auto a = v[0]; // a has type int
decltype(v[0]) b = 1; // b has type const int&, the return type of
// std::vector<int>::operator[](size_type) const
auto c = 0; // c has type int
auto d = c; // d has type int
decltype(c) e; // e has type int, the type of the entity named by c
decltype((c)) f = c; // f has type int&, because (c) is an lvalue
decltype(0) g; // g has type int, because 0 is an rvalue
}
Range-based for loop
C++11 extends the syntax of the for statement to allow for easy iteration over a range of elements:
int my_array[5] = {1, 2, 3, 4, 5};
// double the value of each element in my_array:
for (int& x : my_array)
x *= 2;
// similar but also using type inference for array elements
for (auto& x : my_array)
x *= 2;
This form of for, called the “range-based for”, will iterate over each element in the list. It will work for C-style arrays, initializer lists, and any type that has begin() and end() functions defined for it that return iterators. All the standard library containers that have begin/end pairs will work with the range-based for statement.
Lambda functions and expressions
C++11 provides the ability to create anonymous functions, called lambda functions.
These are defined as follows:
[](int x, int y) -> int { return x + y; }
The return type (-> int in this example) can be omitted as long as all return expressions return the same type.
A lambda can optionally be a closure.
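For example, a lambda that captures threshold by value and count by reference forms a closure over those local variables (this snippet is illustrative only):

#include <algorithm>
#include <vector>

int main()
{
    std::vector<int> v = {1, 2, 3, 4, 5};
    int threshold = 3;
    int count = 0;

    // threshold is captured by value, count by reference.
    std::for_each(v.begin(), v.end(), [threshold, &count](int x) {
        if (x > threshold)
            ++count;
    });
    // count is now 2
}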
Alternative function syntax
Standard C function declaration syntax was perfectly adequate for the feature set of the C language. As C++ evolved from C, it kept the basic syntax and extended it where needed. However, as C++ grew more complex, it exposed several limits, especially regarding template function declarations. For example, in C++03 this is invalid:
template<class Lhs, class Rhs>
Ret adding_func(const Lhs &lhs, const Rhs &rhs) {return lhs + rhs;} //Ret must be the type of lhs+rhs
The type Ret is whatever the addition of types Lhs and Rhs will produce. Even with the aforementioned C++11 functionality of decltype, this is not possible:
template<class Lhs, class Rhs>
decltype(lhs+rhs) adding_func(const Lhs &lhs, const Rhs &rhs) {return lhs + rhs;} //Not valid C++11
This is not valid C++ because lhs and rhs have not yet been defined; they will not be valid identifiers until after the parser has parsed the rest of the function prototype.
To work around this, C++11 introduced a new function declaration syntax, with a trailing-return-type:
template<class Lhs, class Rhs>
auto adding_func(const Lhs &lhs, const Rhs &rhs) -> decltype(lhs+rhs) {return lhs + rhs;}
This syntax can be used for more mundane function declarations and definitions:
struct SomeStruct
{
auto func_name(int x, int y) -> int;
};
auto SomeStruct::func_name(int x, int y) -> int
{
return x + y;
}
The use of the “auto” keyword in this case is just part of the syntax and does not perform automatic type deduction in C++11. However, starting with C++14, the trailing return type can be removed entirely and the compiler will deduce the return type automatically.
Object construction improvement
In C++03, constructors of a class are not allowed to call other constructors in an initializer list of that class. Each constructor must construct all of its class members itself or call a common member function, as follows:
class SomeType
{
public:
SomeType(int new_number)
{
Construct(new_number);
}
SomeType()
{
Construct(42);
}
private:
void Construct(int new_number)
{
number = new_number;
}
int number;
};
Constructors for base classes cannot be directly exposed to derived classes; each derived class must implement constructors even if a base class constructor would be appropriate. Non-constant data members of classes cannot be initialized at the site of the declaration of those members. They can be initialized only in a constructor.
C++11 provides solutions to all of these problems.
C++11 allows constructors to call other peer constructors (termed delegation). This allows constructors to utilize another constructor's behavior with a minimum of added code. Delegation has been used in other languages e.g., Java and Objective-C.
This syntax is as follows:
class SomeType
{
int number;
public:
SomeType(int new_number) : number(new_number) {}
SomeType() : SomeType(42) {}
};
In this case, the same effect could have been achieved by making new_number a default parameter. The new syntax, however, allows the default value (42) to be expressed in the implementation rather than the interface — a benefit to maintainers of library code since default values for function parameters are “baked in” to call sites, whereas constructor delegation allows the value to be changed without recompilation of the code using the library.
This comes with a caveat: C++03 considers an object to be constructed when its constructor finishes executing, but C++11 considers an object constructed once any constructor finishes execution. Since multiple constructors will be allowed to execute, this will mean that each delegating constructor will be executing on a fully constructed object of its own type. Derived class constructors will execute after all delegation in their base classes is complete.
For base-class constructors, C++11 allows a class to specify that base class constructors will be inherited. Thus, the C++11 compiler will generate code to perform the inheritance and the forwarding of the derived class to the base class. This is an all-or-nothing feature: either all of that base class's constructors are forwarded or none of them are. Also, an inherited constructor will be shadowed if it matches the signature of a constructor of the derived class, and restrictions exist for multiple inheritance: class constructors cannot be inherited from two classes that use constructors with the same signature.
The syntax is as follows:
class BaseClass
{
public:
BaseClass(int value);
};
class DerivedClass : public BaseClass
{
public:
using BaseClass::BaseClass;
};
For member initialization, C++11 allows this syntax:
class SomeClass
{
public:
SomeClass() {}
explicit SomeClass(int new_value) : value(new_value) {}
private:
int value = 5;
};
Any constructor of the class will initialize value with 5, if the constructor does not override the initialization with its own. So the above empty constructor will initialize value as the class definition states, but the constructor that takes an int will initialize it to the given parameter.
It can also use constructor or uniform initialization, instead of the assignment initialization shown above.
Explicit overrides and final
In C++03, it is possible to accidentally create a new virtual function, when one intended to override a base class function. For example:
struct Base
{
virtual void some_func(float);
};
struct Derived : Base
{
virtual void some_func(int);
};
Suppose the Derived::some_func is intended to replace the base class version. But instead, because it has a different signature, it creates a second virtual function. This is a common problem, particularly when a user goes to modify the base class.
C++11 provides syntax to solve this problem.
struct Base
{
virtual void some_func(float);
};
struct Derived : Base
{
virtual void some_func(int) override; // ill-formed - doesn't override a base class method
};
The override special identifier means that the compiler will check the base class(es) to see if there is a virtual function with this exact signature. And if there is not, the compiler will indicate an error.
C++11 also adds the ability to prevent inheriting from classes or simply preventing overriding methods in derived classes. This is done with the special identifier final. For example:
struct Base1 final { };
struct Derived1 : Base1 { }; // ill-formed because the class Base1 has been marked final
struct Base2
{
virtual void f() final;
};
struct Derived2 : Base2
{
void f(); // ill-formed because the virtual function Base2::f has been marked final
};
In this example, the virtual void f() final; statement declares a new virtual function, but it also prevents derived classes from overriding it. It also has the effect of preventing derived classes from using that particular function name and parameter combination.
Neither override nor final are language keywords. They are technically identifiers for declarator attributes:
they gain special meaning as attributes only when used in those specific trailing contexts (after all type specifiers, access specifiers, member declarations (for struct, class and enum types) and declarator specifiers, but before initialization or code implementation of each declarator in a comma-separated list of declarators);
they do not alter the declared type signature and do not declare or override any new identifier in any scope;
the set of recognized declarator attributes may be extended in future versions of C++. Some compiler-specific extensions already recognize additional declarator attributes, for example to provide code generation options or optimization hints to the compiler, to emit extra data into the compiled code for debuggers, linkers, and deployment, to provide system-specific security attributes, to enhance reflective programming (reflection) abilities at runtime, or to provide binding information for interoperability with other programming languages and runtime systems. These extensions may take parameters between parentheses after the declarator attribute identifier; for ANSI conformance, they should use the double underscore prefix convention.
In any other location, they can be valid identifiers for new declarations (and later use if they are accessible).
Null pointer constant and type
For the purposes of this section and this section alone, every occurrence of "0" is meant as "a constant expression which evaluates to 0, which is of type int". In reality, the constant expression can be of any integral type.
Since the dawn of C in 1972, the constant 0 has had the double role of constant integer and null pointer constant. The ambiguity inherent in the double meaning of 0 was dealt with in C by using the preprocessor macro NULL, which commonly expands to either ((void*)0) or 0. C++ forbids implicit conversion from void * to other pointer types, thus removing the benefit of casting 0 to void *. As a consequence, only 0 is allowed as a null pointer constant. This interacts poorly with function overloading:
void foo(char *);
void foo(int);
If NULL is defined as 0 (which is usually the case in C++), the statement foo(NULL); will call foo(int), which is almost certainly not what the programmer intended, and not what a superficial reading of the code suggests.
C++11 corrects this by introducing a new keyword to serve as a distinguished null pointer constant: nullptr. It is of type nullptr_t, which is implicitly convertible and comparable to any pointer type or pointer-to-member type. It is not implicitly convertible or comparable to integral types, except for bool. While the original proposal specified that an rvalue of type nullptr_t should not be convertible to bool, the core language working group decided that such a conversion would be desirable, for consistency with regular pointer types. The proposed wording changes were unanimously voted into the Working Paper in June 2008. A similar proposal was also brought to the C standard working group and was accepted for inclusion in C23.
For backwards compatibility reasons, 0 remains a valid null pointer constant.
char *pc = nullptr; // OK
int *pi = nullptr; // OK
bool b = nullptr; // OK. b is false.
int i = nullptr; // error
foo(nullptr); // calls foo(nullptr_t), not foo(int);
/*
Note that foo(nullptr_t) will actually call foo(char *) in the example above using an implicit conversion,
only if no other functions are overloading with compatible pointer types in scope.
If multiple overloadings exist, the resolution will fail as it is ambiguous,
unless there is an explicit declaration of foo(nullptr_t).
In standard types headers for C++11, the nullptr_t type should be declared as:
typedef decltype(nullptr) nullptr_t;
but not as:
typedef int nullptr_t; // prior versions of C++ which need NULL to be defined as 0
typedef void *nullptr_t; // ANSI C which defines NULL as ((void*)0)
*/
Strongly typed enumerations
In C++03, enumerations are not type-safe. They are effectively integers, even when the enumeration types are distinct. This allows the comparison between two enum values of different enumeration types. The only safety that C++03 provides is that an integer or a value of one enum type does not convert implicitly to another enum type. Further, the underlying integral type is implementation-defined; code that depends on the size of the enumeration is thus non-portable. Lastly, enumeration values are scoped to the enclosing scope. Thus, it is not possible for two separate enumerations in the same scope to have matching member names.
C++11 allows a special classification of enumeration that has none of these issues. This is expressed using the enum class (enum struct is also accepted as a synonym) declaration:
enum class Enumeration
{
Val1,
Val2,
Val3 = 100,
Val4 // = 101
};
This enumeration is type-safe. Enum class values are not implicitly converted to integers. Thus, they cannot be compared to integers either (the expression Enumeration::Val4 == 101 gives a compile error).
The underlying type of enum classes is always known. The default type is int; this can be overridden to a different integral type as can be seen in this example:
enum class Enum2 : unsigned int {Val1, Val2};
With old-style enumerations the values are placed in the outer scope. With new-style enumerations they are placed within the scope of the enum class name. So in the above example, Val1 is undefined, but Enum2::Val1 is defined.
There is also a transitional syntax to allow old-style enumerations to provide explicit scoping, and the definition of the underlying type:
enum Enum3 : unsigned long {Val1 = 1, Val2};
In this case the enumerator names are defined in the enumeration's scope (Enum3::Val1), but for backwards compatibility they are also placed in the enclosing scope.
Forward-declaring enums is also possible in C++11. Formerly, enum types could not be forward-declared because the size of the enumeration depends on the definition of its members. As long as the size of the enumeration is specified either implicitly or explicitly, it can be forward-declared:
enum Enum1; // Invalid in C++03 and C++11; the underlying type cannot be determined.
enum Enum2 : unsigned int; // Valid in C++11, the underlying type is specified explicitly.
enum class Enum3; // Valid in C++11, the underlying type is int.
enum class Enum4 : unsigned int; // Valid in C++11.
enum Enum2 : unsigned short; // Invalid in C++11, because Enum2 was formerly declared with a different underlying type.
Right angle bracket
C++03's parser defines “>>” as the right shift operator or stream extraction operator in all cases. However, with nested template declarations, there is a tendency for the programmer to neglect to place a space between the two right angle brackets, thus causing a compiler syntax error.
C++11 improves the specification of the parser so that multiple right angle brackets will be interpreted as closing the template argument list where it is reasonable. This can be overridden by using parentheses around parameter expressions using the “>”, “>=” or “>>” binary operators:
template<bool Test> class SomeType;
std::vector<SomeType<1>2>> x1; // Interpreted as a std::vector of SomeType<true>,
// followed by "2 >> x1", which is not valid syntax for a declarator. 1 is true.
std::vector<SomeType<(1>2)>> x1; // Interpreted as std::vector of SomeType<false>,
// followed by the declarator "x1", which is valid C++11 syntax. (1>2) is false.
Explicit conversion operators
C++98 added the explicit keyword as a modifier on constructors to prevent single-argument constructors from being used as implicit type conversion operators. However, this does nothing for actual conversion operators. For example, a smart pointer class may have an operator bool() to allow it to act more like a primitive pointer: if it includes this conversion, it can be tested with if (smart_ptr_variable) (which would be true if the pointer was non-null and false otherwise). However, this allows other, unintended conversions as well. Because C++ bool is defined as an arithmetic type, it can be implicitly converted to integral or even floating-point types, which allows for mathematical operations that are not intended by the user.
In C++11, the explicit keyword can now be applied to conversion operators. As with constructors, it prevents using those conversion functions in implicit conversions. However, language contexts that specifically need a Boolean value (the conditions of if-statements and loops, and operands to the logical operators) count as explicit conversions and can thus use a bool conversion operator.
For example, this feature solves cleanly the safe bool issue.
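A sketch of the idiom, using a hypothetical Handle class: the conversion may be used where a Boolean is specifically required, or through static_cast, but not as an implicit conversion to an arithmetic type.

#include <cassert>

class Handle
{
public:
    explicit Handle(int* p) : ptr_(p) {}

    // Explicit conversion operator: usable in Boolean contexts only.
    explicit operator bool() const { return ptr_ != nullptr; }

private:
    int* ptr_;
};

int main()
{
    int value = 7;
    Handle h(&value);

    if (h) { /* allowed: the condition of an if counts as an explicit conversion */ }

    bool ok = static_cast<bool>(h); // allowed: explicit conversion
    // int n = h + 1;               // error: no implicit conversion to an arithmetic type
    assert(ok);
}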
Template aliases
In C++03, it is possible to define a typedef only as a synonym for another type, including a synonym for a template specialization with all actual template arguments specified. It is not possible to create a typedef template. For example:
template <typename First, typename Second, int Third>
class SomeType;
template <typename Second>
typedef SomeType<OtherType, Second, 5> TypedefName; // Invalid in C++03
This will not compile.
C++11 adds this ability with this syntax:
template <typename First, typename Second, int Third>
class SomeType;
template <typename Second>
using TypedefName = SomeType<OtherType, Second, 5>;
The using syntax can also be used as type aliasing in C++11:
typedef void (*FunctionType)(double); // Old style
using FunctionType = void (*)(double); // New introduced syntax
Unrestricted unions
In C++03, there are restrictions on what types of objects can be members of a union. For example, unions cannot contain any objects that define a non-trivial constructor or destructor. C++11 lifts some of these restrictions.
If a union member has a non trivial special member function, the compiler will not generate the equivalent member function for the union and it must be manually defined.
This is a simple example of a union permitted in C++11:
#include <new> // Needed for placement 'new'.
struct Point
{
Point() {}
Point(int x, int y): x_(x), y_(y) {}
int x_, y_;
};
union U
{
int z;
double w;
Point p; // Invalid in C++03; valid in C++11.
U() {} // Due to the Point member, a constructor definition is now needed.
U(const Point& pt) : p(pt) {} // Construct Point object using initializer list.
U& operator=(const Point& pt) { new(&p) Point(pt); return *this; } // Assign Point object using placement 'new'.
};
The changes will not break any existing code since they only relax current rules.
Core language functionality improvements
These features allow the language to do things that were formerly impossible, exceedingly verbose, or needed non-portable libraries.
Variadic templates
In C++11, templates can take variable numbers of template parameters. This also allows the definition of type-safe variadic functions.
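For instance, a simple type-safe print function (illustrative only, not part of the standard library) can be written by recursing over the parameter pack:

#include <iostream>

// Base case: no arguments left.
void print() { std::cout << '\n'; }

// Recursive case: print the first argument, then recurse on the remaining ones.
template<typename Head, typename... Tail>
void print(const Head& head, const Tail&... tail)
{
    std::cout << head << ' ';
    print(tail...);
}

int main()
{
    print(1, 2.5, "three"); // accepts any number and mix of streamable arguments
}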
New string literals
C++03 offers two kinds of string literals. The first kind, contained within double quotes, produces a null-terminated array of type const char. The second kind, defined as L"", produces a null-terminated array of type const wchar_t, where wchar_t is a wide-character of undefined size and semantics. Neither literal type offers support for string literals with UTF-8, UTF-16, or any other kind of Unicode encodings.
C++11 supports three Unicode encodings: UTF-8, UTF-16, and UTF-32. The definition of the type char has been modified to explicitly express that it is at least the size needed to store an eight-bit coding of UTF-8, and large enough to contain any member of the compiler's basic execution character set. It was formerly defined as only the latter in the C++ standard itself, then relying on the C standard to guarantee at least 8 bits. Furthermore, C++11 adds two new character types: char16_t and char32_t. These are designed to store UTF-16 and UTF-32 respectively.
Creating string literals for each of the supported encodings can be done thus:
u8"I'm a UTF-8 string."
u"This is a UTF-16 string."
U"This is a UTF-32 string."
The type of the first string is the usual const char[]. The type of the second string is const char16_t[] (note lower case 'u' prefix). The type of the third string is const char32_t[] (upper case 'U' prefix).
When building Unicode string literals, it is often useful to insert Unicode code points directly into the string. To do this, C++11 allows this syntax:
u8"This is a Unicode Character: \u2018."
u"This is a bigger Unicode Character: \u2018."
U"This is a Unicode Character: \U00002018."
The number after the \u is a hexadecimal number; it does not need the usual 0x prefix. The identifier \u represents a 16-bit Unicode code point; to enter a 32-bit code point, use \U and a 32-bit hexadecimal number. Only valid Unicode code points can be entered. For example, code points on the range U+D800–U+DFFF are forbidden, as they are reserved for surrogate pairs in UTF-16 encodings.
It is also sometimes useful to avoid escaping strings manually, particularly for using literals of XML files, scripting languages, or regular expressions. C++11 provides a raw string literal:
R"(The String Data \ Stuff " )"
R"delimiter(The String Data \ Stuff " )delimiter"
In the first case, everything between the "( and the )" is part of the string. The " and \ characters do not need to be escaped. In the second case, the "delimiter( starts the string, and it ends only when )delimiter" is reached. The string delimiter can be any string up to 16 characters in length, including the empty string. This string cannot contain spaces, control characters, (, ), or the \ character. Using this delimiter string, the user can have the sequence )" within raw string literals. For example, R"delimiter("(a-z)")delimiter" is equivalent to "\"(a-z)\"".
Raw string literals can be combined with the wide literal or any of the Unicode literal prefixes:
u8R"XXX(I'm a "raw UTF-8" string.)XXX"
uR"*(This is a "raw UTF-16" string.)*"
UR"(This is a "raw UTF-32" string.)"
User-defined literals
C++03 provides a number of literals. The characters 12.5 are a literal that is resolved by the compiler as a type double with the value of 12.5. However, the addition of the suffix f, as in 12.5f, creates a value of type float that contains the value 12.5. The suffix modifiers for literals are fixed by the C++ specification, and C++03 code cannot create new literal modifiers.
By contrast, C++11 enables the user to define new kinds of literal modifiers that will construct objects based on the string of characters that the literal modifies.
Transformation of literals is redefined into two distinct phases: raw and cooked. A raw literal is a sequence of characters of some specific type, while the cooked literal is of a separate type. The C++ literal 1234, as a raw literal, is this sequence of characters '1', '2', '3', '4'. As a cooked literal, it is the integer 1234. The C++ literal 0xA in raw form is '0', 'x', 'A', while in cooked form it is the integer 10.
Literals can be extended in both raw and cooked forms, with the exception of string literals, which can be processed only in cooked form. This exception is due to the fact that strings have prefixes that affect the specific meaning and type of the characters in question.
All user-defined literals are suffixes; defining prefix literals is not possible. All suffixes starting with any character except underscore (_) are reserved by the standard. Thus, all user-defined literals must have suffixes starting with an underscore (_).
User-defined literals processing the raw form of the literal are defined via a literal operator, which is written as operator "". An example follows:
OutputType operator "" _mysuffix(const char * literal_string)
{
// assumes that OutputType has a constructor that takes a const char *
OutputType ret(literal_string);
return ret;
}
OutputType some_variable = 1234_mysuffix;
// assumes that OutputType has a get_value() method that returns a double
assert(some_variable.get_value() == 1234.0);
The assignment statement OutputType some_variable = 1234_mysuffix; executes the code defined by the user-defined literal function. This function is passed "1234" as a C-style string, so it has a null terminator.
An alternative mechanism for processing integer and floating point raw literals is via a variadic template:
template<char...> OutputType operator "" _tuffix();
OutputType some_variable = 1234_tuffix;
OutputType another_variable = 2.17_tuffix;
This instantiates the literal processing function as operator "" _tuffix<'1', '2', '3', '4'>(). In this form, there is no null character terminating the string. The main purpose for doing this is to use C++11's constexpr keyword to ensure that the compiler will transform the literal entirely at compile time, assuming OutputType is a constexpr-constructible and copyable type, and the literal processing function is a constexpr function.
For numeric literals, the type of the cooked literal is either unsigned long long for integral literals or long double for floating point literals. (Note: There is no need for signed integral types because a sign-prefixed literal is parsed as an expression containing the sign as a unary prefix operator and the unsigned number.) There is no alternative template form:
OutputType operator "" _suffix(unsigned long long);
OutputType operator "" _suffix(long double);
OutputType some_variable = 1234_suffix; // Uses the 'unsigned long long' overload.
OutputType another_variable = 3.1416_suffix; // Uses the 'long double' overload.
In accord with the formerly mentioned new string prefixes, for string literals, these are used:
OutputType operator "" _ssuffix(const char * string_values, size_t num_chars);
OutputType operator "" _ssuffix(const wchar_t * string_values, size_t num_chars);
OutputType operator "" _ssuffix(const char16_t * string_values, size_t num_chars);
OutputType operator "" _ssuffix(const char32_t * string_values, size_t num_chars);
OutputType some_variable = "1234"_ssuffix; // Uses the 'const char *' overload.
OutputType some_variable = u8"1234"_ssuffix; // Uses the 'const char *' overload.
OutputType some_variable = L"1234"_ssuffix; // Uses the 'const wchar_t *' overload.
OutputType some_variable = u"1234"_ssuffix; // Uses the 'const char16_t *' overload.
OutputType some_variable = U"1234"_ssuffix; // Uses the 'const char32_t *' overload.
There is no alternative template form. Character literals are defined similarly.
Multithreading memory model
C++11 standardizes support for multithreaded programming.
There are two parts involved: a memory model which allows multiple threads to co-exist in a program and library support for interaction between threads. (See this article's section on threading facilities.)
The memory model defines when multiple threads may access the same memory location, and specifies when updates by one thread become visible to other threads.
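As a small illustration of code that is well defined under the C++11 memory model (using the threading facilities described later in this article), two threads may increment a shared std::atomic counter without creating a data race:

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> counter(0);

void work()
{
    for (int i = 0; i < 1000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed); // atomic update, no data race
}

int main()
{
    std::thread t1(work);
    std::thread t2(work);
    t1.join();
    t2.join();
    assert(counter.load() == 2000); // result is well defined
}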
Thread-local storage
In a multi-threaded environment, it is common for every thread to have some unique variables. This already happens for the local variables of a function, but it does not happen for global and static variables.
A new thread-local storage duration (in addition to the existing static, dynamic and automatic) is indicated by the storage specifier thread_local.
Any object which could have static storage duration (i.e., lifetime spanning the entire execution of the program) may be given thread-local duration instead. The intent is that like any other static-duration variable, a thread-local object can be initialized using a constructor and destroyed using a destructor.
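A minimal sketch (illustrative only): each thread that uses a thread_local variable operates on its own copy.

#include <iostream>
#include <thread>

thread_local int counter = 0; // every thread gets its own instance

void bump()
{
    ++counter;                    // modifies only the calling thread's copy
    std::cout << counter << '\n'; // prints 1 in each thread
}

int main()
{
    std::thread t1(bump);
    std::thread t2(bump);
    t1.join();
    t2.join();
    bump(); // the main thread's copy is independent as well; prints 1
}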
Explicitly defaulted special member functions
In C++03, the compiler provides, for classes that do not provide them for themselves, a default constructor, a copy constructor, a copy assignment operator (operator=), and a destructor. The programmer can override these defaults by defining custom versions. C++ also defines several global operators (such as operator new) that work on all classes, which the programmer can override.
However, there is very little control over creating these defaults. Making a class inherently non-copyable, for example, may be done by declaring a private copy constructor and copy assignment operator and not defining them. Attempting to use these functions is a violation of the One Definition Rule (ODR). While a diagnostic message is not required, violations may result in a linker error.
In the case of the default constructor, the compiler will not generate a default constructor if a class is defined with any constructors. This is useful in many cases, but it is also useful to be able to have both specialized constructors and the compiler-generated default.
C++11 allows the explicit defaulting and deleting of these special member functions. For example, this class explicitly declares that a default constructor can be used:
class SomeType
{
SomeType() = default; //The default constructor is explicitly stated.
SomeType(OtherType value);
};
Explicitly deleted functions
A function can be explicitly disabled. This is useful for preventing implicit type conversions.
The = delete specifier can be used to prohibit calling a function with particular parameter types. For example:
void noInt(double i);
void noInt(int) = delete;
An attempt to call noInt() with an int parameter will be rejected by the compiler, instead of performing a silent conversion to double. Calling noInt() with a float still works.
It is possible to prohibit calling the function with any type other than double by using a template:
double onlyDouble(double d) {return d;}
template<typename T> double onlyDouble(T) = delete;
Calling onlyDouble(1.0) will work, while onlyDouble(1.0f) will generate a compiler error.
Class member functions and constructors can also be deleted. For example, it is possible to prevent copying class objects by deleting the copy constructor and operator =:
class NonCopyable
{
NonCopyable();
NonCopyable(const NonCopyable&) = delete;
NonCopyable& operator=(const NonCopyable&) = delete;
};
Type long long int
In C++03, the largest integer type is long int. It is guaranteed to have at least as many usable bits as int. This resulted in long int having size of 64 bits on some popular implementations and 32 bits on others. C++11 adds a new integer type long long int to address this issue. It is guaranteed to be at least as large as a long int, and have no fewer than 64 bits. The type was originally introduced by C99 to the standard C, and most C++ compilers supported it as an extension already.
Static assertions
C++03 provides two methods to test assertions: the macro assert and the preprocessor directive #error. However, neither is appropriate for use in templates: the macro tests the assertion at execution-time, while the preprocessor directive tests the assertion during preprocessing, which happens before instantiation of templates. Neither is appropriate for testing properties that are dependent on template parameters.
C++11 introduces a new way to test assertions at compile time, using the keyword static_assert.
The declaration assumes this form:
static_assert (constant-expression, error-message);
Here are some examples of how static_assert can be used:
static_assert((GREEKPI > 3.14) && (GREEKPI < 3.15), "GREEKPI is inaccurate!");
template<class T>
struct Check
{
static_assert(sizeof(int) <= sizeof(T), "T is not big enough!");
};
template<class Integral>
Integral foo(Integral x, Integral y)
{
static_assert(std::is_integral<Integral>::value, "foo() parameter must be an integral type.");
}
When the constant expression is false the compiler produces an error message. The first example is similar to the preprocessor directive #error, although the preprocessor supports only integral types. In contrast, in the second example the assertion is checked at every instantiation of the template class Check.
Static assertions are useful outside of templates also. For instance, a given implementation of an algorithm might depend on the size of a long long being larger than an int, something the standard does not guarantee. Such an assumption is valid on most systems and compilers, but not all.
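Such an assumption can be made explicit, for example:

static_assert(sizeof(long long) > sizeof(int),
              "this code assumes that 'long long' is wider than 'int'");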
Allow sizeof to work on members of classes without an explicit object
In C++03, the sizeof operator can be used on types and objects. But it cannot be used to do this:
struct SomeType { OtherType member; };
sizeof(SomeType::member); // Does not work with C++03. Okay with C++11
This should return the size of OtherType. C++03 disallows this, so it is a compile error. C++11 allows it. It is also allowed for the alignof operator introduced in C++11.
Control and query object alignment
C++11 allows variable alignment to be queried and controlled with alignof and alignas.
The alignof operator takes the type and returns the power-of-two byte boundary on which instances of the type must be allocated (as a std::size_t). When given a reference type, alignof returns the referenced type's alignment; for arrays, it returns the element type's alignment.
The alignas specifier controls the memory alignment for a variable. The specifier takes a constant or a type; when supplied a type alignas(T) is shorthand for alignas(alignof(T)). For example, to specify that a char array should be properly aligned to hold a float:
alignas(float) unsigned char c[sizeof(float)];
Allow garbage collected implementations
Prior C++ standards provided for programmer-driven garbage collection via set_new_handler, but gave no definition of object reachability for the purpose of automatic garbage collection. C++11 defines conditions under which pointer values are "safely derived" from other values. An implementation may specify that it operates under strict pointer safety, in which case pointers that are not derived according to these rules can become invalid.
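C++11 also added std::get_pointer_safety, std::declare_reachable and std::undeclare_reachable to the <memory> header in support of this model, although implementations that actually collect garbage remained rare; the fragment below is a sketch of their intended use only:

#include <memory>

int main()
{
    // Reports whether the implementation uses relaxed, preferred or strict pointer safety.
    std::pointer_safety ps = std::get_pointer_safety();
    (void)ps;

    int* p = new int(42);
    std::declare_reachable(p);       // the object remains reachable even if the pointer is disguised
    p = std::undeclare_reachable(p); // returns a safely derived pointer again
    delete p;
}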
Attributes
C++11 provides a standardized syntax for compiler/tool extensions to the language. Such extensions were traditionally specified using #pragma directive or vendor-specific keywords (like __attribute__ for GNU and __declspec for Microsoft). With the new syntax, added information can be specified in a form of an attribute enclosed in double square brackets. An attribute can be applied to various elements of source code:
int [[attr1]] i [[attr2, attr3]];
[[attr4(arg1, arg2)]] if (cond)
{
[[vendor::attr5]] return i;
}
In the example above, attribute attr1 applies to the type of variable i, attr2 and attr3 apply to the variable itself, attr4 applies to the if statement, and vendor::attr5 applies to the return statement. In general (but with some exceptions), an attribute specified for a named entity is placed after the name, and before the entity otherwise, as shown above. Several attributes may be listed inside one pair of double square brackets, arguments may be provided for an attribute, and attributes may be scoped by vendor-specific attribute namespaces.
It is recommended that attributes have no language semantic meaning and do not change the sense of a program when ignored. Attributes can be useful for providing information that, for example, helps the compiler to issue better diagnostics or optimize the generated code.
C++11 provides two standard attributes itself: noreturn to specify that a function does not return, and carries_dependency to help optimizing multi-threaded code by indicating that function arguments or return value carry a dependency.
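A minimal sketch of the noreturn attribute (the function name is illustrative):
#include <stdexcept>

// [[noreturn]] promises that the function never returns normally, which lets
// the compiler diagnose unreachable code after calls to it.
[[noreturn]] void fail(const char* msg)
{
    throw std::runtime_error(msg);
}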
C++ standard library changes
A number of new features were introduced in the C++11 standard library. Many of these could have been implemented under the old standard, but some rely (to a greater or lesser extent) on new C++11 core features.
A large part of the new library was defined in the C++ Standards Committee's Library Technical Report (called TR1), which was published in 2005. Various full and partial implementations of TR1 are currently available using the namespace std::tr1. For C++11 they were moved to namespace std. However, as TR1 features were brought into the C++11 standard library, they were upgraded where appropriate with C++11 language features that were not available in the initial TR1 version. Also, they may have been enhanced with features that were possible under C++03, but were not part of the original TR1 specification.
Upgrades to standard library components
C++11 offers a number of new language features that the currently existing standard library components can benefit from. For example, most standard library containers can benefit from Rvalue reference based move constructor support, both for quickly moving heavy containers around and for moving the contents of those containers to new memory locations. The standard library components were upgraded with new C++11 language features where appropriate. These include, but are not necessarily limited to:
Rvalue references and the associated move support
Support for the UTF-16 and UTF-32 Unicode character types (char16_t and char32_t)
Variadic templates (coupled with Rvalue references to allow for perfect forwarding)
Compile-time constant expressions
decltype
explicit conversion operators
Functions declared defaulted or deleted
Further, much time has passed since the prior C++ standard. Much code using the standard library has been written. This has revealed parts of the standard libraries that could use some improving. Among the many areas of improvement considered were standard library allocators. A new scope-based model of allocators was included in C++11 to supplement the prior model.
Threading facilities
While the C++03 language provides a memory model that supports threading, the primary support for actually using threading comes with the C++11 standard library.
A thread class (std::thread) is provided, which takes a function object (and an optional series of arguments to pass to it) to run in the new thread. It is possible to cause a thread to halt until another executing thread completes, providing thread joining support via the std::thread::join() member function. Access is provided, where feasible, to the underlying native thread object(s) for platform-specific operations by the std::thread::native_handle() member function.
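For illustration, a minimal sketch of creating and joining a thread (the function name and argument are illustrative):
#include <iostream>
#include <thread>

void worker(int id)
{
    std::cout << "worker " << id << " running\n";
}

int main()
{
    std::thread t(worker, 1); // run 'worker(1)' in a new thread
    t.join();                 // block until the thread finishes
}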
For synchronization between threads, appropriate mutexes (std::mutex, std::recursive_mutex, etc.) and condition variables (std::condition_variable and std::condition_variable_any) are added to the library. These are accessible via Resource Acquisition Is Initialization (RAII) locks (std::lock_guard and std::unique_lock) and locking algorithms for easy use.
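A short sketch of RAII locking with std::lock_guard (names and the iteration count are illustrative):
#include <mutex>
#include <thread>

std::mutex m;
long counter = 0;

void add_many()
{
    for (int i = 0; i < 100000; ++i)
    {
        std::lock_guard<std::mutex> lock(m); // locks here, unlocks when 'lock' is destroyed
        ++counter;
    }
}

int main()
{
    std::thread t1(add_many), t2(add_many);
    t1.join();
    t2.join();
    // 'counter' is now 200000; without the lock the result would be unpredictable.
}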
For high-performance, low-level work, communicating between threads is sometimes needed without the overhead of mutexes. This is done using atomic operations on memory locations. These can optionally specify the minimum memory visibility constraints needed for an operation. Explicit memory barriers may also be used for this purpose.
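A minimal sketch of an atomic counter with an explicit memory-order argument (relaxed ordering suffices here because no other data depends on the counter):
#include <atomic>
#include <thread>

std::atomic<int> hits(0);

void bump()
{
    hits.fetch_add(1, std::memory_order_relaxed); // atomic read-modify-write, no locking
}

int main()
{
    std::thread t1(bump), t2(bump);
    t1.join();
    t2.join();
    // 'hits' is now 2.
}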
The C++11 thread library also includes futures and promises for passing asynchronous results between threads, and std::packaged_task for wrapping up a function call that can generate such an asynchronous result. The futures proposal was criticized because it lacks a way to combine futures and check for the completion of one promise inside a set of promises.
Further high-level threading facilities such as thread pools have been remanded to a future C++ technical report. They are not part of C++11, but their eventual implementation is expected to be built entirely on top of the thread library features.
The new std::async facility provides a convenient method of running tasks and tying them to a std::future. The user can choose whether the task is to be run asynchronously on a separate thread or synchronously on a thread that waits for the value. By default, the implementation can choose, which provides an easy way to take advantage of hardware concurrency without oversubscription, and provides some of the advantages of a thread pool for simple usages.
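For illustration, a minimal sketch of std::async and std::future (the task is illustrative):
#include <future>
#include <iostream>

int compute()
{
    return 6 * 7;
}

int main()
{
    // With the default launch policy the implementation may run the task on a
    // new thread or defer it until the result is requested.
    std::future<int> result = std::async(compute);
    std::cout << result.get() << '\n'; // blocks until the result is ready, prints 42
}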
Tuple types
Tuples are collections composed of heterogeneous objects of pre-arranged dimensions. A tuple can be considered a generalization of a struct's member variables.
The C++11 version of the TR1 tuple type benefited from C++11 features like variadic templates. To implement reasonably, the TR1 version required an implementation-defined maximum number of contained types, and substantial macro trickery. By contrast, the implementation of the C++11 version requires no explicit implementation-defined maximum number of types. Though compilers will have an internal maximum recursion depth for template instantiation (which is normal), the C++11 version of tuples will not expose this value to the user.
Using variadic templates, the declaration of the tuple class looks as follows:
template <class ...Types> class tuple;
An example of definition and use of the tuple type:
typedef std::tuple <int, double, long &, const char *> test_tuple;
long lengthy = 12;
test_tuple proof (18, 6.5, lengthy, "Ciao!");
lengthy = std::get<0>(proof); // Assign to 'lengthy' the value 18.
std::get<3>(proof) = " Beautiful!"; // Modify the tuple’s fourth element.
It is possible to create the tuple proof without defining its contents, but only if the tuple elements' types possess default constructors. Moreover, it is possible to assign one tuple to another: if the two tuples' types are the same, each element type must possess a copy constructor; otherwise, each element type of the right-side tuple must be convertible to the corresponding element type of the left-side tuple, or the corresponding element type of the left-side tuple must have a suitable constructor.
typedef std::tuple<int, double, std::string> tuple_1;
typedef std::tuple<char, short, const char*> tuple_2;
tuple_1 t1;
tuple_2 t2('X', 2, "Hola!");
t1 = t2; // Ok, first two elements can be converted,
// the third one can be constructed from a 'const char *'.
Just like std::make_pair for std::pair, there exists std::make_tuple to automatically create std::tuples using type deduction and auto helps to declare such a tuple. std::tie creates tuples of lvalue references to help unpack tuples. std::ignore also helps here. See the example:
auto record = std::make_tuple("Hari Ram", "New Delhi", 3.5, 'A');
std::string name ; float gpa ; char grade ;
std::tie(name, std::ignore, gpa, grade) = record ; // std::ignore helps drop the place name
std::cout << name << ' ' << gpa << ' ' << grade << std::endl ;
Relational operators are available (among tuples with the same number of elements), and two expressions are available to check a tuple's characteristics (only during compilation):
std::tuple_size<T>::value returns the number of elements in the tuple T,
std::tuple_element<I, T>::type returns the type of the object number I of the tuple T.
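A small sketch of both expressions, checked at compile time (the alias is illustrative):
#include <tuple>
#include <type_traits>

typedef std::tuple<int, double, char> triple;

static_assert(std::tuple_size<triple>::value == 3, "triple must hold three elements");
static_assert(std::is_same<std::tuple_element<1, triple>::type, double>::value,
              "the element at index 1 must be a double");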
Hash tables
Including hash tables (unordered associative containers) in the C++ standard library is one of the most recurring requests. It was not adopted in C++03 due to time constraints only. Although hash tables are less efficient than a balanced tree in the worst case (in the presence of many collisions), they perform better in many real applications.
Collisions are managed only via linear chaining because the committee did not consider it opportune to standardize open-addressing solutions, which introduce a number of intrinsic problems (above all when erasure of elements is allowed). To avoid name clashes with non-standard libraries that developed their own hash table implementations, the prefix “unordered” was used instead of “hash”.
The new library has four types of hash tables, differentiated by whether or not they accept elements with the same key (unique keys or equivalent keys), and whether they map each key to an associated value. They correspond to the four existing binary-search-tree-based associative containers, with an “unordered_” prefix: std::unordered_set, std::unordered_map, std::unordered_multiset, and std::unordered_multimap.
The new classes fulfill all the requirements of a container class, and have all the methods needed to access elements: insert, erase, begin, end.
This new feature didn't need any C++ language core extensions (though implementations will take advantage of various C++11 language features), only a small extension of the header <functional> and the introduction of headers <unordered_set> and <unordered_map>. No other changes to any existing standard classes were needed, and it doesn't depend on any other extensions of the standard library.
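For illustration, a minimal sketch of std::unordered_map (keys and values are illustrative):
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>

int main()
{
    std::unordered_map<std::string, int> age; // unique keys mapped to values
    age.insert(std::make_pair("Ada", 36));
    age["Grace"] = 85;                        // operator[] inserts the key if it is absent

    auto it = age.find("Ada");                // average constant-time lookup
    if (it != age.end())
        std::cout << it->first << " is " << it->second << '\n';
}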
std::array and std::forward_list
In addition to the hash tables, two more containers were added to the standard library. std::array is a fixed-size container that is more efficient than std::vector but safer and easier to use than a C-style array. std::forward_list is a singly linked list that provides more space-efficient storage than the doubly linked std::list when bidirectional iteration is not needed.
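A minimal sketch of both containers (the values are illustrative):
#include <array>
#include <forward_list>
#include <iostream>

int main()
{
    std::array<int, 3> a = {{1, 2, 3}};    // size fixed at compile time, no heap allocation
    std::forward_list<int> fl = {4, 5, 6}; // singly linked, forward iteration only

    fl.push_front(0);                      // constant-time insertion at the front
    std::cout << a.size() << ' ' << fl.front() << '\n'; // prints "3 0"
}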
Regular expressions
The new library, defined in the new header <regex>, is made up of several new classes:
regular expressions are represented by instances of the template class std::regex;
occurrences are represented by instances of the template class std::match_results;
std::regex_iterator is used to iterate over all matches of a regex.
The function std::regex_search is used for searching, while for ‘search and replace’ the function std::regex_replace is used, which returns a new string.
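For instance, a minimal sketch of std::regex_replace (the text and pattern are illustrative):
#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::string text = "quick brown fox";
    std::regex vowels("[aeiou]");
    // regex_replace leaves 'text' untouched and returns a new string.
    std::cout << std::regex_replace(text, vowels, "*") << '\n'; // prints "q**ck br*wn f*x"
}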
Here is an example of the use of std::regex_iterator:
#include <cstring>
#include <iostream>
#include <regex>
#include <string>

int main()
{
    const char *pattern = R"([^ ,.\t\n]+)"; // find words separated by space, comma, period, tab, newline
    std::regex rgx(pattern);                // throws std::regex_error on an invalid pattern
    const char *target = "Unseen University - Ankh-Morpork";

    // Use a regex_iterator to identify all words of 'target' separated by characters of 'pattern'.
    auto iter = std::cregex_iterator(target, target + std::strlen(target), rgx);
    // Make an end-of-sequence iterator.
    auto end = std::cregex_iterator();

    for (; iter != end; ++iter)
    {
        std::string match_str = iter->str();
        std::cout << match_str << '\n';
    }
}
The library <regex> requires neither alteration of any existing header (though it will use them where appropriate) nor an extension of the core language. In POSIX C, regular expressions are also available via the C POSIX library header regex.h.
General-purpose smart pointers
C++11 provides std::unique_ptr, as well as improvements to std::shared_ptr and std::weak_ptr from TR1. std::auto_ptr is deprecated.
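For illustration, a minimal sketch of the three smart pointers (the values are illustrative):
#include <iostream>
#include <memory>

int main()
{
    std::unique_ptr<int> u(new int(42)); // sole owner; the int is freed when 'u' goes out of scope
    std::shared_ptr<int> s1(new int(7)); // reference-counted ownership
    std::shared_ptr<int> s2 = s1;        // the reference count is now 2
    std::weak_ptr<int> w = s1;           // observes the object without owning it or affecting the count

    std::cout << *u << ' ' << *s1 << ' ' << s2.use_count() << '\n'; // prints "42 7 2"
}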
Extensible random number facility
The C standard library provides the ability to generate pseudorandom numbers via the function rand. However, the algorithm is delegated entirely to the library vendor. C++ inherited this functionality with no changes, but C++11 provides a new method for generating pseudorandom numbers.
C++11's random number functionality is split into two parts: a generator engine that contains the random number generator's state and produces the pseudorandom numbers; and a distribution, which determines the range and mathematical distribution of the outcome. These two are combined to form a random number generator object.
Unlike the C standard rand, the C++11 mechanism comes with three base generator engine algorithms:
linear_congruential_engine,
subtract_with_carry_engine, and
mersenne_twister_engine.
C++11 also provides a number of standard distributions:
uniform_int_distribution,
uniform_real_distribution,
bernoulli_distribution,
binomial_distribution,
geometric_distribution,
negative_binomial_distribution,
poisson_distribution,
exponential_distribution,
gamma_distribution,
weibull_distribution,
extreme_value_distribution,
normal_distribution,
lognormal_distribution,
chi_squared_distribution,
cauchy_distribution,
fisher_f_distribution,
student_t_distribution,
discrete_distribution,
piecewise_constant_distribution and
piecewise_linear_distribution.
The generator and distributions are combined as in this example:
#include <random>
#include <functional>
std::uniform_int_distribution<int> distribution(0, 99);
std::mt19937 engine; // Mersenne twister MT19937
auto generator = std::bind(distribution, engine);
int random = generator(); // Generate a uniform integral variate between 0 and 99.
int random2 = distribution(engine); // Generate another sample directly using the distribution and the engine objects.
Wrapper reference
A wrapper reference is obtained from an instance of the class template reference_wrapper. Wrapper references are similar to normal references (‘&’) of the C++ language. To obtain a wrapper reference from any object the function template ref is used (for a constant reference cref is used).
Wrapper references are useful above all for function templates, where references to parameters rather than copies are needed:
#include <functional>
#include <iostream>

// This function will take a reference to the parameter 'r' and increment it.
void func (int &r) { r++; }
// Template function.
template<class F, class P> void g (F f, P t) { f(t); }
int main()
{
int i = 0;
g (func, i); // 'g<void (int &r), int>' is instantiated
// then 'i' will not be modified.
std::cout << i << std::endl; // Output -> 0
g (func, std::ref(i)); // 'g<void(int &r),reference_wrapper<int>>' is instantiated
// then 'i' will be modified.
std::cout << i << std::endl; // Output -> 1
}
This new utility was added to the existing <functional> header and didn't need further extensions of the C++ language.
Polymorphic wrappers for function objects
Polymorphic wrappers for function objects are similar to function pointers in semantics and syntax, but are less tightly bound and can indiscriminately refer to anything which can be called (function pointers, member function pointers, or functors) whose arguments are compatible with those of the wrapper.
An example can clarify its characteristics:
std::function<int (int, int)> func; // Wrapper creation using
// template class 'function'.
std::plus<int> add;                  // 'std::plus<int>' is a standard function object whose
                                     // 'operator()' returns the sum of its two int arguments.
func = add; // OK - Parameters and return types are the same.
int a = func (1, 2); // NOTE: if the wrapper 'func' does not refer to any function,
// the exception 'std::bad_function_call' is thrown.
std::function<bool (short, short)> func2 ;
if (!func2)
{
// True because 'func2' has not yet been assigned a function.
bool adjacent(long x, long y);
func2 = &adjacent; // OK - Parameters and return types are convertible.
struct Test
{
bool operator()(short x, short y);
};
Test car;
func = std::ref(car); // 'std::ref' is a template function that returns the wrapper
// of member function 'operator()' of struct 'car'.
}
func = func2; // OK - Parameters and return types are convertible.
The template class function was defined inside the header <functional>, without needing any change to the C++ language.
Type traits for metaprogramming
Metaprogramming consists of creating a program that creates or modifies another program (or itself). This can happen during compilation or during execution. The C++ Standards Committee has decided to introduce a library that enables metaprogramming at compile time via templates.
Here is an example of a meta-program using the C++03 standard: a recursion of template instances for calculating integer exponents:
template<int B, int N>
struct Pow
{
// recursive call and recombination.
enum{ value = B*Pow<B, N-1>::value };
};
template< int B >
struct Pow<B, 0>
{
// ''N == 0'' condition of termination.
enum{ value = 1 };
};
int quartic_of_three = Pow<3, 4>::value;
Many algorithms can operate on different types of data; C++'s templates support generic programming and make code more compact and useful. Nevertheless, it is common for algorithms to need information on the data types being used. This information can be extracted during instantiation of a template class using type traits.
Type traits can identify the category of an object and all the characteristics of a class (or of a struct). They are defined in the new header <type_traits>.
In the next example there is the template function ‘elaborate’ which, depending on the given data types, will instantiate one of the two proposed algorithms (Algorithm::do_it).
// First way of operating.
template< bool B > struct Algorithm
{
template<class T1, class T2> static int do_it (T1 &, T2 &) { /*...*/ }
};
// Second way of operating.
template<> struct Algorithm<true>
{
template<class T1, class T2> static int do_it (T1, T2) { /*...*/ }
};
// Instantiating 'elaborate' will automatically instantiate the correct way to operate.
template<class T1, class T2>
int elaborate (T1 A, T2 B)
{
// Use the second way only if 'T1' is an integer and if 'T2' is
// in floating point, otherwise use the first way.
return Algorithm<std::is_integral<T1>::value && std::is_floating_point<T2>::value>::do_it( A, B ) ;
}
Via type traits, defined in header <type_traits>, it's also possible to create type transformation operations (static_cast and const_cast are insufficient inside a template).
This type of programming produces elegant and concise code; however, the weak point of these techniques is debugging: it is awkward during compilation and very difficult during program execution.
Uniform method for computing the return type of function objects
Determining the return type of a template function object at compile-time is not intuitive, particularly if the return value depends on the parameters of the function. As an example:
struct Clear
{
int operator()(int) const; // The parameter type is
double operator()(double) const; // equal to the return type.
};
template <class Obj>
class Calculus
{
public:
template<class Arg> Arg operator()(Arg& a) const
{
return member(a);
}
private:
Obj member;
};
Instantiating the class template Calculus<Clear>, the function object of Calculus will always have the same return type as the function object of Clear. However, given class Confused below:
struct Confused
{
double operator()(int) const; // The parameter type is not
int operator()(double) const; // equal to the return type.
};
Attempting to instantiate Calculus<Confused> will cause the return type of Calculus to not be the same as that of class Confused. The compiler may generate warnings about the conversion from int to double and vice versa.
TR1 introduces, and C++11 adopts, the template class std::result_of that allows one to determine and use the return type of a function object for every declaration. The object CalculusVer2 uses the std::result_of object to derive the return type of the function object:
template< class Obj >
class CalculusVer2
{
public:
template<class Arg>
typename std::result_of<Obj(Arg)>::type operator()(Arg& a) const
{
return member(a);
}
private:
Obj member;
};
In this way, instances of the function object CalculusVer2<Confused> incur no conversions, warnings, or errors.
The only change from the TR1 version of std::result_of is that the TR1 version allowed an implementation to fail to be able to determine the result type of a function call. Due to changes to C++ for supporting decltype, the C++11 version of std::result_of no longer needs these special cases; implementations are required to compute a type in all cases.
Improved C compatibility
For compatibility with C, from C99, these were added:
Preprocessor:
variadic macros,
concatenation of adjacent narrow/wide string literals,
_Pragma() – equivalent of #pragma.
long long – integer type that is at least 64 bits long.
__func__ – a predefined identifier evaluating to the name of the function it is in.
Headers:
cstdbool (stdbool.h),
cstdint (stdint.h),
cinttypes (inttypes.h).
Features originally planned but removed or not included
Heading for a separate TR:
Modules
Decimal types
Math special functions
Postponed:
Concepts
More complete or required garbage collection support
Reflection
Macro scopes
Features removed or deprecated
The term sequence point was removed, being replaced by specifying that either one operation is sequenced before another, or that two operations are unsequenced.
The former use of the keyword export was removed. The keyword itself remains, being reserved for potential future use.
Dynamic exception specifications are deprecated. Compile-time specification of non-exception-throwing functions is available with the noexcept keyword, which is useful for optimization.
std::auto_ptr is deprecated, having been superseded by std::unique_ptr.
Function object base classes (std::unary_function, std::binary_function), adapters to pointers to functions and adapters to pointers to members, and binder classes are all deprecated.
See also
C11
References
External links
The C++ Standards Committee
C++0X: The New Face of Standard C++
Herb Sutter's blog coverage of C++11
Anthony Williams' blog coverage of C++11
A talk on C++0x given by Bjarne Stroustrup at the University of Waterloo
The State of the Language: An Interview with Bjarne Stroustrup (15 August 2008)
Wiki page to help keep track of C++ 0x core language features and their availability in compilers
Online C++11 standard library reference
Online C++11 compiler
Bjarne Stroustrup's C++11 FAQ
More information on C++11 features:range-based for loop, why auto_ptr is deprecated, etc.
C++
Programming language standards
Articles with example C++ code
C++ programming language family
IEC standards
ISO standards | C++11 | Technology | 19,309
43,346,375 | https://en.wikipedia.org/wiki/Planck%20relation | The Planck relation (referred to as Planck's energy–frequency relation, the Planck–Einstein relation, Planck equation, and Planck formula, though the latter might also refer to Planck's law) is a fundamental equation in quantum mechanics which states that the energy of a photon, known as photon energy, is proportional to its frequency ν:
E = hν
The constant of proportionality, h, is known as the Planck constant. Several equivalent forms of the relation exist, including in terms of angular frequency ω:
E = ħω
where ħ = h/2π. Written using the symbol f for frequency, the relation is
E = hf
The relation accounts for the quantized nature of light and plays a key role in understanding phenomena such as the photoelectric effect and black-body radiation (where the related Planck postulate can be used to derive Planck's law).
Spectral forms
Light can be characterized using several spectral quantities, such as frequency ν, wavelength λ, wavenumber ν̃, and their angular equivalents (angular frequency ω, angular wavelength y, and angular wavenumber k). These quantities are related through
ν = c/λ = cν̃ = ω/2π = c/(2πy) = ck/2π
so the Planck relation can take the following "standard" forms:
E = hν = hc/λ = hcν̃
as well as the following "angular" forms:
E = ħω = ħc/y = ħck
The standard forms make use of the Planck constant h. The angular forms make use of the reduced Planck constant ħ = h/2π. Here c is the speed of light.
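As a quick worked example (a rounded calculation added for illustration, written in LaTeX notation), the energy of a photon with wavelength 500 nm follows from the standard form above:
E = \frac{hc}{\lambda}
  = \frac{(6.626\times10^{-34}\,\mathrm{J\,s})(2.998\times10^{8}\,\mathrm{m/s})}{500\times10^{-9}\,\mathrm{m}}
  \approx 3.97\times10^{-19}\,\mathrm{J} \approx 2.48\,\mathrm{eV}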
de Broglie relation
The de Broglie relation, also known as de Broglie's momentum–wavelength relation, generalizes the Planck relation to matter waves. Louis de Broglie argued that if particles had a wave nature, the relation would also apply to them, and postulated that particles would have a wavelength equal to λ = h/p. Combining de Broglie's postulate with the Planck–Einstein relation leads to
p = h/λ = hν̃
or
p = ħk
The de Broglie relation is also often encountered in vector form
p = ħk
where p is the momentum vector, and k is the angular wave vector.
Bohr's frequency condition
Bohr's frequency condition states that the frequency ν of a photon absorbed or emitted during an electronic transition is related to the energy difference (ΔE) between the two energy levels involved in the transition:
ΔE = hν
This is a direct consequence of the Planck–Einstein relation.
See also
Compton wavelength
References
Cited bibliography
Cohen-Tannoudji, C., Diu, B., Laloë, F. (1973/1977). Quantum Mechanics, translated from the French by S.R. Hemley, N. Ostrowsky, D. Ostrowsky, second edition, volume 1, Wiley, New York.
French, A.P., Taylor, E.F. (1978). An Introduction to Quantum Physics, Van Nostrand Reinhold, London.
Griffiths, D.J. (1995). Introduction to Quantum Mechanics, Prentice Hall, Upper Saddle River NJ.
Landé, A. (1951). Quantum Mechanics, Sir Isaac Pitman & Sons, London.
Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK.
Messiah, A. (1958/1961). Quantum Mechanics, volume 1, translated from the French by G.M. Temmer, North-Holland, Amsterdam.
Schwinger, J. (2001). Quantum Mechanics: Symbolism of Atomic Measurements, edited by B.-G. Englert, Springer, Berlin.
van der Waerden, B.L. (1967). Sources of Quantum Mechanics, edited with a historical introduction by B.L. van der Waerden, North-Holland Publishing, Amsterdam.
Weinberg, S. (1995). The Quantum Theory of Fields, volume 1, Foundations, Cambridge University Press, Cambridge UK.
Weinberg, S. (2013). Lectures on Quantum Mechanics, Cambridge University Press, Cambridge UK.
Foundational quantum physics
Max Planck
Old quantum theory | Planck relation | Physics | 774 |
6,458,252 | https://en.wikipedia.org/wiki/Coramsine | Coramsine (SBP002) was an experimental cancer drug that was evaluated in preliminary clinical trials, but was abandoned by Solbec Pharmaceuticals Ltd after the results were insufficient for them to raise investment capital to continue its development.
Composition
Coramsine is a chemotherapeutic and immunomodulating agent whose primary ingredients are two solasodine glycoalkaloids, solasonine and solamargine, which are derived from the plant Solanum linnaeanum (devil's apple).
History
The study of glycoalkaloids as potential anti-cancer agents began with Queensland researcher Bill Cham in the late 1970s. Cham heard reports from farmers that topical application of the Devil's Apple plant was effective in slowing the growth of various skin cancers on horses and cattle.
Animal studies and in vitro studies showed positive results, however Cham decided to focus his energies on developing the glycoalkaloid mixture, patented as BEC, as a topical cream for non-melanoma skin cancer.
In 2000, Solbec Pharmaceuticals Ltd. licensed the intellectual property rights to BEC from Cham after it displayed good results against peritoneal mesothelioma in animals. Solbec initiated human trials, which also yielded encouraging results. Other researchers have also demonstrated antiproliferative activity of steroidal glycosides against cancer cells.
During 2005 and 2006 Solbec was granted orphan drug designation for Coramsine by the U.S. Food and Drug Administration in the treatment of renal cell carcinoma and for malignant melanoma respectively. 2006 also saw the completion of Phase I/IIa trials and the commissioning of Phase IIb trials that would target renal cell carcinoma (stage III/IV) and malignant melanoma (stage III/IV), but in November 2006 shortly before their commencement Solbec postponed the trials due to Australia's Therapeutic Goods Administration (TGA) having concerns about the drug's pre-clinical data. A development plan for coramsine was approved by the TGA in May 2007 resulting in further pre-clinical studies, which were successfully completed in March 2008. Solbec unsuccessfully sought a business partner to develop coramsine further, abandoning its development, as they changed the company's direction as well as its legal business name in December 2008, following the Great Recession and credit crunch. The subsequent company licensed the technology back to the original founder, Bill Cham, who manufactures it from his private company in Vanuatu and markets it worldwide via the internet under the name Curaderm BEC5, a cream of solasodine rhamnosyl glycosides (BEC). Curaderm BEC5 has not been approved for medical use by any regulatory agency.
Mechanism of action
Coramsine is thought to kill tumor cells by direct cell lysis, showing selectivity for cancer cells as opposed to healthy cells via a rhamnose binding protein. Coramsine also has the potential to modulate the production of interleukin-6.
References
Alkaloids
Abandoned drugs | Coramsine | Chemistry | 643 |
49,989,377 | https://en.wikipedia.org/wiki/Carnegie%20Mellon%20University%20Computational%20Biology%20Department | The Ray and Stephanie Lane Computational Biology Department (CBD) is one of the seven departments within the School of Computer Science at Carnegie Mellon University in Pittsburgh, Pennsylvania, United States. Now situated in the Gates-Hillman Center, CBD was established in 2007 as the Lane Center for Computational Biology by founding department head Robert F. Murphy. The establishment was supported by funding from Raymond J. Lane and Stephanie Lane; CBD officially became a department within the School of Computer Science in 2009. In November 2023, Carnegie Mellon named the department as the Ray and Stephanie Lane Computational Biology Department, in recognition of the Lanes' significant investment in computational biology at CMU.
CBD specializes in genomics, systems biology, and biological imaging, pioneering advanced computational methods, including AI and machine learning. The accolades of its faculty (current and former) include leadership roles such as president of the National Science Foundation and the International Society of Advanced Cytometry, as well as membership in the National Institutes of Health Council of Councils. They have received numerous prestigious awards, including the Overton Prize, Guggenheim Fellowship, Okawa Award, United States Air Force Young Investigator Award, Presidential Young Investigator Award, NSF CAREER Award, Sloan Fellowship, and New Innovator's Award from the NIH, among others. Additionally, faculty members have been elected to the National Academy of Sciences, American Association for the Advancement of Science, and the International Society of Computational Biology.
As part of the HHMI-NIBIB Interfaces Initiative, CBD received funding from Howard Hughes Medical Institute and the National Institute of Biomedical Imaging and Bioengineering (NIBIB) to develop an interdisciplinary Ph.D. program in computational biology with the University of Pittsburgh, which was founded as the Joint CMU-Pitt Ph.D. Program in Computational Biology in 2005. This program is currently receiving training support through a National Institutes of Health T32 Training Grant. CBD is the home of the B.S. in Computational Biology, one of the four B.S. degree programs within Carnegie Mellon School of Computer Science. The Computational Biology undergraduate program has been consistently ranked as one of the top 3 programs by US News.
CBD is the home of an NIH Center for the HuBMAP Integration, Visualization & Engaging (HIVE) Initiative led by Ziv Bar-Joseph and an NIH Center for Multiscale Analysis of 4D Nucleome Structure and Function by Comprehensive Multimodal Data Integration led by Jian Ma.
CBD houses the Center for AI-Driven Biomedical Research (AI4BIO) at CMU, a catalyst for innovations at the intersection of AI and biomedicine across the School of Computer Science and campus.
Notable faculty
Robert F. Murphy (founding department head)
Russell Schwartz (current department head)
Ziv Bar-Joseph
Jaime Carbonell
Jian Ma
Kathryn Roeder
Roni Rosenfeld
Eric Xing
Degree programs
Joint CMU-Pitt Ph.D. Program in Computational Biology (with University of Pittsburgh)
M.S. in Computational Biology (joint with the Department of Biological Sciences)
M.S. in Automated Science
B.S. in Computational Biology
Minor in Computational Biology
References
Medical imaging research institutes
Schools and departments of Carnegie Mellon
University departments in the United States
Bioinformatics organizations
Organizations established in 2007
2007 establishments in Pennsylvania | Carnegie Mellon University Computational Biology Department | Biology | 668 |
317,471 | https://en.wikipedia.org/wiki/Silastic | Silastic (a portmanteau of 'silicone' and 'plastic') is a trademark registered in 1948 by Dow Corning Corporation for flexible, inert silicone elastomer.
Composition
The Silastic trademark refers to silicone elastomers, silicone tubing and some cross-linked polydimethylsiloxane materials manufactured by Dow Corning, the owner of the global trademark.
Applications
Silastic-brand silicone elastomers have a range of applications. In the automotive industry they are used for making gaskets, spark plug boots, hoses and other components that must operate over a broad temperature range and resist oil and coolants. The elastomers are widely used in the architectural, aerospace, electronic, food and beverage, textile, and transportation industries for molding, coating, adhesion and sealing. Due to their inert nature, medical-grade Silastic-brand silicone elastomers are important materials in numerous medical and pharmaceutical devices including catheters, pacemaker leads, tubing, wound dressings, silos for abdominal wall defects, and nasolacrimal duct obstruction. These medical-grade elastomers are also used in the manufacture of hyper-realistic masks, where they perfectly mimic the texture of human skin and follow all facial movements and expressions.
References
External links
Dow Corning Corporation
Silicones
Elastomers
Brand name materials
Dow Chemical Company | Silastic | Chemistry | 294 |
57,063,312 | https://en.wikipedia.org/wiki/Douma%20chemical%20attack | On 7 April 2018, a chemical warfare attack was launched by the forces of the government of Bashar al-Assad in the city of Douma, Syria. Medics and witnesses reported that it caused the deaths of between 40 and 50 people and injuries to possibly well over 100. The attack was attributed to the Syrian Army by rebel forces in Douma, and by the United States, British, and French governments. A two-year long investigation by the Organisation for the Prohibition of Chemical Weapons (OPCW) Investigation and Identification Team (IIT) concluded in January 2023 that the Syrian Air Force perpetrated the chemical attacks during its military campaign in Douma. On 14 April 2018, the United States, France and the United Kingdom carried out a series of military strikes against multiple government sites in Syria.
Background
According to Organisation for the Prohibition of Chemical Weapons (OPCW) and United Nations investigations, both the Syrian Arab Republic's forces and Islamic State militants have used chemical weapons during the conflict. Human Rights Watch has documented 85 chemical weapons attacks in Syria since 2013, and its sources indicate the Syrian government is responsible for the majority. People reported incidents of chemical weapons use specifically in Douma in January 2018; Russia vetoed a potential United Nations mission to investigate. The Arms Control Association reported two smaller chlorine gas attacks in Douma on 7 March and 11 March.
Douma had been under rebel control since 18 October 2012, and, with the rest of the Eastern Ghouta region, under siege since April 2013. The Rif Dimashq offensive (February–April 2018), code-named Operation Damascus Steel, was launched by the Syrian Arab Army (SAA) and its allies on 18 February 2018 to capture the rebel-held territory. The Jaysh al-Islam rebel coalition controlled Douma. By mid-March, rebel territory in Eastern Ghouta had been reduced to three pockets: one in the south around Hamouria held by Faylaq al-Rahman; a second in the west around Harasta held by Ahrar al-Sham; and Douma in the north held by Jaysh al-Islam. In the second half of March, the other two pockets were secured via evacuation deals between the rebels, Syria, and Russia. On 31 March, the last of the evacuations was conducted and the Syrian army declared victory in Eastern Ghouta, while the rebels still holding out in Douma were given an ultimatum to surrender by the end of the day.
Reports
A chemical attack in Douma occurred on 7 April 2018. The Union of Medical Care and Relief Organizations, a humanitarian organization that supervises medical services in the region, attributed seventy deaths to the attack. On-site medics reported smelling a chlorine-like odour, but said that the symptoms and death toll pointed to something more noxious, such as a sarin nerve agent, having caused the deaths. A video from the scene showed dead men, women, and children with foam at their mouths.
The Syrian American Medical Society (SAMS) reported over 500 injured people at Douma "were brought to local medical centers with symptoms indicative of exposure to a chemical agent." SAMS also said a chlorine bomb struck a Douma hospital, killing six people, and that another attack with "mixed agents" affected a building nearby. According to the Syrian opposition groups, witnesses also reported a strong smell of chlorine and said effects appeared stronger than in previous similar attacks. Syrian opposition activists also posted videos of yellow compressed gas cylinders that they said were used during the attack. Based on the symptoms and the speed with which the victims were affected, medical workers and experts suggested either a combination of chlorine with another gas or a nerve agent was used. Several medical, monitoring, and activist groups—including the White Helmets—reported that two Syrian Air Force Mi-8 helicopters dropped barrel bombs on the city of Douma. The bombs caused severe convulsions in some residents and suffocated others.
On 6 July 2018, an interim report was issued by the OPCW. Various chlorinated organic chemicals (dichloroacetic acid, trichloroacetic acid, chlorophenol, dichlorophenol, bornyl chloride, chloral hydrate etc.) were found in samples, along with residues of explosive, but the designated laboratory 03 stated that no CWC-scheduled chemicals or nerve agent-related chemicals were detected. In September 2018 the United Nations Commission of Enquiry on Syria reported: "Throughout 7 April, numerous aerial attacks were carried out in Douma, striking various residential areas. A vast body of evidence collected by the Commission suggests that, at approximately 7.30 p.m., a gas cylinder containing a chlorine payload delivered by helicopter struck a multi-storey residential apartment building located approximately 100 metres south-west of Shohada square. The Commission received information on the death of at least 49 individuals, and the wounding of up to 650 others."
While it was initially unclear which chemicals had been used, in 2019 the OPCW FFM (Fact-Finding Mission) report concluded: "Regarding the alleged use of toxic chemicals as a weapon on 7 April 2018 in Douma, the Syrian Arab Republic, the evaluation and analysis of all the information gathered by the FFM—witnesses' testimonies, environmental and biomedical samples analysis results, toxicological and ballistic analyses from experts, additional digital information from witnesses—provide reasonable grounds that the use of a toxic chemical as a weapon took place. This toxic chemical contained reactive chlorine. The toxic chemical was likely molecular chlorine." The OPCW said it found no evidence to support the Assad government's claim that a local facility was being used by rebel fighters to produce chemical weapons.
Aftermath
The day after the chemical attack, all rebels controlling Douma agreed to a deal with the government to surrender the area.
In the early hours of 9 April 2018, an air strike was conducted against Tiyas Military Airbase. Two Israeli F-15I jets reportedly attacked the airfield from Lebanese airspace, firing eight missiles, of which five were intercepted, according to claims by Russia. According to the Syrian Observatory for Human Rights monitor, at least 14 people were killed and more were wounded.
On 10 April, member states proposed competing UN Security Council resolutions to handle the response to the chemical attack. The U.S., France, and UK vetoed a Russian-proposed UN resolution. Russia had also vetoed the U.S.'s proposed resolution to create "a new investigative mechanism to look into chemical weapons attacks in Syria and determine who is responsible."
On 14 April, France, the United Kingdom and the United States launched missiles against four Syrian government targets in response to the attack. The strikes were claimed to have successfully destroyed Syria's chemical weapons capabilities. Nevertheless, according to the Pentagon, the Syrian Arab Republic still retained the ability to launch chemical weapons attacks.
Investigations and reports
Media commentary and investigations
CBS journalist Seth Doane also traveled to Douma on 16 April, finding the site of the alleged attack where a neighbor reported a choking gas that smelled like chlorine. He took Doane to site of the impact and showed where the remains of a missile rested. Eliot Higgins, a citizen journalist, founder of Bellingcat, and blogger investigating the Syrian civil war, concluded based on geographical, video, and open source evidence that the chlorine gas was dropped by one of two Mi-8 helicopters taking off from Dumayr Airbase 30 minutes earlier. Military officials in the US, UK, and France all insisted the bomb had been dropped from one of two Syrian government helicopters flying from the airbase at Dumayr.
The Guardian reported testimony from witnesses that medical personnel in Douma faced "extreme intimidation" from Syrian officials for them to remain silent about their patients' treatment. They and their families were allegedly threatened by the Syrian government. Medics who tried to leave the area were said to have been heavily searched in case they were transporting samples. The Guardian described Russian state media as "pushing" two lines; that they have spoken to witnesses denying the occurrence of any attacks, and that they have found chlorine-filled canisters in Douma "used for rebel attacks later blamed on the regime."
In June 2018, a New York Times investigation found that Syrian military helicopters dropped a chlorine bomb on the rooftop balcony of an apartment building in Douma. At least 34 victims were counted and their bodies "showed horrific signs of chemical exposure." Dozens of videos and photos were examined with academics, scientists and chemical weapons experts. The New York Times was unable to visit Douma, but forensically analysed the visual evidence from Syrian activists and Russian reports. They collaborated with Forensic Architecture to reconstruct a three-dimensional model of the building, balcony and bomb, and analysed how damage to the bomb's casing related to the debris. According to their findings, key pieces of evidence indicated the bomb was not planted, but dropped from the air by a Syrian military helicopter, and the evidence supported the involvement of chlorine. The dent on the front of the bomb indicated it crashed nose down into the floor of the balcony and pierced the ceiling. The front of the casing showed corrosion similar to that which is caused when metal is exposed to chlorine and water. The grid pattern imprinted on the underside of the bomb matched the metal lattice in the rubble that was over the balcony. Twisted metal found in the rubble corresponded to rigging seen attached to similar weapons. Apparent frost covering the underside of the casing indicated the canister of chlorine was emptied quickly. According to The New York Times, since the Syrian military controlled the airspace over Douma, it would be "almost impossible" for the attack to have been staged by opposition fighters who do not have aircraft. The New York Times noted that remote access "cannot tell us everything", and environmental and tissue samples were also needed in chemical weapons investigations.
The investigations published soon after the fact by Bellingcat, the New York Times, and Forensic Architecture were later confirmed by an in-depth report by James Harkin and Lauren Feeney in The Intercept. After six months of examining the evidence, interviewing witnesses, and consulting with experts such as Higgins and Theodore Postol of the Massachusetts Institute of Technology, Harkin and Feeney concluded that Syrian Air Force helicopters dropped two chlorine canister bombs on Douma on 7 April 2018. Harkin noted that many chlorine attacks launched by Syrian forces in the past had resulted in no casualties, hypothesizing that—in contrast to the much more lethal sarin gas—Syrian forces likely employed chlorine at Douma to induce panic among the population rather than to kill many people. One of the canisters never released its payload and caused no deaths, but the other canister struck the weak roof of an apartment complex at an unexpected angle, releasing a very high concentration of chlorine that killed the people beneath it in a matter of minutes. According to Harkin, the frightened residents seen flocking to a nearby hospital and being doused with water in viral footage were not survivors of the chemical attack, but victims of conventional weapons and smoke inhalation.
A report released by the Global Public Policy Institute (GPPi), a Berlin-based think tank, determined that chlorine attacks accounted for 91.5% of all confirmed chemical weapons attacks attributable to the Syrian government throughout the war, including the 7 April 2018 attack on Douma. The report held the Syrian government responsible for 98% of all recorded chemical weapons attacks over the course of the Syrian civil war and believes its use of chemical weapons "is best understood as part of its overall war strategy of collective punishment of populations in opposition-held areas".
Commenting on the OPCW FFM report of 2019, Bellingcat remarked that the detail provided, 'continues to make it clear that the Douma attack was yet another chlorine attack delivered by helicopter, using the same type of modified gas cylinders as seen in previous chlorine attacks.' On 23 January 2020, Bellingcat published a report in which it argued that it is effectively impossible for the Douma attack to have been a false flag incident.
In December 2024, as the Assad government fell, journalists from the BBC and The Guardian were able to interview witnesses and survivors in Douma, who described how they'd been prevented from speaking in the years since the incident.
OPCW investigation
On 10 April, the Syrian and Russian governments invited the Organisation for the Prohibition of Chemical Weapons to send a team to investigate the attacks. When the investigators arrived in Damascus on 14 April, their access to the site was blocked by Russia and Syria who cited security concerns.
On 17 April, the OPCW was promised access to the site, but had not entered Douma and was unable to carry out the inspection because a large crowd gathered at one site, while their reconnaissance teams came under fire at the other site. According to the OPCW director, "On arrival at site one, a large crowd gathered and the advice provided by the UNDSS was that the reconnaissance team should withdraw," and "at site two, the team came under small arms fire and an explosive was detonated. The reconnaissance team returned to Damascus." The OPCW statement did not lay blame on any party for the incident. The United States believed the Syrian government was stalling the OPCW to give itself time to remove evidence.
On 19 April, the OPCW still was unable to access the sites. According to a US State Department spokeswoman, there was "credible information" that "Russian officials are working with the Syrian regime to deny and to delay these inspectors from gaining access to Douma," and "to sanitize the locations of the suspected attacks and remove incriminating evidence of chemical weapons use."
OPCW inspectors visited the site and collected samples on 21 April and 25 April 2018. The organization said that it would submit to its member states a report "based on analysis of the sample results, as well other information and materials collected by the team."
At the warehouse and the facility suspected by the authorities of the Syrian Arab Republic of producing chemical weapons in Douma, information was gathered to assess whether these facilities were associated with the production of chemical weapons or toxic chemicals that could be used as weapons. From the information gathered during the two on-site visits to these locations, there was no indication of either facility being involved in the production of chemical warfare agents or toxic chemicals for use as weapons. During the visit to Location 2 (cylinder on the roof), Syrian Arab Republic representatives did not provide the access requested by the OPCW Fact-Finding Mission (FFM) team to some apartments within the building, which were closed at the time. The Syrian Arab Republic representatives stated that they did not have the authority to force entry into the locked apartments.
On 6 July 2018, the FFM published its interim report. The report stated that:
The results show that no organophosphorous nerve agents or their degradation products were detected in the environmental samples or in the plasma samples taken from alleged casualties. Along with explosive residues, various chlorinated organic chemicals were found in samples from two sites, for which there is full chain of custody.
In March 2019, the OPCW FFM final report concluded:
The OPCW said it found no evidence to support the government's claim that a local facility was being used by rebel fighters to produce chemical weapons. It was not the mandate of the fact-finding team to assign blame for the attack.
An engineering report written by Ian Henderson, a liaison officer at the OPCW Command Post Office in Damascus, was leaked in 2019. According to his report, there was a "higher probability that both cylinders were manually placed at those two locations rather than being delivered from aircraft".
In November 2019, Fernando Arias reaffirmed his defense of the FFM report, saying of differing views: "While some of these diverse views continue to circulate in some public discussion forums, I would like to reiterate that I stand by the independent, professional conclusion [of the investigation]."
Russia threatened to block the budget for the OPCW at the annual meeting in The Hague in 2019 if it included funding for a new team which would give the organisation powers to pin blame on culprits for the use of toxic arms. Previously the watchdog only had a mandate to say whether or not an attack had occurred. Russia, Iran and China led efforts to block the budget in 2018 but it passed by a majority of 99–27. "Moscow has consistently raised doubts over chemical attacks in Syria or insisted they were staged, and has recently highlighted a leaked report raising questions about a deadly chlorine attack in the Syrian town of Douma in April 2018. Tensions have also been high since four Russian spies were expelled from the Netherlands in 2018 for allegedly trying to hack into the OPCW's computers." On 28 November 2019 the bid by Russia to block funding for a new team that will identify culprits behind toxic attacks in Syria failed with member states at the global chemical watchdog overwhelmingly approving a new budget.
On 17 January 2020, Bellingcat published a report in which it said it had found problems with Henderson's engineering assessment.
In February 2020, Fernando Arias, the Director-General of the OPCW, shared the findings of an independent investigation into possible breaches of confidentiality which was initiated after the leak. The investigation took place between July 2019 and February 2020. The investigators determined that two former OPCW officials, referred to as Inspector A and Inspector B, violated their obligations concerning the protection of confidential information related to the Douma investigation. According to the investigators, Inspector A was not a member of the FFM, played a minor supporting role in the Douma investigation, and did not have access to all information gathered by the FFM team – including witness interviews, laboratory results, and analyses by independent experts. After the July 2018 interim report, it had taken a further seven months for the FFM to further investigate the incident and conduct the bulk of its work, and Inspector A no longer had any supporting role regarding the FFM during this period. According to the investigators, the assessment of Inspector A was an unofficial personal document created with incomplete information and without authorisation. Inspector B was a member of the FFM for the first time and travelled to Syria in April 2018. He never left the command post in Damascus because he had not completed the training necessary to deploy on-site. The majority of the FFM's work occurred after Inspector B separated with the OPCW at the end of August 2018. During a briefing in February 2020 to State Parties to the Chemical Weapons Convention, Fernando Arias said:
Inspectors A and B are not whistle-blowers. They are individuals who could not accept that their views were not backed by evidence. When their views could not gain traction, they took matters into their own hands and breached their obligations to the Organisation. Their behaviour is even more egregious as they had manifestly incomplete information about the Douma investigation. Therefore, as could be expected, their conclusions are erroneous, uninformed, and wrong.
OPCW-IIT Findings
The third report published in 27 January 2023 by the OPCW Investigation and Identification Team (IIT) concluded that the Syrian Armed Forces were responsible for the chemical attack. The OPCW-IIT findings concluded: "between 19:10 and 19:40 (UTC +3) on 7 April 2018, during a major military offensive aimed at regaining control of the city of Douma, at least one Syrian Air Force Mi-8/17 helicopter, departing from Dumayr airbase and operating under the control of the Tiger Forces, dropped two yellow cylinders which hit two residential buildings in a central area of the city. At Location 2, the cylinder hit the rooftop floor of a three-storey residential building without fully penetrating it, ruptured, and rapidly released toxic gas—chlorine—in very high concentrations, which rapidly dispersed within the building killing 43 named individuals and affecting dozens more."
In a joint press release published by the US Department of State on the same day, the Foreign Ministers of United States, UK, France and Germany thanked the OPCW for its "independent, unbiased, and expert" research and denounced the Syrian government for its continuing violations of Chemical Weapons Conventions, stating: "Our governments condemn in the strongest terms the Syrian regime’s repeated use of these horrific weapons..Syria must fully declare and destroy its chemical weapons program and allow the deployment of OPCW staff to its country to verify it has done so... IIT also obtained information that, at the time of the attack, the airspace over Douma was exclusively controlled by the Syrian Arab Air Force and the Russian Aerospace Defence Forces. We call on the Russian Federation to stop shielding Syria from accountability for its use of chemical weapons. No amount of disinformation from the Kremlin can hide its hand in abetting the Assad regime."
Reactions
Government
– On 12 April, French President Emmanuel Macron said he had proof that the Syrian government had attacked the town of Douma with chemical weapons and had used at least chlorine.
– The Foreign Ministry of Iran spokesman said: "While the Syrian army has the upper hand in the war against armed terrorists, it is not logical for them to use chemical weapons. Such claims and accusations [about chemical weapons use] by the Americans and some Western countries signal a new plot against the government and nation of Syria and is an excuse for military action against them."
– The Qatar Foreign Ministry condemned the use of chemical weapons, and called for an investigation into the incident and for punishment of those involved.
– On 13 March 2018 the Chief of the General Staff of the Russian Armed Forces, Valery Gerasimov, said the Russian military had "reliable intelligence" that suggested the rebels holding Eastern Ghouta, along with the White Helmets activists, were preparing to stage and film a chemical weapons attack against civilians, which the U.S. government would blame on the Syrian forces and use as a pretext to bomb the government quarter in Damascus. In the event that the lives of Russian servicemen should be threatened by U.S. strikes, Gerasimov said Russia would respond militarily—"against both the missiles and the platforms from which they're launched". The Russian Foreign Ministry on 8 April denied chemical weapons had been used. A few days later, the Russian military said members of the White Helmets organization filmed a staged attack. Then, on 13 April, the Russian Ministry of Defence said that it was Britain that staged the attack in order to provoke U.S. airstrikes. On 26 April, Russian officials held a press conference in The Hague where they presented several apparent witnesses from the Douma incident, flown in from Syria, who said that reported victims had not suffered symptoms of a chemical attack. The Russian envoy to the OPCW dismissed videos of the attack as "a sloppily staged video" and said the pretence for a strike was "completely groundless". On 20 January 2020, Russia convened a UN Security Council (UNSC) Arria meeting (not treated as formal council business) where it presented the view that there was no evidence that chemical weapons were used in Douma. Ian Henderson appeared via video. The ambassador from Germany compared the presentation to Alice in Wonderland.
– The Ministry of Foreign Affairs condemned the use of chemical weapons, and stressed the need for a peaceful solution based on the principles of the Geneva Declaration and UN Security Council resolutions.
– The Syrian state-owned Syrian Arab News Agency reported a Foreign and Expatriates Ministry source saying that Syria's alleged use of "chemical weapons have become an unconvincing stereotype, except for some countries which traffic with the blood of civilians and support terrorism in Syria."
– A spokesman for President Recep Tayyip Erdoğan said the "Syrian regime must give account for the attacks in various regions of the country at different times," and called upon the international community to address war crimes and crimes against humanity.
– Foreign Secretary Boris Johnson said that "these latest reports must urgently be investigated and the international community must respond" and that "investigators from the Organisation for the Prohibition of Chemical Weapons looking into reports of chemical weapons use in Syria have our full support. Russia must not yet again try to obstruct these investigations". He also condemned the use of chemical weapons in general, adding that "those responsible for the use of chemical weapons have lost all moral integrity and must be held to account."
– President Donald Trump condemned the attack on Twitter, heavily criticizing Russia over it. Trump canceled his trip to the 8th Summit of the Americas, sending Vice President Mike Pence in his place. On 10 April, Trump, UK Prime Minister Theresa May, and French President Emmanuel Macron said in a statement following joint telephone calls that they had "agreed that the international community needed to respond to uphold the worldwide prohibition on the use of chemical weapons". On 11 April, via Twitter, President Trump told Russia to "get ready" for "nice and new and 'smart' missiles." Vasily Nebenzia, Russia's ambassador to the United Nations, said the United States would "bear responsibility" for any "illegal military adventure" they conducted. The following day, Trump appeared to soften his resolve, tweeting he "[n]ever said when an attack on Syria would take place. Could be very soon or not so soon at all!" U.S. Defense Secretary James Mattis stated the U.S. was still waiting on the OPCW investigation, but that there were "a lot of media and social media indicators that either chlorine or sarin was used" in Douma. The BBC quoted U.S. officials as saying urine and blood samples taken from victims had tested positively for traces of chlorine. On 14 April, France, the United Kingdom and the United States launched airstrikes against four Syrian government targets in response to the attack.
Intergovernment
– In a statement, the EU said "the evidence points towards yet another chemical attack by the regime" and "it is a matter of grave concern that chemical weapons continue to be used, especially on civilians. The European Union condemns in the strongest terms the use of chemical weapons and calls for an immediate response by the international community". It also called for the United Nations Security Council to identify the perpetrators and for Russia and Iran to influence Assad against launching such attacks.
– On 10 April 2018, the United Nations Security Council failed to adopt three competing resolutions on an inquiry into the chemical attack, with Russia and the United States clashing over the issue and exchanging military threats.
– The WHO released a statement, with a reference to outside medical sources, that 43 people died while suffering "symptoms consistent with exposure to highly toxic chemicals."
See also
List of massacres during the Syrian civil war
List of Syrian civil war barrel bomb attacks
Syria chemical weapons program
References
Notes
External links
Report of the Fact-Finding Mission Regarding the Incident of Alleged Use of Toxic Chemicals as a Weapon in Douma, Syrian Arab Republic, on 7 April 2018
New York Times interactive 24 June 2018
Bellingcat: The opcw ffms report on 7 April 2018 douma chemical attack versus the open source evidence
April 2018 events in Syria
Chemical weapons attacks
Douma District
Military operations of the Syrian civil war in 2018
Military operations of the Syrian civil war involving chemical weapons
Military operations of the Syrian civil war involving the Syrian government
Rif Dimashq Governorate in the Syrian civil war
Attacks on hospitals during the Syrian civil war | Douma chemical attack | Chemistry | 5,690 |
37,914,061 | https://en.wikipedia.org/wiki/Principle%20of%20normality | The Principle of normality in solid mechanics states that if a normal to the yield locus is constructed at the point of yielding, the strains that result from yielding are in the same ratio as the stress components of the normal.
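In the standard notation of plasticity theory (these symbols are a common convention, not taken from this article), the principle is usually written as the associated flow rule:

$$ d\varepsilon^{p}_{ij} = d\lambda \,\frac{\partial f}{\partial \sigma_{ij}}, \qquad d\lambda \ge 0, $$

where $f(\sigma_{ij}) = 0$ is the yield locus and $\partial f / \partial \sigma_{ij}$ are the components of its normal in stress space, so the plastic strain increments are in the same ratio as the components of that normal.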
References
Solid mechanics | Principle of normality | Physics | 49 |
882,332 | https://en.wikipedia.org/wiki/Alexander%20Macfarlane | Alexander Macfarlane FRSE LLD (21 April 1851 – 28 August 1913) was a Scottish logician, physicist, and mathematician.
Life
Macfarlane was born in Blairgowrie, Scotland, to Daniel MacFarlane (Shoemaker, Blairgowrie) and Ann Small. He studied at the University of Edinburgh. His doctoral thesis "The disruptive discharge of electricity" reported on experimental results from the laboratory of Peter Guthrie Tait.
In 1878 Macfarlane spoke at the Royal Society of Edinburgh on algebraic logic as introduced by George Boole. He was elected a Fellow of the Royal Society of Edinburgh. His proposers were Peter Guthrie Tait, Philip Kelland, Alexander Crum Brown, and John Hutton Balfour. The next year he published Principles of the Algebra of Logic which interpreted Boolean variable expressions with algebraic manipulation.
During his life, Macfarlane played a prominent role in research and education. He taught at the universities of Edinburgh and St Andrews, was physics professor at the University of Texas (1885–1894), professor of Advanced Electricity, and later of mathematical physics, at Lehigh University. In 1896 Macfarlane encouraged the association of quaternion students to promote the algebra. He became the Secretary of the Quaternion Society, and in 1909 its president. He edited the Bibliography of Quaternions that the Society published in 1904.
Macfarlane was also the author of a popular 1916 collection of mathematical biographies (Ten British Mathematicians) and of a similar work on physicists (Lectures on Ten British Physicists of the Nineteenth Century, 1919). Macfarlane was caught up in the revolution in geometry during his lifetime, in particular through the influence of G. B. Halsted, who was mathematics professor at the University of Texas. Macfarlane originated an Algebra of Physics, which was his adaptation of quaternions to physical science. His first publication on Space Analysis preceded the presentation of Minkowski Space by seventeen years.
Macfarlane actively participated in several International Congresses of Mathematicians including the primordial meeting in Chicago, 1893, and the Paris meeting of 1900 where he spoke on "Application of space analysis to curvilinear coordinates".
Macfarlane retired to Chatham, Ontario, where he died in 1913.
Space analysis
Alexander Macfarlane stylized his work as "Space Analysis". In 1894 he published his five earlier papers and a book review of Alexander McAulay's Utility of Quaternions in Physics. Page numbers are carried from previous publications, and the reader is presumed familiar with quaternions. The first paper is "Principles of the Algebra of Physics" where he first proposes the hyperbolic quaternion algebra, since "a student of physics finds a difficulty in principle of quaternions which makes the square of a vector negative." The second paper is "The Imaginary of the Algebra". Similar to Homersham Cox (1882/83), Macfarlane uses the hyperbolic versor as the hyperbolic quaternion corresponding to the versor of Hamilton. The presentation is encumbered by the notation
Later he conformed to the notation exp(A α) used by Euler and Sophus Lie. The expression is meant to emphasize that α is a right versor, where π/2 is the measure of a right angle in radians. The π/2 in the exponent is, in fact, superfluous.
Paper three is "Fundamental Theorems of Analysis Generalized for Space". At the 1893 mathematical congress Macfarlane read his paper "On the definition of the trigonometric functions" where he proposed that the radian be defined as a ratio of areas rather than of lengths: "the true analytical argument for the circular ratios is not the ratio of the arc to the radius, but the ratio of twice the area of a sector to the square on the radius." The paper was withdrawn from the published proceedings of the mathematical congress (acknowledged at page 167) and privately published in his Papers on Space Analysis (1894). Macfarlane reached this idea of ratios of areas while considering the basis for hyperbolic angle, which is analogously defined.
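In modern notation (a restatement, not Macfarlane's own symbols): a circular sector of radius $r$ and area $A$ satisfies $A = \tfrac{1}{2} r^{2}\theta$, so

$$ \theta = \frac{2A}{r^{2}}, $$

which is exactly "the ratio of twice the area of a sector to the square on the radius"; the hyperbolic angle is defined by the same area ratio taken over a hyperbolic sector.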
The fifth paper is "Elliptic and Hyperbolic Analysis" which considers the spherical law of cosines as the fundamental theorem of the sphere, and proceeds to analogues for the ellipsoid of revolution, general ellipsoid, and equilateral hyperboloids of one and two sheets, where he provides the hyperbolic law of cosines.
In 1900 Macfarlane published "Hyperbolic Quaternions" with the Royal Society in Edinburgh, and included a sheet of nine figures, two of which display conjugate hyperbolas. Having been stung in the Great Vector Debate over the non-associativity of his Algebra of Physics, he restored associativity by reverting to biquaternions, an algebra used by students of Hamilton since 1853.
Works
1879: Principles of the Algebra of Logic from Internet Archive.
1885: Physical Arithmetic from Internet Archive.
1887: The Logical Form of Geometrical Theorems from Annals of Mathematics 3: 154,5.
1894: Papers on Space Analysis.
1898: Book Review: “La Mathematique; philosophie et enseignement” by C.A. Laissant in Science 8: 51–3.
1899: The Pythagorean Theorem from Science 34: 181,2.
1899: The Fundamental Principles of Algebra from Science 10: 345–364.
1906: Vector Analysis and Quaternions.
1910: Unification and Development of the Principles of the Algebra of Space from Bulletin of the Quaternion Society.
1911: Book Review: Life and Scientific Work of P.G. Tait by C.G. Knott from Science 34: 565,6.
1912: A System of Notation for Vector-Analysis; with a Discussion of the Underlying Principles from Bulletin of the Quaternion Society.
1913: On Vector-Analysis as Generalized Algebra, address to 5th International Congress of Mathematicians, Cambridge, via Internet Archive
Publications of Alexander Macfarlane from Bulletin of the Quaternion Society, 1913
References
Robert de Boer (2009) Biography of Alexander Macfarlane from WebCite.
Robert de Boer (2009) Alexander Macfarlane in Chicago, 1893 from WebCite
Electric Scotland historical biography
Knott, Cargill Gilston (1913) Alexander Macfarlane, Nature.
Macfarlane papers at the University of Texas
External links
1851 births
1913 deaths
People from Blairgowrie and Rattray
Scottish logicians
Scottish philosophers
Scottish physicists
19th-century Scottish mathematicians
20th-century Scottish mathematicians
Academics of the University of Edinburgh
Academics of the University of St Andrews
Alumni of the University of Edinburgh
Fellows of the Royal Society of Edinburgh
Lehigh University faculty
People from Chatham-Kent
Relativity theorists
Scottish expatriates in the United States
Scottish emigrants to Canada
University of Texas at Austin faculty
British geometers | Alexander Macfarlane | Physics | 1,400 |
4,692,441 | https://en.wikipedia.org/wiki/Shear%20pin | A shear pin is a mechanical component designed to allow a specific outcome to occur once a predetermined force is applied. It can function either as a safeguard designed to break in order to protect other parts, or as a conditional operator that will not allow a mechanical device to operate until the correct force is applied.
As safeguards
In the role of a mechanical safeguard, a shear pin is a safety device designed to shear in the case of a mechanical overload, preventing other, more expensive or less-easily replaced parts from being damaged. As a mechanical sacrificial part, it is analogous to an electric fuse.
They are most commonly used in drive trains, such as a snow blower's auger or the propellers attached to marine engines.
Another use is in pushback bars used for large aircraft. In this device, shear pins are frequently used to connect the "head" of the towbar – the portion that attaches to the aircraft – to the main shaft of the towbar. In this way, the failure of the shear pin will physically separate the aircraft and the tractor. The design may be such that the shear pin will have several different causes of failure – towbar rotation about its long axis, sudden braking or acceleration, excessive steering force, etc. – all of which could otherwise be extremely damaging to the aircraft.
As conditional operators
In the role of a conditional operator, a shear pin is used to prevent a mechanical device from operating before the criteria for operation are met. A shear pin gives a distinct threshold for the force required for operation. It is very cheap and easy to produce, delivering high reliability and a predictable tolerance. Shear pins are almost maintenance-free and can remain ready for operation for years with little to no decrease in reliability, but they are only useful for a single operating cycle: after each operation they have to be replaced. A very simple example is the plastic or wire loop affixed to the handles of common fire extinguishers. Its presence prevents accidental discharge by allowing the handle to be depressed only once a high initial force is applied; by breaking, it allows the handle to subsequently be depressed more easily.
Many designs take advantage of the maintenance-free state of constant readiness. For example, a hydraulic damper protecting a structure from earthquake damage could be secured with a shear pin. During normal conditions the system would be completely rigid, but when acted upon by the force of an earthquake the shear pin would break and the hydraulic damping system would operate.
Their high reliability and low cost make them very popular for use in weapons. A typical example is using shear pins in an explosive device. Here, a shear pin can hold a striker pin in place, preventing the striker pin from striking an initiator (primer) unless the correct force is applied. That force can be the acceleration of a rifle grenade being launched. The force would snap the shear pin, allowing the striker pin to move backwards onto a primer, which in turn ignites a pyrotechnic delay composition for auto destruction. In this use shear pins prevent the striker pin from hitting the primer during handling or if the grenade is dropped by accident. Additionally, shear pins are frequently used in anti-tank mine fuzes to prevent them from being triggered by much lighter, non-target vehicles such as motorcycles. Typically, the shear pin in an anti-tank mine is designed to snap (and release the spring-loaded firing pin) when a weight in excess of 1500 kilograms is applied to the pressure plate.
Material
A shear pin could potentially be made from any material although metal is the most common.
When making a metal object for a mechanical application, an alloy and tempering is usually selected to make the construction resistant to damage. This can for example be achieved by giving the material a high degree of elasticity so that, like a spring, the metal returns to its original shape after being deformed by an external force. A shear pin however is often tempered to make the metal brittle, so that it breaks or shatters rather than bends when the required force is applied.
The material of a shear pin is selected and treated so that it is relatively resistant to fatigue. That is, when subjected to small forces, each one insufficient to break the pin, the pin does not retain damage. If material fatigue were to weaken a shear pin, the pin could potentially be broken by a force smaller than the original threshold force, causing the mechanism to operate unintentionally or a safety shear pin to break during normal operation of the machinery it protects.
Construction
The pin itself may be as simple as a metal rod inserted into a channel drilled through two moving parts, locking them in place as long as the pin is intact.
It may also be a plain metal rod inserted through a hub and axle; the diameter of the rod and the alloy and tempering of the metal are all carefully chosen to allow the pin to shear only when the predetermined threshold force or shock is reached.
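A minimal sketch of that sizing calculation, assuming a cylindrical pin loaded in pure shear; the 6 mm diameter and 200 MPa ultimate shear strength below are illustrative assumptions rather than values from this article, and real designs add allowances for stress concentration, temperature and fatigue:

```python
import math

def shear_failure_force(diameter_m: float, ultimate_shear_strength_pa: float,
                        shear_planes: int = 1) -> float:
    """Estimate the force (in newtons) at which a cylindrical pin shears."""
    area = math.pi * diameter_m ** 2 / 4.0  # cross-sectional area of the pin, m^2
    return shear_planes * area * ultimate_shear_strength_pa

# Illustrative example: a 6 mm pin with a 200 MPa ultimate shear strength,
# loaded in single shear, fails at roughly 5.7 kN.
print(f"{shear_failure_force(0.006, 200e6) / 1000:.1f} kN")
```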
A split pin (cotter pin in American usage) can also be used as a shear pin.
See also
Bolted joint
Torque limiter
External links
Hardware (mechanical)
Mechanical engineering
Safety equipment
Torque | Shear pin | Physics,Technology,Engineering | 1,048 |
1,853,790 | https://en.wikipedia.org/wiki/Suicide%20bridge | A suicide bridge is a bridge used frequently by people to end their lives, most typically by jumping off and into the water or ground below. A fall from the height of a tall bridge into water may be fatal, although some people have survived jumps from high bridges such as the Golden Gate Bridge. However, death from the impact itself is far from certain; numerous studies report minimally injured persons who nevertheless died from drowning.
To reach such locations, those with the intention of ending their lives must often walk long distances to reach the point where they finally decide to jump. For example, some individuals have traveled over the San Francisco–Oakland Bay Bridge by car in order to jump from the Golden Gate Bridge.
Prevention
Suicide prevention advocates believe that suicide by bridge is more likely to be impulsive than other means, and that barriers can have a significant effect on reducing the incidence of suicides by bridge. One study showed that installing barriers on the Duke Ellington Bridge in Washington, D.C.—which has a high incidence of suicide—did not cause an increase of suicides at the nearby Taft Bridge. A similar result was seen when barriers were erected on the popular suicide bridge the Clifton Suspension Bridge, in the United Kingdom. Families affected and groups that help the mentally ill have lobbied governments to erect similar barriers. One such barrier is the Luminous Veil on the Prince Edward Viaduct in Toronto, Canada, once considered North America's second deadliest bridge, with over 400 jumps on record.
Special telephones with connections to crisis hotlines are sometimes installed on bridges.
Bridges
Australia
The Sydney Harbour Bridge, the Mooney Mooney Bridge on the Central Coast (New South Wales), and the Westgate Bridge in Melbourne, Australia and the Story Bridge in Brisbane are considered suicide bridges.
Sydney Harbour Bridge has a suicide prevention barrier. In February 2009, following the murder of a four-year-old girl who was thrown off the bridge by her father, the first stage of a temporary suicide barrier was erected on Westgate Bridge, constructed of concrete crash barriers topped with a welded mesh fence. The permanent barrier has now been completed throughout the span of the bridge. The barriers cost AU$20 million and have been reported to have reduced suicide rates on the Westgate by 85%.
Suicide prevention barriers were installed on the Story Bridge in 2013; a three-metre-high barrier runs the full length of both sides of the bridge.
Canada
There are a number of suicide bridges in the Metro Vancouver area, the most frequented being the Lions Gate Bridge, which saw 324 suicidal incidents, including 78 jumps from 2006 to 2017.
The High Level Bridge in Edmonton, Alberta, is considered a suicide bridge. It is unknown how many deaths have occurred at the bridge, but there have been at least 25 in total, with 10 being from 2012–2013. There have also been many failed attempts at the bridge. A suicide prevention barrier has been installed along with signage and support phone lines.
The Jacques Cartier Bridge in Montreal, Quebec, is considered a suicide bridge. In 2004, a suicide prevention barrier was installed. Until then the bridge saw an average of 10 suicides a year.
The Prince Edward Viaduct, commonly referred to as the Bloor Viaduct, in Toronto, Ontario, was considered a suicide bridge. With nearly 500 suicides by 2003, the Viaduct was ranked as the second most fatal standing structure in North America, after the Golden Gate Bridge in San Francisco. Suicides dropped to zero after a barrier was completed in 2003.
The Lethbridge Viaduct in Lethbridge, Alberta, also known as the High Level Bridge, is considered a suicide bridge. It is unknown how many deaths have occurred at the bridge since its opening in 1909. Suicide prevention signage has been installed at the entrance to the bridge, however no further prevention program is in development.
The Angus L. Macdonald Bridge in Halifax, Nova Scotia, has been used for suicide attempts. As of 2010, safety barriers have been installed the full length of the pedestrian walkway.
The Reversing Falls Bridge in Saint John, New Brunswick, has often been used in suicide attempts. The city has made efforts to install barriers, but has struggled to secure provincial funds to do so.
The Burgoyne Bridge in St. Catharines, Ontario, has had several suicides. In 2020, stainless steel netting was installed as a suicide prevention measure.
Czech Republic
About 300 people have jumped to their death from the Nusle Bridge in Prague, Czech Republic. Barriers almost 3 metres high were erected there in 1997 with the aim of preventing further jumps. In 2007, the fencing was topped with polished metal to make it impossible to climb.
A bridge in Kladno has also been described as a suicide bridge and a "second Nusle". Between 2013 and 2018, 23 suicides were attempted there. Because the drop to the ground is comparatively short, attempts are not always successful; however, the bridge is easy to access and there is no suicide barrier.
New Zealand
The Auckland Harbour Bridge and Grafton Bridge in Auckland have been known for suicides and suicide attempts, with multiple attempts to install suicide prevention barriers in recent decades.
South Africa
88 people have jumped to their death from the Van Stadens Bridge, near Port Elizabeth, Eastern Cape, South Africa. A barrier has since been installed.
South Korea
A frequently used suicide bridge in Seoul is the Mapo Bridge, locally known as "Suicide Bridge" and "The Bridge of Death". South Korean authorities have tried to counter this by calling the bridge "The Bridge of Life" and posting reassuring messages on the ledges.
United Kingdom
The Clifton Suspension Bridge in Bristol was designed by Isambard Kingdom Brunel and opened in 1864. Since then, it has gained a reputation as a suicide bridge, with over 500 deaths from jumping. It has plaques that advertise the telephone number of the Samaritans. In 1998, the bridge was fitted with suicide barriers, which halved the suicide rate in the years following. It spans the River Avon. CCTV is also installed on the bridge.
A notable suicide bridge in London is the Hornsey Lane Bridge, which passes over Archway Road and connects the Highgate and Crouch End areas. The bridge provides views of notable landmarks such as St. Paul's Cathedral, The Gherkin and The Shard. It was the venue for mental illness campaign group Mad Pride's inaugural vigil in 2000 and was the subject of Johnny Burke's 2006 film The Bridge. When, at the end of 2010, three men in three weeks died by suicide from jumping from the bridge, a campaign was set up by local residents for better anti-suicide measures to be put in place. In October 2015 Islington Council and Haringey Council approved Transport for London's plans for the construction of a safety fence. In summer 2019, Haringey Council installed additional measures to prevent suicide from the bridge in the form of a 3m high fence.
At the Humber Bridge in Hull, more than 200 incidents of people jumping or falling from the bridge have taken place since its opening in 1981. Between 1990 and February 2001 the Humber Rescue Team was called 64 times to deal with people falling or jumping off the bridge.
Overtoun Bridge near Dumbarton in West Dunbartonshire has been publicised due to reports of dogs jumping or falling from the bridge.
United States
The Golden Gate Bridge in San Francisco has the second highest number of suicides in the world (after the Nanjing Yangtze River Bridge) with around 1,600 bodies having been recovered as of 2012, and the assumption of many more unconfirmed deaths. In 2004, documentary filmmaker Eric Steel set off controversy by revealing that he had tricked the bridge committee into allowing him to film the Golden Gate for months and had captured 23 suicides on film for his documentary The Bridge (2006). In March 2005, San Francisco supervisor Tom Ammiano proposed funding a study on erecting a suicide barrier on the bridge. In June 2014, a suicide barrier was approved for the Golden Gate Bridge. Barrier construction began in 2017 and was expected to be completed by 2021.
In Seattle, Washington, more than 230 people have died by suicide from the George Washington Memorial Bridge, making it the second deadliest suicide bridge in the United States. In a span of a decade ending in January 2007, nearly 50 people jumped to their deaths, nine in 2006. At a cost of $5,000,000, a suicide barrier was completed on February 16, 2011.
The San Diego-Coronado Bridge is the third-deadliest suicide bridge in the United States, followed by the Sunshine Skyway Bridge in St. Petersburg, Florida.
The Cold Spring Canyon Arch Bridge along State Route 154 in Santa Barbara County, California has seen 55 jumps by suicide since opening in 1964, including 7 in 2009. A proposal to install a barrier on this bridge in 2005 led to the completion of a safety barrier/fence in March 2012.
Colorado Street Bridge in Pasadena, California, has also seen barriers erected.
During the mid-20th century in Philadelphia, Pennsylvania, the Wissahickon Memorial Bridge had a policeman stationed after it opened because of the numerous suicides taking place.
In recent years, the Eads Bridge, connecting St. Louis, Missouri and East St. Louis, Illinois, has seen several suicides, approximately 18 since its re-opening.
Cornell University has had a number of suicides by jumping from the bridges over the gorges on campus from the 1970s to 2010. Between 1991 and 1994, five students died by suicide in the gorges.
New River Gorge Bridge in Fayetteville, West Virginia
Suicide Bridge Road is located just off Maryland Route 14 near the town of Secretary, Maryland.
The Chesapeake Bay Bridge in Maryland.
The George Washington Bridge in New York City.
The Natchez Trace Parkway Bridge in Williamson County, Tennessee
The All-America Bridge in Akron, Ohio.
The Washington Avenue Bridge in Minneapolis, Minnesota
See also
Copycat suicide
List of suicide sites
Lover's Leap
Aokigahara suicide forest
References
External links
(A series of articles about suicides on the Golden Gate Bridge.)
(A bridge suicide jump survivor invents a prevention device.)
(Detailed documentation of Skyway Bridge suicides in Florida.)
Bridges
Suicide by jumping | Suicide bridge | Engineering | 2,078 |
25,859,047 | https://en.wikipedia.org/wiki/Reflector%20sight | A reflector sight or reflex sight is an optical sight that allows the user to look through a partially reflecting glass element and see an illuminated projection of an aiming point or some other image superimposed on the field of view. These sights work on the simple optical principle that anything at the focus of a lens or curved mirror (such as an illuminated reticle) will appear to be sitting in front of the viewer at infinity. Reflector sights employ some form of "reflector" to allow the viewer to see the infinity image and the field of view at the same time, either by bouncing the image created by a lens off a slanted glass plate, or by using a mostly clear curved glass reflector that images the reticle while the viewer looks through the reflector. Since the reticle is at infinity, it stays in alignment with the device to which the sight is attached regardless of the viewer's eye position, removing most of the parallax and other sighting errors found in simple sighting devices.
Since their invention in 1900, reflector sights have come to be used as gun sights on various weapons. They were used on fighter aircraft, in a limited capacity in World War I, widely used in World War II, and still used as the base component in many types of modern head-up displays. They have been used in other types of (usually large) weapons as well, such as anti-aircraft gun sights, anti-tank gun sights, and any other role where the operator had to engage fast moving targets over a wide field of view, and the sight itself could be supplied with sufficient electrical power to function. There was some limited use of the sight on small arms after World War II, but the sight came into widespread use during the late 1970s with the invention of the red dot sight. This sight uses a red light-emitting diode (LED) as its illumination source, making a durable, dependable sight with an extremely long illumination run time.
Other applications of reflector sights include sights on surveying equipment, optical telescope pointing aids, and camera viewfinders.
Design
Reflector sights work by using a lens or an image-forming curved mirror with a luminous or reflective overlay image or reticle at its focus, creating an optical collimator that produces a virtual image of that reticle. The image is reflected off some form of angled beam splitter or the partially silvered collimating curved mirror itself so that the observer (looking through the beam splitter or mirror) will see the image at the focus of the collimating optics superimposed in the sight's field of view in focus at ranges up to infinity. Since the optical collimator produces a reticle image made up of collimated light, light that is nearly parallel, the light making up that image is theoretically perfectly parallel with the axis of the device or gun barrel it is aligned with, i.e. with no parallax at infinity. The collimated reticle image can also be seen at any eye position in the cylindrical volume of collimated light created by the sight behind the optical window. But this also means, for targets closer than infinity, sighting towards the edge of the optical window can make the reticle move in relation to the target since the observer is sighting down a parallel light bundle at the edge. Eye movement perpendicular to the device's optical axis will make the reticle image move in exact relationship to eye position in the cylindrical column of light created by the collimating optics.
A common type (used in applications such as aircraft gun sights) uses a collimating lens and a beam splitter. This type tends to be bulky since it requires at least two optical components, the lens and the beam splitter/glass plate. The reticle collimation optics are situated at 90° to the optical path making lighting difficult, usually needing additional electric illumination, condensing lenses, etc. A more compact type replaces the lens/beam splitter configuration with a half silvered or dichroic curved collimating mirror set at an angle that performs both tasks of focusing and combining the image of an offset reticle. This type is most often seen as the red dot type used on small arms. It is also possible to place the reticle between the viewer and the curved mirror at the mirror's focus. The reticle itself is too close to the eye to be in focus but the curved mirror presents the viewer with an image of the reticle at infinity. This type was invented by Dutch optical engineer Lieuwe van Albada in 1932, originally as a camera viewfinder, and was also used as a gunsight on World War II bazookas: the US M9 and M9A1 "Bazooka" featured the D7161556 folding "Reflecting Sight Assembly".
The viewing portion of a reflector sight does not use any refractive optical elements; it is simply a projected reticle bounced off a beam splitter or curved mirror right into the user's eye. This gives it the defining characteristic of not needing considerable experience and skill to use, as opposed to simple mechanical sights such as iron sights. A reflector sight also does not have the field of view and eye relief problems of sights based on optical telescopes: depending on design constraints their field of view is the user's naked eye field of view, and their non-focusing collimated nature means they do not have the optical telescope's constraint of eye relief. Reflector sights can be combined with telescopes, usually by placing the telescope directly behind the sight so it can view the projected reticle, creating a telescopic sight, but this re-introduces the problems of narrow field of view and limited eye relief. The primary drawback of reflector sights is that they need some way to illuminate the reticle to function. Reticles illuminated by ambient light are hard to use in low light situations, and sights with electrical illumination stop functioning altogether if that system fails.
History
The idea of a reflector sight originated in 1900 with Irish optical designer and telescope maker Howard Grubb in patent No.12108. Grubb conceived of his "Gun Sight for large and small Ordnance" as a better alternative to the difficult-to-use iron sight while avoiding the telescopic sight's limited field of view, greater apparent target speed, parallax errors, and the danger of keeping the eye against an eye stop. In the 1901 Scientific Transactions of the Royal Dublin Society he published a description of his invention.
It was noted soon after its invention that the sight could be a good alternative to iron sights and also had uses in surveying and measuring equipment. The reflector sight was first used on German fighter aircraft in 1918 and widely adopted on all kinds of fighter and bomber aircraft in the 1930s. By World War II the reflector sight was being used on many types of weapons besides aircraft, including anti-aircraft guns, naval guns, anti-tank weapons, and many other weapons where the user needed the simplicity and quick target acquisition nature of the sight. Through its development in the 1930s and into World War II the sight was also being referred to in some applications by the abbreviation "reflex sight".
Weapon sights
Reflector sights were invented as an improved gun-sight, and since their invention they have been adapted to many types of weapons. When used with different types of guns, reflector sights are considered an improvement over simple iron sights (sights composed of two spaced metal aiming points that have to be aligned). Iron sights require considerable experience and skill from the user, who has to hold a proper eye position and focus exclusively on the front sight, keeping it centered on the (unfocused) rear sight, while keeping the whole centered on a target at different distances, requiring alignment of all three planes of focus to achieve a hit. The reflector sight's single, parallax-free virtual image, in focus with the target, removes this aiming problem, helping poor, average, and expert shooters alike.
Since the collimated image produced by the sight is only truly parallax-free at infinity, the sight has an error circle equal to the diameter of the collimating optics for any target at a finite distance. Depending on the eye position behind the sight and the closeness of the target, this induces some aiming error. For larger targets at a distance (given the non-magnifying, quick target acquisition nature of the sight) this aiming error is considered trivial. On small arms aimed at close targets this is compensated for by keeping the reticle in the middle of the optical window (sighting down its optical axis). Some manufacturers of small arms sights also make models with the optical collimator set at a finite distance. This gives the sight parallax, due to eye movement, the size of the optical window at close range, which diminishes to a minimal size at the set distance (somewhere around the desired target range).
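A small sketch of how that fixed error circle translates into angular aiming error; the 30 mm window diameter is an illustrative assumption, and the calculation covers only the infinity-collimated case described above:

```python
import math

def error_circle_moa(window_diameter_m: float, target_range_m: float) -> float:
    """Angular size of the worst-case error circle of an infinity-collimated
    reflector sight: its linear size stays equal to the window diameter,
    so its angular size shrinks as the target range increases."""
    return math.degrees(window_diameter_m / target_range_m) * 60.0  # minutes of arc

for rng in (10, 25, 50, 100):  # metres
    print(f"{rng:>4} m: {error_circle_moa(0.030, rng):4.1f} moa")
```

At longer ranges the error circle quickly becomes smaller than typical dot reticles, which is why the error is treated as trivial for distant targets.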
Compared to standard telescopic sights, a reflector sight can be held at any distance from the eye (does not require a designed eye relief), and at almost any angle, without distorting the image of the target or reticle. They are often used with both eyes open (the brain will tend to automatically superimpose the illuminated reticle image coming from the dominant eye onto the other eye's unobstructed view), giving the shooter normal depth perception and full field of view. Since reflector sights are not dependent on eye relief, they can theoretically be placed in any mechanically-convenient mounting position on a weapon.
Aircraft
The earliest record of the reflector sight being used with fighter aircraft was in 1918. The optical firm of Optische Anstalt Oigee of Berlin, working from the Grubb patents, developed two versions of what came to be known as the Oigee Reflector Sight. Both used a 45 degree angle glass beam splitter and electrical illumination and were used to aim the plane's machine guns. One version was used in operational trials on the biplane Albatros D.Va and triplane Fokker Dr.1 fighters. There was some interest in this sight after World War I but reflector sights in general were not widely adopted for fighter and bomber aircraft until the 1930s, first by the French, then by most other major airforces. These sights were not only used for aiming fighter aircraft; they were also used with aircraft defensive guns and in bombsights.
Reflector sights as aircraft gun-sights have many advantages. The pilot/gunner need not position their head to align the sight line precisely as they did in two-point mechanical sights; head position is limited only by the optics in the collimator, mostly by the diameter of the collimator lens. The sight does not interfere with the overall view, particularly when the collimator light is turned off. Both eyes may be used simultaneously for sighting.
The optical nature of the reflector sight meant it was possible to feed other information into the field of view, such as modifications of the aiming point due to deflection determined by input from a gyroscope. 1939 saw the development by the British of the first of these gyro gunsights, reflector sights adjusted by gyroscope for the aircraft's speed and rate of turn, enabling the display of a lead-adjusted sighting reticle that lagged the actual "boresight" of the weapon(s), allowing the boresight to lead the target in a turn by the proper amount for an effective strike.
As reflector sight designs advanced after World War II, giving the pilot more and more information, they eventually evolved into the head-up display (HUD). The illuminated reticle was eventually replaced by a video screen at the focus of the collimating optics that not only gave a sighting point and information from a lead-finding computer and radar, but also various aircraft indicators (such as an artificial horizon, compass, altitude and airspeed indicators), facilitating the visual tracking of targets or the transition from instrument to visual methods during landings.
Firearms
The idea of attaching a reflector sight to a firearm has been around since its invention in 1900. Soon after World War II, models appeared for rifles and shotguns including the Nydar shotgun sight (1945), which used a curved semi-reflective mirror to reflect an ambient lit reticle, and the Giese electric gunsight (1947), which had a battery-powered illuminated reticle. Later types included the Qwik-Point (1970) and the Thompson Insta-Sight. Both were beam-splitter type reflector sights that used ambient light: illuminating a green crosshair in the Insta-Sight, and a red plastic rod "light pipe" that produced a red aiming spot reticle in the Qwik-Point.
The mid- to late 1970s saw the introduction of what are usually referred to as red dot sights, a type that gives the user a simple bright red dot as an aiming point. The typical configuration for this sight is a compact curved mirror reflector design with a red light-emitting diode (LED) at its focus. Using an LED as a reticle is an innovation that greatly improves the reliability and general usefulness of the sight: there is no need for other optical elements to focus light behind a reticle; the mirror can use a dichroic coating to reflect just the red spectrum, passing through most other light; and the LED itself is solid state and consumes very little power, allowing battery-powered sights to run for hundreds and even tens of thousands of hours.
Reflector sights for military firearms (usually referred to as reflex sights) took a long time to be adopted. The US House Committee on Armed Services commented as far back as 1975 on the suitability of reflex sights for the M16 rifle, but the US military did not widely introduce reflector sights until the early 2000s with the Aimpoint CompM2 red dot sight, designated the "M68 Close Combat Optic".
Reticle types
Many reticle illumination and pattern options are available. Common light sources used in firearm reflector sights include battery powered lights, fiber optic light collectors, and even tritium capsules. Some sights are specifically designed to be visible when viewed through night vision devices. The color of a sight reticle is often red or amber for visibility against most backgrounds. Some sights use a chevron or triangular pattern instead, to aid precision aiming and range estimation, and still others provide selectable patterns.
Sights that use dot reticles are almost invariably measured in minutes of arc, sometimes called "minutes of angle" or "moa". Moa is a convenient measure for shooters using Imperial or US customary units, since 1 moa subtends approximately one inch at a distance of 100 yards, which makes moa a convenient unit to use in ballistics calculations. A 5 moa (1.5 milliradian) dot is small enough not to obscure most targets, and large enough to quickly acquire a proper "sight picture". For many types of action shooting, a larger dot has traditionally been preferred; 7, 10, 15 or even 20 moa (2, 3, 4.5 or 6 mil) have been used; often these will be combined with horizontal and/or vertical lines to provide a level reference.
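As a quick check of the one-inch figure (a small-angle calculation, not part of the original text): 100 yards is 3,600 inches and one minute of arc is 1/60 of a degree, so the subtension is

$$ s \approx r\,\theta = 3600\ \text{in} \times \frac{1}{60} \cdot \frac{\pi}{180} \approx 1.047\ \text{in}, $$

commonly rounded to one inch; a 5 moa dot therefore covers roughly five inches at 100 yards.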
Most sights have either active or passive adjustments for the reticle brightness, which help the shooter adapt to different lighting conditions. A very dim reticle will help prevent loss of night vision in low-light conditions, while a brighter reticle will display more clearly in full sunlight.
Modern optical reflector sights designed for firearms and other uses fall into two housing-configurations: "tubed" and "open".
Tube sights look similar to standard telescopic sights, with a cylindrical tube containing the optics. Many tube sights offer the option of interchangeable filters (such as polarizing or haze-reducing filters), glare-reducing sunshades, and conveniently protective "flip-up" lens covers.
Open sights (also known as "mini reflex sights" and "mini red dots") take advantage of the fact that the reflector sight's only optical element, the optical window, does not need any housing at all. This configuration consists of a base with just the necessary reflective surface for collimating the reticle mounted on it. Due to their diminished profile, open sights do not usually accommodate filters and other accessory options typically supported by tube designs.
Other uses
Reflector sights have been used over the years in nautical navigation devices and surveying equipment. Albada type sights were used on early large format cameras, "Point and shoot" type cameras, and on simple disposable cameras.
These sights are also used on astronomical telescopes as finderscopes, to help aim the telescope at the desired object. There are many commercial models, the first of which was the Telrad, invented by amateur astronomer Steve Kufeld in the late 1970s. Others are now available from companies such as Apogee, Celestron, Photon, Rigel, and Televue.
Reflector sights are also used in the entertainment industry in productions such as live theater on "Follow Spot" spotlights. Sights such as Telrad's adapted for use and the purpose built Spot Dot allow the spotlight operator to aim the light without turning it on.
Similar types
Collimator sights (also called collimating or "occluded eye gunsight" (OEG)) are simply the optical collimator focusing a reticle without any optical window. The viewer cannot see through them and only sees an image of the reticle. They are used either with both eyes open while one looks into the sight, with one eye open and moving the head to alternately see the sight and then at the target, or using one eye to partially see the sight and target at the same time. The reticle is illuminated by an electric, radioluminescent or passive ambient light source. The Armson OEG and the Normark Corp. Singlepoint are two examples of commercially available ambient lit collimator sights. These sights have the advantage of requiring less illumination for the reticle for the same level of usability, due to the high contrast black background behind the reticle. For this reason occluded eye gunsights were more practical for use on small arms before low power consumption illumination sources such as LEDs became commonplace.
Holographic weapon sights are similar in layout to reflector sights but do not use a projected reticle system. Instead, a representative reticle is recorded in three-dimensional space onto holographic film at the time of manufacture. This image is part of the optical viewing window. The recorded hologram is illuminated by a collimated laser built into the sight. The sight can be adjusted for range and windage by simply tilting or pivoting the optical window.
See also
Fire-control system
Collimator sight
Holographic weapon sight
Red dot sight
Red dot magnifier
Prism sight, a type of telescopic sight
Laser sight
Glossary of firearms terminology
References
Further reading
External links
Article on the WWII Maxon M45 machine gun mount with section on the Navy Mark 9 reflector sight
May-June, 2007 CBS Interactive Business Network article: Seeing red: illuminated reticle sights
Firearm sights
Optical devices | Reflector sight | Materials_science,Engineering | 3,974 |
34,307,312 | https://en.wikipedia.org/wiki/Isoflavonoid%20biosynthesis | The biosynthesis of isoflavonoids involves several enzymes; these include:
Liquiritigenin,NADPH:oxygen oxidoreductase (hydroxylating, aryl migration), also known as Isoflavonoid synthase, is an enzyme that uses liquiritigenin (a flavanone), O2, NADPH and H+ to produce 2,7,4'-trihydroxyisoflavanone (an isoflavonoid), H2O and NADP+. The overall reaction is written out after this list.
Biochanin-A reductase
Flavone synthase
2'-hydroxydaidzein reductase
2-hydroxyisoflavanone dehydratase
2-hydroxyisoflavanone synthase
Isoflavone 4'-O-methyltransferase
Isoflavone 7-O-methyltransferase
Isoflavone 2'-hydroxylase
Isoflavone 3'-hydroxylase
Isoflavone-7-O-beta-glucoside 6"-O-malonyltransferase
Isoflavone 7-O-glucosyltransferase
4'-methoxyisoflavone 2'-hydroxylase
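Written out as an overall reaction, this simply restates the stoichiometry given for the first enzyme above:

$$ \text{liquiritigenin} + \text{O}_2 + \text{NADPH} + \text{H}^+ \longrightarrow 2,7,4'\text{-trihydroxyisoflavanone} + \text{NADP}^+ + \text{H}_2\text{O} $$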
Pterocarpans biosynthesis
3,9-dihydroxypterocarpan 6a-monooxygenase
Glyceollin synthase
Pterocarpin synthase
See also
Flavonoid biosynthesis
References
External links
http://www.genome.jp/kegg/pathway/map/map00943.html
Isoflavonoids metabolism
Biosynthesis | Isoflavonoid biosynthesis | Chemistry | 359 |
16,130,469 | https://en.wikipedia.org/wiki/Ute%20meridian | The Ute meridian, also known as the Grand River meridian, was established in 1880 and is a principal meridian of Colorado. The initial point lies inside the boundaries of Grand Junction Regional Airport, Grand Junction, Colorado.
See also
List of principal and guide meridians and base lines of the United States
References
External links
Surveying
Named meridians
Geography of Colorado
Meridians and base lines of the United States | Ute meridian | Engineering | 80 |
54,431,682 | https://en.wikipedia.org/wiki/Relational%20constructivism | Relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads and maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception (i.e. self-referentially operating cognition). Therefore, humans are not able to come to objective conclusions about the world.
In spite of the subjectivity of human constructions of reality, relational constructivism focusses on the relational conditions applying to human perceptual processes. According to Björn Kraus:
It is substantial for relational constructivism that it basically originates from an epistemological point of view, thus from the subject and its construction processes. Coming from this perspective it then focusses on the (not only social, but also material) relations under which these cognitive construction processes are performed. Consequently, it's not only about social construction processes, but about cognitive construction processes performed under certain relational conditions.
Lifeworld and life conditions as relational constructions
In the course of recent constructivist discourses, a discussion about the term lifeworld took place. Björn Kraus' relational-constructivist version of the lifeworld term considers its phenomenological roots (Husserl and Schütz), but expands it within the range of epistemological constructivist theory building.
In consequence, a new approach is created, which focusses on the individual perspective upon the lifeworld term and takes account of social and material environmental conditions and their relevance, as emphasized, for example, by Jürgen Habermas. Central here is Kraus' basic assumption that cognitive development depends on two determining factors. A person's own reality is their subjective construct, but this construct—in spite of all subjectivity—is not random: because a person is still linked to their environment, their own reality is influenced by the conditions of this environment (German: Grundsätzliche Doppelbindung menschlicher Strukturentwicklung, roughly "the fundamental double bind of human structural development").
Building on this point of view, a separation of individual perception and the social and material environmental conditions is made possible. Kraus accordingly picks up the term lifeworld, adds the term "life conditions" (German: Lebenslage; originally introduced by the philosophers Otto Neurath in 1931 and Gerhard Weisser in 1956) and contrasts the two terms with each other.
By this means, lifeworld describes a person's subjectively experienced world, whereas life conditions describe the person's actual circumstances in life. Accordingly, it could be said that a person's lifeworld is built depending on their particular life conditions. More precisely, the life conditions include the material and immaterial living circumstances, such as employment situation, availability of material resources, housing conditions, social environment, as well as the person's physical condition. The lifeworld, in contrast, describes the subjective perception of these conditions.
Kraus uses the epistemological distinction between subjective reality and objective reality. Thus, a person's lifeworld correlates with the person's life conditions in the same way that subjective reality correlates with objective reality. One is the insurmountable, subjective construct built depending on the other one's conditions.
Kraus defined lifeworld and life conditions as follows: Life conditions mean a person's material and immaterial circumstances of life. Lifeworld means a person's subjective construction of reality, which he or she forms under the condition of his or her life circumstances.
This contrasting comparison provides a conceptual specification, enabling in the first step the distinction between a subjectively experienced world and its material and social conditions and allowing in the second step to focus on these conditions' relevance for the subjective construction of reality. With this in mind, Manfred Ferdinand, who is reviewing the lifeworld terms used by Alfred Schütz, Edmund Husserl, Björn Kraus and Ludwig Wittgenstein, concludes: "Kraus' thoughts on a constructivist comprehension of lifeworlds contours the integration of micro-, meso- and macroscopic approaches, as it is demanded by Invernizzi and Butterwege: This integration is not only necessary in order to relate the subjective perspectives and the objective frame conditions to each other but also because the objective frame conditions obtain their relevance for the subjective lifeworlds not before they are perceived and assessed."
A relational constructivist theory of power: Instructive vs. destructive power
Björn Kraus deals with the epistemological perspective upon power regarding the question about possibilities of interpersonal influence by developing a special form of constructivism ("Machtanalytischer Konstruktivismus").
Instead of focussing on the valuation and distribution of power, he asks what the term can describe at all. Coming from Max Weber's definition of power, he realizes that the concept of power has to be split into "instructive power" and "destructive power". More precisely, instructive power means the chance to determine the actions and thoughts of another person, whereas destructive power means the chance to diminish the opportunities of another person.
Kraus defined "instructive power" and "destructive power" as follows:"Instructive power" means the chance to determine a human's thinking or behaviour. (Instructive power as chance for instructive interaction is dependent on the instructed person's own will, which ultimately can refuse instructive power.) "Destructive power" means the chance to restrict a human's possibilities. (Destructive power as chance for destructive interaction is independent of the instructed person's own will, which can't refuse destructive power.)
How significant this distinction really is becomes evident by looking at the possibilities of rejecting power attempts: rejecting instructive power is possible – rejecting destructive power is not. By using this distinction, proportions of power can be analyzed in a more sophisticated way, helping to sufficiently reflect on matters of responsibility. This perspective permits one to get over an "either-or-position" (either there is power, or there isn't), which is common especially in epistemological discourses about power theories, and to introduce the possibility of an "as well as-position".
According to Wolf Ritscher, it is Björn Kraus who "has reflected on the topic of power as a substantial aspect of social existence in a constructivist manner and has shown that constructivism can also be used in terms of social theory".
The systems term in relational constructivism
It is central to relational constructivism that social conditions cannot be recognized as allegedly objective, but are described from an observer position in social relationships on the basis of determined criteria. In this sense, for example, power is not seen as objectively recognizable, but as a relational phenomenon. Its description depends on the observer's point of view. As with Weber, the definition of instructive power and destructive power focuses on the "opportunity within a social relation to put through one's own will, also against reluctance". Here, the category of power is not conceived as existing per se but rather as a social phenomenon. In this respect, the terms instructive power and destructive power do not describe any observer-independent, existing units that a person has or attributes that are inherent in a person, but rather assertion potential in social relations.
The same applies to the relational-constructivist understanding of lifeworlds and living conditions: Although a person's living conditions seem to be much more accessible than a person's lifeworld by observation, both categories are always subject to the ever differing perspective of an observer. Nevertheless, it remains easier to describe life conditions than lifeworlds. While living conditions actually can be observed, statements about lifeworlds always refer to speculated cognitive constructs that cannot be accessed by observation.
For Kraus, it is important that systems cannot be defined as independent from an observer. This is the reason he names criteria allowing distinction between a system and its surrounding environment:
A system is a set of elements, which are determined as cohesive from an observer's perspective. Their relations to each other differ quantitatively and/or qualitatively from those to other entities. These observed differences allow to constitute a system border, distinguishing the system from its environment.
He concludes that it depends on these criteria and the observations made by the observing persons, whether systems can be identified or not.
Criticism and counter-criticism – loss of truth and "fake news"
Constructivist positions are accused of being "blind to the difference between truth and lies." It is problematized that truths only seem to exist in the plural and that the associated task of distinguishing between lies and truth is "dangerous on the one hand and inappropriate on the other".
Kraus takes a detailed look at this problem at various points and, drawing on philosophical discourses of truth, clarifies that a distinction must first be made between "truth" and "truthfulness" and that the opposite of "truth" is not the "lie" but the "falsehood". The counterpart of "truthfulness", on the other hand, is the category of "lies".
So there are the following pairs: truth and falsehood, truthfulness and lie. Based on this, Kraus defines a lie as a statement that contradicts the speaker's own belief that it is true:
A person's statement is considered a lie if it contradicts their own thinking of it as true.
He then differentiates between lies (deliberate false statements) and errors (statements subjectively held to be true that are judged to be untrue or false). He also clarifies that it can only be decided from observer positions whether a statement is true or false, but that these decisions cannot be made arbitrarily; they must be reasonably justified.
In this respect, there can be no objective truth from the perspective of a constructivist epistemology, but it is still possible to justify when a statement should be considered true in terms of consensus and/or coherence.
Kraus claims that with this approach, it is also constructivistically possible to make a well-founded decision about the difference between news and fake news.
Literature
Kraus, Björn (2014): Introducing a model for analyzing the possibilities of power, help and control. In: Social Work and Society. International Online Journal. Retrieved 3 April 2019. (http://www.socwork.net/sws/article/view/393)
Kraus, Björn (2015): The Life We Live and the Life We Experience: Introducing the Epistemological Difference between "Lifeworld" (Lebenswelt) and "Life Conditions" (Lebenslage). In: Social Work and Society. International Online Journal. Retrieved 27 August 2018.(http://www.socwork.net/sws/article/view/438)
Kraus, Björn (2017): Plädoyer für den Relationalen Konstruktivismus und eine Relationale Soziale Arbeit. (Forum Sozial, 1/2017). (http://www.pedocs.de/frontdoor.php?source_opus=15381)
Kraus, Björn (2019): Relational constructivism and relational social work. In: Webb, Stephen, A. (edt.) The Routledge Handbook of Critical Social Work. Routledge international Handbooks. London and New York: Taylor & Francis Ltd.
Kraus, Björn (2019): Relationaler Konstruktivismus – Relationale Soziale Arbeit. Von der systemisch-konstruktivistischen Lebensweltorientierung zu einer relationalen Theorie der Sozialen Arbeit. Weinheim, München: Beltz, Juventa.
References
German philosophy
Social epistemology
Constructivism | Relational constructivism | Technology | 2,424 |
8,120,099 | https://en.wikipedia.org/wiki/Nephrin | Nephrin is a protein necessary for the proper functioning of the renal filtration barrier. The renal filtration barrier consists of fenestrated endothelial cells, the glomerular basement membrane, and the podocytes of epithelial cells. Nephrin is a transmembrane protein that is a structural component of the slit diaphragm. It is present on the tips of the podocytes as an intricate mesh connecting adjacent foot processes. Nephrin contributes to the strong size selectivity of the slit diaphragm, however, the relative contribution of the slit diaphragm to exclusion of protein by the glomerulus is debated. The extracellular interactions, both homophilic and heterophilic—between nephrin and NEPH1—are not completely understood. In addition to eight immunoglobulin G–like motifs and a fibronectin type 3 repeat, nephrin has a single transmembrane domain and a short intracellular tail. Tyrosine phosphorylation at different sites on the intracellular tail contribute to the regulation of slit diaphragm formation during development and repair in pathology affecting podocytes. Podocin may interact with nephrin to guide it onto lipid rafts in podocytes, requiring the integrity of an arginine residue of nephrin at position 1160.
A defect in the gene for nephrin, NPHS1, is associated with congenital nephrotic syndrome of the Finnish type and causes massive amounts of protein to be leaked into the urine, or proteinuria. Nephrin is also required for cardiovascular development.
Interactions
Nephrin has been shown to interact with:
CASK,
CD2AP,
CDH3,
CTNND1,
FYN,
KIRREL, and
NPHS2.
See also
Podocyte
References
Further reading
External links
Proteins | Nephrin | Chemistry | 391 |
9,860,414 | https://en.wikipedia.org/wiki/Thermosome | A thermosome is a group II chaperonin protein complex that functions in archaea. It is the homolog of eukaryotic CCT. This group II chaperonin is an ATP-dependent chaperonin that is responsible for folding or refolding of incipient or denatured proteins. A thermosome has two rings, each consisting of eight subunits, stacked together to form a cylindrical shape with a large cavity at the center. The thermosome is also defined by its heterooligomeric nature. The complex consists of that alternate location within its two rings.
Being a group II chaperonin, the thermosome has a structure similar to that of group I chaperonins. The main difference lies in a helical protrusion in the thermosome which forms a built-in lid over the hydrophilic cavity. The thermosome is not only ATP-dependent; the mechanism by which it shifts from the open to the closed conformation is also temperature-dependent. The open conformation of the ATP-thermosome exists mainly at low temperatures, whereas the closed conformation occurs upon heating to physiological temperature.
Similar to the GroEL chaperonins in bacteria, the thermosome shows negative cooperativity since the two rings of the thermosome show different affinities for the binding of ATP. However, unlike the GroEL system, the thermosome is less affected by the concentration of ATP. In the absence of ATP, the thermosome does not have a preference for the T-state over the R-state. There is, however, an inhibition for the loading of the second ring when ADP is bound to the first ring.
The N-terminus and C-terminus of thermosomes are arranged in an anti-parallel fashion and their interactions form part of the intra-ring interactions. Both the N-terminus and C-terminus of the thermosome have charged residues which interact with each other to contribute to the thermal stability of the thermosome. The cpn-α and cpn-β thermosomes specifically show maximum thermal stability in the pH range of 7.0 to 8.0 because this is the range where the charged N- and C-terminal residues have net charges close to zero. Under lower or higher pH conditions, these residues are charged and repel each other, which negatively affects thermal stability. This shows one possible way in which pH affects the stability of the thermosome.
External links
References
Protein complexes
Archaea biology | Thermosome | Chemistry,Biology | 556 |
67,249,581 | https://en.wikipedia.org/wiki/Idecabtagene%20vicleucel | Idecabtagene vicleucel, sold under the brand name Abecma, is a cell-based gene therapy to treat multiple myeloma.
The most common side effects include cytokine release syndrome (CRS), infections, fatigue, musculoskeletal pain, and a weakened immune system (hypogammaglobulinemia).
Idecabtagene vicleucel is a B-cell maturation antigen (BCMA)-directed genetically modified autologous chimeric antigen receptor (CAR) T-cell therapy. Each dose is customized using a patient's own T-cells, which are a type of white blood cell, that are collected and genetically modified to include a new gene that facilitates targeting and killing myeloma cells, and infused back into the patient.
Idecabtagene vicleucel was approved for medical use in the United States in March 2021. It is the first cell-based gene therapy approved by the US Food and Drug Administration (FDA) for the treatment of multiple myeloma. It was approved for medical use in the European Union in August 2021.
Medical uses
Idecabtagene vicleucel is indicated for the treatment of adults with relapsed or refractory multiple myeloma after two or more prior lines of therapy, including an immunomodulatory agent, a proteasome inhibitor, and an anti-CD38 monoclonal antibody.
Multiple myeloma is an uncommon type of blood cancer in which abnormal plasma cells build up in the bone marrow and form tumors in many bones of the body. This disease keeps the bone marrow from making enough healthy blood cells, which can result in low blood counts. Myeloma can also damage the bones and the kidneys and weaken the immune system. The exact cause of multiple myeloma is unknown. According to the National Cancer Institute, myeloma accounted for approximately 1.8% (32,000) of all new cancer cases in the United States in 2020.
Adverse effects
The FDA label for idecabtagene vicleucel carries a boxed warning for cytokine release syndrome (CRS), neurologic toxicity, hemophagocytic lymphohistiocytosis/macrophage activation syndrome (HLH/MAS), and prolonged cytopenia. CRS and HLH/MAS are systemic responses to the activation and proliferation of CAR-T cells causing high fever and flu-like symptoms, and prolonged cytopenia is a drop in the number of a certain blood cell type for an extended period of time.
In April 2024, the FDA label boxed warning was expanded to include T cell malignancies.
History
The safety and efficacy of idecabtagene vicleucel were evaluated in a multicenter study of 127 people with relapsed (myeloma that returns after completion of treatment) and refractory (myeloma that does not respond to treatment) multiple myeloma who received at least three prior lines of antimyeloma therapies; 88% had received four or more prior lines of therapies. Efficacy was evaluated in 100 people who received idecabtagene vicleucel in the dose range of 300 to 460 × 10^6 CAR-positive T cells. Overall, 72% of people partially or completely responded to the treatment. Of those studied, 28% of people showed complete response—or disappearance of all signs of multiple myeloma—to idecabtagene vicleucel, and 65% of this group remained in complete response to the treatment for at least twelve months.
The US Food and Drug Administration (FDA) granted the application for idecabtagene vicleucel breakthrough therapy and orphan drug designations. The FDA granted approval of Abecma to Celgene Corporation, a Bristol-Myers Squibb Company.
Society and culture
Names
Idecabtagene vicleucel is the international nonproprietary name (INN).
References
Further reading
External links
Drugs developed by Bristol Myers Squibb
Cancer treatments
Drugs that are a gene therapy
Approved gene therapies
CAR T-cell therapy
Orphan drugs | Idecabtagene vicleucel | Biology | 864 |
11,358,084 | https://en.wikipedia.org/wiki/Idealization%20and%20devaluation | Psychoanalytic theory posits that an individual unable to integrate difficult feelings mobilizes specific defenses to overcome these feelings, which the individual perceives to be unbearable. The defense that effects (brings about) this process is called splitting. Splitting is the tendency to view events or people as either all bad or all good. When viewing people as all good, the individual is said to be using the defense mechanism idealization: a mental mechanism in which the person attributes exaggeratedly positive qualities to the self or others. When viewing people as all bad, the individual employs devaluation: attributing exaggeratedly negative qualities to the self or others.
In child development, idealization and devaluation are quite normal. During the childhood development stage, individuals become capable of perceiving others as complex structures, containing both good and bad components. If the development stage is interrupted (by early childhood trauma, for example), these defense mechanisms may persist into adulthood.
Sigmund Freud
The term idealization first appeared in connection with Freud's definition of narcissism. Freud's vision was that all human infants pass through a phase of primary narcissism in which they assume they are the centre of their universe. To obtain the parents' love the child comes to do what they think the parents value. Internalising these values the child forms an ego ideal. This ego ideal contains rules for good behaviour and standards of excellence toward which the ego has to strive. When the child cannot bear ambivalence between the real self and the ego ideal and defenses are used too often, it is called pathologic. Freud called this situation secondary narcissism, because the ego itself is idealized. Explanations of the idealization of others besides the self are sought in drive theory as well as in object relations theory. From the viewpoint of libidinal drives, idealization of other people is a "flowing-over" of narcissistic libido onto the object; from the viewpoint of self-object relations, the object representations (like that of the caregivers) were made more beautiful than they really were.
Heinz Kohut
An extension of Freud's theory of narcissism came when Heinz Kohut presented the so-called "self-object transferences" of idealization and mirroring. To Kohut, idealization in childhood is a healthy mechanism. If the parents fail to provide appropriate opportunities for idealization (healthy narcissism) and mirroring (how to cope with reality), the child does not develop beyond a developmental stage in which they see themselves as grandiose but in which they also remain dependent on others to provide their self-esteem. Kohut stated that, with narcissistic patients, idealization of the self and the therapist should be allowed during therapy and then very gradually will diminish as a result of unavoidable optimal frustration.
Otto Kernberg
Otto Kernberg has provided an extensive discussion of idealization, both in its defensive and adaptive aspects. He conceptualised idealization as involving a denial of unwanted characteristics of an object, then enhancing the object by projecting one's own libido or omnipotence on it. He proposed a developmental line with one end of the continuum being a normal form of idealization and the other end a pathological form. In the latter, the individual has a problem with object constancy and sees others as all good or all bad, thus bolstering idealization and devaluation. At this stage idealization is associated with borderline pathology. At the other end of the continuum, idealization is said to be a necessary precursor for feelings of mature love.
See also
References
Borderline personality disorder
Defence mechanisms
Dichotomies
Narcissism | Idealization and devaluation | Biology | 772 |
840,758 | https://en.wikipedia.org/wiki/Ext%20functor | In mathematics, the Ext functors are the derived functors of the Hom functor. Along with the Tor functor, Ext is one of the core concepts of homological algebra, in which ideas from algebraic topology are used to define invariants of algebraic structures. The cohomology of groups, Lie algebras, and associative algebras can all be defined in terms of Ext. The name comes from the fact that the first Ext group Ext1 classifies extensions of one module by another.
In the special case of abelian groups, Ext was introduced by Reinhold Baer (1934). It was named by Samuel Eilenberg and Saunders MacLane (1942), and applied to topology (the universal coefficient theorem for cohomology). For modules over any ring, Ext was defined by Henri Cartan and Eilenberg in their 1956 book Homological Algebra.
Definition
Let R be a ring and let R-Mod be the category of modules over R. (One can take this to mean either left R-modules or right R-modules.) For a fixed R-module A, let T(B) = HomR(A, B) for B in R-Mod. (Here HomR(A, B) is the abelian group of R-linear maps from A to B; this is an R-module if R is commutative.) This is a left exact functor from R-Mod to the category of abelian groups Ab, and so it has right derived functors RiT. The Ext groups are the abelian groups defined by
Exti(A, B) = (RiT)(B)
for an integer i. By definition, this means: take any injective resolution
0 → B → I0 → I1 → ⋯,
remove the term B, and form the cochain complex:
0 → HomR(A, I0) → HomR(A, I1) → ⋯
For each integer i, Exti(A, B) is the cohomology of this complex at position i. It is zero for i negative. For example, Ext0(A, B) is the kernel of the map HomR(A, I0) → HomR(A, I1), which is isomorphic to HomR(A, B).
An alternative definition uses the functor G(A)=HomR(A, B), for a fixed R-module B. This is a contravariant functor, which can be viewed as a left exact functor from the opposite category (R-Mod)op to Ab. The Ext groups are defined as the right derived functors RiG:
Exti(A, B) = (RiG)(A).
That is, choose any projective resolution
⋯ → P1 → P0 → A → 0,
remove the term A, and form the cochain complex:
0 → HomR(P0, B) → HomR(P1, B) → ⋯
Then Exti(A, B) is the cohomology of this complex at position i.
One may wonder why the choice of resolution has been left vague so far. In fact, Cartan and Eilenberg showed that these constructions are independent of the choice of projective or injective resolution, and that both constructions yield the same Ext groups. Moreover, for a fixed ring R, Ext is a functor in each variable (contravariant in A, covariant in B).
For a commutative ring R and R-modules A and B, Ext(A, B) is an R-module (using that HomR(A, B) is an R-module in this case). For a non-commutative ring R, Ext(A, B) is only an abelian group, in general. If R is an algebra over a ring S (which means in particular that S is commutative), then Ext(A, B) is at least an S-module.
Properties of Ext
Here are some of the basic properties and computations of Ext groups.
Ext0(A, B) ≅ HomR(A, B) for any R-modules A and B.
Exti(A, B) = 0 for all i > 0 if the R-module A is projective (for example, free) or if B is injective.
The converses also hold:
If Ext1(A, B) = 0 for all B, then A is projective (and hence Exti(A, B) = 0 for all i > 0).
If Ext1(A, B) = 0 for all A, then B is injective (and hence Exti(A, B) = 0 for all i > 0).
ExtiZ(A, B) = 0 for all i ≥ 2 and all abelian groups A and B.
Generalizing the previous example, ExtiR(A, B) = 0 for all i ≥ 2 if R is a principal ideal domain.
If R is a commutative ring and u in R is not a zero divisor, then
Ext0R(R/(u), B) ≅ B[u] and Ext1R(R/(u), B) ≅ B/uB,
with ExtiR(R/(u), B) = 0 for i ≥ 2, for any R-module B. Here B[u] denotes the u-torsion subgroup of B, {x ∈ B: ux = 0}. Taking R to be the ring of integers Z, this calculation can be used to compute Ext1Z(A, B) for any finitely generated abelian group A.
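To make the preceding calculation concrete, here is a short worked example (added for illustration; the notation follows the definitions above, and the resolution used is the standard multiplication-by-m resolution over the integers):

```latex
% Illustrative worked example: Ext groups of Z/m over Z.
% Start from the free resolution of Z/m given by multiplication by m:
\[
0 \longrightarrow \mathbb{Z} \xrightarrow{\;m\;} \mathbb{Z} \longrightarrow \mathbb{Z}/m \longrightarrow 0.
\]
% Apply Hom_Z(-, B), drop the term Z/m, and take cohomology of the complex
%   B --m--> B:
\[
\operatorname{Ext}^0_{\mathbb{Z}}(\mathbb{Z}/m, B) \cong B[m], \qquad
\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/m, B) \cong B/mB, \qquad
\operatorname{Ext}^i_{\mathbb{Z}}(\mathbb{Z}/m, B) = 0 \ (i \ge 2).
\]
% In particular, for B = Z/n one gets
\[
\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/m, \mathbb{Z}/n)
\cong (\mathbb{Z}/n)/m(\mathbb{Z}/n)
\cong \mathbb{Z}/\gcd(m,n).
\]
```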
Generalizing the previous example, one can compute Ext groups when the first module is the quotient of a commutative ring by any regular sequence, using the Koszul complex. For example, if R is the polynomial ring k[x1,...,xn] over a field k, then Ext*R(k,k) is the exterior algebra S over k on n generators in Ext1. Moreover, Ext*S(k,k) is the polynomial ring R; this is an example of Koszul duality.
By the general properties of derived functors, there are two basic exact sequences for Ext. First, a short exact sequence 0 → K → L → M → 0 of R-modules induces a long exact sequence of the form
0 → HomR(A, K) → HomR(A, L) → HomR(A, M) → Ext1(A, K) → Ext1(A, L) → Ext1(A, M) → Ext2(A, K) → ⋯,
for any R-module A. Also, a short exact sequence 0 → K → L → M → 0 induces a long exact sequence of the form
0 → HomR(M, B) → HomR(L, B) → HomR(K, B) → Ext1(M, B) → Ext1(L, B) → Ext1(K, B) → Ext2(M, B) → ⋯,
for any R-module B.
Ext takes direct sums (possibly infinite) in the first variable and products in the second variable to products. That is:
Exti(⊕α Aα, B) ≅ ∏α Exti(Aα, B) and Exti(A, ∏α Bα) ≅ ∏α Exti(A, Bα).
Let A be a finitely generated module over a commutative Noetherian ring R. Then Ext commutes with localization, in the sense that for every multiplicatively closed set S in R, every R-module B, and every integer i,
S−1 ExtiR(A, B) ≅ ExtiS−1R(S−1A, S−1B).
Ext and extensions
Equivalence of extensions
The Ext groups derive their name from their relation to extensions of modules. Given R-modules A and B, an extension of A by B is a short exact sequence of R-modules
0 → B → E → A → 0.
Two extensions
0 → B → E → A → 0 and 0 → B → E′ → A → 0
are said to be equivalent (as extensions of A by B) if there is a commutative diagram connecting them, that is, a map E → E′ that commutes with the identity maps on B and A. Note that the Five lemma implies that the middle arrow is an isomorphism. An extension of A by B is called split if it is equivalent to the trivial extension
0 → B → B ⊕ A → A → 0.
There is a one-to-one correspondence between equivalence classes of extensions of A by B and elements of Ext1(A, B). The trivial extension corresponds to the zero element of Ext1(A, B).
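For instance (an illustrative example added here, not part of the surrounding text), the correspondence can be made concrete for cyclic groups over the integers:

```latex
% Illustrative example: extensions of Z/n by Z over R = Z, for n > 1.
% The short exact sequence given by multiplication by n,
\[
0 \longrightarrow \mathbb{Z} \xrightarrow{\;n\;} \mathbb{Z} \longrightarrow \mathbb{Z}/n \longrightarrow 0,
\]
% is a non-split extension of Z/n by Z (there is no homomorphism Z/n -> Z
% splitting the quotient map), and its equivalence class is a generator of
\[
\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/n, \mathbb{Z}) \cong \mathbb{Z}/n,
\]
% while the split extension 0 -> Z -> Z \oplus Z/n -> Z/n -> 0 corresponds
% to the zero element.
```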
The Baer sum of extensions
The Baer sum is an explicit description of the abelian group structure on Ext1(A, B), viewed as the set of equivalence classes of extensions of A by B. Namely, given two extensions
0 → B → E → A → 0 (with maps f: B → E and g: E → A)
and
0 → B → E′ → A → 0 (with maps f′: B → E′ and g′: E′ → A),
first form the pullback over A,
Γ = {(e, e′) ∈ E ⊕ E′ : g(e) = g′(e′)}.
Then form the quotient module
Y = Γ / {(f(b), −f′(b)) : b ∈ B}.
The Baer sum of E and E′ is the extension
0 → B → Y → A → 0,
where the first map is b ↦ [(f(b), 0)] = [(0, f′(b))] and the second is (e, e′) ↦ g(e).
Up to equivalence of extensions, the Baer sum is commutative and has the trivial extension as identity element. The negative of an extension 0 → B → E → A → 0 is the extension involving the same module E, but with the homomorphism B → E replaced by its negative.
Construction of Ext in abelian categories
Nobuo Yoneda defined the abelian groups Extn(A, B) for objects A and B in any abelian category C; this agrees with the definition in terms of resolutions if C has enough projectives or enough injectives. First, Ext0(A, B) = HomC(A, B). Next, Ext1(A, B) is the set of equivalence classes of extensions of A by B, forming an abelian group under the Baer sum. Finally, the higher Ext groups Extn(A, B) are defined as equivalence classes of n-extensions, which are exact sequences
0 → B → Xn → ⋯ → X1 → A → 0,
under the equivalence relation generated by the relation that identifies two extensions
ξ: 0 → B → Xn → ⋯ → X1 → A → 0 and ξ′: 0 → B → X′n → ⋯ → X′1 → A → 0
if there are maps Xm → X′m for all m in {1, 2, ..., n} so that every resulting square commutes, that is, if there is a chain map ξ → ξ′ which is the identity on A and B.
The Baer sum of two n-extensions as above is formed by letting X″1 be the pullback of X1 and X′1 over A, and X″n be the pushout of Xn and X′n under B. Then the Baer sum of the extensions is
0 → B → X″n → Xn−1 ⊕ X′n−1 → ⋯ → X2 ⊕ X′2 → X″1 → A → 0.
The derived category and the Yoneda product
An important point is that Ext groups in an abelian category C can be viewed as sets of morphisms in a category associated to C, the derived category D(C). The objects of the derived category are complexes of objects in C. Specifically, one has
Exti(A, B) = HomD(C)(A, B[i]),
where an object of C is viewed as a complex concentrated in degree zero, and [i] means shifting a complex i steps to the left. From this interpretation, there is a bilinear map, sometimes called the Yoneda product:
Exti(A, B) × Extj(B, C) → Exti+j(A, C),
which is simply the composition of morphisms in the derived category.
The Yoneda product can also be described in more elementary terms. For i = j = 0, the product is the composition of maps in the category C. In general, the product can be defined by splicing together two Yoneda extensions.
Alternatively, the Yoneda product can be defined in terms of resolutions. (This is close to the definition of the derived category.) For example, let R be a ring, with R-modules A, B, C, and let P, Q, and T be projective resolutions of A, B, C. Then Exti(A, B) can be identified with the group of chain homotopy classes of chain maps P → Q[i]. The Yoneda product is given by composing chain maps: a class represented by a chain map P → Q[i] and a class represented by a chain map Q → T[j] (shifted to Q[i] → T[i+j]) compose to a chain map P → T[i+j], representing a class in Exti+j(A, C).
By any of these interpretations, the Yoneda product is associative. As a result, Ext*R(A, A) is a graded ring, for any R-module A. For example, this gives the ring structure on group cohomology H*(G, Z), since this can be viewed as Ext*Z[G](Z, Z). Also by associativity of the Yoneda product: for any R-modules A and B, Ext*R(A, B) is a module over the graded ring Ext*R(A, A).
Important special cases
Group cohomology is defined by H*(G, M) = Ext*Z[G](Z, M), where G is a group, M is a representation of G over the integers, and Z[G] is the group ring of G.
For an algebra A over a field k and an A-bimodule M, Hochschild cohomology is defined by HH*(A, M) = Ext*A⊗kAop(A, M), where the Ext group is taken over the enveloping algebra A ⊗k Aop.
Lie algebra cohomology is defined by H*(g, M) = Ext*Ug(k, M), where g is a Lie algebra over a commutative ring k, M is a g-module, and Ug is the universal enveloping algebra.
For a topological space X, sheaf cohomology can be defined as Hi(X, A) = Exti(ZX, A). Here Ext is taken in the abelian category of sheaves of abelian groups on X, and ZX is the sheaf of locally constant Z-valued functions.
For a commutative Noetherian local ring R with residue field k, Ext*R(k, k) is the universal enveloping algebra of a graded Lie algebra π*(R) over k, known as the homotopy Lie algebra of R. (To be precise, when k has characteristic 2, π*(R) has to be viewed as an "adjusted Lie algebra".) There is a natural homomorphism of graded Lie algebras from the André–Quillen cohomology D*(k/R,k) to π*(R), which is an isomorphism if k has characteristic zero.
See also
global dimension
bar resolution
Grothendieck group
Grothendieck local duality
Notes
References
Homological algebra
Binary operations | Ext functor | Mathematics | 2,403 |
70,125,767 | https://en.wikipedia.org/wiki/Huallaga%20River%20Boats%20Collision%20%282021%29 | The Huallaga river boats collision was a fatal boat collision that killed 21 in Peru. It occurred on August 29, 2021, in the Alto Amazonas Province, west of the Department of Loreto. An additional unknown number of people were described as missing.
Description
The event occurred in the early morning of August 29 in the Alto Amazonas Province, when a motorized ferry collided with a river boat. The boat had approximately 80 people on board. Intense morning fog made it difficult to see.
Petroperú reported that the 80-person boat was called Ayachi, and the motor boat Nauta. Ayachi picked up its passengers at 1:00 a.m. in Santa María to transfer them to Yurimaguas, while Nauta headed for Iquitos. Ayachi's passengers belonged to an evangelical congregation called Nueva Jerusalén.
Rescue
At the time of the accident, smaller boats of locals came to rescue the survivors. A passenger from Ayachi relates:
"Some grabbed us from behind, desperate. We were under the boat. We have managed to get out. My colleagues don't. I have lost my wife and seven-year-old son."
Rescuers from the Peruvian National Police and the Peruvian Navy went to the scene, where they managed to rescue 50 people alive. At the beginning, 16 were reported missing.
The number of survivors rose to 60, and the number of deceased increased to 23 on August 31. One family was reported to have 14 deaths in the accident.
References
2021 in Peru
Collision
2021 disasters in Peru | Huallaga River Boats Collision (2021) | Physics | 321 |
26,084,262 | https://en.wikipedia.org/wiki/Mougin%20turret | The Mougin turret is a land-based revolving gun turret that housed some of the heaviest armament in French fortifications of the late 19th and early 20th centuries. While not reliably resistant to the explosive shells of opposing artillery, Mougin turrets remained active through 1940, when they engaged German and Italian forces during the Battle of France and the Italian invasion of France. The turrets were used at twenty-two forts of the Séré de Rivières system built in the 1870s.
The Mougin turret was named for its designer, Commandant Mougin, who developed the first turret in 1875. The turret consists of two 155 mm guns under a bowl-shaped armor shield, sunk into the ground and surrounded by a thick concrete apron that protected the multi-level traverse and loading facilities below. The turret is distinguished from naval turrets by the absence of protruding barrels. Two oval ports show just the muzzles of the guns. By contrast with naval practice, in which guns pivot in elevation on trunnions near their breeches, their muzzles and barrels protruding and moving in an arc, the Mougin turret's guns pivot at their muzzles, the barrel, gun carriage and breech ends rising and falling within the turret. This reduces the chances of enemy fire hitting the guns, a small risk on a moving ship, but significant for a fixed fortification. When under fire, the turret could rotate its gun apertures away from the incoming fire and, rotating without pausing, return fire each time the guns swept past the correct target azimuth.
Description
The visible portion of the turret was in diameter, in cast and rolled iron of four segments thick, with a fifth casting forming the top. The rotating gun and turret assembly weighed 160 tons, rotating on a circular rail around a hydraulically supported pivot. The movement of the turret initially required three teams of six men. After 1901 steam engines were installed to replace men. A full revolution took about two minutes, enough time to reload before the target azimuth was obtained again. Elevation varied from -5 degrees to +20 degrees.
Mougin guns had a maximum range of about . The guns themselves were made by de Bange. Twenty-five turrets were built at a cost of 205,000 francs each, primarily at Commentry near Montluçon.
Mougin casemate
A variant on the Mougin turret is the Mougin casemate, which employs the 155 mm Model 1877 gun on a hydraulic traversing carriage within an armored concrete casemate. The casemate has exceptionally low overhead clearance, resulting in a low profile above the ground. The gun can be traversed over a 60-degree arc, and can be elevated between −5 degrees and +20 degrees. This narrow range limits the gun to direct fire with a range of , as most indirect fire requires greater elevation. The firing port measures by , and can be blocked with a thick counterweighted armored shield when not firing. An interlock prevents firing while the shield is in the way. The gun's muzzle remains behind the movable shield and is not visible from the outside.
The shielding around the firing chamber is a mixture of masonry, concrete, steel armor and earth shielding. The limited angle of fire, coupled with problems of noise and ventilation, limited installations to ten locations. None were ever fired in action, and most were removed for scrap by the Germans in 1943, or by the French Army after the war.
Trials
A comparative evaluation between French Mougin turrets with de Bange guns and German Schumann-Gruson turrets with Krupp guns took place at Bucharest in 1883–84 under the supervision of Belgian General Henri Alexis Brialmont, who was then overseeing the design of the fortifications of Bucharest. The trials at Cotroceni revealed that the French turrets were more reliable, and had a higher rate of fire, but the German guns were more accurate. The French armor proved to be less durable under fire as well.
Installations
The first two Mougin turrets were installed at the Fort de Giromagny on the eastern defensive curtain of France near Belfort.
Surviving Mougin turrets may be found at Fort de Saint-Cyr (guns missing), Fort de Villey-le-Sec, Fort de Vaujours (guns missing), Fort de Frouard, Fort de Liouville, Fort de Corbas, Fort Suchet (two turrets, one with guns, the other turret's guns removed to Villey-le-Sec), Fort de Domont (guns missing), and Fort de Stains (guns missing).
Casemates
Surviving Mougin casemates exist at Fort du Mont Bart (gun missing, replica in place), Fort de Condé-sur-Aisne (gun remains, training mechanism missing), Fort de Joux (two casemates, guns and mounts missing), Fort Tête de Chien (gun missing), Fort des Ayvelles (destroyed) and the Batterie de l'Eperon (two casemates, parts of the mounting remain).
References
External links
Mougin Model 1876 at fortiffsere.fr
Mougin 155 mm casemate at fortiffsere.fr
Mougin turret installations at fortiff.be
Séré de Rivières system
155 mm artillery
World War I artillery of France | Mougin turret | Engineering | 1,089 |
981,045 | https://en.wikipedia.org/wiki/Three-key%20exposition | In music, the three-key exposition is a particular kind of exposition used in sonata form.
Normally, a sonata form exposition has two main key areas. The first asserts the primary key of the piece, that is, the tonic. The second section moves to a different key, establishes that key firmly, arriving ultimately at a cadence in that key. For the second key, composers normally chose the dominant for major-key sonatas, and the relative major (or less commonly, the minor-mode dominant) for minor-key sonatas. The three-key exposition moves not directly to the dominant or relative major, but indirectly via a third key; hence the name.
Examples
A very early example appears in the first movement of Haydn's String Quartet in D major, Op. 17 No. 6: the three keys are D major, C major, and A major. (C major is prepared by a modulation to its relative minor A minor, which happens to be the dominant minor of the original key.)
Ludwig van Beethoven wrote a number of sonata movements during the earlier part of his career with three-key expositions. For the "third" (that is, the intermediate) key, Beethoven made various choices: the dominant minor (Piano Sonata No. 2, Op. 2 no. 2; String Quartet No. 5, Op. 18 no. 5), the supertonic minor (Piano Sonata No. 3, Op. 2 no. 3), and the relative minor (Piano Sonata No. 7, Op. 10 no. 3). Later, Beethoven used the supertonic major (Piano Sonata No. 9, Op. 14 no. 1, Piano Sonata No. 11, Op. 22), which is only a mild sort of three-key exposition, since the supertonic major is the dominant of the dominant, and commonly arises in any event as part of the modulation. As he entered his so-called "middle period," Beethoven abandoned the three-key exposition. This was part of a general change in the composer's work in which he moved closer to the older practice of Haydn, writing less discursive and more closely organized sonata movements.
Franz Schubert, who liked discursive forms for the entirety of his short career, also employed the three-key expositions in many of his sonata movements. A famous example is the first movement of the Death and the Maiden Quartet in D minor, in which the exposition moves to F major and then A minor (translated to D major and minor respectively in the recapitulation), a formula that is repeated in the final movement; another is the Violin Sonata in A major (in which the second theme appears in G major and B major, while only the closing passage of the exposition is in the dominant, E major). His B major piano sonata, D 575, even uses a four-key exposition (B major, G major, E major, F-sharp major): this key scheme is literally transposed up a fourth for the recapitulation. The finale of his sixth symphony (D 589) is an even more extreme case: its exposition passes from C major to G major by way of A-flat major, F major, A major, and E-flat major, making a six-key exposition.
Felix Mendelssohn followed the Death and the Maiden example in the first movement of his second Piano Trio, in which the E flat major second theme gives way to a G minor close (transposed to C major and minor in the recapitulation).
The first movement of Frédéric Chopin's Piano Concerto in F minor also has a three-key exposition (F minor, A-flat major, C minor).
The first movement of the second cello sonata by Brahms also employs a three-key exposition moving to C major and then A minor, the exposition of the first movement of the String Sextet in B flat involves an intervening theme in A major before reaching F, and the Piano Quartet in G minor involves secondary themes in D minor and major respectively (the first of these being omitted in the recapitulation and the second transposed to E flat major moving back to G minor). The D minor violin sonata has a final movement that moves through a calm second theme in C major before closing the exposition in A minor.
Further reading
Longyear, Rey M., and Kate R. Covington (1988). Sources of the three-key exposition. The Journal of Musicology 6(4), pp. 448-470.
Rosen, Charles (1985) Sonata Forms. New York: Norton.
Hunt, Graham G. (2014). When Structure and Design Collide: The Three-Key Exposition Revisited. Music Theory Spectrum 36(2), pp. 247–269.
Formal sections in music analysis | Three-key exposition | Technology | 984 |
2,366,340 | https://en.wikipedia.org/wiki/Fishing%20weir | A fishing weir, fish weir, fishgarth or kiddle is an obstruction placed in tidal waters, or wholly or partially across a river, to direct the passage of, or trap fish. A weir may be used to trap marine fish in the intertidal zone as the tide recedes, fish such as salmon as they attempt to swim upstream to breed in a river, or eels as they migrate downstream. Alternatively, fish weirs can be used to channel fish to a particular location, such as to a fish ladder. Weirs were traditionally built from wood or stones. The use of fishing weirs as fish traps probably dates back prior to the emergence of modern humans, and have since been used by many societies around the world.
In the Philippines, specific indigenous fishing weirs (a version of the ancient Austronesian stone fish weirs) are also known in English as fish corral and barrier net.
Etymology
The English word 'weir' comes from the Anglo-Saxon wer, one meaning of which is a device to trap fish.
Fishing weirs by region
Africa
A line of stones dating to the Acheulean in Kenya may have been a stone tidal weir in a prehistoric lake, which if true would make this technology older than modern humans.
Americas
North America
In September 2014 researchers from University of Victoria investigated what may turn out to be a 14,000-year-old fish weir in of water off the coast of Haida Gwaii, British Columbia.
In Virginia, the Native Americans built V-shaped stone weirs in the Potomac River and James River. These were described in 1705 in The History and Present State of Virginia, In Four Parts by Robert Beverley Jr:
This practice was taken up by the early settlers but the Maryland General Assembly ordered the weirs to be destroyed on the Potomac in 1768. Between 1768 and 1828 considerable efforts were made to destroy fish weirs that were an obstruction to navigation and from the mid-1800s, those that were assumed to be detrimental to sports fishing.
In the Back Bay area of Boston, Massachusetts, wooden stake remains of the Boylston Street Fishweir have been documented during excavations for subway tunnels and building foundations. The Boylston Street Fishweir was actually a series of fish weirs built and maintained near the tidal shoreline between 3,700 and 5,200 years ago.
Natives in Nova Scotia use weirs that stretch across the entire river to retain shad during their seasonal runs up the Shubenacadie, Nine Mile, and Stewiacke rivers, and use nets to scoop the trapped fish. Various weir patterns were used on tidal waters to retain a variety of different species, which are still used today. V-shaped weirs with circular formations to hold the fish during high tides are used on the Bay of Fundy to fish herring, which follow the flow of water. Similar V-shaped weirs are also used in British Columbia to corral salmon to the end of the "V" during the changing of the tides.
The Cree of the Hudson Bay Lowlands used weirs consisting of a fence of poles and a trap across fast flowing rivers. The fish were channelled by the poles up a ramp and into a box-like structure made of poles lashed together. The top of the ramp remained below the surface of the water but slightly above the top of the box so that the flow of the water and the overhang of the ramp stopped the fish from escaping from the box. The fish were then scooped out of the box with a dip net.
South America
A large series of fish weirs, canals and artificial islands was built by an unknown pre-Columbian culture in the Baures region of Bolivia, part of the Llanos de Moxos. These earthworks cover over , and appear to have supported a large and dense population around 3000 BCE.
Stone fish weirs were in use 6,000 years ago in Chiloé Island off the coast of Chile.
Asia and Oceania
Tidal stone fish weirs are one of the ancestral fishing technologies of the seafaring Austronesian peoples. They are found throughout regions settled by Austronesians during the Austronesian expansion and are very similar in shape and construction throughout. In some regions they have also been adopted into fish pens or use more perishable materials like bamboo, brushwood, and netting. They are found in the highest concentrations in Penghu Island in Taiwan, the Philippines, and all throughout Micronesia. They are also prevalent in eastern Indonesia, Melanesia, and Polynesia. Around 500 stone weirs survive in Taiwan, and millions of stone weirs used to exist through all of the islands of Micronesia. They are known under distinct local names in the Visayas Islands of the Philippines, in Chuuk, in Yap, in Hawaii, and in New Zealand, among other places. The oldest known example of a stone fish weir in Taiwan was constructed by the indigenous Taokas people in Miaoli County. Most stone fish weirs are believed to also be ancient, but few studies have been conducted into their antiquity, as their ages are difficult to determine because the weirs were continually rebuilt in the same location.
The technology of tidal stone fish weirs also spread to neighboring regions when Taiwan came under the jurisdiction of China and Imperial Japan in recent centuries. Distinct local names for them exist in Kyushu and the Ryukyu Islands in Japan, in South Korea (particularly Jeju Island), and in Taiwan.
The Han Chinese also had separate ancient fish weir techniques, which use bamboo gates or "curtains" in river estuaries. These date back to at least the 7th century in China.
Europe
In medieval Europe, large fishing weir structures were constructed from wood posts and wattle fences. V-shaped structures in rivers could be as long as and worked by directing fish towards fish traps or nets. Such weirs were frequently the cause of disputes between various classes of river users and tenants of neighbouring land. Basket weir fish traps are shown in medieval illustrations and surviving examples have been found. Basket weirs are about long and comprise two wicker cones, one inside the other—easy for fish to get into but difficult to escape.
Great Britain
In Great Britain the traditional form was one or more rock weirs constructed in tidal races or on a sandy beach, with a small gap that could be blocked by wattle fences when the tide turned to flow out again.
Wales
Surviving examples, but no longer in use, can be seen in the Menai Strait, with the best preserved examples to be found at Ynys Gored Goch (Red Weir Island) dating back to around 1842. Also surviving are 'goredi' (originally twelve in number) on the beach at Aberarth, Ceredigion. Another ancient example was at Rhos Fynach in North Wales, which survived in use until World War I. The medieval fish weir at Traeth Lligwy, Moelfre, Anglesey was scheduled as an Ancient Monument in 2002.
England
Fish weirs were an obstacle to shipping and a threat to fish stocks, for which reasons over the course of history several attempts were made to control their proliferation. The Magna Carta of 1215 includes a clause embodying the barons' demands for the removal of the king's weirs and others:
A statute was passed during the reign of King Edward III (1327–1377) and was reaffirmed by King Edward IV in 1472. A further regulation was enacted under King Henry VIII, apparently at the instigation of Thomas Cromwell, when in 1535 commissioners were appointed in each county to oversee the "putting-down" of weirs. The words of the commission were as follows:
All weirs noisome to the passage of ships or boats to the hurt of passages or ways and causeys (i.e. causeways) shall be pulled down and those that be occasion of drowning of any lands or pastures by stopping of waters and also those that are the destruction of the increase of fish, by the discretion of the commissioners, so that if any of the before-mentioned depend or may grow by reason of the same weir then there is no redemption but to pull them down, although the same weirs have stood since 500 years before the Conquest.
The king did not exempt himself from the regulation and by the destruction of royal weirs lost 500 marks in annual income. The Lisle Papers provide a detailed contemporary narrative of the struggle of the owners of the weir at Umberleigh in Devon to be exempted from this 1535 regulation. The Salmon Fishery Act 1861 (24 & 25 Vict. c. 109) (relevant provisions re-enacted since) bans their use except wherever their almost continuous use can be traced to before the Magna Carta (1215).
Ireland
In Ireland, discoveries of fish traps associated with weirs have been dated to 8,000 years ago. Stone tidal weirs were used around the world and by 1707, 160 such structures, some of which reached 360 metres in length, were in use along the coast of the Shimabara Peninsula of Japan.
Gallery
See also
Desert kite
Fish screen
Mnjikaning Fish Weirs
Tailrace fishing
Weir
References
External links
Prehistoric Fishweirs in Eastern North America – master's thesis on fish weirs
Fishing equipment
Native American tools
Austronesian culture
Weirs | Fishing weir | Environmental_science | 1,885 |
64,362,831 | https://en.wikipedia.org/wiki/Stress%20wave%20communication | Stress wave communication is a technique of sending and receiving messages using host structure itself as the transmission medium.
Conventional modulation methods such as amplitude-shift keying (ASK), frequency-shift keying (FSK), phase-shift keying (PSK), quadrature amplitude modulation (QAM), pulse-position modulation (PPM) and orthogonal frequency-division multiplexing (OFDM) could be leveraged for stress wave communication. The challenge of using stress waves as the carrier of the communication is the severe signal distortion caused by multipath channel dispersion. Compared with other communication techniques, it is a very reliable form of communication for special applications, such as within concrete structures, well drilling strings, pipeline structures and so on.
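As a rough illustration of the multipath problem described above (a minimal sketch, not drawn from the cited literature; the channel taps, noise level and symbol count are invented for demonstration), the following Python snippet passes a BPSK symbol stream through a simple multipath impulse response and shows how echoes smear adjacent symbols:

```python
# Minimal, illustrative sketch: BPSK symbols through an assumed multipath channel.
# The channel taps below are made up; a real stress-wave channel in concrete or
# drill pipe would be measured and is typically far more dispersive.
import numpy as np

rng = np.random.default_rng(0)

bits = rng.integers(0, 2, size=16)            # random message bits
symbols = 2.0 * bits - 1.0                    # BPSK mapping: 0 -> -1, 1 -> +1

# Assumed multipath impulse response: a direct arrival plus two weaker echoes.
channel = np.array([1.0, 0.0, 0.6, 0.0, 0.3])

received = np.convolve(symbols, channel)      # echoes overlap neighbouring symbols
received += 0.05 * rng.standard_normal(received.shape)  # small measurement noise

# Naive hard decisions on the first len(symbols) samples ignore the dispersion,
# which is exactly why equalization (or OFDM-style processing) is needed.
decoded = (received[:len(symbols)] > 0).astype(int)
print("sent:   ", bits)
print("decoded:", decoded)
print("bit errors without equalization:", int(np.sum(decoded != bits)))
```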
References
Quantized radio modulation modes
Applied probability
Fault tolerance | Stress wave communication | Mathematics,Engineering | 164 |
5,348,452 | https://en.wikipedia.org/wiki/Quasi-perfect%20equilibrium | Quasi-perfect equilibrium is a refinement of Nash Equilibrium for extensive form games due to Eric van Damme.
Informally, a player playing by a strategy from a quasi-perfect equilibrium takes observed as well as potential future mistakes of his opponents into account but assumes that he himself will not make a mistake in the future, even if he observes that he has done so in the past.
Quasi-perfect equilibrium is a further refinement of sequential equilibrium. It is itself refined by normal form proper equilibrium.
Mertens' voting game
It has been argued by Jean-François Mertens that quasi-perfect equilibrium is superior to Reinhard Selten's notion of extensive-form trembling hand perfect equilibrium as a quasi-perfect equilibrium is guaranteed to describe admissible behavior. In contrast, for a certain two-player voting game no extensive-form trembling hand perfect equilibrium describes admissible behavior for both players.
The voting game suggested by Mertens may be described as follows:
Two players must elect one of them to perform an effortless task. The task may be performed either correctly or incorrectly. If it is performed correctly, both players receive a payoff of 1, otherwise both players receive a payoff of 0. The election is by a secret vote. If both players vote for the same player, that player gets to perform the task. If each player votes for himself, the player to perform the task is chosen at random but is not told that he was elected this way. Finally, if each player votes for the other, the task is performed by somebody else, with no possibility of it being performed incorrectly.
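To make the payoff structure concrete, here is a small illustrative Python sketch (not part of Mertens' original presentation) that enumerates expected payoffs of the pure strategy profiles. A strategy is modeled as a pair (whom to vote for, whether to perform the task correctly if elected); the common payoff and the random tie-break follow the description above, while the information structure (who learns what) is abstracted away:

```python
# Illustrative enumeration of Mertens' voting game (a sketch, not from the source).
# A pure strategy is (vote, perform_correctly): vote is "self" or "other";
# perform_correctly says what the player would do if elected.
from itertools import product

def expected_payoff(s1, s2):
    """Expected common payoff for the strategy profile (s1, s2)."""
    vote1, correct1 = s1
    vote2, correct2 = s2
    if vote1 == "other" and vote2 == "other":
        return 1.0      # task done by somebody else, always correctly
    if vote1 == "self" and vote2 == "self":
        # performer chosen at random between the two players
        return 0.5 * (1.0 if correct1 else 0.0) + 0.5 * (1.0 if correct2 else 0.0)
    # Otherwise both ballots name the same player: player 1 if vote1 == "self"
    # (then vote2 == "other"), player 2 in the remaining case.
    elected_correct = correct1 if vote1 == "self" else correct2
    return 1.0 if elected_correct else 0.0

strategies = list(product(["self", "other"], [True, False]))
for s1, s2 in product(strategies, strategies):
    print(s1, s2, "->", expected_payoff(s1, s2))
```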
In the unique quasi-perfect equilibrium for the game, each player votes for himself and, if elected, performs the task correctly. This is also the unique admissible behavior. But in any extensive-form trembling hand perfect equilibrium, at least one of the players believes that he is at least as likely as the other player to tremble and perform the task incorrectly, and hence votes for the other player.
The example illustrates that being a limit of equilibria of perturbed games, an extensive-form trembling hand perfect equilibrium implicitly assumes an agreement between the players about the relative magnitudes of future trembles. It also illustrates that such an assumption may be unwarranted and undesirable.
References
Game theory equilibrium concepts | Quasi-perfect equilibrium | Mathematics | 471 |
30,291,598 | https://en.wikipedia.org/wiki/Quelet%20reaction | The Quelet reaction (also called the Blanc–Quelet reaction) is an organic coupling reaction in which a phenolic ether reacts with an aliphatic aldehyde to generate an α-chloroalkyl derivative. The Quelet reaction is an example of a larger class of reaction, electrophilic aromatic substitution. The reaction is named after its creator R. Quelet, who first reported the reaction in 1932, and is similar to the Blanc chloromethylation process.
The reaction proceeds under strong acid catalysis using HCl; zinc(II) chloride may be used as a catalyst in instances where the ether is deactivated. The reaction primarily yields para-substituted products; however it can also produce ortho-substituted compounds if the para site is blocked.
Mechanism
The mechanism of the Quelet reaction is primarily categorized as a reaction in polar acid. First, the carbonyl is protonated, forming a highly reactive protonated aldehyde that acts as the electrophile toward the nucleophilic pi-bond of the aromatic ring. Next, the aromatic ring is re-formed via E1 elimination. Finally, the hydroxy group formed from the carbonyl oxygen is protonated a second time and leaves as a molecule of water, creating a carbocation that is attacked by the negatively charged chloride ion.
Reaction conditions and limitations
The reaction requires a strong acid catalyst, but both Lewis acids and Brønsted-Lowry acids can be used in the Quelet reaction. It has been noted that aqueous formaldehyde sometimes produces a better yield than paraformaldehyde. The reaction was first reported using zinc(II) chloride; however, the reaction has been noted to proceed in the absence of this catalyst with highly activated aromatic compounds. If an aromatic compound is used in which the para site is blocked, the reaction will add in the ortho position.
Not all aromatic compounds can undergo Quelet reactions. For example, highly halogenated aromatic compounds, aromatic compounds with nitro groups, and terphenyls cannot be used as reactants for Quelet reactions. Even for compounds that can undergo Quelet reactions, there sometimes exist other reactions that produce the same products in higher yields. The Quelet reaction can produce dangerous halomethyl ethers, gaseous and liquid compounds that are toxic to humans, and is therefore sometimes passed over in favor of chloromethylations without these harmful byproducts.
Usage
The Quelet reaction is an important step in the polymerization of aromatic monomers, such as styrene, PPO and PPEK. These chloromethylated aromatic polymers are used in a diverse set of industries, such as fuel cells and membranes for drug delivery.
See also
Blanc reaction
Electrophilic aromatic substitution
Friedel-Crafts Alkylation
References
Name reactions
Addition reactions
Substitution reactions
Carbon-carbon bond forming reactions | Quelet reaction | Chemistry | 589 |
29,993,138 | https://en.wikipedia.org/wiki/Neoclitocybe%20byssiseda | Neoclitocybe byssiseda is a species of fungus in the family Tricholomataceae, and the type species of the genus Neoclitocybe. Initially described as Omphalia byssiseda by Giacomo Bresadola in 1907, it was transferred to Neoclitocybe by Rolf Singer in 1961. The mushroom is edible.
References
External links
Tricholomataceae
Fungus species
Taxa named by Giacomo Bresadola
Fungi described in 1907 | Neoclitocybe byssiseda | Biology | 99 |
64,280,274 | https://en.wikipedia.org/wiki/Bioactive%20glass%20S53P4 | Bioactive glass S53P4 (BAG-S53P4) is a biomaterial consisting of sodium, silicate, calcium and phosphate. S53P4 is osteoconductive and also osteoproductive in the promotion, migration, replication and differentiation of osteogenic cells and their matrix production. In other words, it facilitates bone formation and regeneration (osteostimulation). S53P4 has been proven to naturally inhibit the bacterial growth of up to 50 clinically relevant bacteria strains.
History
The S53P4 bioactive glass has its roots in the bioglass 45S5 developed by Larry Hench in the late 1960s in New York. A couple of decades later, in the 1980s, the compound S53P4 bioactive glass was developed in Turku, Finland. S53P4 was found to be osteostimulative (non-osteoinductive), but it also had one new additional property: the composition of 53% silica and smaller weights of sodium, calcium and phosphorus gave rise to surface reactions in vitro that appeared to inhibit bacterial growth – a material that could not be infected by bacteria was discovered.
Applications
Areas of use include a wide range of indications that require the filling of bone cavities, voids, and gaps as well as the reconstruction or regeneration of bone defects. Several long-term studies have shown that mastoid cavities in cholesteatoma, old radical cavities, and chronic otitis media can be successfully obliterated with S53P4 bioactive glass.
Clinical application has been gained from several extensive studies where patients with bone infections have been treated. S53P4 has shown promising results in chronic osteomyelitis surgery, septic non-union surgery, segmental defect reconstructions and other infectious complications, such as sternum infections, diabetic foot osteomyelitis and spine infections.
S53P4 has gained clinical experience within spine surgery in spine fusions and spinal deformity surgery.
S53P4 has also been used successfully in the filling of benign bone tumor cavities in both adults and children, sustaining the bone cavity volume long term. Clinical experience has been gained from aneurysmal bone cysts (ABC), simple bone cysts (UBC), enchondroma and nonossifying fibroma (NOF).
Mechanism of action
When S53P4 bioactive glass is implanted into a bone cavity, the glass is activated through a reaction with body fluids. During this activation period, the bioactive glass goes through a series of chemical reactions, creating the ideal conditions for bone to rebuild through osteoconduction.
Na, Si, Ca, and P ions are released.
A silica gel layer forms on the bioactive glass surface.
CaP crystallizes, forming a layer of hydroxyapatite on the surface of the bioactive glass.
Once the hydroxyapatite layer is formed, the bioactive glass interacts with biological entities, i.e. blood proteins, growth factors and collagen. Following this interactive, osteoconductive and osteostimulative process, new bone grows onto and between the bioactive glass structures.
Bioactive glass bonds to bone – facilitating new bone formation.
Osteostimulation begins by stimulating osteogenic cells to increase the remodeling rate of bone.
Radio-dense quality of bioactive glass allows for post-operative evaluation.
In the final transformative phase, the process of bone regeneration and remodeling continues. Over time, the glass is fully remodeled into bone, restoring the patient's natural anatomy.
Bone consolidation occurs.
S53P4 bioactive glass continues to remodel into bone over a period of years.
Inhibition of bacterial growth
The bacterial growth inhibiting properties of S53P4 derive from two simultaneous chemical and physical processes, occurring once the bioactive glass reacts with body fluids. Sodium (Na) is released from the surface of the bioactive glass and induces an increase in pH (alkaline environment), which is not favorable for the bacteria, thus inhibiting their growth. The released Na, Ca, Si and P ions give rise to an increase in osmotic pressure due to an elevation in salt concentration, i.e. an environment where bacteria cannot grow.
References
Biomaterials | Bioactive glass S53P4 | Physics,Biology | 905 |
33,288,137 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2042 | In molecular biology, glycoside hydrolase family 42 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
The glycosyl hydrolase family 42 (CAZY GH_42) comprises beta-galactosidase enzymes. These enzymes catalyse the hydrolysis of terminal, non-reducing beta-D-galactoside residues. The middle domain of these three-domain enzymes is involved in trimerisation.
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 42 | Biology | 224 |
801,656 | https://en.wikipedia.org/wiki/Widescreen%20signaling | In television technology, Wide Screen Signaling (WSS) is digital metadata embedded in an invisible part of the analog TV signal that describes qualities of the broadcast, in particular the intended aspect ratio of the image. This allows television broadcasters to enable both 4:3 and 16:9 television sets to optimally present pictures transmitted in either format, by displaying them in full screen, letterbox, widescreen, pillar-box, zoomed letterbox, etc.
This development is related to the introduction of widescreen TVs and broadcasts, with the PALplus system in the European Union (mid-1990s), the Clear-Vision system in Japan (early 1990s), and the need to downscale HD broadcasts to SD in the US. The bandwidth of the WSS signal is low enough for it to be recorded on VHS (at the time a popular home video recording technology). It is standardized in Rec. ITU-R BT.1119-2.
A modern digital equivalent would be the Active Format Description, a standard set of codes that can be sent in an MPEG video stream, with a similar set of aspect ratio possibilities.
625 line systems
For 625-line analog TV systems (like PAL or SECAM), the signal is placed in line 23. It begins with a run-in code and a start code, followed by 14 bits of information divided into four groups, as shown in the tables below (based on Rec. ITU-R BT.1119-2):
Note: The transmitted aspect ratio is 4:3. Within this area a 14:9 window is protected, containing all the relevant picture content to allow a wide-screen display on a 16:9 television set.
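A receiver has to split the 14-bit data word into these groups before it can interpret them. The short Python sketch below shows one way this might be done; the 4/4/3/3 bit split and the group names used here are assumptions for illustration rather than a restatement of the standard's tables, and the actual code assignments should be taken from Rec. ITU-R BT.1119-2.

```python
# Minimal sketch: splitting a 14-bit WSS data word (625-line systems) into its
# four groups. The 4/4/3/3 split and the group labels are illustrative
# assumptions, not the authoritative assignments from Rec. ITU-R BT.1119-2.

def split_wss_word(word: int) -> dict:
    """Split a 14-bit WSS word (bit 0 = first transmitted bit) into groups."""
    if not 0 <= word < (1 << 14):
        raise ValueError("WSS data word must fit in 14 bits")
    return {
        "group1_aspect_ratio": word & 0b1111,               # bits 0-3 (assumed)
        "group2_enhanced_services": (word >> 4) & 0b1111,   # bits 4-7 (assumed)
        "group3_subtitles": (word >> 8) & 0b111,            # bits 8-10 (assumed)
        "group4_reserved": (word >> 11) & 0b111,            # bits 11-13 (assumed)
    }

if __name__ == "__main__":
    # Hypothetical data word, purely to show the splitting.
    print(split_wss_word(0b00000000001000))
```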
525 line systems
525-line analog systems (like NTSC or PAL-M) made provision for the use of pulses to signal widescreen and other parameters, introduced with the development of Clear-Vision (EDTV-II), an NTSC-compatible Japanese system allowing widescreen broadcasts. On these systems the signals are present in lines 22 and 285, as 27 data bits, as defined by IEC 61880.
The following table shows the information present on the signal, based on Rec. ITU-R BT.1119-2 ("helper" signals are EDTV-II specific):
See also
PALplus
Clear-Vision
Active Format Description (AFD)
Teletext
References
External links
Renesas AN9716, Widescreen Signaling (WSS) covering 625 lines and 525 lines standard.
Television technology
de:Wide Screen Signalling | Widescreen signaling | Technology | 515 |
31,638,460 | https://en.wikipedia.org/wiki/KaPPA-View4 | KaPPA-View4 is a metabolic pathway database containing data about metabolic regulation from 'omics' data.
See also
Metabolic pathway
References
External links
kazusa
Biological databases
Gene expression
Metabolism | KaPPA-View4 | Chemistry,Biology | 38 |
37,796,507 | https://en.wikipedia.org/wiki/BI%20253 | BI 253 is an O2V star in the Large Magellanic Cloud and is a primary standard of the O2 type. It is one of the hottest main-sequence stars known and one of the most-massive and most-luminous stars known.
Discovery
BI 253 was first catalogued in 1975 as the 253rd of 272 likely O and early B stars in the Large Magellanic Cloud. In 1995, the spectral type was analysed to be O3 V, the earliest type defined at that time.
When the classification of the earliest type O stars was refined in 2002, the complete lack of neutral helium or doubly ionised nitrogen lines in the spectrum led to BI 253 being placed in a new O2 V class. It was given a ((f*)) qualifier because of the very weak emission lines of helium and nitrogen. The most recent published data gives a spectral type of O2V-III(n)((f*)), although it is unclear whether this is due to higher quality spectra or an actual change in the spectrum.
BI 253 has been identified as a runaway star because of its relatively isolated position outside the main star-forming areas of 30 Doradus, and because of its high space velocity. It was potentially ejected from the R136 cluster about a million years ago.
Properties
BI 253 is one of the hottest, most massive, and most luminous known main sequence stars. The temperature is around 54,000 K, the luminosity over , and the mass of nearly , although its radius is less than . The rotation rate of around is high, but this is common in the youngest and hottest stars, either due to spin-up during stellar formation or merger of a close binary system.
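As a rough cross-check of how such a temperature translates into luminosity, the Stefan–Boltzmann law L = 4πR²σT⁴ can be applied. The sketch below uses the article's ~54,000 K temperature together with a purely illustrative radius of 10 solar radii (the article's radius value is not reproduced here), so the output is an order-of-magnitude estimate only.

```python
# Rough Stefan-Boltzmann estimate for a very hot star. The 54,000 K figure is
# from the text above; the 10 R_sun radius is an assumed, illustrative value.
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8          # solar radius in metres
L_SUN = 3.828e26         # solar luminosity in watts

def luminosity_solar(radius_rsun: float, t_eff_kelvin: float) -> float:
    """Blackbody luminosity L = 4*pi*R^2*sigma*T^4, returned in solar units."""
    radius_m = radius_rsun * R_SUN
    lum_watts = 4.0 * math.pi * radius_m ** 2 * SIGMA * t_eff_kelvin ** 4
    return lum_watts / L_SUN

if __name__ == "__main__":
    print(f"~{luminosity_solar(10.0, 54_000.0):.1e} L_sun")  # of order 10^5-10^6
```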
Evolution
BI 253 is still burning hydrogen in its core, but shows enrichment of nitrogen and helium at the surface due to strong rotational and convectional mixing and because of its strong stellar wind. It is very close to the expected ZAMS position for an star. It is expected that stars more massive than BI 253 would show a giant or supergiant luminosity class even on the main sequence.
References
Stars in the Large Magellanic Cloud
Large Magellanic Cloud
Dorado
Extragalactic stars
O-type main-sequence stars
Runaway stars
J05373446-6901102
Emission-line stars
O-type giants | BI 253 | Astronomy | 481 |
62,643,643 | https://en.wikipedia.org/wiki/ToTok | ToTok was a messaging and Voice over IP app developed by G42, a UAE-based artificial intelligence company. It was the first Voice over IP app which the Emirati government allowed, and was introduced in 2019 in the Abu Dhabi Global Market economic free zone. According to The New York Times, it was also a mass surveillance tool of the United Arab Emirates intelligence services, used to gather private information on users' phones. The Times's reporting, denied by the app's developers, prompted Google and Apple to remove the app in December 2019; after briefly reappearing on Google Play, it was removed again on 15 February 2020.
Development and features
The app was developed by G42, a UAE-based artificial intelligence company whose CEO is the former CEO of Pegasus LLC, a one-time division of DarkMatter, an Emirati intelligence company under FBI investigation for cybercrimes. Breej Holding Ltd was merely a front company of DarkMatter. The app was purportedly developed by Giacomo Ziani. The adopted son of Sheikh Tahnoun bin Zayed Al Nahyan, the national security advisor, was the sole director of Breej Holding Ltd. ToTok was linked to Pax AI, an Emirati data mining firm tied to DarkMatter and located in the same building as the Emirates' Signals Intelligence Agency.
The ToTok app offered free messaging, voice calls and video calls. Conference calls involving up to 20 people could also be made. According to The New York Times, the app appeared to be a slightly-customized copy of YeeCall, a Chinese messaging app.
Popularity
Introduced in 2019, it soon found a wide user base in the Emirates. Its spread was aided by the fact that the Emirati government blocks certain functions of other messaging services such as Skype and WhatsApp, and it was apparently the first Voice over IP app to gain regulatory approval. The app was also promoted by state-linked Emirati publications and by the Chinese telecommunications company Huawei. The fact that ToTok appeared not to be affiliated with a powerful country may also have helped its popularity in the Middle East. In December 2019, BotIM, a subscription-based messaging app, sent its users a message recommending ToTok for free messaging and calls.
As of December 2019, ToTok was among the 50 most-used free apps in several countries including Saudi Arabia, the United Kingdom, India and Sweden. The app had 7.9 million downloads between the iOS App Store and Google Play, with nearly two million daily users.
Surveillance tool reports
On 22 December 2019, The New York Times reported that U.S. intelligence assessments and the paper's own investigations showed that ToTok was used by Emirati intelligence to gather all conversations, movements, relationships, appointments, sounds and images of the app's users. The app does not use exploits, backdoors or malware. Instead, it gives the government access to the information shared through the app, as well as to other information on the smartphone that the government can access through permissions granted by users in order to enable the app's features.
Breej Holding denied that its app was a spy tool, writing that its users "have the complete control over what data they want to share at their own discretion. The shameless fabrication by our distractors cannot be further from the truth." The Emirati telecommunications agency issued a statement that emphasized what it said were the country's strict privacy laws, but did not directly address the Times's reporting. The local Khaleej Times interviewed the "ToTok co-founder" Giacomo Ziani, who confirmed that he bought YeeCall's code, but also denied that his app was a government surveillance tool.
In response to the Times's inquiries, Google and Apple removed ToTok from their respective app stores on 19 and 20 December 2019. The app re-appeared on Google Play on 3 January 2020, and disappeared again on 15 February 2020.
See also
Cyber spying
References
Further reading
Bill Marczak A BREEJ TOO FAR: How Abu Dhabi’s Spy Sheikh hid his Chat App in Plain Sight Jan 2, 2020
2019 software
2019 controversies
Android (operating system) software
IOS software
Emirati inventions
Instant messaging clients
Surveillance scandals
Telecommunications in the United Arab Emirates
VoIP software | ToTok | Technology | 859 |
63,299,027 | https://en.wikipedia.org/wiki/Frank%20Calegari | Francesco Damien "Frank" Calegari is a professor of mathematics at the University of Chicago working in number theory and the Langlands program.
Early life and education
Frank Calegari was born on December 15, 1975. He has both Australian and American citizenship.
He won a bronze medal and a silver medal at the International Mathematical Olympiad while representing Australia in 1992 and 1993 respectively. Calegari received his PhD in mathematics from the University of California, Berkeley in 2002 under the supervision of Ken Ribet.
Career
Calegari was a Benjamin Peirce Assistant Professor at Harvard University from 2002 to 2006. He then moved to Northwestern University, where he was an assistant professor from 2006 to 2009, an associate professor from 2009 to 2012, and a professor from 2012 to 2015. He has been a professor of mathematics at the University of Chicago since 2015.
Calegari was a von Neumann Fellow of mathematics at the Institute for Advanced Study from 2010 to 2011.
Calegari was an editor at Mathematische Zeitschrift from 2013 to 2021. He has been an editor of Algebra & Number Theory and an associate editor of the Annals of Mathematics since 2019.
Research
Calegari works in algebraic number theory, including Langlands reciprocity and torsion classes in the cohomology of arithmetic groups.
In collaboration with Vesselin Dimitrov and Yunqing Tang, Calegari proved the unbounded denominators conjecture of A.O.L. Atkin and Swinnerton-Dyer: if a modular form f is not modular for some congruence subgroup of the modular group, then the Fourier coefficients of f have unbounded denominators. It has been known for decades that if f is modular for some congruence subgroup, then its coefficients have bounded denominators.
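A compact way to state the result is its contrapositive form: bounded denominators force congruence modularity. The LaTeX snippet below is a hedged paraphrase of that standard formulation, not the authors' exact wording, and the precise hypotheses (weight, holomorphy, the role of the constants M and N) are assumptions of this restatement.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm}
\newtheorem*{thm}{Unbounded denominators (paraphrase)}
\begin{document}
% Hedged paraphrase of the theorem discussed above; not a quotation.
\begin{thm}
Let $f = \sum_{n \ge 0} a_n q^{n/N}$ be a modular form for a finite-index
subgroup of $\mathrm{SL}_2(\mathbb{Z})$ whose coefficients satisfy
$a_n \in \mathbb{Z}\!\left[\tfrac{1}{M}\right]$ for some fixed integer $M$
(``bounded denominators''). Then $f$ is a modular form for a congruence
subgroup.
\end{thm}
\end{document}
```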
Also in collaboration with Dimitrov and Tang, he proved the linear independence of and
Awards
Calegari held a 5-year American Institute of Mathematics Fellowship from 2002 to 2006 and a Sloan Research Fellowship from 2009 to 2012. He was inducted as a Fellow of the American Mathematical Society in 2013.
Selected publications
Personal life
Mathematician Danny Calegari is Frank Calegari's brother.
References
External links
20th-century Australian mathematicians
21st-century American mathematicians
Number theorists
Living people
Place of birth missing (living people)
University of Chicago faculty
UC Berkeley College of Letters and Science alumni
Institute for Advanced Study visiting scholars
International Mathematical Olympiad participants
1975 births | Frank Calegari | Mathematics | 489 |
7,601,985 | https://en.wikipedia.org/wiki/Chem-E-Car | The Chem-E-Car Competition is an annual college competition for students majoring in Chemical Engineering.
According to the competition's official rules, students must design small-scale automobiles that operate by chemical means, along with a poster describing their research. During the competition, they must drive their car a fixed distance (judged on how close the car stops to the finish line) down a wedge-shaped course in order to demonstrate its capabilities. The exact distance (15-30 meters) and payload are revealed to the participants one hour before the competition. The size of designed cars cannot exceed certain specifications and cars must operate using "green" methods, which do not release any pollution or waste in the form of a visible liquid or gas, such as exhaust. The dimensions of the car are to be within 20x30x40 cm. This competition is hosted in the United States by the AIChE (American Institute of Chemical Engineers), and winners of the competitions receive various awards, depending on how they placed.
Awards
Regional Competition Awards (funded by AIChE)
Poster Competition
Ribbons for 1st, 2nd, and 3rd place
Ribbon for Most Creative Drive System
Ribbon for Most Creative Vehicle Design
Performance Competition
1st place: $200 and Ribbon
2nd place: $100 and Ribbon
3rd place: Honorable mention and Ribbon
Ribbons for 4th and 5th place finishers
Ribbon for Spirit of Competition
National Competition Awards (funded by Chevron)
1st place: $2,000 and a trophy.
2nd place: $1,000 and a trophy.
3rd place: $500 and a trophy.
Best Use of a Biological Reaction to Power a Car - $1,000 Prize: Sponsored by the Society for Biological Engineering
SAChE Safety Award for the best application of the principles of chemical process safety to the Chem-E-Car competition.
Most Consistent Performance - This award is based on the best average score for the two runs that the vehicle makes. It has been created to recognize the team that has designed and most understands the performance of the reaction that powers the vehicle. Award consists of a plaque.
Spirit of the Competition - This award is given to the team displaying the most team spirit as decided by a panel of judges. Award consists of a plaque.
Most Creative Drive System - Recognition is awarded to the team that has designed and installed the most creative propulsion system. The winner is decided by a panel of judges during the poster competition. Award consists of a plaque.
Golden Tire Award - In 2002, Northeastern University team members created this award to recognize the team with the most creative vehicle design. The national committee has adopted this as an annual award. The winning entry is decided by a ballot cast by each team entered in the competition. Award consists of a plaque.
Past National Performance Competition Winners
2024 - Cornell University
2023 - Auburn University
2022 - University of Toledo
2021 - University of Toledo
2020 - Virginia Tech
2019 - Virginia Tech
2018 - Georgia Institute of Technology
2017 - Institut Teknologi Sepuluh Nopember
2016 - Korea Advanced Institute of Science and Technology (KAIST)
2015 – Cornell University and McGill University (tie)
2014 - University of Utah
2013 - University of Tulsa
2012 – Cornell University
2011 – University of Puerto Rico at Mayagüez
2010 – Cornell University
2009 – Northeastern University
2008 – Cornell University
2007 – Cooper Union
2006 – University of Puerto Rico at Mayagüez
2005 – Tennessee Tech University
2004 – University of Tulsa
2003 – University of Dayton
2002 – University of Kentucky, Paducah
2001 – Colorado State University
2000 – University of Akron
1999 – University of Michigan
Rules
The competition has various rules:
The only energy source for the propulsion of the car is a chemical reaction. No liquid discharge is allowed. No obnoxious odor discharge is allowed.
No commercial batteries are allowed as the power source.
The stopping mechanism has to be controlled by a chemical reaction. No brakes, mechanical or electronic timing devices are allowed.
All components of the car must fit into a box of dimensions no larger than 40 cm x 30 cm x 20 cm (shoebox-sized). The car may be disassembled to meet this requirement.
The cost of the contents of the "shoe box" and the chemicals must not exceed $2,000. The vehicle cost includes the donated cost of any equipment. The time donated by university machine shops and other personnel will not be included in the total price of the car. It is expected that every university has equal access to these resources. The cost of pressure testing is also not included in the capital cost of the car.
Poster
Each car is required to have a poster board explaining how the car runs (power source), some of its specific features, and how it is environmentally friendly. Judges score these posters on four different things: the description of the chemical reaction and power source (20%), the creativity of the design and its unique features (20%), environment and safety features (40%), and the overall quality of the poster, along with the team's presentation (20%). Only posters judged with a score of 70% or above may move on to the performance competition.
Example reactions
Some approaches have used pressurized gas (creating oxygen through a chemical reaction and allowing it to build pressure) or electricity generated by dissolving metals in certain acids (a basic battery). One notable idea, used by Cooper Union, was to use a fuel cell (a cell that converts fuel to electricity via an electrochemical reaction) to power the car.
Winners in this competition are not determined by whether their car is faster or more powerful, but by how accurately the chemical reaction stops their vehicle. This is quite difficult, especially when the distance the car has to travel is unknown until the day of the competition. So teams must find a method that is flexible enough to fit a range of distances, and reliable enough that it does not fail with real-world variables (temperature, humidity, track roughness, changes in elevation, etc.). Winners in the past have had a variety of ways of dealing with this problem, such as an iodine clock reaction. This reaction works by using two clear solutions (with many variations) that change color after a time delay (the exact time can be found experimentally). When applied to the car, one team used a simple image sensor that could tell when the solutions changed color, at which point the car's power would be shut off by cutting the circuit. While the process itself is somewhat simple, accounting for unknown variables like the payload and distance is quite difficult.
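The sketch below illustrates, in very rough terms, how such a calibration might be set up: pick the run time needed for the announced distance, then back out the reagent concentration whose clock delay matches it. The linear speed model, the inverse dependence of the delay on the limiting-reagent concentration, and all numerical values are assumptions a team would have to replace with its own bench data; none of them come from the competition rules.

```python
# Illustrative calibration sketch (not from the article): match an assumed
# iodine-clock delay to the run time needed for the announced distance.

def required_run_time(distance_m: float, avg_speed_m_per_s: float) -> float:
    """Time the drive circuit must stay closed to cover the target distance."""
    return distance_m / avg_speed_m_per_s

def clock_delay(conc_mol_per_l: float, k_fit: float) -> float:
    """Assumed empirical fit: delay time ~ k_fit / [limiting reagent]."""
    return k_fit / conc_mol_per_l

def concentration_for_distance(distance_m, avg_speed, k_fit):
    """Concentration whose assumed clock delay equals the required run time."""
    return k_fit / required_run_time(distance_m, avg_speed)

if __name__ == "__main__":
    # Hypothetical numbers: 20 m course, 0.4 m/s car, k_fit from lab calibration.
    c = concentration_for_distance(20.0, 0.4, 2.5e-3)
    print(f"target concentration: {c:.2e} mol/L "
          f"(delay {clock_delay(c, 2.5e-3):.0f} s)")
```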
References
External links
https://www.aiche.org/topics/students/chem-e-car
Science competitions | Chem-E-Car | Technology | 1,343 |
9,678,636 | https://en.wikipedia.org/wiki/Oct-2 | Oct-2 (octamer-binding protein 2) also known as POU domain, class 2, transcription factor 2 is a protein that in humans is encoded by the POU2F2 gene.
Oct-2 is an octamer transcription factor which is a member of the POU family.
References
External links
POU-domain proteins | Oct-2 | Chemistry,Biology | 72 |
14,667,031 | https://en.wikipedia.org/wiki/HD%20141937 | HD 141937 is a star in the southern zodiac constellation of Libra, positioned a couple of degrees to the north of Lambda Librae. It is a yellow-hued star with an apparent visual magnitude of 7.25, which means it is too faint to be seen with the naked eye. This object is located at a distance of 108.9 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −2.2 km/s. It has an absolute magnitude of 4.71.
This is a G-type main-sequence star with a stellar classification of G1V. It is a solar-type star with slightly higher mass and radius compared to the Sun. The metallicity is higher than solar. It is an estimated 3.8 billion years old and is spinning with a projected rotational velocity of 6 km/s. The star is radiating 1.2 times the luminosity of the Sun from its photosphere at an effective temperature of 5,890 K.
The star has a substellar companion (HD 141937 b) announced in April 2001 by the European Southern Observatory. It has a minimum mass of 9.7 Jupiter masses. In 2020, the inclination of the orbit was measured, revealing its true mass to be 27.4 Jupiter masses, which makes it a brown dwarf. Its 653-day orbit places it about 1.5 times as far from the star as Earth is from the Sun, with a high eccentricity of 41%.
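The quoted ~1.5 au separation follows directly from Kepler's third law, a³ = M·P² (with a in au, P in years and M in solar masses). The short check below assumes a stellar mass of about 1.1 solar masses, which is consistent with the description of a star slightly more massive than the Sun but is not an exact figure from the article.

```python
# Kepler's third law check of the companion's orbital separation.
# The 1.1 M_sun stellar mass is an assumption, not a value from the article.

def semi_major_axis_au(period_days: float, stellar_mass_msun: float) -> float:
    """a^3 = M * P^2 with a in au, P in years, M in solar masses."""
    period_years = period_days / 365.25
    return (stellar_mass_msun * period_years ** 2) ** (1.0 / 3.0)

if __name__ == "__main__":
    print(f"{semi_major_axis_au(653.0, 1.1):.2f} au")  # about 1.5 au
```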
See also
HD 142022
HD 142415
List of extrasolar planets
References
G-type main-sequence stars
Brown dwarfs
Libra (constellation)
Durchmusterung objects
141937
077740 | HD 141937 | Astronomy | 353 |
52,208,686 | https://en.wikipedia.org/wiki/Asymptotic%20dimension | In metric geometry, asymptotic dimension of a metric space is a large-scale analog of Lebesgue covering dimension. The notion of asymptotic dimension was introduced by Mikhail Gromov in his 1993 monograph Asymptotic invariants of infinite groups in the context of geometric group theory, as a quasi-isometry invariant of finitely generated groups. As shown by Guoliang Yu, finitely generated groups of finite homotopy type with finite asymptotic dimension satisfy the Novikov conjecture. Asymptotic dimension has important applications in geometric analysis and index theory.
Formal definition
Let X be a metric space and n ≥ 0 be an integer. We say that asdim(X) ≤ n if for every R > 0 there exists a uniformly bounded cover 𝒰 of X such that every closed R-ball in X intersects at most n + 1 subsets from 𝒰. Here 'uniformly bounded' means that sup{diam(U) : U ∈ 𝒰} < ∞.
We then define the asymptotic dimension asdim(X) as the smallest integer n such that asdim(X) ≤ n, if at least one such n exists, and define asdim(X) := ∞ otherwise.
Also, one says that a family (X_i)_{i∈I} of metric spaces satisfies asdim(X_i) ≤ n uniformly if for every R > 0 and every i ∈ I there exists a cover 𝒰_i of X_i by sets of diameter at most D(R) (independent of i) such that every closed R-ball in X_i intersects at most n + 1 subsets from 𝒰_i.
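As a concrete illustration of how the definition is used, the following LaTeX snippet sketches the standard argument that the real line has asymptotic dimension at most 1. It is an illustration added here, not part of the original article.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Worked illustration of the definition: asdim(R) <= 1.
Fix $R > 0$ and cover $\mathbb{R}$ by the closed intervals
\[
  \mathcal{U} \;=\; \{\, I_k = [\,3Rk,\; 3R(k+1)\,] : k \in \mathbb{Z} \,\},
\]
which is uniformly bounded since $\operatorname{diam}(I_k) = 3R$ for all $k$.
A closed $R$-ball $[x-R,\,x+R]$ has length $2R < 3R$, so it cannot stretch
across a whole interval $I_k$ and therefore meets at most two consecutive
intervals. Thus every closed $R$-ball intersects at most $2 = n+1$ members of
$\mathcal{U}$ with $n = 1$, giving $\operatorname{asdim}(\mathbb{R}) \le 1$.
\end{document}
```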
Examples
If X is a metric space of bounded diameter then asdim(X) = 0.
.
.
.
Properties
If Y is a subspace of a metric space X, then asdim(Y) ≤ asdim(X).
For any metric spaces X and Y one has asdim(X × Y) ≤ asdim(X) + asdim(Y).
If X = A ∪ B then asdim(X) ≤ max{asdim(A), asdim(B)}.
If f : Y → X is a coarse embedding (e.g. a quasi-isometric embedding), then asdim(Y) ≤ asdim(X).
If X and Y are coarsely equivalent metric spaces (e.g. quasi-isometric metric spaces), then asdim(X) = asdim(Y).
If X is a real tree then asdim(X) ≤ 1.
Let f : X → Y be a Lipschitz map from a geodesic metric space X to a metric space Y. Suppose that for every R > 0 the set family {f⁻¹(B_R(y)) : y ∈ Y} satisfies the inequality asdim ≤ n uniformly. Then asdim(X) ≤ asdim(Y) + n.
If X is a metric space with asdim(X) < ∞ then X admits a coarse (uniform) embedding into a Hilbert space.
If X is a metric space of bounded geometry with asdim(X) ≤ n then X admits a coarse embedding into a product of n + 1 locally finite simplicial trees.
Asymptotic dimension in geometric group theory
Asymptotic dimension achieved particular prominence in geometric group theory after a 1998 paper of Guoliang Yu, which proved that if Γ is a finitely generated group of finite homotopy type (that is, with a classifying space of the homotopy type of a finite CW-complex) such that asdim(Γ) < ∞, then Γ satisfies the Novikov conjecture. As was subsequently shown, finitely generated groups with finite asymptotic dimension are topologically amenable, i.e. satisfy Guoliang Yu's Property A, which is equivalent to the exactness of the reduced C*-algebra of the group.
If G is a word-hyperbolic group then asdim(G) < ∞.
If G is relatively hyperbolic with respect to subgroups H_1, ..., H_k, each of which has finite asymptotic dimension, then asdim(G) < ∞.
.
If , where are finitely generated, then .
For Thompson's group F we have asdim(F) = ∞, since F contains subgroups isomorphic to Z^n for arbitrarily large n.
If G is the fundamental group of a finite graph of groups with finitely generated vertex groups, then asdim(G) ≤ 1 + max_v asdim(G_v), where the maximum is taken over the vertex groups G_v.
Mapping class groups of orientable finite type surfaces have finite asymptotic dimension.
Let G be a connected Lie group and let Γ ≤ G be a finitely generated discrete subgroup. Then asdim(Γ) < ∞.
It is not known if has finite asymptotic dimension for .
References
Further reading
Metric geometry
Geometric group theory | Asymptotic dimension | Physics | 700 |
73,276,775 | https://en.wikipedia.org/wiki/Traditional%20Phenological%20Knowledge | Traditional Phenological Knowledge can be seen as a "subset of Indigenous Knowledge". Traditional Phenological Knowledge (TPK) is knowledge based on traditional observations made by Indigenous Peoples that predict seasonal changes in nature and their immediate environment. This can be useful for the management of naturally occurring phenomena, as well as "adaptive management" such as fire management. TPK is not a novel practice and has been practiced for hundreds of years. TPK encompasses history, observations and Traditional Knowledge (TK), or Indigenous Knowledge (IK). Indigenous Knowledge is flexible and always evolves. It considers the past, present and future of environmental and biological generations.
TPK is integrative and interactive. It falls under the same teachings as Traditional Ecological Knowledge, also known as TEK. TPK and TEK share close definitions, for which IK can serve as an umbrella term. Traditional forms of knowledge are combined with sustainable interaction with the land. Indigenous Knowledge creates a relationship that is respectful of and symbiotic with the natural world and promotes the passing on of hands-on experiences to future generations.
Phenology in TPK can be qualitative and quantitative. Observations can be described and passed down through oral histories. TPK can reinforce what is measured and recorded scientifically, and can be a tool to help address climate change and biodiversity loss in today's climate crisis.
TPK can be "direct" or "indirect". Direct observations of phenology in TPK can refer to species signals and the timings of secondary species. Direct TPK is translated through the use of belief systems, spirituality, stories, myth and ceremonial events. Indirect TPK is passed on specifically through the use of language. The use of both direct and indirect forms embodies, reinforces and defines the values of TPK. The observation of nature's timings, along with stories and beliefs, passes down the knowledge from elders and family members that also contributes to the essence of TPK.
Phenology
Phenology observes the timing and seasonality of biological and weather events. Plant cycles, animal behaviour, weather patterns and climate cycle through the seasons, e.g. flowering. As Swartz defines it: "Phenology is the study of recurring plant, fungi and animal life cycle stages, especially as they relate to climate and weather".
Some of these observations can vary depending on location. For instance, observing temperature and photoperiod can be an indicator of seasonal change in parts of the Northern Hemisphere and the Southern Hemisphere.
Observing plant species is an example of TPK. In temperate locations, the change of increased temperatures will signal growth which, in turn will create an environmental response that indicates spring and/or summer. Consequently, plants will flower with enough "accumulated heat".
Phenological processes
Phenology is described as a process that revolves around the development of an organism (plant or animal) in relation to the change of the seasons. Moreover, temperature is a factor in these processes that creates changes in the cycles. For instance, vegetation or other organisms can respond to temperature increases or decreases that surpass a threshold, which creates changes in behaviour or in seasonality.
Traditional phenological knowledge and phenology
Human observations and knowledge throughout generations are tied to TPK. The human response to seasonal change can create a symbiotic relationship with the surrounding environment; hence Indigenous Peoples have practiced, and still practice, traditions that match the timing of seasonal change and seasonal indicators.
Synchronicity and natural timing
Time in most Indigenous communities is based on the pace of nature. Indigenous communities live synchronously with temporal phenological events as they present themselves. It is also the interconnectivity between the natural environment and traditional Indigenous practices. The epistemology may differ from group to group; however, many Indigenous groups share similarities regarding the innate knowledge of seasonal timings and landscape ecological practices.
The perception of time is unlike that of the Western world. For instance, the rainy season in some Indigenous communities may signal spring and/or fall and the growing season signals spring and summer. Seasonal timings can relate to traditional practices as well. Observations of fish behaviour and migration patterns can indicate time windows in a season in which one can fish. The spawning of salmon is also an indication of reproduction and the multiplying of the species. Timing is important for availability: fishing too early can affect spawning, which can result in a decrease in the number of fish. Fishing is also a cultural practice that many Indigenous communities still practise today, and with TPK these communities know there can be variance in timing and in the number of fish from one year to the next.
Indigenous and Western application of Traditional Phenological Knowledge
TPK can be used as a predictive and management tool in both Traditional Indigenous practices and Western practices. Embracing TK and continuous observations of the physical environment creates reliable information for future generations. It pertains to the interconnectivity of animal species, plant species and human behaviour.
Indigenous applications
Fire management can be timed with phenological events in North American Indigenous Nations. Burning shrubs over vast areas would help deer find food in the next season. Burning causes more water to be retained in the soil, which promotes seedling sprouting in the spring and summer. For Indigenous communities in California, there can be more grass growth, which is used for cultural "deer grass" weaving. Spring burning also promotes species diversity and the cultivation of different plants such as tobacco. Fire can kill fast-growing vegetation and pests, and help full-light vegetation such as oak and huckleberries to grow in these areas. Hence, Traditional Knowledge and TPK can help with food security and food for wildlife.
Western applications
TPK and TEK are seen as sustainable practices to help fight against climate change and are starting to be recognised as a tool to help mitigate food insecurity and issues regarding biodiversity loss. A term to describe the combination of Indigenous Knowledge and Western Knowledge is known as Two-eyed seeing. For instance, TPK is a tool for fire management that Western communities have adopted to decrease the severity of fires.
TPK and language
The transmission of TPK is passed down through stories which can be in the form of indirect TPK. It is not actually observed by the eye of the learner but rather transmitted through language by family members and community members.
Sustainability and biodiversity
Conservation of the land is ingrained in Indigenous Knowledge. Practices of Indigenous Knowledge can be useful for sustainability and for solutions to modern-day environmental issues regarding climate change and biodiversity loss. TPK, TEK, TK and IK are ways of looking at landscape ecology that scientists and the general public can also learn from. Many such practices can aid sustainability and the fight against climate change.
Climate change
TPK can be a tool for understanding climate change. TPK is based on historical observations that can help climate scientists because of records of past and current changes in environments around the world. TPK can provide knowledge and information regarding climate change that are not easily accessible to the Western sciences. It can be a tool for decision making involving ecology and conservation where information and data are lacking.
Climate change affects First Nations and Indigenous communities differently from Western-based communities. The Western world is impacted mostly economically, financially and ecologically, whereas Indigenous communities have certain practices and traditions that are directly tied to the land. The change in climate might affect and threaten their livelihood and their relationship to the land. In other words, these communities might adapt their practices in new ways to fight against climate change. In recent years, communities have noticed changes in rainy periods and dry periods, which can change the predictability of the timings of traditional practices. Moreover, TPK can change and adapt due to climate change.
The dynamics of climate change in the Western world are linked to the growth of capital. This tends to lead to exploitation of natural resources, therefore leading to increased greenhouse gases in the atmosphere, degradation of the environment, affecting fresh water systems and soil health, etc.
Changes in climate also change indicators of seasonality. TPK can play a role in the study of climate change and sustainability.
Climate change impacts on Alaskan TPK
In most parts of the world, especially in higher elevations and northern latitudes such as Alaska, Alaskan communities have observed changes in phenological cycles. Locations such as Alaska are severely impacted due to their northern locations and closeness to shore which intensifies the changes in climate.
The Yukon-Kuskokwim Delta Indigenous communities have observed changes in berry resources. These communities have noticed a decrease in snowpack in recent winters. Hotter summers and thawing of permafrost also create an unsteady landscape which affects negatively the vegetation in this region, for instance, wild berries. Berries are essential for human consumption and food for wildlife. For Alaskan communities, berry picking provides nutrition, but also indicators of seasonality change. These communities have seen changes in the last ten years; variability in berry abundance from year to year and earlier ripening. This is seen in cloud berries, blueberries and crowberries.
Barriers and challenges
One barrier is that some institutions do not recognise TPK as a scientific practice, owing to Western ways of teaching. This can be due to the priorities of the institutions and education systems already in place.
TPK around the world
India
Tripura
Indigenous communities in the North-East of India such as Tripura can use TPK to predict weather patterns, which aids activities such as agroforestry, farming and agriculture. Additionally, it is used for the prevention of natural disasters. Traditional Knowledge is shared through folklore and myths. These lores and myths about weather can be found in ancient scriptures such as the Vedang Jyotish of Maharshi Laugakshi, the Seval Samhita and the Gura Samhita, among others.
TPK can be found under two categories: theories of prediction and observations. The use of astronomy and the observation of planetary positions such as conjunctions are important for Tripura Indigenous communities. Atmospheric observations, such as looking at clouds, play a big part in predicting weather conditions. Behavioural patterns of plants and animals are also indicators used in predicting weather.
The night-flowering jasmine (Nyctanthes arbor-tristis L.) helps predict abundant rainfall. It flowers year round; therefore, depending on the time of year it flowers, a different amount of rain is predicted. June and July are the months with the highest rainfall, accurately predicted by traditional farmers in the region and confirmed by the Meteorological Department of Narsinghgarh Bimangarh, India. The Indian Laburnum (Cassia fistula L.), also known as the Golden Shower, predicts rain: when it flowers, it signals the beginning of the monsoon.
Uganda
In the Teso Sub-region in Eastern Uganda, the Iteso people of Teso practice TPK in terms of agriculture and pasture. This region has a hot and humid climate, and there is a strong agricultural practice in the region of Teso. The main crops cultivated are cereals such as millet and corn, as well as cassava and cotton. The main livestock animals are pigs, cattle and chickens, amongst other animals. Due to the region's geographical location, there are several lakes which permit the community to practise fishing. TPK helps with drought and flood prediction, infestations, water conservation and the timing of fishing, and plays a role in the prediction of rainfall.
The use of TPK has changed in recent decades because of climate unpredictability; however, the Iteso people have adapted TPK to recent climatic events. Previously, traditional communities of this region used to sow millet seeds according to leaf fall and growth, because this would predict the time frame for seeding grain in time for future rainfall. Now, however, this predictability varies. Astronomical observations such as the location of the moon and its colour are used to determine the next onset of rainfall and its intensity. TPK is in the hands of elders whose phenological observations are monitored, but there is concern that younger generations will lose Traditional Knowledge of phenological events. Elders notice longer droughts, increased winds, species disappearance, and a halt of fish migrations to previously plentiful rivers due to climate change, which is affecting phenological cycles at a quicker and greater rate.
Mongolia
Traditional herders of the land-locked country of Mongolia use TPK and TEK for herding animals, determining the height of grasses in coming seasons, and their understanding of the ever-changing land. Observations of the seasons are present in storytelling and in observations made through their nomadic way of life.
Herders have a distinct way of understanding the plants of the landscape. Mongolia has a mix of extreme climates, in which temperatures can reach 40 degrees Celsius in warmer months and fall below −30 degrees in colder months. Mongolia is also desertic, mountainous and contains grasslands, and its climate is considerably dry. Traditional herders use TPK to determine rainfall, the time of movement, the timing of vegetation growth, and the blooms of medicinal plants. Herders' traditional movement is in accordance with seasonality, with observations made on the relationship and behaviour of vegetation and animals. These observations and this knowledge can vary from herder to herder. For instance, in more desertic parts of Mongolia, grass is predicted to grow to the same height as the snow in the winter, and to the extent of the amount of rain the area receives. Additionally, numbers are associated with certain areas based on characteristics of viability, ecological zones, soil health and topology. Furthermore, knowing when a plant such as bagluur (Anabasis brevifolia) will bloom comes from repeated observations, counting the joints of the vegetation, and the relative humidity in the atmosphere.
Some herders intertwine human existence with phenology. The seasons change and also humans change regarding the seasons. Humans are also an important part in phenology. In the light of climate change, the earth gets older and changes, so do humans.
Protecting Intellectual Rights
With the information provided by Indigenous Peoples, TPK is based on knowledge and intellectual property. Intellectual property ought to be respected, acknowledged, protected and accredited.
References
Traditional knowledge
Ecology | Traditional Phenological Knowledge | Biology | 2,945 |
670,316 | https://en.wikipedia.org/wiki/Exoatmospheric%20Kill%20Vehicle | The Exoatmospheric Kill Vehicle (EKV) is the interceptor component of the U.S. Ground-Based Midcourse Defense (GMD), manufactured by Raytheon with subcontractor Aerojet, and part of the larger National Missile Defense system.
The EKV is boosted to an intercept trajectory by a boost vehicle (missile), where it separates from the boost vehicle and autonomously collides with an incoming warhead.
The EKV is launched by the Ground-Based Interceptor (GBI) missile, the launch vehicle of the GMD system. The EKV's own rockets and fuel are for corrections in the trajectory, not for further acceleration.
The successor to the EKV, known as the Redesigned Kill Vehicle (RKV), was scheduled to debut in 2025. The RKV program, headed by Boeing and lead subcontractor Raytheon, was canceled by the Department of Defense on August 21, 2019. Earlier in the year, the Pentagon had issued a stop work order on the project following a design review deferment in December 2018 due to the failure of critical components meeting technical specification.
Raytheon is contracted to sustain, upgrade, and repair the EKV through 2034 until after the deployment of the Next Generation Interceptor (NGI), which will start to replace the EKV in 2030.
Characteristics
Weight: approx. 140 lb (64 kg)
Length: 55 in (4 ft. 7 in.) (1.4 m)
Diameter: 24 in (2 ft.) (0.6 m)
Speed of projectile: roughly 10 km/s (22,000 mph)
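For a sense of scale, the listed mass and speed imply a kinetic energy of roughly ½mv², on the order of 3 GJ. The short calculation below is an illustrative back-of-the-envelope figure derived from the numbers above, not a value given in the article, and the actual closing speed against an incoming warhead would differ.

```python
# Back-of-the-envelope kinetic energy from the listed figures (~64 kg, ~10 km/s).
# Illustrative only; not a figure from the article.

def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    return 0.5 * mass_kg * speed_m_s ** 2

if __name__ == "__main__":
    e = kinetic_energy_joules(64.0, 10_000.0)
    print(f"{e:.2e} J  (~{e / 4.184e6:.0f} kg TNT equivalent)")
```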
See also
Anti-ballistic missile
Lightweight Exo-Atmospheric Projectile
References
External links
https://web.archive.org/web/20080315061300/http://www.raytheon.com/products/ekv/
https://archive.today/20060314045159/http://www.oss.goodrich.com/ExoatmosphericKillVehicle.shtml
Space weapons
Raytheon Company products | Exoatmospheric Kill Vehicle | Astronomy | 441 |
26,612,412 | https://en.wikipedia.org/wiki/List%20of%20style%20guides | A style guide, or style manual, is a set of standards for the writing and design of documents, either for general use or for a specific publication, organization or field. The implementation of a style guide provides uniformity in style and formatting within a document and across multiple documents. A set of standards for a specific organization is often known as "house style". Style guides are common for general and specialized use, for the general reading and writing audience, and for students and scholars of various academic disciplines, medicine, journalism, the law, government, business, and industry.
International
Several basic style guides for technical and scientific communication have been defined by international standards organizations. These are often used as elements of and refined in more specialized style guides that are specific to a subject, region, or organization. Some examples are:
EN 15038, Annex D – European Standard for Translation Services (withdrawn)
ISO 8 – Presentation of periodicals
ISO 18 – Contents lists of periodicals
ISO 31 – Quantities & units
ISO 214 – Abstracts for publication & documentation
ISO 215 – Presentation of contributions to periodicals and other serials
ISO 690 – Bibliographic references – Content, form & structure
ISO 832 – Bibliographic references – Abbreviations of typical words
ISO 999 – Index of a publication
ISO 1086 – Title leaves of a book
ISO 2145 – Numbering of divisions & subdivisions in written documents
ISO 5966 – Presentation of scientific & technical reports (withdrawn)
ISO 6357 – Spine titles on books & other publications
ISO 7144 – Presentation of theses & similar documents
ISO 9241 – Ergonomics of Human System Interaction
ISO 17100 – Translation Services – Requirements for Translation Services
Other style guides that cover international usage:
The Cambridge Guide to English Usage, by Cambridge University Press
The Global English Style Guide, by SAS Institute
Australia
General
Australian Government Style Manual by Digital Transformation Agency. 7th ed.
Style Manual: For Authors, Editors and Printers by Snooks & Co for the Department of Finance and Administration. 6th ed.
The Australian Handbook for Writers and Editors by Margaret McKenzie. 4th ed.
The Cambridge Guide to Australian English Usage by Pam Peters of Macquarie University. 2nd ed.
The Complete Guide to English Usage for Australian Students by Margaret Ramsay. 6th ed.
Law
Australian Guide to Legal Citation published by University of Melbourne Law School. 4th ed.
Science
Australian manual of scientific style (AMOSS) by Biotext; illustrated by Biotext. 1st ed.
Canada
European Union
Council of Europe - English Style Guide, by the Council of Europe
English Style Guide ("A handbook for authors and translators in the European Commission" – executive branch of the European Union.)
Interinstitutional Style Guide.
United Kingdom
In the United Kingdom, major publications, academic institutions and companies have their own style guides, otherwise they would normally rely on New Hart's Rules available in the New Oxford Style Manual.
For general writing
The Complete Plain Words, by Sir Ernest Gowers.
Copy-editing: The Cambridge Handbook for Editors, Authors and Publishers Judith Butcher. (2006 ed.) Cambridge: Cambridge University Press
Fowler's Dictionary of Modern English Usage (2015 ed.) Oxford: Oxford University Press, (hardcover). Based on Modern English Usage, by Henry Watson Fowler.
The King's English, by Henry Watson Fowler and Francis George Fowler.
New Oxford Style Manual (2016 ed.) Oxford: Oxford University Press. It combines New Hart's Rules and The Oxford Dictionary for Writers and Editors; it is an authoritative handbook on how to prepare copy.
Usage and Abusage, by Eric Partridge.
For legal documents
Oxford Standard for Citation of Legal Authorities (OSCOLA), by the University of Oxford Faculty of Law
The Lawyer's Style Guide (2021 ed.) by Bloomsbury Publishing
For academic papers
MHRA Style Guide for the arts and humanities; published by the Modern Humanities Research Association
The British Psychological Society Style Guide, published by The British Psychological Society
For journalism
The BBC News Style Guide: by the British Broadcasting Corporation.
The Daily Telegraph Style Guide, by The Daily Telegraph
The Economist Style Guide: by The Economist.
The Financial Times Style Guide, by The Financial Times
The Guardian Style Guide: by The Guardian
The Times Style and Usage Guide, by The Times.
For electronic publishing
GOV.UK Style Guide, by UK Government
University of Cambridge Style Guide, by University of Cambridge
For the computer industry (software and hardware)
Acorn Technical Publications Style Guide, by Acorn Computers. Provides editorial guidelines for text in RISC OS instructional publications, technical documentation, and reference information.
RISC OS Style Guide by RISC OS Open Limited. Provides design guidelines, help and dialogue box phrasing examples for the software user interface.
United States
In the United States, most journalistic forms of mass communication rely on styles provided in the Associated Press Stylebook (AP). Corporate publications typically follow either the AP style guide or the equally respected Chicago Manual of Style, often with entries that are additions or exceptions to the chosen style guide.
A classic grammar style guide is The Elements of Style. Together, these two books are referenced more than any other general style book for US third-person writing used across most professions.
For general writing
Bryson's Dictionary of Troublesome Words: A Writer's Guide to Getting It Right, by Bill Bryson.
The Careful Writer, by Theodore Bernstein.
Garner's Modern American Usage by Bryan A. Garner.
The Elements of Style. By William Strunk, Jr. and E. B. White. (Often referred to as "Strunk and White".)
For legal documents
ALWD Guide to Legal Citation, formerly ALWD Citation Manual, by the Association of Legal Writing Directors
The Bluebook: A Uniform System of Citation. Jointly, by the Harvard Law Review, Yale Law Journal, Columbia Law Review, and Penn Law Review.
The Indigo Book: An Open and Compatible Implementation of A Uniform System of Citation. Collaboratively by Professor Christopher Jon Sprigman and NYU law students, and published by Public.Resource.Org.
New York Style Manual: The Tanbook, by the New York State Reporter
For academic papers
The Chicago Manual of Style, Chicago: University of Chicago Press.
A Manual for Writers of Research Papers, Theses, and Dissertations, Chicago Style for Students and Researchers, by Kate L. Turabian. Often referred to as "Turabian."
MLA Handbook for Writers of Research Papers, by Joseph Gibaldi. Often referred to as "MLA".
Publication Manual of the American Psychological Association, by the American Psychological Association (APA).
For journalism
The Associated Press Stylebook Basic Books .
The BuzzFeed Style Guide: by Emmy Favilla and Megan Paolone.
The New York Times Manual of Style and Usage. By Allan M. Siegal and William G. Connolly.
The Wall Street Journal Guide to Business Style and Usage, by Ronald J. Alsop and the Staff of the Wall Street Journal.
For electronic publishing
The Columbia Guide to Online Style, by Janice Walker and Todd Taylor.
Web Style Guide: Basic Design Principles for Creating Web Sites, by Patrick J. Lynch and Sarah Horton.
The Yahoo! Style Guide, 2010.
For business
The Business Style Handbook, An A-to-Z Guide for Effective Writing on the Job, by Helen Cunningham and Brenda Greene.
The Gregg Reference Manual, by William A. Sabin.
For the computer industry
Apple Style Guide, published online by Apple Inc. Provides editorial guidelines for text in Apple instructional publications, technical documentation, reference information, training programs, and the software user interface. An earlier version was the Apple Publications Style Guide.
DigitalOcean documentation style guide, published online by DigitalOcean.
GNOME documentation style guide, published online by GNOME.
Google Developer Documentation Style Guide, published online by Google. Provides a set of editorial guidelines for anyone writing developer documentation for Google-related projects.
The IBM Style Guide: Conventions for Writers and Editors, 2011, and Developing Quality Technical Information: A Handbook for Writers and Editors, 2014, from IBM Press.
Mailchimp content style guide, published online by Mailchimp.
Microsoft Writing Style Guide, published online by Microsoft Corporation. Provides a style standard for technical documentation including use of terminology, conventions, procedure, design treatments, and punctuation and grammar usage. Before 2018, Microsoft published a book, the Microsoft Manual of Style for Technical Publications.
MongoDB documentation style guide, published by MongoDB.
Mozilla Writing Style Guide, published online by Mozilla.
Rackspace style guide for technical content, published online by Rackspace.
Read Me First! A Style Guide for the Computer Industry, by Sun Technical Publications, 3rd ed., 2010.
Red Hat style guide for technical documentation, published online by Red Hat.
Salesforce style guide for documentation and user interface text, published online by Salesforce.
The Splunk Style Guide, published online by Splunk. Provides a writing style reference for anyone writing or editing technical documentation.
SUSE documentation style guide, published online by SUSE.
Wired Style: Principles of English Usage in the Digital Age, 1996 by Constance Hale and Jessie Scanlon for Wired
Editorial style guides on preparing a manuscript for publication
The Chicago Manual of Style, by University of Chicago Press staff.
Words into Type, by Marjorie E. Skillin, Roberta
Academic
ACS Style Guide—for scientific papers published in journals of the American Chemical Society.
American Medical Association Manual of Style—for medical papers published in journals of the American Medical Association.
American Psychological Association Style Guide—for the behavioral and social sciences; published by the American Psychological Association.
American Sociological Association Style Guide—for the social sciences; published by the American Sociological Association.
The Chicago Manual of Style. The standard of the academic publishing industry including many journal publications.
Geoscience Reporting Guidelines—for geoscience reports in industry, academia and other disciplines.
Handbook of Technical Writing, by Gerald J. Alred, Charles T. Brusaw, and Walter E. Oliu.—for general technical writing.
IEEE style—used in many technical research papers, especially those relating to computer science.
The Little Style Guide by Leonard G. Goss and Carolyn Stanford Goss—provides a distinctively religious examination of style and language for writers and editors in religion, philosophy of religion, and theology—.
A Manual for Writers of Term Papers, Theses, and Dissertations (frequently called "Turabian style")—Published by Kate L. Turabian, the graduate school dissertation secretary at the University of Chicago from 1930 to 1958. The school required her approval for every master's thesis and doctoral dissertation. Her stylistic rules closely follow The Chicago Manual of Style, although there are some differences.
MHRA Style Guide—for the arts and humanities; published by the Modern Humanities Research Association. Available as a free download (see article).
MLA Style Manual, and the MLA Handbook for Writers of Research Papers—for subjects in the arts and the humanities; published by the Modern Language Association of America (MLA).
Scientific Style and Format: The CSE Manual for Authors, Editors, and Publishers—for scientific papers published by the Council of Science Editors (CSE), a group formerly known as the Council of Biology Editors (CBE).
SBL Handbook of Style—Society of Biblical Literature style manual specifically for the field of ancient Near Eastern, biblical, and early Christian studies. The SBL Handbook of Style includes a recommended standard format for abbreviation of Primary Sources in Ancient Near Eastern, biblical, and early Christian Studies.
The Style Manual for Political Science—used by many American political science journals; published by the American Political Science Association.
Communities
Conscious Style Guide -- A website "devoted to conscious language. My mission is to help writers and editors think critically about using language—including words, portrayals, framing, and representation—to empower instead of limit." Created by author and Robinson Prize winner Karen Yin.
GLAAD Media Reference Guide, 8th ed., GLAAD College Media Reference Guide, 1st ed., GLAAD Chinese Media Reference Guide, 1st ed. - published by GLAAD to encourage media outlets to use language and practices inclusive of LGBT people. Available as a free download.
Art
Association of Art Editors Style Guide
See also
Citation
Diction
Documentation
Disputed usage
English writing style
Grammar
Prescription and description
Punctuation
Sentence spacing in language and style guides
Spelling
Style guide
Stylistics
References
External links
General use of style guides
American English
Style Manuals & Guides listed by the University of Memphis Libraries (updated page Style Manuals).
Bartleby Searchable Usage Guides.
U.S. government publications
U.S. Government Printing Office Style Manual.
British English
BBC News Style Guide.
Economist.com Style Guide.
The Guardian Stylebook.
Canadian English
York University Style Guide – Adapts CP Stylebook for university student use.
Australian English
Style Manual: For Authors, Editors and Printers - online version of the Australian Government manual
The ABC Style Guide - the style guide of the Australian Broadcasting Corporation
International organizations
WHO English Style Guide
EU Interinstitutional Style Guide.
English Style Guide ("A handbook for authors and translators in the European Commission" – executive branch of the European Union.)
Academia
Citation Management Online research tutorial to documentation style guides from Cornell University Libraries.
"Style Manuals & Writing Guides" from the California State University, Los Angeles Library.
Medical journals
ICMJE Uniform Requirements: Sample References.
International Committee of Medical Journal Editors (ICMJE) Uniform Requirements for Manuscripts Submitted to Biomedical Journals (Updated February 2006).
Scientific journals
Advances in Physics - Style Guide for Physics journal published by Taylor & Francis Group (Taylor & Francis journals).
Writing for a Nature journal for Nature.
The Lancet: Formatting Guidelines for Authors: Formatting Guidelines for Electronic Submission of Revised Manuscripts.
WWW
OSNews Style Guide: Rules and Guidelines for Publishing and Participating on OSNews, by T. Holwerda. OSNews, 2007.
Web Style Guide, 2nd ed., by Patrick Lynch and Sarah Horton.
List
Communication design
Design
Technical communication | List of style guides | Engineering | 2,849 |
74,147,991 | https://en.wikipedia.org/wiki/Einsteinium%20trifluoride | Einsteinium trifluoride (einsteinium(III) fluoride) is a binary inorganic chemical compound of einsteinium and fluorine with the chemical formula EsF3.
Synthesis
Einsteinium fluoride can be precipitated from einsteinium(III) chloride solutions upon reaction with fluoride ions. An alternative preparation procedure is to expose einsteinium(III) oxide to chlorine trifluoride (ClF3) or F2 gas at a pressure of 1–2 atmospheres and a temperature between 300 and 400 °C. The EsF3 crystal structure is hexagonal, as in californium(III) fluoride (CfF3) where the Es3+ ions are 8-fold coordinated by fluorine ions in a bicapped trigonal prism arrangement.
Physical properties
The compound forms crystals and is insoluble in water.
Chemical properties
The compound is reduced by metallic lithium:
EsF3 + 3 Li → Es + 3 LiF
References
Einsteinium compounds
Fluorides
Actinide halides | Einsteinium trifluoride | Chemistry | 192 |
19,647,464 | https://en.wikipedia.org/wiki/Nysted%20reagent | The Nysted reagent is a reagent used in organic synthesis for the methylenation of a carbonyl group. It was discovered in 1975 by Leonard N. Nysted in Chicago, Illinois. It was originally prepared by reacting dibromomethane and activated zinc in THF. A mechanism for the methylenation reaction has been proposed.
A similar reagent is Tebbe's reagent. In the Nysted olefination, the Nysted reagent reacts with TiCl4 to methylenate a carbonyl group. The biggest problem with these reagents is that their reactivity has not been well documented. It is believed that the TiCl4 acts as a mediator in the reaction. The Nysted reagent can methylenate different carbonyl groups in the presence of different mediators. For example, in the presence of BF3•OEt2, the reagent will methylenate aldehydes. On the other hand, in the presence of TiCl4, TiCl3 or TiCl2 and BF3•OEt2, the reagent can methylenate ketones. Most commonly, it is used to methylenate ketones, which are generally difficult to methylenate because of crowding around the carbonyl group. The Nysted reagent is able to overcome the additional steric hindrance found in ketones and more easily methylenate the carbonyl group. In contrast to the Wittig reaction, the neutral reaction conditions of the Nysted reagent make it a useful alternative for the methylenation of easily enolizable ketones.
There is little research on the Nysted reagent because of its hazards, its high reactivity, and the difficulty of keeping the reagent stable while in use. More specifically, it can form explosive peroxides when exposed to air and is extremely flammable. It also reacts violently with water. These properties make the reagent very dangerous to work with.
See also
Petasis reagent
Titanium–zinc methylenation
Wittig reaction
References
Organozinc compounds
Reagents for organic chemistry
Zinc complexes | Nysted reagent | Chemistry | 449 |
39,529,359 | https://en.wikipedia.org/wiki/Centered%20octahedral%20number | In mathematics, a centered octahedral number or Haüy octahedral number is a figurate number that counts the points of a three-dimensional integer lattice that lie inside an octahedron centered at the origin. The same numbers are special cases of the Delannoy numbers, which count certain two-dimensional lattice paths. The Haüy octahedral numbers are named after René Just Haüy.
History
The name "Haüy octahedral number" comes from the work of René Just Haüy, a French mineralogist active in the late 18th and early 19th centuries. His "Haüy construction" approximates an octahedron as a polycube, formed by accreting concentric layers of cubes onto a central cube. The centered octahedral numbers count the cubes used by this construction. Haüy proposed this construction, and several related constructions of other polyhedra, as a model for the structure of crystalline minerals.
Formula
The number of three-dimensional lattice points within n steps of the origin is given by the formula
(2n + 1)(2n² + 2n + 3)/3.
The first few of these numbers (for n = 0, 1, 2, ...) are
1, 7, 25, 63, 129, 231, 377, 575, 833, 1159, ...
The generating function of the centered octahedral numbers is
(1 + x)³/(1 - x)⁴.
The centered octahedral numbers obey the recurrence relation
a(n) = a(n - 1) + 4n² + 2, with a(0) = 1.
They may also be computed as the sums of pairs of consecutive octahedral numbers.
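As a cross-check of the expressions above, here is a minimal Python sketch (an illustration added for this edition, not drawn from the cited sources) that computes the first few centered octahedral numbers from the closed form, from the recurrence, and as sums of consecutive octahedral numbers, and confirms that all three agree with the values listed above.

# Illustrative sketch: three equivalent ways of computing the
# centered octahedral numbers 1, 7, 25, 63, 129, ...

def centered_octahedral(n):
    """Closed-form expression (2n + 1)(2n^2 + 2n + 3) / 3."""
    return (2 * n + 1) * (2 * n * n + 2 * n + 3) // 3

def by_recurrence(limit):
    """Recurrence a(n) = a(n - 1) + 4n^2 + 2 with a(0) = 1."""
    values = [1]
    for n in range(1, limit):
        values.append(values[-1] + 4 * n * n + 2)
    return values

def octahedral(n):
    """Ordinary octahedral number n(2n^2 + 1) / 3."""
    return n * (2 * n * n + 1) // 3

first_ten = [centered_octahedral(n) for n in range(10)]
assert first_ten == by_recurrence(10)
assert first_ten == [octahedral(n) + octahedral(n + 1) for n in range(10)]
print(first_ten)  # [1, 7, 25, 63, 129, 231, 377, 575, 833, 1159]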
Alternative interpretations
The octahedron in the three-dimensional integer lattice, whose number of lattice points is counted by the centered octahedral number, is a metric ball for three-dimensional taxicab geometry, a geometry in which distance is measured by the sum of the coordinatewise distances rather than by Euclidean distance. For this reason, the centered octahedral numbers have also been called "the volume of the crystal ball".
The same numbers can be viewed as figurate numbers in a different way, as the centered figurate numbers generated by a pentagonal pyramid. That is, if one forms a sequence of concentric shells in three dimensions, where the first shell consists of a single point, the second shell consists of the six vertices of a pentagonal pyramid, and each successive shell forms a larger pentagonal pyramid with a triangular number of points on each triangular face and a pentagonal number of points on the pentagonal face, then the total number of points in this configuration is a centered octahedral number.
The centered octahedral numbers are also the Delannoy numbers of the form D(3,n). As for Delannoy numbers more generally, these numbers count the paths from the southwest corner of a 3 × n grid to the northeast corner, using steps that go one unit east, north, or northeast.
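The path-counting description above can be turned directly into a short program. The following Python sketch (again an illustration, not taken from the sources) evaluates the standard Delannoy recurrence D(m, n) = D(m - 1, n) + D(m, n - 1) + D(m - 1, n - 1) with D(m, 0) = D(0, n) = 1, and confirms that D(3, n) reproduces the centered octahedral numbers.

# Illustrative sketch: the Delannoy number D(m, n) counts lattice paths from
# (0, 0) to (m, n) using east, north, and northeast steps.

from functools import lru_cache

@lru_cache(maxsize=None)
def delannoy(m, n):
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

print([delannoy(3, n) for n in range(10)])
# [1, 7, 25, 63, 129, 231, 377, 575, 833, 1159]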
References
Figurate numbers | Centered octahedral number | Mathematics | 590 |
47,134,010 | https://en.wikipedia.org/wiki/Penicillium%20piceum | Penicillium piceum is an anamorph species of fungus in the genus Penicillium which in rare cases can cause invasive infection in patients with chronic granulomatous disease. This species has been isolated from human blood cultures and from pig lung tissue. Penicillium piceum produces β-glucosidase.
Further reading
References
piceum
Fungi described in 1948
Fungus species | Penicillium piceum | Biology | 77 |
1,225,002 | https://en.wikipedia.org/wiki/Polyelectrolyte | Polyelectrolytes are polymers whose repeating units bear an electrolyte group. Polycations and polyanions are polyelectrolytes. These groups dissociate in aqueous solutions (water), making the polymers charged. Polyelectrolyte properties are thus similar to both electrolytes (salts) and polymers (high molecular weight compounds) and are sometimes called polysalts. Like salts, their solutions are electrically conductive. Like polymers, their solutions are often viscous. Charged molecular chains, commonly present in soft matter systems, play a fundamental role in determining structure, stability and the interactions of various molecular assemblies. Theoretical approaches to describe their statistical properties differ profoundly from those of their electrically neutral counterparts, while technological and industrial fields exploit their unique properties. Many biological molecules are polyelectrolytes. For instance, polypeptides, glycosaminoglycans, and DNA are polyelectrolytes. Both natural and synthetic polyelectrolytes are used in a variety of industries.
Charge
Acids are classified as either weak or strong (and bases similarly may be either weak or strong). Similarly, polyelectrolytes can be divided into "weak" and "strong" types. A "strong" polyelectrolyte dissociates completely in solution for most reasonable pH values. A "weak" polyelectrolyte, by contrast, has a dissociation constant (pKa or pKb) in the range of ~2 to ~10, meaning that it will be partially dissociated at intermediate pH. Thus, weak polyelectrolytes are not fully charged in the solution, and moreover, their fractional charge can be modified by changing the solution pH, counter-ion concentration, or ionic strength.
The physical properties of polyelectrolyte solutions are usually strongly affected by this degree of ionization. Since the polyelectrolyte dissociation releases counter-ions, this necessarily affects the solution's ionic strength, and therefore the Debye length. This, in turn, affects other properties, such as electrical conductivity.
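To make the ionic-strength dependence concrete, the following Python sketch estimates the Debye screening length of a simple 1:1 electrolyte in water at room temperature. It is an illustration only; the relative permittivity, temperature, and example concentrations are assumed values chosen for the example, not data taken from this article.

# Illustrative sketch: Debye screening length of a 1:1 electrolyte in water.
# Relative permittivity (78.5) and temperature (298 K) are assumed example values.

from math import sqrt

e    = 1.602176634e-19     # elementary charge, C
k_B  = 1.380649e-23        # Boltzmann constant, J/K
N_A  = 6.02214076e23       # Avogadro constant, 1/mol
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

def debye_length(ionic_strength_molar, rel_permittivity=78.5, temperature=298.0):
    """Return the Debye length in metres for a given ionic strength in mol/L."""
    ionic_strength = ionic_strength_molar * 1000.0  # mol/L -> mol/m^3
    kappa_sq = (2 * N_A * e**2 * ionic_strength
                / (eps0 * rel_permittivity * k_B * temperature))
    return 1.0 / sqrt(kappa_sq)

# Adding salt shortens the screening length, which lets an expanded chain collapse:
for c in (0.001, 0.01, 0.1):  # mol/L
    print(f"{c:>6} M  ->  {debye_length(c) * 1e9:.2f} nm")
# roughly 9.6 nm, 3.0 nm and 0.96 nm respectively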
When solutions of two oppositely charged polymers (that is, a solution of polycation and one of polyanion) are mixed, a bulk complex (precipitate) is usually formed. This occurs because the oppositely-charged polymers attract one another and bind together.
Conformation
The conformation of any polymer is affected by a number of factors, notably the polymer architecture and the solvent affinity. In the case of polyelectrolytes, charge also has an effect. Whereas an uncharged linear polymer chain is usually found in a random conformation in solution (closely approximating a self-avoiding three-dimensional random walk), the charges on a linear polyelectrolyte chain will repel each other via double layer forces, which causes the chain to adopt a more expanded, rigid-rod-like conformation. The charges will be screened if the solution contains a great deal of added salt. Consequently, the polyelectrolyte chain will collapse to a more conventional conformation (essentially identical to a neutral chain in good solvent).
Polymer conformation affects many bulk properties (such as viscosity, turbidity, etc.). Although the statistical conformation of polyelectrolytes can be captured using variants of conventional polymer theory, it is, in general, quite computationally intensive to properly model polyelectrolyte chains, owing to the long-range nature of the electrostatic interaction.
Techniques such as static light scattering can be used to study polyelectrolyte conformation and conformational changes.
Polyampholytes
Polyelectrolytes that bear both cationic and anionic repeat groups are called polyampholytes. The competition between the acid-base equilibria of these groups leads to additional complications in their physical behavior. These polymers usually only dissolve when sufficient added salt screens the interactions between oppositely charged segments. In the case of amphoteric macroporous hydrogels, the action of concentrated salt solution does not lead to the dissolution of the polyampholyte material, owing to the covalent cross-linking of the macromolecules. Synthetic 3-D macroporous hydrogels show an excellent ability to adsorb heavy-metal ions over a wide pH range from extremely dilute aqueous solutions, and can later be used as adsorbents for the purification of salt water. All proteins are polyampholytes, as some amino acids tend to be acidic, while others are basic.
Applications
Polyelectrolytes have many applications, mostly related to modifying flow and stability properties of aqueous solutions and gels. For instance, they can be used to destabilize a colloidal suspension and to initiate flocculation (precipitation). They can also be used to impart a surface charge to neutral particles, enabling them to be dispersed in aqueous solution. They are thus often used as thickeners, emulsifiers, conditioners, clarifying agents, and even drag reducers. They are used in water treatment and for oil recovery. Many soaps, shampoos, and cosmetics incorporate polyelectrolytes. Furthermore, they are added to many foods and to concrete mixtures (superplasticizer). Some of the polyelectrolytes that appear on food labels are pectin, carrageenan, alginates, and carboxymethyl cellulose. All but the last are of natural origin. Finally, they are used in various materials, including cement.
Because some of them are water-soluble, they are also investigated for biochemical and medical applications. There is currently much research on using biocompatible polyelectrolytes for implant coatings, controlled drug release, and other applications. For example, a biocompatible and biodegradable macroporous material composed of a polyelectrolyte complex has been described, which supported the proliferation of mammalian cells and functioned as a muscle-like soft actuator.
Multilayers
Polyelectrolytes have been used in the formation of new types of materials known as polyelectrolyte multilayers (PEMs). These thin films are constructed using a layer-by-layer (LbL) deposition technique. During LbL deposition, a suitable growth substrate (usually charged) is dipped back and forth between dilute baths of positively and negatively charged polyelectrolyte solutions. During each dip, a small amount of polyelectrolyte is adsorbed, and the surface charge is reversed, allowing the gradual and controlled build-up of electrostatically cross-linked films of polycation-polyanion layers. Scientists have demonstrated thickness control of such films down to the single-nanometer scale. LbL films can also be constructed by substituting charged species such as nanoparticles or clay platelets in place of or in addition to one of the polyelectrolytes. LbL deposition has also been accomplished using hydrogen bonding instead of electrostatics. For more information on multilayer creation, please see polyelectrolyte adsorption.
LbL formation of a PEM of PSS and PAH (poly(allylamine) hydrochloride) on a gold substrate has been followed using multi-parametric surface plasmon resonance to determine adsorption kinetics, layer thickness, and optical density.
The main benefits of PEM coatings are the ability to conformably coat objects (that is, the technique is not limited to coating flat objects), the environmental benefits of using water-based processes, reasonable costs, and the utilization of the particular chemical properties of the film for further modification, such as the synthesis of metal or semiconductor nanoparticles, or porosity phase transitions to create anti-reflective coatings, optical shutters, and superhydrophobic coatings.
Bridging
If polyelectrolyte chains are added to a system of charged macroions (i.e., an array of DNA molecules), an interesting phenomenon called the polyelectrolyte bridging might occur. The term bridging interactions is usually applied to the situation where a single polyelectrolyte chain can adsorb to two (or more) oppositely charged macroions (e.g. DNA molecule) thus establishing molecular bridges and, via its connectivity, mediate attractive interactions between them.
At small macroion separations, the chain is squeezed in between the macroions and electrostatic effects in the system are completely dominated by steric effects – the system is effectively discharged. As we increase the macroion separation, we simultaneously stretch the polyelectrolyte chain adsorbed to them. The stretching of the chain gives rise to the above-mentioned attractive interactions due to the chain's rubber elasticity.
Because of its connectivity, the behavior of the polyelectrolyte chain bears almost no resemblance to that of confined, unconnected ions.
Polyacid
In polymer terminology, a polyacid is a polyelectrolyte composed of macromolecules containing acid groups on a substantial fraction of the constitutional units. Most commonly, the acid groups are –COOH, –SO3H, or –PO3H2.
See also
Dispersity
Ion-exchange resin
Polypyridinium salts
References
External links
Max Planck Institute for Polymer Research, Mainz, Germany
Polyelectrolytes: Institute of Physical & Theoretical Chemistry, University of Regensburg, Regensburg, Germany
Polyelectrolytes: Vadodara, Gujarat, India
Colloidal chemistry
Colloids
Food additives
Organic acids
Physical chemistry
Polymer chemistry
Polymers | Polyelectrolyte | Physics,Chemistry,Materials_science,Engineering | 2,017 |
34,462,209 | https://en.wikipedia.org/wiki/Squelching | Squelching is a biological phenomenon in which a strong transcriptional activator acts to inhibit the expression of another gene. Squelching has been mostly studied in yeast, and most of the ideas regarding its mechanisms have come from research into modes of transcriptional control in yeast. One important study of this topic was conducted using the Gal4-VP16 artificial transcription factor system, where it was shown that the activating complex formed by VP16 was sequestering adapters required for transcription of other targets.
The primary cause of squelching is believed to be the interaction of activator molecules disrupting the biochemical pathways associated with related processes due to structural similarity between the activators and important substrates along that pathway. In particular, the activator binds to transcription factors along alternative biochemical pathways, inhibiting the ability of these transcription factors to bind to their true targets. As in the example above, sequestration of an intermediate in a metabolic pathway is a confounding variable in genetic studies because knowledge of the expected binding targets of the primary molecules involved does not help predict why unexpected behavior results.
References
Cell signaling
Cellular processes | Squelching | Biology | 233 |
11,548,095 | https://en.wikipedia.org/wiki/Phialophora%20asteris | Phialophora asteris is an ascomycete fungus that is a plant pathogen infecting sunflowers.
References
Further reading
Eurotiomycetes
Fungal plant pathogens and diseases
Sunflower diseases
Fungi described in 1923
Fungus species | Phialophora asteris | Biology | 51 |
61,376,283 | https://en.wikipedia.org/wiki/Copper%28II%29%20glycinate | Copper(II) glycinate (IUPAC suggested name: bis(glycinato)copper(II)) refers to the coordination complex of copper(II) with two equivalents of glycinate, with the formula [Cu(glycinate)2(H2O)x] where x = 1 (monohydrate) or 0 (anhydrous form). The complex was first reported in 1841, and its chemistry has been revisited many times, particularly in relation to the isomerisation reaction between the cis and trans forms which was first reported in 1890.
All forms are blue solids, with varying degrees of water solubility. A practical application of the compound is as a source of dietary copper in animal feeds.
Synthesis
Bis(glycinato)copper(II) is typically prepared from the reaction of copper(II) acetate in aqueous ethanol with glycine:
Cu(OAc)2 + 2 H2NCH2COOH + x H2O → [Cu(H2NCH2COO)2(H2O)x] + 2 AcOH, x = 0 or 1
The reaction proceeds through a non-redox dissociative substitution mechanism and usually affords the cis isomer.
Structure
Like most amino acid complexes, the glycinate forms a 5-membered chelate ring, with the glycinato ligand serving as a bidentate (κ2Ο,Ν) species. The chelating ligands assume a square planar configuration around the copper atom as is common for tetracoordinate d9 complexes, calculated to be much lower in energy than the alternative tetrahedral arrangement.
Cis and trans isomerism
The unsymmetric nature of the ligand and square planar coordination thereof gives rise to two possible geometric isomers: a cis and a trans form.
Multiple ways of differentiating the geometric isomers exist, an easily accessible one being IR spectroscopy, with the characteristic number of C–N, C–O, and CuII–N stretching bands identifying the ligand configuration. Crystal appearance may also be of some value for isomer indication, though the ultimate diagnostic technique is X-ray crystallography.
All forms of the complex have been characterized crystallographically, the most commonly isolated one being the cis monohydrate (x = 1).
Isomerisation of the cis to the trans form occurs at high temperatures via a ring-twisting mechanism.
References
Coordination chemistry
Copper complexes
Glycinates
Metal-amino acid complexes | Copper(II) glycinate | Chemistry | 523 |
60,844,836 | https://en.wikipedia.org/wiki/Nintendo%20Gateway%20System | The Nintendo Gateway System is a series of video game consoles specialized for airlines and hotels. As part of a partnership between Nintendo and LodgeNet from late 1993 up until the late 2000s, about 40,000 airline seats and 955,000 hotel rooms featured a modified version of the Super Nintendo Entertainment System, Game Boy, Game Boy Color, Game Boy Advance, Nintendo 64, or GameCube, installed on some Northwest, Singapore Airlines, Air China, Air Canada, Alitalia-Linee Aeree Italiane, All Nippon Airways, British Midland International, Kuwait Airways, Malaysia Airlines, Thai Airways, and Virgin Atlantic passenger aircraft, as well as certain hotels with LodgeNet, NXTV, or Quadriga entertainment systems.
Aimed at adults rather than Nintendo's core children's market, it was one of the first in-seat airline entertainment services, provided by Matsushita Avionics, Rockwell Collins, and Thales Avionics. The controller, or remote, for the airline version of the Gateway System had a button setup similar to the Super NES controller, and it also doubled as a remote for the movie and music aspects of the system. It was part of a much larger computer system that allowed air passengers to not only play video games, but also watch movies and shows, listen to music, talk on the phone, and even shop while in-flight, before the rise of the internet. Upon its release, there were 10 games installed in the system, which included The Legend of Zelda: A Link to the Past, F-Zero and Super Mario World. Future plans for the system were to have it installed on cruise ships as well.
LodgeNet partnered with Nintendo to bring video games directly into guest hotel rooms through streaming over the LodgeNet server, with the special LodgeNet controller plugging directly into the TV or LodgeNet set-top box, transmitting the game over phone lines connected to a central game server. Pricing was usually $6.95 plus tax for 1 hour of video games. After 1 hour, the game would immediately stop and prompt the user to purchase more play time. Many games were modified for single-player play only.
Its official website was discontinued in mid-2008, but units have been seen as late as 2013 for Nintendo 64 in hotels, and as late as 2012 for Game Boy and Game Boy Color on Singapore Airlines. LodgeNet was the most widespread pay-per-view system used by hotels.
History
On August 10, 1993, Nintendo of America began rolling out the Nintendo Gateway System, initially on one of Northwest Airlines' Boeing 747s and through LodgeNet.
In late 1993, LodgeNet launched its on-demand hospitality service, including worldwide delivery of Super NES games to hotel guests via its proprietary building-wide networks. LodgeNet eventually reported the system being installed in 200,000 hotel guest rooms by April 1996, and 530,000 guest rooms by mid-1999. By April 1996, LodgeNet reported that its partnership with Nintendo to deliver Super NES games had yielded 200,000 worldwide hotel guest room installations. In June 1998, Nintendo and LodgeNet entered a 10-year licensing agreement for an "aggressive" upgrade to add Nintendo 64 support to their existing 500,000 Super NES equipped guest room installations. LodgeNet says that within the system's previous five years to date, the system had "caused Nintendo to become the most successful new product rollout in the history of the hotel pay-per-view industry". LodgeNet reported that within the middle of 1998 alone, 35 million hotel guests encountered the Nintendo name as an integral amenity, and it reported sales of more than 54 million minutes of Nintendo-based gameplay.
In June 1999, LodgeNet and Nintendo began expanding and upgrading their existing Super NES buildout to include Nintendo 64 support. In mid-1999, LodgeNet reported that its 530,000 hotel room installations were increasing at a rate of 11,000 rooms per month. In September 2000, Nintendo and LodgeNet began delivering newly released Nintendo 64 games to hotel rooms at more than 1,000 hotel sites, concurrently with the games' retail releases, demonstrating "the capacity to update LodgeNet's interactive digital systems with fresh content virtually overnight".
Games
Games are offered for six Nintendo platforms, the Super Nintendo Entertainment System, the Game Boy, the Game Boy Color, the Game Boy Advance, the Nintendo 64, and the GameCube, with support for the Nintendo Entertainment System planned. While GB, GBC, and GBA games are exclusive to the airlines, the N64 and GC games are exclusive to the hotels, and the SNES is available for both.
Super Nintendo Entertainment System
There were 49 Super Nintendo Entertainment System titles available to play on LodgeNet hotel televisions and on airlines equipped with Nintendo Gateway System, which LodgeNet used for their hotel service. Some titles were not playable on airlines.
Blackthorne
Boogerman: A Pick and Flick Adventure (not available on airlines)
Boxing Legends of the Ring
The Brainies
ClayFighter: Tournament Edition (not available on airlines)
ClayFighter 2: Judgment Clay (not available on airlines)
Claymates
Donkey Kong Country
Donkey Kong Country 2: Diddy's Kong Quest (not available on airlines)
Dr. Mario (standalone, exclusive to the service)
Final Fight
F-Zero
Hagane: The Final Conflict
Hal's Hole in One Golf
Hangman (exclusive to the service)
Killer Instinct (not available on airlines)
Kirby's Dream Course
The Legend of Zelda: A Link to the Past
The Lost Vikings
The Lost Vikings 2
Mega Man X
NCAA Basketball (listed in a Nintendo Power article about the Gateway Service, unknown availability)
Noughts & Crosses (exclusive to the service)
Panel de Pon
Postcard Puzzle (exclusive to the service)
Prehistorik Man
Pro Mahjong Kiwame
Shanghai II: Dragon's Eye
Street Fighter II: The World Warrior
Street Fighter II: Hyper Fighting (not available on airlines)
Super Adventure Island
Super Bonk
Super Ghouls 'n Ghosts
Super Mario All-Stars
Super Mario All-Stars + Super Mario World (unknown availability)
Super Mario World
Super Metroid (not available on airlines)
Super Play Action Football
Super Punch-Out!!
Super Soccer
Super Solitaire
Super Street Fighter II (not available on airlines)
Super Tennis
True Golf Classics: Pebble Beach Golf Links (listed in a Nintendo Power article about the Gateway Service, unknown availability)
Tetris (standalone, exclusive to the service)
Tetris Attack
Tetris & Dr. Mario
Vegas Stakes
Wario's Woods
Nintendo 64
There were 38 Nintendo 64 titles available to play on LodgeNet hotel televisions.
1080° Snowboarding
Donkey Kong 64
Dr. Mario 64
Excitebike 64
Extreme-G
F-Zero X
Forsaken 64
Gauntlet Legends
Hydro Thunder
Iggy's Reckin' Balls
Kirby 64: The Crystal Shards
The Legend of Zelda: Majora's Mask
The Legend of Zelda: Ocarina of Time
Mario Golf
Mario Kart 64
Mario Party 3
Mario Tennis
Midway's Greatest Arcade Hits
Milo's Astro Lanes
Mortal Kombat 4
Namco Museum 64
The New Tetris
Paper Mario
Pilotwings 64
Pokémon Snap
Rampage 2: Universal Tour
Ready 2 Rumble Boxing
Rush 2: Extreme Racing USA
San Francisco Rush: Extreme Racing
Star Fox 64
Star Wars: Rogue Squadron
Super Mario 64
Super Smash Bros.
Turok 2: Seeds of Evil
Virtual Chess 64
Virtual Pool 64
Wave Race 64
Yoshi's Story
GameCube
There were 43 Nintendo GameCube titles available to play on LodgeNet hotel televisions.
1080° Avalanche
Animal Crossing
Backyard Baseball 2007
Battalion Wars
Chibi-Robo!
Custom Robo
Eternal Darkness: Sanity's Requiem
Final Fantasy Crystal Chronicles
Fire Emblem: Path of Radiance
Geist
Kirby Air Ride
The Legend of Zelda: Collector's Edition
The Legend of Zelda: Four Swords Adventures
The Legend of Zelda: Ocarina of Time Master Quest
The Legend of Zelda: Twilight Princess
The Legend of Zelda: The Wind Waker
Luigi's Mansion
Mario Golf: Toadstool Tour
Mario Kart: Double Dash!!
Mario Party 4
Mario Party 5
Mario Party 6
Mario Party 7
Mario Power Tennis
Metroid Prime
Metroid Prime 2: Echoes
Paper Mario: The Thousand-Year Door
Pikmin
Pikmin 2
Pokémon Channel
Pokémon Colosseum
Pokémon XD: Gale of Darkness
Star Fox: Assault
Star Wars Rogue Squadron II: Rogue Leader
Star Wars Rogue Squadron III: Rebel Strike
Super Mario Strikers
Super Mario Sunshine
TMNT
Tomb Raider: Legend
The Urbz: Sims in the City
Wario World
WarioWare, Inc.: Mega Party Games!
Wave Race: Blue Storm
Game Boy and Game Boy Color
There were 33 Game Boy/Game Boy Color titles available to play on airlines featuring Nintendo Gateway System.
Baseball
Dr. Mario
F1 Race
Game & Watch Gallery
Game & Watch Gallery 2
Game & Watch Gallery 3
Golf
Kirby's Dream Land 2
Kirby's Pinball Land
Kirby's Star Stacker
Mario Golf
Mario Tennis
Metroid II: Return of Samus
Picross 2
Pokémon Gold Version
Pokémon Silver Version
Pokémon Pinball
Pokémon Puzzle Challenge
Pokémon Red Version
Pokémon Blue Version
Pokémon Trading Card Game
Pokémon Yellow Version
Super Mario Bros. Deluxe
Super Mario Land
Super Mario Land 2
Tennis
The Legend of Zelda: Link's Awakening
The Legend of Zelda: Oracle of Ages
The Legend of Zelda: Oracle of Seasons
Wario Land: Super Mario Land 3
Wario Land II
Wario Land 3
Yakuman
Game Boy Advance
There were 13 Game Boy Advance titles available to play on airlines featuring Nintendo Gateway System.
Advance Wars 2: Black Hole Rising
Dr. Mario & Puzzle League
Fire Emblem: The Blazing Blade
Game & Watch Gallery 4
Kirby: Nightmare in Dream Land
Kirby & the Amazing Mirror
Mario Kart: Super Circuit
Mario Pinball Land
Pokémon Pinball: Ruby & Sapphire
Super Mario Advance 4: Super Mario Bros. 3
Super Mario World: Super Mario Advance 2
The Legend of Zelda: A Link to the Past
Wario Land 4
See also
Interactive television
Smart TV
Sonifi Solutions
Notes
References
External links
1993 in video gaming
Audiovisual introductions in 1993
Television technology
Nintendo hardware | Nintendo Gateway System | Technology | 2,062 |
18,512,519 | https://en.wikipedia.org/wiki/Adobe%20Font%20Development%20Kit%20for%20OpenType | The Adobe Font Development Kit for OpenType, also known as Adobe FDKO or simply AFDKO, is a font development kit (FDK), a set of command-line tools freely distributed by Adobe for editing and verifying OpenType fonts. It does not offer a glyph editor, but focuses on tools for manipulating font metrics, kerning and other OpenType features. AFDKO runs on Microsoft Windows, Linux and macOS, and is licensed under the Apache License.
References
General
Ken Lunde, CJKV Information Processing, Edition 2, O'Reilly Media, 2008, , pp. 447–450
External links
AFDKO – Official GitHub repository
Typophile page on Adobe FDK
Font Development Kit
Graphics libraries
Typography software
Font editors | Adobe Font Development Kit for OpenType | Technology | 165 |
34,006,423 | https://en.wikipedia.org/wiki/Klimisch%20score | The Klimisch score is a method of assessing the reliability of toxicological studies, mainly for regulatory purposes, that was proposed by H.J. Klimisch, M. Andreae and U. Tillmann of the chemical company BASF in 1997 in a paper entitled A Systematic Approach for Evaluating the Quality of Experimental Toxicological and Ecotoxicological Data which was published in Regulatory Toxicology and Pharmacology. It assigns studies to one of four categories as follows:
1 – reliable without restriction
2 – reliable with restrictions
3 – not reliable
4 – not assignable
The applicable guidelines include the OECD Guidelines for the Testing of Chemicals, the EU Test Methods, and other such methods. Often studies are performed to more than one test guideline where these are in agreement as to the requirements. GLP is Good Laboratory Practice.
The scoring system is the standard method used in both the EU regulatory schemes (e.g. REACH Regulation). Generally, only Klimisch scores of 1 or 2 can be used by themselves to cover an endpoint. However, Klimisch score 3 and 4 data can still be used as supporting studies or as part of a weight of evidence approach. The Klimisch score can be found as a standard field within the IUCLID database.
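As an illustration of the regulatory rule just described, the short Python sketch below (a hypothetical helper, not part of REACH, IUCLID, or the original publication) encodes the four categories and the convention that only scores 1 and 2 can cover an endpoint on their own, with scores 3 and 4 limited to supporting or weight-of-evidence use.

# Illustrative sketch (not an official regulatory tool): Klimisch categories and
# the rule that only scores 1 and 2 can cover an endpoint by themselves.

KLIMISCH_CATEGORIES = {
    1: "reliable without restriction",
    2: "reliable with restrictions",
    3: "not reliable",
    4: "not assignable",
}

def usable_as_key_study(score):
    """Return True if a study with this score can cover an endpoint on its own."""
    if score not in KLIMISCH_CATEGORIES:
        raise ValueError(f"unknown Klimisch score: {score}")
    return score in (1, 2)

for score, label in KLIMISCH_CATEGORIES.items():
    role = "key study" if usable_as_key_study(score) else "supporting / weight of evidence only"
    print(f"Klimisch {score} ({label}): {role}")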
ECHA has produced guidance on how to assess the reliability of data.
The Klimisch score has been criticized for favoring studies conducted under Good Laboratory Practice guidelines, which are mostly industry-funded studies. A study rated reliable according to the Klimisch score can actually be highly flawed, because the score does not assess a number of study design criteria, such as randomization, blinding, and sample size calculation.
ToxRTool
The ToxRTool was developed to assist with Klimisch scoring.
References
Toxicology | Klimisch score | Environmental_science | 346 |
26,093,384 | https://en.wikipedia.org/wiki/Fort%20de%20Villey-le-Sec | Fort de Villey-le-Sec, also known as Fort Trévise, is a fortification of the 19th century, built as part of the Séré de Rivières system of fortifications in Villey-le-Sec, France, one of the defenses of Toul. It is a unique example for its time of a defensive enclosure around a village. Conceived after the defeat of the Franco-Prussian War of 1870-71, the fort was located away from the main combat zone of World War I and has remained almost intact. The fort's preservation association has been at work since 1961 to restore and interpret the site. It has been included in the Inventory of Historic Sites and has been designated as a preserved natural area.
The Séré de Rivières was a response to the increasing power of explosive artillery, abandoning vertical masonry walls for more blended fortifications that served as artillery emplacements, defended by machine guns and small arms. The forts and batteries were designed to provide mutual support and to provide shelter and support for infantry units to maintain a defensive line, or cover for the assembly of larger offensive forces. In the 1880s, with the development of high explosives, much of the masonry construction of the forts became obsolete and was rebuilt using concrete and earth coverage.
History
Fort Villey-le-Sec was built between 1875 and 1879, then modernized in 1888, 1903 and 1914.
Construction
The fortified camp of Toul anchors the end of the fortification curtain of the Hauts de Meuse. The 1874 Declaration of Public Utility that authorized construction envisioned four forts: Ecouvres, Dongermain, St.-Michel and Villey-le-Sec. Villey-le-Sec was planned to protect the southwestern approach to Toul, located on the plateau of Haye, supported on the south by the bluffs along the Moselle. The fort was originally planned to be built further west, where the Chaudeney redoubt is located. The fort was responsible for covering the exits from the Forest of Haye and for providing flanking support to the Fort de Gondreville and Fort de Chanot.
Design began 5 December 1873 and construction 26 July 1875. The work was carried out by Morel, and took four years with hundreds of workers. It was the most expensive fort of the region.
In 1885 the development of high-explosive shells made stone fortifications obsolete. It became necessary to reinforce the Fort de Villey-le-Sec with concrete and metal armor. At the same time the fort's artillery was judged to be vulnerable and was dispersed across the plateau.
In 1890 four barracks were built of concrete, as well as a redan and two batteries in the redoubt. About 1900 a firing range was created with the fort at Gondreville at the edge of the Forest of Haye, to test the fort's weapons. Another, in the Bois de l'Embanie, was used as a training area. In 1912 work began to equip the fort with a battery of two Mougin turrets with 155 mm guns; this was interrupted by the First World War.
First World War
At the beginning of the war, after the French defeat at Morhange, the German troops moved rapidly to the west. From mid-September, after the Battle of Grand Couronné at Nancy, the front stabilized within a few tens of kilometers along an axis Saint-Mihiel - Pont-à-Mousson - Nomeny - Moncel-on-Pail - Arracourt, and hardly moved again in this sector. The population was evacuated, leaving only the garrison and the men who were essential to work the farms.
After the war
After the recovery of the Alsace-Moselle region, the fort lost its strategic interest. The army installed a small garrison, but was concerned with little more than maintenance. During the Second World War the fort's metal was stripped by the German army for scrap. The fort was bombarded by the Americans during the liberation of Toul. The 155mm guns of the Mougin turret were sent to Ouvrage Barbonnet, a Séré de Rivières fort in the Alps that had been modernized to function as part of the Alpine Line portion of the Maginot Line.
Present
An association has existed since 1961 to carry out restoration work on the fort. Portions were opened to the public in 1967 and the fort was listed on the inventory of historic structures in 1973. The Mougin turret and the north battery artillery have been restored to functional status. At present, the site is registered as a site pittoresque with tourism authorities, and is open from May to September. The preservation association uses Highland cattle to keep the vegetation on the surface of the fort under control.
Main enclosure
Because the village already occupied the best site, the fort was built to limit the cost and difficulty in moving the occupants. The villagers were opposed to relocation because the village occupies one of the only places where water is retained at the ground's surface, due to the presence of a layer of clay.
Réduit
The réduit comprises the principal fortification of Villey-le-Sec. Constructed at the southwestern angle of the village, the réduit (a rallying point or center of resistance) is laid out as a square. The réduit forms its own fort, concentrating together stores, quartering and ammunition magazines. Its plan is similar to the Fort de Lucey. However, its modernization was different and changed the fortification considerably. It is organized around a Mougin turret with two 155 mm guns, one of only two working Mougin turrets. The guns were returned from Fort Barbonnet, which had two turrets. Four rectangular courts constitute the original barracks. Two concrete-protected barracks were constructed, one in 1888 in a special concrete, the other in 1910 in reinforced concrete, which forms the present entry to the réduit. In 1914 a battery to be armed with two 155mm turrets was under construction but never completed.
Redan
Situated between the north and south batteries on the opposite side of the village from the redoubt, the redan is equipped with a 75mm gun turret and two armored observatories. A concreted barracks was added in 1890 under the turret. The redan was overlooked by a water tower and the steeple of the village church, which were dynamited in 1914 to prevent the Germans from using them to sight artillery. The Germans never came close to Villey, and the church tower was rebuilt in 1950. A third observation point was added during the war. In 2002 the preservation association pumped out up to 2.5 meters of water from the works, allowing access to the barracks and turret. Limited stabilization work was done, and the area awaits restoration. It is not presently accessible to the public.
North battery
The north battery is located just to the north of the village. Laid out in the form of a V, it possessed a retractable 75mm gun turret and a machine gun turret, with two armored observatories. At 310 meters altitude, the battery controlled the plain. The entry was protected by an Ardagt et Pilter drawbridge. The position retains its double caponier. The restoration society has recovered the 75mm turret's guns from the ouvrage du Mordant and restored them so that they can fire blank rounds, along with the eclipsing action of the turret.
South battery
More imposing than the north battery, the south battery was planned to cover the Maron road, taking the valley of the Moselle in enfilade and facing the Bois l'Eveque. It lies to the southeast of the village, at 320 meters altitude. Like the north battery, the south battery was laid out in the form of a V. The south battery was not reinforced with concrete and retains its stone construction, with the exception of a concrete barracks added on the south side in 1890. It was planned to receive a 75mm gun, which was never installed. The entry was equipped with a rolling bridge by Th. Pilter that could be laterally displaced. The southern battery was used by the National School of Applied Geology and Mineral Exploration (École Nationale Supérieure de Géologie Appliquée et de Prospection Minière) in Nancy to store a radioactive mineral collection.
Outer works
Redoute de Chaudeney
Also called the Charton redoubt, the Redoute de Chaudeney is located about one kilometer behind the fort at a location that could be used to bombard Toul. Its construction was initiated at the end of 1874 to anticipate delays in the construction of the main fort. Actual construction began in December 1875, predating the "panic" forts built after April–May 1875, when German chancellor Otto von Bismarck implied that Germany might initiate a pre-emptive war. The pentagonal position featured a Haxo casemate and a 164.7mm naval gun, protected by an earth rampart and masonry walls. The site is abandoned and covered with vegetation.
Batterie de Chaudeney
Four batteries were planned behind the redoubt, but only one was built, to the west, in 1912. It was equipped with four 155mm guns. A magazine was constructed to the southeast of the batteries; it is abandoned but accessible.
Powder magazine of Bois sous roche
The powder magazine was located to the southwest of the fort. Constructed in 1890-91, it was equipped with its own well (now dry). Today it is in ruins and shelters bats (Myotis myotis, Plecotus, and Whiskered bats), with a temperature and humidity suitable for their hibernation. It is therefore designated a nature preserve and is closed to access.
Batteries de Bois sous roche
A set of six batteries was planned along the way to the powder magazine from the fort. Only four were built to the west in 1888, totaling 24 positions for 120mm or 155mm guns. A raised position behind the batteries shelters ammunition niches and conceals the 60 cm military railway from direct vision.
Advance post Ouvrage du Fays
A small infantry position north of the fort. Constructed by a Declaration of Public Utility of 23 August 1889, it was planned to be slightly modernized between 1907 and 1914.
Train line
A Péchot system rail line using a gauge of 60 cm was used to supply the fort. A Péchot wagon remains, along with several traces of the old line and the protective slope that shielded the line. The Villey-le-Sec military rail line was built between 1889 and 1891, with extensions in 1906 to the Redoubt of Chaudeney and in 1913-14 to the road between Villey-le-Sec and Gondreville. The preservation association has re-created a section of the line for tourists.
Other works
Also located on the plateau are:
The Redoubt of Dommartin
The Battery of Dommartin
The field work of Haut-des-Champs
The battery of Charmois
The Fort de Gondreville
References
External links
Site du fort de Villey-le-Sec
Fortified ensemble of Villey-le-Sec) at fortiff.be
Le fort de Villey le Sec ou fort Trévise at fortiff' sere
Fort de Villey-le-Sec at Chemins de mémoire
Page describing the site naturel classé of the fortified ensemble of Villey-le-Sec
Séré de Rivières system
World War I museums in France
Military installations established in 1875
1875 establishments in France | Fort de Villey-le-Sec | Engineering | 2,360 |
50,080,000 | https://en.wikipedia.org/wiki/Travel%20in%20classical%20antiquity | Travel in classical antiquity over long distances was a specialised undertaking. Most travel was done in the interest of warfare, diplomacy, general state building, or trade. Social motivations for travel included visiting religious sites, festivals such as the Olympics, and health-related reasons. Most travel was difficult and expensive, due to the danger of violence, the scarcity of well-maintained roads, and the variability of travel times on water, as ancient ships were subject to the vagaries of both the wind and the tides.
Much of ancient literature is concerned with travel. The Odyssey, for example, relates the tale of Odysseus’ travel home to Ithaca over a ten-year period; later, the Aeneid tells the story of Aeneas' flight from Troy. Elsewhere, travel narratives from authors such as Herodotus and Caesar form more grounded examples of how individuals moved throughout the ancient world. Both Greek and Roman society had mores surrounding travel and the treatment of guests.
Historical context
The first instances of long-distance travel in the broader Mediterranean world occurred in what are today Egypt and Iraq. In Egypt, the Nile served as a conduit for trade and transportation. In the Near East, river travel on the Tigris and Euphrates was supplemented by long-distance travel over land in wagon-like vehicles pulled by oxen. Later, the chariot developed. Originally reserved for royalty, chariots later became important in warfare. In the Near East, large but not particularly sophisticated systems of roads evolved. Later these systems would be connected and redeveloped by the Persians.
The primary motivation for the development of travel and infrastructure to support it in both Egypt and the Near East was conquest and subsequent rule, a trend that continued in Greece and Rome. There is evidence, however, of travel motivated by tourism in Egypt, with visitors and scribes coming to view and record the pyramids and other religious monuments.
Maritime and river travel
The earliest maritime travel occurred on the Nile and other rivers in the Near East. Due to the lack of roads in ancient Greece, the most efficient way of shipping large amounts of goods, such as olive oil, was over the sea. Greek ships were built in varying sizes, with the largest accommodating as much as 500 tons of goods. Despite reliance on favourable weather, relatively little effort went into short sea voyages beyond the packing and management of the ship, although there was the danger of pirates and kidnapping for ransom.
For Romans, the seas were more or less clear of pirates due to the Roman military, although the fear of shipwreck by storm was greater, and often referenced in poetry and song. Passage by ship was considerably more pleasant than passage by land, but was not available throughout the year due to changes in tides and other weather conditions.
Land travel
The earliest notable road system in the ancient Mediterranean world was the Royal Road. The most extensive incarnation of the system was unified and organized under Darius I, ruler of the Achaemenid Empire in the 5th century BCE. It was built from connecting and upgrading older systems, and was used to enable rapid communication and transportation throughout the empire, as well as the collection of taxes. Generally, other contemporary roads were not paved or well maintained, making land travel both on foot and horseback arduous and time-consuming.
Greece
Greek roads were poorly developed due to the fragmentation of Greek society at the state level, and due to the extreme cost of constructing roads in Greece's mountainous terrain. At least in Athens, when built, roads were funded through special taxes on wealthy. Some roads were difficult to traverse even on foot, and most could not be traversed with the heavy wagons built to haul goods. Notable exceptions were the roads between major cities and nearby sanctuaries and holy sites, which were built to endure all weather. However, no portion of the Greek highway system included state-sponsored way stations or milestones, as found later in Rome.
There was danger of violence on Greek roads. For the same reasons that roads in Greece were poorly and rarely constructed, there was essentially no oversight by official forces, and as such travelers were prone to being accosted by highwaymen. As such, having a large entourage was useful for protection, and the transportation of valuables was risky.
Rome
Roman roads were extensive, largely to facilitate the transportation of the military. They suffered from lack of maintenance, at least during the Roman Republic, because more visible projects (new city-centric construction, such as aqueducts, and arena games) took precedence. The roads were built with few resources and had to be built to function in a variety of geographical and seasonal conditions. As Roman roads radiated away from towns, they became less elaborate and were sometimes paved with gravel, or markers were used to illustrate the proper path. This lack of attention to detail is contrasted with the uniform stone highways that generally existed near major urban centers, typically built by soldiers overseen by army engineers. Romans tended to build their roads with as few curves as possible. Contrasted with Greece, the danger of violence was less pronounced on Roman roads, due to the strong central military present in Rome.
Due to the difficulty and stress of travel by land, few civilians chose to conduct long-distance travel or trade over Rome's roads. Instead, tax collectors, couriers, soldiers, and other government officials were primary users of the highway network.
Mapmaking and itineraria
Scholars remain split as to whether or not the Greeks and Romans produced representative maps to serve as guides, and, if so, the level of their sophistication. Those who argue that there was no formal practice of cartography in the ancient world cite the lack of evidence, and the lack of materials that would have formed viable maps. Chinese contemporaries of the Romans did produce durable maps through their access to silk, a hardier material than those accessible in the Mediterranean world. Papyrus, the most likely Mediterranean candidate that could have been produced cheaply, was too sensitive, and would not have withstood long-term use. Other materials that might have been used (wood or vellum) have not survived; the only records that exist to this day are in the form of text, either as travelogues or descriptions of maps.
Without ready access to maps, both Greeks and Romans relied on itineraria to conduct sea travel, and the Romans used the documents for land travel as well. These documents were lists of cities or ports and the distances between them. Itineraria may have sometimes included crude illustrations, never to scale, to orient travelers as they moved through Roman or Greek territory. Extant examples, both saved by monastic copying traditions, include the Antonine Itinerary (which included roads in Britain) and the Tabula Peutingeriana (which may have been a guide for couriers, or a decoration).
Greek maps
Greek maps were often either localized, intended to demonstrate the shapes of specific cities, or to represent abstract concepts, such as the realm of the oikoumene, or known world. There are no extant Greek maps, but descriptions of one by Anaximander and others indicate that they split the oikoumene into continents—Europe, Asia, and Libya—within which there were further classifications. Divisions between continents and nations were often determined by bodies of water. Greek maps further separated the world into habitable and inhospitable zones, and some maps included the concept of the antipodes, an oikoumene in another habitable zone of the Earth not accessible with Greek transportation technologies.
Later Greek scientists determined that the Earth was round, although there was some debate as to the accuracy of this claim. During the conquests by Alexander the Great, Hellenistic knowledge of the world was expanded, resulting in an improved map of the world by Eratosthenes. He also calculated the circumference of the Earth, and published two treatises about geography, titled Geographica and On the Measurement of the Earth. Contemporaries of Eratosthenes questioned the validity of his methods, although his calculation of the Earth's circumference was quite accurate.
Roman maps
Like the Greeks, the Romans did not possess the materials or technology to produce useful maps. Several examples of what may have been Roman maps have been introduced through archaeology, although both their validity as maps and even their authenticity have been debated. Romans used itineraria in warfare, to guide government postal workers, and for civilian travel. Those that contained illustrations to underscore the text would not have been to scale, and would have borne more similarity to transit maps than to contemporary to-scale maps. Although not used for travel, there is evidence of maps created after surveys by the Roman government, in a process known as centuriation in which the land was reduced to grids for cultivation.
Travel narratives
Writings in Greek that could be qualified as travel narratives included Histories by Herodotus. Sections of the work are devoted to geographic and ethnographic descriptions of the territories encountered during his travels, which ranged from Egypt to the Persian Empire. Strabo also compiled his travels into a major work, Geographica, which outlined his conceptions of the world and the geographies of regions such as what are today Spain and India.
Narratives from Rome that include travel include Caesar's Commentarii de Bello Gallico, which describes Caesar's conquest of Gaul. Tacitus' Germania relies on information he gained from those who had journeyed to Germany, although he never did.
Paradoxography
Paradoxography is a genre of classical travel writing that recounts encounters with foreign or supernatural peoples, animals, and events. There were whole works devoted to such descriptions, such as those by Palaephatus and an otherwise anonymous Apollonius. Portions of other texts also include paradoxography in certain sections, including Commentarii de Bello Gallico, Histories, and certain versions of the Alexander Romance.
Hospitality and lodgings
Greek hospitality and lodgings
Greek hospitality demanded the fair treatment of guests and the reciprocal treatment of hosts, outlined in the concept of xenia. Xenia demanded both abstract respect and the exchange of material goods, such as gifts and food. Much of The Odyssey deals with Odysseus' treatment by his hosts, and his own violations of xenia. Xenia also appears in other Greek myths, such as that of Baucis and Philemon, in which a disguised Zeus and Hermes are given shelter and food by an elderly couple after their neighbors refused to accommodate the deities; the couple is subsequently rewarded.
In a non-mythological context, xenia provided for the equitable treatment of foreign dignitaries, traders, and guests while visiting alien city-states through the office of the proxenos. A proxenos was a city-appointed official, either a native or a resident alien, who would look over the citizens of a specific foreign city when they visited the city he represented. That is, for example, a native citizen of Athens or a resident Corinthian would be appointed proxenos for visitors of Corinth in Athens, and was therefore responsible for both diplomacy between the two cities and the interests of Corinthian citizens in Athens. Looking after another city's citizens meant arranging for favors as mundane as obtaining theater tickets, to more complicated procedures, such as ensuring access to capital or an audience with city officials. In addition to cities, trade groups or other organizations might appoint a proxenos to ensure their clients were treated fairly during visits.
The office of proxenos was honorary, and came only with a title and the prestige associated with the appointment. Some scholarship suggests that although the proxenos was tasked with diplomacy, they also may have occasionally engaged in subterfuge or intelligence gathering in order to grant their native city the upper hand in conflicts.
There is little evidence of a formalized system of inns in Greek cities, although some have been found that served tourists during festivals. These were organized around a central courtyard, and would have provided stables as well as beds.
Roman hospitality and lodgings
Roman hospitality mores were less formalized than in Greek civilization, although it was informed by both xenia and the concept of hospitium, essentially a Romanized version of the earlier term. Rome and Roman cities had systems of inns within their walls, individually known as hospitium or deversorium. These were available to all, and offered services ranging from simple access to a place to sleep to restaurants and stables. Cheaper alternatives, known as caupona, catered to sailors and soldiers, and offered alcohol in a less formal restaurant setting, as well as prostitution. Generally, lining the roads outside major cities were other inns, known as stabulum, which appealed to travelers.
Motivations
Ancient travel was motivated by reasons as diverse as trade (including postal communications), religious pilgrimages, warfare, and tourism.
Trade
Trade between different nations was an integral reason for travel. During the Roman Empire, trade was conducted with nations as disparate as China, India, and Tanzania. Generally, Roman and Chinese traders exchanged statues and other processed goods for Chinese silk. Trade in the city of Rome was focused on providing food for the city's massive population; as such, the trade of grain and other foods was subsidized by the government. Grain was brought into the city from all around the empire: Egypt, Spain, Sardinia, and Sicily were all sources for the city.
Trade was sophisticated enough in Rome that a system similar to that of the proxenos emerged, with offices backed by different governments representing the interests of their private citizens in cities throughout the Mediterranean. These offices, like way stations along the Roman roads, were known as stationes.
Postal services
There were at least two postal services during the history of Rome—the cursus publicus and the agentes in rebus. Both were created during the Roman Empire, and both survived its dissolution, at least for a time. The established routes of the cursus publicus are sometimes argued to be outlined in the Tabula Peutingeriana, a surviving illustrated itinerarium from the 4th century, although this claim is disputed. Both services existed to deliver messages (including military orders) and to collect taxes. Occasionally they also acted as spies and couriers for the military, collecting intelligence and delivering orders. Couriers relied on stationes (publicly funded way-stations) and mansiones (private residences) for shelter and food, and had the ability to compel private citizens to provide for them.
The cursus publicus differed from a conventional, modern postal service in that it was not universal (it was only available in more developed provinces) and in that deliveries were made at the government's discretion, rather than on a regular schedule. Usage of the system of lodgings and animals used by the service required official permission from the provincial governor, or the emperor. Governors acted as overseers for the system, and some public officials were also entitled to use it as a form of personal transportation.
Religion and health
Pilgrimage to one of the major oracles was one of the central reasons for religious travel in the ancient world, particularly in Rome. These pilgrimages were generally made to oracles such as that at Delphi, whose priestess was known as the Pythia or simply the Oracle of Delphi, a title that passed to successive women. Romans would have visited these oracles in the hopes of gaining some insight into their future. Generally, oracles were associated with a god; the Pythia, for example, was associated with Apollo.
Health also played a role in motivating travel, in both Roman and Greek culture. Travelers visited sanctuaries associated with particular deities, such as the Greek sanctuary at Epidaurus, in the hopes of curing illness and disease. These sanctuaries were sometimes also associated with specific physicians; Galen, for example, was famously associated with the sanctuary of Asclepius. Sanctuary sites were often isolated, and included springs as well as diversions such as works of art and stadiums for athletic events. Other forms of travel in the name of health, such as journeys by ship, also existed.
Festivals
In Greco-Roman culture, festivals occurred either annually or every few years, and were held for religious reasons. Generally, they were held in fixed locations, and individuals traveled from city to city in order to attend. However, there were also universal festivals, occurring throughout Greece and Rome, and both universal and local festivals could be celebrated by citizens while abroad. The most popular of these festivals were the originally Greek games: the Pythian Games, the Isthmian Games, the Nemean Games, and, most famously, the Olympic Games. These revolved around the performance of artistic and athletic feats to honor individual deities, and continued to be celebrated after Rome's conquest of Greece.
Festivals deliberately served to motivate travel, both to assert communal identity at the imperial or societal level rather than the local level, and to promote Greek and Roman culture as opposed to foreign or "barbaric" practices. The length of festivals rarely exceeded five days, while travel times could be measured in weeks or months.
Lodgings available to the general public at festivals ranged from crude huts or tents to elaborate inns reserved for the Greek and Roman elite. Traders and those with connections in other cities often stayed in private homes. The influxes of tourists, particularly in Athens, created temporary economies, with vendors, prostitutes, and guides providing goods and services to the visitors.
Tourism
In Rome, tourists traveled to beach and mountainside resorts for different periods of the year. It was not uncommon for even middle-class Romans to own multiple villas.
Warfare and settlements
Warfare and state building were the two most common reasons for travel by non-elite residents of Roman and Greek society. The pinnacle of travel in the name of warfare in Hellenistic society was under Alexander, but his efforts to conquer the world were preceded by a general Greek and specifically Athenian colonial process that led to the foundation of cities throughout the Mediterranean. Warfare by Alexander led to the movement of Hellenistic peoples and culture as far east as India, with settlements in Afghanistan, Egypt, and elsewhere, and led to the establishment of several successor kingdoms, including the Seleucid Empire and Ptolemaic Egypt.
Immigration
Immigration was also a motivation for travel, particularly to large urban centers, including Rome itself. There were few restrictions (except in wartime) on the ability of individuals and families to migrate and subsequently settle in cities, and despite stratification, there was still some level of upward mobility in Roman society.
Archaeological evidence suggests that construction increased during times of heavy immigration, that immigration increased during periods of conquest, and that immigration was at its lowest after the Sack of Rome by Alaric I.
See also
Appian Way
Roman navy
Notes
References
Classical antiquity
Society of ancient Rome
Roman Empire
Culture of ancient Rome
Travel | Travel in classical antiquity | Physics | 3,850 |
6,925,199 | https://en.wikipedia.org/wiki/PABPII | PABPII, or polyadenine binding protein II, is a protein involved in the assembly of the polyadenine tail added to newly synthesized pre-messenger RNA (pre-mRNA) molecules during the process of gene transcription. It is a regulatory protein that controls the rate at which polyadenine polymerase (PAP) adds adenine nucleotides to the 3' end of the growing tail within the nucleus of the cell. In the absence of bound PABPII, PAP adds adenines slowly, producing an initial tail of only about 12 residues. PABPII then binds to the short polyadenine tail and induces an acceleration in the rate of addition by PAP until the tail has grown to about 200 adenines long. The mechanism by which PABPII signals the termination of the polymerization reaction once the tail has reached its required length is not clearly understood.
PABPII is distinct from the related protein PABPI in being localized to the cell nucleus rather than the cytoplasm.
See also
PABPN1
References
Lodish H, Berk A, Matsudaira P, Kaiser CA, Krieger M, Scott MP, Zipursky SL, Darnell J. (2004). Molecular Cell Biology. WH Freeman: New York, NY. 5th ed.
Gene expression | PABPII | Chemistry,Biology | 269 |
38,248,921 | https://en.wikipedia.org/wiki/Glossary%20of%20construction%20cost%20estimating | The following is a glossary of terms relating to construction cost estimating.
A
Allocation of costs is the transfer of costs from one cost item to one or more other cost items.
Allowance - a value in an estimate to cover the cost of known but not yet fully defined work.
As-sold estimate - the estimate which matches the agreed items and price for the project scope.
B
Basis of estimate (BOE) - a document which describes the scope basis, pricing basis, methods, qualifications, assumptions, inclusions, and exclusions.
Bill of materials (BOM) - a list of materials required for the construction of a project or part of a project, which may include quantities.
Bill of quantities (BOQ) - a document used in tendering in the construction industry in which materials, parts, and labor (and their costs) are itemized. It also (ideally) details the terms and conditions of the construction or repair contract and itemises all work to enable a contractor to price the work for which he or she is bidding.
Bond - usually refers to a performance bond, which is a surety bond issued by an insurance company or a bank to guarantee satisfactory completion of a project by a contractor. Other types of guarantees, such as a bid bond or a materials bond, are sometimes also required by a project owner.
C
Chart of accounts (Code of accounts) (COA) - a created list of the accounts used by a business entity to define each class of items for which money or the equivalent is spent or received. It is used to organize the finances of the entity and to segregate expenditures, revenue, assets and liabilities in order to give interested parties a better understanding of the financial health of the entity.
City cost index - see: Location cost index. RSMeans publishes a city cost index table.
Construction is a process that consists of the creation, modification, or demolition of facilities, buildings, civil and monumental works, and infrastructure.
Construction cost - the total cost to construct a project. This value usually does not include the preplanning, site or right of way acquisition, or design costs, and may not include start-up and commissioning costs. This total or subtotal is usually identified as such in an estimate report. Also known as Total Estimated Contract Cost (TECC).
Consumables are goods that, according to the 1913 edition of Webster's Dictionary, are capable of being consumed; that may be destroyed, dissipated, wasted, or spent (also known as consumable goods, nondurable goods, or soft goods). In construction, these may include such materials as weld rod, fasteners, tape, glue, etc.
Contingency - When estimating the cost for a project, product or other item or investment, there is always uncertainty as to the precise content of all items in the estimate, how work will be performed, what work conditions will be like when the project is executed and so on. These uncertainties are risks to the project. Some refer to these risks as "known-unknowns" because the estimator is aware of them, and based on past experience, can even estimate their probable costs. The estimated costs of the known-unknowns is referred to by cost estimators as cost contingency.
Cost - the value of currency required to obtain a product or service, to expend labor and use equipment and tools, or to operate a business.
Cost index (or factor) - a value used to adjust a cost from one time to another. There are various published cost indexes, listed by year, quarter, or month. RSMeans publishes a historical cost index.
Costing - the process of applying appropriate costs to the line items after the take off. RSMeans refers to this as, "Price the quantities." May also be called pricing.
Crew – a group of people (workers) who execute a construction activity. The crew may also include construction equipment required to execute the work.
Crew hour (ch) – one crew's effort for one hour of time.
D
Deliverable is a term used in project management to describe a tangible or intangible object produced as a result of the project that is intended to be delivered to a customer (either internal or external).
Direct costs are directly attributable to the cost object. In construction, the costs of materials, labor, equipment, etc., and all directly involved efforts or expenses for the cost object are direct costs.
Distributables – a classification of project costs which are not associated with any specific direct account.
Duration – the amount of clock or calendar time which is required to execute a work activity or task.
E
Effort - the work done in accomplishing a task or project. May be a measurement of the hours required.
Equipment - (1) a category of cost for organizing and summarizing costs, (2) construction equipment used to execute the project work, (3) engineered equipment such as pumps or tanks.
Escalation is defined as changes in the cost or price of specific goods or services in a given economy over a period. In estimates, escalation is an allowance to provide for the anticipated escalation of costs during construction.
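A common way to size an escalation allowance is to compound an assumed annual escalation rate over the period until the work is performed. The following minimal Python sketch illustrates the arithmetic; the base cost, rate, and duration are hypothetical, not recommended values.

def escalated_cost(base_cost, annual_rate, years):
    # Compound a present-day cost forward by an assumed annual escalation rate.
    return base_cost * (1 + annual_rate) ** years

base = 2_000_000.00                      # cost in today's currency (hypothetical)
future = escalated_cost(base, 0.03, 2)   # assume 3% per year for 2 years
allowance = future - base                # escalation allowance carried in the estimate
print(round(future, 2), round(allowance, 2))   # about 2,121,800 and 121,800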
Estimation in project management is the processes of making cost estimates using the appropriate techniques.
F
Facility - an installation, contrivance, or other thing which facilitates something; a place for doing something. A building, plant, road, reservoir, etc.
Field Supervision (or field non-manual) - supervisory personnel and all other non-manual staff at the construction site.
Foreman - the worker or tradesman who is in charge of a construction crew. The foreman may be a hands-on worker who contributes to the work completion or a non-working foreman. A general foreman may be in charge of all or some crews.
Fringe Benefits - labor cost elements which are provided to pay for benefits received by workers, such as health insurance, pension, training, etc.
G
General & Administrative Costs (G&A) - the costs of operating a construction business. These costs include such things as office space, office staff, operating facilities, etc. They are not associated with any specific project, but may be allocated across projects in a cost estimate. See also Overhead, Indirect cost.
General contractor is responsible for the day-to-day oversight of a construction site, management of vendors and trades, and communication of information to involved parties throughout the course of a building project.
General requirements - costs for general requirements (Division 1) of the project execution which are actually part of the deliverable. Examples: project management & coordination, temporary facilities & controls, cleaning & waste management, commissioning.
I
Indirect costs are costs that are not directly accountable to a cost object (such as a particular project, facility, function or product). See also, Overhead, General & Administrative Cost, Distributable.
L
Labor – a category of cost which is incurred to employ people (workers, crafts, trades, etc.) in the execution of construction work activity.
Labor benefits are additional costs (such as holiday pay or health insurance) which the employer pays directly to the employee or into a fund on behalf of the employee.
Labor burden is the cost of payroll taxes and insurances (such as workers' compensation) which the employer must pay to employ workers.
Labor rate (sometimes price) – the amount of currency per unit of time which is required to employ people (workers, crafts, trades, etc.) in the execution of construction work activity. The rate may represent the wage rate only, or may include various benefits and labor burdens.
Line item - one element of cost in an estimate which is listed in the estimate spreadsheet.
Location cost index (or factor) - the ratio of the cost in one location to that in another location. These may include or exclude currency exchange rates. Example: 223 in Boston / 187 in Austin = 1.19. The location cost factor is used to adjust the cost from one location to another. To adjust a known cost in Austin to that in Boston, multiply the Austin cost by 1.19. See also City cost index.
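As a minimal illustration of the adjustment described above, the following Python sketch converts a cost known in one city to its equivalent in another, using the index values from the example (the base cost is hypothetical and the indexes are illustrative, not current published figures):

def adjust_for_location(known_cost, known_index, target_index):
    # Multiply the known cost by the ratio of the target index to the known index.
    return known_cost * (target_index / known_index)

austin_cost = 1_000_000.00                       # known cost in Austin (hypothetical)
boston_cost = adjust_for_location(austin_cost, 187, 223)
print(round(223 / 187, 2))    # location cost factor, about 1.19
print(round(boston_cost))     # equivalent cost in Boston, about 1,192,513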
Lump sum – "the complete in-place cost of a system, a subsystem, a particular item, or an entire project."
M
Man-hour (mh) – one person's (worker, craftsman, tradesman, etc.) effort for one hour of time. Note: some attempt to make this gender neutral by renaming this as work hour or job hour or person hour, or something similar.
Man-hour norms - a set of standard man-hour rates for work tasks, given normal working conditions.
Man-hour rate – the amount of man-hours which are consumed executing one unit of work activity. Man-hour rate = man-hours required for work / completed work quantity. Example: Excavation 0.125 mh/cy. The man-hour rate is related to the inverse of the production rate times the number of workers in the crew performing the work. Example: Excavation at 8 cy/day (8-hour day) with 2-man crew = 2 x 8 / 8 = 2 man-hours/cy. See also Production rate. Note: some sources call this Productivity, an unnecessary confusion.
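The relationship between crew size, daily output, production rate, and man-hour rate can be shown with a short Python sketch using the excavation figures from the example above (illustrative numbers only, not published norms):

def man_hour_rate(crew_size, hours_per_day, daily_output):
    # Man-hours consumed per unit of completed work.
    return (crew_size * hours_per_day) / daily_output

def production_rate(daily_output, hours_per_day):
    # Units of completed work per hour.
    return daily_output / hours_per_day

# Example from the entry: excavation at 8 cy/day in an 8-hour day with a 2-man crew.
print(man_hour_rate(crew_size=2, hours_per_day=8, daily_output=8))   # 2.0 mh/cy
print(production_rate(daily_output=8, hours_per_day=8))              # 1.0 cy/hour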
Mark-up is the difference between the cost of a good or service and its selling price. A markup is added on to the total cost incurred by the producer of a good or service in order to create a profit.
Manual labor is physical work done by people involved in constructing the project. All of the various trade workers are included in manual labor, including foremen.
Means & methods - the means and methods used in executing the work.
N
Non-manual labor - work done by people who are not classified as manual labor.
Non-productive time - work time which is paid but does not contribute to the production of work. Examples: safety meeting, travel time, clean up time, wash up time, etc.
O
Open shop is a place of employment at which one is not required to join or financially support a union (closed shop) as a condition of hiring or continued employment. Open shop is also known as a merit shop.
Overhead - In business, overhead or overhead expense refers to an ongoing expense of operating a business; it is also known as an "operating expense." See also General & Administrative Cost, Indirect Cost.
Overtime is the amount of time someone works beyond normal working hours.
P
Per diem - a daily allowance for expenses, a specific amount of money that an organization gives an individual per day to cover living and traveling expenses (allowance) in connection with work done away from home or on tour. (Latin for "per day" or "for each day")
Plug number - a value inserted in an estimate as a place holder and an approximation of the cost for a scope element which has not been detailed yet. See also Allowance.
Premium pay - the extra portion of wages paid when a worker works overtime. Example: Wage rate is 10.00/hour, overtime is paid at time and a half, or 15.00/hour, the premium pay is 5.00/hour.
Price is the quantity of payment or compensation given by one party to another in return for goods or services.
Pricing is the function of determining the amount of money asked in consideration for undertaking the project. Depending on the market and profit considerations, etc., the price may be more or less than the cost.
Production rate – the quantity of work which is completed in one unit of time. Production rate = completed work quantity / duration. Example: Excavation 8 cy/day = 1 cy/hour (in an 8-hour day). RSMeans lists this as Daily Output. See also Man-hour rate. Note: some sources use the term productivity for the production rate or man-hour rate, an unnecessary confusion.
Productivity is the term which relates one rate to another rate, given two differing sets of conditions for the same work. A production rate greater than the reference production rate indicates a higher productivity. A production rate less than the reference production rate indicates a lower productivity. A man-hour rate greater than the reference man-hour rate indicates a lower productivity. A man-hour rate less than the reference man-hour rate indicates a higher productivity. (The economic concept of productivity is an average measure of the efficiency of production. Productivity is a ratio of production output to what is required to produce it (inputs).)
Productivity factor – the ratio of a selected production rate to a reference production rate. Example: selected rate = 102, reference rate = 80, productivity factor = 102/80 = 1.28. Alternatively – the ratio of a reference man-hour rate to a selected man-hour rate. Example: selected rate = 0.104, reference rate = 0.125, productivity factor = 0.125/0.104 = 1.20. A productivity factor is often used to adjust a set of standard or normal (norm) production or man-hour rates to a set of rates for a specific project, location, or set of working conditions (see Labor Productivity Factor). Example: Specific type of excavation – standard = 150 cy/day. For specific project, location, or conditions the productivity factor is 0.80. The resulting production rate for that is 150 x 0.8 = 120 cy/day. The actual productivity factor for a project or subset of production rates (or man-hour rates) is the ratio of the actual production rate to the estimated production rate.
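The calculations above can be summarised in a brief Python sketch; the selected, reference, and norm rates below are the entry's illustrative values rather than standard norms:

def factor_from_production_rates(selected, reference):
    # Productivity factor as the ratio of a selected to a reference production rate.
    return selected / reference

def factor_from_man_hour_rates(selected, reference):
    # For man-hour rates the ratio is inverted: reference over selected.
    return reference / selected

def adjust_norm(norm_production_rate, productivity_factor):
    # Apply a productivity factor to a standard (norm) production rate.
    return norm_production_rate * productivity_factor

print(factor_from_production_rates(102, 80))     # 1.275, i.e. about 1.28
print(factor_from_man_hour_rates(0.104, 0.125))  # about 1.20
print(adjust_norm(150, 0.80))                    # 120.0 cy/day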
Profit - in accounting, is the difference between revenue and cost. In estimates, it is an allowance to provide for anticipated profit upon completion of the project.
Profit margin refers to a measure of profitability. It is calculated by finding the net profit as a percentage of the revenue.
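Because mark-up is expressed against cost while profit margin is expressed against revenue, the same job yields two different percentages. A minimal Python sketch with hypothetical figures:

cost = 100_000.00      # total cost incurred (hypothetical)
price = 125_000.00     # selling price / revenue (hypothetical)

markup = price - cost                           # 25,000 added on top of cost
markup_percent = markup / cost * 100            # 25.0% of cost
margin_percent = (price - cost) / price * 100   # 20.0% of revenue
print(markup, markup_percent, margin_percent)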
Project - A temporary endeavor undertaken to create a unique product, service, or result.
Q
Quality can mean a high degree of excellence ("a quality product") or a degree of excellence or the lack of it ("work of average quality").
Quantify - see: Take off
Quantity is a property that can exist as a magnitude or multitude. For example 1200 mm or 10 each.
Quantity surveyor (QS) is a professional working within the construction industry concerned with building costs, in the U.K. and some other areas. A QS employs standard methods of measurement to develop a bill of quantities.
R
Resources are what is required to carry out a project's tasks. They can be people, equipment, facilities, materials, tools, supplies, or anything else capable of definition required for the completion of a project activity.
Request for information (RFI) - a form of documentation sent to a customer requesting answers to questions about a bid. An RFI is an important part of clarifying plans, designs and specifications during construction.
S
Schedule of values is a detailed statement furnished by a construction contractor, builder or others outlining the portions of the contract sum. It allocates values for the various parts of the work and is also used as the basis for submitting and reviewing progress payments.
Scope of a project in project management is the sum total of all of its products and their requirements or features.
Specification - an explicit set of requirements to be satisfied by a material, product, or service.
Subcontractor is an individual or in many cases a business that signs a contract to perform part or all of the obligations of another's contract.
Supplier - A distributor or other company which supplies materials, parts, equipment, etc.
T
Take off - The process of reviewing and understanding the design package and using the project scope drawings and documents to itemize the scope into line items with measured quantities. RSMeans refers to this as, "Scope out the project," and, "Quantify."
Task - a distinct piece of work performed.
Tool - any physical item that can be used to achieve a goal, especially if the item is not consumed in the process.
U
Unit cost - the cost for one measured unit of completed work activity.
Unit of measure - the term used for how a bid item is quantified.
V
Virtual Design and Construction (VDC) is the use of integrated multi-disciplinary performance models of design-construction projects, including the Product (i.e., facilities), Work Processes and Organization of the design - construction - operation team in order to support explicit and public business objectives. In VDC (BIM is one method) the modeling consists of the usual three dimensions, plus the time dimension and the cost dimension.
W
Work (1) is the amount of effort applied to produce a deliverable or to accomplish a task, (2) is everything required or supplied to complete a construction project.
Work breakdown structure (WBS) - a deliverable oriented decomposition of a project into smaller components. It defines and groups a project's discrete work elements in a way that helps organize and define the total work scope of the project.
Worker - a person engaged in the accomplishment of work. In cost estimating, the hands-on workers contribute to the production and are counted in calculations of the production rate. Other workers supervise or support the hands-on work in some way.
Wage rate - The agreed monetary compensation per hour for a person to accomplish work. This is the pay provided to the worker, excluding any fringe benefits or other labor burdens. Labor unions typically have negotiated agreements which define the wage rates for workers, as well as the rates for fringe benefits.
References
Cost engineering
Construction Cost Estimating
Construction
Wikipedia glossaries using unordered lists | Glossary of construction cost estimating | Engineering | 3,603 |
31,142,144 | https://en.wikipedia.org/wiki/Dubautia%20kenwoodii | Dubautia kenwoodii, the Kalalau rim dubautia, is an "extremely rare" species of flowering plant in the family Asteraceae. It is endemic to Hawaii where it is known only from the island of Kauai. Only one plant has ever been seen: the type specimen. A part of this plant was collected in 1991 and the individual was described as a new species in 1998. It was federally listed as an endangered species of the United States in 2010. Like other Dubautia this plant is known as naenae.
This member of the silversword alliance was discovered growing on a cliff along the Kalalau Rim adjacent to the Kalalau Valley on Kauai. The single plant was examined and collected by the biologist Ken Wood, who rappelled down the cliff to view it. It was later described to science and named for him. After Hurricane Iniki in 1992, this specimen could not be relocated and was feared extirpated. Biologists are hopeful that more individuals of this "exceedingly rare" and "critically endangered" plant will be located as more of Kauai is surveyed.
The only known specimen of the plant was described as a shrub half a meter tall with oppositely arranged leaves up to 12 centimeters long by 2 centimeters wide. The blades are shiny on top and paler on the undersides. The flower heads contain several flowers which turn "rusty yellow in age".
References
kenwoodii
Endemic flora of Hawaii
Biota of Kauai
Plants described in 1998
Species known from a single specimen | Dubautia kenwoodii | Biology | 311 |
627,159 | https://en.wikipedia.org/wiki/Nicholas%20Aylward%20Vigors | Nicholas Aylward Vigors (1785 – 26 October 1840) was an Irish zoologist and politician. He popularized the classification of birds on the basis of the quinarian system.
Early life
Vigors was born at Old Leighlin, County Carlow, in 1785. He was the first son of Capt. Nicholas Aylward Vigors, who served in the 29th (Worcestershire) Regiment, and his first wife, Catherine Vigors, daughter of Solomon Richards of Solsborough. He matriculated at Trinity College, Oxford, in November 1803, and was admitted at Lincoln's Inn in November 1806. Without completing his studies, he served in the army during the Peninsular War from 1809 to 1811 and was wounded in the Battle of Barossa on 5 March 1811. During this period he published "An inquiry into the nature and extent of poetick licence" in London in 1810. He then returned to Oxford to continue his studies and received his Bachelor of Arts in 1817 and Master of Arts in 1818. He practiced as a barrister and became a Doctor of Civil Law in 1832.
Zoology
Vigors was a co-founder of the Zoological Society of London in 1826, and its first secretary until 1833. In that year, he founded what became the Royal Entomological Society of London. He was a fellow of the Linnean Society and the Royal Society. He was the author of 40 papers, mostly on ornithology. He described 110 species of birds, enough to rank him among the top 30 bird authors historically. He provided the text for John Gould's A Century of Birds from the Himalaya Mountains (1830–32).
One bird that he described was "Sabine's snipe". This was treated as a common snipe by Barrett-Hamilton in 1895 and by Meinertzhagen in 1926, but in 1945 was thought more likely to be a Wilson's snipe. Vigors lent a skin for later editions of Thomas Bewick's History of British Birds.
Politics
Vigors succeeded to his father's estate in 1828. He was MP for the borough of Carlow from 1832 until 1835. He briefly represented County Carlow in 1835. Vigors had been elected in a by-election in June after the Conservative MPs originally returned at the 1835 United Kingdom general election were unseated on petition and a new writ issued. On 19 August 1835, Vigors and his running mate, in the two-member county constituency, were unseated on petition. The same two Conservatives who had previously been unseated were awarded the seats. On the death of one of them, Vigors won the subsequent by-election in 1837 and retained the seat until his own death.
References
Bibliography
Parliamentary Election Results in Ireland, 1801-1922, edited by B.M. Walker (Royal Irish Academy 1978)
External links
Art UK: Toucan by Vigors
1785 births
1840 deaths
Alumni of Trinity College, Oxford
British ornithologists
Irish ornithologists
British zoologists
Irish zoologists
Taxon authorities
Members of the Parliament of the United Kingdom for County Carlow constituencies (1801–1922)
Fellows of the Royal Society
Fellows of the Linnean Society of London
Secretaries of the Zoological Society of London
UK MPs 1832–1835
UK MPs 1835–1837
UK MPs 1837–1841
Grenadier Guards officers
British Army personnel of the Napoleonic Wars
Politicians from County Carlow
Irish Repeal Association MPs
Committee members of the Society for the Diffusion of Useful Knowledge
Scientists from County Carlow
Military personnel from County Carlow | Nicholas Aylward Vigors | Biology | 720 |
29,858,383 | https://en.wikipedia.org/wiki/Plank%20%28wood%29 | A plank is timber that is flat, elongated, and rectangular with parallel faces that are higher and longer than wide. Used primarily in carpentry, planks are critical in the construction of ships, houses, bridges, and many other structures. Planks also serve as supports to form shelves and tables.
Usually made from timber, sawed so that the grain runs along the length, planks are usually more than thick, and are generally wider than . In the United States, planks can be any length and are generally a minimum of 2×8, but planks that are 2×10 and 2×12 are more commonly stocked by lumber retailers. Planks are often used as a work surface on elevated scaffolding, and need to be thick enough to provide strength without breaking when walked on. The wood is categorized as a board if its width is less than , and its thickness is less than .
A plank used in a building as a horizontal supporting member that runs between foundations, walls, or beams to support a ceiling or floor is called a joist.
The plank was the basis of maritime transport: wood (except some dense hardwoods) floats on water, and abundant forests meant wooden logs could be easily obtained and processed, making planks the primary material in ship building. However, since the 20th century, wood has largely been supplanted in ship construction by iron and steel, to decrease cost and improve durability.
Gallery
See also
Lumber
Plank cooking
Walking the plank
References
Building materials
Shipbuilding | Plank (wood) | Physics,Engineering | 312 |
10,418,624 | https://en.wikipedia.org/wiki/Renewable%20energy%20commercialization | Renewable energy commercialization involves the deployment of three generations of renewable energy technologies dating back more than 100 years. First-generation technologies, which are already mature and economically competitive, include biomass, hydroelectricity, geothermal power and heat. Second-generation technologies are market-ready and are being deployed at the present time; they include solar heating, photovoltaics, wind power, solar thermal power stations, and modern forms of bioenergy. Third-generation technologies require continued R&D efforts in order to make large contributions on a global scale and include advanced biomass gasification, hot-dry-rock geothermal power, and ocean energy. In 2019, nearly 75% of new installed electricity generation capacity used renewable energy and the International Energy Agency (IEA) has predicted that by 2025, renewable capacity will meet 35% of global power generation.
Public policy and political leadership help to "level the playing field" and drive the wider acceptance of renewable energy technologies. Countries such as Germany, Denmark, and Spain have led the way in implementing innovative policies, which have driven most of the growth over the past decade. As of 2014, Germany has a commitment to the "Energiewende" transition to a sustainable energy economy, and Denmark has a commitment to 100% renewable energy by 2050. There are now 144 countries with renewable energy policy targets.
Renewable energy continued its rapid growth in 2015, providing multiple benefits. New records were set for installed wind and photovoltaic capacity (64 GW and 57 GW respectively), and global renewables investment reached a new high of US$329 billion. A key benefit of this investment growth is growth in jobs. The top countries for investment in recent years were China, Germany, Spain, the United States, Italy, and Brazil. Renewable energy companies include BrightSource Energy, First Solar, Gamesa, GE Energy, Goldwind, Sinovel, Targray, Trina Solar, Vestas, and Yingli.
Climate change concerns are also driving increasing growth in the renewable energy industries. According to a 2011 projection by the IEA, solar power generators may produce most of the world's electricity within 50 years, reducing harmful greenhouse gas emissions.
Background
Rationale for renewables
Climate change, pollution, and energy insecurity are significant problems, and addressing them requires major changes to energy infrastructures. Renewable energy technologies are essential contributors to the energy supply portfolio, as they contribute to world energy security, reduce dependency on fossil fuels, and some also provide opportunities for mitigating greenhouse gases. Climate-disrupting fossil fuels are being replaced by clean, climate-stabilizing, non-depletable sources of energy:
...the transition from coal, oil, and gas to wind, solar, and geothermal energy is well under way. In the old economy, energy was produced by burning something — oil, coal, or natural gas — leading to the carbon emissions that have come to define our economy. The new energy economy harnesses the energy in wind, the energy coming from the sun, and heat from within the earth itself.
In international public opinion surveys there is strong support for a variety of methods for addressing the problem of energy supply. These methods include promoting renewable sources such as solar power and wind power, requiring utilities to use more renewable energy, and providing tax incentives to encourage the development and use of such technologies. It is expected that renewable energy investments will pay off economically in the long term.
EU member countries have shown support for ambitious renewable energy goals. In 2010, Eurobarometer polled the twenty-seven EU member states about the target "to increase the share of renewable energy in the EU by 20 percent by 2020". Most people in all twenty-seven countries either approved of the target or called for it to go further. Across the EU, 57 percent thought the proposed goal was "about right" and 16 percent thought it was "too modest." In comparison, 19 percent said it was "too ambitious".
As of 2011, new evidence has emerged that there are considerable risks associated with traditional energy sources, and that major changes to the mix of energy technologies is needed:
Several mining tragedies globally have underscored the human toll of the coal supply chain. New EPA initiatives targeting air toxics, coal ash, and effluent releases highlight the environmental impacts of coal and the cost of addressing them with control technologies. The use of fracking in natural gas exploration is coming under scrutiny, with evidence of groundwater contamination and greenhouse gas emissions. Concerns are increasing about the vast amounts of water used at coal-fired and nuclear power plants, particularly in regions of the country facing water shortages. Events at the Fukushima nuclear plant have renewed doubts about the ability to operate large numbers of nuclear plants safely over the long term. Further, cost estimates for "next generation" nuclear units continue to climb, and lenders are unwilling to finance these plants without taxpayer guarantees.
The 2014 REN21 Global Status Report says that renewable energies are no longer just energy sources, but ways to address pressing social, political, economic and environmental problems:
Today, renewables are seen not only as sources of energy, but also as tools to address many other pressing needs, including: improving energy security; reducing the health and environmental impacts associated with fossil and nuclear energy; mitigating greenhouse gas emissions; improving educational opportunities; creating jobs; reducing poverty; and increasing gender equality... Renewables have entered the mainstream.
Growth of renewables
In 2008 for the first time, more renewable energy than conventional power capacity was added in both the European Union and United States, demonstrating a "fundamental transition" of the world's energy markets towards renewables, according to a report released by REN21, a global renewable energy policy network based in Paris. In 2010, renewable power consisted about a third of the newly built power generation capacities.
By the end of 2011, total renewable power capacity worldwide exceeded 1,360 GW, up 8%. Renewables producing electricity accounted for almost half of the 208 GW of capacity added globally during 2011. Wind and solar photovoltaics (PV) accounted for almost 40% and 30% respectively. Based on REN21's 2014 report, renewables contributed 19 percent of global energy consumption in 2012 and 22 percent of electricity generation in 2013. This energy consumption breaks down as 9% from traditional biomass, 4.2% as heat energy (non-biomass), 3.8% from hydroelectricity, and 2% as electricity from wind, solar, geothermal, and biomass.
During the five-years from the end of 2004 through 2009, worldwide renewable energy capacity grew at rates of 10–60 percent annually for many technologies, while actual production grew 1.2% overall. In 2011, UN under-secretary general Achim Steiner said: "The continuing growth in this core segment of the green economy is not happening by chance. The combination of government target-setting, policy support and stimulus funds is underpinning the renewable industry's rise and bringing the much needed transformation of our global energy system within reach." He added: "Renewable energies are expanding both in terms of investment, projects and geographical spread. In doing so, they are making an increasing contribution to combating climate change, countering energy poverty and energy insecurity".
According to a 2011 projection by the International Energy Agency, solar power plants may produce most of the world's electricity within 50 years, significantly reducing the emissions of greenhouse gases that harm the environment. The IEA has said: "Photovoltaic and solar-thermal plants may meet most of the world's demand for electricity by 2060 – and half of all energy needs – with wind, hydropower and biomass plants supplying much of the remaining generation". "Photovoltaic and concentrated solar power together can become the major source of electricity".
In 2013, China led the world in renewable energy production, with a total capacity of 378 GW, mainly from hydroelectric and wind power. As of 2014, China leads the world in the production and use of wind power, solar photovoltaic power and smart grid technologies, generating almost as much water, wind and solar energy as all of France and Germany's power plants combined. China's renewable energy sector is growing faster than its fossil fuels and nuclear power capacity. Since 2005, production of solar cells in China has expanded 100-fold. As Chinese renewable manufacturing has grown, the costs of renewable energy technologies have dropped. Innovation has helped, but the main driver of reduced costs has been market expansion.
See also renewable energy in the United States for US figures.
Economic trends
Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2011 IEA report said: "A portfolio of renewable energy technologies is becoming cost-competitive in an increasingly broad range of circumstances, in some cases providing investment opportunities without the need for specific economic support," and added that "cost reductions in critical technologies, such as wind and solar, are set to continue." There have been substantial reductions in the cost of solar and wind technologies:
The price of PV modules per MW has fallen by 60 percent since the summer of 2008, according to Bloomberg New Energy Finance estimates, putting solar power for the first time on a competitive footing with the retail price of electricity in a number of sunny countries. Wind turbine prices have also fallen – by 18 percent per MW in the last two years – reflecting, as with solar, fierce competition in the supply chain. Further improvements in the levelised cost of energy for solar, wind and other technologies lie ahead, posing a growing threat to the dominance of fossil fuel generation sources in the next few years.
Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies.
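The levelised cost of electricity is conventionally calculated by dividing the discounted lifetime costs of a plant by its discounted lifetime electricity output. The short Python sketch below uses this standard simplified formulation; the capital cost, operating cost, output, discount rate, and lifetime are purely illustrative and do not describe any particular technology or project.

def lcoe(capital_cost, annual_om_cost, annual_energy_mwh, discount_rate, lifetime_years):
    # Levelised cost of electricity: discounted lifetime costs / discounted lifetime energy.
    discounted_costs = capital_cost      # capital spent up front (year 0)
    discounted_energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** year
        discounted_costs += annual_om_cost / factor
        discounted_energy += annual_energy_mwh / factor
    return discounted_costs / discounted_energy   # currency units per MWh

# Illustrative inputs only: a 1 MW plant producing 3,000 MWh per year.
print(round(lcoe(1_500_000, 40_000, 3_000, 0.07, 20), 2))   # roughly 60 per MWh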
Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today". As of 2012, renewable power generation technologies accounted for around half of all new power generation capacity additions globally. In 2011, additions included 41 gigawatt (GW) of new wind power capacity, 30 GW of PV, 25 GW of hydro-electricity, 6 GW of biomass, 0.5 GW of CSP, and 0.1 GW of geothermal power.
Three generations of technologies
Renewable energy includes a number of sources and technologies at different stages of commercialization. The International Energy Agency (IEA) has defined three generations of renewable energy technologies, reaching back over 100 years:
"First-generation technologies emerged from the industrial revolution at the end of the 19th century and include hydropower, biomass combustion, geothermal power and heat. These technologies are quite widely used.
Second-generation technologies include solar heating and cooling, wind power, modern forms of bioenergy, and solar photovoltaics. These are now entering markets as a result of research, development and demonstration (RD&D) investments since the 1980s. Initial investment was prompted by energy security concerns linked to the oil crises of the 1970s but the enduring appeal of these technologies is due, at least in part, to environmental benefits. Many of the technologies reflect significant advancements in materials.
Third-generation technologies are still under development and include advanced biomass gasification, biorefinery technologies, concentrating solar thermal power, hot-dry-rock geothermal power, and ocean energy. Advances in nanotechnology may also play a major role".
First-generation technologies are well established, second-generation technologies are entering markets, and third-generation technologies heavily depend on long-term research and development commitments, where the public sector has a role to play.
First-generation technologies
First-generation technologies are widely used in locations with abundant resources. Their future use depends on the exploration of the remaining resource potential, particularly in developing countries, and on overcoming challenges related to the environment and social acceptance.
Biomass
Biomass, the burning of organic materials for heat and power, is a fully mature technology. Unlike most renewable sources, biomass (and hydropower) can supply stable base load power generation.
Biomass produces CO2 emissions on combustion, and the issue of whether biomass is carbon neutral is contested. Material directly combusted in cook stoves produces pollutants, leading to severe health and environmental consequences. Improved cook stove programs are alleviating some of these effects.
The industry remained relatively stagnant over the decade to 2007, but demand for biomass (mostly wood) continues to grow in many developing countries, as well as Brazil and Germany.
The economic viability of biomass is dependent on regulated tariffs, due to high costs of infrastructure and ingredients for ongoing operations. Biomass does offer a ready disposal mechanism by burning municipal, agricultural, and industrial organic waste products. First-generation biomass technologies can be economically competitive, but may still require deployment support to overcome public acceptance and small-scale issues. As part of the food vs. fuel debate, several economists from Iowa State University found in 2008 "there is no evidence to disprove that the primary objective of biofuel policy is to support farm income."
Hydroelectricity
Hydroelectricity is the term referring to electricity generated by hydropower; the production of electrical power through the use of the gravitational force of falling or flowing water. In 2015 hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity and is expected to increase by about 3.1% each year for the next 25 years. Hydroelectric plants have the advantage of being long-lived and many existing plants have operated for more than 100 years.
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela. The cost of hydroelectricity is low, making it a competitive source of renewable electricity. The average cost of electricity from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour.
Geothermal power and heat
Geothermal power plants can operate 24 hours per day, providing baseload capacity. Estimates for the world potential capacity for geothermal power generation vary widely, ranging from 40 GW by 2020 to as much as 6,000 GW.
Geothermal power capacity grew from around 1 GW in 1975 to almost 10 GW in 2008. The United States is the world leader in terms of installed capacity, representing 3.1 GW. Other countries with significant installed capacity include the Philippines (1.9 GW), Indonesia (1.2 GW), Mexico (1.0 GW), Italy (0.8 GW), Iceland (0.6 GW), Japan (0.5 GW), and New Zealand (0.5 GW). In some countries, geothermal power accounts for a significant share of the total electricity supply, such as in the Philippines, where geothermal represented 17 percent of the total power mix at the end of 2008.
Geothermal (ground source) heat pumps represented an estimated 30 GWth of installed capacity at the end of 2008, with other direct uses of geothermal heat (i.e., for space heating, agricultural drying and other uses) reaching an estimated 15 GWth. At least 76 countries use direct geothermal energy in some form.
Second-generation technologies
Second-generation technologies have gone from being a passion for the dedicated few to a major economic sector in countries such as Germany, Spain, the United States, and Japan. Many large industrial companies and financial institutions are involved and the challenge is to broaden the market base for continued growth worldwide.
Solar heating
Solar heating systems are a well known second-generation technology and generally consist of solar thermal collectors, a fluid system to move the heat from the collector to its point of usage, and a reservoir or tank for heat storage. The systems may be used to heat domestic hot water, swimming pools, or homes and businesses. The heat can also be used for industrial process applications or as an energy input for other uses such as cooling equipment.
In many warmer climates, a solar heating system can provide a very high percentage (50 to 75%) of domestic hot water energy. China has 27 million rooftop solar water heaters.
Photovoltaics
Photovoltaic (PV) cells, also called solar cells, convert light into electricity. In the 1980s and early 1990s, most photovoltaic modules were used to provide remote-area power supply, but from around 1995, industry efforts have focused increasingly on developing building integrated photovoltaics and photovoltaic power stations for grid connected applications.
Many plants are integrated with agriculture and some use innovative tracking systems that follow the sun's daily path across the sky to generate more electricity than conventional fixed-mounted systems. There are no fuel costs or emissions during operation of the power stations.
Wind power
Some of the second-generation renewables, such as wind power, have high potential and have already realised relatively low production costs. Wind power could become cheaper than nuclear power. Global wind power installations increased by 35,800 MW in 2010, bringing total installed capacity up to 194,400 MW, a 22.5% increase on the 158,700 MW installed at the end of 2009. The increase for 2010 represents investments totalling €47.3 billion (US$65 billion) and for the first time more than half of all new wind power was added outside of the traditional markets of Europe and North America, mainly driven by the continuing boom in China, which accounted for nearly half of all of the installations at 16,500 MW. China now has 42,300 MW of wind power installed. Wind power accounts for approximately 19% of electricity generated in Denmark, 9% in Spain and Portugal, and 6% in Germany and the Republic of Ireland. In the Australian state of South Australia, wind power, championed by Premier Mike Rann (2002–2011), now comprises 26% of the state's electricity generation, edging out coal-fired power. At the end of 2011 South Australia, with 7.2% of Australia's population, had 54% of the nation's installed wind power capacity.
Wind power's share of worldwide electricity usage at the end of 2014 was 3.1%.
The wind industry is able to produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at higher elevations. This has opened up new opportunities and in Indiana, Michigan, and Ohio, the price of power from wind turbines built 300 feet to 400 feet above the ground can now compete with conventional fossil fuels like coal. Prices have fallen to about 4 cents per kilowatt-hour in some cases and utilities have been increasing the amount of wind energy in their portfolio, saying it is their cheapest option.
Solar thermal power stations
Solar thermal power stations include the 354 megawatt (MW) Solar Energy Generating Systems power plant in the US, Solnova Solar Power Station (Spain, 150 MW), Andasol solar power station (Spain, 100 MW), Nevada Solar One (USA, 64 MW), PS20 solar power tower (Spain, 20 MW), and the PS10 solar power tower (Spain, 11 MW). The 370 MW Ivanpah Solar Power Facility, located in California's Mojave Desert, is the world's largest solar-thermal power plant project currently under construction. Many other plants are under construction or planned, mainly in Spain and the USA. In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.
Modern forms of bioenergy
Global ethanol production for transport fuel tripled between 2000 and 2007 from 17 billion to more than 52 billion litres, while biodiesel expanded more than tenfold from less than 1 billion to almost 11 billion litres. Biofuels provide 1.8% of the world's transport fuel and recent estimates indicate a continued high growth. The main producing countries for transport biofuels are the US, Brazil, and the EU.
Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18 percent of the country's automotive fuel. As a result of this and the exploitation of domestic deep water oil sources, Brazil, which for years had to import a large share of the petroleum needed for domestic consumption, recently reached complete self-sufficiency in liquid fuels.
Nearly all the gasoline sold in the United States today is mixed with 10 percent ethanol, a mix known as E10, and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, DaimlerChrysler, and GM are among the automobile companies that sell flexible-fuel cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol (E85). The challenge is to expand the market for biofuels beyond the farm states where they have been most popular to date. The Energy Policy Act of 2005, which sets a target volume of biofuels to be used annually by 2012, will also help to expand the market.
The growing ethanol and biodiesel industries are providing jobs in plant construction, operations, and maintenance, mostly in rural communities. According to the Renewable Fuels Association, "the ethanol industry created almost 154,000 U.S. jobs in 2005 alone, boosting household income by $5.7 billion. It also contributed about $3.5 billion in tax revenues at the local, state, and federal levels".
Third-generation technologies
Third-generation renewable energy technologies are still under development and include advanced biomass gasification, biorefinery technologies, hot-dry-rock geothermal power, and ocean energy. Third-generation technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research and development funding.
New bioenergy technologies
According to the International Energy Agency, cellulosic ethanol biorefineries could allow biofuels to play a much bigger role in the future than organizations such as the IEA previously thought. Cellulosic ethanol can be made from plant matter composed primarily of inedible cellulose fibers that form the stems and branches of most plants. Crop residues (such as corn stalks, wheat straw and rice straw), wood waste, and municipal solid waste are potential sources of cellulosic biomass. Dedicated energy crops, such as switchgrass, are also promising cellulose sources that can be sustainably produced in many regions.
Ocean energy
Ocean energy is all forms of renewable energy derived from the sea including wave energy, tidal energy, river current, ocean current energy, offshore wind, salinity gradient energy and ocean thermal gradient energy.
The Rance Tidal Power Station (240 MW) is the world's first tidal power station. The facility is located on the estuary of the Rance River, in Brittany, France. Opened on 26 November 1966, it is currently operated by Électricité de France, and is the largest tidal power station in the world, in terms of installed capacity.
First proposed more than thirty years ago, systems to harvest utility-scale electrical power from ocean waves have recently been gaining momentum as a viable technology. The potential for this technology is considered promising, especially on west-facing coasts with latitudes between 40 and 60 degrees:
In the United Kingdom, for example, the Carbon Trust recently estimated the extent of the economically viable offshore resource at 55 TWh per year, about 14% of current national demand. Across Europe, the technologically achievable resource has been estimated to be at least 280 TWh per year. In 2003, the U.S. Electric Power Research Institute (EPRI) estimated the viable resource in the United States at 255 TWh per year (6% of demand).
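As a quick arithmetic check (my own, not part of the cited estimates), the quoted wave-energy shares imply national electricity demand of roughly 390 TWh per year for the UK and about 4,250 TWh per year for the US:

# Back-of-the-envelope check of the national demand implied by the figures above.
uk_wave_twh, uk_share = 55, 0.14     # UK: 55 TWh/yr said to be about 14% of demand
us_wave_twh, us_share = 255, 0.06    # US: 255 TWh/yr said to be about 6% of demand
print(f"implied UK electricity demand: {uk_wave_twh / uk_share:.0f} TWh/yr")   # ~393
print(f"implied US electricity demand: {us_wave_twh / us_share:.0f} TWh/yr")   # ~4250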
There are currently nine projects, completed or in-development, off the coasts of the United Kingdom, United States, Spain and Australia to harness the rise and fall of waves by Ocean Power Technologies. The current maximum power output is 1.5 MW (Reedsport, Oregon), with development underway for 100 MW (Coos Bay, Oregon).
Enhanced geothermal systems
, geothermal power development was under way in more than 40 countries, partially attributable to the development of new technologies, such as Enhanced Geothermal Systems. The development of binary cycle power plants and improvements in drilling and extraction technology may enable enhanced geothermal systems over a much greater geographical range than "traditional" Geothermal systems. Demonstration EGS projects are operational in the US, Australia, Germany, France, and the United Kingdom.
Advanced solar concepts
Beyond the already established solar photovoltaics and solar thermal power technologies are advanced solar concepts such as the solar updraft tower or space-based solar power. These concepts have yet to be commercialized, if they ever are.
The Solar updraft tower (SUT) is a renewable-energy power plant for generating electricity from low temperature solar heat. Sunshine heats the air beneath a very wide greenhouse-like roofed collector structure surrounding the central base of a very tall chimney tower. The resulting convection causes a hot air updraft in the tower by the chimney effect. This airflow drives wind turbines placed in the chimney updraft or around the chimney base to produce electricity. Plans for scaled-up versions of demonstration models will allow significant power generation, and may allow development of other applications, such as water extraction or distillation, and agriculture or horticulture.
A more advanced version of a similarly themed technology is the atmospheric vortex engine (AVE), which aims to replace large physical chimneys with a vortex of air created by a shorter, less-expensive structure.
Space-based solar power (SBSP) is the concept of collecting solar power in space (using an "SPS", that is, a "solar-power satellite" or a "satellite power system") for use on Earth. It has been in research since the early 1970s. SBSP would differ from current solar collection methods in that the means used to collect energy would reside on an orbiting satellite instead of on Earth's surface. Some projected benefits of such a system are a higher collection rate and a longer collection period due to the lack of a diffusing atmosphere and night time in space.
Renewable energy industry
Total investment in renewable energy reached $211 billion in 2010, up from $160 billion in 2009. The top countries for investment in 2010 were China, Germany, the United States, Italy, and Brazil. Continued growth for the renewable energy sector is expected and promotional policies helped the industry weather the 2009 economic crisis better than many other sectors.
Wind power companies
, Vestas (from Denmark) is the world's top wind turbine manufacturer in terms of percentage of market volume, and Sinovel (from China) is in second place. Together Vestas and Sinovel delivered 10,228 MW of new wind power capacity in 2010, and their market share was 25.9 percent. GE Energy (USA) was in third place, closely followed by Goldwind, another Chinese supplier. German Enercon ranks fifth in the world, and is followed in sixth place by Indian-based Suzlon.
Photovoltaic market trends
The solar PV market has been growing for the past few years. According to solar PV research company PVinsights, worldwide shipment of solar modules in 2011 was around 25 GW, with year-over-year shipment growth of around 40%. The top 5 solar module players in 2011 were, in order, Suntech, First Solar, Yingli, Trina, and Sungen. These top 5 solar module companies held a 51.3% market share of solar modules, according to PVinsights' market intelligence report.
The PV industry has seen drops in module prices since 2008. In late 2011, factory-gate prices for crystalline-silicon photovoltaic modules dropped below the $1.00/W mark. The $1.00/W installed cost is often regarded in the PV industry as marking the achievement of grid parity for PV. These reductions have taken many stakeholders, including industry analysts, by surprise, and perceptions of current solar power economics often lag behind reality. Some stakeholders still have the perspective that solar PV remains too costly on an unsubsidized basis to compete with conventional generation options. Yet technological advancements, manufacturing process improvements, and industry re-structuring mean that further price reductions are likely in coming years.
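To illustrate why an installed cost near $1.00/W is often linked to grid parity, the following minimal levelised-cost sketch can be used; the capacity factor, lifetime, and discount rate below are illustrative assumptions of mine rather than figures from the text, and real LCOE calculations also include O&M, degradation, and financing details.

def simple_lcoe(cost_per_watt, capacity_factor, lifetime_years, discount_rate):
    """Rough levelised cost of electricity ($/kWh) for a zero-fuel-cost plant,
    ignoring O&M, degradation, and taxes (illustrative only)."""
    capex_per_kw = cost_per_watt * 1000.0
    # Capital recovery factor converts the upfront capex into an equivalent annual payment.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years) / (
        (1 + discount_rate) ** lifetime_years - 1
    )
    annual_kwh_per_kw = capacity_factor * 8760.0
    return capex_per_kw * crf / annual_kwh_per_kw

# $1.00/W installed, 20% capacity factor, 25-year life, 6% discount rate
print(round(simple_lcoe(1.00, 0.20, 25, 0.06), 3))  # about 0.045 $/kWh

With these assumptions the result is roughly 4.5 cents per kWh, in the range of conventional wholesale generation costs; changing the assumed capacity factor or discount rate moves the figure substantially.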
Non-technical barriers to acceptance
Many energy markets, institutions, and policies have been developed to support the production and use of fossil fuels. Newer and cleaner technologies may offer social and environmental benefits, but utility operators often reject renewable resources because they are trained to think only in terms of big, conventional power plants. Consumers often ignore renewable power systems because they are not given accurate price signals about electricity consumption. Intentional market distortions (such as subsidies), and unintentional market distortions (such as split incentives) may work against renewables. Benjamin K. Sovacool has argued that "some of the most surreptitious, yet powerful, impediments facing renewable energy and energy efficiency in the United States are more about culture and institutions than engineering and science".
The obstacles to the widespread commercialization of renewable energy technologies are primarily political, not technical, and there have been many studies which have identified a range of "non-technical barriers" to renewable energy use. These barriers are impediments which put renewable energy at a marketing, institutional, or policy disadvantage relative to other forms of energy. Key barriers include:
Difficulty overcoming established energy systems, which includes difficulty introducing innovative energy systems, particularly for distributed generation such as photovoltaics, because of technological lock-in, electricity markets designed for centralized power plants, and market control by established operators. As the Stern Review on the Economics of Climate Change points out:
"National grids are usually tailored towards the operation of centralised power plants and thus favour their performance. Technologies that do not easily fit into these networks may struggle to enter the market, even if the technology itself is commercially viable. This applies to distributed generation as most grids are not suited to receive electricity from many small sources. Large-scale renewables may also encounter problems if they are sited in areas far from existing grids."
Lack of government policy support, which includes the lack of policies and regulations supporting deployment of renewable energy technologies and the presence of policies and regulations hindering renewable energy development and supporting conventional energy development. Examples include subsidies for fossil-fuels, insufficient consumer-based renewable energy incentives, government underwriting for nuclear plant accidents, and complex zoning and permitting processes for renewable energy.
Lack of information dissemination and consumer awareness.
Higher capital cost of renewable energy technologies compared with conventional energy technologies.
Inadequate financing options for renewable energy projects, including insufficient access to affordable financing for project developers, entrepreneurs and consumers.
Imperfect capital markets, which includes failure to internalize all costs of conventional energy (e.g., effects of air pollution, risk of supply disruption) and failure to internalize all benefits of renewable energy (e.g., cleaner air, energy security).
Inadequate workforce skills and training, which includes lack of adequate scientific, technical, and manufacturing skills required for renewable energy production; lack of reliable installation, maintenance, and inspection services; and failure of the educational system to provide adequate training in new technologies.
Lack of adequate codes, standards, utility interconnection, and net-metering guidelines.
Poor public perception of renewable energy system aesthetics.
Lack of stakeholder/community participation and co-operation in energy choices and renewable energy projects.
With such a wide range of non-technical barriers, there is no "silver bullet" solution to drive the transition to renewable energy. So ideally there is a need for several different types of policy instruments to complement each other and overcome different types of barriers.
A policy framework must be created that will level the playing field and redress the imbalance of traditional approaches associated with fossil fuels. The policy landscape must keep pace with broad trends within the energy sector, as well as reflecting specific social, economic and environmental priorities. Some resource-rich countries struggle to move away from fossil fuels and have failed thus far to adopt regulatory frameworks necessary for developing renewable energy (e.g. Russia).
Public policy landscape
Public policy has a role to play in renewable energy commercialization because the free market system has some fundamental limitations. As the Stern Review points out: "In a liberalised energy market, investors, operators and consumers should face the full cost of their decisions. But this is not the case in many economies or energy sectors. Many policies distort the market in favour of existing fossil fuel technologies." The International Solar Energy Society has stated that "historical incentives for the conventional energy resources continue even today to bias markets by burying many of the real societal costs of their use".
Fossil-fuel energy systems have different production, transmission, and end-use costs and characteristics than do renewable energy systems, and new promotional policies are needed to ensure that renewable systems develop as quickly and broadly as is socially desirable. Lester Brown states that the market "does not incorporate the indirect costs of providing goods or services into prices, it does not value nature's services adequately, and it does not respect the sustainable-yield thresholds of natural systems". It also favors the near term over the long term, thereby showing limited concern for future generations. Tax and subsidy shifting can help overcome these problems, though is also problematic to combine different international normative regimes regulating this issue.
Shifting taxes
Tax shifting has been widely discussed and endorsed by economists. It involves lowering income taxes while raising levies on environmentally destructive activities, in order to create a more responsive market. For example, a tax on coal that included the increased health care costs associated with breathing polluted air, the costs of acid rain damage, and the costs of climate disruption would encourage investment in renewable technologies. Several Western European countries are already shifting taxes in a process known there as environmental tax reform.
In 2001, Sweden launched a new 10-year environmental tax shift designed to convert 30 billion kroner ($3.9 billion) of income taxes to taxes on environmentally destructive activities. Other European countries with significant tax reform efforts are France, Italy, Norway, Spain, and the United Kingdom. Asia's two leading economies, Japan and China, are considering carbon taxes.
Shifting subsidies
Just as there is a need for tax shifting, there is also a need for subsidy shifting. Subsidies are not an inherently bad thing, as many technologies and industries emerged through government subsidy schemes. The Stern Review explains that, of 20 key innovations from the past 30 years, only one was funded entirely by the private sector and nine were totally publicly funded. In terms of specific examples, the Internet was the result of publicly funded links among computers in government laboratories and research institutes. And the combination of the federal tax deduction and a robust state tax deduction in California helped to create the modern wind power industry. At the same time, the US tax credit systems for renewable energy have been described as an "opaque" financial instrument dominated by large investors seeking to reduce their tax payments, with greenhouse gas reduction targets treated as a side effect.
Lester Brown has argued that "a world facing the prospect of economically disruptive climate change can no longer justify subsidies to expand the burning of coal and oil. Shifting these subsidies to the development of climate-benign energy sources such as wind, solar, biomass, and geothermal power is the key to stabilizing the earth's climate." The International Solar Energy Society advocates "leveling the playing field" by redressing the continuing inequities in public subsidies of energy technologies and R&D, in which the fossil fuel and nuclear power receive the largest share of financial support.
Some countries are eliminating or reducing climate-disrupting subsidies and Belgium, France, and Japan have phased out all subsidies for coal. Germany is reducing its coal subsidy. The subsidy dropped from $5.4 billion in 1989 to $2.8 billion in 2002, and in the process Germany lowered its coal use by 46 percent. China cut its coal subsidy from $750 million in 1993 to $240 million in 1995 and more recently has imposed a high-sulfur coal tax. However, the United States has been increasing its support for the fossil fuel and nuclear industries.
In November 2011, an IEA report entitled Deploying Renewables 2011 said "subsidies in green energy technologies that were not yet competitive are justified in order to give an incentive to investing into technologies with clear environmental and energy security benefits". The IEA's report disagreed with claims that renewable energy technologies are only viable through costly subsidies and not able to produce energy reliably to meet demand.
A fair and efficient imposition of subsidies for renewable energies and aiming at sustainable development, however, require coordination and regulation at a global level, as subsidies granted in one country can easily disrupt industries and policies of others, thus underlining the relevance of this issue at the World Trade Organization.
Renewable energy targets
Setting national renewable energy targets can be an important part of a renewable energy policy and these targets are usually defined as a percentage of the primary energy and/or electricity generation mix. For example, the European Union has prescribed an indicative renewable energy target of 12 percent of the total EU energy mix and 22 percent of electricity consumption by 2010. National targets for individual EU Member States have also been set to meet the overall target. Other developed countries with defined national or regional targets include Australia, Canada, Israel, Japan, Korea, New Zealand, Norway, Singapore, Switzerland, and some US States.
National targets are also an important component of renewable energy strategies in some developing countries. Developing countries with renewable energy targets include China, India, Indonesia, Malaysia, the Philippines, Thailand, Brazil, Egypt, Mali, and South Africa. The targets set by many developing countries are quite modest when compared with those in some industrialized countries.
Renewable energy targets in most countries are indicative and nonbinding but they have assisted government actions and regulatory frameworks. The United Nations Environment Program has suggested that making renewable energy targets legally binding could be an important policy tool to achieve higher renewable energy market penetration.
Levelling the playing field
The IEA has identified three actions which will allow renewable energy and other clean energy technologies to "more effectively compete for private sector capital".
"First, energy prices must appropriately reflect the "true cost" of energy (e.g. through carbon pricing) so that the positive and negative impacts of energy production and consumption are fully taken into account". Example: New UK nuclear plants cost £92.50/MWh, whereas offshore wind farms in the UK are supported with €74.2/MWh at a price of £150 in 2011 falling to £130 per MWh in 2022. In Denmark, the price can be €84/MWh.
"Second, inefficient fossil fuel subsidies must be removed, while ensuring that all citizens have access to affordable energy".
"Third, governments must develop policy frameworks that encourage private sector investment in lower-carbon energy options".
Green stimulus programs
In response to the Great Recession, major governments made "green stimulus" programs one of their main policy instruments for supporting economic recovery. Some in green stimulus funding had been allocated to renewable energy and energy efficiency, to be spent mainly in 2010 and in 2011.
Energy sector regulation
Public policy determines the extent to which renewable energy (RE) is to be incorporated into a developed or developing country's generation mix. Energy sector regulators implement that policy—thus affecting the pace and pattern of RE investments and connections to the grid. Energy regulators often have authority to carry out a number of functions that have implications for the financial feasibility of renewable energy projects. Such functions include issuing licenses, setting performance standards, monitoring the performance of regulated firms, determining the price level and structure of tariffs, establishing uniform systems of accounts, arbitrating stakeholder disputes (like interconnection cost allocations), performing management audits, developing agency human resources (expertise), reporting sector and commission activities to government authorities, and coordinating decisions with other government agencies. Thus, regulators make a wide range of decisions that affect the financial outcomes associated with RE investments. In addition, the sector regulator is in a position to give advice to the government regarding the full implications of focusing on climate change or energy security. The energy sector regulator is the natural advocate for efficiency and cost-containment throughout the process of designing and implementing RE policies. Since policies are not self-implementing, energy sector regulators become a key facilitator (or blocker) of renewable energy investments.
Energy transition in Germany
The Energiewende (German for energy transition) is the transition by Germany to a low carbon, environmentally sound, reliable, and affordable energy supply. The new system will rely heavily on renewable energy (particularly wind, photovoltaics, and biomass), energy efficiency, and energy demand management. Most if not all existing coal-fired generation will need to be retired. The phase-out of Germany's fleet of nuclear reactors, to be complete by 2022, is a key part of the program.
Legislative support for the Energiewende was passed in late 2010 and includes greenhouse gas (GHG) reductions of 80–95% by 2050 (relative to 1990) and a renewable energy target of 60% by 2050. These targets are ambitious. The Berlin-based policy institute Agora Energiewende noted that "while the German approach is not unique worldwide, the speed and scope of the Energiewende are exceptional". The Energiewende also seeks a greater transparency in relation to national energy policy formation.
Germany has made significant progress on its GHG emissions reduction target, achieving a 27% decrease between 1990 and 2014. However, Germany will need to maintain an average GHG emissions abatement rate of 3.5% per annum to reach its Energiewende goal, a rate equal to the historical maximum achieved thus far.
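The 3.5% per annum figure can be sanity-checked with a short calculation; the assumptions below (constant proportional reductions, and the lower 80% bound of the 2050 target) are mine:

# Consistency check of the ~3.5%/yr abatement figure.
remaining_2014 = 1 - 0.27        # 27% below 1990 leaves 73% of 1990 emissions
target_2050 = 1 - 0.80           # an 80% reduction leaves 20% of 1990 emissions
years = 2050 - 2014
annual_rate = 1 - (target_2050 / remaining_2014) ** (1 / years)
print(f"required average abatement: {annual_rate:.1%} per year")   # ~3.5%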
Germany spends €1.5 billion per annum on energy research (2013 figure) in an effort to solve the technical and social issues raised by the transition. This includes a number of computer studies that have confirmed the feasibility of the Energiewende and found a similar cost relative to business-as-usual, given that carbon is adequately priced.
These initiatives go well beyond European Union legislation and the national policies of other European states. The policy objectives have been embraced by the German federal government and have resulted in a huge expansion of renewables, particularly wind power. Germany's share of renewables increased from around 5% in 1999 to 22.9% in 2012, surpassing the OECD average of 18%.
Producers have been guaranteed a fixed feed-in tariff for 20 years, guaranteeing a fixed income. Energy co-operatives have been created, and efforts were made to decentralize control and profits. The large energy companies have a disproportionately small share of the renewables market. However, in some cases poor investment designs have caused bankruptcies and low returns, and unrealistic promises have been shown to be far from reality.
Nuclear power plants were closed, and the existing nine plants will close earlier than planned, in 2022.
One factor that has inhibited efficient employment of new renewable energy has been the lack of an accompanying investment in power infrastructure to bring the power to market. It is believed 8,300 km of power lines must be built or upgraded. The different German states have varying attitudes to the construction of new power lines. Industry has had its rates frozen, so the increased costs of the Energiewende have been passed on to consumers, who have faced rising electricity bills.
Voluntary market mechanisms for renewable electricity
Voluntary markets, also referred to as green power markets, are driven by consumer preference. Voluntary markets allow a consumer to choose to do more than policy decisions require and reduce the environmental impact of their electricity use. Voluntary green power products must offer a significant benefit and value to buyers to be successful. Benefits may include zero or reduced greenhouse gas emissions, other pollution reductions or other environmental improvements on power stations.
The driving factors behind voluntary green electricity within the EU are the liberalized electricity markets and the RES Directive. According to the directive, the EU Member States must ensure that the origin of electricity produced from renewables can be guaranteed, and therefore a "guarantee of origin" must be issued (article 15). Environmental organisations use the voluntary market to create new renewable capacity and to improve the sustainability of existing power production. In the US, the main tool to track and stimulate voluntary actions is the Green-e program managed by the Center for Resource Solutions. In Europe, the main voluntary tool used by NGOs to promote sustainable electricity production is the EKOenergy label.
Recent developments
A number of events in 2006 pushed renewable energy up the political agenda, including the US mid-term elections in November, which confirmed clean energy as a mainstream issue. Also in 2006, the Stern Review made a strong economic case for investing in low carbon technologies now, and argued that economic growth need not be incompatible with cutting energy consumption. According to a trend analysis from the United Nations Environment Programme, climate change concerns coupled with recent high oil prices and increasing government support are driving increasing rates of investment in the renewable energy and energy efficiency industries.
Investment capital flowing into renewable energy reached a record US$77 billion in 2007, with the upward trend continuing in 2008. The OECD still dominates, but there is now increasing activity from companies in China, India and Brazil. Chinese companies were the second largest recipient of venture capital in 2006 after the United States. In the same year, India was the largest net buyer of companies abroad, mainly in the more established European markets.
New government spending, regulation, and policies helped the industry weather the 2009 economic crisis better than many other sectors. Most notably, U.S. President Barack Obama's American Recovery and Reinvestment Act of 2009 included more than $70 billion in direct spending and tax credits for clean energy and associated transportation programs. This policy-stimulus combination represents the largest federal commitment in U.S. history for renewables, advanced transportation, and energy conservation initiatives. Based on these new rules, many more utilities strengthened their clean-energy programs. Clean Edge suggests that the commercialization of clean energy will help countries around the world deal with the current economic malaise. The once-promising solar energy company Solyndra became involved in a political controversy over the Obama administration's authorization of a $535 million loan guarantee to the corporation in 2009 as part of a program to promote alternative energy growth. The company ceased all business activity, filed for Chapter 11 bankruptcy, and laid off nearly all of its employees in early September 2011.
In his 24 January 2012, State of the Union address, President Barack Obama restated his commitment to renewable energy. Obama said that he "will not walk away from the promise of clean energy." Obama called for a commitment by the Defense Department to purchase 1,000 MW of renewable energy. He also mentioned the long-standing Interior Department commitment to permit 10,000 MW of renewable energy projects on public land in 2012.
As of 2012, renewable energy plays a major role in the energy mix of many countries globally. Renewables are becoming increasingly economic in both developing and developed countries. Prices for renewable energy technologies, primarily wind power and solar power, continued to drop, making renewables competitive with conventional energy sources. Without a level playing field, however, high market penetration of renewables is still dependent on robust promotional policies. Fossil fuel subsidies, which are far higher than those for renewable energy, remain in place and quickly need to be phased out.
United Nations' Secretary-General Ban Ki-moon has said that "renewable energy has the ability to lift the poorest nations to new levels of prosperity". In October 2011, he "announced the creation of a high-level group to drum up support for energy access, energy efficiency and greater use of renewable energy. The group is to be co-chaired by Kandeh Yumkella, the chair of UN Energy and director general of the UN Industrial Development Organisation, and Charles Holliday, chairman of Bank of America".
Worldwide use of solar power and wind power continued to grow significantly in 2012. Solar electricity consumption increased by 58 percent, to 93 terawatt-hours (TWh). Use of wind power in 2012 increased by 18.1 percent, to 521.3 TWh. Global solar and wind energy installed capacities continued to expand even though new investments in these technologies declined during 2012. Worldwide investment in solar power in 2012 was $140.4 billion, an 11 percent decline from 2011, and wind power investment was down 10.1 percent, to $80.3 billion. But due to lower production costs for both technologies, total installed capacities grew sharply. This investment decline, but growth in installed capacity, may again occur in 2013. Analysts expect the market to triple by 2030. In 2015, investment in renewables exceeded fossils.
100% renewable energy
The incentive to use 100% renewable energy for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. In the Intergovernmental Panel on Climate Change's reviews of scenarios of energy usage that would keep global warming to approximately 1.5 degrees, the proportion of primary energy supplied by renewables increases from 15% in 2020 to 60% in 2050 (median values across all published pathways). The proportion of primary energy supplied by biomass increases from 10% to 27%, with effective controls on whether land use is changed in the growing of biomass. The proportion from wind and solar increases from 1.8% to 21%.
At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply.
Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University and director of its Atmosphere and Energy Program says producing all new energy with wind power, solar power, and hydropower by 2030 is feasible and existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Jacobson says that energy costs with a wind, solar, water system should be similar to today's energy costs.
Renewable projects must often be sited in distant locations, either because of high land prices in urban areas or because of where the renewable resource itself is located, which adds transmission construction costs.
Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."
The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological. According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.
Energy efficiency
Moving towards energy sustainability will require changes not only in the way energy is supplied, but in the way it is used, and reducing the amount of energy required to deliver various goods or services is essential. Opportunities for improvement on the demand side of the energy equation are as rich and diverse as those on the supply side, and often offer significant economic benefits.
A sustainable energy economy requires commitments to both renewables and efficiency. Renewable energy and energy efficiency are said to be the "twin pillars" of sustainable energy policy. The American Council for an Energy-Efficient Economy has explained that both resources must be developed in order to stabilize and reduce carbon dioxide emissions:
Efficiency is essential to slowing the energy demand growth so that rising clean energy supplies can make deep cuts in fossil fuel use. If energy use grows too fast, renewable energy development will chase a receding target. Likewise, unless clean energy supplies come online rapidly, slowing demand growth will only begin to reduce total emissions; reducing the carbon content of energy sources is also needed.
The IEA has stated that renewable energy and energy efficiency policies are complementary tools for the development of a sustainable energy future, and should be developed together instead of being developed in isolation.
See also
Lists
Lists about renewable energy
List of countries by renewable electricity production
List of energy storage projects
List of large wind farms
List of notable renewable energy organizations
List of renewable energy topics by country
Topics
Environmental skepticism
Catching the Sun (film)
Clean Energy Trends
Cost of electricity by source
Ecotax
EKOenergy
Energy security and renewable technology
Environmental tariff
Feed-in Tariff
International Renewable Energy Agency
PV financial incentives
Rocky Mountain Institute
The Clean Tech Revolution
The Third Industrial Revolution
World Council for Renewable Energy
People
Andrew Blakers
Richard L. Crowther
James Dehlsen
Mark Diesendorf
Rolf Disch
David Faiman
Hans-Josef Fell
Harrison Fraker
Chris Goodall
Al Gore
Michael Grätzel
Martin Green
Jan Hamrin
Denis Hayes
Tetsunari Iida
Mark Z. Jacobson
Stefan Krauter
Jeremy Leggett
Richard Levine
Amory Lovins
Gaspar Makale
Joel Makower
Eric Martinot
David Mills
Huang Ming
Leonard L. Northrup Jr.
Arthur Nozik
Monica Oliphant
Stanford R. Ovshinsky
Luis Palmer
Alan Pears
Hélène Pelosse
Ron Pernick
Phil Radford
Jeremy Rifkin
Hermann Scheer
Zhengrong Shi
Benjamin K. Sovacool
Thomas H. Stoner, Jr.
Peter Taylor
Félix Trombe
John Twidell
Martin Vosseler
Stuart Wenham
Clint Wilder
John I. Yellott
Elon Musk
References
Bibliography
Aitken, Donald W. (2010). Transitioning to a Renewable Energy Future, International Solar Energy Society, January, 54 pages.
Armstrong, Robert C., Catherine Wolfram, Robert Gross, Nathan S. Lewis, and M.V. Ramana et al. The Frontiers of Energy, Nature Energy, Vol 1, 11 January 2016.
EurObserv'ER (2012). The state of renewable energies in Europe, 250 pages.
HM Treasury (2006). Stern Review on the Economics of Climate Change, 575 pages.
International Council for Science (c2006). Discussion Paper by the Scientific and Technological Community for the 14th session of the United Nations Commission on Sustainable Development, 17 pages.
International Energy Agency (2006). World Energy Outlook 2006: Summary and Conclusions, OECD, 11 pages.
International Energy Agency (2007). Renewables in global energy supply: An IEA facts sheet, OECD, 34 pages.
International Energy Agency (2008). Deploying Renewables: Principles for Effective Policies, OECD, 8 pages.
International Energy Agency (2011). Deploying Renewables 2011: Best and Future Policy Practice, OECD.
International Energy Agency (2011). Solar Energy Perspectives, OECD.
Lovins, Amory B. (2011). Reinventing Fire: Bold Business Solutions for the New Energy Era, Chelsea Green Publishing, 334 pages.
Makower, Joel, and Ron Pernick and Clint Wilder (2009). Clean Energy Trends 2009, Clean Edge.
National Renewable Energy Laboratory (2006). Non-technical Barriers to Solar Energy Use: Review of Recent Literature, Technical Report, NREL/TP-520-40116, September, 30 pages.
Pernick, Ron and Wilder, Clint (2012). Clean Tech Nation: How the U.S. Can Lead in the New Global Economy, HarperCollins.
External links
Investing: Green technology has big growth potential, LA Times, 2011
Global Renewable Energy: Policies and Measures
Missing the Market Meltdown
Bureau of Land Management 2012 Renewable Energy Priority Projects
Energy policy
Renewable resources
Environmental social science | Renewable energy commercialization | Environmental_science | 11,651 |
2,177,036 | https://en.wikipedia.org/wiki/Perceptual%20psychology | Perceptual psychology is a subfield of cognitive psychology that concerns the conscious and unconscious innate aspects of the human cognitive system: perception.
A pioneer of the field was James J. Gibson. One major study was that of affordances, i.e. the perceived utility of objects in, or features of, one's surroundings. According to Gibson, such features or objects were perceived as affordances and not as separate or distinct objects in themselves. This view was central to several other fields, such as software user interface and usability engineering, environmentalism in psychology, and ultimately to political economy, where the perceptual view was used to explain the omission of key inputs or consequences of economic transactions, i.e. resources and wastes.
Gerard Egan and Robert Bolton explored areas of interpersonal interactions based on the premise that people act in accordance with their perception of a given situation. While behaviour is obvious, a person's thoughts and feelings are masked. This gives rise to the idea that the most common problems between people are based on the assumption that we can guess what the other person is feeling and thinking. They also offered methods, within this scope, for effective communications. This includes reflective listening, assertion skills, conflict resolution etc. Perceptual psychology is often used in therapy to help a patient better their problem-solving skills.
Nativism vs. empiricism
Nativist and empiricist approaches to perceptual psychology have been researched and debated to find out which is the basis in the development of perception. Nativists believe humans are born with all the perceptual abilities needed. Nativism is the favoured theory on perception. Empiricists believe that humans are not born with perceptual abilities, but instead must learn them.
See also
Binding problem
Psychophysics
Physiological psychology
Sociophysics
Vision science
References
Cognitive biases
Cognitive psychology | Perceptual psychology | Biology | 380 |
12,543 | https://en.wikipedia.org/wiki/Groupoid | In mathematics, especially in category theory and homotopy theory, a groupoid (less often Brandt groupoid or virtual group) generalises the notion of group in several equivalent ways. A groupoid can be seen as a:
Group with a partial function replacing the binary operation;
Category in which every morphism is invertible. A category of this sort can be viewed as augmented with a unary operation on the morphisms, called inverse by analogy with group theory. A groupoid where there is only one object is a usual group.
In the presence of dependent typing, a category in general can be viewed as a typed monoid, and similarly, a groupoid can be viewed as simply a typed group. The morphisms take one from one object to another, and form a dependent family of types; thus morphisms might be typed g : A → B, h : B → C, say. Composition is then a total function ∘ : (B → C) → (A → B) → (A → C), so that h ∘ g : A → C.
Special cases include:
Setoids: sets that come with an equivalence relation,
G-sets: sets equipped with an action of a group G.
Groupoids are often used to reason about geometrical objects such as manifolds. Heinrich Brandt (1927) introduced groupoids implicitly via Brandt semigroups.
Definitions
Algebraic
A groupoid can be viewed as an algebraic structure consisting of a set G with a binary partial function ∗.
Precisely, it is a non-empty set G with a unary operation −1 : G → G, and a partial function ∗ : G × G ⇀ G. Here ∗ is not a binary operation because it is not necessarily defined for all pairs of elements of G. The precise conditions under which ∗ is defined are not articulated here and vary by situation.
The operations ∗ and −1 have the following axiomatic properties: For all a, b, and c in G,
Associativity: If a ∗ b and b ∗ c are defined, then (a ∗ b) ∗ c and a ∗ (b ∗ c) are defined and are equal. Conversely, if one of (a ∗ b) ∗ c or a ∗ (b ∗ c) is defined, then they are both defined (and they are equal to each other), and a ∗ b and b ∗ c are also defined.
Inverse: a−1 ∗ a and a ∗ a−1 are always defined.
Identity: If a ∗ b is defined, then a ∗ b ∗ b−1 = a, and a−1 ∗ a ∗ b = b. (The previous two axioms already show that these expressions are defined and unambiguous.)
Two easy and convenient properties follow from these axioms:
(a−1)−1 = a,
If a ∗ b is defined, then (a ∗ b)−1 = b−1 ∗ a−1.
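As a concrete illustration of the algebraic definition (my own example, not part of the original text), the following sketch models the groupoid of ordered pairs on a three-element set, where composition is defined only when the middle entries match, and verifies the axioms by brute force:

from itertools import product

X = [0, 1, 2]
G = [(x, y) for x, y in product(X, X)]           # arrow (x, y): "from y to x"

def compose(f, g):
    """Partial operation: (x, y) * (y, z) = (x, z), otherwise undefined (returns None)."""
    (a, b), (c, d) = f, g
    return (a, d) if b == c else None

def inverse(f):
    a, b = f
    return (b, a)

for f in G:
    # Inverse axiom: f^-1 * f and f * f^-1 are always defined.
    assert compose(f, inverse(f)) is not None and compose(inverse(f), f) is not None
    for g in G:
        fg = compose(f, g)
        if fg is not None:
            # Identity axiom: (f * g) * g^-1 = f and f^-1 * (f * g) = g.
            assert compose(fg, inverse(g)) == f
            assert compose(inverse(f), fg) == g
        for h in G:
            gh = compose(g, h)
            if fg is not None and gh is not None:
                # Associativity: (f * g) * h = f * (g * h) whenever both sides exist.
                assert compose(fg, h) == compose(f, gh)

print("the pair groupoid on three points satisfies the groupoid axioms")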
Category-theoretic
A groupoid is a small category in which every morphism is an isomorphism, i.e., invertible. More explicitly, a groupoid is a set G0 of objects with
for each pair of objects x and y, a (possibly empty) set G(x,y) of morphisms (or arrows) from x to y; we write f : x → y to indicate that f is an element of G(x,y);
for every object x, a designated element id_x of G(x, x);
for each triple of objects x, y, and z, a composition function comp : G(y, z) × G(x, y) → G(x, z), written (g, f) ↦ g ∘ f;
for each pair of objects x, y, an inverse function inv : G(x, y) → G(y, x), written f ↦ f−1, satisfying, for any f : x → y, g : y → z, and h : z → w:
f ∘ id_x = f and id_y ∘ f = f;
(h ∘ g) ∘ f = h ∘ (g ∘ f);
f ∘ f−1 = id_y and f−1 ∘ f = id_x.
If f is an element of G(x,y), then x is called the source of f, written s(f), and y is called the target of f, written t(f).
A groupoid G is sometimes denoted as G1 ⇉ G0, where G1 is the set of all morphisms, and the two arrows G1 → G0 represent the source and the target.
More generally, one can consider a groupoid object in an arbitrary category admitting finite fiber products.
Comparing the definitions
The algebraic and category-theoretic definitions are equivalent, as we now show. Given a groupoid in the category-theoretic sense, let G be the disjoint union of all of the sets G(x,y) (i.e. the sets of morphisms from x to y). Then comp and inv become partial operations on G, and inv will in fact be defined everywhere. We define ∗ to be comp and −1 to be inv, which gives a groupoid in the algebraic sense. Explicit reference to G0 (and hence to id) can be dropped.
Conversely, given a groupoid G in the algebraic sense, define an equivalence relation ~ on its elements by
a ~ b iff a ∗ a−1 = b ∗ b−1. Let G0 be the set of equivalence classes of ~, i.e. G0 := G/~. Denote a ∗ a−1 by 1x if x ∈ G0 with a ∈ x.
Now define G(x, y) as the set of all elements f such that 1y ∗ f ∗ 1x exists. Given f ∈ G(x, y) and g ∈ G(y, z), their composite is defined as g ∘ f := g ∗ f ∈ G(x, z). To see that this is well defined, observe that since 1z ∗ g ∗ 1y and 1y ∗ f ∗ 1x exist, so does (1z ∗ g ∗ 1y) ∗ (1y ∗ f ∗ 1x) = g ∗ f. The identity morphism on x is then 1x, and the category-theoretic inverse of f is f−1.
Sets in the definitions above may be replaced with classes, as is generally the case in category theory.
Vertex groups and orbits
Given a groupoid G, the vertex groups or isotropy groups or object groups in G are the subsets of the form G(x,x), where x is any object of G. It follows easily from the axioms above that these are indeed groups, as every pair of elements is composable and inverses are in the same vertex group.
The orbit of a groupoid G at a point x is given by the set containing every point that can be joined to x by a morphism in G. If two points x and y are in the same orbit, their vertex groups G(x,x) and G(y,y) are isomorphic: if f is any morphism from x to y, then the isomorphism is given by the mapping g ↦ f ∘ g ∘ f−1.
Orbits form a partition of the set X, and a groupoid is called transitive if it has only one orbit (equivalently, if it is connected as a category). In that case, all the vertex groups are isomorphic (on the other hand, this is not a sufficient condition for transitivity; see the section below for counterexamples).
Subgroupoids and morphisms
A subgroupoid of a groupoid G is a subcategory H that is itself a groupoid. It is called wide or full if it is wide or full as a subcategory, i.e., respectively, if H contains every object of G, or if H(x, y) = G(x, y) for every pair of objects x, y in H.
A groupoid morphism is simply a functor between two (category-theoretic) groupoids.
Particular kinds of morphisms of groupoids are of interest. A morphism p : E → B of groupoids is called a fibration if for each object x of E and each morphism b of B starting at p(x) there is a morphism e of E starting at x such that p(e) = b. A fibration is called a covering morphism or covering of groupoids if further such an e is unique. The covering morphisms of groupoids are especially useful because they can be used to model covering maps of spaces.
It is also true that the category of covering morphisms of a given groupoid is equivalent to the category of actions of the groupoid on sets.
Examples
Topology
Given a topological space X, let the set of objects be X itself. The morphisms from the point p to the point q are equivalence classes of continuous paths from p to q, with two paths being equivalent if they are homotopic.
Two such morphisms are composed by first following the first path, then the second; the homotopy equivalence guarantees that this composition is associative. This groupoid is called the fundamental groupoid of X, denoted π1(X) (or sometimes, Π1(X)). The usual fundamental group π1(X, x) is then the vertex group for the point x.
The orbits of the fundamental groupoid π1(X) are the path-connected components of X. Accordingly, the fundamental groupoid of a path-connected space is transitive, and we recover the known fact that the fundamental groups at any base point are isomorphic. Moreover, in this case, the fundamental groupoid and the fundamental groups are equivalent as categories (see the section below for the general theory).
An important extension of this idea is to consider the fundamental groupoid π1(X, A) where A ⊆ X is a chosen set of "base points". Here π1(X, A) is a (full) subgroupoid of π1(X), where one considers only paths whose endpoints belong to A. The set A may be chosen according to the geometry of the situation at hand.
Equivalence relation
If X is a setoid, i.e. a set with an equivalence relation ~, then a groupoid "representing" this equivalence relation can be formed as follows:
The objects of the groupoid are the elements of X;
For any two elements x and y in X, there is a single morphism from x to y (denoted (y, x)) if and only if x ~ y;
The composition of (z, y) and (y, x) is (z, x).
The vertex groups of this groupoid are always trivial; moreover, this groupoid is in general not transitive and its orbits are precisely the equivalence classes. There are two extreme examples:
If every element of X is in relation with every other element of X, we obtain the pair groupoid of X, which has the entire X × X as its set of arrows, and which is transitive.
If every element of X is only in relation with itself, one obtains the unit groupoid, which has X as its set of arrows, with s = t = id, and which is completely intransitive (every singleton {x} is an orbit).
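The following small computational sketch (my own illustration) builds the groupoid of the equivalence relation "congruence mod 3" on {0, ..., 8} and checks that its vertex groups are trivial and its orbits are the congruence classes:

X = range(9)

def related(x, y):
    return (x - y) % 3 == 0                      # x ~ y  iff  x ≡ y (mod 3)

# One arrow (y, x) from x to y whenever x ~ y.
arrows = [(y, x) for x in X for y in X if related(x, y)]

def compose(f, g):
    """(z, y) composed with (y, x) gives (z, x); undefined if the middle entries differ."""
    (z, y1), (y2, x) = f, g
    return (z, x) if y1 == y2 else None

assert compose((6, 3), (3, 0)) == (6, 0)

# Vertex groups are trivial: the only arrow from x to x is the identity (x, x).
assert all([a for a in arrows if a[0] == x and a[1] == x] == [(x, x)] for x in X)

# Orbits are exactly the equivalence classes mod 3.
orbits = {frozenset(y for y in X if related(x, y)) for x in X}
print(sorted(sorted(o) for o in orbits))          # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]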
Examples
If π : X → Y is a smooth surjective submersion of smooth manifolds, then the fiber product X ×Y X ⊆ X × X is an equivalence relation, since Y has a topology isomorphic to the quotient topology of X under the surjective map of topological spaces. Writing R = X ×Y X, we get a groupoid R ⇉ X, which is sometimes called the banal groupoid of a surjective submersion of smooth manifolds.
If we relax the reflexivity requirement and consider partial equivalence relations, then it becomes possible to consider semidecidable notions of equivalence on computable realisers for sets. This allows groupoids to be used as a computable approximation to set theory, called PER models. Considered as a category, PER models are a cartesian closed category with natural numbers object and subobject classifier, giving rise to the effective topos introduced by Martin Hyland.
Čech groupoid
A Čech groupoid is a special kind of groupoid associated to an equivalence relation given by an open cover {Ui} of some manifold X. Its objects are given by the disjoint union of the Ui,
and its arrows are the disjoint union of the intersections Uij = Ui ∩ Uj.
The source and target maps are then given by the two induced inclusions Uij → Ui and Uij → Uj, giving the structure of a groupoid. In fact, this can be further extended by taking the n-iterated fiber product, whose elements represent n-tuples of composable arrows. The structure map of the fiber product is implicitly the target map, since the relevant square is a cartesian diagram in which the maps down to the objects are the target maps. This construction can be seen as a model for some ∞-groupoids. Another artifact of this construction is that k-cocycles for some constant sheaf of abelian groups can be represented as functions on the (k+1)-fold intersections, giving an explicit representation of cohomology classes. (A sketch of the standard formulas follows below.)
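A sketch of the standard Čech groupoid formulas for an open cover {Ui} of X; the notation, and the choice of which inclusion is the source and which is the target, are mine, taken from the usual definition rather than recovered from the text:

\[
\mathcal{G}_0 = \coprod_i U_i, \qquad \mathcal{G}_1 = \coprod_{i,j} U_{ij}, \quad U_{ij} := U_i \cap U_j,
\]
\[
s \colon \mathcal{G}_1 \to \mathcal{G}_0,\ U_{ij} \hookrightarrow U_j, \qquad t \colon \mathcal{G}_1 \to \mathcal{G}_0,\ U_{ij} \hookrightarrow U_i,
\]
\[
\mathcal{G}_n = \underbrace{\mathcal{G}_1 \times_{\mathcal{G}_0} \cdots \times_{\mathcal{G}_0} \mathcal{G}_1}_{n\ \text{factors}} = \coprod_{i_0,\dots,i_n} U_{i_0 i_1 \cdots i_n},
\]
and a Čech \(k\)-cocycle valued in a constant sheaf of abelian groups \(\underline{A}\) is a function
\[
\sigma \colon \coprod_{i_0,\dots,i_k} U_{i_0 \cdots i_k} \to A .
\]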
Group action
If the group G acts on the set X, then we can form the action groupoid (or transformation groupoid) representing this group action as follows:
The objects are the elements of X;
For any two elements x and y in X, the morphisms from x to y correspond to the elements g of G such that g · x = y;
Composition of morphisms interprets the binary operation of G.
More explicitly, the action groupoid is a small category with object set X and morphism set G × X, and with source and target maps s(g, x) = x and t(g, x) = g · x. It is often denoted G ⋉ X (or X ⋊ G for a right action). Multiplication (or composition) in the groupoid is then (h, y)(g, x) = (hg, x), which is defined provided y = g · x.
For x in X, the vertex group consists of those (g, x) with g · x = x, which is just the isotropy subgroup at x for the given action (which is why vertex groups are also called isotropy groups). Similarly, the orbits of the action groupoid are the orbits of the group action, and the groupoid is transitive if and only if the group action is transitive.
Another way to describe -sets is the functor category , where is the groupoid (category) with one element and isomorphic to the group . Indeed, every functor of this category defines a set and for every in (i.e. for every morphism in ) induces a bijection : . The categorical structure of the functor assures us that defines a -action on the set . The (unique) representable functor is the Cayley representation of . In fact, this functor is isomorphic to and so sends to the set which is by definition the "set" and the morphism of (i.e. the element of ) to the permutation of the set . We deduce from the Yoneda embedding that the group is isomorphic to the group , a subgroup of the group of permutations of .
Finite set
Consider the group action of Z/2 on the finite set X = {−2, −1, 0, 1, 2} that takes each number to its negative, so −2 ↦ 2 and 1 ↦ −1. The quotient groupoid [X/G] is the set of equivalence classes from this group action, namely {[0], [1], [2]}, and [0] carries a group action of Z/2 on it.
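A short computational sketch of this action groupoid (the set, the group, and the action are as read above; the code itself is only an illustration):

X = [-2, -1, 0, 1, 2]
G = [0, 1]                         # Z/2 written additively; the element 1 acts by negation

def act(g, x):
    return -x if g == 1 else x

# Arrows of the action groupoid: the pair (g, x) is an arrow from x to act(g, x).
arrows = [(g, x) for g in G for x in X]
assert all(act(g, x) in X for (g, x) in arrows)   # the action stays inside X

# Orbits of the groupoid = orbits of the action: {-2, 2}, {-1, 1}, {0}.
orbits = {frozenset(act(g, x) for g in G) for x in X}
print(sorted(sorted(o) for o in orbits))          # [[-2, 2], [-1, 1], [0]]

# Vertex (isotropy) group at x: group elements whose arrow at x is a loop.
for x in X:
    print(x, [g for g in G if act(g, x) == x])    # only x = 0 keeps the full Z/2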
Quotient variety
Any finite group G that maps to GL(n) gives a group action on the affine space A^n (since this is the group of automorphisms). Then, a quotient groupoid can be of the form [A^n/G], which has one point with stabilizer G at the origin. Examples like these form the basis for the theory of orbifolds. Another commonly studied family of orbifolds are weighted projective spaces and subspaces of them, such as Calabi–Yau orbifolds.
Fiber product of groupoids
Given a diagram of groupoids with groupoid morphisms X → Z ← Y,
where f : X → Z and g : Y → Z, we can form the groupoid X ×Z Y whose objects are triples (x, ϕ, y), where x is an object of X, y is an object of Y, and ϕ : f(x) → g(y) is a morphism in Z. Morphisms (x, ϕ, y) → (x′, ϕ′, y′) can be defined as a pair of morphisms (α, β), where α : x → x′ and β : y → y′,
such that the resulting square in Z commutes, i.e. ϕ′ ∘ f(α) = g(β) ∘ ϕ.
Homological algebra
A two-term complex C1 → C0, with differential d : C1 → C0,
of objects in a concrete Abelian category can be used to form a groupoid. It has as objects the set C0 and as arrows the set C1 ⊕ C0; the source morphism is just the projection onto C0, while the target morphism is the sum of the projection onto C1 composed with d and the projection onto C0. That is, given (c1, c0) in C1 ⊕ C0, we have s(c1, c0) = c0 and t(c1, c0) = d(c1) + c0.
Of course, if the abelian category is the category of coherent sheaves on a scheme, then this construction can be used to form a presheaf of groupoids.
Puzzles
While puzzles such as the Rubik's Cube can be modeled using group theory (see Rubik's Cube group), certain puzzles are better modeled as groupoids.
The transformations of the fifteen puzzle form a groupoid (not a group, as not all moves can be composed). This groupoid acts on configurations.
Mathieu groupoid
The Mathieu groupoid is a groupoid introduced by John Horton Conway acting on 13 points such that the elements fixing a point form a copy of the Mathieu group M12.
Relation to groups
If a groupoid has only one object, then the set of its morphisms forms a group. Using the algebraic definition, such a groupoid is literally just a group. Many concepts of group theory generalize to groupoids, with the notion of functor replacing that of group homomorphism.
Every transitive/connected groupoid - that is, as explained above, one in which any two objects are connected by at least one morphism - is isomorphic to an action groupoid (as defined above). By transitivity, there will only be one orbit under the action.
Note that the isomorphism just mentioned is not unique, and there is no natural choice. Choosing such an isomorphism for a transitive groupoid essentially amounts to picking one object x0, a group isomorphism from the vertex group at x0 to the group G, and for each object x other than x0, a morphism in the groupoid from x0 to x.
If a groupoid is not transitive, then it is isomorphic to a disjoint union of groupoids of the above type, also called its connected components (possibly with different groups and sets for each connected component).
In category-theoretic terms, each connected component of a groupoid is equivalent (but not isomorphic) to a groupoid with a single object, that is, a single group. Thus any groupoid is equivalent to a multiset of unrelated groups. In other words, for equivalence instead of isomorphism, one does not need to specify the sets X, but only the groups G. For example,
The fundamental groupoid of X is equivalent to the collection of the fundamental groups of each path-connected component of X, but an isomorphism requires specifying the set of points in each component;
The set X with the equivalence relation ~ is equivalent (as a groupoid) to one copy of the trivial group for each equivalence class, but an isomorphism requires specifying what each equivalence class is;
The set X equipped with an action of the group G is equivalent (as a groupoid) to one copy of G for each orbit of the action, but an isomorphism requires specifying what set each orbit is.
The collapse of a groupoid into a mere collection of groups loses some information, even from a category-theoretic point of view, because it is not natural. Thus when groupoids arise in terms of other structures, as in the above examples, it can be helpful to maintain the entire groupoid. Otherwise, one must choose a way to view each connected component in terms of a single group, and this choice can be arbitrary. In the example from topology, one would have to make a coherent choice of paths (or equivalence classes of paths) from each point p to each point q in the same path-connected component.
As a more illuminating example, the classification of groupoids with one endomorphism does not reduce to purely group theoretic considerations. This is analogous to the fact that the classification of vector spaces with one endomorphism is nontrivial.
Morphisms of groupoids come in more kinds than those of groups: we have, for example, fibrations, covering morphisms, universal morphisms, and quotient morphisms. Thus a subgroup H of a group G yields an action of G on the set of cosets of H in G and hence a covering morphism p from, say, K to G, where K is a groupoid with vertex groups isomorphic to H. In this way, presentations of the group G can be "lifted" to presentations of the groupoid K, and this is a useful way of obtaining information about presentations of the subgroup H. For further information, see the books by Higgins and by Brown in the References.
Category of groupoids
The category whose objects are groupoids and whose morphisms are groupoid morphisms is called the groupoid category, or the category of groupoids, and is denoted by Grpd.
The category Grpd is, like the category of small categories, Cartesian closed: for any groupoids H and K we can construct a groupoid GPD(H, K) whose objects are the morphisms H → K and whose arrows are the natural equivalences of morphisms. Thus if H and K are just groups, then such arrows are the conjugacies of morphisms. The main result is that for any groupoids G, H and K there is a natural bijection Grpd(G × H, K) ≅ Grpd(G, GPD(H, K)).
This result is of interest even if all the groupoids are just groups.
Another important property of Grpd is that it is both complete and cocomplete.
Relation to Cat
The inclusion i : Grpd → Cat has both a left and a right adjoint:
the left adjoint sends a category C to its localization C[C−1], which inverts every morphism, and the right adjoint sends C to its core, the subcategory of all isomorphisms.
Relation to sSet
The nerve functor N : Grpd → sSet embeds Grpd as a full subcategory of the category of simplicial sets. The nerve of a groupoid is always a Kan complex.
The nerve has a left adjoint π1 : sSet → Grpd.
Here, π1(X) denotes the fundamental groupoid of the simplicial set X.
Groupoids in Grpd
There is an additional structure which can be derived from groupoids internal to the category of groupoids: double groupoids. Because Grpd is a 2-category, these objects form a 2-category instead of a 1-category since there is extra structure. Essentially, these are groupoids with a pair of functors and an embedding given by an identity functor. One way to think about these 2-groupoids is that they contain objects, morphisms, and squares which can compose together vertically and horizontally. For example, given two squares sharing a common morphism, they can be vertically conjoined giving a diagram which can be converted into another square by composing the vertical arrows. There is a similar composition law for horizontal attachments of squares.
Groupoids with geometric structures
When studying geometrical objects, the arising groupoids often carry a topology, turning them into topological groupoids, or even some differentiable structure, turning them into Lie groupoids. These last objects can be also studied in terms of their associated Lie algebroids, in analogy to the relation between Lie groups and Lie algebras.
Groupoids arising from geometry often possess further structures which interact with the groupoid multiplication. For instance, in Poisson geometry one has the notion of a symplectic groupoid, which is a Lie groupoid endowed with a compatible symplectic form. Similarly, one can have groupoids with a compatible Riemannian metric, or complex structure, etc.
See also
∞-groupoid
2-group
Homotopy type theory
Inverse category
Groupoid algebra (not to be confused with algebraic groupoid)
R-algebroid
Notes
References
Brown, Ronald, 1987, "From groups to groupoids: a brief survey", Bull. London Math. Soc. 19: 113–34. Reviews the history of groupoids up to 1987, starting with the work of Brandt on quadratic forms. The downloadable version updates the many references.
—, 2006. Topology and groupoids. Booksurge. Revised and extended edition of a book previously published in 1968 and 1988. Groupoids are introduced in the context of their topological application.
—, Higher dimensional group theory. Explains how the groupoid concept has led to higher-dimensional homotopy groupoids, having applications in homotopy theory and in group cohomology. Many references.
F. Borceux, G. Janelidze, 2001, Galois theories. Cambridge Univ. Press. Shows how generalisations of Galois theory lead to Galois groupoids.
Cannas da Silva, A., and A. Weinstein, Geometric Models for Noncommutative Algebras. Especially Part VI.
Golubitsky, M., Ian Stewart, 2006, "Nonlinear dynamics of networks: the groupoid formalism", Bull. Amer. Math. Soc. 43: 305–64
Higgins, P. J., "The fundamental groupoid of a graph of groups", J. London Math. Soc. (2) 13 (1976) 145–149.
Higgins, P. J. and Taylor, J., "The fundamental groupoid and the homotopy crossed complex of an orbit space", in Category theory (Gummersbach, 1981), Lecture Notes in Math., Volume 962. Springer, Berlin (1982), 115–122.
Higgins, P. J., 1971. Categories and groupoids. Van Nostrand Notes in Mathematics. Republished in Reprints in Theory and Applications of Categories, No. 7 (2005) pp. 1–195; freely downloadable. Substantial introduction to category theory with special emphasis on groupoids. Presents applications of groupoids in group theory, for example to a generalisation of Grushko's theorem, and in topology, e.g. fundamental groupoid.
Mackenzie, K. C. H., 2005. General theory of Lie groupoids and Lie algebroids. Cambridge Univ. Press.
Weinstein, Alan, "Groupoids: unifying internal and external symmetry – A tour through some examples". Also available in Postscript, Notices of the AMS, July 1996, pp. 744–752.
Weinstein, Alan, "The Geometry of Momentum" (2002)
R.T. Zivaljevic. "Groupoids in combinatorics—applications of a theory of local symmetries". In Algebraic and geometric combinatorics, volume 423 of Contemp. Math., 305–324. Amer. Math. Soc., Providence, RI (2006)
Algebraic structures
Category theory
Homotopy theory | Groupoid | Mathematics | 4,952 |
23,351,025 | https://en.wikipedia.org/wiki/Kobian | Kobian is a robot created by scientists at Waseda University in Japan. It is capable of displaying expressions of emotion and was developed to realize culture-specific greetings. It can also simulate human speech including the movement of the lips and the oscillations of the head.
Kobian is based on the WABIAN-2R robot and the emotion expression humanoid robot called WE-4RII and is 1,470 mm tall and weighs 62 kilograms. The robot's two eyeballs are outfitted with CMOS cameras. It is a bi-pedal standalone robot with control units such as motor drivers placed in the robotic head, making this particular part larger than the human head. The robotic head for the Kobian-R, the newer and more downsized version of the robot, has 24 degrees of freedom (DoFs) and a blue facial color due to an electroluminescence sheet. The original Kobian robot has 48 DoFs. Two versions of the Kobian-R have been built - a Western and a Japanese variant - to develop the system that produces the robot's facial cues.
The Kobian robot has another version called Debian, which has a slightly different facial and body color so that the robots can be distinguished when interacting with each other and with other subjects. The color has no cultural significance.
See also
Robotics
References
External links
A robot displaying human emotion has been unveiled, By Emma Barnett, Technology and Digital Media Correspondent, UK Telegraph, 23 Jun 2009.
Bipedal humanoid robots
Social robots
Robots of Japan
2000s robots | Kobian | Technology | 315 |
68,838,431 | https://en.wikipedia.org/wiki/Amazon%20Astro | Amazon Astro is a home robot developed by Amazon.com, Inc. It was designed for home security monitoring, remote care of elderly relatives, and as a virtual assistant that can follow a person from room to room.
Features
Tom's Guide called the device "Alexa on wheels", noting that everything available on the Amazon Echo Show 10 is also available on this new device. The Astro has visual ID and should be able to recognize different family members and send an alert if the device sees someone it does not recognize in the home.
In 2022, Amazon announced a pilot program connecting Astro to the Ring security system, allowing workers in a remote call centre to control Astro to investigate security alerts.
Hardware
Reception
Mark Gurman of Bloomberg News says that, six months after its release, hardly anyone was talking about Astro online, and that Amazon had shipped only a few hundred units, at most.
David Priest of CNET observes that "For now, this robot remains a luxury item, for people with a lot of money to try out a cutting-edge technology that still lacks a compelling use case."
Lauren Goode of Wired magazine labels Astro as "a robot for the sake of a robot" and "a robot without a cause, at least for now".
The announcement in September 2022 that Astro would function as a security guard connected to Ring security devices for homes and small businesses led Gizmodo to comment on the increasing "creepiness" of Astro.
See also
Smart speaker
References
Astro
Products introduced in 2021
Robots
2021 robots | Amazon Astro | Physics,Technology | 305 |
1,039,011 | https://en.wikipedia.org/wiki/Chitosan | Chitosan is a linear polysaccharide composed of randomly distributed β-(1→4)-linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). It is made by treating the chitin shells of shrimp and other crustaceans with an alkaline substance, such as sodium hydroxide.
Chitosan has a number of commercial and possible biomedical uses. It can be used in agriculture as a seed treatment and biopesticide, helping plants to fight off fungal infections. In winemaking, it can be used as a fining agent, also helping to prevent spoilage. In industry, it can be used in a self-healing polyurethane paint coating. In medicine, it is useful in bandages to reduce bleeding and as an antibacterial agent; it can also be used to help deliver drugs through the skin.
History
In 1799, British chemist Charles Hatchett experimented with decalcifying the shells of various crustaceans, finding that a soft, yellow and cartilage-like substance was left behind that we now know to be chitin. In 1859, French physiologist Charles Marie Benjamin Rouget found that boiling chitin in potassium hydroxide solution could deacetylate it to produce a substance that was soluble in dilute organic acids, which he called chitine modifiée. In 1894, German chemist Felix Hoppe-Seyler named the substance chitosan. From 1894 to 1930 there was a period of debate and confusion over the exact composition of chitin and particularly whether the animal and fungal forms were the same chemical. In 1930 the first chitosan films and fibres were patented, but competition from petroleum-derived polymers limited their uptake. It was not until the 1970s that there was renewed interest in the compound, spurred partly by laws that prevented the dumping of untreated shellfish waste.
Manufacture
Chitosan is produced commercially by deacetylation of chitin, which is the structural element in the exoskeleton of crustaceans (such as crabs and shrimp) and cell walls of fungi. A common method for obtaining chitosan is the deacetylation of chitin using sodium hydroxide in excess as a reagent and water as a solvent. The reaction follows first-order kinetics, though it occurs in two steps; the activation energy barrier for the first stage is estimated at 48.8 kJ·mol−1 and is higher than the barrier to the second stage.
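For illustration, the quoted activation energy can be inserted into the Arrhenius equation to estimate how strongly the first-stage rate constant depends on temperature; the sketch below is a minimal example, and the two treatment temperatures used in it are arbitrary illustrative values rather than figures from the cited work.

```python
import math

R = 8.314                 # universal gas constant, J mol^-1 K^-1
EA_FIRST_STAGE = 48.8e3   # first-stage activation energy from the text, J mol^-1

def arrhenius_rate_ratio(ea, t1_kelvin, t2_kelvin):
    """Ratio k(T2)/k(T1) predicted by the Arrhenius equation for a fixed activation energy."""
    return math.exp(-ea / R * (1.0 / t2_kelvin - 1.0 / t1_kelvin))

# Hypothetical treatment temperatures of 60 degC and 100 degC, chosen only for illustration.
t1, t2 = 60.0 + 273.15, 100.0 + 273.15
print(f"k(100 degC) / k(60 degC) is roughly {arrhenius_rate_ratio(EA_FIRST_STAGE, t1, t2):.1f}")
```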
The degree of deacetylation (%) can be determined by NMR spectroscopy and the degree of deacetylation in commercially available chitosan ranges from 60 to 100%. On average, the molecular weight of commercially produced chitosan is 3800–20,000 daltons.
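Because the degree of deacetylation is the molar fraction of deacetylated (glucosamine) units in the chain, it can be computed directly from the two monomer populations. The following minimal sketch assumes textbook residue masses of roughly 161.2 g/mol for a glucosamine unit and 203.2 g/mol for an N-acetylglucosamine unit; these values are assumptions for illustration, not figures taken from this article.

```python
GLUCOSAMINE_RESIDUE_MASS = 161.16        # g/mol, deacetylated unit in the chain (assumed textbook value)
ACETYLGLUCOSAMINE_RESIDUE_MASS = 203.19  # g/mol, acetylated unit in the chain (assumed textbook value)

def degree_of_deacetylation(mol_deacetylated, mol_acetylated):
    """Degree of deacetylation (%) as the molar fraction of glucosamine units."""
    return 100.0 * mol_deacetylated / (mol_deacetylated + mol_acetylated)

def mean_residue_mass(dd_percent):
    """Average mass per monomer unit for a given degree of deacetylation."""
    x = dd_percent / 100.0
    return x * GLUCOSAMINE_RESIDUE_MASS + (1.0 - x) * ACETYLGLUCOSAMINE_RESIDUE_MASS

# Example: 85 mol of deacetylated units for every 15 mol of acetylated units.
dd = degree_of_deacetylation(85, 15)
print(f"DD = {dd:.0f}%, mean residue mass is about {mean_residue_mass(dd):.1f} g/mol")
```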
Nanofibrils have been made using chitin and chitosan.
Chemical modifications
Chitosan contains the following three functional groups: C2-NH2, C3-OH, and C6-OH. C3-OH is subject to considerable steric hindrance and is therefore relatively difficult to modify. C2-NH2 is highly reactive towards fine modification and is the most common modifying group in chitosan. In chitosan, although amino groups are more prone to nucleophilic reactions than hydroxyl groups, both can react non-selectively with electrophilic reagents such as acids, chlorides, and haloalkanes to functionalize them. Since chitosan contains a variety of functional groups, it can be functionalized in different ways such as phosphorylation, thiolation, and quaternization to adapt it to specific purposes.
Phosphorylated chitosan
Water-soluble phosphorylated chitosan can be obtained by the reaction of phosphorus pentoxide and chitosan under low-temperature conditions using methane sulfonic acid as the catalyst; phosphorylated chitosan with good antibacterial activity and ionic properties can be prepared by graft copolymerization of chitosan monophosphate.
The good water solubility and metal chelating properties of phosphorylated chitosan and its derivatives make them widely used in tissue engineering, drug delivery carriers, tissue regeneration, and the food industry.
In tissue engineering, phosphorylated chitosan exhibits improved swelling and ionic conductivity. Although its crystallinity is reduced, its tensile strength remains largely unchanged. These properties make it useful for creating scaffolds that can support bone tissue regeneration by binding growth factors and promoting stem cell differentiation into bone-forming cells. Additionally, to enhance the solubility of chitosan-based hydrogels at neutral or alkaline pH, the derivative N-methylene phosphonic acid chitosan (NMPC-GLU) has been developed. This material maintains good mechanical strength and improves cell proliferation, making it valuable for biomedical applications.
Thiolated chitosan
Thiolated chitosan is produced by attaching thiol groups to the amino groups of chitosan using a thiol-containing coupling agent. The primary site for this modification is the amino group at the 2nd position of chitosan's glucosamine units. During this process, thioglycolic acid and cysteine mediate the reaction, forming an amide bond between the thiol group and chitosan. At a pH below 5, thiol activity is reduced, which limits disulfide bond formation.
The modified chitosan exhibits improved adhesive properties and stability due to the covalent attachment of the thiol groups. Lower pH reduces oxidation, enhancing its adhesion properties. Additionally, thiolated chitosan can interact with cell membrane receptors, improving membrane permeability and showing potential for applications in bacterial adhesion prevention, for example for coating stainless steel.
Ionic chitosan
There are two main methods of chitosan quaternization: direct quaternization and indirect quaternization.
The direct quaternization of chitosan's amino groups treats chitosan with haloalkanes under alkaline conditions. Another method is the reaction of chitosan with aldehydes first, followed by reduction, and finally with haloalkanes to obtain quaternized chitosan.
The indirect quaternization method refers to introducing small molecules containing quaternary ammonium groups into chitosan, such as glycidyl trimethyl ammonium chloride, (5-bromopentyl) trimethyl ammonium bromide, etc. Quaternary ammonium groups can further be introduced into the chitosan backbone via azide-alkyne cycloaddition, or by dissolving chitosan in alkali and urea and then reacting it with 3-chloro-2-hydroxypropyl trimethylammonium chloride, which provides a simple and green solution to achieve chitosan functionalization.
Cationic derivatives of chitosan have important roles in bioadhesion, absorption enhancement, anti-inflammatory, antibacterial and anti-tumor applications. Chitosan modified with quaternary ammonium groups is one of the most common cationic chitosan derivatives. Quaternized chitosan with a permanent positive charge has increased antimicrobial activity and solubility compared to normal chitosan.
Properties
The amino group in chitosan has a pKa value of ~6.5, which leads to protonation in acidic to neutral solution, with a charge density that depends on the pH and the %DA-value. This makes chitosan water-soluble and a bioadhesive which readily binds to negatively charged surfaces such as mucosal membranes. Chitosan can also bind effectively to other surfaces via hydrophobic interaction and/or cation-π interaction (chitosan as a cation source) in aqueous solution. The free amine groups on chitosan chains can form crosslinked polymeric networks with dicarboxylic acids to improve chitosan's mechanical properties. Chitosan enhances the transport of polar drugs across epithelial surfaces, and is biocompatible and biodegradable. However, it is not approved by the FDA for drug delivery. Purified quantities of chitosan are available for biomedical applications.
Physicochemical properties
Chitosan has biological properties, such as biodegradability and biocompatibility. The biological properties of chitosan are closely related to its physicochemical structure, which includes the degree of deacetylation, water content, and molecular weight. Deacetylation refers to the process of removing acetyl groups from the polymer, and this process determines the content of free amine groups in chitosan. Studies have shown that chitosan has good solubility only when the degree of deacetylation is above 85%. The enhanced uptake seen with chitosan is mainly due to the interaction of positively charged chitosan with cell membranes, activation of chloride–bicarbonate exchange channels, and reorganization of proteins associated with epithelial tight junctions, thus opening epithelial tight junctions. Chitosan inhibits the growth of different bacteria and fungi by mechanisms involving several factors, including the degree of deacetylation, pH, divalent cations, and solvent type.
Uses
Agricultural and horticultural use
The agricultural and horticultural uses for chitosan, primarily for plant defense and yield increase, are based on how this glucosamine polymer influences the biochemistry and molecular biology of the plant cell. The cellular targets are the plasma membrane and nuclear chromatin. Subsequent changes occur in cell membranes, chromatin, DNA, calcium, MAP kinase, oxidative burst, reactive oxygen species, callose pathogenesis-related (PR) genes, and phytoalexins.
Chitosan was first registered as an active ingredient (licensed for sale) in 1986.
Natural biocontrol and elicitor
In agriculture, chitosan is typically used as a natural seed treatment and plant growth enhancer, and as an ecologically friendly biopesticide substance that boosts the innate ability of plants to defend themselves against fungal infections.
Degraded molecules of chitin/chitosan exist in soil and water. Chitosan applications for plants and crops are regulated in the USA by the EPA, and the USDA National Organic Program regulates its use on organic certified farms and crops. EPA-approved, biodegradable chitosan products are allowed for use outdoors and indoors on plants and crops grown commercially and by consumers.
In the European Union and United Kingdom, chitosan is registered as a "basic substance" for use as a biological fungicide and bactericide on a wide range of crops.
The natural biocontrol ability of chitosan should not be confused with the effects of fertilizers or pesticides upon plants or the environment. Chitosan active biopesticides represent a new tier of cost-effective biological control of crops for agriculture and horticulture. The biocontrol mode of action of chitosan elicits natural innate defense responses within plant to resist insects, pathogens, and soil-borne diseases when applied to foliage or the soil. Chitosan increases photosynthesis, promotes and enhances plant growth, stimulates nutrient uptake, increases germination and sprouting, and boosts plant vigor. When used as a seed treatment or seed coating on cotton, corn, seed potatoes, soybeans, sugar beets, tomatoes, wheat, and many other seeds, it elicits an innate immunity response in developing roots which destroys parasitic cyst nematodes without harming beneficial nematodes and organisms.
Agricultural applications of chitosan can reduce environmental stress due to drought and soil deficiencies, strengthen seed vitality, improve stand quality, increase yields, and reduce fruit decay of vegetables, fruits and citrus crops. Horticultural application of chitosan increases blooms and extends the life of cut flowers and Christmas trees. The US Forest Service has conducted research on chitosan to control pathogens in pine trees and increase resin pitch outflow which resists pine beetle infestation.
Chitosan has been studied for applications in agriculture and horticulture dating back to the 1980s. By 1989, chitosan salt solutions were applied to crops for improved freeze protection or to crop seed for seed priming. Shortly thereafter, chitosan salt received the first ever biopesticide label from the EPA, then followed by other intellectual property applications.
Chitosan has been used to protect plants in space, as well, exemplified by NASA's experiment to protect adzuki beans grown aboard the space shuttle and Mir space station in 1997. NASA results revealed chitosan induces increased growth (biomass) and pathogen resistance due to elevated levels of β-(1→3)-glucanase enzymes within plant cells. NASA confirmed chitosan elicits the same effect in plants on earth.
In 2008, the EPA approved natural broad-spectrum elicitor status for an ultralow molecular active ingredient of 0.25% chitosan. A natural chitosan elicitor solution for agriculture and horticultural uses was granted an amended label for foliar and irrigation applications by the EPA in 2009. Given its low potential for toxicity and abundance in the natural environment, chitosan does not harm people, pets, wildlife, or the environment when used according to label directions. Chitosan blends do not work against bark beetles when put on a tree's leaves or in its soil.
Filtration
Chitosan can be used in hydrology as a part of a filtration process. Chitosan causes the fine sediment particles to bind together, and is subsequently removed with the sediment during sand filtration. It also removes heavy minerals, dyes, and oils from the water. As an additive in water filtration, chitosan combined with sand filtration removes up to 99% of turbidity. Chitosan is among the biological adsorbents used for heavy metals removal without negative environmental impacts.
In combination with bentonite, gelatin, silica gel, isinglass, or other fining agents, it is used to clarify wine, mead, and beer. Added late in the brewing process, chitosan improves flocculation, and removes yeast cells, fruit particles, and other detritus that cause hazy wine.
Winemaking and fungal source chitosan
Chitosan has a long history for use as a fining agent in winemaking. Fungal source chitosan has shown an increase in settling activity, reduction of oxidized polyphenolics in juice and wine, chelation and removal of copper (post-racking) and control of the spoilage yeast Brettanomyces. These products and uses are approved for European use by the EU and OIV standards.
Wound management
Chitosan-based wound dressings have been widely explored for a variety of acute and chronic wounds. Chitosan has the ability to adhere to fibrinogen, which produces increased platelet adhesion, causing clotting of blood and hemostasis. Chitosan hemostatic agents are salts made from mixing chitosan with an organic acid (such as succinic or lactic acid). Chitosan may have other properties conducive to wound healing, including antibacterial and antifungal activity, which remain under preliminary research.
Chitosan is used within some wound dressings to decrease bleeding. Upon contact with blood, the bandage becomes sticky, effectively sealing the laceration. Chitosan hydrogel-based wound dressings have also been found useful as burn dressings, and for the treatment of chronic diabetic wounds and hydrofluoric acid burns.
Chitosan-containing wound dressings received approval for medical use in the United States in 2003.
Temperature-sensitive hydrogels
Chitosan dissolves in dilute organic acid solutions, but as the hydrogen ion concentration falls towards pH 6.5 it becomes insoluble and precipitates as a gel-like compound. Chitosan is positively charged through its amine groups, making it suitable for binding to negatively charged molecules. However, it has disadvantages such as low mechanical strength and a low temperature-response rate; it must be combined with other gelling agents to improve its properties. Using glycerol phosphate salts (possessing a single anionic head) without chemical modification or cross-linking, the pH-dependent gelation properties can be converted to temperature-sensitive gelation properties. In the year 2000, Chenite was the first to design a temperature-sensitive chitosan hydrogel drug delivery system using chitosan and β-glycerol phosphate. This new system can remain in the liquid state at room temperature, while becoming a gel as the temperature rises above physiological temperature (37 °C). Phosphate salts cause a particular behaviour in chitosan solutions, allowing these solutions to remain soluble in the physiological pH range (pH 7) so that they gel only at body temperature. When the liquid solution of chitosan–glycerol phosphate containing the drug enters the body through a syringe injection, it becomes a water-insoluble gel at 37 °C. The drug particles entrapped between the hydrogel chains are then gradually released.
Research
Chitosan and derivatives have been explored in the development of nanomaterials, bioadhesives, wound dressing materials, improved drug delivery systems, enteric coatings, and in medical devices.
Bioprinting
Bioinspired materials, a manufacturing concept inspired by natural nacre, shrimp carapace, or insect cuticles, have led to the development of bioprinting methods to manufacture large-scale consumer objects using chitosan. This approach is based on replicating the molecular arrangement of chitosan from natural materials in fabrication methods such as injection molding or mold casting. Once discarded, chitosan-constructed objects are biodegradable and non-toxic. The method is used to engineer and bioprint human organs or tissues.
Pigmented chitosan objects can be recycled, with the option of reintroducing or discarding the dye at each recycling step, enabling reuse of the polymer independently of colorants. Unlike other plant-based bioplastics (e.g. cellulose, starch), the main natural sources of chitosan come from marine environments and do not compete for land or other human resources.
3D bioprinting of tissue engineering scaffolds for creating artificial tissues and organs is another application where chitosan has gained popularity. Chitosan has high biocompatibility, biodegradability, and antimicrobial, hemostatic, wound healing and immunomodulatory activities which make it suitable for making artificial tissues.
Weight loss
Chitosan is marketed in a tablet form as a "fat binder". Although the effect of chitosan on lowering cholesterol and body weight has been evaluated, the effect appears to have no or low clinical importance. Reviews from 2016 and 2008 found there was no significant effect, and no justification for overweight people to use chitosan supplements. In 2015, the U.S. Food and Drug Administration issued a public advisory about supplement retailers who made exaggerated claims concerning the supposed weight loss benefit of various products.
Biodegradable antimicrobial food packaging
Microbial contamination of food products accelerates the deterioration process and increases the risk of foodborne illness caused by potentially life-threatening pathogens. Ordinarily, food contamination originates superficially, requiring surface treatment and packaging as crucial factors to assure food quality and safety. Biodegradable chitosan films have potential for preserving various food products, retaining their firmness and restricting weight loss due to dehydration. In addition, composite biodegradable films containing chitosan and antimicrobial agents are in development as safe alternatives to preserve food products.
Battery electrolyte
Chitosan is being investigated as an electrolyte for rechargeable batteries with good performance and low environmental impact due to rapid biodegradability, leaving recyclable zinc. The electrolyte has excellent physical stability up to 50 °C, electrochemical stability up to 2 V with zinc electrodes, and accommodates the redox reactions involved in the Zn-MnO2 alkaline system. Results were promising, but the battery needed testing on a larger scale and under actual use conditions.
References
External links
International research project Nano3Bio, focused on tailor-made biotechnological production of chitosans (funded by the European Union)
Antihemorrhagics
Biopesticides
Elicitors
Polysaccharides | Chitosan | Chemistry | 4,318 |
1,425,862 | https://en.wikipedia.org/wiki/Tellurion | A tellurion (also spelled tellurian, tellurium, and yet another name is loxocosm), is a clock, typically of French or Swiss origin, surmounted by a mechanism that depicts how day, night, and the seasons are caused by the rotation and orientation of Earth on its axis and its orbit around the Sun. The clock normally also displays the phase of the Moon and the four-year (perpetual) calendar.
It is related to the orrery, which illustrates the relative positions and motions of the planets and moons in the Solar System in a heliocentric model.
The word tellurion derives from the Latin tellus, meaning "earth".
See also
Astronomical clock
Solar System models
References
External links
Astronomical instruments
Astronomical clocks
Solar System models
Seasons | Tellurion | Physics,Astronomy | 166
1,078,092 | https://en.wikipedia.org/wiki/Dental%20restoration | Dental restoration, dental fillings, or simply fillings are treatments used to restore the function, integrity, and morphology of missing tooth structure resulting from caries or external trauma as well as to the replacement of such structure supported by dental implants. They are of two broad types—direct and indirect—and are further classified by location and size. Root canal therapy, for example, is a restorative technique used to fill the space where the dental pulp normally resides and are more hectic than a normal filling.
History
In Italy, evidence dated to the Paleolithic, around 13,000 years ago, points to bitumen being used to fill a tooth, and in Neolithic Slovenia, 6,500 years ago, beeswax was used to close a fracture in a tooth. Graeco-Roman literature, such as Pliny the Elder's Naturalis Historia (AD 23–79), contains references to filling materials for hollow teeth.
Tooth preparation
Restoring a tooth to good form and function requires two steps:
preparing the tooth for placement of restorative material or materials, and
placement of these materials.
The process of preparation usually involves cutting the tooth with a rotary dental handpiece and dental burrs, a dental laser, or through air abrasion (or in the case of atraumatic restorative treatment, hand instruments), to make space for the planned restorative materials and to remove any dental decay or portions of the tooth that are structurally unsound. If permanent restoration cannot be carried out immediately after tooth preparation, temporary restoration may be performed.
The prepared tooth, ready for placement of restorative materials, is generally called a tooth preparation. Materials used may be gold, amalgam, dental composites, glass ionomer cement, or porcelain, among others.
Preparations may be intracoronal or extracoronal. Intracoronal preparations are those which serve to hold restorative material within the confines of the structure of the crown of a tooth. Examples include all classes of cavity preparations for composite or amalgam as well as those for gold and porcelain inlays. Intracoronal preparations are also made as female recipients to receive the male components of removable partial dentures. Extracoronal preparations provide a core or base upon which restorative material will be placed to bring the tooth back into a functional and aesthetic structure. Examples include crowns and onlays, as well as veneers.
In preparing a tooth for a restoration, a number of considerations will determine the type and extent of the preparation. The most important factor to consider is decay. For the most part, the extent of the decay will define the extent of the preparation, and in turn, the subsequent method and appropriate materials for restoration.
Another consideration is unsupported tooth structure. When preparing the tooth to receive a restoration, unsupported enamel is removed to allow for a more predictable restoration. While enamel is the hardest substance in the human body, it is particularly brittle, and unsupported enamel fractures easily.
A systematic review concluded that for decayed baby (primary) teeth, putting an off‐the‐shelf metal crown over the tooth (Hall technique) or only partially removing decay (also referred to as "selective removal") before placing a filling may be better than the conventional treatment of removing all decay before filling. For decayed adult (permanent) teeth, partial removal (also referred to as "selective removal") of decay before filling the tooth, or adding a second stage to this treatment where more decay is removed after several months, may be better than conventional treatment.
Direct restorations
This technique involves placing a soft or malleable filling into the prepared tooth and building up the tooth. The material is then set hard and the tooth is restored. Where a wall of the tooth is missing and needs to be rebuilt, a matrix should be used before placing the material to recreate the shape of the tooth, so it is cleansable and to prevent the teeth from sticking together. Sectional matrices are generally preferred to circumferential matrices when placing composite restorations in that they favour the formation of a contact point. This is important to reduce patient complaints of food impaction between the teeth. However, sectional matrices can be more technique sensitive to use, so care and skill is required to prevent problems occurring in the final restoration. The advantage of direct restorations is that they are usually set quickly and can be placed in a single procedure. The dentist has a variety of different filling options to choose from. A decision is usually made based on the location and severity of the associated cavity. Since the material is required to set while in contact with the tooth, limited energy (heat) is passed to the tooth from the setting process.
Indirect restorations
In this technique the restoration is fabricated outside of the mouth using the dental impressions of the prepared tooth. Common indirect restorations include inlays and onlays, crowns, bridges, and veneers. Usually a dental technician fabricates the indirect restoration from records the dentist has provided. The finished restoration is usually bonded permanently with a dental cement. It is often done in two separate visits to the dentist. Common indirect restorations are done using gold or ceramics.
While the indirect restoration is being prepared, a provisory/temporary restoration is sometimes used to cover the prepared tooth to help maintain the surrounding dental tissues.
Removable dental prostheses (mainly dentures) are sometimes considered a form of indirect dental restoration, as they are made to replace missing teeth. There are numerous types of precision attachments (also known as combined restorations) to aid removable prosthetic attachment to teeth, including magnets, clips, hooks, and implants which may themselves be seen as a form of dental restoration.
The CEREC method is a chairside CAD/CAM restorative procedure. An optical impression of the prepared tooth is taken using a camera. Next, the specific software takes the digital picture and converts it into a 3D virtual model on the computer screen. A ceramic block that matches the tooth shade is placed in the milling machine. An all-ceramic, tooth-colored restoration is finished and ready to bond in place.
Another fabrication method is to import STL and native dental CAD files into CAD/CAM software products that guide the user through the manufacturing process. The software can select the tools, machining sequences and cutting conditions optimized for particular types of materials, such as titanium and zirconium, and for particular prostheses, such as copings and bridges. In some cases, the intricate nature of some implants requires the use of 5-axis machining methods to reach every part of the job.
Cavity classifications
Greene Vardiman Black classification:
G.V. Black classified the cavities depending on their site:
Class I Caries affecting pits and fissures on the occlusal, buccal, and lingual surfaces of molars and premolars, and the palatal surfaces of maxillary incisors.
Class II Caries affecting proximal surfaces of molars and premolars.
Class III Caries affecting proximal surfaces of centrals, laterals, and cuspids.
Class IV Caries affecting proximal including incisal edges of anterior teeth.
Class V Caries affecting gingival 1/3 of facial or lingual surfaces of anterior or posterior teeth.
Class VI Caries affecting cusp tips of molars, premolars, and cuspids.
Graham J. Mount's classification:
Mount classified cavities depending on their site and size. The proposed classification was designed to simplify the identification of lesions and to define their complexity as they enlarge; a lesion is described by pairing a site number with a size number, as shown in the sketch after the lists below.
Site:
Pit/Fissure: 1
Contact area: 2
Cervical: 3
Size:
Minimal: 1
Moderate: 2
Enlarged: 3
Extensive: 4
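A minimal sketch of how the site and size codes listed above can be paired into a single lesion descriptor; the dictionary and function names are illustrative assumptions, and only the code numbers and their meanings come from Mount's scheme above.

```python
# Site and size codes from Mount's classification, as listed above.
SITES = {1: "pit/fissure", 2: "contact area", 3: "cervical"}
SIZES = {1: "minimal", 2: "moderate", 3: "enlarged", 4: "extensive"}

def describe_lesion(site, size):
    """Return a readable description for a site/size pair such as (2, 3)."""
    if site not in SITES or size not in SIZES:
        raise ValueError("site must be 1-3 and size must be 1-4")
    return f"Site {site} ({SITES[site]}), Size {size} ({SIZES[size]})"

print(describe_lesion(2, 3))  # an enlarged lesion on a contact (proximal) surface
```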
Materials used
Alloys
The following casting alloys are mostly used for making crowns, bridges and dentures. Titanium, usually commercially pure but sometimes a 90% alloy, is used as the anchor for dental implants as it is biocompatible and can integrate into bone.
Precious metallic alloys
gold (high purity: 99.7%)
gold alloys (with high gold content)
gold-platinum alloy
silver-palladium alloy
Base metallic alloys
cobalt-chrome alloy
nickel-chrome alloy
Amalgam
Amalgams are alloys formed by a reaction between two or more metals, one of which is mercury. It is a hard restorative material and is silvery-grey in colour. One of the oldest direct restorative materials still in use, dental amalgam was widely used in the past with a high degree of success, although recently its popularity has declined due to a number of reasons, including the development of alternative bonded restorative materials, increase in demand for more aesthetic restorations and public perceptions concerning the potential health risks of the material.
The composition of dental amalgam is controlled by the ISO Standard for dental amalgam alloy (ISO 1559). The major components of amalgam are silver, tin and copper. Other metals and small amounts of minor elements such as zinc, mercury, palladium, platinum and indium are also present. Earlier versions of dental amalgams, known as 'conventional' amalgams consisted of at least 65 wt% silver, 29 wt% tin, and less than 6 wt% copper. Improvements in the understanding of the structure of amalgam post-1986 gave rise to copper-enriched amalgam alloys, which contain between 12 wt% and 30 wt% copper and at least 40 wt% silver. The higher level of copper improved the setting reaction of amalgam, giving greater corrosion resistance and early strength after setting.
Possible indications for amalgam are for load-bearing restorations in medium to large sized cavities in posterior teeth, and in core build-ups when a definitive restoration will be an indirect cast restoration such as a crown or bridge retainer. Contraindications for amalgam are if aesthetics are paramount to the patient due to the colour of the material. Amalgams should be avoided if the patient has a history of sensitivity to mercury or other amalgam components. Besides that, amalgam is avoided if there is extensive loss of tooth substance such that a retentive cavity cannot be produced, or if excessive removal of healthy tooth substance would be required to produce a retentive cavity.
Advantages of amalgam include durability - if placed under ideal conditions, there is evidence of good long term clinical performance of the restorations. Placement time of amalgam is shorter compared to that of composites and the restoration can be completed in a single appointment. The material is also more technique-forgiving compared to composite restorations used for that purpose. Dental amalgam is also radiopaque which is beneficial for differentiating the material between tooth tissues on radiographs for diagnosing secondary caries. The cost of the restoration is typically cheaper than composite restorations.
Disadvantages of amalgam include poor aesthetic qualities due to its colour. Amalgam does not bond to tooth easily, hence it relies on mechanical forms of retention. Examples of this are undercuts, slots/grooves or root canal posts. In some cases this may necessitate excessive amounts of healthy tooth structure to be removed. Hence, alternative resin-based or glass-ionomer cement-based materials are used instead for smaller restorations including pit and small fissure caries. There is also a risk of marginal breakdown in the restorations. This could be due to corrosion which may result in "creep" and "ditching" of the restoration. Creep can be defined as the slow internal stressing and deformation of amalgam under stress. This effect is reduced by incorporating copper into amalgam alloys. Some patients may experience local sensitivity reactions to amalgam.
Although the mercury in cured amalgam is not available as free mercury, concern of its toxicity has existed since the invention of amalgam as a dental material. It is banned or restricted in Norway, Sweden and Finland. See dental amalgam controversy.
Direct gold
Direct gold fillings were practiced during the time of the American Civil War.
Although rarely used today, due to expense and specialized training requirements, gold foil can be used for direct dental restorations.
Composite resin
Dental composites, commonly described to patients as "tooth-colored fillings", are a group of restorative materials used in dentistry. They can be used in direct restorations to fill in the cavities created by dental caries and trauma, minor buildup for restoring tooth wear (non-carious tooth surface loss) and filling in small gaps between teeth (labial veneer). Dental composites are also used as indirect restoration to make crowns and inlays in the laboratory.
These materials are similar to those used in direct fillings and are tooth-colored. Their strength and durability are not as high as porcelain or metal restorations and they are more prone to wear and discolouration. As with other composite materials, a dental composite typically consists of a resin-based matrix, which contains a modified methacrylate or acrylate. Two examples of such commonly used monomers are bisphenol A-glycidyl methacrylate (Bis-GMA) and urethane dimethacrylate (UDMA), together with triethylene glycol dimethacrylate (TEGDMA). TEGDMA is a comonomer which can be used to control viscosity, as Bis-GMA is a large molecule with high viscosity, for easier clinical handling. Inorganic fillers such as silica, quartz or various glasses are added to reduce polymerization shrinkage by occupying volume and to confer radio-opacity on the otherwise translucent material, which can be helpful in the diagnosis of dental caries around dental restorations. The filler particles give the composites wear resistance as well. Compositions vary widely, with proprietary mixes of resins forming the matrix, as well as engineered filler glasses and glass ceramics. A coupling agent such as silane is used to enhance the bond between the resin matrix and the filler particles. An initiator package begins the polymerization reaction of the resins when external energy (light/heat, etc.) is applied. For example, camphorquinone can be excited by visible blue light with a critical wavelength of 460–480 nm to yield the free radicals necessary to start the process.
After tooth preparation, a thin primer or bonding agent is used. Modern photo-polymerised composites are applied and cured in relatively thin layers as determined by their opacity. After some curing, the final surface will be shaped and polished.
Glass ionomer cement
A glass ionomer cement (GIC) is a class of materials commonly used in dentistry as direct filling materials and/or for luting indirect restorations. GIC can also be placed as a lining material in some restorations for extra protection. These tooth-coloured materials were introduced in 1972 for use as restorative materials for anterior teeth (particularly for eroded areas).
The material consists of two main components: Liquid and powder. The liquid is the acidic component containing of polyacrylic acid and tartaric acid (added to control the setting characteristics). The powder is the basic component consisting of sodium alumino-silicate glass. The desirable properties of glass ionomer cements make them useful materials in the restoration of carious lesions in low-stress areas such as smooth-surface and small anterior proximal cavities in primary teeth.
Advantages of using glass ionomer cement:
The addition of tartaric acid to GIC leads to a shortened setting time, hence providing better handling properties. This makes it easier for the operator to use the material in clinic.
GIC does not require a bonding agent; it can bond to enamel and dentine without the need for an intermediate material. Conventional GIC also has a good sealing ability, providing little leakage around restoration margins and reducing the risk of secondary caries.
GIC contains and releases fluoride after being placed therefore it helps in preventing carious lesions in teeth.
It has good thermal properties as the expansion under stimulus is similar to dentine.
The material does not contract on setting meaning it is not subject to shrinkage and microleakage.
GIC is also less susceptible to staining and colour change than composite.
Disadvantages of using Glass ionomer cement:
GICs have poor wear resistance; they are usually weak after setting and are not stable in water, although this improves as time goes on and the setting reactions progress. Due to their low strength, GICs are not appropriate for cavities in areas which bear an increased amount of occlusal load or wear.
The material is susceptible to moisture when it is first placed.
GIC varies in translucency therefore it can have poor aesthetics, especially noticeable if placed on anterior teeth.
Resin Modified Glass Ionomer
Resin modified glass ionomer was developed to combine the properties of glass ionomer cement with composite technology. It comes in a powder-liquid form. The powder contains fluoro-alumino-silicate glass, barium glass (which provides radiopacity), potassium persulphate (a redox catalyst to provide resin cure in the dark) and other components such as pigments. The liquid consists of HEMA (a water-miscible resin), polyacrylic acid (with pendant methacrylate groups) and tartaric acid. This can undergo both acid-base and polymerisation reactions. It also has photoinitiators present which enable light curing.
The ionomer has a number of uses in dentistry. It can be applied as fissure sealant, placed in endodontic access cavity as a temporary filling and a luting agent. It can also be used to restore lesions in both primary and permanent dentition. They are easier to use and are a very popular group of materials.
Advantages of using RMGIC:
Provides a good bond to enamel and dentine.
It has better physical properties than GIC.
A Lower solubility in moisture.
It also releases fluoride over time.
Provided better translucency and aesthetics as compared to GIC.
Better handling properties making it easier to use.
Disadvantages of using RMGIC:
Polymerisation contraction can cause microleakage around restoration margins.
It has an exothermic setting reaction which can cause potential damage to tooth tissue.
The material swells due to uptake of water as HEMA is extremely hydrophilic.
Monomer leaching: HEMA is toxic to the pulp, therefore it must be polymerised completely.
The strength of the material is reduced if it is not light-cured.
GIC and RMGIC are both used in dentistry; there will be times when one of these materials is better than the other, depending upon the clinical situation. However, in most cases the ease of use is the deciding factor.
Compomer
Dental compomers are another type of white filling material although their use is not as widespread.
Compomers were formed by modifying dental composites with poly-acid in an effort to combine the desirable properties of dental composites, namely their good aesthetics, and of glass ionomer cements, namely their ability to release fluoride over a long time. Whilst this combination of good aesthetics and fluoride release may seem to give compomers a selective advantage, their poor mechanical properties (detailed below) limit their use.
Compomers have a lower wear resistance and a lower compressive, flexural and tensile strength than dental composites, although their wear resistance is greater than resin-modified and conventional glass ionomer cements. Compomers cannot adhere directly to tooth tissue like glass ionomer cements; they require a bonding agent like dental composites.
Compomers may be used as a cavity lining material and a restorative material for non-load bearing cavities. In Paediatric dentistry, they can also be used as a fissure sealant material.
The luting version of compomer may be used to cement cast alloy and ceramic-metal restorations, and to cement orthodontic bands in Paediatric patients. However, compomer luting cement should not be used with all-ceramic crowns.
Porcelain (ceramics)
Full-porcelain dental materials include dental porcelain (porcelain meaning a high-firing-temperature ceramic), other ceramics, sintered-glass materials, and glass-ceramics as indirect fillings and crowns or metal-free "jacket crowns". They are also used as inlays, onlays, and aesthetic veneers. A veneer is a very thin shell of porcelain that can replace or cover part of the enamel of the tooth. Full-porcelain restorations are particularly desirable because their color and translucency mimic natural tooth enamel.
Another type is known as porcelain-fused-to-metal, which is used to provide strength to a crown or bridge. These restorations are very strong, durable and resistant to wear, because the combination of porcelain and metal creates a stronger restoration than porcelain used alone.
One of the advantages of computerized dentistry (CAD/CAM technologies) involves the use of machinable ceramics which are sold in a partially sintered, machinable state that is fired again after machining to form a hard ceramic. Some of the materials used are glass-bonded porcelain (Vitablock), lithium disilicate glass-ceramic (a ceramic crystallizing from a glass by special heat treatment), and phase stabilized zirconia (zirconium dioxide, ZrO2). Previous attempts to utilize high-performance ceramics such as zirconium-oxide were thwarted by the fact that this material could not be processed using the traditional methods used in dentistry. Because of its high strength and comparatively much higher fracture toughness, sintered zirconium oxide can be used in posterior crowns and bridges, implant abutments, and root dowel pins. Lithium disilicate (used in the latest Chairside Economical Restoration of Esthetic Ceramics CEREC product) also has the fracture resistance needed for use on molars. Some all-ceramic restorations, such as porcelain-fused-to-alumina set the standard for high aesthetics in dentistry because they are strong and their color and translucency mimic natural tooth enamel.
Cast metals and porcelain-on-metal were the standard materials for crowns and bridges for a long time. Full-ceramic restorations are now often the first choice of patients and are commonly placed by dentists.
Comparison
Composites and amalgam are used mainly for direct restoration. Composites can be made of color matching the tooth, and the surface can be polished after the filling procedure has been completed.
Amalgam fillings expand with age, possibly cracking the tooth and requiring repair and filling replacement, but the chance of the filling leaking is lower.
Composite fillings shrink with age and may pull away from the tooth allowing leakage. If leakage is not noticed early, recurrent decay may occur.
A 2003 study showed that fillings have a finite lifespan: an average of 12.8 years for amalgam and 7.8 years for composite resins. Fillings fail because of changes in the filling, the tooth, or the bond between them. Secondary cavity formation can also affect the structural integrity of the original filling. Fillings are recommended for small to medium-sized restorations.
Inlays and onlays are more expensive indirect restoration alternative to direct fillings. They are supposed to be more durable, but long-term studies did not always detect a significantly lower failure rate of ceramic or composite inlays compared to composite direct fillings.
Porcelain, cobalt-chrome, and gold are used for indirect restorations like crowns and partial coverage crowns (onlays). Traditional porcelains are brittle and are not always recommended for molar restorations. Some hard porcelains cause excessive wear on opposing teeth.
Experimental
The US National Institute of Dental Research and international organizations as well as commercial suppliers conduct research on new materials. In 2010, researchers reported that they were able to stimulate mineralization of an enamel-like layer of fluorapatite in vivo. Filling material that is compatible with pulp tissue has been developed; it could be used where previously a root canal or extraction was required, according to 2016 reports.
Restoration using dental implants
Dental implants are anchors placed in bone, usually made from titanium or titanium alloy. They can support dental restorations which replace missing teeth. Some restorative applications include supporting crowns, bridges, or dental prostheses.
Complications
Irritation of the nerve
When a deep cavity had been filled, there is a possibility that the nerve may have been irritated. This can result in short term sensitivity to cold and hot substances, and pain when biting down on the specific tooth. It may settle down on its own. If not, then alternative treatment such as root canal treatment may be considered to resolve the pain while keeping the tooth.
Weakening of tooth structure
In situations where a relatively larger amount of tooth structure has been lost or replaced with a filling material, the overall strength of the tooth may be affected. This significantly increases the risk of the tooth fracturing off in the future when excess force is placed on the tooth, such as trauma or grinding teeth at night, leading to cracked tooth syndrome.
See also
Dental curing light
Dental dam
Dental fear
Dental braces
Dental treatment
Fixed prosthodontics
Gold teeth
Oral and maxillofacial surgery
Oral and maxillofacial pathology
Treatment of knocked-out (avulsed) teeth
References
External links
How Dental Restoration Materials Compare
Dental materials
Dentistry procedures
Restorative dentistry | Dental restoration | Physics | 5,280 |
57,243,037 | https://en.wikipedia.org/wiki/Wankel%20AG%20LCR%20-%20407%20SGti | The Wankel AG LCR - 407 SGti is a German Wankel aircraft engine, designed and produced by Wankel AG of Kirchberg, Saxony for use in ultralight aircraft.
Design and development
The LCR - 407 SGti engine is a single-rotor, four-stroke, liquid-cooled, fuel-injected petrol Wankel engine design, with a toothed poly-V belt reduction drive with a reduction ratio of 3:1. It employs dual electronic ignition and produces its maximum power at 6000 rpm.
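As a simple illustration of the 3:1 reduction drive, the propeller shaft turns at one third of the engine output speed; the sketch below merely applies that ratio to the stated 6000 rpm figure, and the helper names are illustrative.

```python
REDUCTION_RATIO = 3.0  # engine revolutions per propeller revolution, from the 3:1 drive

def propeller_rpm(engine_rpm):
    """Propeller shaft speed obtained through the belt reduction drive."""
    return engine_rpm / REDUCTION_RATIO

print(propeller_rpm(6000))  # at the 6000 rpm rated engine speed -> 2000.0 rpm at the propeller
```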
Specifications (LCR - 407 SGti)
See also
References
External links
Wankel AG aircraft engines
Pistonless rotary engine | Wankel AG LCR - 407 SGti | Technology | 128 |
69,490,190 | https://en.wikipedia.org/wiki/Gemmatimonas%20phototrophica | Gemmatimonas phototrophica is an aerobic, anoxygenic and chlorophotoheterotroph bacterium species from the genus of Gemmatimonas.
References
Further reading
Qian, Pu; Gardiner, Alastair T.; Šímová, Ivana; Naydenova, Katerina; Croll, Tristan I.; Jackson, Philip J.; Nupur; Kloz, Miroslav; Čubáková, Petra; Kuzma, Marek; Zeng, Yonghui (2022-02-18). "2.4-Å structure of the double-ring Gemmatimonas phototrophica photosystem". Science Advances. 8 (7): eabk3139. doi:10.1126/sciadv.abk3139. ISSN 2375-2548. PMC 8849296. PMID 35171663.
Gemmatimonadota
Bacteria described in 2015 | Gemmatimonas phototrophica | Biology | 202 |
1,392,697 | https://en.wikipedia.org/wiki/Irrumatio | Irrumatio (also known as irrumation or by the colloquialism face-fucking) is a form of oral sex in which someone thrusts their penis into another person's mouth, in contrast to fellatio where the penis is being actively orally excited by a fellator. The difference lies mainly in which party takes the active part. By extension, irrumatio can also refer to the sexual technique of thrusting the penis between the thighs of a partner (intercrural sex).
In the ancient Roman sexual vocabulary, irrumatio is a form of oral rape (os impurum), in which a man forces his penis into someone else's mouth, almost always another man's.
Etymology and history
The English nouns irrumatio and irrumation, and the verb irrumate, come from the Latin verb irrumāre, meaning to force receptive male oral sex. J. L. Butrica, in his review of R. W. Hooper's edition of The Priapus Poems, a corpus of poems known as Priapeia in Latin, states that "some Roman sexual practices, like irrumatio, lack simple English equivalents".
There is some conjecture among linguists, as yet unresolved, that irrŭmātio may be connected with the Latin word rūmen, rūminis, the throat and gullet, whence 'ruminate', to chew the cud, therefore meaning 'insertion into the throat'. Others connect it with rūma or rūmis, an obsolete word for a teat, hence it would mean "giving milk", "giving to suck". (Compare the word fellō, which literally meant "suck (milk)" before it acquired its sexual sense.)
As the quotation from Butrica suggests and an article by W. A. Krenkel shows, irrumatio was a distinct sexual practice in ancient Rome. J. N. Adams states that "it was a standard joke to speak of irrumatio as a means of silencing someone". Oral sex was considered to be an act of defilement: the mouth had a particularly defined role as the organ of oratory, as in Greece, to participate in the central public sphere, where discursive powers were of great importance. Thus, to penetrate the mouth could be taken to be a sign of massive power differential within a relationship. Erotic art from Pompeii depicts irrumatio along with fututio, fellatio and cunnilingus, and pedicatio or anal sex. The extant wall paintings depicting explicit sex often appear to be in bathhouses and brothels, and oral sex was something usually practiced with prostitutes because of their lowly status. Craig A. Williams argues that irrumatio was regarded as a degrading act, even more so than anal rape. S. Tarkovsky states that, despite being popular, it was thought to be a hostile act, "taken directly from the Greek, whereby the Greek men would have to force the fellatio by violence". Furthermore, as Amy Richlin has shown in an article in the Journal of the History of Sexuality, it was also accepted as "oral rape", a punitive act against homosexuality. Catullus threatens two friends who have insulted him with both irrumatio and pedicatio in his Carmen 16, although the use could also mean "go to hell," rather than being a literal threat.
In modern English, the term "fellatio" has expanded to incorporate irrumatio, and the latter has fallen out of widespread use. Likewise, irrumatio might today be called "forced fellatio" or "oral rape". In modern English, especially in a non-rape context, the term "face fucking" is often used.
Another synonym for irrumatio is Egyptian rape or simply Egyptian; this goes back to the time of the Crusades when Mamluks were alleged to force their Christian captives to do this.
Ethnology
Peruvian erotic pottery of the Mochica culture depicts a form of fellatio in vases showing oragenital acts. See the vases illustrated in color in Dr. Rafael Larco-Hoyle’s Checan (Love!), published in both French and English versions by Éditions Nagel in Geneva, 1965, plates 30–33 and 133–135. The action depicted should properly be considered irrumation.
See also
Deep-throating
Latin obscenity
Pearl necklace
Notes
Bibliography
Legman, G. (1969). Oragenitalism: Oral Techniques in Genital Excitation. New York: Julian Press.
External links
Fellatio
Human throat
Oral eroticism
Penis
Sexual acts
Sexuality in ancient Rome | Irrumatio | Biology | 985 |
519,109 | https://en.wikipedia.org/wiki/Random%20encounter | A random encounter is a feature commonly used in various role-playing games whereby combat encounters with non-player character (NPC) enemies or other dangers occur sporadically and at random, usually without the enemy being physically detected beforehand. In general, random encounters are used to simulate the challenges associated with being in a hazardous environment—such as a monster-infested wilderness or dungeon—with uncertain frequency of occurrence and makeup (as opposed to a "placed" encounter). Frequent random encounters are common in Japanese role-playing games like Dragon Quest, Pokémon, and the Final Fantasy series.
Role-playing games
Random encounters—sometimes called wandering monsters—were a feature of Dungeons & Dragons from its beginnings in the 1970s, and persist in that game and its offshoots to this day. Random encounters are usually determined by the gamemaster by rolling dice against a random encounter table. The tables are usually based on terrain (and/or time/weather), and have a chance for differing encounters with different numbers or types of creatures. The results may be modified by other tables, such as whether the encounter is friendly, neutral or hostile. GMs are often encouraged to make their own tables. Specific adventures often have specific tables for locations, like a temple's hallways.
Wandering monsters are often used to wear down player characters and force them to use up consumable resources, such as hit points, magic spells and healing potions, as a way of punishing them for spending too much time in a dangerous area.
Video games
Random encounters were incorporated into early role-playing video games and have been common throughout the genre. Placed and random encounters were both used in 1981's Wizardry, and by the mid-1980s random encounters made up the bulk of battles in genre-defining games such as Dragon Warrior, Final Fantasy, and The Bard's Tale. Random encounters happen when the player is traversing the game world (often through the use of a "world map" or overworld). Most often, the player encounters enemies to battle, but occasionally friendly or neutral characters can appear, with whom the player might interact differently than with enemies. Random encounters are random in the sense that players cannot anticipate the exact moment of an encounter or what will be encountered, as the occurrence of the event is based on factors such as programmed probabilities; pseudo-random number generators create the sequence of numbers used to determine if an encounter will happen. The form and frequency can vary depending on a number of factors, such as where the player is located in the game world and the statistics of the player character. In some games, items can be found to increase or decrease the frequency of random encounters, even to eliminate them outright, or to increase the odds of having a particular encounter.
Random encounters often occur more frequently in dungeons, caves, forests, deserts, and swamps than in open plains. The simplest sort of random encounter algorithm would be as follows:
Each step, set X to a random integer between 0 and 99.
If in plains, and X < 8, a random encounter occurs.
If in swamp, desert, or forest, and X < 16, a random encounter occurs.
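A minimal sketch of this per-step check, written here in Python purely for illustration (the dictionary, terrain names, and function name are hypothetical; the 8-in-100 and 16-in-100 rates are taken from the steps above):

```python
import random

# Per-step encounter chance, taken from the steps above:
# 8 in 100 on plains; 16 in 100 in swamp, desert, or forest.
ENCOUNTER_CHANCE = {"plains": 8, "swamp": 16, "desert": 16, "forest": 16}

def encounter_this_step(terrain: str) -> bool:
    """Roll once per step; True means a random encounter begins."""
    x = random.randint(0, 99)  # X is a random integer between 0 and 99
    return x < ENCOUNTER_CHANCE[terrain]

# Example: count how many encounters occur over 20 steps through a forest.
encounters = sum(encounter_this_step("forest") for _ in range(20))
```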
The problem with this algorithm is that random encounters occur "too" randomly for the tastes of most players, as there will be "droughts" and "floods" in their distribution. Random encounters in rapid succession are considered undesirable as they lead to the player's perception of getting "bogged down", but with the simple algorithm it is possible for an encounter to occur only one step after the previous one. The early games in the Dragon Quest series, for example, allow random encounters to occur one step after the other. A more elaborate random encounter algorithm (similar to those used in many games) would be the following:
Set X to a random integer between 64 and 255.
For each step in plains, decrement X by 4. For each step in forest, swamp, or desert, decrement X by 8.
When X < 0, a fight ensues. Go to step 1.
This ensures that, in any terrain, the player will not experience more than one random encounter every eight steps. A game with this type of system can sometimes be taken advantage of by initiating some action that will reset the counter (pausing, opening a menu, saving), especially when using an emulator. This is a popular trick in speedruns to skip time-consuming or dangerous battles, or it can be used to ensure that each battle results in a rare or valuable encounter.
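A sketch of this counter-based variant, again in illustrative Python (the class and method names are invented for this example and are not drawn from any particular game):

```python
import random

# Step "costs" from the list above: rough terrain drains the counter faster.
STEP_COST = {"plains": 4, "swamp": 8, "desert": 8, "forest": 8}

class EncounterCounter:
    """Counter-based encounters: the counter is seeded between 64 and 255
    and walked down each step; a fight starts only when it drops below 0."""

    def __init__(self) -> None:
        self.counter = random.randint(64, 255)

    def step(self, terrain: str) -> bool:
        """Take one step; return True if a random encounter should begin."""
        self.counter -= STEP_COST[terrain]
        if self.counter < 0:
            self.counter = random.randint(64, 255)  # re-seed (back to step 1)
            return True
        return False

# Because the counter starts at 64 or more and falls by at most 8 per step,
# at least eight steps always separate consecutive encounters.
walker = EncounterCounter()
fights = sum(walker.step("forest") for _ in range(100))
```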
Random encounters have become less popular in video games with the passage of time, as gamers often complain that they are annoying, repetitive or discouraging to exploration. The Final Fantasy and Tales series have abandoned random encounter systems with successive games, while relatively newer franchises such as the Chrono series and Kingdom Hearts have never used them.
A more common approach in later RPGs, used in Final Fantasy XII, Radiata Stories, Fallout and Fallout 2 (although the Fallout games have unlimited random encounters on the world map), Legend of Legaia, and all Kingdom Hearts games, is to place a finite number of enemies in a given area. This cuts down on grinding and does not discourage exploration to the same extent. A similar approach is spawning, where visible monsters always (re)appear at the same location, as seen in Chrono Trigger and most of Dragon Quest IX. Both approaches give players the opportunity to anticipate, evade, or select encounters.
References
Role-playing game terminology
Video game terminology | Random encounter | Technology | 1,128 |
1,058,599 | https://en.wikipedia.org/wiki/Piaget%27s%20theory%20of%20cognitive%20development | Piaget's theory of cognitive development, or his genetic epistemology, is a comprehensive theory about the nature and development of human intelligence. It was originated by the Swiss developmental psychologist Jean Piaget (1896–1980). The theory deals with the nature of knowledge itself and how humans gradually come to acquire, construct, and use it. Piaget's theory is mainly known as a developmental stage theory.
In 1919, while working at the Alfred Binet Laboratory School in Paris, Piaget "was intrigued by the fact that children of different ages made different kinds of mistakes while solving problems". His experience and observations at the Alfred Binet Laboratory were the beginnings of his theory of cognitive development.
He believed that children of different ages made different mistakes because of the "quality rather than quantity" of their intelligence. Piaget proposed four stages to describe the development process of children: sensorimotor stage, pre-operational stage, concrete operational stage, and formal operational stage. Each stage describes a specific age group. In each stage, he described how children develop their cognitive skills. For example, he believed that children experience the world through actions, representing things with words, thinking logically, and using reasoning.
To Piaget, cognitive development was a progressive reorganisation of mental processes resulting from biological maturation and environmental experience. He believed that children construct an understanding of the world around them, experience discrepancies between what they already know and what they discover in their environment, then adjust their ideas accordingly. Moreover, Piaget claimed that cognitive development is at the centre of the human organism, and language is contingent on knowledge and understanding acquired through cognitive development. Piaget's earlier work received the greatest attention.
Child-centred classrooms and "open education" are direct applications of Piaget's views. Despite its huge success, Piaget's theory has some limitations that Piaget recognised himself: for example, the theory supports sharp stages rather than continuous development (horizontal and vertical décalage).
Nature of intelligence: operative and figurative
Piaget argued that reality is a construction. Reality is defined in reference to the two conditions that define dynamic systems. Specifically, he argued that reality involves transformations and states. Transformations refer to all manner of changes that a thing or person can undergo. States refer to the conditions or the appearances in which things or persons can be found between transformations. For example, there might be changes in shape or form (for instance, liquids are reshaped as they are transferred from one vessel to another, and similarly humans change in their characteristics as they grow older), in size (a toddler does not walk and run without falling, but after seven years of age the child's sensorimotor anatomy is well developed and the child now acquires new skills faster), or in placement or location in space and time (e.g., various objects or persons might be found at one place at one time and at a different place at another time). Thus, Piaget argued, if human intelligence is to be adaptive, it must have functions to represent both the transformational and the static aspects of reality. He proposed that operative intelligence is responsible for the representation and manipulation of the dynamic or transformational aspects of reality, and that figurative intelligence is responsible for the representation of the static aspects of reality.
Operative intelligence is the active aspect of intelligence. It involves all actions, overt or covert, undertaken in order to follow, recover, or anticipate the transformations of the objects or persons of interest. Figurative intelligence is the more or less static aspect of intelligence, involving all means of representation used to retain in mind the states (i.e., successive forms, shapes, or locations) that intervene between transformations. That is, it involves perception, imitation, mental imagery, drawing, and language. Therefore, the figurative aspects of intelligence derive their meaning from the operative aspects of intelligence, because states cannot exist independently of the transformations that interconnect them. Piaget stated that the figurative or the representational aspects of intelligence are subservient to its operative and dynamic aspects, and therefore, that understanding essentially derives from the operative aspect of intelligence.
At any time, operative intelligence frames how the world is understood and it changes if understanding is not successful. Piaget stated that this process of understanding and change involves two basic functions: assimilation and accommodation.
Assimilation and accommodation
Through his study of the field of education, Piaget focused on two processes, which he named assimilation and accommodation. To Piaget, assimilation meant integrating external elements into structures of lives or environments, or those we could have through experience. Assimilation is how humans perceive and adapt to new information. It is the process of fitting new information into pre-existing cognitive schemas. Through assimilation, new experiences are reinterpreted to fit into, or assimilate with, old ideas, and new facts are analyzed accordingly. It occurs when humans are faced with new or unfamiliar information and refer to previously learned information in order to make sense of it. In contrast, accommodation is the process of taking new information in one's environment and altering pre-existing schemas in order to fit in the new information. This happens when the existing schema (knowledge) does not work, and needs to be changed to deal with a new object or situation. Accommodation is imperative because it is how people will continue to interpret new concepts, schemas, frameworks, and more.
Various teaching methods have been developed based on Piaget's insights; they call for the use of questioning and inquiry-based education to help learners face more directly the sorts of contradictions to their pre-existing schemas that are conducive to learning.
Piaget believed that the human brain has been programmed through evolution to bring about equilibrium, which he believed ultimately influences structures through the internal and external processes of assimilation and accommodation.
Piaget's understanding was that assimilation and accommodation cannot exist without each other. They are two sides of the same coin. To assimilate an object into an existing mental schema, one first needs to take into account or accommodate to the particularities of this object to a certain extent. For instance, to recognize (assimilate) an apple as an apple, one must first focus (accommodate) on the contour of this object. To do this, one needs to roughly recognize the size of the object. Development increases the balance, or equilibration, between these two functions. When in balance with each other, assimilation and accommodation generate mental schemas of the operative intelligence. When one function dominates over the other, they generate representations which belong to figurative intelligence.
Cognitive equilibration
Piaget agreed with most other developmental psychologists in that there are three very important factors that are attributed to development: maturation, experience, and the social environment. But his theory differs in its addition of a fourth factor, equilibration, which "refers to the organism's attempt to keep its cognitive schemes in balance" (see also Piaget, and Boom's detailed account).
Equilibration is the motivational element that guides cognitive development. As humans, we have a biological need to make sense of the things we encounter in every aspect of our world in order to muster a greater understanding of it, and therefore, to flourish in it. This is where the concept of equilibration comes into play. If a child is confronted with information that does not fit into his or her previously held schemes, disequilibrium is said to occur. This, as one would imagine, is unsatisfactory to the child, so he or she will try to fix it. The incongruence will be fixed in one of three ways. The child will either ignore the newly discovered information, assimilate the information into a preexisting scheme, or accommodate the information by modifying a different scheme. Using any of these methods will return the child to a state of equilibrium; however, depending on the information being presented to the child, that state of equilibrium is not likely to be permanent.
For example, let's say Dave, a three-year-old boy who has grown up on a farm and is accustomed to seeing horses regularly, has been brought to the zoo by his parents and sees an elephant for the first time. Immediately he shouts "look mommy, Horsey!" Because Dave does not have a scheme for elephants, he interprets the elephant as being a horse due to its large size, color, tail, and long face. He believes the elephant is a horse until his mother corrects him. The new information Dave has received has put him in a state of disequilibrium. He now has to do one of three things. He can either: (1) turn his head, move towards another section of animals, and ignore this newly presented information; (2) distort the defining characteristics of an elephant so that he can assimilate it into his "Horsey" scheme; or (3) modify his preexisting "animal" schema to accommodate this new information regarding elephants by slightly altering his knowledge of animals as he knows them.
With age comes entry into a higher stage of development. With that being said, previously held schemes (and the children that hold them) are more than likely to be confronted with discrepant information the older they get. Silverman and Geiringer propose that one would be more successful in attempting to change a child's mode of thought by exposing that child to concepts that reflect a higher rather than a lower stage of development. Furthermore, children are better influenced by modeled performances that are one stage above their developmental level, as opposed to modeled performances that are either lower or two or more stages above their level.
Four stages of development
In his theory of cognitive development, Jean Piaget proposed that humans progress through four developmental stages: the sensorimotor stage, preoperational stage, concrete operational stage, and formal operational stage.
Sensorimotor stage
The first of these, the sensorimotor stage "extends from birth to the acquisition of language". In this stage, infants progressively construct knowledge and understanding of the world by coordinating experiences (such as vision and hearing) from physical interactions with objects (such as grasping, sucking, and stepping). Infants gain knowledge of the world from the physical actions they perform within it. They progress from reflexive, instinctual action at birth to the beginning of symbolic thought toward the end of the stage.
Children learn that they are separate from the environment. They can think about aspects of the environment, even though these may be outside the reach of the child's senses. In this stage, according to Piaget, the development of object permanence is one of the most important accomplishments. Object permanence is a child's understanding that an object continues to exist even though they cannot see or hear it. Peek-a-boo is a game in which children who have yet to fully develop object permanence respond to sudden hiding and revealing of a face. By the end of the sensorimotor period, children develop a permanent sense of self and object and will quickly lose interest in Peek-a-boo.
Piaget divided the sensorimotor stage into six sub-stages.
Preoperational stage
By observing sequences of play, Piaget was able to demonstrate the second stage of his theory, the pre-operational stage. He said that this stage starts towards the end of the second year. It starts when the child begins to learn to speak and lasts up until the age of seven. During the pre-operational stage of cognitive development, Piaget noted that children do not yet understand concrete logic and cannot mentally manipulate information. Children's increase in playing and pretending takes place in this stage. However, the child still has trouble seeing things from different points of view. The children's play is mainly characterized by symbolic play and the manipulation of symbols. Such play is demonstrated by the idea of checkers being snacks, pieces of paper being plates, and a box being a table. Their use of symbols exemplifies the idea of play in the absence of the actual objects involved.
The pre-operational stage is sparse and logically inadequate in regard to mental operations. The child is able to form stable concepts as well as magical beliefs (magical thinking). The child, however, is still not able to perform operations, which are tasks that the child can do mentally, rather than physically. Thinking in this stage is still egocentric, meaning the child has difficulty seeing the viewpoint of others. The Pre-operational Stage is split into two substages: the symbolic function substage, and the intuitive thought substage. The symbolic function substage is when children are able to understand, represent, remember, and picture objects in their mind without having the object in front of them. The intuitive thought substage is when children tend to propose the questions of "why?" and "how come?" This stage is when children want to understand everything.
Symbolic function substage
At about two to four years of age, children cannot yet manipulate and transform information in a logical way. However, they now can think in images and symbols. Other examples of mental abilities are language and pretend play. Symbolic play is when children develop imaginary friends or role-play with friends. Children's play becomes more social and they assign roles to each other. Some examples of symbolic play include playing house, or having a tea party. The type of symbolic play in which children engage is connected with their level of creativity and ability to connect with others. Additionally, the quality of their symbolic play can have consequences on their later development. For example, young children whose symbolic play is of a violent nature tend to exhibit less prosocial behavior and are more likely to display antisocial tendencies in later years.
In this stage, there are still limitations, such as egocentrism and precausal thinking.
Egocentrism occurs when a child is unable to distinguish between their own perspective and that of another person. Children tend to stick to their own viewpoint, rather than consider the view of others. Indeed, they are not even aware that such a concept as "different viewpoints" exists. Egocentrism can be seen in an experiment performed by Piaget and Swiss developmental psychologist Bärbel Inhelder, known as the three mountain problem. In this experiment, three views of a mountain are shown to the child, who is asked what a traveling doll would see at the various angles. The child will consistently describe what they can see from the position from which they are seated, regardless of the angle from which they are asked to take the doll's perspective. Egocentrism would also cause a child to believe, "I like The Lion Guard, so the high school student next door must like The Lion Guard, too."
Similar to preoperational children's egocentric thinking is their structuring of cause-and-effect relationships. Piaget coined the term "precausal thinking" to describe the way in which preoperational children use their own existing ideas or views, like in egocentrism, to explain cause-and-effect relationships. Three main concepts of causality as displayed by children in the preoperational stage include: animism, artificialism and transductive reasoning.
Animism is the belief that inanimate objects are capable of actions and have lifelike qualities. An example could be a child believing that the sidewalk was mad and made them fall down, or that the stars twinkle in the sky because they are happy. Artificialism refers to the belief that environmental characteristics can be attributed to human actions or interventions. For example, a child might say that it is windy outside because someone is blowing very hard, or the clouds are white because someone painted them that color. Finally, precausal thinking is categorized by transductive reasoning. Transductive reasoning is when a child fails to understand the true relationships between cause and effect. Unlike deductive or inductive reasoning (general to specific, or specific to general), transductive reasoning refers to when a child reasons from specific to specific, drawing a relationship between two separate events that are otherwise unrelated. For example, if a child hears the dog bark and then a balloon popped, the child would conclude that because the dog barked, the balloon popped.
Intuitive thought substage
A main feature of the pre-operational stage of development is primitive reasoning. Between the ages of four and seven, reasoning changes from symbolic thought to intuitive thought. This stage is "marked by greater dependence on intuitive thinking rather than just perception." Children begin to have more automatic thoughts that don't require evidence. During this stage there is a heightened sense of curiosity and need to understand how and why things work. Piaget named this substage "intuitive thought" because children are starting to develop more logical thought but cannot yet explain their reasoning. Thought during this stage is still immature and cognitive errors occur. Children in this stage depend on their own subjective perception of the object or event. This stage is characterized by centration, conservation, irreversibility, class inclusion, and transitive inference.
Centration is the act of focusing all attention on one characteristic or dimension of a situation, whilst disregarding all others. Conservation is the awareness that altering a substance's appearance does not change its basic properties. Children at this stage are unaware of conservation and exhibit centration. Both centration and conservation can be more easily understood once familiarized with Piaget's most famous experimental task.
In this task, a child is presented with two identical beakers containing the same amount of liquid. The child usually notes that the beakers do contain the same amount of liquid. When one of the beakers is poured into a taller and thinner container, children who are younger than seven or eight years old typically say that the two beakers no longer contain the same amount of liquid, and that the taller container holds the larger quantity (centration), without taking into consideration the fact that both beakers were previously noted to contain the same amount of liquid. Due to superficial changes, the child was unable to comprehend that the properties of the substances continued to remain the same (conservation).
Irreversibility is a concept developed in this stage which is closely related to the ideas of centration and conservation. Irreversibility refers to when children are unable to mentally reverse a sequence of events. In the same beaker situation, the child does not realize that, if the sequence of events was reversed and the water from the tall beaker was poured back into its original beaker, then the same amount of water would exist. Another example of children's reliance on visual representations is their misunderstanding of "less than" or "more than". When two rows containing equal numbers of blocks are placed in front of a child, one row spread farther apart than the other, the child will think that the row spread farther contains more blocks.
Class inclusion refers to a kind of conceptual thinking that children in the preoperational stage cannot yet grasp. Children's inability to focus on two aspects of a situation at once inhibits them from understanding the principle that one category or class can contain several different subcategories or classes. For example, a four-year-old girl may be shown a picture of eight dogs and three cats. The girl knows what cats and dogs are, and she is aware that they are both animals. However, when asked, "Are there more dogs or animals?" she is likely to answer "more dogs". This is due to her difficulty focusing on the two subclasses and the larger class all at the same time. She may have been able to view the dogs as dogs or animals, but struggled when trying to classify them as both, simultaneously. Similar to this is concept relating to intuitive thought, known as "transitive inference".
Transitive inference is using previous knowledge to determine the missing piece, using basic logic. Children in the preoperational stage lack this logic. An example of transitive inference would be when a child is presented with the information "A" is greater than "B" and "B" is greater than "C". This child may have difficulty understanding that "A" is also greater than "C".
Concrete operational stage
The concrete operational stage is the third stage of Piaget's theory of cognitive development. This stage, which follows the preoperational stage, occurs between the ages of 7 and 11 (middle childhood and preadolescence) years, and is characterized by the appropriate use of logic. During this stage, a child's thought processes become more mature and "adult like". They start solving problems in a more logical fashion. Abstract, hypothetical thinking is not yet developed in the child, and children can only solve problems that apply to concrete events or objects. At this stage, the children undergo a transition where the child learns rules such as conservation. Piaget determined that children are able to incorporate inductive reasoning. Inductive reasoning involves drawing inferences from observations in order to make a generalization. In contrast, children struggle with deductive reasoning, which involves using a generalized principle in order to try to predict the outcome of an event. Children in this stage commonly experience difficulties with figuring out logic in their heads. For example, a child will understand that "A is more than B" and "B is more than C". However, when asked "is A more than C?", the child might not be able to logically figure the question out mentally.
Two other important processes in the concrete operational stage are logic and the elimination of egocentrism.
Egocentrism is the inability to consider or understand a perspective other than one's own. It is the phase in which the child's thought and morality are completely self-focused. During this stage, the child acquires the ability to view things from another individual's perspective, even if they think that perspective is incorrect. For instance, show a child a comic in which Jane puts a doll under a box, leaves the room, and then Melissa moves the doll to a drawer, and Jane comes back. A child in the concrete operations stage will say that Jane will still think it's under the box even though the child knows it is in the drawer. (See also False-belief task.)
Children in this stage can, however, only solve problems that apply to actual (concrete) objects or events, and not abstract concepts or hypothetical tasks. Understanding and knowing how to use full common sense has not yet been completely adapted.
Piaget determined that children in the concrete operational stage were able to incorporate inductive logic. On the other hand, children at this age have difficulty using deductive logic, which involves using a general principle to predict the outcome of a specific event. This includes mental reversibility. An example of this is being able to reverse the order of relationships between mental categories. For example, a child might be able to recognize that his or her dog is a Labrador, that a Labrador is a dog, and that a dog is an animal, and draw conclusions from the information available, as well as apply all these processes to hypothetical situations.
The abstract quality of the adolescent's thought at the formal operational level is evident in the adolescent's verbal problem solving ability. The logical quality of the adolescent's thought is when children are more likely to solve problems in a trial-and-error fashion. Adolescents begin to think more as a scientist thinks, devising plans to solve problems and systematically test opinions. They use hypothetical-deductive reasoning, which means that they develop hypotheses or best guesses, and systematically deduce, or conclude, which is the best path to follow in solving the problem. During this stage the adolescent is able to understand love, logical proofs and values. During this stage the young person begins to entertain possibilities for the future and is fascinated with what they can be.
Adolescents also are changing cognitively by the way that they think about social matters. One thing that brings about a change is egocentrism. This happens by heightening self-consciousness and giving adolescents an idea of who they are through their personal uniqueness and invincibility. Adolescent egocentrism can be dissected into two types of social thinking: imaginary audience and personal fable. Imaginary audience consists of an adolescent believing that others are watching them and the things they do. Personal fable is not the same thing as imaginary audience but is often confused with imaginary audience. Personal fable consists of believing that you are exceptional in some way. These types of social thinking begin in the concrete stage but carry on to the formal operational stage of development.
Testing for concrete operations
Piagetian tests are well known and practiced to test for concrete operations. The most prevalent tests are those for conservation. There are some important aspects that the experimenter must take into account when performing experiments with these children.
One example of an experiment for testing conservation is the water level task. An experimenter will have two glasses that are the same size, fill them to the same level with liquid, and make sure the child understands that both of the glasses have the same amount of water in them. Then, the experimenter will pour the liquid from one of the small glasses into a tall, thin glass. The experimenter will then ask the child if the taller glass has more liquid, less liquid, or the same amount of liquid. The child will then give his answer. There are three keys for the experimenter to keep in mind with this experiment. These are justification, number of times asking, and word choice.
Justification: After the child has answered the question being posed, the experimenter must ask why the child gave that answer. This is important because the answers they give can help the experimenter to assess the child's developmental age.
Number of times asking: Some argue that a child's answers can be influenced by the number of times an experimenter asks them about the amount of water in the glasses. For example, a child is asked about the amount of liquid in the first set of glasses and then asked once again after the water is moved into a different sized glass. Some children will doubt their original answer and say something they would not have said if they did not doubt their first answer.
Word choice: The phrasing that the experimenter uses may affect how the child answers. If, in the liquid and glass example, the experimenter asks, "Which of these glasses has more liquid?", the child may think that his thoughts of them being the same is wrong because the adult is saying that one must have more. Alternatively, if the experimenter asks, "Are these equal?", then the child is more likely to say that they are, because the experimenter is implying that they are.
Classification: As children's experiences and vocabularies grow, they build schemata and are able to organize objects in many different ways. They also understand classification hierarchies and can arrange objects into a variety of classes and subclasses.
Identity: One feature of concrete operational thought is the understanding that objects have qualities that do not change even if the object is altered in some way. For instance, mass of an object does not change by rearranging it. A piece of chalk is still chalk even when the piece is broken in two.
Reversibility: The child learns that some things that have been changed can be returned to their original state. Water can be frozen and then thawed to become liquid again; however, eggs cannot be unscrambled. Children use reversibility a lot in mathematical problems such as: 2 + 3 = 5 and 5 – 3 = 2.
Conservation: The ability to understand that the quantity (mass, weight, volume) of something does not change when its appearance changes.
Decentration: The ability to focus on more than one feature of a scenario or problem at a time. This also describes the ability to attend to more than one task at a time. Decentration is what allows for conservation to occur.
Seriation: Arranging items along a quantitative dimension, such as length or weight, in a methodical way is now demonstrated by the concrete operational child. For example, they can logically arrange a series of different-sized sticks in order by length. Younger children not yet in the concrete stage approach a similar task in a haphazard way.
These new cognitive skills increase the child's understanding of the physical world. However, according to Piaget, they still cannot think in abstract ways. Additionally, they do not think in systematic scientific ways. For example, most children under age twelve would not be able to come up with the variables that influence the period that a pendulum takes to complete its arc. Even if they were given weights they could attach to strings in order to do this experiment, they would not be able to draw a clear conclusion.
Formal operational stage
The final stage is known as the formal operational stage (early to middle adolescence, beginning at age 11 and finalizing around 14–15): Intelligence is demonstrated through the logical use of symbols related to abstract concepts. This form of thought includes "assumptions that have no necessary relation to reality." At this point, the person is capable of hypothetical and deductive reasoning. During this time, people develop the ability to think about abstract concepts.
Piaget stated that "hypothetico-deductive reasoning" becomes important during the formal operational stage. This type of thinking involves hypothetical "what-if" situations that are not always rooted in reality, i.e. counterfactual thinking. It is often required in science and mathematics.
Abstract thought emerges during the formal operational stage. Children tend to think very concretely and specifically in earlier stages, and begin to consider possible outcomes and consequences of actions.
Metacognition is the capacity for "thinking about thinking" that allows adolescents and adults to reason about their thought processes and monitor them.
Problem-solving is demonstrated when children use trial-and-error to solve problems. The ability to systematically solve a problem in a logical and methodical way emerges.
Children in primary school years mostly use inductive reasoning, but adolescents start to use deductive reasoning. Inductive reasoning is when children draw general conclusions from personal experiences and specific facts. Adolescents learn how to use deductive reasoning by applying logic to create specific conclusions from abstract concepts. This capability results from their capacity to think hypothetically.
"However, research has shown that not all persons in all cultures reach formal operations, and most people do not use formal operations in all aspects of their lives".
Experiments
Piaget and his colleagues conducted several experiments to assess formal operational thought.
In one of the experiments, Piaget evaluated the cognitive capabilities of children of different ages through the use of a scale and varying weights. The task was to balance the scale by hooking weights on the ends of the scale. To successfully complete the task, the children must use formal operational thought to realize that the distance of the weights from the center and the heaviness of the weights both affected the balance. A heavier weight has to be placed closer to the center of the scale, and a lighter weight has to be placed farther from the center, so that the two weights balance each other. While 3- to 5-year-olds could not comprehend the concept of balancing at all, children by the age of 7 could balance the scale by placing the same weights on both ends, but they failed to realize the importance of the location. By age 10, children could think about location but failed to use logic and instead used trial-and-error. Finally, by age 13 and 14, in early to middle adolescence, some children more clearly understood the relationship between weight and distance and could successfully implement their hypothesis.
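The relationship the oldest children grasp is, in effect, the law of the lever. Stated as a worked equation (the symbols w for weight and d for distance from the pivot are illustrative and not Piaget's notation), the scale balances when

$$ w_1 \, d_1 = w_2 \, d_2 $$

so, for example, a 4-unit weight placed 3 units from the center balances a 6-unit weight placed 2 units from the center, since 4 × 3 = 6 × 2.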
The stages and causation
Piaget sees children's conception of causation as a march from "primitive" conceptions of cause to those of a more scientific, rigorous, and mechanical nature. These primitive concepts are characterized as supernatural, with a decidedly non-natural or non-mechanical tone. Piaget has as his most basic assumption that babies are phenomenists. That is, their knowledge "consists of assimilating things to schemas" from their own action such that they appear, from the child's point of view, "to have qualities which, in fact, stem from the organism". Consequently, these "subjective conceptions," so prevalent during Piaget's first stage of development, are dashed upon discovering deeper empirical truths.
Piaget gives the example of a child believing that the moon and stars follow him on a night walk. Upon learning that such is the case for his friends, he must separate his self from the object, resulting in a theory that the moon is immobile, or moves independently of other agents.
The second stage, from around three to eight years of age, is characterized by a mix of this type of magical, animistic, or "non-natural" conceptions of causation and mechanical or "naturalistic" causation. This conjunction of natural and non-natural causal explanations supposedly stems from experience itself, though Piaget does not make much of an attempt to describe the nature of the differences in conception. In his interviews with children, he asked questions specifically about natural phenomena, such as: "What makes clouds move?", "What makes the stars move?", "Why do rivers flow?" The nature of all the answers given, Piaget says, are such that these objects must perform their actions to "fulfill their obligations towards men". He calls this "moral explanation".
Postulated physical mechanisms underlying schemes, schemas, and stages
First note the distinction between 'schemes' (analogous to 1D lists of action-instructions, e.g. leading to separate pen-strokes), and figurative 'schemas' (aka 'schemata', akin to 2D drawings/sketches or virtual 3D models); see schema. This distinction (often overlooked by translators) is emphasized by Piaget and Inhelder, and by others (Appendix, pp. 21–22).
In 1967, Piaget considered the possibility of RNA molecules as likely embodiments of his still-abstract schemes (which he promoted as units of action) — though he did not come to any firm conclusion. At that time, due to work such as that of Swedish biochemist Holger Hydén, RNA concentrations had, indeed, been shown to correlate with learning.
To date, with one exception, it has been impossible to investigate such RNA hypotheses by traditional direct observation and logical deduction. The one exception is that such ultra-micro sites would almost certainly have to use optical communication, and recent studies have demonstrated that nerve-fibres can indeed transmit light/infra-red (in addition to their acknowledged role). However, it accords with the philosophy of science, especially scientific realism, to do indirect investigations of such phenomena which are intrinsically unobservable for practical reasons. The art then is to build up a plausible interdisciplinary case from the indirect evidence (as indeed the child does during concept development) — and then retain that model until it is disproved by observable-or-other new evidence which then calls for new accommodation.
In that spirit, it now might be said that the RNA/infra-red model is valid (for explaining Piagetian higher intelligence). Anyhow the current situation opens the way for more testing, and further development in several directions, including the finer points of Piaget's agenda.
Practical applications
Parents can use Piaget's theory in many ways to support their child's growth. Teachers can also use Piaget's theory to help their students. For example, recent studies have shown that children in the same grade and of the same age perform differently on tasks measuring basic addition and subtraction accuracy. Children in the preoperational and concrete operational levels of cognitive development perform arithmetic operations (such as addition and subtraction) with similar accuracy; however, children in the concrete operational level have been able to perform both addition problems and subtraction problems with overall greater precision. Teachers can use Piaget's theory to see where each child in their class stands with each subject by discussing the syllabus with their students and the students' parents.
The stage of cognitive growth differs from one person to another. Cognitive development or thinking is an active process from the beginning to the end of life. Intellectual advancement happens because people at every age and developmental period look for cognitive equilibrium. To achieve this balance, the easiest way is to understand the new experiences through the lens of the preexisting ideas. Infants learn that new objects can be grabbed in the same way as familiar objects, and adults explain the day's headlines as evidence for their existing worldview.
However, the application of standardized Piagetian theory and procedures in different societies established widely varying results, leading some to speculate not only that some cultures produce more cognitive development than others, but also that without specific kinds of cultural experience, including formal schooling, development might cease at a certain level, such as the concrete operational level. A procedure was done following methods developed in Geneva (i.e. the water level task). Participants were presented with two beakers of equal circumference and height, filled with equal amounts of water. The water from one beaker was transferred into another that was taller and of smaller circumference. The children and young adults from non-literate societies of a given age were more likely to think that the taller, thinner beaker had more water in it. On the other hand, an experiment on the effects of modifying testing procedures to match the local culture produced a different pattern of results. In the revised procedures, the participants explained in their own language and indicated that while the water was now "more", the quantity was the same. Piaget's water level task has also been applied to the elderly by Formann, and results showed an age-associated non-linear decline of performance.
Relation to psychometric theories of intelligence
Researchers have linked Piaget's theory to Cattell and Horn's theory of fluid and crystallized abilities. Piaget's operative intelligence corresponds to the Cattell-Horn formulation of fluid ability in that both concern logical thinking and the "eduction of relations" (an expression Cattell used to refer to the inferring of relationships). Piaget's treatment of everyday learning corresponds to the Cattell-Horn formulation of crystallized ability in that both reflect the impress of experience. Piaget's operativity is considered to be prior to, and ultimately provides the foundation for, everyday learning, much like fluid ability's relation to crystallized intelligence.
Piaget's theory also aligns with another psychometric theory, namely the psychometric theory of g, general intelligence. Piaget designed a number of tasks to assess hypotheses arising from his theory. The tasks were not intended to measure individual differences and they have no equivalent in psychometric intelligence tests. Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. g is thought to underlie performance on the two types of tasks. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of g as standard IQ tests.
Challenges to Piagetian stage theory
Piagetian accounts of development have been challenged on several grounds. First, as Piaget himself noted, development does not always progress in the smooth manner his theory seems to predict. Décalage, or progressive forms of cognitive developmental progression in a specific domain, suggest that the stage model is, at best, a useful approximation. Furthermore, studies have found that children may be able to learn, with relative ease, concepts and forms of complex reasoning that are supposedly represented in more advanced stages (Lourenço & Machado, 1996, p. 145). More broadly, Piaget's theory is "domain general," predicting that cognitive maturation occurs concurrently across different domains of knowledge (such as mathematics, logic, and understanding of physics or language). Piaget did not take into account variability in a child's performance, notably how a child can differ in sophistication across several domains.
During the 1980s and 1990s, cognitive developmentalists were influenced by "neo-nativist" and evolutionary psychology ideas. These ideas de-emphasized domain general theories and emphasized domain specificity or modularity of mind. Modularity implies that different cognitive faculties may be largely independent of one another, and thus develop according to quite different timetables, which are "influenced by real world experiences". In this vein, some cognitive developmentalists argued that, rather than being domain general learners, children come equipped with domain specific theories, sometimes referred to as "core knowledge," which allows them to break into learning within that domain. For example, even young infants appear to be sensitive to some predictable regularities in the movement and interactions of objects (for example, an object cannot pass through another object), or in human behavior (for example, a hand repeatedly reaching for an object has that object, not just a particular path of motion, as its goal), and such regularities become the building blocks from which more elaborate knowledge is constructed.
Piaget's theory has been said to undervalue the influence that culture has on cognitive development. Piaget demonstrates that a child goes through several stages of cognitive development and comes to conclusions on their own; however, a child's sociocultural environment plays an important part in their cognitive development. Social interaction teaches the child about the world and helps them develop through the cognitive stages, which Piaget neglected to consider.
More recent work from a newer dynamic systems approach has strongly challenged some of the basic presumptions of the "core knowledge" school that Piaget suggested. Dynamic systems approaches harken to modern neuroscientific research that was not available to Piaget when he was constructing his theory. This brought new light into research in psychology in which new techniques such as brain imaging provided new understanding to cognitive development. One important finding is that domain-specific knowledge is constructed as children develop and integrate knowledge. This enables the domain to improve the accuracy of the knowledge as well as organization of memories. However, this suggests more of a "smooth integration" of learning and development than either Piaget, or his neo-nativist critics, had envisioned. Additionally, some psychologists, such as Lev Vygotsky and Jerome Bruner, thought differently from Piaget, suggesting that language was more important for cognition development than Piaget implied.
Post-Piagetian and neo-Piagetian stages
In recent years, several theorists attempted to address concerns with Piaget's theory by developing new theories and models that can accommodate evidence which violates Piagetian predictions and postulates.
The neo-Piagetian theories of cognitive development, advanced by Robbie Case, Andreas Demetriou, Graeme S. Halford, Kurt W. Fischer, Michael Lamport Commons, and Juan Pascual-Leone, attempted to integrate Piaget's theory with cognitive and differential theories of cognitive organization and development. Their aim was to better account for the cognitive factors of development and for intra-individual and inter-individual differences in cognitive development. They suggested that development along Piaget's stages is due to increasing working memory capacity and processing efficiency by "biological maturation". Moreover, Demetriou's theory ascribes an important role to hypercognitive processes of "self-monitoring, self-recording, self-evaluation, and self-regulation", and it recognizes the operation of several relatively autonomous domains of thought (Demetriou, 1998; Demetriou, Mouyi, Spanoudis, 2010; Demetriou, 2003, p. 153).
Piaget's theory stops at the formal operational stage, but other researchers have observed the thinking of adults is more nuanced than formal operational thought. This fifth stage has been named post formal thought or operation. Post formal stages have been proposed. Michael Commons presented evidence for four post formal stages in the model of hierarchical complexity: systematic, meta-systematic, paradigmatic, and cross-paradigmatic (Commons & Richards, 2003, p. 206–208; Oliver, 2004, p. 31). There are many theorists, however, who have criticized "post formal thinking," because the concept lacks both theoretical and empirical verification. The term "integrative thinking" has been suggested for use instead.
A "sentential" stage, said to occur before the early preoperational stage, has been proposed by Fischer, Biggs and Biggs, Commons, and Richards.
Jerome Bruner has expressed views on cognitive development in a "pragmatic orientation" in which humans actively use knowledge for practical applications, such as problem solving and understanding reality.
Michael Lamport Commons proposed the model of hierarchical complexity (MHC) in two dimensions: horizontal complexity and vertical complexity (Commons & Richards, 2003, p. 205).
Kieran Egan has proposed five stages of understanding. These are "somatic", "mythic", "romantic", "philosophic", and "ironic". These stages are developed through cognitive tools such as "stories", "binary oppositions", "fantasy" and "rhyme, rhythm, and meter" to enhance memorization to develop a long-lasting learning capacity.
Lawrence Kohlberg developed three stages of moral development: "Preconventional", "Conventional" and "Postconventional". Each level is composed of two orientation stages, with a total of six orientation stages: (1) "Punishment-Obedience", (2) "Instrumental Relativist", (3) "Good Boy-Nice Girl", (4) "Law and Order", (5) "Social Contract", and (6) "Universal Ethical Principle".
Andreas Demetriou has expressed neo-Piagetian theories of cognitive development.
Jane Loevinger's stages of ego development occur through "an evolution of stages". "First is the Presocial Stage followed by the Symbiotic Stage, Impulsive Stage, Self-Protective Stage, Conformist Stage, Self-Aware Level: Transition from Conformist to Conscientious Stage, Individualistic Level: Transition from Conscientious to the Autonomous Stage, Conformist Stage, and Integrated Stage".
Ken Wilber has incorporated Piaget's theory in his multidisciplinary field of integral theory. The human consciousness is structured in hierarchical order and organized in "holon" chains or "great chain of being", which are based on the level of spiritual and psychological development.
Oliver Kress published a model that connected Piaget's theory of development and Abraham Maslow's concept of self-actualization.
Cheryl Armon has proposed five stages of "the Good Life". These are "Egoistic Hedonism", "Instrumental Hedonism", "Affective/Altruistic Mutuality", "Individuality", and "Autonomy/Community" (Andreoletti & Demick, 2003, p. 284) (Armon, 1984, p. 40–43).
Christopher R. Hallpike proposed that human cognitive moral understanding has evolved over time from its primitive state to its present form.
Robert Kegan extended Piaget's developmental model to adults in describing what he called constructive-developmental psychology.
References
External links
Cognitive psychology
Constructivism (psychological school)
Enactive cognition
Developmental neuroscience
Developmental stage theories | Piaget's theory of cognitive development | Biology | 9,766 |
68,217,205 | https://en.wikipedia.org/wiki/Toshiba%20Pasopia | Toshiba Pasopia is a computer from manufacturer Toshiba, released in 1981 and based around a Zilog Z80 microprocessor. It is not to be confused with the Toshiba Pasopia IQ, a similarly named line of MSX-compatible computers.
There are two models, the PA7010 and the PA7012. The PA7010 comes with T-BASIC, a version of Microsoft BASIC. The PA7012 comes with a more powerful built-in operating system, OA-BASIC, developed by Toshiba and capable of sequential file access and automated loading of programs.
The keyboard has 90 keys, a separate numeric keypad and eight function keys. The machine could be expanded with disk drives and extra RAM, and offered an RS-232 port and a parallel printer port.
In 1982 the machine was sold on the American market as Toshiba T100. It had an optional LCD screen (with 320 x 64 resolution) that fitted into the keyboard. Two CRT monitors were available: a 13" green monochrome, and 15" RGB color.
1982 models came with T-BASIC version 1.1.
The machine supported cartridge-type peripherals called PAC, RAM packs with battery backup, Kanji ROM packs and joystick ports. Pascal and OA-BASIC cartridges were on sale.
In 1983 Toshiba released the Pasopia 5 and Pasopia 7, intended as successors to the original Pasopia.
A dedicated magazine, named "Oh! Pasopia" was published in Japan between 1983 and 1987.
See also
Toshiba Pasopia 5
Toshiba Pasopia 7
Toshiba Pasopia 16 (IBM PC compatible)
Toshiba Pasopia IQ (MSX compatible)
References
Pasopia
Z80-based home computers
Computer-related introductions in 1981 | Toshiba Pasopia | Technology | 364 |
1,283,240 | https://en.wikipedia.org/wiki/Culvert | A culvert is a structure that channels water past an obstacle or to a subterranean waterway. Typically embedded so as to be surrounded by soil, a culvert may be made from a pipe, reinforced concrete or other material. In the United Kingdom, the word can also be used for a longer artificially buried watercourse.
Culverts are commonly used both as cross-drains to relieve drainage of ditches at the roadside, and to pass water under a road at natural drainage and stream crossings. When they are found beneath roads, they are frequently empty. A culvert may also be a bridge-like structure designed to allow vehicle or pedestrian traffic to cross over the waterway while allowing adequate passage for the water. Dry culverts are used to channel a fire hose beneath a noise barrier for the ease of firefighting along a highway without the need or danger of placing hydrants along the roadway itself.
Culverts come in many sizes and shapes including round, elliptical, flat-bottomed, open-bottomed, pear-shaped, and box-like constructions. The culvert type and shape selection is based on a number of factors including requirements for hydraulic performance, limitations on upstream water surface elevation, and roadway embankment height.
The process of removing culverts to restore an open-air watercourse is known as daylighting. In the UK, the practice is also known as deculverting.
Materials
Culverts can be constructed of a variety of materials including cast-in-place or precast concrete (reinforced or non-reinforced), galvanized steel, aluminum, or plastic (typically high-density polyethylene). Two or more materials may be combined to form composite structures. For example, open-bottom corrugated steel structures are often built on concrete footings.
Design and engineering
Construction or installation at a culvert site generally results in disturbance of the site's soil, stream banks, or stream bed, and can result in the occurrence of unwanted problems such as scour holes or slumping of banks adjacent to the culvert structure.
Culverts must be properly sized and installed, and protected from erosion and scour. Many US agencies such as the Federal Highway Administration, Bureau of Land Management, and Environmental Protection Agency, as well as state or local authorities, require that culverts be designed and engineered to meet specific federal, state, or local regulations and guidelines to ensure proper function and to protect against culvert failures.
Culverts are classified by standards for their load capacities, water flow capacities, life spans, and installation requirements for bedding and backfill. Most agencies adhere to these standards when designing, engineering, and specifying culverts.
Failures
Culvert failures can occur for a wide variety of reasons including maintenance, environmental, and installation-related failures, functional or process failures related to capacity and volume causing the erosion of the soil around or under them, and structural or material failures that cause culverts to fail due to collapse or corrosion of the materials from which they are made.
If the failure is sudden and catastrophic, it can result in injury or loss of life. Sudden road collapses are often the result of poorly designed and engineered culvert crossing sites, or of unexpected changes in the surrounding environment that cause design parameters to be exceeded. Water passing through undersized culverts will scour away the surrounding soil over time, which can cause a sudden failure during medium-sized rain events. Accidents from culvert failure can also occur if a culvert has not been adequately sized and a flood event overwhelms it or disrupts the road or railway above it.
Ongoing culvert function without failure depends on proper design and engineering considerations being given to load, hydraulic flow, surrounding soil analysis, backfill and bedding compaction, and erosion protection. Improperly designed backfill support around culverts can result in material collapse or failure from inadequate load support.
For existing culverts which have experienced degradation or loss of structural integrity, or which need to meet new codes or standards, rehabilitation using a reline pipe may be preferred over replacement. Sizing of a reline culvert uses the same hydraulic flow design criteria as that of a new culvert; however, because the reline culvert is inserted into an existing culvert or host pipe, reline installation requires grouting of the annular space between the host pipe and the surface of the reline pipe (typically using a low compression strength grout) so as to prevent or reduce seepage and soil migration. Grouting also serves as a means of establishing a structural connection between the liner, host pipe and soil. Depending on the size and annular space to be filled, as well as the pipe elevation between the inlet and outlet, it may be necessary to add grout in multiple stages or "lifts". If multiple lifts are required, then a grouting plan is needed, which should define the placement of grout feed tubes, air tubes, the type of grout to be used, and, if injecting or pumping grout, the required injection pressure. As the diameter of the reline pipe will be smaller than that of the host pipe, the cross-sectional flow area will be smaller. By selecting a reline pipe with a very smooth internal surface, with an approximate Hazen-Williams friction factor C value of between 140 and 150, the decreased flow area can be offset and hydraulic flow rates potentially increased by way of reduced surface flow resistance. Examples of pipe materials with high C-factors are high-density polyethylene (150) and polyvinyl chloride (140).
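As a rough, illustrative sketch of that offset (not design guidance; the diameters, slope, and C values below are invented assumptions), full-flow capacities of a hypothetical host pipe and a smaller, smoother reline pipe can be compared with the Hazen-Williams velocity formula V = k C R^0.63 S^0.54, with k ≈ 0.849 in SI units and hydraulic radius R = D/4 for a circular pipe flowing full:

    import math

    def full_flow_capacity(diameter_m, slope, c_factor, k=0.849):
        """Approximate full-flow discharge (m^3/s) of a circular pipe via Hazen-Williams."""
        area = math.pi * diameter_m ** 2 / 4.0
        hydraulic_radius = diameter_m / 4.0
        velocity = k * c_factor * hydraulic_radius ** 0.63 * slope ** 0.54
        return area * velocity

    slope = 0.005                                          # assumed culvert grade (m/m)
    host = full_flow_capacity(1.2, slope, c_factor=100)    # hypothetical aged host pipe
    liner = full_flow_capacity(1.0, slope, c_factor=150)   # hypothetical smooth HDPE liner
    print(f"host ~{host:.2f} m^3/s, liner ~{liner:.2f} m^3/s")

With these assumed numbers the smoother liner recovers most, though not all, of the capacity lost to the smaller diameter, which is the trade-off described above.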
Environmental impacts
Safe and stable stream crossings can accommodate wildlife and protect stream health, while reducing expensive erosion and structural damage. Undersized and poorly placed culverts can cause problems for water quality and aquatic organisms. Poorly designed culverts can degrade water quality via scour and erosion, as well as restrict the movement of aquatic organisms between upstream and downstream habitat. Fish are a common victim in the loss of habitat due to poorly designed crossing structures.
Culverts that offer adequate aquatic organism passage reduce impediments to movement of fish, wildlife, and other aquatic life that require instream passage. Poorly designed culverts are also more apt to become jammed with sediment and debris during medium to large scale rain events. If the culvert cannot pass the water volume in the stream, then the water may overflow the road embankment. This may cause significant erosion, ultimately washing out the culvert. The embankment material that is washed away can clog other structures downstream, causing them to fail as well. It can also damage crops and property. A properly sized structure and hard bank armoring can help to alleviate this pressure.
Culvert style replacement is a widespread practice in stream restoration. Long-term benefits of this practice include reduced risk of catastrophic failure and improved fish passage. If best management practices are followed, short-term impacts on the aquatic biology are minimal.
Fish passage
While the culvert discharge capacity derives from hydrological and hydraulic engineering considerations, this often results in large velocities in the barrel, creating a possible fish passage barrier. Critical culvert parameters in terms of fish passage are the dimensions of the barrel, particularly its length, cross-sectional shape, and invert slope. The behavioural response by fish species to culvert dimensions, light conditions, and flow turbulence may play a role in their swimming ability and culvert passage rate. There is no simple technical means to ascertain the turbulence characteristics most relevant to fish passage in culverts, but it is understood that flow turbulence plays a key role in fish behaviour.
The interactions between swimming fish and vortical structures involve a broad range of relevant length and time scales. Recent discussions emphasised the role of secondary flow motion, considerations of fish dimensions in relation to the spectrum of turbulence scales, and the beneficial role of turbulent structures provided that fish are able to exploit them.
The current literature on culvert fish passage focuses mostly on fast-swimming fish species, but a few studies have argued for better guidelines for small-bodied fish including juveniles. Finally, a solid understanding of turbulence typology is a basic requirement to any successful hydraulic structure design conducive of upstream fish passage.
Minimum energy loss culverts
In the coastal plains of Queensland, Australia, torrential rains during the wet season place a heavy demand on culverts. The natural slope of the flood plains is often very small, and little fall (or head loss) is permissible in the culverts. Researchers developed and patented the design procedure of minimum energy loss culverts which yield small afflux.
A minimum energy loss culvert or waterway is a structure designed with the concept of minimum head loss. The flow in the approach channel is contracted through a streamlined inlet into the barrel where the channel width is minimum, and then it is expanded in a streamlined outlet before being finally released into the downstream natural channel. Both the inlet and the outlet must be streamlined to avoid significant form losses. The barrel invert is often lowered to increase the discharge capacity.
The concept of minimum energy loss culverts was developed by a shire engineer in Victoria and a professor at the University of Queensland during the late 1960s. While a number of small-size structures were designed and built in Victoria, some major structures were designed, tested and built in south-east Queensland.
See also
Notes
References
Oxford English Dictionary,
Culvert Design for Aquatic Organism Passage. US Department of Transportation, Federal Highway Administration
External links
Impact of culverts on salmon
Culvert fact sheet
Culvert analysis tool
Bottomless Culvert Scour Study
Culverts for Fish Passage
Hydraulics of Minimum Energy Loss (MEL)
Hydraulics engineering circular
Culvert use, installation, and sizing
Design guidelines for culverts
Upstream fish passage in box culverts
Bridges
Tunnels | Culvert | Engineering | 2,031 |
1,586,871 | https://en.wikipedia.org/wiki/Lochium%20Funis | Lochium Funis (Latin for the log and line) was a constellation created by Johann Bode in 1801 next to the constellation Pyxis, an earlier invention of Nicolas Louis de Lacaille. It represented the log and line used by seamen for measuring a ship's speed through the water. It was never used by other astronomers.
External links
Lochium Funis, Ian Ridpath's Star Tales
Former constellations | Lochium Funis | Astronomy | 88 |
36,236,283 | https://en.wikipedia.org/wiki/Casement%20stay | A casement stay is a metal bar used to hold a casement window in a specific open or closed position. Metal windows will normally have the stay included at the time of manufacture, while wooden windows will have them added after fitting.
Different kinds of casement stay include the peg type, the telescopic stay and the friction stay.
The peg type casement stay has one or two pins or pegs inside the rebate. The stay is a metal bar with holes that fit onto the peg, and allow the sash window to be held open in various positions. The peg nearest the hinge can then be used as a fulcrum. Disadvantages of peg type stays are that the stay handle may protrude dangerously into the room. Another issue is the limited opening that can be achieved. They also rattle in the wind. The mounting plate on the bar connects to the bottom rail of the sash window. The pegs will be connected to a small plate that can be called a pintie plate.
The range of opening depends on the length of the bar and the position of the pins.
There are locks that can put a bolt through a hole in the stay to prevent the window from opening.
Telescopic friction stays are tube shaped and can extend from 11 to 16 inches. They have models for outward opening or inward opening windows. These were invented by Alfred M. Lane for the Monarch Metal Weather Strip Company, later called the Monarch Metal Products Company, in St. Louis, Missouri. The tubes maintained their position by friction blocks that applied pressure to the outer tube. The tubes had the advantage of keeping out dirt and water, and having no protruding parts that could harm people. This became known as the Monarch casement stay and cost US$1.50 in 1925.
Another kind of friction stay is in the shape of a bent arm and can allow the window to open to 180°. This can also be called a restrictor stay.
Peg and bar stays have different models that mount vertically or horizontally. Common materials include steel, brass, zinc alloy, nickel and aluminum.
The screw down adjustable stay has a bar that slides through a slot with screw, that can be tightened to hold the window in position. Such a stay will limit the opening to a window.
The handle of the bar in a stay can take on different shapes. The monkey tail has a spiral with over one and a half turns, but the pig tail only does just over one turn. The cockspur handle curves down narrowly. A bulb or ball handle has a hemispherical end on it. The shepherd's crook handle curls around just over 180°. Reeded handles have ridges that help the grip.
Four bar stays combine their function with a hinge, and can shift the window sideways as it opens.
An alternative is the chainwinder.
Installing a casement stay takes about half an hour.
References
Architectural elements
Ironmongery | Casement stay | Technology,Engineering | 582 |
2,237,679 | https://en.wikipedia.org/wiki/Methyl%20nitrate | Methyl nitrate is the methyl ester of nitric acid and has the chemical formula CH3NO3. It is a colourless explosive volatile liquid.
Synthesis
It can be produced by the condensation of nitric acid and methanol:
CH3OH + HNO3 → CH3NO3 + H2O
A newer method uses methyl iodide and silver nitrate:
CH3I + AgNO3 → CH3NO3 + AgI
Methyl nitrate can be produced on a laboratory or industrial scale either through the distillation of a mixture of methanol and nitric acid, or by the nitration of methanol by a mixture of sulfuric and nitric acids. The first procedure is not preferred due to the great explosion danger presented by the methyl nitrate vapour. The second procedure is essentially identical to that of making nitroglycerin. However, the process is usually run at a slightly higher temperature and the mixture is stirred mechanically on an industrial scale instead of with compressed air.
Electrolytic production methods have been reported involving electrolyzing sodium acetate and sodium nitrate in acetic acid.
Methyl nitrate is also the product of the oxidation of some organic compounds in the presence of nitrogen oxides and chlorine, namely chloroethane or di-tert-butyl ether, while also producing nitromethane. Oxidation of nitromethane using nitrogen dioxide in an inert atmosphere can also yield methyl nitrate.
Explosive properties
Methyl nitrate is a sensitive explosive. When ignited it burns extremely fiercely with a gray-blue flame. Methyl nitrate is a very strong explosive with a detonation velocity of 6,300 m/s, like nitroglycerin, ethylene glycol dinitrate, and other nitrate esters. The sensitivity of methyl nitrate to initiation by detonation is among the greatest known, with even a number one blasting cap, the lowest power available, producing a near full detonation of the explosive.
Despite the superior explosive properties of methyl nitrate, it has not received application as an explosive due mostly to its high volatility, which prevents it from being stored or handled safely.
Safety
As well as being an explosive, methyl nitrate is toxic and causes headaches when inhaled.
History
Methyl nitrate has not received much attention as an explosive, but as a mixture containing 25% methanol it was used as rocket fuel and volumetric explosive under the name Myrol in Nazi Germany during World War II. This mixture would evaporate at a constant rate and so its composition would not change over time. It presents a slight explosive danger (it is somewhat difficult to detonate) and does not detonate easily via shock.
According to A. Stettbacher, the substance was used as a combustible during the Reichstag fire in 1933. Gartz argues in a recent work that, given its ease of production and explosive potential, only methyl nitrate can correspond to the famous and mysterious "shooting water" of the German Feuerwerkbuch ("fireworks book") of about 1420 (the oldest technical text in the German language, handwritten in Dresden and later printed in Augsburg).
An extract of the text from the 1420 Feuerwerkbuch is as follows (written in Early New High German):
Translated:
Structure
The structure of methyl nitrate has been studied experimentally in the gas phase (combined gas-electron diffraction and microwave spectroscopy, GED/MW) and in the crystalline state (X-ray diffraction, XRD).
In the solid state there are weak interactions between the O and N atoms of different molecules.
References
External links
Alkyl nitrates
Explosive chemicals
Liquid explosives
Methyl esters | Methyl nitrate | Chemistry | 761 |
53,316,670 | https://en.wikipedia.org/wiki/Disk%20footprint | Disk footprint (or storage footprint) of a software application refers to its size when it is in an inactive state, in other words, when it is not executing but is stored on secondary media or downloaded over a network connection. It gives a sense of the size of an application, typically expressed in bytes (kilobytes, megabytes, etc.), that would be required to store the application on a storage device or to transmit it over a network. Due to the organization of modern software applications, disk footprint may not be the best indicator of actual execution-time memory requirements: a tiny application that has huge memory requirements, or that loads a large number of dynamically linked libraries, may not have a disk footprint comparable to its runtime footprint.
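For instance, the on-disk footprint of an installed application can be approximated by summing the sizes of its files; the minimal Python sketch below does exactly that (the directory path is a hypothetical placeholder), and deliberately says nothing about the program's runtime memory footprint:

    import os

    def disk_footprint(path):
        """Total size in bytes of all regular files under `path`."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                full_path = os.path.join(root, name)
                if os.path.isfile(full_path):        # skip broken symlinks and the like
                    total += os.path.getsize(full_path)
        return total

    print(disk_footprint("/opt/example-app") / 1024, "KiB")  # hypothetical install directory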
See also
Computer data storage
Disk storage
References
Software optimization | Disk footprint | Technology | 171 |
20,561,282 | https://en.wikipedia.org/wiki/Alpha%E2%80%93beta%20transformation | In electrical engineering, the alpha-beta (αβγ) transformation (also known as the Clarke transformation) is a mathematical transformation employed to simplify the analysis of three-phase circuits. Conceptually it is similar to the dq0 transformation. One very useful application of the transformation is the generation of the reference signal used for space vector modulation control of three-phase inverters.
History
In 1937 and 1938, Edith Clarke published papers with modified methods of calculation for unbalanced three-phase problems that turned out to be particularly useful.
Definition
The transform applied to three-phase currents, as used by Edith Clarke, is

i_{\alpha\beta\gamma}(t) = T\,i_{abc}(t) = \frac{2}{3}\begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix}\begin{bmatrix} i_a(t) \\ i_b(t) \\ i_c(t) \end{bmatrix}

where i_{abc}(t) is a generic three-phase current sequence and i_{\alpha\beta\gamma}(t) is the corresponding current sequence given by the transformation T.
The inverse transform is:

i_{abc}(t) = T^{-1}\,i_{\alpha\beta\gamma}(t) = \begin{bmatrix} 1 & 0 & 1 \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} & 1 \\ -\frac{1}{2} & -\frac{\sqrt{3}}{2} & 1 \end{bmatrix}\begin{bmatrix} i_\alpha(t) \\ i_\beta(t) \\ i_\gamma(t) \end{bmatrix}
The above Clarke's transformation preserves the amplitude of the electrical variables to which it is applied. Indeed, consider a three-phase symmetric, direct, current sequence

i_a(t) = \sqrt{2}\,I \cos\theta(t), \quad i_b(t) = \sqrt{2}\,I \cos\!\left(\theta(t) - \frac{2\pi}{3}\right), \quad i_c(t) = \sqrt{2}\,I \cos\!\left(\theta(t) + \frac{2\pi}{3}\right)

where I is the RMS of i_a(t), i_b(t), i_c(t) and \theta(t) is the generic time-varying angle that can also be set to \omega t without loss of generality. Then, by applying T to the current sequence, it results in

i_\alpha(t) = \sqrt{2}\,I \cos\theta(t), \quad i_\beta(t) = \sqrt{2}\,I \sin\theta(t), \quad i_\gamma(t) = 0,

where the last equation holds since we have considered balanced currents. As shown above, the amplitudes of the currents in the αβγ reference frame are the same as those in the natural (abc) reference frame.
Power invariant transformation
The active and reactive powers computed in the Clarke domain with the transformation shown above are not the same as those computed in the standard reference frame. This happens because T is not unitary. In order to preserve the active and reactive powers one has, instead, to consider

T = \sqrt{\frac{2}{3}}\begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}

which is a unitary matrix, so its inverse coincides with its transpose.
In this case the amplitudes of the transformed currents are not the same as those in the standard reference frame; for the balanced sequence above one obtains

i_\alpha(t) = \sqrt{3}\,I \cos\theta(t), \quad i_\beta(t) = \sqrt{3}\,I \sin\theta(t), \quad i_\gamma(t) = 0.

Finally, the inverse transformation in this case is

i_{abc}(t) = T^{\mathsf{T}}\,i_{\alpha\beta\gamma}(t).
Simplified transformation
Since i_a(t) + i_b(t) + i_c(t) = 0 in a balanced system, and thus i_\gamma(t) = 0, one can also consider the simplified transform

\begin{bmatrix} i_\alpha(t) \\ i_\beta(t) \end{bmatrix} = \frac{2}{3}\begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix}\begin{bmatrix} i_a(t) \\ i_b(t) \\ i_c(t) \end{bmatrix}

which is simply the original Clarke's transformation with the 3rd equation excluded, and

\begin{bmatrix} i_a(t) \\ i_b(t) \\ i_c(t) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ -\frac{1}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix}\begin{bmatrix} i_\alpha(t) \\ i_\beta(t) \end{bmatrix}

which is the corresponding inverse transformation for balanced systems.
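As an illustrative sketch (not part of the original formulation; the function names are my own), the amplitude-invariant Clarke transform and its inverse can be implemented directly from the matrices above, for example in Python with NumPy:

    import numpy as np

    # Amplitude-invariant Clarke transformation matrix (2/3 scaling).
    T_CLARKE = (2.0 / 3.0) * np.array([
        [1.0, -0.5, -0.5],
        [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
        [0.5, 0.5, 0.5],
    ])

    def abc_to_alphabeta(i_abc):
        """Map phase quantities [ia, ib, ic] to [i_alpha, i_beta, i_gamma]."""
        return T_CLARKE @ np.asarray(i_abc)

    def alphabeta_to_abc(i_abg):
        """Inverse Clarke transform back to phase quantities."""
        return np.linalg.inv(T_CLARKE) @ np.asarray(i_abg)

    # Balanced example: the gamma component should be (numerically) zero.
    theta = 0.7
    i_abc = np.sqrt(2) * 10 * np.cos([theta, theta - 2 * np.pi / 3, theta + 2 * np.pi / 3])
    print(abc_to_alphabeta(i_abc))  # approx. [sqrt(2)*10*cos(theta), sqrt(2)*10*sin(theta), 0]

Replacing the 2/3 factor with sqrt(2/3) and the last row with 1/sqrt(2) entries gives the power-invariant variant, whose inverse is simply its transpose.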
Geometric Interpretation
The transformation can be thought of as the projection of the three phase quantities (voltages or currents) onto two stationary axes, the alpha axis and the beta axis.
However, no information is lost if the system is balanced, as the equation i_a(t) + i_b(t) + i_c(t) = 0 is equivalent to the equation for i_γ(t) in the transform. If the system is not balanced, the i_γ(t) term will contain the error component of the projection. Thus, an i_γ(t) of zero indicates that the system is balanced (and thus exists entirely in the alpha-beta coordinate space), and it can be ignored for two-coordinate calculations that operate under the assumption that the system is balanced. This is the elegance of the Clarke transform, as it reduces a three-component system to a two-component system thanks to this assumption.
Another way to understand this is that the equation i_a(t) + i_b(t) + i_c(t) = 0 defines a plane in a Euclidean three-coordinate space. The alpha-beta coordinate space can be understood as the two-coordinate space defined by this plane, i.e. the alpha-beta axes lie on the plane defined by i_a(t) + i_b(t) + i_c(t) = 0.
This also means that in order the use the Clarke transform, one must ensure the system is balanced, otherwise subsequent two coordinate calculations will be erroneous. This is a practical consideration in applications where the three phase quantities are measured and can possibly have measurement error.
dq0 transform
The transform is conceptually similar to the transform. Whereas the transform is the projection of the phase quantities onto a rotating two-axis reference frame, the transform can be thought of as the projection of the phase quantities onto a stationary two-axis reference frame.
See also
Symmetrical components
Y-Δ transform
Vector control (motor)
References
Electrical engineering
Three-phase AC power
General references
C.J. O'Rourke et al. "A Geometric Interpretation of Reference Frames and Transformations: dq0, Clarke, and Park," in IEEE Transactions on Energy Conversion, vol. 34, no. 4, pp. 2070-2083, Dec. 2019. | Alpha–beta transformation | Engineering | 791 |
44,498,198 | https://en.wikipedia.org/wiki/Misra%20%26%20Gries%20edge%20coloring%20algorithm | The Misra & Gries edge coloring algorithm is a polynomial time algorithm in graph theory that finds an edge coloring of any simple graph. The coloring produced uses at most Δ+1 colors, where Δ is the maximum degree of the graph. This is optimal for some graphs, and it uses at most one color more than optimal for all others. The existence of such a coloring is guaranteed by Vizing's theorem.
It was first published by Jayadev Misra and David Gries in 1992. It is a simplification of a prior algorithm by Béla Bollobás.
This algorithm is the fastest known almost-optimal algorithm for edge coloring, executing in O(|V||E|) time. A faster time bound was claimed in a 1985 technical report by Gabow et al., but this has never been published.
In general, optimal edge coloring is NP-complete, so it is very unlikely that a polynomial time algorithm exists. There are however exponential time exact edge coloring algorithms that give an optimal solution.
Key concepts
Free color
A color c is said to be free on a vertex u if no incident edge of u has color c.
Fan
A fan of a vertex X is a sequence of vertices F[1:k] that satisfies the following conditions:
F[1:k] is a non-empty sequence of distinct neighbors of X;
Edge (X,F[1]) is uncolored;
The color of (X,F[i+1]) is free on F[i] for 1 ≤ i < k.
Given a fan F, any edge (X,F[i]) for 1 ≤ i ≤ k is a fan edge.
Rotating a fan
Given a fan F[1:k] of a vertex X, the "rotate fan" operation does the following: for i = 1, ..., k-1, assign the color of (X,F[i + 1]) to edge (X,F[i]).
Finally, uncolor (X, F[k]).
This operation leaves the coloring valid because, by the definition of a fan, the color of (X,F[i+1]) was free on F[i].
cd-path
Let c and d be colors.
A cdX-path is an edge path that goes through vertex X, only contains edges colored c or d and is maximal.
(We cannot add any other edge with color c or d to the path.)
If neither c nor d is incident on X, there is no such path.
If such a path exists, it is unique as at most one edge of each color can be incident on X.
Inverting a cd-path
The operation "invert the cdX-path" switches every edge on the path colored c to d and every edge colored d to c.
Inverting a path can be useful to free a color on X if X is one of the endpoints of the path: if color c but not d was incident on X, now color d but not c is incident on X, freeing c for X.
This operation leaves the coloring valid.
For vertices on the path that are not endpoints, no new color is added.
For endpoints, the operation switches the color of one of its edges between c and d.
This is valid: suppose the endpoint was connected by a c edge; then d was free on this endpoint, because otherwise the path could be extended and this vertex could not be an endpoint.
Since d was free, this edge can switch to d.
Algorithm
algorithm Misra & Gries edge coloring algorithm is
    input: A graph G.
    output: A proper coloring c of the edges of G.

    Let U := E(G)
    while U ≠ ∅ do
        Let (X,v) be any edge in U.
        Let F[1:k] be a maximal fan of X with F[1] = v.
        Let c be a free color on X and d be a free color on F[k].
        Invert the cdX-path.
        Let w ∈ {1,...,k} be such that F' = F[1:w] is a fan and d is free on F[w].
        Rotate F'.
        Set the color of (X,F[w]) to d.
        U := U − {(X,v)}
    end while
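The following is a compact Python sketch of the algorithm as described above (the function and variable names are my own, and it is meant as an illustration rather than an optimized O(|V||E|) implementation); it follows the pseudocode directly, including the choice of the fan prefix F[1:w] after the path inversion:

    def misra_gries_edge_coloring(adj):
        # Edge-color a simple undirected graph with at most (max degree + 1) colors.
        # adj maps each vertex to an iterable of its neighbours (must be symmetric).
        # Returns a dict mapping frozenset({u, v}) to a color in {0, ..., max_degree}.
        delta = max((len(set(nbrs)) for nbrs in adj.values()), default=0)
        palette = range(delta + 1)
        color = {}

        def col(u, v):
            return color.get(frozenset((u, v)))

        def free(u):
            used = {col(u, w) for w in adj[u] if col(u, w) is not None}
            return [c for c in palette if c not in used]

        def maximal_fan(x, v):
            fan, grown = [v], True
            while grown:
                grown = False
                for u in adj[x]:
                    if u not in fan and col(x, u) is not None and col(x, u) in free(fan[-1]):
                        fan.append(u)
                        grown = True
                        break
            return fan

        def invert_cd_path(x, c, d):
            # c is free on x, so any c/d path through x starts with the (unique) d edge at x.
            u, want, path = x, d, []
            while True:
                nxt = next((w for w in adj[u] if col(u, w) == want), None)
                if nxt is None:
                    break
                path.append((u, nxt))
                u, want = nxt, (c if want == d else d)
            for a, b in path:                              # swap colors c and d along the path
                color[frozenset((a, b))] = d if col(a, b) == c else c

        for edge in {frozenset((u, v)) for u in adj for v in adj[u]}:
            x, v = tuple(edge)
            fan = maximal_fan(x, v)
            c, d = free(x)[0], free(fan[-1])[0]
            # Index of the fan vertex whose edge to x already has color d, if any.
            j = next((k for k in range(1, len(fan)) if col(x, fan[k]) == d), None)
            invert_cd_path(x, c, d)
            if j is not None and d in free(fan[j - 1]):
                fan = fan[:j]                              # use the shorter fan prefix
            for k in range(len(fan) - 1):                  # rotate the fan
                color[frozenset((x, fan[k]))] = col(x, fan[k + 1])
            color[frozenset((x, fan[-1]))] = d             # color the last fan edge with d
        return color

For example, on the complete graph K4 with adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}, the function returns a proper edge coloring using at most Δ+1 = 4 colors.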
Proof of correctness
The correctness of the algorithm is proved in three parts.
First, it is shown that the inversion of the cdX-path guarantees a w ∈ {1,..,k} such that F = F[1:w] is a fan and d is free on F[w].
Then, it is shown that the edge coloring is valid and requires at most Δ+1 colors.
Path inversion guarantee
Prior to the inversion, there are two cases:
Case 1: the fan has no edge colored d.
Since F is a maximal fan on X and d is free on F[k], d is free on X.
Otherwise, suppose an edge (X,u) has color d; then u could be added to F to make a bigger fan, contradicting the maximality of F.
Thus, d is free on X, and since c is also free on X, there is no cdX-path and the inversion has no effect on the graph.
Set w = k.
Case 2: the fan has one edge with color d.
Let (X,F[i+1]) be this edge.
Note that i+1 ≠ 1 since (X,F[1]) is uncolored.
By definition of a fan, d is free on F[i].
Also, i ≠ k since the fan has length k but there exists a F[i+1].
We can now show that after the inversion,
(1): for j ∈ {1, ..., k-1} \ {i}, the color of (X,F[j+1]) is free on F[j].
Note that prior to the inversion, c is free on X and (X,F[i+1]) has color d, so the other edges in the fan, i.e., all (X,F[j+1]) above, cannot have color c or d.
Since the inversion only affects edges that are colored c or d, (1) holds.
Case2.1: F[i] is not on the cdX-path.
The inversion will not affect the set of free colors on F[i], and d will remain free on it.
By (1), F = F[1:i] is a valid fan, and we can set w = i.
Case2.2: F[i] is on the cdX-path.
Below, we can show that F[1:k] is still a fan after the inversion and d remains free on F[k], so we can set w = k.
Since d was free on F[i] before the inversion and F[i] is on the cdX-path, F[i] is an endpoint of the path and c will be free on F[i] after the inversion.
The inversion will change the color of (X,F[i+1]) from d to c.
Thus, since c is now free on F[i] and (1) holds, the whole F remains a fan.
Also, d remains free on F[k], since F[k] is not on the cdX-path.
(Suppose that it is; since d is free on F[k], then it would have to be an endpoint of the path, but X and F[i] are the endpoints.)
The edge coloring is valid
This can be shown by induction on the number of colored edges.
Base case: no edge is colored, this is valid.
Induction step: suppose this was true at the end of the previous iteration.
In the current iteration, after inverting the path, d will be free on X, and by the previous result, it will also be free on w.
Rotating F does not compromise the validity of the coloring.
Thus, after setting the color of (X,w) to d, the coloring is still valid.
The algorithm requires at most Δ+1 colors
In a given step, only colors c and d are used.
F[k] has at most Δ colored edges, so Δ+1 colors in total ensures that we can pick d.
This leaves Δ colors for c.
Since there is at least one uncolored edge incident on X, and its degree is bounded by Δ, there are at most Δ-1 colors incident on X currently, which leaves at least one choice for c.
Complexity
In every iteration of the loop, one additional edge gets colored.
Hence, the loop will run |E| times.
Finding the maximal fan, the colors c and d, and inverting the cdX-path can each be done in O(|V|) time.
Finding w and rotating F takes O(|V|) time.
Finding and removing an edge can be done using a stack in constant time (pop the last element) and this stack can be populated in O(|E|) time.
Thus, each iteration of the loop takes O(|V|) time, and the total running time is O(|V||E|).
References
Graph coloring
Graph algorithms | Misra & Gries edge coloring algorithm | Mathematics | 1,904 |
3,612,067 | https://en.wikipedia.org/wiki/Invader%20potential | In ecology, invader potential is the qualitative and quantitative measure of a given invasive species' probability of invading a given ecosystem. This is often assessed through climate matching. There are many reasons why a species may invade a new area. The term invader potential may also be used interchangeably with invasiveness. Species with high invader potential pose a large threat to global biodiversity: it has been shown that the introduction of species to areas where they are not native leads to a loss of ecosystem function.
Invaders are species that, through biomass, abundance, and strong interactions with native species, have significantly altered the structure and composition of the established community. This differs greatly from the term "introduced", which merely refers to species that have been brought into an environment, regardless of whether they have established successfully. Introduced species are simply organisms that have been accidentally, or deliberately, placed into an unfamiliar area. In fact, many introduced species do not have a strong impact on their new habitat. This can be for a variety of reasons: either the newcomers are not abundant, or they are small and unobtrusive.
Understanding the mechanisms of invader potential is important for understanding why species relocate and for predicting future invasions. Three main reasons are proposed for why species invade an area: adaptation to the physical environment, resource competition and utilization, and enemy release. Some of these reasons are relatively simple to understand. For example, species may adapt to a new physical environment through great phenotypic plasticity and environmental tolerance; species with high levels of these traits find it easier to adapt to new environments. In terms of resources, those with low resource requirements thrive in unfamiliar areas more readily than those with complex resource needs. This is shown directly through Tilman's R* rule: those with fewer needs can competitively exclude those with more complex needs and take over an area. Finally, species with a high reproduction rate and low defense against natural enemies have a better chance of invading other areas. All of these are reasons why species may thrive in places where they are not native, owing to flexibility in their needs.
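As a toy illustration of the R* idea (one common formulation assumes Monod-type growth on a single limiting resource; the parameter values and species labels below are invented for the example), the species able to persist at the lowest equilibrium resource level R* is predicted to exclude its competitors:

    def r_star(max_growth, half_saturation, mortality):
        # Break-even resource level R*: max_growth * R / (half_saturation + R) = mortality.
        assert max_growth > mortality > 0
        return mortality * half_saturation / (max_growth - mortality)

    # Hypothetical invader vs. resident competing for one limiting resource.
    species = {"invader": r_star(1.2, 0.5, 0.1), "resident": r_star(1.0, 1.0, 0.1)}
    predicted_winner = min(species, key=species.get)   # the lowest R* wins
    print(species, "->", predicted_winner)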
Climate matching
Climate matching is a technique used to identify extralimital destinations that an invasive species could colonize, based on their climatic similarity to the species' previous native range. Species are more likely to invade areas that match their region of origin, where conditions and resources are readily usable and abundant. Climate matching assesses the invasion risk and heavily prioritizes destination-specific action.
Boiga irregularis, the brown tree snake, is a good example of a species whose invasion followed climate matching. This species is native to northern and eastern Australia, eastern Indonesia, Papua New Guinea and most of the Solomon Islands. The brown tree snake was accidentally translocated in ship cargo to Guam, where it is responsible for the loss of the majority of the native bird species.
Human-mediated invader potential
Humans play a significant role in the ways species invade an area. By changing a habitat, humans can make invasion easier or more advantageous for an invasive species. As previously mentioned, species are more likely to invade areas in which they can win competitively.
As an example, human-led shoreline development, specifically in New England, was found to explain over 90% of intermarsh variation. This development has boosted nitrogen availability, which can draw in new species. This human-made change, among others, was the reason that Phragmites australis invaded the New England salt marshes.
In a study by Silliman and Bertness, 22 salt marshes were surveyed for changes following this invasion. This study specifically looked at how human habitat alteration led to the invasion success of this species. Shoreline development, nutrient enrichment, and salinity reduction were all human-made changes that contributed to the species' ability to invade.
Impact and risk assessment
It is critical, especially in conservation biology, to have the ability to foresee impacts on ecosystems. For example, predicting the identities and ecological impacts of invasive alien species assists in risk assessment. Currently, scientists lack universal, standardized metrics that are reliable enough to predict the likelihood and degree of impact of specific invaders. Data on measurable changes in populations of the affected species, for instance, would be especially beneficial.
Invader potential is a tool to aid in this dilemma. By understanding the qualitative and quantitative measures of a given invasive species' probability of invading a given ecosystem, researchers can hypothesize which species will impact which environments. The addition, or removal, of a species from an ecosystem can cause drastic changes to environmental factors as well as the community's food web. Predicting these inevitable situations can aid in both maintenance and conservation. This is especially advised for emerging and potential future invaders that have no invasion history.
Consequences faced by invading species
Although the focus is typically on the invading species' adverse impacts on native species, the invaders themselves are often negatively impacted as well. Colonization of a new range has been shown to subject introduced species to genetic bottlenecks, random genetic drift, and increased levels of inbreeding. Genetic changes such as these can pose a threat to allelic diversity and could lead to genetic differentiation of the introduced population. In addition, invasive organisms face new biotic and abiotic factors.
Invasion potential has a great impact on whether or not the invasive organism will survive these biotic or abiotic factors. The species' ability to adapt to the new conditions will contribute to the success of the particular invasion. In the majority of cases, a small subset of introduced species become invaders as a result of rapid changes in the new habitat. In other cases, the species fails to thrive symbiotically with the ecosystem.
See also
Invasive species
Ecological niche
Ecological metrics
References
Conservation biology
Invasive species
Ecological metrics | Invader potential | Mathematics,Biology | 1,184 |